
David Terr
Ph.D. Math, UC Berkeley

 


 

8.3. Row Reduction

In Chapter 7, we explained how to solve systems of linear equations in two or three variables by means of row reduction. We introduced a matrix as a shorthand notation for representing the system of equations. In this section, we will elaborate on this method and better explain why it works. We will also describe useful techniques for computing the determinant and inverse of large matrices by means of row reduction.

We begin with a fully general discussion of systems of linear equations. Suppose we have a system of n linear equations in n variables. Such a system looks as follows:

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$

Here, the coefficients a_{ij} and b_i are real-valued constants and the x_i are the n variables of the system. Note that we can write this system as a matrix equation as follows:

$$
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} =
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
$$

The n×n matrix on the left, call it A, is known as the coefficient matrix. It is followed by an n×1 matrix consisting of the variables of the system. Matrices consisting of a single column such as this are known as column vectors. Similarly, matrices consisting of a single row are known as row vectors. The column vector on the right side of the equation consists of the constant term of each equation. This column vector is known as the constant vector. We denote this vector as b. The column vector in the middle, consisting of the n variables x_i, is called the variable vector. We denote this vector as x. Thus, the above equation, written with matrix symbols, takes on the concise form

  • (8.3.1) Ax = b.

This equation is easy to solve in terms of matrices. Multiplying both sides on the left by A^{-1} and noting that A^{-1}A = I_n and I_n x = x, we obtain the solution

  • (8.3.2) x = A^{-1}b.

However, we must be careful, for the matrix A may not be invertible, in which case Equation (8.3.2) does not make sense. This case corresponds to either an inconsistent or dependent system.
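In a numerical setting we can apply Equation (8.3.2) directly. Here is a minimal sketch in Python using NumPy; the matrix and right-hand side are made up purely for illustration.

```python
import numpy as np

# A made-up invertible 3x3 system Ax = b, purely for illustration.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

x = np.linalg.inv(A) @ b      # Equation (8.3.2): x = A^{-1} b
print(x)                      # the solution vector
print(np.allclose(A @ x, b))  # True: Ax reproduces b (up to rounding)
```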

So in the usual case in which A is invertible, what is the best way to solve for x? One might think all we have to do is first compute A^{-1} and then apply Equation (8.3.2). But as we will see, this is inefficient, for A^{-1} does not need to be computed explicitly. Instead, the best way to solve for x, if this is all we care about, is to apply elementary row operations to both A and b until A becomes I_n and b becomes x. This works because the net effect of these row operations is to multiply both A and b on the left by A^{-1}, without our ever having to compute A^{-1} explicitly.

Let us assume that A is invertible, so that we can apply Equation (8.3.2). What did we do in the previous chapter and why did it work? We began by forming the augmented matrix A' = [A | b], consisting of A to the left and b to the right of the partition. Then we applied a series of elementary row operations, call them E_1, E_2, ..., E_k, to A'. The key here is to realize that each elementary row operation may be regarded as a matrix which acts on the rows of A' by multiplying it on the left. Thus, after the first elementary row operation, A' is replaced with A_1' = E_1A' = [A_1 | b_1], where A_1 = E_1A and b_1 = E_1b. After the second elementary row operation, our augmented matrix becomes A_2' = E_2A_1' = [A_2 | b_2], where A_2 = E_2A_1 and b_2 = E_2b_1. The pattern continues until we apply the last elementary row operation E_k, chosen so that A_k = E_kA_{k-1} = I_n. We are then left with A_k' = E_kA_{k-1}' = [I_n | b_k], where b_k = E_kb_{k-1}.

We now claim that b_k = x. How do we know this? We multiplied A on the left by a series of elementary row operation matrices E_1, E_2, ..., E_k until we were left with the n×n identity matrix I_n. Thus we have E_kE_{k-1}...E_2E_1A = I_n, whence E_kE_{k-1}...E_2E_1 = A^{-1}. But we also multiplied b on the left by the same series of matrices, whence b_k = E_kE_{k-1}...E_2E_1b = A^{-1}b, which by Equation (8.3.2) we know is equal to x. Thus, by the end of the procedure we are left with the augmented matrix [I_n | x], as we claimed in the previous chapter.
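The following Python sketch (the function name is my own) carries out this procedure: it forms the augmented matrix [A | b] and applies elementary row operations until the left half becomes I_n, at which point the rightmost column is x. A row swap is included at each stage so that the pivot is nonzero; A is assumed to be invertible.

```python
def solve_by_row_reduction(A, b):
    """Solve Ax = b by reducing the augmented matrix [A | b] to [I | x].

    A is an n x n list of lists and b is a list of length n;
    A is assumed to be invertible.
    """
    n = len(A)
    # Form the augmented matrix [A | b] with floating-point entries.
    M = [[float(A[i][j]) for j in range(n)] + [float(b[i])] for i in range(n)]

    for col in range(n):
        # Row operation 3: swap a row with a nonzero entry in this column
        # (here, the largest in absolute value) into the pivot position.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot_row] = M[pivot_row], M[col]

        # Row operation 1: divide the pivot row by the pivot, making the pivot 1.
        pivot = M[col][col]
        M[col] = [entry / pivot for entry in M[col]]

        # Row operation 2: subtract a multiple of the pivot row from every
        # other row, clearing out the rest of the column.
        for row in range(n):
            if row != col:
                factor = M[row][col]
                M[row] = [M[row][j] - factor * M[col][j] for j in range(n + 1)]

    # The left half is now the identity, so the last column is the solution x.
    return [row[n] for row in M]
```

For example, solve_by_row_reduction([[2, 1], [1, 3]], [3, 5]) returns approximately [0.8, 1.4], the solution of the system 2x + y = 3, x + 3y = 5.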

 

Besides solving systems of linear equations, row reduction may also be used to compute determinants and inverses of square matrices. This is especially useful if the matrices are large. Let us look once again at the matrix we used in Examples 2-5 of the previous section, namely

$$
A = \begin{pmatrix} 3 & 1 & 4 \\ 1 & 5 & 9 \\ 2 & 6 & 5 \end{pmatrix}
$$

We will show how row reduction can be used to compute its determinant and inverse. For this purpose, we augment it again, this time with the 3×3 identity matrix to the right. Thus, we will perform elementary row operations on the following matrix:

$$
A' = \left(\begin{array}{ccc|ccc} 3 & 1 & 4 & 1 & 0 & 0 \\ 1 & 5 & 9 & 0 & 1 & 0 \\ 2 & 6 & 5 & 0 & 0 & 1 \end{array}\right)
$$

Recall the three elementary row operations from Chapter 7:

  1. Multiplying (or dividing) a row by a nonzero constant.
  2. Adding or subtracting a multiple of one row to or from another row.
  3. Swapping a pair of rows.

Now each of these operations can be represented as a matrix which acts on the augmented matrix A' by multiplication on the left. Each time we perform one of these row operations, we will write down the associated matrix as well as its determinant. Since the product of these matrices is E_kE_{k-1}...E_2E_1 = A^{-1}, we see by (8.2.3) that the product of their determinants is equal to the determinant of their product, namely det(A^{-1}). By (8.2.4), this is equal to 1 / det(A). Thus, we have just derived an important formula for det(A), namely the following:

  • (8.3.3) det(A) = 1 / [det(E_1) det(E_2) ... det(E_k)].

Thus, to compute the determinant of A, we merely need to keep track of the determinant of each elementary row operation matrix, compute their product, and take the reciprocal. This is in general much easier than using Equation (8.2.5), especially for large matrices!

We need to know the determinants of the matrices associated with each of the three types of elementary row operations. The determinant of the first type, namely multiplying a row by a nonzero constant, is equal to that constant. The determinant of the second type, namely adding or subtracting a multiple of one row to or from another row, is 1. Finally, the determinant of the third type, namely swapping a pair of rows, is -1.
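Putting the last few paragraphs together, here is a Python sketch in the same spirit as the earlier one (again, the names are my own). It reduces [A | I_n] to [I_n | A^{-1}] while accumulating the product det(E_1) det(E_2) ... det(E_k), then recovers det(A) from Equation (8.3.3).

```python
def det_and_inverse_by_row_reduction(A):
    """Return (det(A), A_inverse) by reducing [A | I] to [I | A_inverse].

    Rather than building each elementary matrix E_i explicitly, we accumulate
    the product det(E_1) det(E_2) ... det(E_k) as we go and then apply
    Equation (8.3.3): det(A) = 1 / (that product).  A is assumed invertible.
    """
    n = len(A)
    # Augment A with the n x n identity matrix: M = [A | I].
    M = [[float(A[i][j]) for j in range(n)] +
         [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

    det_product = 1.0  # running product of the det(E_i)

    for col in range(n):
        # Swap in a usable pivot row if necessary; a swap has determinant -1.
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        if pivot_row != col:
            M[col], M[pivot_row] = M[pivot_row], M[col]
            det_product *= -1.0

        # Divide the pivot row by the pivot; this elementary matrix has
        # determinant 1/pivot.
        pivot = M[col][col]
        M[col] = [entry / pivot for entry in M[col]]
        det_product *= 1.0 / pivot

        # Subtract multiples of the pivot row from the other rows; these
        # elementary matrices have determinant 1, so det_product is unchanged.
        for row in range(n):
            if row != col:
                factor = M[row][col]
                M[row] = [M[row][j] - factor * M[col][j] for j in range(2 * n)]

    det_A = 1.0 / det_product       # Equation (8.3.3)
    A_inv = [row[n:] for row in M]  # the right half of [I | A^{-1}]
    return det_A, A_inv
```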

 

The first elementary row operation we perform on A' is swapping the first two rows, bringing the leading 1 in the second row up to the first row, where we can use it as a pivot to clear out the remaining entries in the first column. The determinant of the matrix E_1 associated with this operation is -1. Below we write down E_1, followed by A_1' = E_1A' and det(E_1).

$$
E_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_1' = \left(\begin{array}{ccc|ccc} 1 & 5 & 9 & 0 & 1 & 0 \\ 3 & 1 & 4 & 1 & 0 & 0 \\ 2 & 6 & 5 & 0 & 0 & 1 \end{array}\right)
$$

det(E_1) = -1

Next we subtract three times the first row from the second. The determinant of the matrix associated with this operation is 1.

$$
E_2 = \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_2' = \left(\begin{array}{ccc|ccc} 1 & 5 & 9 & 0 & 1 & 0 \\ 0 & -14 & -23 & 1 & -3 & 0 \\ 2 & 6 & 5 & 0 & 0 & 1 \end{array}\right)
$$

det(E_2E_1) = -1

Next we subtract twice the first row from the third, clearing the leading entry of this row. The determinant of the matrix associated with this operation is 1.

$$
E_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix}, \qquad
A_3' = \left(\begin{array}{ccc|ccc} 1 & 5 & 9 & 0 & 1 & 0 \\ 0 & -14 & -23 & 1 & -3 & 0 \\ 0 & -4 & -13 & 0 & -2 & 1 \end{array}\right)
$$

det(E_3E_2E_1) = -1

Next we swap the second and third rows. The determinant of the matrix associated with this operation is -1.

$$
E_4 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad
A_4' = \left(\begin{array}{ccc|ccc} 1 & 5 & 9 & 0 & 1 & 0 \\ 0 & -4 & -13 & 0 & -2 & 1 \\ 0 & -14 & -23 & 1 & -3 & 0 \end{array}\right)
$$

det(E_4...E_1) = 1

Next we divide the second row by -4. The determinant of the matrix associated with this operation is -1/4.

$$
E_5 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -\tfrac{1}{4} & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_5' = \left(\begin{array}{ccc|ccc} 1 & 5 & 9 & 0 & 1 & 0 \\ 0 & 1 & \tfrac{13}{4} & 0 & \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & -14 & -23 & 1 & -3 & 0 \end{array}\right)
$$

det(E_5...E_1) = -1/4

Next we subtract 5 times the second row from the first row. The determinant of the matrix associated with this operation is 1.

$$
E_6 = \begin{pmatrix} 1 & -5 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_6' = \left(\begin{array}{ccc|ccc} 1 & 0 & -\tfrac{29}{4} & 0 & -\tfrac{3}{2} & \tfrac{5}{4} \\ 0 & 1 & \tfrac{13}{4} & 0 & \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & -14 & -23 & 1 & -3 & 0 \end{array}\right)
$$

det(E_6...E_1) = -1/4

Next we add 14 times the second row to the third row. The determinant of the matrix associated with this operation is 1.

$$
E_7 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 14 & 1 \end{pmatrix}, \qquad
A_7' = \left(\begin{array}{ccc|ccc} 1 & 0 & -\tfrac{29}{4} & 0 & -\tfrac{3}{2} & \tfrac{5}{4} \\ 0 & 1 & \tfrac{13}{4} & 0 & \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & 0 & \tfrac{45}{2} & 1 & 4 & -\tfrac{7}{2} \end{array}\right)
$$

det(E_7...E_1) = -1/4

Next we multiply the third row by 2/45. The determinant of the matrix associated with this operation is 2/45.

$$
E_8 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{2}{45} \end{pmatrix}, \qquad
A_8' = \left(\begin{array}{ccc|ccc} 1 & 0 & -\tfrac{29}{4} & 0 & -\tfrac{3}{2} & \tfrac{5}{4} \\ 0 & 1 & \tfrac{13}{4} & 0 & \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & 0 & 1 & \tfrac{2}{45} & \tfrac{8}{45} & -\tfrac{7}{45} \end{array}\right)
$$

det(E_8...E_1) = -1/90

Next we add 29/4 times the third row to the first row. The determinant of the matrix associated with this operation is 1.

$$
E_9 = \begin{pmatrix} 1 & 0 & \tfrac{29}{4} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_9' = \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{29}{90} & -\tfrac{19}{90} & \tfrac{11}{90} \\ 0 & 1 & \tfrac{13}{4} & 0 & \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & 0 & 1 & \tfrac{2}{45} & \tfrac{8}{45} & -\tfrac{7}{45} \end{array}\right)
$$

det(E_9...E_1) = -1/90

Finally we subtract 13/4 times the third row from the second row. The determinant of the matrix associated with this operation is 1.

$$
E_{10} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -\tfrac{13}{4} \\ 0 & 0 & 1 \end{pmatrix}, \qquad
A_{10}' = \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{29}{90} & -\tfrac{19}{90} & \tfrac{11}{90} \\ 0 & 1 & 0 & -\tfrac{13}{90} & -\tfrac{7}{90} & \tfrac{23}{90} \\ 0 & 0 & 1 & \tfrac{2}{45} & \tfrac{8}{45} & -\tfrac{7}{45} \end{array}\right)
$$

1 / det(A) = det(A^{-1}) = det(E_{10}...E_1) = -1/90

Thus we find det(A) = -90 and

$$
A^{-1} = \frac{1}{90}\begin{pmatrix} 29 & -19 & 11 \\ -13 & -7 & 23 \\ 4 & 16 & -14 \end{pmatrix}
$$

Note that our computations of det(A) and A^{-1} agree with those we found in the previous section.
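As a quick check, here is a short usage sketch that feeds this matrix into the det_and_inverse_by_row_reduction function defined earlier in this section; it reproduces det(A) = -90 and the inverse above, up to floating-point rounding.

```python
A = [[3, 1, 4],
     [1, 5, 9],
     [2, 6, 5]]

det_A, A_inv = det_and_inverse_by_row_reduction(A)
print(round(det_A))                                             # -90
print([[round(90 * entry) for entry in row] for row in A_inv])
# [[29, -19, 11], [-13, -7, 23], [4, 16, -14]], i.e. 90 times A^{-1}
```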

 
