
David Terr
Ph.D. Math, UC Berkeley

 


 

8.2. Determinants and Inverses

There are two extremely useful quantities associated with a square matrix, namely its determinant and its inverse. Before defining these, we define another useful quantity known as the identity matrix. For a fixed positive integer n, the n×n identity matrix is the n×n matrix I_n with ones along its main diagonal and zeros everywhere else. Explicitly, we have

$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

What makes the identity matrix special, and the reason for its name, is that for every positive integer m and for every m×n matrix A, we have A I_n = A. Also, for every positive integer p and for every n×p matrix B, we have I_n B = B. In other words, the matrix I_n acts like the number 1 on matrices of the right dimensions. Sometimes, when n is understood, we simply write I for I_n.
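To see this property in action numerically, here is a minimal sketch using the NumPy library (the particular 2×3 matrix is chosen arbitrarily for illustration):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])        # an arbitrary 2x3 matrix
    I2 = np.eye(2)                   # the 2x2 identity matrix I_2
    I3 = np.eye(3)                   # the 3x3 identity matrix I_3

    print(np.array_equal(A @ I3, A))   # True: A I_3 = A
    print(np.array_equal(I2 @ A, A))   # True: I_2 A = A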

 

So what about inverses? For a given matrix A, we say the matrix B is a left inverse of A if BA = I. Similarly, we say the matrix C is a right inverse of A if AC = I. It is easy to see that if A is an m×n matrix and B is a left inverse of A, then B must be an n×m matrix, in which case BA = I_n. Similarly, if A is an m×n matrix and C is a right inverse of A, then C must be an n×m matrix, in which case AC = I_m.

It should be noted that left inverses are not necessarily right inverses and vice-versa. A matrix that is both a left inverse and a right inverse of a given matrix A is called a two-sided inverse of A. Left inverses, right inverses, and two-sided inverses of a given matrix do not necessarily exist and are not necessarily unique.

The most important case of matrix inverses arises when the matrix A in question is square, because in this case, it turns out that if A has a left inverse or a right inverse, then this inverse is a unique two-sided inverse, known simply as the inverse of A and denoted A^{-1}. One must be careful with this definition, however, because not every square matrix has an inverse. A square matrix with an inverse is called an invertible matrix, and one without an inverse is called a noninvertible or singular matrix.

A problem of key importance in linear algebra is finding the inverse of a given square matrix, assuming it exists, or if it does not exist, determining that this is the case. Another useful quantity for helping to do so is the determinant. The determinant of an n×n matrix A is a number, denoted as det(A) or simply as |A|, which is equal to zero if and only if A is singular. The general definition of the determinant is complicated, so we start by defining it for 2×2 and 3×3 matrices.

Let A be a 2×2 matrix. Specifically, we have

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

for some numbers a, b, c, and d. In this case, the determinant of A is defined to be

  • (8.2.1) det(A) = ad - bc.

 

Example 1: Compute the determinants of the following 2×2 matrices and determine which ones are singular.

  • (a) $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$
  • (b) $A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}$
  • (c) $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
  • (d) $A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$

Solution:

  • (a) We have det(A) = (1)(4) - (2)(3) = 4 - 6 = -2. This matrix is invertible (not singular).
  • (b) We have det(A) = (1)(6) - (2)(3) = 6 - 6 = 0. This matrix is singular.
  • (c) We have det(A) = (1)(1) - (0)(0) = 1 - 0 = 1. This matrix is invertible. (Note: Here we have A = I_2, the 2×2 identity matrix. For every n, we have det(I_n) = 1, so I_n is invertible. In fact, I_n is equal to its own inverse for every n.)
  • (d) We have det(A) = (0)(0) - (0)(0) = 0 - 0 = 0. This matrix is singular. (Note: Here A is the 2×2 zero matrix. For every n, the n×n zero matrix is singular.)
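Formula (8.2.1) is easy to turn into code. The following minimal sketch in Python (the helper name det2 is our own, not standard notation) reproduces the four determinants from Example 1:

    def det2(a, b, c, d):
        """Determinant of the 2x2 matrix [[a, b], [c, d]], per formula (8.2.1)."""
        return a * d - b * c

    print(det2(1, 2, 3, 4))   # -2  (invertible)
    print(det2(1, 2, 3, 6))   #  0  (singular)
    print(det2(1, 0, 0, 1))   #  1  (the identity matrix I_2, invertible)
    print(det2(0, 0, 0, 0))   #  0  (the zero matrix, singular)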

 

What about determinants of 3×3 matrices? These are considerably harder to compute, but the idea is similar. Note that in computing the determinant of a 2×2 matrix, we multiply the entries along each diagonal and then subtract one product from the other. The sign we use for the product along the main diagonal (upper-left to lower-right) is positive, and the sign we use for the product along the other diagonal is negative. We can represent this procedure with the following diagram:

[Diagram: the southeast arrow through a and d gives the product +ad, and the southwest arrow through b and c gives the product -bc; their sum is det(A) = ad - bc.]

Determinants of 3×3 matrices are computed in a similar way. We first write down the 3×3 matrix. Next, we extend some of the matrix entries to the left and to the right as shown below. We then draw six diagonal arrows through the entries, three going southeast and the other three going southwest. We then compute the six products of the three entries along each diagonal. Finally, we compute the sum of these products, negating the ones going southwest. The result is the determinant of the matrix.

[Diagram: for a 3×3 matrix with rows a, b, c / d, e, f / g, h, i, the three southeast diagonals give the products +aei, +bfg, +cdh, and the three southwest diagonals give the products -ceg, -afh, -bdi, so that det(A) = aei + bfg + cdh - ceg - afh - bdi.]

Warning: This procedure does not extend to determinants of 4×4 matrices or larger!

 

Example 2: Compute the determinant of the following 3×3 matrix:

$$A = \begin{pmatrix} 3 & 1 & 4 \\ 1 & 5 & 9 \\ 2 & 6 & 5 \end{pmatrix}$$

Solution: By the above procedure, we find

det(A) = (3)(5)(5) + (1)(9)(2) + (4)(1)(6) - (3)(9)(6) - (1)(1)(5) - (4)(5)(2)

= 75 + 18 + 24 - 162 - 5 - 40

= -90
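The diagonal procedure for 3×3 determinants is also easy to code. Here is a minimal sketch in Python (the helper name det3 is our own), applied to the matrix of Example 2:

    def det3(M):
        """Determinant of a 3x3 matrix M, given as a list of three rows, using
        the diagonal rule: add the three southeast products and subtract the
        three southwest products."""
        (a, b, c), (d, e, f), (g, h, i) = M
        return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

    A = [[3, 1, 4],
         [1, 5, 9],
         [2, 6, 5]]
    print(det3(A))   # -90, as in Example 2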

 

There is another very useful rule for computing determinants, which can be used for arbitrarily large matrices. Before spelling out this rule, we first need another definition, that of a cofactor.

Let A be an n×n matrix and let i and j be arbitrary integers from 1 to n. Then the cofactor of the matrix entry A_{ij} is equal to (-1)^{i+j} times the determinant of the matrix formed by crossing out the ith row and the jth column of A.

 

Example 3: Compute the cofactors of the entries A_{11} = 3 and A_{23} = 9 of the matrix A from Example 2.

Solution: The cofactor of 3 is (-1)^{1+1} = +1 times the determinant of the matrix obtained by crossing out the first row and the first column of A, namely

$$\begin{pmatrix} 5 & 9 \\ 6 & 5 \end{pmatrix}.$$
This determinant is (5)(5) - (9)(6) = 25 - 54 = -29, so the cofactor of 3 is -29.

Similarly, the cofactor of 9 is (-1)^{2+3} = -1 times the determinant of the matrix obtained by crossing out the second row and the third column of A, namely

$$\begin{pmatrix} 3 & 1 \\ 2 & 6 \end{pmatrix}.$$
This determinant is (3)(6) - (1)(2) = 18 - 2 = 16, so the cofactor of 9 is -16.
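Computing a cofactor is mechanical: delete a row and a column, take the determinant of what remains, and attach the sign (-1)^{i+j}. The sketch below (Python, with helper names of our own choosing) reproduces the two cofactors of Example 3:

    def det2m(M):
        """Determinant of a 2x2 matrix given as a list of rows."""
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]

    def cofactor(M, i, j):
        """Cofactor of the entry in row i, column j (i, j running from 1 to 3)
        of the 3x3 matrix M: (-1)^(i+j) times the determinant of M with row i
        and column j crossed out."""
        minor = [[M[r][c] for c in range(3) if c != j - 1]
                 for r in range(3) if r != i - 1]
        return (-1) ** (i + j) * det2m(minor)

    A = [[3, 1, 4],
         [1, 5, 9],
         [2, 6, 5]]
    print(cofactor(A, 1, 1))   # -29, the cofactor of A_11 = 3
    print(cofactor(A, 2, 3))   # -16, the cofactor of A_23 = 9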

Now the rule for computing the determinant of an arbitrary square matrix is as follows: To compute the determinant of a matrix A, first choose any row or column of A. Then for each entry along the row or column, compute the product of this entry and its cofactor. Finally, compute the sum of all these products. The result is the determinant of A.

 

Example 4: Compute the determinant of the matrix A from Example 2 using the rule given above.

Solution: We can compute det(A) by expanding along the first column, which should be relatively easy since the entries along this column are small. By the rule given above, we have

$$\det(A) = 3\begin{vmatrix} 5 & 9 \\ 6 & 5 \end{vmatrix} - 1\begin{vmatrix} 1 & 4 \\ 6 & 5 \end{vmatrix} + 2\begin{vmatrix} 1 & 4 \\ 5 & 9 \end{vmatrix}$$

= 3[(5)(5) - (9)(6)] - [(1)(5) - (4)(6)] + 2[(1)(9) - (4)(5)]

= (3)(-29) - (-19) + 2(-11)

= -87 + 19 - 22 = -90.

Note that this agrees with our result from Example 2.
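Because the cofactor-expansion rule applies to square matrices of any size, it can be coded recursively: expand along the first row, and compute each smaller determinant by the same rule. Here is a minimal sketch in Python (the helper name det is our own), checked against Examples 2 and 4:

    def det(M):
        """Determinant of a square matrix M (a list of rows) by cofactor
        expansion along the first row."""
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in M[1:]]   # cross out row 1, column j+1
            total += (-1) ** j * M[0][j] * det(minor)
        return total

    A = [[3, 1, 4],
         [1, 5, 9],
         [2, 6, 5]]
    print(det(A))   # -90, agreeing with Examples 2 and 4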

Before closing out our discussion of determinants, we should point out one very useful property they have, namely multiplicativity. In other words, the determinant of a product of two or more matrices is equal to the product of their determinants. Mathematically speaking, this amounts to the formula

  • (8.2.2) det(AB) = det(A) det(B)

for arbitrary square matrices A and B of the same dimensions. By applying this formula repeatedly to an arbitrary product of matrices A_1 A_2 ... A_k, each of the same dimensions, we find

  • (8.2.3) det(A_1 A_2 ... A_k) = det(A_1) det(A_2) ... det(A_k).

We will find this formula useful in the following section. Another useful formula is the following, which follows easily from (8.2.2). We leave the proof as an exercise.

  • (8.2.4) det(A^{-1}) = 1 / det(A).
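As a quick numerical sanity check of (8.2.2) and (8.2.4), here is a minimal sketch using NumPy's determinant and inverse routines (the two matrices are chosen arbitrarily; the printed pairs agree up to floating-point rounding):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [4.0, 1.0]])
    B = np.array([[5.0, 9.0],
                  [2.0, 6.0]])

    # (8.2.2): det(AB) = det(A) det(B)
    print(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

    # (8.2.4): det(A^{-1}) = 1 / det(A)
    print(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))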

 

So what about computing inverses? Once again, for simplicity, we start with inverses of 2×2 matrices, for which we can apply the following simple formula:

The inverse of the matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

is the matrix

$$A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

It is easy to see that with this formula we have AA^{-1} = A^{-1}A = I_2, provided det(A) = ad - bc ≠ 0; we leave the verification as an exercise.
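The 2×2 inverse formula translates directly into code. Here is a minimal sketch in Python (the helper name inverse2 is our own), applied to the matrices from Example 1(a) and 1(b):

    def inverse2(a, b, c, d):
        """Inverse of the 2x2 matrix [[a, b], [c, d]], or None if it is singular."""
        det = a * d - b * c
        if det == 0:
            return None                       # singular: no inverse exists
        return [[ d / det, -b / det],
                [-c / det,  a / det]]

    print(inverse2(1, 2, 3, 4))   # [[-2.0, 1.0], [1.5, -0.5]]
    print(inverse2(1, 2, 3, 6))   # None (the singular matrix of Example 1(b))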

Inverses of larger matrices are more difficult to compute, and the difficulty grows quickly with the size of the matrices. However, there is a useful general formula for computing the inverse of a matrix. This formula is most useful for 3×3 matrices. As we will see in the following section, there are more efficient methods for computing the inverses of large matrices. First, however, we need yet another definition.

The adjoint of a square matrix A, denoted adj(A), is the transpose of the matrix B formed from the cofactors of the entries of A, i.e. we have B_{ij} = the cofactor of A_{ij}.

Armed with this definition, we now present the following simple-looking formula for the inverse of a square matrix.

  • (8.2.5) A^{-1} = adj(A) / det(A).

We will not prove this formula. Note that this is a deceptively simple formula, since computing the adjoint and the determinant, especially for large matrices, can be quite difficult!

 

Example 5: Compute the adjoint and the inverse of the matrix A from Example 2.

Solution: We must first compute all the cofactors of A. We have already computed four of them, so we just need the remaining five. Let B = adj(A)^T. From Example 3, we see that B_{11} = -29 and B_{23} = -16, and from Example 4, we found that B_{21} = 19 and B_{31} = -11. We also have

$$B_{12} = -[(1)(5) - (9)(2)] = 13, \qquad B_{13} = (1)(6) - (5)(2) = -4, \qquad B_{22} = (3)(5) - (4)(2) = 7,$$
$$B_{32} = -[(3)(9) - (4)(1)] = -23, \qquad B_{33} = (3)(5) - (1)(1) = 14.$$

Thus we have

$$\operatorname{adj}(A) = B^T = \begin{pmatrix} -29 & 19 & -11 \\ 13 & 7 & -23 \\ -4 & -16 & 14 \end{pmatrix},$$

whence

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)} = -\frac{1}{90}\begin{pmatrix} -29 & 19 & -11 \\ 13 & 7 & -23 \\ -4 & -16 & 14 \end{pmatrix} = \frac{1}{90}\begin{pmatrix} 29 & -19 & 11 \\ -13 & -7 & 23 \\ 4 & 16 & -14 \end{pmatrix}.$$

It is straightforward to check that AA^{-1} = A^{-1}A = I_3.
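Formula (8.2.5) can also be checked in code by combining the earlier ideas: compute all nine cofactors, transpose to obtain adj(A), and divide by det(A). The following minimal sketch in Python (helper names are our own; fractions.Fraction is used to keep the entries exact) reproduces the adjoint and inverse from Example 5 and verifies that AA^{-1} = I_3:

    from fractions import Fraction

    def minor_det(M, i, j):
        """Determinant of the 2x2 matrix left after crossing out row i and
        column j (indices 0 to 2) of the 3x3 matrix M."""
        m = [[M[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    def adjoint(M):
        """adj(M): the transpose of the matrix of cofactors of the 3x3 matrix M."""
        cof = [[(-1) ** (i + j) * minor_det(M, i, j) for j in range(3)]
               for i in range(3)]
        return [[cof[j][i] for j in range(3)] for i in range(3)]   # transpose

    def det3(M):
        """det(M) by cofactor expansion along the first row."""
        return sum((-1) ** j * M[0][j] * minor_det(M, 0, j) for j in range(3))

    A = [[3, 1, 4],
         [1, 5, 9],
         [2, 6, 5]]
    adjA = adjoint(A)
    d = det3(A)
    print(adjA)   # [[-29, 19, -11], [13, 7, -23], [-4, -16, 14]]
    print(d)      # -90

    A_inv = [[Fraction(entry, d) for entry in row] for row in adjA]
    product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
               for i in range(3)]
    print(product)   # the identity matrix I_3 (entries as Fractions:
                     # 1s on the diagonal, 0s elsewhere)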

 
