4 Inverses of Matrices
This chapter is concerned with inverses of matrices. In a sense this is just another aspect of solving linear equations. However, since inverses of matrices are so important, we devote a separate chapter to this topic.
4.1 Inverses of Matrices
One of the interesting aspects of matrix algebra is its similarity to the regular algebra of numbers. For example, we have discussed addition, subtraction and multiplication of matrices, and we have seen that a number of formulas from regular algebra also hold in matrix algebra. The distributive and associative laws are good examples. However, some things in regular algebra do not carry over to matrix algebra, for example the commutative law for multiplication.
Now we come to division. For numbers, a/b can be written as ab^{-1}, where b^{-1} is the multiplicative inverse of b. It has the property that bb^{-1} = 1. This last formula leads to the definition of the inverse of a matrix.
Definition. A matrix A is said to be invertible if
1. A is square, and
2. There is a matrix B of the same size as A such that AB = I and BA = I.
The matrix B is called the inverse of A and is denoted by A^{-1}.
This definition is different from all the previous definitions we have seen in that it doesn't tell one how to compute A^{-1}. It only tells one how to recognize A^{-1}. A little later we shall see how to compute A^{-1}.
There is one question that we need to address. Could there be more than one matrix B that has the property that AB = I and BA = I? The answer is no and the proof is quite simple.
Proposition 1. Suppose AB = I and BA = I. Then either of AC = I or CA = I implies C = B.
Proof. Suppose AC = I. We multiply this equation on the left by B, giving B(AC) = BI. We use the associative property and the fact that BI = B to rewrite this as (BA)C = B. Since BA = I we get IC = B. Since IC = C we get C = B. A similar argument works if CA = I. //
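The defining property is easy to check numerically: both products AB and BA must equal the identity. A minimal sketch using NumPy; the particular matrices here are illustrative, not taken from the text:

```python
import numpy as np

# A candidate matrix and a claimed inverse.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([[ 3.0, -1.0],
              [-5.0,  2.0]])

I = np.eye(2)
# Both products must equal the identity for B to be the inverse of A.
assert np.allclose(A @ B, I)
assert np.allclose(B @ A, I)
```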
Now we turn to computing A^{-1}. The usual method is an extension of the elimination procedure for solving linear equations. We transform A to reduced row echelon form using the elementary row operations. We saw that this can be interpreted as E_{n}⋯E_{1}A = R where R is the reduced row echelon form of A. If R = I then we have E_{n}⋯E_{1}A = I. This would say A^{-1} = E_{n}⋯E_{1} if we could also show AE_{n}⋯E_{1} = I. This is true, but we postpone the proof until after we have discussed some other elementary properties of matrix inverses; see the proposition below.
There is a nice way to compute E_{n}⋯E_{1}. As we are transforming A to reduced row echelon form we perform the same elementary row operations on another set of matrices that begin with the identity matrix. When we are done we have computed E_{n}⋯E_{1}I = E_{n}⋯E_{1}. We illustrate this with the coefficient matrix in Example 1 in section 3.1.
Example 1. Find the inverse of

A = [ 1 -1  3 ]
    [ 2 -1  2 ]
    [ 3  1 -2 ]
We need to transform A to reduced row echelon form and at the same time do the same row operations on another set of matrices that start out as I. A convenient way to do this is to start with a different "augmented" matrix where we augment A by I, i.e.

[ 1 -1  3 | 1 0 0 ]
[ 2 -1  2 | 0 1 0 ]
[ 3  1 -2 | 0 0 1 ]
Now we apply row operations that transform the left side to reduced row echelon form. If the reduced row echelon form is I then the right side will be A^{-1}. Here are the steps in doing that. First subtract multiples of row 1 from rows 2 and 3 (R2 -> R2 - 2R1, R3 -> R3 - 3R1):

[ 1 -1   3 |  1 0 0 ]
[ 0  1  -4 | -2 1 0 ]
[ 0  4 -11 | -3 0 1 ]

Next clear the second column (R1 -> R1 + R2, R3 -> R3 - 4R2):

[ 1 0 -1 | -1  1 0 ]
[ 0 1 -4 | -2  1 0 ]
[ 0 0  5 |  5 -4 1 ]

Finally scale row 3 (R3 -> (1/5)R3) and clear the third column (R1 -> R1 + R3, R2 -> R2 + 4R3):

[ 1 0 0 | 0    1/5  1/5 ]
[ 0 1 0 | 2  -11/5  4/5 ]
[ 0 0 1 | 1   -4/5  1/5 ]
Remark. If you are doing these computations by hand, it might be easier to handle the fractions that arise in the right side by factoring out 1/5 from the right side. Thus for

[ 0 0 1 | 1 -4/5 1/5 ]

we would write

[ 0 0 1 | 5 -4 1 ]

with the understanding that the right side of the augmented matrix is to be multiplied by 1/5. As we continue to do the computations we just do them with the integers in the right side of the augmented matrix.
We have transformed A (the left side) to I by a sequence of row operations which is equivalent to multiplying by a sequence of elementary matrices. The same sequence of row operations transforms the identity (the right side) to the product of the same elementary matrices. So the final right side is A^{1}, i.e.
A^{-1} = (1/5) [  0   1  1 ]   [ 0    1/5  1/5 ]
               [ 10 -11  4 ] = [ 2  -11/5  4/5 ]
               [  5  -4  1 ]   [ 1   -4/5  1/5 ]
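The augmented-matrix procedure can be written out as a short program. Here is a minimal sketch, assuming exact rational arithmetic via Python's fractions module; the function name is our own, not from the text:

```python
from fractions import Fraction

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    # Build [A | I] with exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place
        # (raises StopIteration if A is not invertible).
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left half is now I; the right half is A^-1.
    return [row[n:] for row in M]

A = [[1, -1, 3], [2, -1, 2], [3, 1, -2]]
Ainv = inverse_gauss_jordan(A)
# Ainv equals (1/5) * [[0, 1, 1], [10, -11, 4], [5, -4, 1]].
```

Using exact fractions rather than floating point mirrors the hand computation, where the common factor 1/5 appears in every entry of the answer.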
Problem 1. Find the inverse of the following matrices.
(a) A =
(b) A =
Answers.
(a) A^{-1} =
(b) A^{-1} =
As you can see, it is usually quite a bit of work if you have to compute the inverse of a 3×3 matrix or larger by hand. If A is a 2×2 matrix there is a nice formula for A^{-1}.
Proposition 2. If

A = [ a b ]
    [ c d ]

and ad - bc ≠ 0, then to get A^{-1} we swap the diagonal elements, negate the off-diagonal elements and divide by the determinant, i.e.

A^{-1} = 1/(ad - bc) [  d -b ]
                     [ -c  a ].
Proof. If we multiply

[ a b ]   and   1/(ad - bc) [  d -b ]
[ c d ]                     [ -c  a ]

in either order we get I. //
Unfortunately, the generalization of this formula to larger matrices is more complicated. We will get to it in the section on determinants.
Example 2. Find the inverse of a given 2×2 matrix by Proposition 2: swap the diagonal entries, negate the off-diagonal entries, and divide by the determinant ad - bc.
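The 2×2 formula is short enough to put directly in code. A minimal sketch; the helper name inv2 and the test matrix are our own illustrations, not from the text:

```python
import numpy as np

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via Proposition 2: swap the diagonal,
    negate the off-diagonal, divide by the determinant ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = inv2(1, 2, 3, 4)
# Check the definition: both products give the identity.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
```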
Now that we know how to compute the inverse of a matrix, what can we do with it? For many people, the most important use of the inverse of a matrix is that it can be used to compute the solution of a system of linear equations Au = b as u = A^{-1}b.
Proposition 3. Suppose A is invertible. Then the equation Au = b has one and only one solution u for any given b, and it is given by u = A^{-1}b.
Proof. To show uniqueness, suppose Au = b. Multiply both sides on the left by A^{-1}, giving A^{-1}(Au) = A^{-1}b. Regroup on the left, (A^{-1}A)u = A^{-1}b, and use the fact that A^{-1}A = I, giving Iu = A^{-1}b. So u = A^{-1}b. To show that u = A^{-1}b is actually a solution, multiply u = A^{-1}b on the left by A, giving Au = A(A^{-1}b). Regroup on the right, Au = (AA^{-1})b, and use the fact that AA^{-1} = I, giving Au = Ib = b. So u = A^{-1}b is a solution. //
Example 3. Find x, y and z that satisfy the following three equations at the same time.
x - y + 3z = 4
2x - y + 2z = 6
3x + y - 2z = 9
These are the same equations as in Example 1 in section 3.1. Rewrite them as
[ 1 -1  3 ] [ x ]   [ 4 ]
[ 2 -1  2 ] [ y ] = [ 6 ]
[ 3  1 -2 ] [ z ]   [ 9 ]

Multiply both sides by

[ 1 -1  3 ]^{-1}         [  0   1  1 ]
[ 2 -1  2 ]      = (1/5) [ 10 -11  4 ]
[ 3  1 -2 ]              [  5  -4  1 ]

So

[ x ]   [ 1 -1  3 ]^{-1} [ 4 ]         [  0   1  1 ] [ 4 ]         [ 15 ]   [ 3 ]
[ y ] = [ 2 -1  2 ]      [ 6 ] = (1/5) [ 10 -11  4 ] [ 6 ] = (1/5) [ 10 ] = [ 2 ]
[ z ]   [ 3  1 -2 ]      [ 9 ]         [  5  -4  1 ] [ 9 ]         [  5 ]   [ 1 ]
x = 3
y = 2
z = 1
which is what we got in section 3.1. If one compares this method of solving the equations with what we did in section 3.1, it is actually more work to first compute A^{-1} and then compute the solution as u = A^{-1}b than to solve the equations directly. The value of the formula u = A^{-1}b lies elsewhere. For example, sometimes we want to do further computations with the solution, and the formula u = A^{-1}b is useful in those computations. Also, the entries of A^{-1} often have important physical or mathematical meaning as rates of change.
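The comparison above can be seen numerically: forming A^{-1} and multiplying gives the same answer as solving the system directly, though the direct route is cheaper. A sketch using NumPy with this example's numbers:

```python
import numpy as np

A = np.array([[1.0, -1.0,  3.0],
              [2.0, -1.0,  2.0],
              [3.0,  1.0, -2.0]])
b = np.array([4.0, 6.0, 9.0])

# u = A^-1 b, versus solving Au = b directly by elimination.
u_via_inverse = np.linalg.inv(A) @ b
u_direct = np.linalg.solve(A, b)

# Both give the unique solution x = 3, y = 2, z = 1.
assert np.allclose(u_via_inverse, u_direct)
assert np.allclose(u_direct, [3.0, 2.0, 1.0])
```

In numerical practice np.linalg.solve is preferred over computing the inverse, for exactly the work-count reason the text mentions.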
Example 4. Suppose the variables p, q and r are related to the variables x, y and z by the equations
p = x - y + 3z
q = 2x - y + 2z
r = 3x + y - 2z
or
[ p ]   [ 1 -1  3 ] [ x ]     [ x ]
[ q ] = [ 2 -1  2 ] [ y ] = A [ y ]
[ r ]   [ 3  1 -2 ] [ z ]     [ z ]
(a) What is the rate of change of p with respect to z if we hold x and y constant?
(b) What is the rate of change of z with respect to p if we hold q and r constant?
The rate of change of p with respect to z if we hold x and y constant is the coefficient of z in the first equation, namely 3. To compute the rate of change of z with respect to p if we hold q and r constant we solve for x, y and z in terms of p, q and r.
[ x ]   [ 1 -1  3 ]^{-1} [ p ]         [  0   1  1 ] [ p ]
[ y ] = [ 2 -1  2 ]      [ q ] = (1/5) [ 10 -11  4 ] [ q ]
[ z ]   [ 3  1 -2 ]      [ r ]         [  5  -4  1 ] [ r ]
or
x = (1/5)q + (1/5)r
y = 2p - (11/5)q + (4/5)r
z = p - (4/5)q + (1/5)r
So the rate of change of z with respect to p if we hold q and r constant is the coefficient of p in the last equation, namely 1.
So the entries of A^{-1} are the rates of change of the unknowns in a system of equations with respect to the numbers on the right hand sides of the equations.
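This interpretation can be checked numerically: perturb p while holding q and r fixed, and the resulting change in z per unit change in p matches the corresponding entry of A^{-1}. A sketch using NumPy with this example's numbers:

```python
import numpy as np

A = np.array([[1.0, -1.0,  3.0],
              [2.0, -1.0,  2.0],
              [3.0,  1.0, -2.0]])
Ainv = np.linalg.inv(A)

rhs = np.array([4.0, 6.0, 9.0])   # (p, q, r)
xyz = Ainv @ rhs                  # (x, y, z)

# Bump p by 0.5, holding q and r fixed; the change in z per unit
# change in p is the (3,1) entry of A^-1, which here equals 1.
bumped = Ainv @ (rhs + np.array([0.5, 0.0, 0.0]))
rate = (bumped[2] - xyz[2]) / 0.5
assert np.isclose(rate, Ainv[2, 0])
assert np.isclose(rate, 1.0)
```

Because the system is linear, the rate is exact for any size of bump, which is why the entries of A^{-1} are rates of change and not merely approximations.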