CBrayMath216-1-7-b.mp4

SPEAKER: One more application of determinants. We'll start by reminding ourselves of this formula here for how to compute determinants. Remember, what we did with this formula is simply take the minus 1s and the minor determinants and combine them into a single thing that we call the cofactor, the ij cofactor for that matrix. And in so doing, the formula becomes a little simpler. In particular, it's a sum of products of two factors instead of a sum of products of three factors. We weren't clear at the time why this was an advantage. So what we're going to see now is why that ends up being extremely useful.
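In symbols (writing a_{ij} for the entries of the n-by-n matrix A and M_{ij} for the ij minor determinant), the expansion along the i-th row being referred to here is

\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{ij} = \sum_{j=1}^{n} a_{ij} C_{ij}, \qquad \text{where } C_{ij} = (-1)^{i+j} M_{ij}.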

We're going to start by taking these cofactors and, instead of just using them as something that I compute along the way as part of computing the determinant, let's consider the entire matrix of all of the cofactors of the matrix A. Not clear yet why I would be interested in this thing, but let's write that down. So the formula's the same. Each individual cofactor is computed by the same exact formula. It's just that instead of computing them only along one row, only for the purpose of computing the determinant, we're going to compute all of them, for all the positions throughout the matrix.

In so doing, notice that we can take a different interpretation of this formula for the determinant. In fact, let's take a closer look here. Notice that in this summation, j is what's ranging through the values from 1 to n. So these values here are changing. Which means I am in fact moving across the i-th row of the A matrix, and I'm also moving across the i-th row of this C matrix, multiplying corresponding numbers and adding. So what I'm doing in this summation is writing down a dot product of the i-th row of A with the corresponding i-th row of A's cofactor matrix C.
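Written out, that reinterpretation is

\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij} = (\text{$i$-th row of } A) \cdot (\text{$i$-th row of } C).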

So kind of a weird point of view; we hadn't thought of determinants in that way before. Again, not clear why this is going to be useful. But nevertheless we've made a connection. We can think of a determinant as a dot product of rows of these matrices.

What are we going to do with this? Well, we're going to make another definition. Define this thing called the adjoint of the matrix A. And the adjoint is simply the transpose of the cofactor matrix. 
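In formulas, with C denoting the cofactor matrix of A,

\operatorname{adj}(A) = C^{T}, \qquad \text{so } \operatorname{adj}(A)_{ij} = C_{ji}.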

With that definition, here's our amazing result. We can use this new thing that we've just defined called the adjoint, along with determinant, to compute the inverse of a matrix. It's a very surprising, amazing formula. You wouldn't think this comes up. But amazingly, this works out. 
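Stated precisely, for an invertible matrix A (so \det(A) \neq 0),

A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A).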

The way we're going to prove this formula is to consider an equivalent formula, which is that A times the adjoint of A gives you the scalar determinant times the identity matrix. Not hard to persuade yourself that this is the exact same thing, just written slightly differently. And in particular, this is written in such a way that I see a matrix multiplication that I need to evaluate. Specifically, in order to confirm this formula, I need to show that this matrix product is equal to this matrix. Which, if you think about it, we start with the identity, which is all 0s off the diagonal and 1s on the diagonal; but I'm multiplying by determinant of A, and so I get determinant of A on the diagonal. This is what I need to show is equal to that. That will prove this amazing little formula.
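In other words, the equivalent statement to be verified is

A \,\operatorname{adj}(A) = \det(A)\, I_n,

a matrix with \det(A) in every diagonal position and 0 in every off-diagonal position.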

Let's get started. Let's start multiplying these matrices, A times adjoint A. You can see the ij element of this product given by that formula, and that's just our old formula for matrix multiplication. Notice that in this formula, as k goes from 1 to n, we are simply multiplying the i-th row of the A matrix by the j-th column of the adjoint A matrix. This is just row dot column. This is our old point of view: corresponding row of the left matrix dot corresponding column of the right matrix, written in [INAUDIBLE] notation.

And don't forget now, the adjoint is the transpose of the cofactor matrix, and so column number becomes row number, row number becomes column number. In this product we're interested in, here is a formula for the ij-th element, and let's keep track of what's going on here: k is ranging over the values from 1 to n. That means we have the i-th row of the A matrix dot the j-th row of the cofactor matrix. So what we have here is a dot product of two rows, a row of A and a row of C.
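Chaining those two observations together, the ij entry of the product is

\big(A \,\operatorname{adj}(A)\big)_{ij} = \sum_{k=1}^{n} a_{ik}\,\operatorname{adj}(A)_{kj} = \sum_{k=1}^{n} a_{ik}\, C_{jk} = (\text{$i$-th row of } A)\cdot(\text{$j$-th row of } C).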

This little argument here, how this matrix product has elements that are computed by this formula, you can see a breakdown of that in this little diagram. We start off by observing that in the product we're interested in, in order to compute a particular element, we take the corresponding row of A and the corresponding column of adjoint A and take their dot product. This is a visual representation of what's in the algebra on the previous page: row of the left matrix dot the corresponding column of the right matrix. And then just remember that the adjoint is the transpose of the cofactor matrix, and so the j-th column of the adjoint is equal to the j-th row of the cofactor matrix.

Instead of thinking of it as row dot column here, we could think of it as a row dot row here. The i-th row of A dot the j-th row of C. This is just a visual representation of what we did algebraically right here. 

If I want to show that I get the matrix that I'm supposed to get, keep in mind my goal is to show that I get determinant of A along the diagonal and 0 off the diagonal, so I just have to consider those two cases. We'll start off by thinking about the diagonal; keep in mind that on the diagonal, i is equal to j. If we're on the diagonal and i is equal to j, what do I get? We have this formula down here that says I'm just going to be taking a dot product of the corresponding row of the A matrix and the corresponding row of the C matrix. But wait a second, i is equal to j. So what we're really doing in this case, when we're looking at a diagonal entry, where i is equal to j, is taking a row of A and dotting it with the corresponding row of the cofactor matrix. Well, doesn't that sound familiar? In fact, that's exactly what we have here. We've just newly conceived of this formula for the determinant as being a row of A dot the corresponding row of the cofactor matrix. So in fact, you see a justification now for writing the determinant in this way, thinking of it as this dot product.

Coming up with this cofactor matrix so that we can think of determinant as that dot product allows us to determine that on the diagonal what we get is exactly the determinant of A. As required, what we were hoping for, we get determinant of A for all those diagonal entries. Because of that right there. 
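On the diagonal, where i equals j, that entry is

\big(A\,\operatorname{adj}(A)\big)_{ii} = \sum_{k=1}^{n} a_{ik} C_{ik} = \det(A),

which is exactly the cofactor expansion of \det(A) along the i-th row.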

That's nice. We're sort of halfway done, in some sense. We've confirmed that this matrix product that we're interested in, that matrix product gets what it's supposed to get on these diagonal entries. What remains is, how do we know that we necessarily get zero off of the diagonal? In other words, when i is not equal to j, when we're off the diagonal here, what do we get in this product? What are these off diagonal entries here, and how do we know they're zero in particular, when we're doing this matrix multiplication? 

This is a tricky argument coming up here. Very tricky argument. This is not something that you should expect to come up with naturally on your own. This is kind of inspired. And while you almost certainly would not come up with this clever little argument on your own, it is important that you understand the steps in this process: remember what the trick is, what this inspired little trick that I'm about to show you is, and how the details work.

So here we go. Looking for non-diagonal elements in this product, in other words, when i is not equal to j. Here's the trick. We're going to create a new matrix called A prime. It's not clear why we're doing this. The new matrix, A prime, is defined as follows. We're pretty much going to copy A. The i-th row will stay right where it was. We will, however, also take that i-th row and put a copy of it in the j-th position. So we will trample what had previously been in the j-th row: that j-th row just gets destroyed, and the i-th row goes in that place in addition to staying where it was. And furthermore, everything else stays the same. So all of this up here stays what it was. All of that stays. And all of this stays.

The only difference between these two matrices, A and A prime, the only difference is in what's in the j-th row. Everything else is the same. 
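Concretely, the rows of A' are

(\text{row } k \text{ of } A') = \begin{cases} \text{row } i \text{ of } A, & k = j,\\ \text{row } k \text{ of } A, & k \neq j, \end{cases}

so A' contains the i-th row of A twice: once in position i and once in position j.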

Not at all clear at the moment why we're interested in this matrix A prime. Nevertheless, we're going to proceed. 

Looking at these two matrices, let's think about their cofactor matrices. Say, the matrix A is going to have a cofactor matrix that we'll call C. The matrix A prime is going to have a cofactor matrix, we're going to call that C prime. The assertion that I'm making here is that the j-th row of C, and correspondingly, the j-th row of C prime are the same. Why would that be? 

Here's the idea. If I want to compute one of these entries, say I want to compute that entry in this cofactor matrix, what I do is I go up to that position. Remember how we compute cofactors, from way back when. The very first thing I do is I cross out the row and the column corresponding to that position, and then I compute the determinant of what's left over. There's a certain number of minus 1s to factor in, and that's what you get. 

If I were to do the same thing over here, same position in the C prime matrix, how would I compute that cofactor? I go up to the corresponding position in the A prime matrix, cross out its row, cross out its column, and look at what's left over. Take the determinant, multiply by a certain number of minus 1s. And the big point here is, keep in mind we already agreed that the only difference between A and A prime is in that j-th row. Specifically, then, in computing cofactors that are on the j-th row, I'm crossing out the only things that make these two matrices different. And the fact that I'm also crossing out this column doesn't change the fact that what's left over here is exactly equal to what's over there. All of this is exactly equal to all of that. That's equal to that. That's equal to that. What's left over is exactly the same. The only differences between the calculation of that number and that number, the only things that could have been different, are things that have been crossed out, namely, the j-th row in these two matrices.

Therefore these numbers are the same. And therefore these cofactor matrices agree on the j-th row. Not everywhere else. It's only on the j-th row that this argument works. Because it's only when you're on the j-th row that you are crossing out the two differences between these matrices. Remember the matrices are the same there. They're different here. And then they're the same there. 
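In symbols, for every column index k,

C'_{jk} = C_{jk},

because deleting row j and column k leaves exactly the same submatrix whether we start from A or from A'.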

Again, who cares? What does this have to do with what we were interested in? Let's remind ourselves what we are trying to compute. Going way back, we are trying to compute this matrix product, specifically the off-diagonal entries up here. We've already computed what's on the diagonal; that's fine. But we're interested in these off-diagonal entries. We want to know what's up there and what's up there. Said differently, we are interested in dot products of the i-th row of the A matrix and the j-th row of the cofactor matrix. So what we're interested in is this dot product. That dot that. Excuse me, I'll color code better. I'm interested in that dot this.

We just argued, though, that that row, the j-th row of C, is the same as that row. And don't forget, from our definition of A prime, that is the same as that and is also the same as that. So the dot product of that with this is the same as the dot product of that with this, exactly the same arithmetic, because these two green rows are the same by definition, and these two purple rows are the same by the argument we just made about how cofactors are computed.

That means that this dot product is equal to that dot product. Green dot purple in both cases. 

Who cares? Why are we interested in this? Well, because this one, the dot product of the i-th row of A and the j-th row of C, is exactly what we're trying to compute. That's the ij element of this product that we're interested in. Now we're starting to see how this is connected to something. What does it matter that it's equal to this? The reason that matters is because you'll notice here, we have a dot product of the j-th row of a matrix and its corresponding row of its cofactor matrix. So that is exactly the determinant of the A prime matrix.

Last observation. This A prime matrix, wonderfully convenient fact. Notice that it has two rows that are identical. Because it has two identical rows, we know that its determinant is equal to 0. And then as required, as we've been interested in this whole time, we see that this ij element of that product is equal to 0. It's a pretty circuitous little calculation but it gets us there. And that's what we needed to show way back here. We see now that these off-diagonal entries are indeed 0, as they're supposed to be. 
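Putting the chain together for i not equal to j:

\big(A\,\operatorname{adj}(A)\big)_{ij} = \sum_{k=1}^{n} a_{ik} C_{jk} = \sum_{k=1}^{n} a'_{jk} C'_{jk} = \det(A') = 0.

The middle equality uses that the i-th row of A equals the j-th row of A' and that C and C' agree on their j-th rows; the last step uses that A' has two identical rows.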

And therefore, I've finished this calculation here. I've shown that this product gives us what we want on the diagonal, it gives us what we want off the diagonal, and therefore this is true, and therefore A inverse is computed by that formula. It's an amazing little calculation, amazing little result. Happens to be true. 

Is this an efficient way to compute an inverse matrix? Oh my gosh, absolutely not. Not efficient at all. Look at all the determinants you have to compute to get this inverse matrix. For one thing, there's that denominator determinant. But furthermore, this adjoint is chock full of cofactors, each of which involves a determinant. So it's not efficient at all. But it's an amazingly unexpected connection.
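To make the formula concrete, here is a minimal Python sketch (assuming NumPy; the 3-by-3 matrix below is just an arbitrary example) that builds the adjoint entry by entry and checks the resulting inverse against numpy.linalg.inv. It is meant only to illustrate the formula, not as an efficient way to invert a matrix.

import numpy as np

def adjoint(A):
    # Cofactor matrix: C[i, j] = (-1)**(i + j) times the ij minor determinant.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Delete row i and column j, then take the determinant of what's left.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # the adjoint is the transpose of the cofactor matrix

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
A_inv = adjoint(A) / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # prints True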

There's a homework exercise that students will have to do, which is to go through a roughly analogous computation. The homework exercise asks students to compute adjoint of A times A, this product. Notice that this looks a lot like that. And the way students are going to do this is by very closely copying the argument that we've just gone through here. There will be some differences. You should look at columns instead of rows: turn this matrix product not into a statement about dot products of two rows, but into a statement about dot products of two columns. And then go through an analogous argument, adapting as appropriate. Make an appropriately clever choice, appropriate to the circumstance, for what the matrix A prime should be. It's going to be very similar to this, except instead of replacing a row, you're going to replace an appropriately chosen column.
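For reference, the statement to be verified in that exercise is the companion identity

\operatorname{adj}(A)\, A = \det(A)\, I_n,

this time reading each entry of the product as a dot product of a column of A with a column of C, that is, as a cofactor expansion down a column.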

Final observation. Let me note that we have already proved, as an old fact from when we were discussing inverse matrices for the first time, that if a matrix times another matrix gives you the identity (thinking of this determinant as a denominator over there), then you can switch the order and that will also work. So as a consequence of a theorem from 1.3, you could take this as an immediate result. That is not the intent of this exercise. The intent of the exercise is for you to go through a computation analogous to this one, as a way of ensuring that you understand the arguments in the computation that we've done here. So be sure to actually do that. You will not receive credit if you make the section 1.3 argument.