Solving Equations with Gaussian Elimination
The row echelon form of a matrix isn’t unique, but it’s still useful for solving systems of linear equations. If we have a system written in matrix form as Ax = b, then applying elementary row operations builds up a new matrix E that we multiply on the left of both sides, giving EAx = Eb. If EA is in row echelon form, the resulting system of equations will be simple to solve.
A little more explicitly, this works because each of the equations is linear, and so each of the elementary row operations corresponds to some manipulation of the equations which gives rise to an equivalent system. For example, the order of the equations doesn’t matter, so swapping two of them (swapping two rows) doesn’t change the solution. Similarly, multiplying both sides of one equation by a nonzero number doesn’t change the solution to that equation, and so doesn’t change the solution of the system. Finally, adding one equation to another gives a new valid equation; since the new equation is linearly dependent on the two we started with, the three together “overdetermine” the solution, and we can drop one of the first two in favor of the sum. The result is the same as a shear.
Since we’re just doing the same thing to the rows of the column vector b as we are to the rows of the matrix A, we may as well stack them beside each other. In this way, we get the “augmented matrix” of the system of equations. For example, we might have the system
with the augmented matrix
Then we can follow along our elimination from yesterday, applying the same steps to the system of equations. For example, if at this point we add the first and third equations, the result no longer involves one of the variables — we’ve “eliminated” that variable from the equation. We replace the third equation with this new, simpler equation, giving the effect of a shear just like we did yesterday.
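Since the post’s own example matrices don’t survive here, the elimination steps can be sketched on a hypothetical 3×3 system (the numbers below are my own, not the post’s):

```python
def row_echelon(aug):
    """Reduce an augmented matrix (a list of rows of floats) to row echelon form
    using the elementary row operations: swaps and shears."""
    m = [row[:] for row in aug]  # work on a copy
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols - 1):  # last column is the right-hand side
        # find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column; move on
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]  # row swap
        for r in range(pivot_row + 1, rows):
            # shear: subtract a multiple of the pivot row to zero out the entry
            factor = m[r][col] / m[pivot_row][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# hypothetical system: 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
aug = [[2.0, 1.0, -1.0, 8.0],
       [-3.0, -1.0, 2.0, -11.0],
       [-2.0, 1.0, 2.0, -3.0]]
ref = row_echelon(aug)
```

Each pass of the inner loop is exactly one of the shears described above: it replaces an equation with itself minus a multiple of the pivot equation.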
At the end, we found the row echelon form
which corresponds with the system
Now the equations are arranged like a ladder for us to climb up. First, we solve the bottom equation, which involves only a single variable. Then we plug that value into the next equation up, leaving one unknown, which is easily solved. Then we plug both of these answers into the next equation up, solve for the last variable, and we’re done. In fact, this is why we use the word “echelon” (from the French échelon, a rung of a ladder) to describe this form.
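The ladder-climbing can be written down directly. This sketch assumes a square system already in row echelon form with nonzero pivots; the matrix is hypothetical, since the post’s example isn’t reproduced here:

```python
def back_substitute(ref):
    """Climb the ladder: solve each row's pivot variable in terms of the
    variables already found below it."""
    n = len(ref)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the variables solved so far
        known = sum(ref[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (ref[i][n] - known) / ref[i][i]
    return x

# a system already in row echelon form: the bottom equation has one unknown,
# the one above it two, and so on up the ladder
ref = [[2.0, 1.0, -1.0, 8.0],
       [0.0, 0.5, 0.5, 1.0],
       [0.0, 0.0, -1.0, 1.0]]
solution = back_substitute(ref)  # x = 2, y = 3, z = -1
```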
What about our other example from yesterday? This corresponds to a system of four equations with three unknowns, since we’re interpreting the last column as the column vector on the right side of the equation. The system is
We convert this to an augmented matrix, put this matrix into row echelon form, and return it to the form of a system of equations to find
The empty row on the bottom is fine, since it just tells us the tautology that 0 = 0. Unfortunately, the line above it contains a contradiction: all the coefficients on the left are zero, but the number on the right is not. That is, whenever the row echelon form of a system of equations has a leading coefficient in the last column, that system is unsolvable. We don’t even need to climb the ladder to know that.
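Spotting such a contradiction mechanically is simple: scan the echelon form for a row whose coefficients are all zero but whose last entry is not. A sketch, with made-up rows:

```python
def is_inconsistent(ref, tol=1e-12):
    """True if some row reads 0 = (nonzero), i.e. the row echelon form has a
    leading coefficient in the last column."""
    return any(
        all(abs(a) <= tol for a in row[:-1]) and abs(row[-1]) > tol
        for row in ref
    )

# hypothetical examples
consistent = [[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 0.0]]   # bottom row says 0 = 0: harmless
contradictory = [[1.0, 2.0, 3.0],
                 [0.0, 0.0, 5.0]]  # bottom row says 0 = 5: unsolvable
```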
On the other hand, if we started with the system
and put it in row echelon form (do it yourself) we get
Now we have no contradictions, but the system is underdetermined. Indeed, the rank falls one short of the number of unknowns, so the kernel has dimension 1, and thus we’ll have one free parameter in our solution. It’s still easy to solve, though. We first solve the bottom equation for its leading variable in terms of the free parameter. Plugging this into the first equation leaves a single unknown, which is easily solved in terms of the same parameter.
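With a free parameter the ladder-climbing still works; we just carry the parameter along. A sketch with a made-up underdetermined system (the post’s actual numbers aren’t shown here):

```python
# hypothetical system of two equations in three unknowns, already in echelon form:
#   x + y + z = 6
#       y - z = 0
# The rank is one less than the number of unknowns, so z is a free parameter.
def solve_family(t):
    """Return the solution with the free parameter z set to t."""
    z = t
    y = z              # bottom equation: y - z = 0
    x = 6.0 - y - z    # top equation: x + y + z = 6
    return (x, y, z)
```

Every value of the parameter gives a solution, and every solution of the system arises this way.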
So whether the system is exactly solvable, underdetermined, or contradictory, we can find whatever solutions may exist explicitly using a row echelon form.
I’ve been following this blog with Google Reader for a while, but it has been generally above my head, until last post when you shifted gears and went to the basics of linear algebra. I’m enjoying the reading. If you go from this material all the way to the topics from earlier I’ll probably get it this time around.
Comment by Robert Danhof | September 2, 2009 |
Yes, I probably could have done this earlier, but I wanted to get across the idea of particularly useful factorizations and things that were essentially basis-free before mucking about with something as basis-heavy as elementary row operations and row echelon forms. This material is actually coming to the end of my work on linear algebra (after something like a full year’s worth of work on it). Soon I’ll be able to get into multivariable calculus.
Comment by John Armstrong | September 2, 2009 |
[…] elimination we saved some steps by leaving pivoted rows alone. This was fine for the purposes of solving equations, where the point is just to eliminate progressively more and more variables in each equation. We […]
Pingback by Reduced Row Echelon Form « The Unapologetic Mathematician | September 3, 2009 |
A few years ago I read some Numerical Methods lecture notes in Portuguese where a variant of the Gauss elimination method was described. It is called partial pivoting and is intended to provide greater numerical accuracy. First we exchange complete rows in the system’s augmented matrix so that the pivot entry has the largest absolute value of all the entries in its column on or below the diagonal, and then proceed as in the normal Gauss elimination method applied to that column.
For instance, starting with your first example
we exchange rows 1 and 2
Then we apply the pivoting method to column 1
Now we exchange rows 2 and 3
and apply the normal method to column 2
From here we get the system of equations
which we solve as before.
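The procedure described above can be sketched in code; since the matrices from the worked steps aren’t reproduced here, the example system below is hypothetical:

```python
def gauss_partial_pivot(aug):
    """Forward elimination with partial pivoting: before eliminating in each
    column, exchange rows so the pivot has the largest absolute value."""
    m = [row[:] for row in aug]
    n = len(m)
    for col in range(n):
        # row exchange: bring the largest-magnitude entry to the pivot position
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    return m

# hypothetical system: 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
aug = [[2.0, 1.0, -1.0, 8.0],
       [-3.0, -1.0, 2.0, -11.0],
       [-2.0, 1.0, 2.0, -3.0]]
ref = gauss_partial_pivot(aug)  # first pivot is -3, the largest in column 0
```

Dividing only by the largest available entry keeps the multipliers at most 1 in magnitude, which is what limits the growth of rounding errors.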
Comment by Américo Tavares | September 5, 2009 |
Oh, sure Américo, there are more efficient algorithms, and optimizations on the basic algorithms, but I’m mainly going for what at UMD would be a 200-level linear algebra level approach to the subject, not a 460-level numerical analysis approach. But thanks for pointing that direction out.
Comment by John Armstrong | September 6, 2009 |
Hello.
Could you comment on using the Inverse Matrix.
What if the matrix is singular in an inhomogeneous system?
Thanks.
Comment by Drazick | September 29, 2009 |
Drazick
If it helps with the first part of your comment (I am not sure it does), here is an
Example: solve
and find the inverse of its coefficient matrix.
Taking the columns of the identity matrix as right-hand sides, we form the three augmented matrices
The application of the Gauss method with partial pivoting (see my comment No. 4 above) to the first matrix gives:
To the second,
And to the third one
Solving
we get
which are the three columns of the inverse matrix
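The same idea can be packaged as code: augment the matrix with the whole identity matrix and reduce until the left block is the identity; the right block is then the inverse. This sketch (with a made-up 2×2 matrix) also touches the second part of Drazick’s question: when the matrix is singular, some column has no usable pivot, and the procedure must fail.

```python
def invert(A, tol=1e-12):
    """Gauss-Jordan on [A | I]: when the left block becomes I, the right
    block is the inverse. Uses partial pivoting as in comment no. 4."""
    n = len(A)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) <= tol:
            # every candidate pivot is (numerically) zero: A is singular
            raise ValueError("matrix is singular; no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [a / p for a in aug[col]]      # scale the pivot row to 1
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

A = [[4.0, 7.0], [2.0, 6.0]]     # hypothetical invertible matrix
Ainv = invert(A)                 # [[0.6, -0.7], [-0.2, 0.4]], up to rounding
```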
Comment by Américo Tavares | March 30, 2010 |
what’s the point or use of Gaussian elimination?
Comment by bash | March 17, 2011 |
Well for one thing it lets us solve systems of linear equations, as this post itself explains.
Comment by John Armstrong | March 17, 2011 |