The Unapologetic Mathematician

Mathematics for the interested outsider

Elementary Matrices Generate the General Linear Group

Okay, so we can use elementary row operations to put any matrix into its (unique) reduced row echelon form. As we stated last time, this consists of building up a basis for the image of the transformation the matrix describes by walking through a basis for the domain space and either adding a new, independent basis vector or writing the image of a domain basis vector in terms of the existing image basis vectors.

So let’s say we’ve got a transformation A in \mathrm{GL}(n,\mathbb{F}). Given a basis, we get an invertible matrix (which we’ll also call A). Then we can use elementary row operations to put this matrix into its reduced row echelon form. But now every basis vector gets sent to a vector that’s linearly independent of all the others, or else the transformation A wouldn’t be invertible! That is, the reduced row echelon form of the matrix A must be the identity matrix.

But remember that every one of our elementary row operations is the result of multiplying on the left by an elementary matrix. So we can take the matrices corresponding to the list of all the elementary row operations and write

\displaystyle\left(E_kE_{k-1}\dots E_2E_1\right)A=I

which tells us that applying all these elementary row operations one after another leads us to the identity matrix. But this means that the product E_kE_{k-1}\dots E_2E_1 of all these elementary matrices is A^{-1}. And since we can also run the same procedure on the transformation A^{-1}, we can find a list of elementary matrices whose product is A. That is, any invertible linear transformation can be written as the product of a finite list of elementary matrices, and thus the elementary matrices generate the general linear group.
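
Here's a minimal computational sketch of this argument in Python with NumPy (my own choice of tools; the function name elementary_factors and the numerical tolerance are just illustrative, not part of the argument). It records the elementary matrices used to row-reduce an invertible matrix to the identity, checks that their product is A^{-1}, and then inverts each factor to recover A itself as a product of elementary matrices.

import numpy as np

def elementary_factors(A):
    """Row-reduce the invertible matrix A to the identity, returning the list
    of elementary matrices E_1, ..., E_k with (E_k ... E_1) A = I."""
    A = A.astype(float)
    n = A.shape[0]
    factors = []
    for col in range(n):
        # swap a nonzero pivot into place if necessary
        pivot = next(r for r in range(col, n) if abs(A[r, col]) > 1e-12)
        if pivot != col:
            W = np.eye(n); W[[col, pivot]] = W[[pivot, col]]
            factors.append(W); A = W @ A
        # scale the pivot row so the pivot becomes 1
        C = np.eye(n); C[col, col] = 1.0 / A[col, col]
        factors.append(C); A = C @ A
        # shear away every other entry in this column
        for r in range(n):
            if r != col and abs(A[r, col]) > 1e-12:
                H = np.eye(n); H[r, col] = -A[r, col]
                factors.append(H); A = H @ A
    return factors

A = np.array([[2., 1., -1.], [-3., -1., 2.], [-2., 1., 2.]])
factors = elementary_factors(A)

product = np.eye(3)
for E in factors:
    product = E @ product                      # builds E_k ... E_2 E_1
assert np.allclose(product @ A, np.eye(3))     # so the product is A^{-1}

# Inverting each factor (each inverse is again elementary) writes A itself
# as E_1^{-1} E_2^{-1} ... E_k^{-1}, a product of elementary matrices.
recovered = np.eye(3)
for E in factors:
    recovered = recovered @ np.linalg.inv(E)
assert np.allclose(recovered, A)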

September 4, 2009 Posted by | Algebra, Linear Algebra | 4 Comments

Reduced Row Echelon Form

Let’s take row echelon form and push it a little further into a form that is unique: reduced row echelon form.

When we were doing Gaussian elimination we saved some steps by leaving pivoted rows alone. This was fine for the purposes of solving equations, where the point is just to eliminate progressively more and more variables in each equation. We also saved steps by leaving the pivots as whatever values they started with. Both of these led to a lot of variation in the possibilities for row echelon forms. We’ll tweak the method of Gaussian elimination to give a new algorithm called “Gauss-Jordan elimination”, which will take more steps but will remedy these problems.

First of all, when we pick a pivot and swap it into place, we scale that row by the inverse of the pivot value. This will set the pivot value itself to {1} (which, incidentally, helps with the shears that come next). Then we shear to eliminate not only the nonzero entries below the pivot, but the nonzero entries above the pivot as well. That leaves the pivot as the only nonzero entry in its column.
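
Before turning to the example, here's a rough sketch of the whole procedure in Python with NumPy (my own illustration; the function name rref and the tolerance tol are arbitrary choices, not anything fixed by the method itself). It normalizes each pivot and clears its entire column, exactly as described above.

import numpy as np

def rref(M, tol=1e-12):
    """Gauss-Jordan elimination: return the reduced row echelon form of M."""
    A = M.astype(float)
    rows, cols = A.shape
    lead = 0                          # index of the next row to lock down
    for col in range(cols):
        # find a pivot in this column at or below the current row
        pivots = [r for r in range(lead, rows) if abs(A[r, col]) > tol]
        if not pivots:
            continue                  # empty column (below the locked rows)
        A[[lead, pivots[0]]] = A[[pivots[0], lead]]   # swap it into place
        A[lead] /= A[lead, col]                       # scale the pivot to 1
        for r in range(rows):                         # clear above and below
            if r != lead:
                A[r] -= A[r, col] * A[lead]
        lead += 1
        if lead == rows:
            break
    return A

M = np.array([[2., 1., -1., 8.], [-3., -1., 2., -11.], [-2., 1., 2., -3.]])
print(rref(M))   # prints the reduced form worked out below: identity block, then 2, 3, -1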

Let’s see what this looks like in terms of our usual example:

\displaystyle\begin{pmatrix}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{pmatrix}

We use the {2} in the upper-left as a pivot and scale the top row by \frac{1}{2}

\displaystyle\begin{pmatrix}1&\frac{1}{2}&-\frac{1}{2}&4\\-3&-1&2&-11\\-2&1&2&-3\end{pmatrix}

Next, we clear out the first column with shears. Note that the shear value is now just the negative of the entry in the first column.

\displaystyle\begin{pmatrix}1&\frac{1}{2}&-\frac{1}{2}&4\\{0}&\frac{1}{2}&\frac{1}{2}&1\\{0}&2&1&5\end{pmatrix}

So far it’s almost the same as before, except for normalizing the pivot. But now when we choose the \frac{1}{2} in the second column as the pivot we scale its row by 2

\displaystyle\begin{pmatrix}1&\frac{1}{2}&-\frac{1}{2}&4\\{0}&1&1&2\\{0}&2&1&5\end{pmatrix}

and use shears to clear the column above and below this pivot

\displaystyle\begin{pmatrix}1&0&-1&3\\{0}&1&1&2\\{0}&0&-1&1\end{pmatrix}

Now the only choice for a pivot in the third column is -1. Again we normalize it and use shears to clear the column

\displaystyle\begin{pmatrix}1&0&0&2\\{0}&1&0&3\\{0}&0&1&-1\end{pmatrix}

This matrix is in reduced row echelon form, and I say that it is unique. Indeed, we are not allowed to alter the basis of the input space (since that would involve elementary column operations), so we can view this as a process of creating a basis for the output space in terms of the given basis of the input space.

What we do is walk down the input basis, vector by vector. At each step, we ask if the image of this basis vector is linearly independent of the images of those that came before. If so, this image is a new basis vector. If not, then we can write the image (uniquely) in terms of the output basis vectors we’ve already written down. In our example above, each of the first three input basis vectors gives a new output basis vector, and the image of the last input basis vector can be written as the column vector in the last column.

The only possibility of nonuniqueness is if we run out of input basis vectors before spanning the output space. But in that case the last rows of the matrix will consist of all zeroes anyway, and so it doesn’t matter what output basis vectors we choose for those rows! And so the reduced row echelon form of the matrix is uniquely determined by the input basis we started with.

September 3, 2009 Posted by | Algebra, Linear Algebra | 8 Comments

Solving Equations with Gaussian Elimination

The row echelon form of a matrix isn’t unique, but it’s still useful for solving systems of linear equations. If we have a system written in matrix form as

\displaystyle Ax=y

then applying elementary row operations builds up a new matrix that we multiply on the left of both sides

\displaystyle EAx=Ey

If the new matrix EA is in row echelon form, the resulting system of equations will be simple to solve.

A little more explicitly, this works because each of the equations is linear, and so each of the elementary row operations corresponds to some manipulation of the equations which gives rise to an equivalent system. For example, the order of the equations doesn't matter, so swapping two of them (swapping two rows) doesn't change the solution. Similarly, multiplying both sides of one equation by a nonzero number doesn't change the solution to that equation, and so doesn't change the solution of the system. Finally, adding one equation to another gives a new valid equation; since the new equation and the two old ones are linearly dependent (together they "overdetermine" the solution), we can drop one of the two originals. Replacing an equation by such a sum is exactly the effect of a shear.

Since we’re just doing the same thing to the rows of the column vector y as we are to the matrix A, we may as well stack them beside each other. In this way, we get the “augmented matrix” of the system of equations. For example, we might have the system

\displaystyle\begin{aligned}2x+y-z&=8\\-3x-y+2z&=-11\\-2x+y+2z&=-3\end{aligned}

with the augmented matrix

\displaystyle\begin{pmatrix}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{pmatrix}

Then we can follow along our elimination from yesterday, applying the same steps to the system of equations. For example, if at this point we add the first and third equations we get 2y+z=5, "eliminating" the variable x from the equation. We replace the third equation with this new, simpler equation, giving the effect of a shear just like we did yesterday.

At the end, we found the row echelon form

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&\frac{1}{2}&\frac{1}{2}&1\\{0}&0&-1&1\end{pmatrix}

which corresponds with the system

\displaystyle\begin{aligned}2x+y-z&=8\\\frac{1}{2}y+\frac{1}{2}z&=1\\-z&=1\end{aligned}

Now the equations are arranged like a ladder for us to climb up. First, we solve the bottom equation to find z=-1. Then we can plug this into the next equation up to find \frac{1}{2}y-\frac{1}{2}=1, which is easily solved to find y=3. Then we plug both of these answers into the next equation up to find 2x+3+1=8, which is easily solved to find x=2, and we’re done. In fact, this is why we use the word “echelon” — French for a rung of a ladder — to describe this form.
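
Climbing the ladder like this is just back-substitution, and it's easy to mechanize. Here's a small sketch in Python with NumPy (my own illustration; the arrays simply transcribe the row echelon system above).

import numpy as np

# Back-substitution on the row echelon system above; the variable names
# x, y, z match the equations in the post.
U = np.array([[2., 1., -1.], [0., 0.5, 0.5], [0., 0., -1.]])
b = np.array([8., 1., 1.])

z = b[2] / U[2, 2]                                 # -z = 1          =>  z = -1
y = (b[1] - U[1, 2] * z) / U[1, 1]                 # y/2 + z/2 = 1   =>  y = 3
x = (b[0] - U[0, 1] * y - U[0, 2] * z) / U[0, 0]   # 2x + y - z = 8  =>  x = 2
print(x, y, z)                                     # 2.0 3.0 -1.0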

What about our other example from yesterday? This corresponds to a system of four equations with three unknowns, since we’re interpreting the last column as the column vector on the right side of the equation. The system is

\displaystyle\begin{aligned}2x+y-z&=8\\-4x-2y+2z&=-11\\4x+4y-3z&=6\\-2x-3y+2z&=-3\end{aligned}

We convert this to an augmented matrix, put this matrix into row echelon form, and return it to the form of a system of equations to find

\displaystyle\begin{aligned}2x+y-z&=8\\2y-z&=-10\\{0}&=5\\{0}&=0\end{aligned}

The empty row on the bottom is fine, since it just tells us the tautology that 0=0. Unfortunately, the line above that contains the contradiction that 0=5. That is, whenever the row echelon form of a system of equations has a leading coefficient in the last column, that system is unsolvable. We don’t even need to climb the ladder to know that.

On the other hand, if we started with the system

\displaystyle\begin{aligned}2x+y-z&=8\\-4x-2y+2z&=-16\\4x+4y-3z&=6\\-2x-3y+2z&=2\end{aligned}

and put it in row echelon form (do it yourself) we get

\displaystyle\begin{aligned}2x+y-z&=8\\2y-z&=-10\\{0}&=0\\{0}&=0\end{aligned}

Now we have no contradictions, but the system is underdetermined. Indeed, the rank is {2}, so the kernel has dimension {1}, and thus we'll have one free parameter in our solution. It's still easy to solve, though. We first find that y=\frac{1}{2}z-5. Plugging this into the first equation we find 2x-\frac{1}{2}z-5=8, which is easily solved to find x=\frac{1}{4}z+\frac{13}{2}.
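
As a quick check of this one-parameter family (a sketch of my own in Python with NumPy; the sample values of z are arbitrary), we can plug a few choices of the free parameter back into the original system:

import numpy as np

A = np.array([[2., 1., -1.], [-4., -2., 2.], [4., 4., -3.], [-2., -3., 2.]])
b = np.array([8., -16., 6., 2.])

# For any value of the free parameter z, the formulas above should solve the system.
for z in (-2.0, 0.0, 5.0):
    y = 0.5 * z - 5.0
    x = 0.25 * z + 6.5
    assert np.allclose(A @ np.array([x, y, z]), b)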

So whether the system is exactly solvable, underdetermined, or contradictory, we can find whatever solutions may exist explicitly using a row echelon form.

September 2, 2009 Posted by | Algebra, Linear Algebra | 9 Comments

Row Echelon Form

For now, I only want to focus on elementary row operations. That is, transformations of matrices that can be effected by multiplying by elementary matrices on the left, not on the right.

Row echelon form is a (non-unique) simplification of the form of a matrix that can be reached by elementary row operations. A matrix is in row echelon form if all the empty (all-zero) rows are below all the nonzero rows, and the leftmost nonzero entry in a given row is to the right of the leftmost nonzero entry in the row above it. For example, an invertible upper-triangular matrix is in row echelon form. We put a matrix into row echelon form by a method called "Gaussian elimination", which always moves a matrix closer and closer to row echelon form.

First, we look at the first column. If it's empty we leave it alone and move on to the next column. If there are any nonzero entries, then for the matrix to be in row echelon form there must be exactly one of them, sitting at the top of the column. If that isn't already the case, we pick one of the nonzero entries and call it our "pivot". Swap its whole row up to the top row, and then use shears to eliminate all the other nonzero entries in the column before moving on to the second column.

Before we continue, I’ll point out that the pivot is now the leading entry on a nonzero row at the top of the matrix. We’ll regard it as fixed, so whenever I say “all rows” I mean all the rows that we haven’t locked down yet.

Now that we’ve moved on to the next column, we again look for any nonzero entries. If there are none, we can move on. But if there are, we choose one to be a pivot, swap it to the top (below the previously locked rows) and use shears to remove all the other nonzero entries in the column (other than the locked rows). We continue like this, choosing pivots where there are nonzero entries to deal with, moving them to the top, and eliminating all the other nonzero entries below them, until we’ve locked down all the rows or run out of columns.

Let’s look at this method applied to the matrix

\displaystyle\begin{pmatrix}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{pmatrix}

In the first column we have the nonzero value 2 in the first row, so we may as well leave it where it is. We add \frac{3}{2} times the first row to the second to eliminate the -3 there, and add the first row to the third to eliminate the -2 there. At this point our matrix looks like

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&\frac{1}{2}&\frac{1}{2}&1\\{0}&2&1&5\end{pmatrix}

Now in the second column we have \frac{1}{2} at the top (since the very top row is finished). We leave it where it is and add -4 times the second row to the third to eliminate the 2 there. Now the matrix looks like

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&\frac{1}{2}&\frac{1}{2}&1\\{0}&0&-1&1\end{pmatrix}

Now there are no empty rows, so "all" of them are below all the nonzero rows. Further, within the nonzero rows the leading entries move from left to right as we move down the rows. The matrix is now in row echelon form.

Now let’s try this one:

\displaystyle\begin{pmatrix}2&1&-1&8\\-4&-2&2&-11\\4&4&-3&6\\-2&-3&2&-3\end{pmatrix}

Again, we'll start by using the upper-left 2 as our first pivot. We use shears to eliminate the rest of the entries in the first column and get

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&0&0&5\\{0}&2&-1&-10\\{0}&-2&1&5\end{pmatrix}

Now there are nonzero entries in the second column, but none of them are at the top. So we swap the second and third rows to bring a pivot to the top

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&2&-1&-10\\{0}&0&0&5\\{0}&-2&1&5\end{pmatrix}

and eliminate using a shear

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&2&-1&-10\\{0}&0&0&5\\{0}&0&0&-5\end{pmatrix}

The third column has no nonzero entries (other than the two in the locked rows at the top), so we skip it. In the fourth column we use 5 as a pivot and eliminate, getting

\displaystyle\begin{pmatrix}2&1&-1&8\\{0}&2&-1&-10\\{0}&0&0&5\\{0}&0&0&0\end{pmatrix}

Now the empty row is on the bottom, and the leading entries of the other rows move from left to right as we move down the rows. Thus, the matrix is now in row echelon form.

September 1, 2009 Posted by | Algebra, Linear Algebra | 2 Comments

Subgroups Generated by Shears

Okay, when I introduced elementary matrices I was a bit vague on the subgroup that the shears generate. I mean to partially rectify that now that we’ve got elementary row operations to work with.

I assert that the upper shears — those elementary matrices with one nonzero entry above the diagonal — generate the group of “upper unipotent” matrices. A matrix is unipotent if it is the identity plus a nilpotent transformation, so all of its eigenvalues are {1}. Specifically, we want those that are also upper-triangular. Thus the matrices we’re talking about have {0} everywhere below the diagonal and all {1} on the diagonal, as

\displaystyle A=\begin{pmatrix}1&a_{1,2}&\cdots&&a_{1,n}\\{0}&1&&&\\\vdots&&\ddots&&\vdots\\&&&1&a_{n-1,n}\\{0}&&\cdots&0&1\end{pmatrix}

What I’m going to do, now, is turn this matrix into the identity by using elementary row operations. Specifically, I’ll only ever add a multiple of one row to another row, which can be effected by multiplying the matrix on the left by shear matrices. And I’ll only add multiples of one row to rows above it, which can be effected by using upper shears.

So first, let's add -a_{n-1,n} times the nth row to the (n-1)th.

\displaystyle\left(H_{n-1,n,-a_{n-1,n}}\right)A=\begin{pmatrix}1&a_{1,2}&&\cdots&&a_{1,n}\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&a_{n-2,n-1}&a_{n-2,n}\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}

We’ve cleared out the last entry in the next-to-last row in the matrix. Keep going, clearing out all the rest of the last column

\displaystyle\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)A=\begin{pmatrix}1&a_{1,2}&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&a_{n-2,n-1}&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}

Now we can use the next-to-last row — which has only a single nonzero entry left — to clear out the rest of the next-to-last column

\displaystyle\begin{aligned}\left(\left(H_{1,n-1,-a_{1,n-1}}\dots H_{n-2,n-1,-a_{n-2,n-1}}\right)\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)\right)A&\\=\begin{pmatrix}1&a_{1,2}&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&0&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}&\end{aligned}

Keep going, clearing out the columns from right to left

\displaystyle\left(\left(H_{1,2,-a_{1,2}}\right)\dots\left(H_{1,n,-a_{1,n}}\dots H_{n-1,n,-a_{n-1,n}}\right)\right)A=\begin{pmatrix}1&0&&\cdots&&0\\{0}&1&&&&\\&&\ddots&&&\vdots\\\vdots&&&1&0&0\\&&&&1&0\\{0}&&&\cdots&0&1\end{pmatrix}

and we’ve got the identity matrix! So that means that this big product of all these upper shears on the left is actually the inverse to the matrix A that we started with. Now we just multiply the inverse of each of these shears together in reverse order to find

\displaystyle A=\left(\left(H_{n-1,n,a_{n-1,n}}\dots H_{1,n,a_{1,n}}\right)\dots\left(H_{1,2,a_{1,2}}\right)\right)

So any upper-unipotent matrix can be written as a product of upper shears. Similarly, any lower-unipotent matrix (all {0} above the diagonal and all {1} on the diagonal) can be written as a product of lower shears. If we add in scalings, we can adjust the diagonal entries too. Given an invertible upper-triangular matrix, first factor it into the product of a diagonal matrix and an upper-unipotent matrix by dividing each row by its diagonal entry. Then the diagonal part can be built from scalings, while the upper-unipotent part can be built from shears. And, of course, scalings and lower shears together generate the subgroup of invertible lower-triangular matrices.
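
Here's a numerical sketch of this factorization in Python with NumPy (my own illustration; the random test matrix, the seed, and the helper name shear are arbitrary). It rebuilds an upper-unipotent matrix as a product of upper shears in exactly the order derived above: one shear per entry above the diagonal, working through the columns from right to left.

import numpy as np

def shear(n, i, j, c):
    """The shear H_{i,j,c}: the identity with an extra entry c in row i, column j (0-indexed)."""
    H = np.eye(n)
    H[i, j] = c
    return H

rng = np.random.default_rng(0)
n = 4
A = np.eye(n) + np.triu(rng.standard_normal((n, n)), k=1)   # a random upper-unipotent matrix

# Rebuild A as a product of upper shears: columns from right to left,
# one shear per entry above the diagonal, as in the formula above.
product = np.eye(n)
for j in range(n - 1, 0, -1):          # columns n, n-1, ..., 2
    for i in range(j - 1, -1, -1):     # rows j-1, ..., 1 within that column
        product = product @ shear(n, i, j, A[i, j])

assert np.allclose(product, A)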

August 28, 2009 Posted by | Algebra, Linear Algebra | 1 Comment

Elementary Row and Column Operations

Here’s a topic that might be familiar all the way back to high school mathematics classes. We’re going to use the elementary matrices to manipulate a matrix. Rather than work out abstract formulas, I’m going to take the specific matrix

\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}

and perform example manipulations on it to draw conclusions. The effects of elementary matrices are so, well, elementary that it will be clear how they generalize.

So how do we use the matrices to manipulate matrices? Well, we’re using the elementary matrices to change the input or output bases of the linear transformation represented by the matrix. So to change the output basis we’ll multiply on the left by an elementary matrix, while to change the input basis we’ll multiply on the right by the inverse of an elementary matrix — which itself is an elementary matrix of the same kind.

Let’s start with the swaps. On the left we might have

\displaystyle\begin{pmatrix}1&0&0\\{0}&0&1\\{0}&1&0\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}A&B&C&D\\I&J&K&L\\E&F&G&H\end{pmatrix}

while on the right we might have

\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}{0}&0&1&0\\{0}&1&0&0\\1&0&0&0\\{0}&0&0&1\end{pmatrix}=\begin{pmatrix}C&B&A&D\\G&F&E&H\\K&J&I&L\end{pmatrix}

On the left, the action of a swap is to swap two rows, while on the right the action is to swap two columns of the matrix.

Next come the scalings. On the left

\displaystyle\begin{pmatrix}c&0&0\\{0}&1&0\\{0}&0&1\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}cA&cB&cC&cD\\E&F&G&H\\I&J&K&L\end{pmatrix}

and on the right

\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}1&0&0&0\\{0}&c&0&0\\{0}&0&1&0\\{0}&0&0&1\end{pmatrix}=\begin{pmatrix}A&cB&C&D\\E&cF&G&H\\I&cJ&K&L\end{pmatrix}

On the left, the action of a scaling is to multiply a row by the scaling factor, while on the right the effect is to multiply a column by the scaling factor.

Finally, the shears. On the left

\displaystyle\begin{pmatrix}1&0&c\\{0}&1&0\\{0}&0&1\end{pmatrix}\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}=\begin{pmatrix}A+cI&B+cJ&C+cK&D+cL\\E&F&G&H\\I&J&K&L\end{pmatrix}

and on the right

\displaystyle\begin{pmatrix}A&B&C&D\\E&F&G&H\\I&J&K&L\end{pmatrix}\begin{pmatrix}1&0&0&0\\{0}&1&0&0\\{0}&0&1&0\\{0}&c&0&1\end{pmatrix}=\begin{pmatrix}A&B+cD&C&D\\E&F+cH&G&H\\I&J+cL&K&L\end{pmatrix}

On the left, the shear H_{i,j,c} adds c times the jth row to the ith row, while on the right, the shear adds c times the ith column to the jth column.

So in general we see that acting on the left manipulates the rows of a matrix, while acting on the right manipulates the columns. We call these the “elementary row operations” and “elementary column operations”, respectively. Any manipulation of the form of a matrix we can effect by these operations can be seen as the result of applying a change of basis matrix on the left (output) or right (input) side. And so any two matrices related by these operations can be seen as representing “the same” transformation in two different bases.
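
As a quick sanity check (a sketch of my own in Python with NumPy; the 3×4 matrix of consecutive numbers just stands in for the matrix with entries A through L above), here are the three kinds of elementary matrices acting on the two sides:

import numpy as np

M = np.arange(12.).reshape(3, 4)       # stands in for the matrix with entries A through L

swap = np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
print(swap @ M)                        # on the left: rows 2 and 3 exchanged

scale = np.eye(4); scale[1, 1] = 5.0
print(M @ scale)                       # on the right: second column multiplied by 5

shear = np.eye(4); shear[3, 1] = 2.0   # H_{4,2,2} on the right
print(M @ shear)                       # adds 2 times the 4th column to the 2nd column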

August 27, 2009 Posted by | Algebra, Linear Algebra | 8 Comments

Elementary Matrices

Today we’ll write down three different collections of matrices that together provide us all the tools we need to modify bases.

First, and least important, are the swaps. A swap is a matrix that looks like the identity, but has two of its nonzero entries in reversed columns.

\displaystyle W_{i,j}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&0&\cdots&1&&\\&&\vdots&\ddots&\vdots&&\\&&1&\cdots&0&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}

where the two swapped columns (or, equivalently, rows) are i and j. The swaps generate a subgroup of \mathrm{GL}(n,\mathbb{F}) isomorphic to the symmetric group S_n. In fact, these are the image of the usual generators of S_n under the permutation representation. They just rearrange the order of the basis elements.

Next are the scalings. A scaling is a matrix that looks like the identity, but one of its diagonal entries isn't {1}.

\displaystyle C_{i,c}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&1&&&&\\&&&c&&&\\&&&&1&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}

where the entry c is in the ith row and column. The scalings generate the subgroup of diagonal matrices, which is isomorphic to \left(\mathbb{F}^\times\right)^n: n independent copies of the group of nonzero elements of \mathbb{F} under multiplication. They stretch, squeeze, or reverse individual basis elements.

Finally come the shears. A shear is a matrix that looks like the identity, but one of its off-diagonal entries is nonzero.

\displaystyle H_{i,j,c}=\begin{pmatrix}1&&&&&&0\\&\ddots&&&&&\\&&1&&c&&\\&&&\ddots&&&\\&&&&1&&\\&&&&&\ddots&\\{0}&&&&&&1\end{pmatrix}

where the entry c is in the ith row and jth column. If i<j, then the extra nonzero entry falls above the diagonal and we call it an “upper shear”. On the other hand, if i>j then the extra nonzero entry falls below the diagonal, and we call it a “lower shear”. The shears also generate useful subgroups, but the proof of this fact is more complicated, and I’ll save it for its own post.

Now I said that the swaps are the least important of the three elementary transformations, and I should explain myself. It turns out that swaps aren’t really elementary. Indeed, consider the following calculation

\displaystyle\begin{aligned}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}1&0\\-1&1\end{pmatrix}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\-1&1\end{pmatrix}\begin{pmatrix}1&1\\{0}&1\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}-1&0\\{0}&1\end{pmatrix}&\\=\begin{pmatrix}0&1\\1&0\end{pmatrix}\end{aligned}

So we can build a swap from three shears and a scaling. It should be clear how to generalize this to build any swap from three shears and a scaling, as in the sketch below. But it's often simpler to just think of swapping two basis elements as a single basic operation rather than as a composition of shears and scalings.
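
Here's that generalization spelled out numerically (a sketch of my own in Python with NumPy; the helper names shear and scaling and the 5×5 test case are arbitrary).

import numpy as np

def shear(n, i, j, c):
    H = np.eye(n); H[i, j] = c    # the shear H_{i,j,c} (0-indexed)
    return H

def scaling(n, i, c):
    C = np.eye(n); C[i, i] = c    # the scaling C_{i,c} (0-indexed)
    return C

# The 2x2 computation above: three shears and a scaling give the swap.
W = shear(2, 0, 1, 1) @ shear(2, 1, 0, -1) @ shear(2, 0, 1, 1) @ scaling(2, 0, -1)
assert np.allclose(W, np.array([[0., 1.], [1., 0.]]))

# The same recipe inside a larger matrix, swapping rows/columns 2 and 4.
n, i, j = 5, 1, 3
W = shear(n, i, j, 1) @ shear(n, j, i, -1) @ shear(n, i, j, 1) @ scaling(n, i, -1)
expected = np.eye(n); expected[[i, j]] = expected[[j, i]]
assert np.allclose(W, expected)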

On the other hand, we can tell that we can’t build any shears from scalings, since the product of scalings is always diagonal. We also can’t build any scalings from shears, since the determinant of any shear is always {1}, and so the product of a bunch of shears also has determinant {1}. Meanwhile, the determinant of a scaling C_{i,c} is always the scaling factor c\neq1.

August 26, 2009 Posted by | Algebra, Linear Algebra | 5 Comments

Decompositions Past and Future

Okay, let’s review some of the manipulations we’ve established.

When we’re looking at an endomorphism T:V\rightarrow V, we have found that we can pick a basis so that the matrix of T is in Jordan normal form, which is almost diagonal. That is, we can find an invertible transformation S and a “Jordan transformation” J so that T=SJS^{-1}.

If we work with a normal transformation N on a complex inner product space we can pick an orthonormal basis of eigenvectors. That is, we can find a unitary transformation U and a diagonal transformation \Sigma so that N=U\Sigma U^{-1}.

Similarly, if we work with a self-adjoint transformation S on a real inner product space we can pick an orthonormal basis of eigenvectors. That is, we can find an orthogonal transformation O and a diagonal transformation \Sigma so that S=O\Sigma O^{-1}.

Then we generalized: if M:A\rightarrow B is any linear transformation between two inner product spaces we can find orthonormal bases giving the singular value decomposition. There are two unitary (or orthogonal) transformations U:B\rightarrow B and V:A\rightarrow A and a “diagonal” transformation \Sigma:A\rightarrow B so we can write M=U\Sigma V^{-1}.

We used this to show, in particular, that if T is an endomorphism on an inner product space we can write T=UP where U is unitary and P is positive-semidefinite. That is, if we can choose the "output basis" separately from the "input basis", we can put T into a nice form.
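
For concreteness, here's a small sketch in Python with NumPy (my own illustration; the random matrix and seed are arbitrary) of how the singular value decomposition hands us the polar decomposition: writing T=U\Sigma V^* gives T=(UV^*)(V\Sigma V^*), with the first factor unitary and the second positive-semidefinite. This particular recipe isn't spelled out in the paragraph above, but it's the standard way to pass from one decomposition to the other.

import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(T)               # NumPy returns T = U @ diag(s) @ Vh

unitary = U @ Vh                          # the unitary part of the polar decomposition
positive = Vh.conj().T @ np.diag(s) @ Vh  # the positive-semidefinite part

assert np.allclose(unitary @ positive, T)
assert np.allclose(unitary.conj().T @ unitary, np.eye(4))  # unitary (orthogonal, since T is real)
assert np.all(np.linalg.eigvalsh(positive) >= -1e-12)      # nonnegative eigenvalues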

Now we want to continue in this direction of choosing input and output bases separately. It’s obvious we have to do this when the source and target spaces are different, but even for endomorphisms we’ll move them separately. But now we’ll move back away from inner product spaces to any vector spaces over any fields. Just like for the singular value decomposition, what we’ll end up with is essentially captured in the first isomorphism theorem, but we’ll be able to be a lot more explicit about how to find the right bases to simplify the matrix of our transformation.

So here's the central question for the last chunk of linear algebra (for now): Given a linear transformation T:A\rightarrow B between any two vector spaces, how can we pick invertible transformations X\in\mathrm{GL}(A) and Y\in\mathrm{GL}(B) so that T=Y\Sigma X and \Sigma is in "as simple a form as possible" (and what does this mean?). To be even more particular, we'll start with arbitrary bases of A and B (so T already has a matrix) and we'll look at how to modify the matrix by modifying the two bases.

August 24, 2009 Posted by | Algebra, Linear Algebra | 5 Comments

The Uniqueness of Polar Decomposition

I asserted in a comment that the polar decomposition of a transformation has certain uniqueness properties. First, we can always uniquely recover the positive-semidefinite part, and then if that’s actually positive-definite we can recover the unitary part. And it turns out our running analogy with complex numbers is the perfect way to get at these facts.

Recall that if we take a complex number in polar form z=re^{i\theta} we can recover its length as

\displaystyle\lvert z\rvert=\sqrt{\bar{z}z}=\sqrt{re^{-i\theta}re^{i\theta}}=\sqrt{r^2}=r

Similarly, using the right polar decomposition we can write

\displaystyle\sqrt{T^*T}=\sqrt{\left(UP\right)^*\left(UP\right)}=\sqrt{PU^*UP}=\sqrt{PP}=P

where the radical denotes the unique positive-semidefinite square root any positive-semidefinite transformation has.

Now if T is nonsingular then so is P, since we can write P=U^*T and U^* is definitely nonsingular. So in this case we can write U=TP^{-1} and U is uniquely determined. On the other hand, if T is singular then we can't invert P. We then have some freedom in our choice of U, corresponding to the nontrivial cokernel of P.
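
Here's a small numerical sketch of this recovery in Python with NumPy (my own illustration; the random matrix, seed, and tolerances are arbitrary). The square root is computed by the spectral construction from the Square Roots post below.

import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))          # a random (almost surely nonsingular) matrix

# P = sqrt(T* T), computed spectrally: diagonalize and take nonnegative square roots
w, V = np.linalg.eigh(T.conj().T @ T)    # eigenvalues are real and nonnegative
P = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

U = T @ np.linalg.inv(P)                 # recover the unitary part, since T is nonsingular
assert np.allclose(U @ P, T)                       # the right polar decomposition
assert np.allclose(U.conj().T @ U, np.eye(3))      # U really is unitary (orthogonal here)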

We could also use the left polar decomposition to write

\displaystyle\sqrt{TT^*}=\sqrt{\left(P'U\right)\left(P'U\right)^*}=\sqrt{P'UU^*P'}=\sqrt{P'P'}=P'

So the positive-semidefinite part is again uniquely determined, and again the unitary part will be uniquely determined so long as T is nonsingular.

August 21, 2009 Posted by | Algebra, Linear Algebra | Leave a comment

Square Roots

Here’s a neat thing we can do with the spectral theorems: we can take square roots of positive-semidefinite transformations. And this makes sense, since positive-semidefinite transformations are analogous to nonnegative real numbers, and every nonnegative real number has a unique nonnegative real square root. So we expect that every positive-semidefinite transformation will have a unique positive-semidefinite square root.

So we start by writing down the spectral decomposition

\displaystyle P=U\Lambda U^*

Where \Lambda is diagonal. And since P is positive-semidefinite, every diagonal entry of \Lambda — every eigenvalue of P — is a nonnegative real number. We can arrange them in nonincreasing order, with the largest eigenvalue in the upper left, and so on down to the lowest eigenvalue (maybe {0}) in the lower right corner. Since the eigenvalues are uniquely determined, with this arrangement \Lambda is uniquely determined. If there are repeated eigenvalues, U might not be completely determined, since we have some freedom in picking the basis for degenerate eigenspaces.

Anyhow, since each entry in \Lambda is a nonnegative real number, we can replace each one with its unique nonnegative square root. We call this new matrix \Sigma, and observe that \Sigma^2=\Lambda. Now we can define S=U\Sigma U^*, and calculate

\displaystyle S^2=U\Sigma U^*U\Sigma U^*=U\Sigma^2U^*=U\Lambda U^*=P

So S is a square root of P. Since the eigenvalues of S (the diagonal entries of \Sigma) are nonnegative real numbers, S is positive-semidefinite.
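
Here's the construction as a short sketch in Python with NumPy (my own illustration; the function name psd_sqrt and the random test matrix are arbitrary).

import numpy as np

def psd_sqrt(P):
    """The positive-semidefinite square root of a positive-semidefinite matrix P,
    built from the spectral decomposition P = U diag(lambda) U*."""
    w, U = np.linalg.eigh(P)                      # real, nonnegative eigenvalues
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
P = M @ M.T                                       # a positive-semidefinite test matrix

S = psd_sqrt(P)
assert np.allclose(S @ S, P)
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)    # S is positive-semidefinite too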

On the other hand, what if we have some other positive-semidefinite square root S'=U'\Sigma'U'^*? Saying that it's a square root of P means that

\displaystyle S'^2=U'\Sigma'U'^*U'\Sigma'U'^*=U'\Sigma'^2U'^*=U\Lambda U^*=P

That is, we must have

\displaystyle\Lambda=U^*U'\Sigma'^2U'^*U=\left(U^*U'\right)\left(\Sigma'^2\right)\left(U^*U'\right)^*

The matrix \Sigma'^2 is diagonal, and its entries — the squares of the diagonal entries of \Sigma — must be the eigenvalues of \Lambda. And so the entries of \Sigma' are the same as those of \Sigma, though possibly in a different order. The rearrangement, then, is the content of conjugating by U^*U'. That is, we must have

\displaystyle\Sigma=\left(U^*U'\right)\left(\Sigma'\right)\left(U^*U'\right)^*=U^*\left(U'\Sigma'U'^*\right)U

and so

\displaystyle U\Sigma U^*=U'\Sigma'U'^*

And so we really have the exact same square root again. This establishes the uniqueness of the positive-semidefinite square root.

August 20, 2009 Posted by | Algebra, Linear Algebra | 2 Comments
