Okay, let’s take the singular value decomposition and do something really neat with it. Specifically, we’ll start with an endomorphism $T$ and we’ll write down its singular value decomposition

$T = U\Sigma V^*$

where $U$ and $V$ are in the unitary (orthogonal) group of our inner product space. So, as it turns out, $UV^*$ is also unitary. And $V\Sigma V^*$ is positive-semidefinite (since $\Sigma$ is). And, of course, since $V^*V = 1$, we can write

$T = U\Sigma V^* = (UV^*)(V\Sigma V^*)$
That is, any endomorphism can be written as the product of a unitary transformation and a positive-semidefinite one.
Remember that unitary transformations are like unit complex numbers, while positive-semidefinite transformations are like nonnegative real numbers. And so this “polar decomposition” is like the polar form of a complex number, where we write a complex number as the product of a unit complex number and a nonnegative real number.
We can recover the analogy like we did before, by taking determinants. We find

$\det(T) = \det(UV^*)\det(V\Sigma V^*)$

since the determinant of a unitary transformation is a unit complex number, and the determinant of a positive-semidefinite transformation is a nonnegative real number. If $T$ is nonsingular, so that $V\Sigma V^*$ is actually positive-definite, then $\det(V\Sigma V^*)$ will be strictly positive, and so the determinant of $T$ will be nonzero.
We could also define $P' = U\Sigma U^*$, so that $T = (U\Sigma U^*)(UV^*)$. This is the left polar decomposition (writing the positive-semidefinite part on the left), while the previous form is the right polar decomposition.
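To see both decompositions concretely, here’s a quick numerical sketch in numpy (the matrix `A` is just a random stand-in for $T$; note that numpy’s `svd` returns $V^*$ as `Vh`):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # an arbitrary endomorphism, standing in for T

# numpy returns the SVD as A = U @ diag(s) @ Vh, where Vh plays the role of V*
U, s, Vh = np.linalg.svd(A)

Q = U @ Vh                           # the unitary part, like a unit complex number
P = Vh.conj().T @ np.diag(s) @ Vh    # the positive-semidefinite part, like a radius

# right polar decomposition: A = Q P
assert np.allclose(A, Q @ P)
assert np.allclose(Q @ Q.conj().T, np.eye(3))    # Q really is unitary
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)   # P really is positive-semidefinite

# left polar decomposition: the positive part U Sigma U* goes on the left instead
P_left = U @ np.diag(s) @ U.conj().T
assert np.allclose(A, P_left @ Q)
```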
We spent a lot of time yesterday working out how to write down the singular value decomposition of a transformation $M: A \to B$, writing

$M = U\Sigma V^*$

where $U$ and $V$ are unitary transformations on $B$ and $A$, respectively, and $\Sigma$ is a “diagonal” transformation, in the sense that its matrix looks like

$\Sigma = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}$

where $D$ really is a nonsingular diagonal matrix.
So what’s it good for?
Well, it’s a very concrete representation of the first isomorphism theorem. Every transformation is decomposed into a projection, an isomorphism, and an inclusion. But here the projection and the inclusion are built up into unitary transformations (as we verified is always possible), and the isomorphism is the $D$ part of $\Sigma$.

Incidentally, this means that we can read off the rank of $M$ from the number of rows in $D$, while the nullity is the number of zero columns in $\Sigma$.

More heuristically, this is saying we can break any transformation into three parts. First, $V^*$ picks out an orthonormal basis of “canonical input vectors”. Then $\Sigma$ handles the actual transformation, scaling the components in these directions, or killing them off entirely (for the zero columns). Finally, $U$ takes us back out of the orthonormal basis of “canonical output vectors”. It tells us that if we’re allowed to pick the input and output bases separately, we kill off one subspace (the kernel) and can diagonalize the action on the remaining subspace.
The SVD also comes in handy for solving systems of linear equations. Let’s say we have a system written down as the matrix equation

$Mx = b$

and then, substituting the singular value decomposition, we find

$U\Sigma V^* x = b$

or, equivalently,

$\Sigma V^* x = U^* b$

So we can check ourselves by calculating $U^* b$. If this extends into the zero rows of $\Sigma$ there’s no possible way to satisfy the equation. That is, we can quickly see if the system is unsolvable. On the other hand, if $U^* b$ lies completely within the nonzero rows of $\Sigma$, it’s straightforward to solve this equation. We first write down the new transformation

$\Sigma^+ = \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}$

where it’s not quite apparent from this block form, but we’ve also taken a transpose. That is, there are as many columns in $\Sigma^+$ as there are rows in $\Sigma$, and vice versa. The upshot is that $\Sigma^+\Sigma$ is a transformation which kills off the same kernel as $\Sigma$ does, but is otherwise the identity. Thus we can proceed

$V^* x = \Sigma^+ U^* b$

This “undoes” the scaling from $D$. We can also replace the lower rows of $\Sigma^+ U^* b$ with free variables, since applying $\Sigma$ will kill them off anyway. Finally, we find

$x = V\Sigma^+ U^* b$

and, actually, a whole family of solutions for the variables we could introduce in the previous step. But this will at least give one solution, and then all the others differ from this one by a vector in $\mathrm{Ker}(M)$, as usual.
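Here’s a small numerical sketch of that procedure (the matrix and right-hand side are made up for illustration; numpy’s `svd` hands us $U$, the diagonal of $\Sigma$, and $V^*$ directly):

```python
import numpy as np

M = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])           # three equations, two unknowns
b = np.array([1., 1., 1.])

U, s, Vh = np.linalg.svd(M, full_matrices=True)
r = len(s)                         # number of nonzero singular values

c = U.conj().T @ b                 # this is U* b

# solvability check: U* b must not extend into the zero rows of Sigma
consistent = np.allclose(c[r:], 0)
assert consistent

# apply Sigma^+ (undoing the scaling), then V: one solution x = V Sigma^+ U* b
x = Vh.conj().T @ (c[:r] / s)
assert np.allclose(M @ x, b)       # and indeed M x = b
```

For this particular $b$ the system happens to be consistent; perturbing $b$ out of the column space would make `consistent` come out `False`.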
The spectral theorems gave us the decompositions $O\Lambda O^{-1}$ and $U\Lambda U^{-1}$, where $O$ is orthogonal, $U$ is unitary, and (in either case) $\Lambda$ is diagonal. What we want is a similar decomposition for any transformation. And, in fact, we’ll get one that even works for transformations between different inner product spaces.
So let’s say we’ve got a transformation $M: A \to B$ (we’re going to want to save $U$ and $V$ to denote unitary transformations). We also have its adjoint $M^*: B \to A$. Then $M^*M$ is positive-semidefinite (and thus self-adjoint and normal), and so the spectral theorem applies. There must be a unitary transformation $V$ (orthogonal, if we’re working with real vector spaces) so that

$V^* M^* M V = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}$

where $D$ is a diagonal matrix with strictly positive entries.
That is, we can break up $A$ as the direct sum $A = A_1 \oplus A_2$. The diagonal transformation $D$ is positive-definite on $A_1$, while the restriction of $V^*M^*MV$ to $A_2$ is the zero transformation. We will restrict $V$ to each of these subspaces, giving $V_1: A_1 \to A$ and $V_2: A_2 \to A$, along with their adjoints $V_1^*$ and $V_2^*$. Then we can write

$M^*M = V_1 D V_1^*$

From this we conclude both that $V_1^* M^* M V_1 = D$ and that $M V_2 = 0$. We define $U_1 = M V_1 D^{-\frac{1}{2}}$, where we get the last matrix by just taking the inverse of the square root of each of the diagonal entries of $D$ (this is part of why diagonal transformations are so nice to work with). Then we can calculate

$U_1^* U_1 = D^{-\frac{1}{2}} V_1^* M^* M V_1 D^{-\frac{1}{2}} = D^{-\frac{1}{2}} D D^{-\frac{1}{2}} = 1_{A_1}$
This is good, but we don’t yet have unitary matrices in our decomposition. We do know that $U_1^* U_1 = 1_{A_1}$, and we can check that $V_1^* V_1 = 1_{A_1}$ and $V_2^* V_2 = 1_{A_2}$ as well.

Now we know that we can use $V_2$ to “fill out” $V_1$ to give the unitary transformation $V$. That is, $V_1^* V_1 = 1_{A_1}$ (as we just noted), $V_2^* V_2 = 1_{A_2}$ (similarly), $V_1^* V_2$ and $V_2^* V_1$ are both the appropriate zero transformation, and $V_1 V_1^* + V_2 V_2^* = 1_A$. Notice that these are exactly stating that the adjoints $V_1^*$ and $V_2^*$ are the projection operators corresponding to the inclusions $V_1$ and $V_2$ in a direct sum representation of $A$ as $A_1 \oplus A_2$. It’s clear from general principles that there must be some projections, but it’s the unitarity of $V$ that makes the projections be exactly the adjoints of the inclusions.

What we need to do now is to similarly fill out $U_1$ by supplying a corresponding $U_2$ that will similarly “fill out” a unitary transformation $U$. But we know that we can do this! Pick an orthonormal basis of $A_1$ and hit it with $U_1$ to get a bunch of orthonormal vectors in $B$ (orthonormal because $U_1^* U_1 = 1_{A_1}$). Then fill these out to an orthonormal basis of all of $B$. Just set $B_2$ to be the span of all the new basis vectors, which is the orthogonal complement of the image of $U_1$, and let $U_2$ be the inclusion of $B_2$ into $B$. We can then combine to get a unitary transformation

$U = \begin{pmatrix} U_1 & U_2 \end{pmatrix}$
Finally, we define

$\Sigma = \begin{pmatrix} D^{\frac{1}{2}} & 0 \\ 0 & 0 \end{pmatrix}$

where there are as many zero rows in $\Sigma$ as we needed to add to fill out the basis of $B$ (the dimension of $B_2$). I say that $M = U\Sigma V^*$ is our desired decomposition. Indeed, we can calculate

$U\Sigma V^* = \begin{pmatrix} U_1 & U_2 \end{pmatrix}\begin{pmatrix} D^{\frac{1}{2}} & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} V_1^* \\ V_2^* \end{pmatrix} = U_1 D^{\frac{1}{2}} V_1^* = M V_1 D^{-\frac{1}{2}} D^{\frac{1}{2}} V_1^* = M V_1 V_1^* = M(1_A - V_2 V_2^*) = M$

using $M V_2 = 0$ in the last step. Here $U$ and $V$ are unitary on $B$ and $A$, respectively, and $\Sigma$ is a “diagonal” transformation (not strictly speaking a diagonal matrix in the case where $A$ and $B$ have different dimensions).
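A quick numerical check of the shape of this decomposition, with a made-up $4\times 2$ matrix standing in for $M$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 2))    # a map from a 2-d space A to a 4-d space B

U, s, Vh = np.linalg.svd(M, full_matrices=True)   # U is 4x4, Vh (= V*) is 2x2

# Sigma is "diagonal": a positive diagonal block sitting on top of zero rows,
# one zero row for each extra dimension of B
Sigma = np.zeros((4, 2))
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(M, U @ Sigma @ Vh)       # M = U Sigma V*
assert np.allclose(U.T @ U, np.eye(4))      # U unitary on B
assert np.allclose(Vh.T @ Vh, np.eye(2))    # V unitary on A
```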
Let’s take the last couple lemmas we’ve proven and throw them together to prove the real analogue of the complex spectral theorem. We start with a self-adjoint transformation $S$ on a finite-dimensional real inner-product space $V$.

First off, since $S$ is self-adjoint, we know that it has an eigenvector $e_1$, which we can pick to have unit length (how?). The subspace $\mathbb{R}e_1 \subseteq V$ is then invariant under the action of $S$. But then the orthogonal complement $V_1 = (\mathbb{R}e_1)^\perp$ is also invariant under $S$. So we can restrict it to a transformation $S_1: V_1 \to V_1$.

It’s not too hard to see that $S_1$ is also self-adjoint, and so it must have an eigenvector $e_2$, which will also be an eigenvector of $S$. And we’ll get an orthogonal complement $V_2$, and so on. Since every step we take reduces the dimension of the vector space we’re looking at by one, we must eventually bottom out. At that point, we have an orthonormal basis $\{e_i\}$ of eigenvectors for our original space $V$. Each eigenvector was picked to have unit length, and each one is in the orthogonal complement of those that came before, so they’re all orthogonal to each other.

Just like in the complex case, if we have a basis and a matrix already floating around for $S$, we can use this new basis to perform a change of basis, which will be orthogonal (not unitary in this case). That is, we can write the matrix of any self-adjoint transformation as $O\Lambda O^{-1}$, where $O$ is an orthogonal matrix and $\Lambda$ is diagonal. Alternately, since $O^{-1} = O^T$, we can think of this as $O\Lambda O^T$, in case we’re considering our transformation as representing a bilinear form (which self-adjoint transformations often are).
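Numerically, this is exactly what `numpy.linalg.eigh` computes for a symmetric matrix (the example matrix here is arbitrary):

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])       # symmetric, hence self-adjoint

w, O = np.linalg.eigh(S)           # w: the eigenvalues, O: eigenvectors as columns

assert np.allclose(O @ O.T, np.eye(3))         # the eigenvectors are orthonormal
assert np.allclose(S, O @ np.diag(w) @ O.T)    # S = O Lambda O^T = O Lambda O^{-1}
```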
What if we’ve got this sort of representation? A transformation with a matrix of the form $O\Lambda O^T$ must be self-adjoint. Indeed, we can take its adjoint to find

$\left(O\Lambda O^T\right)^T = \left(O^T\right)^T \Lambda^T O^T = O\Lambda^T O^T$

but since $\Lambda$ is diagonal, it’s automatically symmetric, and thus represents a self-adjoint transformation. Thus if a real transformation has an orthonormal basis of eigenvectors, it must be self-adjoint.
Notice that this is a somewhat simpler characterization than in the complex case. This hinges on the fact that for real transformations taking the adjoint corresponds to simple matrix transposition, and every diagonal matrix is automatically symmetric. For complex transformations, taking the adjoint corresponds to conjugate transposition, and not all diagonal matrices are Hermitian. That’s why we had to expand to the broader class of normal transformations.
Okay, this tells us nothing in the complex case, but for real transformations we have no reason to assume that a given transformation has any eigenvalues at all. But if our transformation $S$ is self-adjoint it must have one.

When we found this in the complex case we saw that the characteristic polynomial had to have a root, since $\mathbb{C}$ is algebraically closed. It’s the fact that $\mathbb{R}$ isn’t algebraically closed that causes our trouble. But since $\mathbb{R}$ sits inside $\mathbb{C}$ we can consider any real polynomial as a complex polynomial. That is, the characteristic polynomial of our transformation, considered as a complex polynomial (whose coefficients just happen to all be real) must have a complex root.
This really feels like a dirty trick, so let’s try to put it on a bit firmer ground. We’re looking at a transformation $S$ on a vector space $V$ over $\mathbb{R}$. What we’re going to do is “complexify” our space, so that we can use some things that only work over the complex numbers. To do this, we’ll consider $\mathbb{C}$ itself as a two-dimensional vector space over $\mathbb{R}$ and form the tensor product $\mathbb{C} \otimes V$. The transformation $S$ immediately induces a transformation $S^{\mathbb{C}}$ by defining $S^{\mathbb{C}}(z \otimes v) = z \otimes S(v)$. It’s a complex vector space, since given a complex constant $c$ we can define the scalar product of $z \otimes v$ by $c$ as $(cz) \otimes v$. Finally, $S^{\mathbb{C}}$ is complex-linear since it commutes with our complex scalar product.

What have we done? Maybe it’ll be clearer if we pick a basis $\{e_i\}$ for $V$. That is, any vector in $V$ is a linear combination of the $e_i$ in a unique way. Then every (real) vector in $\mathbb{C} \otimes V$ is a unique linear combination of the $1 \otimes e_i$ and the $i \otimes e_i$ (this latter $i$ is the complex number, not the index; try to keep them separate). But as complex vectors, we have $i \otimes e_i = i(1 \otimes e_i)$, and so every vector is a unique complex linear combination of the $1 \otimes e_i$. It’s like we’ve kept the same basis, but just decided to allow complex coefficients too.

And what about the matrix of $S^{\mathbb{C}}$ with respect to this (complex) basis of $\mathbb{C} \otimes V$? Well it’s just the same as the old matrix of $S$ with respect to the $e_i$! Just write

$S^{\mathbb{C}}(1 \otimes e_j) = 1 \otimes S(e_j) = 1 \otimes s_j^i e_i = s_j^i (1 \otimes e_i)$

Then if $S$ is self-adjoint its matrix will be symmetric, and so will the matrix of $S^{\mathbb{C}}$, which must then be self-adjoint as well. And we can calculate the characteristic polynomial of $S^{\mathbb{C}}$ from its matrix, so the characteristic polynomial of $S^{\mathbb{C}}$ will be the same as that of $S$, except it will be a complex polynomial whose coefficients all just happen to be real.
Okay so back to the point. Since $S^{\mathbb{C}}$ is a transformation on a complex vector space it must have an eigenvalue $\lambda$ and a corresponding eigenvector $v$. And I say that since $S^{\mathbb{C}}$ is self-adjoint, the eigenvalue $\lambda$ must be real. Indeed, we can calculate

$\lambda\langle v, v\rangle = \langle v, \lambda v\rangle = \langle v, S^{\mathbb{C}}(v)\rangle = \langle S^{\mathbb{C}}(v), v\rangle = \langle \lambda v, v\rangle = \bar{\lambda}\langle v, v\rangle$

and thus $\lambda = \bar{\lambda}$, so $\lambda$ is real.

Therefore, we have found a real number $\lambda$ so that when we plug it into the characteristic polynomial of $S^{\mathbb{C}}$, we get zero. But then we also get zero when we plug it into the characteristic polynomial of $S$, and thus $\lambda$ is also an eigenvalue of $S$.

And so, finally, every self-adjoint transformation on a real vector space has at least one eigenvector.
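The complexification argument is easy to spot-check numerically: hand a real symmetric matrix to a complex eigenvalue routine and the eigenvalues come out (numerically) real. The matrix below is an arbitrary symmetric example:

```python
import numpy as np

S = np.array([[0., 1.],
              [1., 0.]])                       # real symmetric

# treat S as a complex matrix: the complexified transformation has the same matrix
lam = np.linalg.eigvals(S.astype(complex))

assert np.allclose(lam.imag, 0)                # every eigenvalue is real
assert np.allclose(sorted(lam.real), [-1., 1.])
```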
Okay, today I want to nail down a lemma about the invariant subspaces (and, in particular, eigenspaces) of self-adjoint transformations. Specifically, the fact that the orthogonal complement of an invariant subspace is also invariant.
So let’s say we’ve got a subspace $W \subseteq V$ and its orthogonal complement $W^\perp$. We also have a self-adjoint transformation $T$ so that $T(w) \in W$ for all $w \in W$. What we want to show is that for every $u \in W^\perp$, we also have $T(u) \in W^\perp$.

Okay, so let’s try to calculate the inner product $\langle w, T(u)\rangle$ for an arbitrary $w \in W$:

$\langle w, T(u)\rangle = \langle T(w), u\rangle = 0$

since $T$ is self-adjoint, $T(w)$ is in $W$, and $u$ is in $W^\perp$. Then since this is zero no matter what $w \in W$ we pick, we see that $T(u) \in W^\perp$. Neat!
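A two-dimensional sanity check of the lemma, with a symmetric matrix and one of its eigenlines as the invariant subspace (the specific numbers are just for illustration):

```python
import numpy as np

T = np.array([[2., 1.],
              [1., 2.]])                  # self-adjoint

w = np.array([1., 1.]) / np.sqrt(2)      # T w = 3 w, so W = span(w) is invariant
u = np.array([1., -1.]) / np.sqrt(2)     # u spans the orthogonal complement of W

# T(u) lands back in the complement: its inner product with w is
# <w, T(u)> = <T(w), u> = 3 <w, u> = 0
assert np.isclose(np.dot(w, T @ u), 0)
```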
We’re now ready to characterize those transformations on complex vector spaces which have a diagonal matrix with respect to some orthonormal basis. First of all, such a transformation must be normal. If we have a diagonal matrix we can find the matrix of the adjoint by taking its conjugate transpose, and this will again be diagonal. Since any two diagonal matrices commute, the transformation must commute with its adjoint, and is therefore normal.
On the other hand, let’s start with a normal transformation $N$ and see what happens as we try to diagonalize it. First, since we’re working over $\mathbb{C}$ here, we can pick an orthonormal basis $\{e_i\}$ that gives us an upper-triangular matrix. Now, I assert that this matrix already is diagonal when $N$ is normal.
Let’s write out the matrices for $N$ and its adjoint $N^*$:

$N = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ 0 & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{n,n} \end{pmatrix} \qquad N^* = \begin{pmatrix} \bar{a}_{1,1} & 0 & \cdots & 0 \\ \bar{a}_{1,2} & \bar{a}_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \bar{a}_{1,n} & \bar{a}_{2,n} & \cdots & \bar{a}_{n,n} \end{pmatrix}$

Now we can see that $N(e_1) = a_{1,1}e_1$, while $N^*(e_1) = \bar{a}_{1,1}e_1 + \bar{a}_{1,2}e_2 + \cdots + \bar{a}_{1,n}e_n$. Since this basis is orthonormal, it’s easy to calculate the squared-lengths of these two:

$\lVert N(e_1)\rVert^2 = \lvert a_{1,1}\rvert^2$

$\lVert N^*(e_1)\rVert^2 = \lvert a_{1,1}\rvert^2 + \lvert a_{1,2}\rvert^2 + \cdots + \lvert a_{1,n}\rvert^2$

But since $N$ is normal, these two must be the same. And so all the entries other than maybe $a_{1,1}$ in the first row of our matrix must be zero. We can then repeat this reasoning with the basis vector $e_2$, and reach a similar conclusion about the second row, and so on until we see that all the entries above the diagonal must be zero.
That is, not only is it necessary that a transformation be normal in order to diagonalize it, it’s also sufficient. Any normal transformation on a complex vector space has an orthonormal basis of eigenvectors.
Now if we have an arbitrary orthonormal basis (say $N$ is a transformation on $\mathbb{C}^n$ with the standard basis already floating around) we may want to work with the matrix of $N$ with respect to this basis. If this were our basis of eigenvectors, $N$ would have the diagonal matrix $\Lambda$. But we may not be so lucky. Still, we can perform a change of basis using the basis of eigenvectors to fill in the columns of the change-of-basis matrix $U$. And since we’re going from one orthonormal basis to another, this matrix will be unitary!

Thus a normal transformation is not only equivalent to a diagonal transformation, it is unitarily equivalent. That is, the matrix of any normal transformation can be written as $U\Lambda U^*$ for a diagonal matrix $\Lambda$ and a unitary matrix $U$. And any matrix which is unitarily equivalent to a diagonal matrix is normal. That is, if you take the subspace of diagonal matrices within the space of all matrices, then use the unitary group to act by conjugation on this subspace, the result is the subspace of all normal matrices, which represent normal transformations.

Often, you’ll see this written as $U\Lambda U^{-1}$, which is really the same thing of course, but there’s an interesting semantic difference. Writing it using the inverse is a similarity, which is our notion of equivalence for transformations. So if we’re thinking of our matrix as acting on a vector space, this is the “right way” to think of the spectral theorem. On the other hand, using the conjugate transpose is a congruence, which is our notion of equivalence for bilinear forms. So if we’re thinking of our matrix as representing a bilinear form, this is the “right way” to think of the spectral theorem. But of course since we’re using unitary transformations here, it doesn’t matter! Unitary equivalence of endomorphisms and of bilinear forms is exactly the same thing.
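As a concrete instance, circulant matrices are normal, and they are all unitarily diagonalized by one and the same unitary matrix, the discrete Fourier transform. A small sketch (the particular circulant is arbitrary):

```python
import numpy as np

# a circulant matrix: each row is the previous row shifted right
N = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
assert np.allclose(N @ N.T, N.T @ N)              # N is normal

# the (unitary) discrete Fourier transform matrix
n = 3
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
assert np.allclose(F.conj().T @ F, np.eye(n))     # F is unitary

# conjugating by F diagonalizes N: N = F Lambda F*
Lam = F.conj().T @ N @ F
assert np.allclose(Lam, np.diag(np.diag(Lam)))    # Lambda is diagonal
assert np.allclose(N, F @ Lam @ F.conj().T)
```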
I almost forgot to throw in this little observation about unitary and orthogonal matrices that will come in handy.
Let’s say we’ve got a unitary transformation $U$ and an orthonormal basis $\{e_i\}$. We can write down the matrix of $U$ as before:

$u_j^i = \langle e_i, U(e_j)\rangle$

Now, each column of this matrix is a vector. In particular, the $j$th column is the result of transforming the basis vector $e_j$ by $U$.

What do these vectors have to do with each other? Well, let’s take their inner products and find out:

$\langle U(e_j), U(e_k)\rangle = \langle e_j, e_k\rangle = \delta_{j,k}$

since $U$ preserves the inner product. That is, the collection of columns of the matrix of $U$ forms another orthonormal basis.
On the other hand, what if we have in mind some other orthonormal basis $\{f_j\}$? We can write each of these vectors out in terms of the original basis

$f_j = c_j^i e_i$

and even get a change-of-basis transformation $C$ (like we did for general linear transformations) defined by

$C(e_j) = f_j$

so the $c_j^i$ are the matrix entries for $C$ with respect to the basis $\{e_i\}$. This transformation will then be unitary.
Indeed, take arbitrary vectors $v = v^i e_i$ and $w = w^j e_j$. Their inner product is

$\langle v, w\rangle = \overline{v^i} w^j \langle e_i, e_j\rangle = \overline{v^i} w^j \delta_{i,j} = \overline{v^i} w^i$

On the other hand, after acting by $C$ we find

$\langle C(v), C(w)\rangle = \overline{v^i} w^j \langle f_i, f_j\rangle = \overline{v^i} w^j \delta_{i,j} = \overline{v^i} w^i$

since the basis $\{f_j\}$ is orthonormal as well.
To sum up: with respect to an orthonormal basis, the columns of a unitary matrix form another orthonormal basis. Conversely, writing any other orthonormal basis in terms of the original basis and using these coefficients as the columns of a matrix gives a unitary matrix. The same holds true for orthogonal matrices, with similar reasoning all the way through. And both of these are parallel to the situation for general linear transformations: the columns of an invertible matrix with respect to any basis form another basis, and conversely.
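In matrix terms, with a plane rotation as the unitary (here, orthogonal) example:

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation: orthogonal

# the Gram matrix of the columns: every pairwise inner product is delta_{jk},
# so the columns form another orthonormal basis
assert np.allclose(Q.T @ Q, np.eye(2))

# conversely, stacking that orthonormal basis {Q e_1, Q e_2} as columns
# just reproduces Q itself, which is orthogonal
cols = np.column_stack([Q @ np.eye(2)[:, j] for j in range(2)])
assert np.allclose(cols, Q)
```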
First off, it turns out that the eigenvalues of $N^*$ are exactly the complex conjugates of those of $N$ (the same, if we’re working over $\mathbb{R}$). Actually, this isn’t even special to normal operators. Indeed, if $N - \lambda 1$ has a nontrivial kernel, then we can take the adjoint to find that $N^* - \bar{\lambda}1$ must have a nontrivial kernel as well. But if our transformation is normal, it turns out that not only do we have conjugate eigenvalues, they correspond to the same eigenvectors as well!
To see this, we do almost the same thing as before. But we get more than just a nontrivial kernel this time. Given an eigenvector $v$ with eigenvalue $\lambda$ we know that $(N - \lambda 1)v = 0$, and so it must have length zero. But if $N$ is normal then so is $N - \lambda 1$:

$(N - \lambda 1)(N - \lambda 1)^* = (N - \lambda 1)(N^* - \bar{\lambda}1) = NN^* - \lambda N^* - \bar{\lambda}N + \lvert\lambda\rvert^2 1 = N^*N - \bar{\lambda}N - \lambda N^* + \lvert\lambda\rvert^2 1 = (N^* - \bar{\lambda}1)(N - \lambda 1) = (N - \lambda 1)^*(N - \lambda 1)$

and so acting by $(N - \lambda 1)^*$ gives the same length as acting by $N - \lambda 1$. That is:

$\lVert (N^* - \bar{\lambda}1)v \rVert = \lVert (N - \lambda 1)v \rVert = 0$

thus by the definiteness of length, we know that $(N^* - \bar{\lambda}1)v = 0$. That is, $v$ is also an eigenvector of $N^*$, with eigenvalue $\bar{\lambda}$.
Then as a corollary we can find that not only are the eigenvectors corresponding to distinct eigenvalues linearly independent, they are actually orthogonal! Indeed, if $v$ and $w$ are eigenvectors of $N$ with distinct eigenvalues $\lambda$ and $\mu$, respectively, then we find

$\lambda\langle w, v\rangle = \langle w, \lambda v\rangle = \langle w, N(v)\rangle = \langle N^*(w), v\rangle = \langle \bar{\mu}w, v\rangle = \mu\langle w, v\rangle$

Since $\lambda \neq \mu$ we must conclude that $\langle w, v\rangle = 0$, and that the two eigenvectors are orthogonal.
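A concrete check with a normal (in fact antisymmetric) matrix whose eigenvalues are honestly complex:

```python
import numpy as np

N = np.array([[0., -1.],
              [1.,  0.]], dtype=complex)           # antisymmetric, hence normal
assert np.allclose(N @ N.conj().T, N.conj().T @ N)

v = np.array([1., -1j]) / np.sqrt(2)               # N v = i v
u = np.array([1.,  1j]) / np.sqrt(2)               # N u = -i u
assert np.allclose(N @ v, 1j * v)
assert np.allclose(N @ u, -1j * u)

# the same v is an eigenvector of the adjoint, with the conjugate eigenvalue
assert np.allclose(N.conj().T @ v, -1j * v)

# and eigenvectors with distinct eigenvalues are orthogonal
assert np.isclose(np.vdot(v, u), 0)
```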
All the transformations in our analogy, that is, self-adjoint and unitary (or orthogonal) transformations, and even anti-self-adjoint (antisymmetric and “skew-Hermitian”) transformations satisfying $A^* = -A$, all satisfy one slightly subtle but very interesting property: they all commute with their adjoints. Self-adjoint and anti-self-adjoint transformations do because any transformation commutes with itself and also with its negative, since negation is just scalar multiplication. Orthogonal and unitary transformations do because every transformation commutes with its own inverse.

Now in general most pairs of transformations do not commute, so there’s no reason to expect this to happen commonly. Still, if we have a transformation $N$ so that $NN^* = N^*N$, we call it a “normal” transformation.
Let’s bang out an equivalent characterization of normal operators while we’re at it, so we can get an idea of what they look like geometrically. Take any vector $v$, hit it with $N$, and calculate its squared-length (I’m not specifying real or complex, since the notation is the same either way). We get

$\lVert N(v)\rVert^2 = \langle N(v), N(v)\rangle = \langle v, N^*N(v)\rangle$

On the other hand, we could do the same thing but using $N^*$ instead of $N$:

$\lVert N^*(v)\rVert^2 = \langle N^*(v), N^*(v)\rangle = \langle v, NN^*(v)\rangle$

But if $N$ is normal, then $N^*N$ and $NN^*$ are the same, and thus for all vectors $v$

$\lVert N(v)\rVert = \lVert N^*(v)\rVert$

Conversely, if $\lVert N(v)\rVert = \lVert N^*(v)\rVert$ for all vectors $v$, then we can use the polarization identities to conclude that $N^*N = NN^*$.

So normal transformations are exactly those for which the length of a vector’s image is the same whether we use the transformation or its adjoint. For self-adjoint and anti-self-adjoint transformations this is pretty obvious, since they’re (almost) the same thing anyway. For orthogonal and unitary transformations, which don’t change the lengths of vectors at all, this makes sense too.
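The length characterization is easy to test numerically; here with a normal matrix that acts like multiplication by a complex number (the particular numbers are arbitrary) and a handful of random vectors:

```python
import numpy as np

N = np.array([[1., -2.],
              [2.,  1.]])               # acts on the plane like the number 1 + 2i
assert np.allclose(N @ N.T, N.T @ N)    # so it's normal

rng = np.random.default_rng(2)
for _ in range(5):
    v = rng.standard_normal(2)
    # |N v| = |N* v| for every vector, as the characterization promises
    assert np.isclose(np.linalg.norm(N @ v), np.linalg.norm(N.T @ v))
```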
Just to be clear, though, there are matrices that are normal, but which aren’t any of the special kinds we’ve talked about so far. For example, the transformation represented by the matrix
has its adjoint represented by
which is neither the original transformation nor its negative, so it’s neither self-adjoint nor anti-self-adjoint. We can calculate their product in either order to get
since we get the same answer, the transformation is normal, but it’s clearly not unitary because if it were we’d get the identity matrix here.
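One concrete matrix of this kind (a stand-in example of my own, not necessarily the one originally used) is a cyclic shift plus the identity; the check goes exactly as described:

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])            # identity plus a cyclic shift
At = A.T                                # the adjoint, in the real case

assert not np.allclose(At, A)           # not self-adjoint
assert not np.allclose(At, -A)          # not anti-self-adjoint

# the two products agree, so A is normal...
assert np.allclose(A @ At, At @ A)
# ...but the product isn't the identity, so A isn't orthogonal (unitary) either
assert not np.allclose(A @ At, np.eye(3))
```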