## The Singular Value Decomposition

Now the real and complex spectral theorems give nice decompositions of self-adjoint and normal transformations, respectively. Each one is of a similar form

$$M=O\Lambda O^*\qquad\text{or}\qquad M=U\Lambda U^*$$

where $O$ is orthogonal, $U$ is unitary, and (in either case) $\Lambda$ is diagonal. What we want is a similar decomposition for *any* transformation. And, in fact, we’ll get one that even works for transformations between different inner product spaces.

So let’s say we’ve got a transformation $M:A\rightarrow B$ (we’re going to want to save $T$ and $S$ to denote transformations). We also have its adjoint $M^*:B\rightarrow A$. Then $M^*M:A\rightarrow A$ is positive-semidefinite (and thus self-adjoint and normal), and so the spectral theorem applies. There must be a unitary transformation $V:A\rightarrow A$ (orthogonal, if we’re working with real vector spaces) so that

$$M^*M=V\begin{pmatrix}D&0\\0&0\end{pmatrix}V^*$$

where $D$ is a diagonal matrix with strictly positive entries.
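
As a concrete numerical sketch of this step (the NumPy code and the particular rank-deficient example are mine, not part of the original argument), we can form $M^*M$ and apply the spectral theorem via `numpy.linalg.eigh`:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient map M : A -> B (here A is 3-dimensional, B is
# 4-dimensional, and M has rank 2), so M*M genuinely has a zero eigenvalue.
M = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

# M*M is self-adjoint and positive-semidefinite, so the (real) spectral
# theorem applies: M*M = V Lam V* with Lam diagonal and nonnegative.
Lam, V = np.linalg.eigh(M.T @ M)

assert np.all(Lam > -1e-10)                          # eigenvalues nonnegative
assert np.allclose(V.T @ V, np.eye(3))               # V is orthogonal
assert np.allclose(V @ np.diag(Lam) @ V.T, M.T @ M)  # the decomposition holds
```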

That is, we can break up $A$ as the direct sum $A=A_1\oplus A_2$. The diagonal transformation $D$ is positive-*definite*, while the restriction of $M^*M$ to $A_2$ is the zero transformation. We will restrict $V$ to each of these subspaces, giving $V_1:A_1\rightarrow A$ and $V_2:A_2\rightarrow A$, along with their adjoints $V_1^*:A\rightarrow A_1$ and $V_2^*:A\rightarrow A_2$. Then we can write

$$\begin{pmatrix}D&0\\0&0\end{pmatrix}=V^*M^*MV=\begin{pmatrix}V_1^*M^*MV_1&V_1^*M^*MV_2\\V_2^*M^*MV_1&V_2^*M^*MV_2\end{pmatrix}$$
From this we conclude both that $V_1^*M^*MV_1=D$ and that $MV_2=0$ (since $V_2^*M^*MV_2=(MV_2)^*(MV_2)=0$). We define $T=MV_1D^{-1/2}$, where we get the last matrix by just taking the inverse of the square root of each of the diagonal entries of $D$ (this is part of why diagonal transformations are so nice to work with). Then we can calculate

$$T^*T=D^{-1/2}V_1^*M^*MV_1D^{-1/2}=D^{-1/2}DD^{-1/2}=1_{A_1}$$
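
Continuing the same numerical sketch (NumPy again, with the same illustrative rank-2 example; the threshold used to separate the positive block is an illustrative choice), we can restrict to the positive eigenvalues, build $T=MV_1D^{-1/2}$, and confirm that $T^*T$ is the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # rank-2 example

Lam, V = np.linalg.eigh(M.T @ M)   # spectral decomposition of M*M
pos = Lam > 1e-10                  # picks out the strictly positive block
D = Lam[pos]                       # diagonal entries of the positive block
V1 = V[:, pos]                     # restriction of V to A1
V2 = V[:, ~pos]                    # restriction of V to A2

# M V2 = 0: the block where M*M vanishes really is killed by M itself.
assert np.allclose(M @ V2, 0, atol=1e-6)

# T = M V1 D^{-1/2}: divide each column by the square root of its eigenvalue.
T = (M @ V1) / np.sqrt(D)

# T*T = D^{-1/2} V1* M* M V1 D^{-1/2} = D^{-1/2} D D^{-1/2} = identity on A1.
assert np.allclose(T.T @ T, np.eye(D.size))
```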
This is good, but we don’t yet have unitary matrices in our decomposition. We do know that $V_1^*V_1=1_{A_1}$, and we can check that

$$V_2^*V_2=1_{A_2}$$
Now we know that we can use $V_2$ to “fill out” $V_1$ to give the unitary transformation $V:A_1\oplus A_2\rightarrow A$. That is, $V_1^*V_1=1_{A_1}$ (as we just noted), $V_2^*V_2=1_{A_2}$ (similarly), $V_1^*V_2$ and $V_2^*V_1$ are both the appropriate zero transformation, and $V_1V_1^*+V_2V_2^*=1_A$. Notice that these are exactly stating that the adjoints $V_1^*$ and $V_2^*$ are the projection operators corresponding to the inclusions $V_1$ and $V_2$ in a direct sum representation of $A$ as $A_1\oplus A_2$. It’s clear from general principles that there must be *some* projections, but it’s the unitarity of $V$ that makes the projections be exactly the adjoints of the inclusions.

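These direct-sum identities are easy to check numerically. A sketch, using the same illustrative NumPy setup as above:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

Lam, V = np.linalg.eigh(M.T @ M)
pos = Lam > 1e-10
V1, V2 = V[:, pos], V[:, ~pos]     # inclusions of A1 and A2 into A

assert np.allclose(V1.T @ V1, np.eye(V1.shape[1]))    # V1* V1 = 1 on A1
assert np.allclose(V2.T @ V2, np.eye(V2.shape[1]))    # V2* V2 = 1 on A2
assert np.allclose(V1.T @ V2, 0)                      # cross terms vanish
assert np.allclose(V1 @ V1.T + V2 @ V2.T, np.eye(3))  # projections sum to 1_A
```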
What we need to do now is to fill out $T:A_1\rightarrow B$ by supplying a corresponding $S$ that will similarly “fill out” a unitary transformation $U:A_1\oplus B_2\rightarrow B$. But we know that we can do this! Pick an orthonormal basis of $A_1$ and hit it with $T$ to get a bunch of orthonormal vectors in $B$ (orthonormal because $T^*T=1_{A_1}$). Then fill these out to an orthonormal basis of all of $B$. Just set $B_2$ to be the span of all the new basis vectors, which is the orthogonal complement of the image of $T$, and let $S:B_2\rightarrow B$ be the inclusion of $B_2$ into $B$. We can then combine to get a unitary transformation

$$U=\begin{pmatrix}T&S\end{pmatrix}:A_1\oplus B_2\rightarrow B$$
Finally, we define

$$\Sigma=\begin{pmatrix}D^{1/2}&0\\0&0\end{pmatrix}:A_1\oplus A_2\rightarrow A_1\oplus B_2$$
where there are as many zero rows in $\Sigma$ as we needed to add to fill out the basis of $B$ (the dimension of $B_2$). I say that $M=U\Sigma V^*$ is our desired decomposition. Indeed, since $MV_2=0$ we have $MV_1V_1^*=M(1_A-V_2V_2^*)=M$, and so we can calculate

$$U\Sigma V^*=\begin{pmatrix}T&S\end{pmatrix}\begin{pmatrix}D^{1/2}&0\\0&0\end{pmatrix}\begin{pmatrix}V_1^*\\V_2^*\end{pmatrix}=TD^{1/2}V_1^*=MV_1D^{-1/2}D^{1/2}V_1^*=MV_1V_1^*=M$$
where $U$ and $V$ are unitary on $B$ and $A$, respectively, and $\Sigma$ is a “diagonal” transformation (not, strictly speaking, a diagonal matrix in the case where $A$ and $B$ have different dimensions, since it isn’t even square).
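
Here is the whole construction as one runnable sketch (NumPy throughout; completing $T$ to a unitary via QR on randomly extended columns is my choice of how to “fill out” the basis, not forced by the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # rank-2, A -> B

Lam, V = np.linalg.eigh(M.T @ M)
pos = Lam > 1e-10
D, V1, V2 = Lam[pos], V[:, pos], V[:, ~pos]
r = D.size

T = (M @ V1) / np.sqrt(D)          # the isometry T = M V1 D^{-1/2} : A1 -> B

# Fill T out to a unitary U on B: QR of [T | random columns] keeps span(T)
# in the first r columns, so the remaining columns span B2, the orthogonal
# complement of the image of T.
Q, _ = np.linalg.qr(np.hstack([T, rng.standard_normal((4, 4 - r))]))
S = Q[:, r:]                       # inclusion of B2 into B
U = np.hstack([T, S])

# Sigma: sqrt(D) in the top-left r-by-r block, zeros elsewhere.
Sigma = np.zeros((4, 3))
Sigma[:r, :r] = np.diag(np.sqrt(D))

Vfull = np.hstack([V1, V2])        # unitary on A, positive block first

assert np.allclose(U.T @ U, np.eye(4))                    # U is unitary
assert np.allclose(Vfull.T @ Vfull, np.eye(3))            # V is unitary
assert np.allclose(U @ Sigma @ Vfull.T, M, atol=1e-6)     # M = U Sigma V*
```

Of course, `numpy.linalg.svd` produces a decomposition of this shape directly; the point of the sketch is only to retrace the construction above step by step.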
