The Unapologetic Mathematician

Mathematics for the interested outsider

Decompositions Past and Future

Okay, let’s review some of the manipulations we’ve established.

When we’re looking at an endomorphism T:V\rightarrow V on a vector space over an algebraically closed field (like \mathbb{C}), we found that we can pick a basis so that the matrix of T is in Jordan normal form, which is almost diagonal. That is, we can find an invertible transformation S and a “Jordan transformation” J so that T=SJS^{-1}.
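
As a quick computational check (my own toy example, not from the original series), sympy’s jordan_form returns exactly this pair (S, J):

```python
# A minimal sketch: a 2x2 matrix with a single eigenvalue (2) of multiplicity
# two but only a one-dimensional eigenspace, so it is not diagonalizable.
import sympy as sp

T = sp.Matrix([[3, 1],
               [-1, 1]])
S, J = T.jordan_form()    # returns (S, J) with T = S*J*S^{-1}
print(J)                  # Matrix([[2, 1], [0, 2]]) -- one 2x2 Jordan block
assert sp.simplify(S * J * S.inv()) == T
```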

If we work with a normal transformation N on a complex inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find a unitary transformation U and a diagonal transformation \Sigma so that N=U\Sigma U^{-1}.
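
For a sanity check (again my example, not the post’s), the cyclic shift matrix is normal, and since its three eigenvalues are distinct, numpy’s eig hands back an orthonormal eigenbasis directly; with repeated eigenvalues one would still have to orthonormalize within each eigenspace:

```python
import numpy as np

N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])          # the cyclic shift; N N^* = N^* N
evals, U = np.linalg.eig(N)           # eigenvalues: the three cube roots of unity
Sigma = np.diag(evals)

print(np.allclose(U.conj().T @ U, np.eye(3)))         # U is unitary
print(np.allclose(U @ Sigma @ np.linalg.inv(U), N))   # N = U Σ U^{-1}
```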

Similarly, if we work with a self-adjoint transformation S on a real inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find an orthogonal transformation O and a diagonal transformation \Sigma so that S=O\Sigma O^{-1}.
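
The real, symmetric version (my own sample matrix) goes through numpy’s eigh, which always returns an orthonormal eigenbasis:

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])          # S = S^T: self-adjoint on R^3
evals, O = np.linalg.eigh(S)
Sigma = np.diag(evals)

print(np.allclose(O.T @ O, np.eye(3)))   # O is orthogonal, so O^{-1} = O^T
print(np.allclose(O @ Sigma @ O.T, S))   # S = O Σ O^{-1}
```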

Then we generalized: if M:A\rightarrow B is any linear transformation between two inner product spaces, we can find orthonormal bases giving the singular value decomposition. There are two unitary (or orthogonal, in the real case) transformations U:B\rightarrow B and V:A\rightarrow A and a “diagonal” transformation \Sigma:A\rightarrow B so that we can write M=U\Sigma V^{-1}.
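
Numerically (my example), numpy’s svd returns U, the singular values, and V^H; since V is unitary, V^{-1}=V^H, so the reconstruction reads exactly as above:

```python
import numpy as np

M = np.array([[1., 0., 1.],
              [0., 1., 1.]])          # a map from a 3-dim space A to a 2-dim space B
U, s, Vh = np.linalg.svd(M)
Sigma = np.zeros(M.shape)             # the "diagonal" 2x3 transformation
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(U @ Sigma @ Vh, M)) # M = U Σ V^{-1}, with V^{-1} = V^H
```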

We used this to show in particular that if T is an endomorphism on an inner product space, we can write T=UP, where U is unitary and P is positive-semidefinite. That is, if we can choose the “output basis” separately from the “input basis”, we can put T into a nice form.
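
This polar decomposition can be read right off the singular value decomposition: T=W\Sigma V^H=(WV^H)(V\Sigma V^H), where the first factor is unitary and the second is positive-semidefinite. A sketch of my own, with an arbitrary sample matrix:

```python
import numpy as np

T = np.array([[1., 2.],
              [0., 3.]])
W, s, Vh = np.linalg.svd(T)
U = W @ Vh                            # unitary factor
P = Vh.conj().T @ np.diag(s) @ Vh     # positive-semidefinite factor

print(np.allclose(U @ P, T))                      # T = U P
print(np.allclose(U.conj().T @ U, np.eye(2)))     # U is unitary
print(np.linalg.eigvalsh(P).min() >= 0)           # P's eigenvalues are nonnegative
```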

Now we want to continue in this direction of choosing input and output bases separately. It’s obvious that we have to do this when the source and target spaces are different, but even for endomorphisms we’ll let the two bases vary independently. And now we’ll move back from inner product spaces to arbitrary vector spaces over arbitrary fields. Just as for the singular value decomposition, what we’ll end up with is essentially captured in the first isomorphism theorem, but we’ll be able to be a lot more explicit about how to find the right bases to simplify the matrix of our transformation.

So here’s the central question for the last chunk of linear algebra (for now): given a linear transformation T:A\rightarrow B between any two vector spaces, how can we pick invertible transformations X\in\mathrm{GL}(A) and Y\in\mathrm{GL}(B) so that T=Y\Sigma X and \Sigma is in “as simple a form as possible” (and just what does that mean?). To be even more particular, we’ll start with arbitrary bases of A and B — so T already has a matrix — and we’ll look at how to modify that matrix by modifying the two bases.
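
To preview where this is headed (a sketch of my own, not the derivation to come): over a field where we can do exact arithmetic, take the pivot columns as a complement of the kernel, append a kernel basis, and match these with their images in the target. The resulting \Sigma has an identity block of size \mathrm{rank}(T) and zeroes elsewhere; in the notation above, Y=Q and X=P^{-1}:

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])            # any example; this one has rank 2

m, n = M.shape
rref, pivots = M.rref()               # pivot columns span a complement of ker M

# Basis of the source: complement-of-kernel vectors first, then a kernel basis.
P = sp.Matrix.hstack(*[sp.eye(n).col(j) for j in pivots], *M.nullspace())

# Basis of the target: images of the pivot vectors, extended to a full basis
# by greedily appending standard basis vectors that remain independent.
cols = [M.col(j) for j in pivots]
for i in range(m):
    if len(cols) == m:
        break
    if sp.Matrix.hstack(*cols, sp.eye(m).col(i)).rank() > len(cols):
        cols.append(sp.eye(m).col(i))
Q = sp.Matrix.hstack(*cols)

Sigma = Q.inv() * M * P               # [[1,0,0],[0,1,0],[0,0,0]]: I_r and zeroes
print(Sigma)
```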

August 24, 2009 | Algebra, Linear Algebra