The Unapologetic Mathematician

Mathematics for the interested outsider

Decompositions Past and Future

Okay, let’s review some of the manipulations we’ve established.

When we’re looking at an endomorphism T:V\rightarrow V, we found that (at least when the base field is algebraically closed, so that all the eigenvalues we need are available) we can pick a basis so that the matrix of T is in Jordan normal form, which is almost diagonal. That is, we can find an invertible transformation S and a “Jordan transformation” J so that T=SJS^{-1}.
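
As a concrete illustration, here is a minimal sketch in Python using SymPy, whose jordan_form method returns exactly such a pair; the sample matrix is an arbitrary choice, not anything from the posts above:

```python
from sympy import Matrix

# An arbitrary sample matrix with a repeated eigenvalue, chosen so that
# its Jordan form contains a nontrivial 2x2 block.
T = Matrix([[ 5,  4,  2,  1],
            [ 0,  1, -1, -1],
            [-1, -1,  3,  0],
            [ 1,  1, -1,  2]])

S, J = T.jordan_form()       # J is block-diagonal with Jordan blocks
assert T == S * J * S.inv()  # T = S J S^{-1}, exact rational arithmetic
print(J)                     # eigenvalues on the diagonal, 1's just above
```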

If we work with a normal transformation N on a complex inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find a unitary transformation U and a diagonal transformation \Sigma so that N=U\Sigma U^{-1}.
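
Numerically, one way to see this (a sketch; the particular normal matrix below is just an illustration) is through the complex Schur decomposition, which is upper triangular in general but diagonal exactly when the input is normal:

```python
import numpy as np
from scipy.linalg import schur

# A normal but non-Hermitian matrix: N @ N^* == N^* @ N.
N = np.array([[1, 1j],
              [1j, 1]])

Sigma, U = schur(N, output='complex')  # N = U @ Sigma @ U^{-1}, U unitary
assert np.allclose(N, U @ Sigma @ U.conj().T)
# Normality is what forces the triangular factor to be diagonal:
assert np.allclose(Sigma, np.diag(np.diag(Sigma)))
```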

Similarly, if we work with a self-adjoint transformation S on a real inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find an orthogonal transformation O and a diagonal transformation \Sigma so that S=O\Sigma O^{-1}.
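
In code this is just numpy’s eigh, which handles exactly the real symmetric (and complex Hermitian) case; a minimal sketch, with an arbitrary symmetric matrix as the example:

```python
import numpy as np

# Any real symmetric matrix will do; this one is an arbitrary example.
S = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

w, O = np.linalg.eigh(S)                 # columns of O: orthonormal eigenbasis
Sigma = np.diag(w)
assert np.allclose(O @ O.T, np.eye(3))   # O is orthogonal, so O^{-1} = O^T
assert np.allclose(S, O @ Sigma @ O.T)   # S = O Σ O^{-1}
```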

Then we generalized: if M:A\rightarrow B is any linear transformation between two inner product spaces, we can find orthonormal bases giving the singular value decomposition. That is, there are two unitary (or orthogonal) transformations U:B\rightarrow B and V:A\rightarrow A and a “diagonal” transformation \Sigma:A\rightarrow B so that we can write M=U\Sigma V^{-1}.
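
Again numpy makes a sketch easy; note that its svd returns the third factor already inverted (Vh plays the role of V^{-1} = V^*), and the non-square matrix below is an arbitrary example:

```python
import numpy as np

# M maps a 3-dimensional space A into a 2-dimensional space B.
M = np.array([[1., 0., 2.],
              [0., 3., 0.]])

U, s, Vh = np.linalg.svd(M)           # M = U @ Sigma @ Vh
Sigma = np.zeros_like(M)              # "diagonal", padded out to 2x3
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(M, U @ Sigma @ Vh)
```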

We used this to show, in particular, that if T is an endomorphism on an inner product space we can write T=UP, where U is unitary and P is positive-semidefinite. That is, if we can choose the “output basis” separately from the “input basis”, we can put T into a nice form.
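
This is the polar decomposition, and scipy exposes it directly; a quick sketch (the matrix is an arbitrary real example, so “unitary” here means orthogonal):

```python
import numpy as np
from scipy.linalg import polar

# An arbitrary endomorphism of R^3.
T = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

U, P = polar(T)                                 # T = U @ P
assert np.allclose(U @ U.T, np.eye(3))          # U is orthogonal
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # P is positive-semidefinite
assert np.allclose(T, U @ P)
```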

Now we want to continue in this direction of choosing input and output bases separately. Obviously we have to do this when the source and target spaces are different, but even for endomorphisms we’ll now choose the two bases independently. And we’ll move back away from inner product spaces to arbitrary vector spaces over arbitrary fields. Just as with the singular value decomposition, what we’ll end up with is essentially captured in the first isomorphism theorem, but we’ll be able to be a lot more explicit about how to find the right bases to simplify the matrix of our transformation.

So here’s the central question for the last chunk of linear algebra (for now): given a linear transformation T:A\rightarrow B between any two vector spaces, how can we pick invertible transformations X\in\mathrm{GL}(A) and Y\in\mathrm{GL}(B) so that T=Y\Sigma X with \Sigma in “as simple a form as possible” (and just what does that mean?). To be even more concrete, we’ll start with arbitrary bases of A and B, so that T already has a matrix, and we’ll look at how to modify that matrix by modifying the two bases.
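
Before diving in, here is one concrete candidate for “as simple as possible”, sketched in Python with SymPy: an identity block of size \mathrm{rank}(T) padded with zeros. The helper below is an invented illustration, not necessarily the construction the coming posts will use; it records the row and column operations as invertible matrices by the standard trick of row-reducing against an augmented identity.

```python
import sympy as sp

def rank_normal_form(T):
    """Return invertible Y, X and Sigma with T = Y*Sigma*X, where Sigma is
    an identity block of size rank(T) padded with zeros. (Illustrative
    helper; the name is invented for this sketch.)"""
    m, n = T.shape
    # Row-reduce [T | I_m]: the right block records the row operations,
    # i.e. an invertible E with E*T equal to the reduced echelon form R.
    aug = T.row_join(sp.eye(m)).rref()[0]
    R, E = aug[:, :n], aug[:, n:]
    # Column-reduce R the same way, by row-reducing its transpose; the
    # transposed right block is an invertible F with E*T*F = Sigma.
    aug = R.T.row_join(sp.eye(n)).rref()[0]
    Sigma, F = aug[:, :m].T, aug[:, m:].T
    return E.inv(), Sigma, F.inv()

T = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])      # rank 2, just an example
Y, Sigma, X = rank_normal_form(T)
assert T == Y * Sigma * X
print(Sigma)   # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
```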

August 24, 2009 - Posted by | Algebra, Linear Algebra

5 Comments

  1. “as simple a form as possible” (and what does this mean?)

    “The essence of mathematics is not to make simple
    things complicated, but to make complicated things
    simple.”
    — S. Gudder

    David Hilbert’s long-lost “24th problem” [Thiele] was intended to clarify the notion that for every theorem there is a “simplest” proof. His grand program was demolished by Gödel, but his problems, upon solution, bestow instant success and even immortality on the solver. His notion of “simplest” raises key questions for the 21st century (when computerized automated theorem proving has solved some famous problems but created debate as to what constitutes proof), questions open to analysis in the domains of Complexity and the Philosophy of Science. In particular, given multiple definitions of “simplest,” involving differing definitions of, usage of, and justifications for elegance and (qualitative and quantitative) parsimony, by what meta-criterion do we choose the simplest of those? And how are our hands tied by neurological and psychological limitations on our ability to introspect on how we choose (i.e. “choice blindness”), and on how automated theorem provers operate? Are we, in Zeilberger’s phrase, “slaves of Occam’s razor?”
    [Complexity in the Paradox of Simplicity, by Jonathan Post and Philip Fellman]
    http://necsi.org/events/iccs6/viewabstract.php?id=248

    Hilbert suggested to Heisenberg that he find the differential equation that would correspond to his matrix equations. Had he taken Hilbert’s advice [JVP: allohistory, or alternative-reality counterfactual], Heisenberg might have discovered the Schrödinger equation before Schrödinger. When mathematicians proved Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics equivalent, Hilbert exclaimed, “Physics is obviously far too difficult to be left to the physicists, and mathematicians still think they are God’s gift to science.”

    Comment by Jonathan Vos Post | August 25, 2009 | Reply

  2. Was my long comment on “as simple a form as possible” (and what does this mean?) rejected, stuck in the moderation queue, or did it die before reaching you?

    Comment by Jonathan Vos Post | August 25, 2009 | Reply

  3. It seems to have been marked as spam, actually…

    Comment by John Armstrong | August 25, 2009 | Reply

  4. And I’ll respond by noting that a more nuanced view would be to say that mathematics strives to make things “as simple as possible and no simpler”.

    Comment by John Armstrong | August 25, 2009 | Reply

  5. There’s the injunction attributed to Einstein: “Keep things as simple as possible, and no simpler.” But none of the first 50 citations I googled on “as simple as possible and no simpler” gave a specific Einstein book, article, speech, or interview. Now I’m trying to remember what Feynman told me that Einstein told him about that. In any case, your nuance is right. But where does the partial ordering on simplicity come from? Is that why Hilbert backed off and left this question off the delivered list of Top 23?

    Comment by Jonathan Vos Post | August 25, 2009 | Reply
