## Decompositions Past and Future

Okay, let’s review some of the manipulations we’ve established.

When we’re looking at an endomorphism $T:V\to V$, we have found that we can pick a basis so that the matrix of $T$ is in Jordan normal form, which is almost diagonal. That is, we can find an invertible transformation $S$ and a “Jordan transformation” $J$ so that $T=SJS^{-1}$.
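As a small illustration (assuming SymPy is available; the matrix here is a made-up example), we can compute the Jordan normal form of a matrix that fails to be diagonalizable:

```python
# Sketch, assuming SymPy: the Jordan decomposition T = S J S^{-1} of a
# matrix whose eigenvalue 2 has algebraic multiplicity 2 but geometric
# multiplicity 1, so it cannot be diagonalized.
import sympy as sp

T = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

S, J = T.jordan_form()   # returns (S, J) with T = S * J * S**-1

assert T == S * J * S.inv()
```

The off-diagonal $1$ in the Jordan block for the eigenvalue $2$ is exactly the “almost” in “almost diagonal”.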

If we work with a normal transformation $N$ on a complex inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find a unitary transformation $U$ and a diagonal transformation $\Lambda$ so that $N=U\Lambda U^{-1}$.

Similarly, if we work with a self-adjoint transformation $S$ on a real inner product space, we can pick an orthonormal basis of eigenvectors. That is, we can find an orthogonal transformation $O$ and a diagonal transformation $\Lambda$ so that $S=O\Lambda O^{-1}$.
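A minimal sketch of the real case (assuming NumPy; the symmetric matrix is a made-up example) checks the factorization $S=O\Lambda O^{-1}$ numerically, where for an orthogonal $O$ the inverse is just the transpose:

```python
# Sketch, assuming NumPy: the real spectral theorem. A symmetric matrix A
# factors as A = O @ diag(lam) @ O.T with O orthogonal.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, hence self-adjoint on R^2

lam, O = np.linalg.eigh(A)          # eigenvalues ascending; columns of O are
                                    # an orthonormal basis of eigenvectors

assert np.allclose(O.T @ O, np.eye(2))            # O is orthogonal
assert np.allclose(O @ np.diag(lam) @ O.T, A)     # A is recovered
```

Here the eigenvalues come out as $1$ and $3$, with eigenvectors along the diagonals of the plane.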

Then we generalized: if $M$ is any linear transformation between two inner product spaces, we can find orthonormal bases giving the singular value decomposition. There are two unitary (or orthogonal) transformations $U$ and $V$ and a “diagonal” transformation $\Sigma$ so we can write $M=U\Sigma V^*$.
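Since the source and target here can have different dimensions, the middle factor is only “diagonal” in a rectangular sense. A hedged sketch (assuming NumPy; the matrix is a made-up example of a map from a 3-dimensional space to a 2-dimensional one):

```python
# Sketch, assuming NumPy: the singular value decomposition M = U @ Sigma @ V^T
# of a non-square matrix, with U and V^T orthogonal and Sigma rectangular
# "diagonal".
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

U, s, Vt = np.linalg.svd(M)         # U is 2x2 orthogonal, Vt is 3x3 orthogonal
Sigma = np.zeros_like(M)
Sigma[:2, :2] = np.diag(s)          # singular values on the "diagonal"

assert np.allclose(U @ Sigma @ Vt, M)
```

The singular values here are $\sqrt{2}$ and $1$: the square roots of the eigenvalues of $MM^*$.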

We used this to show in particular that if $M$ is an endomorphism on an inner product space we can write $M=UP$, where $U$ is unitary and $P$ is positive-semidefinite. That is, if we can choose the “output basis” separately from the “input basis”, we can put $M$ into a nice form.
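The polar decomposition falls straight out of the SVD: if $M=W\Sigma V^*$, then $U=WV^*$ is unitary and $P=V\Sigma V^*$ is positive-semidefinite, and $M=UP$. A sketch (assuming NumPy; the matrix is a made-up example):

```python
# Sketch, assuming NumPy: the polar decomposition M = U @ P built from the
# SVD M = W @ diag(s) @ Vt.
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

W, s, Vt = np.linalg.svd(M)
U = W @ Vt                           # unitary (here: orthogonal) factor
P = Vt.T @ np.diag(s) @ Vt           # positive-semidefinite factor

assert np.allclose(U @ P, M)
assert np.allclose(U.T @ U, np.eye(2))           # U is orthogonal
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)   # P is positive-semidefinite
```

This is the matrix analogue of writing a complex number as $re^{i\theta}$: a nonnegative “stretch” followed by a “rotation”.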

Now we want to continue in this direction of choosing input and output bases separately. It’s obvious we have to do this when the source and target spaces are different, but even for endomorphisms we’ll move them separately. But now we’ll move back away from inner product spaces to *any* vector spaces over *any* fields. Just like for the singular value decomposition, what we’ll end up with is essentially captured in the first isomorphism theorem, but we’ll be able to be a lot more explicit about how to find the right bases to simplify the matrix of our transformation.

So here’s the central question for the last chunk of linear algebra (for now): given a linear transformation $M:V\to W$ between any two vector spaces, how can we pick invertible transformations $X$ and $Y$ so that $Y^{-1}MX$ is in “as simple a form as possible” (and what does this mean?). To be even more particular, we’ll start with arbitrary bases of $V$ and $W$, so that $M$ already has a matrix, and we’ll look at how to modify the matrix by modifying the two bases.
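To preview where this is heading: with both bases free to vary, every rank-$r$ matrix can be brought to the block form $\begin{pmatrix}I_r&0\\0&0\end{pmatrix}$. A hedged numerical sketch (assuming NumPy, and building $X$ and $Y$ from the SVD for convenience, though over a general field Gaussian elimination would do the same job):

```python
# Sketch, assuming NumPy: choosing invertible X and Y so that
# Y^{-1} @ M @ X = [[I_r, 0], [0, 0]], where r = rank(M).
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1: the second row is twice the first

U, s, Vt = np.linalg.svd(M)
r = int(np.sum(s > 1e-12))          # numerical rank

D = np.eye(Vt.shape[0])
D[:r, :r] = np.diag(1.0 / s[:r])    # rescale the nonzero singular directions

Y = U                               # change of basis on the output space
X = Vt.T @ D                        # change of basis on the input space
canonical = np.linalg.inv(Y) @ M @ X

target = np.zeros_like(M)
target[:r, :r] = np.eye(r)
assert np.allclose(canonical, target)
```

In other words, once input and output bases move independently, the only invariant left is the rank, which is the content of the first isomorphism theorem mentioned above.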

“as simple a form as possible” (and what does this mean?)

“The essence of mathematics is not to make simple things complicated, but to make complicated things simple.”

— S. Gudder

David Hilbert’s long-lost “24th problem” [Thiele] was intended to clarify the notion that for every theorem there is a “simplest” proof. His grand program was demolished by Gödel, but his problems, upon solution, bestow instant success and even immortality on the solver. His notion of “simplest” raises key questions for the 21st century (when computerized automated theorem proving has solved some famous problems but created debate as to what constitutes proof), questions open to analysis in the domains of Complexity and the Philosophy of Science. In particular, given multiple definitions of “simplest,” involving differing definitions of, usage of, and justifications for elegance and (qualitative and quantitative) parsimony, by what meta-criterion do we choose the simplest of those? And how are our hands tied by neurological and psychological limitations on our ability to introspect on how we choose (i.e. “choice blindness”), and on how automated theorem provers operate? Are we, in Zeilberger’s phrase, “slaves of Occam’s razor”?

[Complexity in the Paradox of Simplicity, by Jonathan Post and Philip Fellman]

http://necsi.org/events/iccs6/viewabstract.php?id=248

Hilbert suggested to Heisenberg that he find the differential equation that would correspond to his matrix equations. Had he taken Hilbert’s advice [JVP: allohistory, or alternative-reality counterfactual], Heisenberg might have discovered the Schrödinger equation before Schrödinger. When mathematicians proved Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics equivalent, Hilbert exclaimed, “Physics is obviously far too difficult to be left to the physicists, and mathematicians still think they are God’s gift to science.”

Comment by Jonathan Vos Post | August 25, 2009 |


And I’ll respond by noting that a more nuanced view would be to say that mathematics strives to make things “as simple as possible and no simpler”.

Comment by John Armstrong | August 25, 2009 |

There’s the injunction attributed to Einstein: “Keep things as simple as possible, and no simpler.” But none of the first 50 citations I googled on “as simple as possible and no simpler” gave a specific Einstein book, article, speech, or interview. Now I’m trying to remember what Feynman told me that Einstein told him about that. In any case, your nuance is right. But where does the partial ordering on simplicity come from? Is that why Hilbert backed off and left this question off the delivered list of Top 23?

Comment by Jonathan Vos Post | August 25, 2009 |