Again, let’s take a linear endomorphism $T:V\rightarrow V$ on a vector space $V$ of finite dimension $n$. We know that its characteristic polynomial can be defined without reference to a basis of $V$, and so each of the coefficients of the polynomial is independent of any choice of basis. The leading coefficient is always $1$, so that’s not very interesting. The constant term is, up to sign, the determinant, which we’d known from other considerations before. There’s one more coefficient we’re interested in, partly for the interesting properties we’ll explore, and partly for its ease of computation. This is the coefficient of $\lambda^{n-1}$.
So, let’s go back to our formula for the characteristic polynomial:

$$\det(\lambda 1_V-T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^n\left(\lambda\delta_i^{\pi(i)}-t_i^{\pi(i)}\right)$$
Which terms can involve $\lambda^{n-1}$? Well, we can get one factor of $\lambda$ every time we have $\pi(i)=i$, and so we need this to happen at least $n-1$ times to get $n-1$ factors of $\lambda$. But if the permutation $\pi$ sends all but one index back to itself, then the last index must also be fixed, since there’s nowhere else for it to go! So we only have to look at the term corresponding to the identity permutation. This will simplify our lives immensely.
Now we’re considering the product

$$\left(\lambda-t_1^1\right)\left(\lambda-t_2^2\right)\cdots\left(\lambda-t_n^n\right)$$
When we multiply this out, we make $n$ choices. At each step we can either take the $\lambda$, or we can take the $-t_i^i$. We’re interested in the terms where we take the $\lambda$ exactly $n-1$ times. There are $n$ ways of making this choice, corresponding to which one of the $n$ indices we don’t take the $\lambda$ from. Incidentally, we could also think of this in terms of combinations, as $\binom{n}{n-1}=n$.
Anyhow, for each choice of one index $k$ to use the matrix entry instead of the variable, we’ll have a term $-t_k^k\lambda^{n-1}$. We add all of these up, summing over $k$ — as our notation suggests we should! And now we have the second coefficient of the characteristic polynomial. We drop the negative sign and call this the “trace” of $T$:

$$\mathrm{tr}(T)=\sum\limits_{k=1}^nt_k^k=t_k^k$$
where in the last formula we’re using the summation convention again. Incidentally, “trace” should be read as referring to a telltale sign that $T$ has left behind, like a hunted animal’s… um… “leavings”.
Anyhow, we can now write out a few of the terms in the characteristic polynomial:

$$\det(\lambda 1_V-T)=\lambda^n-\mathrm{tr}(T)\lambda^{n-1}+\dots+(-1)^n\det(T)$$
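As a quick numerical sanity check, here’s a sketch in Python with a made-up triangular matrix (so the eigenvalues can be read off the diagonal). Note that numpy’s `poly` routine computes the coefficients of $\det(\lambda I-A)$ from the eigenvalues rather than from our permutation formula, but the coefficients agree:

```python
import numpy as np

# A made-up upper triangular matrix standing in for T in some basis;
# its eigenvalues (2, 3, 5) sit on the diagonal.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 5.0]])

# np.poly returns the coefficients of det(lambda*I - A),
# from the highest power of lambda down to the constant term
coeffs = np.poly(A)

print(coeffs)        # approximately [1., -10., 31., -30.]
print(np.trace(A))   # 10.0, the negative of the second coefficient
```

The second coefficient is $-\mathrm{tr}(A)=-10$, just as the argument above predicts.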
First let’s look at the formula for the characteristic polynomial in terms of the matrix entries of $T$:

$$\det(\lambda 1_V-T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^n\left(\lambda\delta_i^{\pi(i)}-t_i^{\pi(i)}\right)$$
Now we’re interested in the determinant of $T$, which is exactly what we calculate to determine if the kernel of $T$ is nontrivial. But the kernel of $T$ is the eigenspace corresponding to eigenvalue $0$, so this should have something to do with the characteristic polynomial at $\lambda=0$. So let’s see what happens:

$$\det(0\cdot 1_V-T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^n\left(-t_i^{\pi(i)}\right)=(-1)^n\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^nt_i^{\pi(i)}$$
This is just $(-1)^n$ times our formula for the determinant of $T$. But of course we know the dimension $n$ ahead of time, so we know whether to flip the sign or not. So just take the characteristic polynomial, evaluate it at zero, and flip the sign if $n$ is odd to get the determinant.
There’s one thing to note here, even though it doesn’t really tell us anything new. We’ve said that $T$ is noninvertible if and only if its determinant is zero. Now we know that this will happen if and only if the constant term of the characteristic polynomial is zero. In this case, the polynomial must have a root at $\lambda=0$, which means that the $0$-eigenspace of $T$ is nontrivial. But this just says that the kernel of $T$ is nontrivial. Thus (as we already know) a linear transformation is noninvertible if and only if its kernel is nontrivial.
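Here’s a quick numerical illustration of the sign relationship, again with a made-up matrix (`np.poly` gives the coefficients of $\det(\lambda I-A)$, constant term last):

```python
import numpy as np

# A made-up upper triangular matrix: eigenvalues 2, 3, 5,
# so its determinant is 30 and the dimension n is 3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

coeffs = np.poly(A)          # coefficients of det(lambda*I - A)
constant_term = coeffs[-1]   # the characteristic polynomial at lambda = 0

# the constant term is (-1)^n times the determinant
print(np.isclose(constant_term, (-1) ** n * np.linalg.det(A)))   # True
```

Since $n=3$ is odd here, the constant term comes out as $-30$ and we flip the sign to recover $\det(A)=30$.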
Given a linear endomorphism $T$ on a vector space $V$ of dimension $n$, we’ve defined a function — $\det(T-\lambda 1_V)$ — whose zeroes give exactly those field elements $\lambda$ so that the $\lambda$-eigenspace of $T$ is nontrivial. Actually, we’ll switch it up a bit and use the function $\det(\lambda 1_V-T)$, which has the same useful property. Now let’s consider this function a little more deeply.
First off, if we choose a basis for $V$ we have matrix representations of endomorphisms, and thus a formula for their determinants. For instance, if $T$ is represented by the matrix with entries $t_i^j$, then its determinant is given by

$$\det(T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^nt_i^{\pi(i)}$$
which is a sum of products of matrix entries. Now, the matrix entries for the transformation $\lambda 1_V-T$ are given by $\lambda\delta_i^j-t_i^j$. Each of these new entries is a polynomial (either constant or linear) in the variable $\lambda$. Any sum of products of polynomials is again a polynomial, and so our function is actually a polynomial in $\lambda$. We call it the “characteristic polynomial” of the transformation $T$. In terms of the matrix entries of $T$ itself, we get

$$\det(\lambda 1_V-T)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\prod\limits_{i=1}^n\left(\lambda\delta_i^{\pi(i)}-t_i^{\pi(i)}\right)$$
What’s the degree of this polynomial? Well, first let’s consider the degree of each term in the sum. Given a permutation $\pi$ the term is the product of $n$ factors. The $i$th of these factors will be a field element $-t_i^{\pi(i)}$ if $\pi(i)\neq i$, and will be a linear polynomial $\lambda-t_i^i$ if $\pi(i)=i$. Since multiplying polynomials adds their degrees, the degree of the term will be the number of indices $i$ such that $\pi(i)=i$. Thus the highest possible degree happens if $\pi(i)=i$ for all index values $i$. This only happens for one permutation — the identity — so there can’t be another term of the same degree to cancel the highest-degree monomial when we add them up. And so the characteristic polynomial has degree $n$, equal to the dimension of the vector space $V$.
What’s the leading coefficient? Again, the degree-$n$ monomial can only show up once, in the term corresponding to the identity permutation. Specifically, this term is

$$\prod\limits_{i=1}^n\left(\lambda-t_i^i\right)$$
Each factor gives a $\lambda$ coefficient of $1$, and so the coefficient of the $\lambda^n$ term is also $1$. Thus the leading coefficient of the characteristic polynomial is $1$ — a fact which turns out to be useful.
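Both facts — degree equal to the dimension, and leading coefficient $1$ — are easy to check numerically with a made-up matrix (triangular, so its eigenvalues are real and `np.poly` returns real coefficients):

```python
import numpy as np

# A made-up 4x4 upper triangular matrix, so n = 4 here.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 2.0, 3.0, 0.0],
              [0.0, 0.0, 3.0, 4.0],
              [0.0, 0.0, 0.0, 4.0]])

coeffs = np.poly(A)   # coefficients of det(lambda*I - A), highest power first

print(len(coeffs) - 1)   # 4: the degree equals the dimension
print(coeffs[0])         # 1.0: the leading coefficient
```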
Yesterday, we defined eigenvalues, eigenvectors, and eigenspaces. But we didn’t talk about how to actually find them (though one commenter decided to jump the gun a bit). It turns out that determining the eigenspace for any given eigenvalue is the same sort of problem as determining the kernel.
Let’s say we’ve got a linear endomorphism $T:V\rightarrow V$ and a scalar value $\lambda$. We want to determine the subspace of $V$ consisting of those eigenvectors $v$ satisfying the equation

$$T(v)=\lambda v$$
First, let’s adjust the right hand side. Instead of thinking of the scalar product of $\lambda$ and $v$, we can write it as the action of the transformation $\lambda 1_V$, where $1_V$ is the identity transformation on $V$. That is, we have the equation

$$T(v)=\left(\lambda 1_V\right)(v)$$
Now we can do some juggling to combine these two linear transformations being evaluated at the same vector $v$:

$$\left(T-\lambda 1_V\right)(v)=0$$
And we find that the $\lambda$-eigenspace of $T$ is the kernel $\mathrm{Ker}\left(T-\lambda 1_V\right)$.
Now, as I stated yesterday most of these eigenspaces will be trivial, just as the kernel may be trivial. The interesting stuff happens when $\mathrm{Ker}\left(T-\lambda 1_V\right)$ is nontrivial. In this case, we’ll call $\lambda$ an eigenvalue of the transformation $T$ (thus the eigenvalues of a transformation are those scalars $\lambda$ which correspond to nonzero eigenvectors). So how can we tell whether or not a kernel is trivial? Well, we know that the kernel of an endomorphism is trivial if and only if the endomorphism is invertible. And the determinant provides a test for invertibility!
So we can take the determinant $\det\left(T-\lambda 1_V\right)$ and consider it as a function of $\lambda$. If we get the value $0$, then the $\lambda$-eigenspace of $T$ is nontrivial, and $\lambda$ is an eigenvalue of $T$. Then we can use other tools to actually determine the eigenspace if we need to.
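Here’s a sketch of how this test might look in practice, using numpy and a made-up matrix. Computing the kernel via the singular value decomposition is one standard choice among the “other tools,” not the only one:

```python
import numpy as np

# A made-up matrix standing in for T in some basis; it's upper
# triangular, so its eigenvalues 2 and 3 sit on the diagonal.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def is_eigenvalue(A, lam, tol=1e-9):
    """lam is an eigenvalue exactly when det(A - lam*I) vanishes."""
    n = A.shape[0]
    return abs(np.linalg.det(A - lam * np.eye(n))) < tol

def eigenspace_basis(A, lam, tol=1e-9):
    """The lam-eigenspace is the kernel of A - lam*I; the right singular
    vectors attached to (near-)zero singular values span that kernel."""
    n = A.shape[0]
    _, s, vt = np.linalg.svd(A - lam * np.eye(n))
    rank = int(np.sum(s > tol))
    return vt[rank:]    # each row is one basis vector of the eigenspace

print(is_eigenvalue(A, 2.0))     # True
print(is_eigenvalue(A, 5.0))     # False
print(eigenspace_basis(A, 3.0))  # spans the 3-eigenspace
```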
Okay, I’m back in Kentucky, and things should get back up to speed. For the near future I’ll be talking more about linear endomorphisms — transformations from a vector space to itself.
The absolute simplest thing that a linear transformation can do to a vector is to kill it off entirely. That is, given a linear transformation $T:V\rightarrow V$, it’s possible that $T(v)=0$ for some vectors $v$. This is just what we mean by saying that $v$ is in the kernel $\mathrm{Ker}(T)$. We’ve seen before that the vectors in the kernel form a subspace of $V$.
What other simple things could $T$ do to the vector $v$? One possibility is that $T$ does nothing to $v$ at all. That is, $T(v)=v$. We can call this vector a “fixed point” of the transformation $T$. Notice that if $v$ is a fixed point, then so is any scalar multiple of $v$. Indeed, $T(cv)=cT(v)=cv$ by the linearity of $T$. Similarly, if $v$ and $w$ are both fixed points, then $T(v+w)=T(v)+T(w)=v+w$. Thus the fixed points of $T$ also form a subspace of $V$.
What else could happen? Well, notice that the two above cases are related. The condition that $v$ is in the kernel can be written $T(v)=0v$. The condition that it’s a fixed point can be written $T(v)=1v$. Each one says that the action of $T$ on $v$ is to multiply it by some fixed scalar. Let’s change that scalar and see what happens.
We’re now considering a linear transformation $T$ and a vector $v$ so that $T(v)=\lambda v$ for some scalar $\lambda$. That is, $T$ hasn’t changed the direction of $v$, but only its length. We call such a vector an “eigenvector” of $T$, and the corresponding scalar $\lambda$ its “eigenvalue”. In contexts where our vector space is a space of functions, it’s common (especially among quantum physicists) to use the term “eigenfunction” instead of eigenvector, and even weirder applications of the “eigen-” prefix, but these are almost always just special cases of eigenvectors.
Now it turns out that the eigenvectors associated to any particular eigenvalue $\lambda$ form a subspace of $V$. If we assume that $v$ and $w$ are both eigenvectors with eigenvalue $\lambda$, and that $c$ is another scalar, then we can check

$$T(cv+w)=cT(v)+T(w)=c\lambda v+\lambda w=\lambda(cv+w)$$
We call the subspace of eigenvectors with eigenvalue $\lambda$ the $\lambda$-eigenspace of $T$. Notice here that the $0$-eigenspace is the kernel of $T$, and the $1$-eigenspace is the subspace of fixed points. The $\lambda$-eigenspace makes sense for all scalars $\lambda$, but any given eigenspace might be trivial, just as the transformation might have a trivial kernel.
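The closure computation above is easy to check numerically, with a made-up matrix whose $2$-eigenspace is big enough to hold two independent eigenvectors:

```python
import numpy as np

# A made-up diagonal matrix whose 2-eigenspace is two-dimensional.
A = np.diag([2.0, 2.0, 5.0])

v = np.array([1.0, 0.0, 0.0])   # an eigenvector with eigenvalue 2
w = np.array([0.0, 1.0, 0.0])   # another eigenvector with eigenvalue 2
c = 7.0

# the combination c*v + w is again an eigenvector with eigenvalue 2,
# so the 2-eigenspace is closed under the vector space operations
u = c * v + w
print(np.allclose(A @ u, 2.0 * u))   # True
```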
Now, all of this is basically definitional. What’s surprising is how much of the behavior of any linear transformation is caught up in the behavior of its eigenvalues. We’ll see this more and more as we go further.
So what happens when $T:V\rightarrow V$ fails to be invertible? I say that it must have a nontrivial kernel. Indeed, the index of $T$ is zero, so a trivial kernel would mean a trivial cokernel. We would then have a one-to-one and onto linear transformation, and $T$ would be invertible.
Let’s take a basis of the kernel $\mathrm{Ker}(T)$. Since this is a linearly independent set spanning a subspace of $V$, it can be completed to a basis for all of $V$. Now we can use this basis of $V$ to write out the matrix of $T$ and use our formula from last time to calculate $\det(T)$.
The $i$th column of the matrix is the vector $T(e_i)$ written out in terms of our basis. But since the first few basis vectors are in the kernel of $T$ we have at least $T(e_1)=0$. So the first column of the matrix must be all zeroes. Now as we pick a permutation and walk down the rows, some row is going to tell us to multiply by the entry in the first column, which we have just seen is zero. That is, for every permutation, the term in our determinant formula for that permutation is zero, and so the sum of them all — the determinant itself — must also be zero.
Notice that this still lets us think of the determinant as preserving multiplications in the algebra of endomorphisms of $V$. Any noninvertible linear transformation is sent to zero. The product of a noninvertible transformation and any other transformation will be noninvertible, and the product of their determinants will be zero. This also gives us a test for invertibility! Take the linear transformation $T$ and run it through the determinant function. If the result is zero, then $T$ is noninvertible. If the result is nonzero, then $T$ is invertible.
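We can see both facts numerically with made-up matrices — one whose first column is all zeroes, exactly as in the argument above, and one invertible:

```python
import numpy as np

# A made-up matrix whose first column is all zeroes, as in the
# argument above where the first basis vector lies in the kernel.
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])

# a made-up invertible matrix
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

print(np.isclose(np.linalg.det(A), 0.0))      # True: A is noninvertible
# the product of a noninvertible transformation with any other
# transformation is noninvertible, matching det(AB) = det(A)det(B)
print(np.isclose(np.linalg.det(A @ B), 0.0))  # True
```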
Sorry for not posting yesterday, but I was sort of run-down. I’m taking today as a break before I join Knots in Washington tomorrow, already in progress. Anyhow, the last day of the Joint Meetings is always slow, and there’s nothing much for me to talk about here. Instead, I’ll put up some pictures.
Today’s special session on homotopy theory and higher categories seems to have pushed its higher categories until the afternoon, so I got a chance to see Dan Teague (of Dartmouth) talk about “Making Math out of Style”.
This was rather interesting to me, since the jumping-off point was the identification of Pollock paintings by box-counting dimensions. I really liked this story when it came out, since it’s a great story to relate mathematics to art. The talk continued to discuss efforts to identify authorship of some of the Federalist Papers, and of one of the Wizard of Oz books. Then there was identifying forged Van Gogh paintings (which I think I saw on Scientific American Frontiers a few months ago). Neat stuff.
In the afternoon, John Baez led off, talking about the classifying space of a 2-group. I’ll also see him later discussing groupoidification in the categorification and link homology session. I wanted to make this post now in a bit of down time so I could remind people that I’ll be at Tryst at 8, and to pass on this bit of wisdom from Baez’ first talk: is just in a really weird font.
[UPDATE]: Paul is right in his comment below. Dan Rockmore gave the talk, and Dan Teague introduced him. I met someone else the next day who confirmed both this fact and that he was initially confused by it as well.
Today was a little thin. I had to meet someone in the exhibit hall when it opened, and the talks I saw after that in the morning were sort of lackluster. After lunch I saw a couple topology and applied mathematics talks, but then had to head off for another meeting about a job prospect. After that rather than sticking around for Mikhail Khovanov’s invited address (I probably know what he’d say anyway), I decided to hit the metro and try to beat the beltway traffic.
One talk in the early morning caught my attention. David Clark talked about the functoriality of the $\mathfrak{sl}_3$ analogue of Khovanov homology. The talk itself I don’t care to talk about much here, but I was glad to see that the first step was to pass from links to tangles, and to treat them as the natural setting. Now if I can just get the term “tangle covariant” to catch on…
Oh, I wasn’t able to join the Secret Blogging Seminar’s drinking tonight. Tomorrow night, however, I’m thinking I’ll be at Tryst. I’m done with special sessions by 6, and I’ll be wanting to have dinner of course. So I’ll say “8 PM”. Here’s a map.
Well, I made some good non-academic contacts already. I’d rather not go into details, since being too talky might be a problem when prospective employers look to Google and find me as the top hit for my name and subject. But I’m feeling good about my prospects, even without looking at the wilds of federal government and contracting jobs.
Anyhow, as for mathematics there were a number of good talks, but most of them were either what was expected from the speaker, or felt pretty technical. One, though, really grabbed me. Kerry Luse, formerly a student of Yongwu Rong’s at George Washington, spoke about “A transition polynomial for signed Feynman diagrams”. She started with the chord diagram of a knot and added a sign to each chord. If you read the signs as orientations, you get Feynman diagrams for a single species of noninteracting, non-self-dual particles. Alternately, you can interpret the diagrams as arising from RNA secondary structures, as she did. Either way, she was looking for a polynomial invariant to be calculated from such a diagram, and she came up with some really interesting results from her choice. One property in particular was the fact that the resulting polynomial (as applied to chord diagrams arising from knots) is multiplicative under connected sums of links. This makes me think it’s got something to do with the Alexander polynomial.
She also mentioned chord diagrams for links, with more than one loop, which I don’t think I’ve ever considered as such. Immediately this made me think of extending to tangles (naturally), and then that these chord diagrams may themselves form a category of their own. Is there some sort of duality here? If so it might turn connected sum on one side into disjoint union on the other side, which could provide a fascinating connection between classical and quantum topology…
See, I’m not going to be stopping research, and definitely not stopping this project here (thanks, btw, for the comments), but I just need to get out of the academic game I’ve been playing the last few years.
Anyhow, since it’s been weeks since I’ve been at home cooking for myself (thanks to Dad insisting on doing it all), I figured I’d have a bonus “I (Didn’t) Made It!”:
At the Afghan Grill, just around the corner from the Marriott, I’m having the mantoo, and across the table from me is the lamb qabili palao. So who ordered the Afghan equivalent of biryani?