Sorry for the delays. It’s been a bit busy.
Today we’ll finish off our treatment of nilpotent transformations by taking last Thursday’s lemma and using it to find a really nice basis for a nilpotent transformation, called a “Jordan basis”. This is one that makes the matrix of the transformation break up into a block-diagonal form
where each block has the form

$$\begin{pmatrix}0&1&&\\&0&\ddots&\\&&\ddots&1\\&&&0\end{pmatrix}$$

with zeroes down the diagonal, as we should have for any nilpotent transformation in upper-triangular form, and ones just above it. Note that a block could just be a single zero in the matrix: a $1\times1$ block, with no space above the diagonal to put any ones anyway.
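As a quick sanity check, here is a small numerical sketch (in Python with NumPy, using a hypothetical $4\times4$ example, not anything from the lecture) of the fact that such a block really is nilpotent: each power pushes the ones up one diagonal until the matrix vanishes entirely.

```python
import numpy as np

# A single 4x4 Jordan block for a nilpotent transformation:
# zeroes down the diagonal and ones just above it.
J = np.diag(np.ones(3), k=1)

# Multiplying by J pushes the ones one diagonal higher each time,
# so J^3 is still nonzero while J^4 is the zero matrix.
assert np.any(np.linalg.matrix_power(J, 3) != 0)
assert np.all(np.linalg.matrix_power(J, 4) == 0)
```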
So, remember that we’re talking about a linear transformation $N$ with the property that $N^n = 0$ for some $n$. We’ll proceed by induction on the dimension $d$ of the vector space $V$. If $d = 1$ then $N$ is multiplication by some scalar $\lambda$ with $\lambda^n = 0$, so $N$ has to be the zero transformation, and any basis will be a Jordan basis.
Now we’ll assume that the statement holds for all vector spaces of dimension less than $d$. If we pick $k$ to be the smallest power so that $N^k = 0$, then we know $N^{k-1} \neq 0$. Thus there is some vector $v$ so that $N^{k-1}v \neq 0$. (If $k = 1$ then $N$ is already the zero transformation and we’re done, so we can take $k \geq 2$.) Since $Nv \neq 0$, the vector $v$ spans a one-dimensional subspace which trivially intersects the kernel of $N$. Our lemma then tells us that we can decompose $V$ as a direct sum:

$$V = \operatorname{span}\left(v, Nv, \dots, N^{k-1}v\right) \oplus W$$
for some invariant subspace $W$. Now, $W$ can’t be all of $V$, since we know $v$ isn’t in it. It could be nontrivial, though. Then we’ve broken $V$ into the direct sum of two invariant subspaces, and we can restrict $N$ to each subspace and use the inductive hypothesis to get a Jordan basis for each part. Putting those bases together gives us a Jordan basis for all of $V$.
What’s left is the case where $W$ is trivial, and so $V = \operatorname{span}\left(v, Nv, \dots, N^{k-1}v\right)$. This means that $V$ is spanned by the $k$ vectors $v, Nv, \dots, N^{k-1}v$. Thus $k \geq d$, since a basis is a minimal spanning set. But we also know that $N^d = 0$, so the minimality of $k$ gives $k \leq d$. Thus $k = d$, and the spanning set is actually a basis.
Reversing the order in the list above, we get the basis $N^{d-1}v, N^{d-2}v, \dots, Nv, v$. We see that $N\left(N^{d-1}v\right) = N^d v = 0$, so the first column in the matrix will be zero. Next, $N\left(N^{d-2}v\right) = N^{d-1}v$, so the second column will have a one in the first row, and zeroes elsewhere. As we go forward, $N$ sends the $j$th basis vector to the $(j-1)$th, so the $j$th column has a one just above the diagonal entry in the $j$th row, and zeroes elsewhere, showing that this is a Jordan basis, and finishing the theorem.
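To see this chain construction concretely, here is a small numerical sketch (the matrices are hypothetical examples, not from the lecture): we hide a $3\times3$ Jordan block by conjugating with an invertible matrix, pick a vector $v$ that survives two applications of the resulting nilpotent $N$, and check that the reversed chain $N^2v, Nv, v$ puts $N$ back into Jordan form.

```python
import numpy as np

# Conjugate the 3x3 Jordan block by an invertible matrix A, giving a
# nilpotent N whose structure is not obvious from its entries.
A = np.array([[1., 1., 0.],
              [0., 1., 2.],
              [1., 0., 1.]])         # det(A) = 3, so A is invertible
J = np.diag(np.ones(2), k=1)         # the 3x3 Jordan block
N = A @ J @ np.linalg.inv(A)         # N^3 = 0 but N^2 != 0

# Pick a vector v that lives as long as possible: N^2 v != 0.
v = np.array([0., 0., 1.])
assert not np.allclose(N @ N @ v, 0)

# The reversed chain N^2 v, N v, v is a basis, and in that basis the
# matrix of N is exactly the Jordan block again.
P = np.column_stack([N @ N @ v, N @ v, v])   # basis vectors as columns
M = np.linalg.inv(P) @ N @ P
assert np.allclose(M, J)
```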
Essentially what we’ve done is this: we pick one vector that lives as long as possible, and follow it until it dies, adding new basis elements as we go. Then, if we haven’t used up the whole space, we pick another vector that lives as long as possible (which might not be as long as the first one did) and repeat the process until eventually we fill up all of $V$. The lemma is important in that it tells us that these later streams of basis vectors will never step on the toes of earlier streams.
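The two-stream picture can be sketched numerically as well (again with a hypothetical example matrix): one vector lives for two steps and contributes a chain of length two, a second vector in the kernel dies immediately, and together the two streams fill out a basis that block-diagonalizes $N$ into a $2\times2$ block and a $1\times1$ block.

```python
import numpy as np

# A nilpotent N on R^3 whose Jordan form has one block of size 2
# and one block of size 1 (N^2 = 0 but N != 0).
N = np.array([[0., 1., 1.],
              [0., 0., 0.],
              [0., 0., 0.]])

# First stream: v1 lives as long as possible, since N v1 != 0.
v1 = np.array([0., 1., 0.])
chain1 = [N @ v1, v1]                # the chain N v1, v1

# Second stream: v2 lies in ker N, independent of the first chain,
# so its stream dies immediately and contributes a single vector.
v2 = np.array([0., 1., -1.])
chain2 = [v2]

# Together the streams form a Jordan basis: N is block diagonal,
# with a 2x2 Jordan block followed by a 1x1 zero block.
P = np.column_stack(chain1 + chain2)
M = np.linalg.inv(P) @ N @ P
expected = np.array([[0., 1., 0.],
                     [0., 0., 0.],
                     [0., 0., 0.]])
assert np.allclose(M, expected)
```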
We should point out here that this result about nilpotent transformations works no matter what base field we’re working with. A single nilpotent transformation on its own is actually a remarkably simple thing, which can be described very succinctly.