Nilpotent Transformations I
For today I want to consider a single transformation $N$ whose only eigenvalue is $0$. That is, $N^k=0$ for sufficiently large powers $k$. We’ve seen that $\dim(V)$ is a sufficiently large power to take when checking whether $N$ is nilpotent. We’ve also seen that $N$ has an upper-triangular matrix with all zeroes along the diagonal — the single eigenvalue $0$ with multiplicity $\dim(V)$. This sort of “nil-potent” transformation (because a power of it is the null transformation) is especially interesting because if we take any transformation, restrict it to a generalized eigenspace, and subtract the eigenvalue times the identity transformation, what’s left is nilpotent. This will be important soon.
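To have a concrete picture in mind, here is a toy example of my own (just an illustration, not part of the argument to come): take $V=\mathbb{F}^3$ with basis $e_1,e_2,e_3$ and let $N$ send $e_3\mapsto e_2\mapsto e_1\mapsto0$. In that basis

$\displaystyle N=\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}$

which is upper-triangular with zeroes down the diagonal, and $N^3=0$ with $3=\dim(V)$, just as promised. The kernels grow one dimension at a time: $\ker(N)=\mathrm{span}(e_1)$, then $\ker(N^2)=\mathrm{span}(e_1,e_2)$, and finally $\ker(N^3)=V$.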
The essential thing about $N$ is that its kernel gets bigger and bigger as we raise it to higher and higher powers, until it swallows up the whole vector space. This much we knew, but we’re going to follow it closely, so we can describe exactly how $V$ slips away. What we need first is a lemma: let $n$ be the power where $N^n$ first equals the zero transformation. If $U\subseteq V$ is a subspace that intersects the kernel of $N^{n-1}$ trivially — so $N$ doesn’t kill off anything in $U$ until the $n$th (and last) iteration — then we can build the subspace $U+N(U)+N^2(U)+\dots+N^{n-1}(U)$. This subspace is invariant under $N$, since applying $N$ just pushes everything down the line. The lemma will assert that there is another invariant subspace $W$ so that

$\displaystyle V=U\oplus N(U)\oplus N^2(U)\oplus\dots\oplus N^{n-1}(U)\oplus W$
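In the toy example above the lemma is easy to see directly (again, the example is mine, just to make the statement concrete): there $n=3$ and $\ker(N^2)=\mathrm{span}(e_1,e_2)$, so $U=\mathrm{span}(e_3)$ intersects it trivially. Then $N(U)=\mathrm{span}(e_2)$ and $N^2(U)=\mathrm{span}(e_1)$, and

$\displaystyle V=U\oplus N(U)\oplus N^2(U)$

with $W=0$ doing the job. If we pad the space out with one more basis vector $e_4$ that $N$ kills immediately, the same $U$ works, but now we need the leftover piece $W=\mathrm{span}(e_4)$, a nontrivial invariant subspace that the images of $U$ never reach.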
We’ll proceed by induction on $n$. If $n=1$ then $N$ itself is the zero transformation. Every subspace of $V$ is invariant under $N$, and so we can always find such a $W$: just pick any complementary subspace to $U$.
Now, let’s say that $n\geq2$, and let $U$ be a subspace so that $U\cap\ker(N^{n-1})=0$. Then some of $V$ is taken up by $U$, and some by the kernel $\ker(N^{n-1})$ (with no overlap). We can find another subspace $X$ so that $V=U\oplus X\oplus\ker(N^{n-1})$, and write $\hat{U}=U\oplus X$.
We can see that $N(\hat{U})\subseteq\ker(N^{n-1})$, since $n-1$ applications of $N$ are sufficient to kill off every vector of $N(\hat{U})$. We can also see that $N(\hat{U})\cap\ker(N^{n-2})=0$, because if anything nonzero in $\hat{U}$ were killed off by only $n-1$ applications of $N$ it would lie in $\ker(N^{n-1})$, contradicting the direct sum above. Now if we restrict $N$ to $\ker(N^{n-1})$ (a transformation which becomes zero after at most $n-1$ applications), we’re all set to invoke the inductive hypothesis. In this case, $\ker(N^{n-1})$ plays the role of $V$ and $N(\hat{U})$ plays the role of $U$. The inductive hypothesis gives us a subspace $W'$ that’s invariant under $N$ and satisfying

$\displaystyle\ker(N^{n-1})=N(\hat{U})\oplus N^2(\hat{U})\oplus\dots\oplus N^{n-1}(\hat{U})\oplus W'$
We can set

$\displaystyle W=X+N(X)+N^2(X)+\dots+N^{n-1}(X)+W'$

to get a subspace of $V$ that is invariant under $N$ (since $W'$ and the rest of the sum are each separately invariant). This will be the subspace we need, which we must now check. First, note that

$\displaystyle U+N(U)+\dots+N^{n-1}(U)+W=\hat{U}+N(\hat{U})+\dots+N^{n-1}(\hat{U})+W'=\hat{U}+\ker(N^{n-1})=V$
but is the sum of $U+N(U)+\dots+N^{n-1}(U)$ and $W$ actually direct? We need to show that the only way to write zero as a sum of vectors drawn from all these pieces is for every one of them to be zero.
Take elements $u_0$ through $u_{n-1}$ of $U$, elements $x_0$ through $x_{n-1}$ of $X$, and $w\in W'$, and ask that

$\displaystyle u_0+N(u_1)+\dots+N^{n-1}(u_{n-1})+x_0+N(x_1)+\dots+N^{n-1}(x_{n-1})+w=0$

Applying $N^{n-1}$ to each side of this equation we find $N^{n-1}(u_0+x_0)=0$ — everything else gets killed off — which can only happen if $u_0=x_0=0$, since $u_0+x_0$ lies in $\hat{U}$, which meets $\ker(N^{n-1})$ trivially. Then we’re left with

$\displaystyle N(u_1)+\dots+N^{n-1}(u_{n-1})+N(x_1)+\dots+N^{n-1}(x_{n-1})+w=0$

which would imply

$\displaystyle N(u_1+x_1)+N^2(u_2+x_2)+\dots+N^{n-1}(u_{n-1}+x_{n-1})+w=0$

contradicting the directness of the sum in the above decomposition of $\ker(N^{n-1})$ unless everything in sight is zero. And indeed everything is: once each $N^i(u_i+x_i)$ vanishes, the vector $u_i+x_i$ lies in $\ker(N^i)\subseteq\ker(N^{n-1})$, and so in $\hat{U}\cap\ker(N^{n-1})=0$; thus $u_i=x_i=0$ as well, while the decomposition also forces $w=0$. So the sum is direct, and $W$ is exactly the invariant subspace the lemma promised.
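To watch the inductive step in action, here’s one more toy example of my own (the names $U$, $X$, $\hat{U}$, $W'$ are the ones from the proof, but the specific vectors are just an illustration). Take $V=\mathbb{F}^4$ with $N(e_2)=e_1$, $N(e_4)=e_3$, and $N(e_1)=N(e_3)=0$, so that $n=2$ and $\ker(N)=\mathrm{span}(e_1,e_3)$. Start with the deliberately small subspace $U=\mathrm{span}(e_2)$, which meets $\ker(N)$ trivially. The decomposition $V=U\oplus X\oplus\ker(N)$ forces a one-dimensional $X$, say $X=\mathrm{span}(e_4)$, so $\hat{U}=\mathrm{span}(e_2,e_4)$. Then $N(\hat{U})=\mathrm{span}(e_1,e_3)=\ker(N)$, the restriction of $N$ to $\ker(N)$ is the zero transformation, and the base case hands back $W'=0$. Finally $W=X+N(X)+W'=\mathrm{span}(e_3,e_4)$, which is invariant, and indeed

$\displaystyle V=U\oplus N(U)\oplus W=\mathrm{span}(e_2)\oplus\mathrm{span}(e_1)\oplus\mathrm{span}(e_3,e_4)$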
So there it is. It’s sort of messy, but at the end of the day we can start with a subspace $U$ that doesn’t disappear for as long as possible and use $N$ to march it around until it dies. Then the rest of $V$ can be made up by another invariant subspace. Tomorrow we’ll see what we can do with this.