The Unapologetic Mathematician

Mathematics for the interested outsider

Nilpotent Transformations II

Sorry for the delays. It’s been a bit busy.

Today we’ll finish off our treatment of nilpotent transformations by taking last Thursday’s lemma and using it to find a really nice basis for a nilpotent transformation, called a “Jordan basis”. This is one that makes the matrix of the transformation break up into a block-diagonal form

\displaystyle\begin{pmatrix}A_1&&{0}\\&\ddots&\\{0}&&A_n\end{pmatrix}

where each block has the form

\displaystyle\begin{pmatrix}{0}&1&&&{0}\\&{0}&1&&\\&&\ddots&\ddots&\\&&&{0}&1\\{0}&&&&{0}\end{pmatrix}

with zeroes down the diagonal — as we should have for any nilpotent transformation in upper-triangular form — and ones just above it. Note that a block could just be a single 1\times1 zero matrix, with no room above the diagonal to put any ones anyway.
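To make the block shape concrete, here is a small numerical sketch (numpy assumed; this code is an illustration, not part of the original post) that builds a single k\times k block of this form and checks that it's nilpotent of index exactly k:

```python
import numpy as np

def jordan_block(k):
    """k-by-k nilpotent Jordan block: zeroes on the diagonal, ones just above it."""
    N = np.zeros((k, k))
    for i in range(k - 1):
        N[i, i + 1] = 1.0
    return N

N = jordan_block(4)
# N^4 vanishes, but N^3 does not: the block is nilpotent of index exactly 4
print(np.allclose(np.linalg.matrix_power(N, 4), 0))       # True
print(not np.allclose(np.linalg.matrix_power(N, 3), 0))   # True
```

Each application of the block shifts the basis vectors one step down the chain, so after k steps everything has been sent to zero.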

So, remember that we’re talking about a linear transformation N:V\rightarrow V with the property that N^k=0 for some k. We’ll proceed by induction on the dimension d=\dim(V). If d=1 then N has to be the zero transformation, and any basis will be a Jordan basis.

Now we’ll assume that the statement holds for all vector spaces of dimension less than d. If we pick k to be the smallest power so that N^k=0, then we know N^{k-1}\neq0. Thus there is some vector u so that N^{k-1}u\neq0. This vector spans a one-dimensional subspace U which trivially intersects the kernel of N^{k-1}. Our lemma then tells us that we can decompose V as a direct sum:

V=(U+NU+...+N^{k-1}U)\oplus W

for some invariant subspace W. Now, W can’t be all of V, since we know u isn’t in it. It could be nontrivial, though. Then we’ve broken V into the direct sum of two invariant subspaces, and we can restrict N to each subspace and use the inductive hypothesis to get a Jordan basis for each part. Putting those bases together gives us a Jordan basis for all of V.

What’s left is the case where W is trivial, and so V=U+NU+...+N^{k-1}U. This means that V is spanned by the k vectors \{u,Nu,...,N^{k-1}u\}. Thus k\geq d, since a basis is a minimal spanning set. But we also know that N^d=0, so k\leq d. Thus k=d, and the spanning set is actually a basis.

Reversing the order in the list above, we take \{N^{k-1}u,...,Nu,u\} as our basis. Then N(N^{k-1}u)=0, so the first column of the matrix will be zero. Next, N(N^{k-2}u)=N^{k-1}u, so the second column will have a one in the first row and zeroes elsewhere. In general, the ith basis vector is N^{k-i}u, and N sends it to the previous one, so the ith column has a one in the (i-1)th row, just above the diagonal, and zeroes elsewhere. This shows that we have a Jordan basis, finishing the theorem.
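Here is this single-chain case worked numerically (an illustrative numpy sketch, not code from the post; the particular matrix and choice of u are assumptions made for the example). We take a 3\times3 nilpotent N with k=d=3, build the chain basis \{N^2u,Nu,u\}, and check that the matrix of N in that basis is a single Jordan block:

```python
import numpy as np

# a nilpotent N with N^3 = 0 but N^2 != 0, so k = d = 3
N = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])

u = np.array([0., 0., 1.])                 # N^2 u = e_1, which is nonzero
chain = [np.linalg.matrix_power(N, 2) @ u, N @ u, u]
B = np.column_stack(chain)                 # basis ordered N^2 u, N u, u

J = np.linalg.inv(B) @ N @ B               # matrix of N in the chain basis

expected = np.array([[0., 1., 0.],         # zeroes on the diagonal,
                     [0., 0., 1.],         # ones just above it
                     [0., 0., 0.]])
print(np.allclose(J, expected))            # True
```

The change-of-basis computation B^{-1}NB is exactly "write down the matrix of N with respect to the columns of B", which is why the ones land just above the diagonal.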

Essentially what we’ve done is this: we pick one vector that lives as long as possible, and follow it until it dies, adding new basis elements as we go. Then if we haven’t used up the whole space, we pick another vector that lives as long as possible — which might be not as long as the first one did — and repeat the process until eventually we fill up all of V. The lemma is important in that it tells us that these later streams of basis vectors will never step on the toes of earlier streams.
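The two-stream picture can be seen on a tiny example (again an illustrative numpy sketch, not from the post; the matrix and the choice of e_2 as the second stream are assumptions made for this example). On a three-dimensional space with N of index two, one chain of length two plus one leftover kernel vector give a basis in which N breaks into a 2\times2 block and a 1\times1 block:

```python
import numpy as np

# N has index k = 2: one chain u -> Nu of length two, then one leftover direction
N = np.array([[0., 0., 1.],
              [0., 0., 0.],
              [0., 0., 0.]])

u = np.array([0., 0., 1.])    # lives as long as possible: Nu = e_1 is nonzero
w = np.array([0., 1., 0.])    # second stream: already in the kernel, a chain of length one

B = np.column_stack([N @ u, u, w])   # the two chains laid end to end
J = np.linalg.inv(B) @ N @ B

expected = np.array([[0., 1., 0.],   # a 2x2 Jordan block...
                     [0., 0., 0.],
                     [0., 0., 0.]])  # ...followed by a 1x1 zero block
print(np.allclose(J, expected))      # True
```

The second stream was chosen outside the span of the first chain; the lemma is what guarantees such a complementary, invariant place to look always exists.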

We should point out here that this result about nilpotent transformations works no matter what base field we’re working with. A single nilpotent transformation on its own is actually a remarkably simple thing, which can be described very succinctly.

March 3, 2009 Posted by | Algebra, Linear Algebra