The Unapologetic Mathematician

Mathematics for the interested outsider

Determining Generalized Eigenvectors

Our definition of a generalized eigenvector looks a lot like the one for an eigenvector. But finding them may not be as straightforward as our method for finding eigenvectors. In particular, we’re asking that the vector be in the kernel of not one transformation, but some transformation in a whole infinite list of them. We can’t just go ahead and apply each transformation, hoping to find one that sends our vector to zero.

Maybe the form of this list can help us. We’re really just taking the one transformation T-\lambda1_V and applying it over and over again. So we could start with v and calculate (T-\lambda1_V)v, and then (T-\lambda1_V)^2v, and so on, until we end up with (T-\lambda1_V)^nv=0. That’s all well and good if v is a generalized eigenvector, but what if it isn’t? At what point do we stop and say we’re never going to get to zero?
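This iteration is easy to carry out by hand on a small example. As a sketch (the matrix and vector here are my own illustration, not from the post), take a 3\times 3 Jordan block with eigenvalue 2 and apply powers of T-\lambda1_V to a vector that is not an ordinary eigenvector:

```python
# Illustrative sketch using sympy for exact arithmetic.
# A is a hypothetical example: a single Jordan block with eigenvalue lam = 2.
from sympy import Matrix, eye

lam = 2
A = Matrix([[2, 1, 0],
            [0, 2, 1],
            [0, 0, 2]])
N = A - lam * eye(3)          # the transformation T - lambda*1_V

v = Matrix([0, 0, 1])         # not an ordinary eigenvector: N*v is nonzero
print(N * v)                  # nonzero
print(N**2 * v)               # still nonzero
print(N**3 * v)               # zero: v is a generalized eigenvector
```

Here the iteration does terminate, but for an arbitrary vector we need the bound established below to know when to give up.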

The first thing we have to notice is that as we go along the list of transformations, the kernel never shrinks. That is, if (T-\lambda1_V)^iv=0 then

\displaystyle(T-\lambda1_V)^{i+1}v=(T-\lambda1_V)\left((T-\lambda1_V)^iv\right)=(T-\lambda1_V)0=0

Thus, we have an increasing sequence of subspaces

\displaystyle 0=\mathrm{Ker}\left((T-\lambda1_V)^0\right)\subseteq\mathrm{Ker}\left((T-\lambda1_V)^1\right)\subseteq\mathrm{Ker}\left((T-\lambda1_V)^2\right)\subseteq...
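We can watch this chain of kernels grow in a small computation. This sketch (again with a made-up example matrix, here two 2\times 2 Jordan blocks with eigenvalue 2) records \dim\mathrm{Ker}\left((T-\lambda1_V)^i\right) for successive i:

```python
# Illustrative sketch: the dimensions of Ker((A - lam*I)^i) never decrease.
# A is a hypothetical example built from two 2x2 Jordan blocks, eigenvalue 2.
from sympy import Matrix, eye

lam = 2
A = Matrix([[2, 1, 0, 0],
            [0, 2, 0, 0],
            [0, 0, 2, 1],
            [0, 0, 0, 2]])
N = A - lam * eye(4)

# len(nullspace()) is the dimension of the kernel
dims = [len((N**i).nullspace()) for i in range(5)]
print(dims)   # an increasing sequence that levels out
```

For this example the dimensions come out 0, 2, 4, 4, 4: the sequence grows strictly and then stabilizes, exactly the behavior proved next.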

Next we have to recognize that this sequence is strictly increasing until it levels out. That is, if \mathrm{Ker}\left((T-\lambda1_V)^{i-1}\right)=\mathrm{Ker}\left((T-\lambda1_V)^i\right) then \mathrm{Ker}\left((T-\lambda1_V)^i\right)=\mathrm{Ker}\left((T-\lambda1_V)^{i+1}\right). Then, of course, we can use an inductive argument to see that all the kernels from that point on are the same. But why does this happen? Well, let’s say that (T-\lambda1_V)^iv=0 implies that (T-\lambda1_V)^{i-1}v=0 (the other implication we’ve already taken care of above). Then we can see that (T-\lambda1_V)^{i+1}v=0 implies that (T-\lambda1_V)^iv=0 by rewriting them:

\displaystyle\begin{aligned}(T-\lambda1_V)^{i+1}v&=0\\(T-\lambda1_V)^i\left((T-\lambda1_V)v\right)&=0\\(T-\lambda1_V)^{i-1}\left((T-\lambda1_V)v\right)&=0\\(T-\lambda1_V)^iv&=0\end{aligned}

where we have used the assumed implication (applied to the vector (T-\lambda1_V)v) between the second and third lines.

So once this sequence stops growing at one step, it never grows again. That is, if the kernels first stabilize at step n, the sequence looks like

\displaystyle 0=\mathrm{Ker}\left((T-\lambda1_V)^0\right)\subsetneq\mathrm{Ker}\left((T-\lambda1_V)^1\right)\subsetneq\dots\subsetneq\mathrm{Ker}\left((T-\lambda1_V)^n\right)=\mathrm{Ker}\left((T-\lambda1_V)^{n+1}\right)=\dots

and \mathrm{Ker}\left((T-\lambda1_V)^n\right) is as large as it ever gets.

So does the sequence top out? Of course it has to! Each step before the sequence levels out raises the dimension by at least one, so if it hadn’t stopped by step d=\dim(V), the kernel’s dimension would exceed that of V itself, which is absurd because these are all subspaces of V. So \mathrm{Ker}\left((T-\lambda1_V)^d\right) is the largest of these kernels.

Where does this leave us? We’ve established that if v is in the kernel of any power of T-\lambda1_V, it will be in \mathrm{Ker}\left((T-\lambda1_V)^d\right). Thus the space of generalized eigenvectors with eigenvalue \lambda is exactly this kernel, and finding generalized eigenvectors is just as easy as finding eigenvectors: we compute the kernel of the single transformation (T-\lambda1_V)^d.
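The whole procedure can be sketched in a few lines. The function name and the example matrix here are my own illustration; the method, computing \mathrm{Ker}\left((T-\lambda1_V)^d\right) with d=\dim(V), is the one just described:

```python
# Sketch: the generalized eigenspace for lam is Ker((A - lam*I)^d), d = dim V.
# generalized_eigenspace is a hypothetical helper name; the example A is made up.
from sympy import Matrix, eye

def generalized_eigenspace(A, lam):
    """Return a basis for the generalized eigenspace of A with eigenvalue lam."""
    d = A.rows                        # d = dim(V)
    N = (A - lam * eye(d))**d         # (T - lambda*1_V)^d
    return N.nullspace()              # basis of its kernel

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
basis = generalized_eigenspace(A, 2)
print(len(basis))   # 2, even though the ordinary eigenspace for 2 is 1-dimensional
```

Note that a single kernel computation suffices; no vector-by-vector iteration is needed.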

February 17, 2009 Posted by | Algebra, Linear Algebra | 5 Comments