## Generalized Eigenvectors

Sorry for the delay, but exam time is upon us, or at least it is for my college algebra class.

Anyhow, we’ve established that distinct eigenvalues allow us to diagonalize a matrix, but repeated eigenvalues cause us problems. We need to generalize the concept of eigenvectors somewhat.

First of all, since an eigenspace generalizes a kernel, let’s consider a situation where we repeat the eigenvalue $0$:

$$N=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$

This kills off the vector $e_1=\begin{pmatrix}1\\0\end{pmatrix}$ right away. But the vector $e_2=\begin{pmatrix}0\\1\end{pmatrix}$ gets sent to $e_1$, where it can be killed by a *second* application of the matrix. So while there may not be two independent eigenvectors with eigenvalue $0$, there can be another vector that is *eventually* killed off by repeated applications of the matrix.
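We can verify this behavior numerically. Here is a minimal sketch in NumPy, with $N$ and the basis vectors $e_1$, $e_2$ as above:

```python
import numpy as np

# The 2x2 matrix with the eigenvalue 0 repeated
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

e1 = np.array([1.0, 0.0])  # killed by one application of N
e2 = np.array([0.0, 1.0])  # sent to e1, then killed by a second application

print(N @ e1)        # already zero
print(N @ e2)        # lands on e1
print(N @ (N @ e2))  # zero after the second application
```

One application annihilates $e_1$; two applications annihilate everything, since $N^2=0$.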

More generally, consider a *strictly* upper-triangular matrix $N$, all of whose diagonal entries are zero as well:

$$N=\begin{pmatrix}0&n_{12}&\cdots&n_{1d}\\0&0&\ddots&\vdots\\\vdots&\ddots&\ddots&n_{(d-1)d}\\0&\cdots&0&0\end{pmatrix}$$

That is, $n_{ij}=0$ for all $i\geq j$. What happens as we compose this matrix with itself? I say that for $N^2$ we’ll find the $(i,j)$ entry to be zero for all $i\geq j-1$. Indeed, we can calculate it as a sum of terms like $n_{ik}n_{kj}$. For each of these factors to be nonzero we need $i<k$ and $k<j$. That is, $i<j-1$, or else the matrix entry must be zero. Similarly, every additional factor of $N$ pushes the nonzero matrix entries one step further from the diagonal, and eventually they must fall off the upper-right corner. That is, *some* power of $N$ must give the zero matrix. The vectors may not have been killed by the transformation $N$ itself, so they may not all have been in the kernel, but they will all be in the kernel of some power of $N$.
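The band-shifting argument above is easy to check numerically. A sketch, using a random strictly upper-triangular matrix of my own choosing (the dimension $d=5$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# A random strictly upper-triangular matrix: n_ij = 0 whenever i >= j
N = np.triu(rng.normal(size=(d, d)), k=1)

# Each extra factor of N pushes the nonzero entries one more step
# above the diagonal: (N^p)_{ij} = 0 whenever j - i <= p - 1
for p in range(1, d + 1):
    Np = np.linalg.matrix_power(N, p)
    assert np.allclose(np.tril(Np, k=p - 1), 0)

# By the d-th power the band has fallen off the corner entirely
print(np.linalg.matrix_power(N, d))  # the zero matrix
```

The loop confirms that each power of $N$ is zero on one more band than the last, and the final power is zero everywhere: every vector lies in the kernel of $N^d$.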

Similarly, let’s take a linear transformation $T$ and a vector $v$. If $T(v)=\lambda v$ we said that $v$ is an eigenvector of $T$ with eigenvalue $\lambda$. Now we’ll extend this by saying that if $(T-\lambda I)^nv=0$ for some $n$, then $v$ is a *generalized* eigenvector of $T$ with eigenvalue $\lambda$.
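To see the definition in action with a nonzero eigenvalue, here is a sketch with a transformation of my own choosing: a $2\times 2$ matrix repeating the eigenvalue $2$, analogous to the matrix at the start of the post.

```python
import numpy as np

# A hypothetical example (not from the post): eigenvalue 2 repeated
T = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
I = np.eye(2)

v = np.array([0.0, 1.0])

# v is NOT an ordinary eigenvector: (T - lam*I) v is nonzero
print((T - lam * I) @ v)

# ...but (T - lam*I)^2 v = 0, so v is a generalized eigenvector
print(np.linalg.matrix_power(T - lam * I, 2) @ v)
```

Subtracting $\lambda I$ leaves exactly the strictly upper-triangular situation discussed above, which is why repeated applications of $T-\lambda I$ eventually kill $v$.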