We would like to define the multiplicity of an eigenvalue $\lambda$ of a linear transformation $T$ as the number of independent associated eigenvectors — that is, as the dimension of the kernel of $T-\lambda 1_V$. Unfortunately, we saw that when we have repeated eigenvalues, sometimes this doesn’t quite capture the right notion. In that example, the eigenspace has dimension one, but it seems from the upper-triangular matrix that the eigenvalue should have multiplicity two.
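To see the mismatch concretely, here is a minimal numeric sketch (the matrix is my own illustrative choice, not necessarily the example referenced above): an upper-triangular matrix with the eigenvalue $1$ appearing twice on the diagonal, whose eigenspace is nonetheless only one-dimensional.

```python
import numpy as np

# A standard instance of the problem (illustrative, my own choice):
# T is upper-triangular with the eigenvalue 1 on the diagonal twice,
# but ker(T - 1) is only one-dimensional.
T = np.array([[1., 1.],
              [0., 1.]])
A = T - 1.0 * np.eye(2)

# dim ker = (dimension of the space) - rank
dim_eigenspace = 2 - np.linalg.matrix_rank(A)
print(dim_eigenspace)  # 1, even though 1 shows up twice on the diagonal
```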
Then we can use our definition of multiplicity for roots of polynomials to see that a given eigenvalue $\lambda$ has multiplicity equal to the number of times it shows up on the diagonal of an upper-triangular matrix.
It turns out that generalized eigenspaces do capture this notion, and we have a way of calculating them to boot! That is, I’m asserting that the multiplicity of an eigenvalue $\lambda$ is both the number of times that $\lambda$ shows up on the diagonal of any upper-triangular matrix for $T$, and the number of independent generalized eigenvectors with eigenvalue $\lambda$ — which is $\dim\ker\left((T-\lambda 1_V)^{\dim V}\right)$.
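Before diving into the proof, here is a quick numeric check of the assertion on a made-up $4\times 4$ upper-triangular matrix (all entries my own invention): the eigenvalue $2$ appears three times on the diagonal, and the kernel of $(T-2\cdot 1_V)^{\dim V}$ should be three-dimensional.

```python
import numpy as np

# Sketch check of the assertion on a made-up upper-triangular matrix.
# The eigenvalue 2 appears three times on the diagonal; the claim is that
# dim ker((T - 2)^dim V) is also three.
T = np.array([[2., 1., 0., 5.],
              [0., 2., 1., 0.],
              [0., 0., 2., 3.],
              [0., 0., 0., 7.]])
d = T.shape[0]
lam = 2.0

N = np.linalg.matrix_power(T - lam * np.eye(d), d)
dim_gen_eigenspace = d - np.linalg.matrix_rank(N)
diag_count = np.count_nonzero(np.diag(T) == lam)
print(dim_gen_eigenspace, diag_count)  # both 3
```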
So, let’s fix a vector space $V$ of finite dimension $d$ over an algebraically closed field $\mathbb{F}$. Pick a linear transformation $T:V\rightarrow V$ and a basis $\{e_1,\dots,e_d\}$ with respect to which the matrix of $T$ is upper-triangular. We know such a matrix will exist because we’re working over an algebraically closed base field. I’ll prove the assertion for the eigenvalue $0$ — that the number of copies of $0$ on the diagonal of the matrix is the dimension of the kernel of $T^d$ — since for any other eigenvalue $\lambda$ we just replace $T$ with $T-\lambda 1_V$ and do the exact same thing.
We’ll prove this statement by induction on the dimension $d$ of $V$. The base case is easy: if $d=1$ then the kernel of $T^1=T$ has dimension $1$ if the upper-triangular matrix is $\begin{pmatrix}0\end{pmatrix}$, and has dimension $0$ for anything else.
For the inductive step, we’re interested in the subspace spanned by the basis vectors $e_1$ through $e_{d-1}$. Let’s call this subspace $U$. Now we can write out the matrix of $T$:

$$\begin{pmatrix}\lambda_1&&*&*\\&\ddots&&\vdots\\&&\lambda_{d-1}&*\\0&\cdots&0&\lambda\end{pmatrix}$$
We can see that every vector in $U$ — linear combinations of $e_1$ through $e_{d-1}$ — lands back in $U$. Meanwhile $T(e_d)=v+\lambda e_d$, where the components of $v\in U$ are given in the last column. The fact that $U$ is invariant under the action of $T$ means that we can restrict $T$ to that subspace, getting the transformation $T|_U:U\rightarrow U$. Its matrix with respect to the obvious basis is

$$\begin{pmatrix}\lambda_1&&*\\&\ddots&\\&&\lambda_{d-1}\end{pmatrix}$$
The dimension of $U$ is less than that of $V$, so we can use our inductive hypothesis to conclude that $0$ shows up on the diagonal of this matrix $\dim\ker\left((T|_U)^{d-1}\right)$ times. But we saw yesterday that the sequence of kernels of powers of $T|_U$ has stabilized by this point (since $U$ has dimension $d-1$), so this is also $\dim\ker\left((T|_U)^d\right)$. The last diagonal entry $\lambda$ of the matrix of $T$ is either $0$ or not. If $\lambda\neq 0$, we want to show that

$$\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)$$
On the other hand, if $\lambda=0$, we want to show that

$$\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)+1$$
The inclusion-exclusion principle tells us that

$$\dim\ker\left(T^d\right)-\dim\left(\ker\left(T^d\right)\cap U\right)=\dim\left(\ker\left(T^d\right)+U\right)-\dim(U)$$
Since $\ker\left(T^d\right)+U$ is a subspace of $V$ containing $U$, its dimension has to be less than or equal to $\dim(V)=\dim(U)+1$, so the difference in dimensions on the right can only be either zero or one. And we also see that

$$\ker\left(T^d\right)\cap U=\ker\left((T|_U)^d\right)$$
So if $\lambda\neq 0$ we need to show that every vector in $\ker\left(T^d\right)$ actually lies in $U$, so that the difference in dimensions is zero. On the other hand, if $\lambda=0$ we need to find a vector in $\ker\left(T^d\right)$ that’s not in $U$, so that the difference in dimensions has to be one.
The first case is easier. Any vector in $V$ but not in $U$ can be written uniquely as $u+ce_d$ for some nonzero scalar $c$ and some vector $u\in U$. When we apply the transformation $T$, we get $T(u+ce_d)=\left(T(u)+cv\right)+c\lambda e_d$. Since $\lambda\neq 0$, the coefficient of $e_d$ is again nonzero. No matter how many times we apply $T$, we’ll still have a nonzero vector left. Thus the kernel of $T^d$ is completely contained in $U$, and so we conclude that $\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)$.
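A quick numeric sanity check of this case (the matrix is entirely my own invention): with a nonzero entry in the last diagonal slot, $\ker\left(T^d\right)$ should sit entirely inside $U$, i.e. every kernel vector should have a zero $e_d$-component.

```python
import numpy as np

# A made-up 3x3 instance of the first case: the last diagonal entry is
# 7 != 0, so any vector with a nonzero e_3-component keeps a nonzero
# e_3-component under T, and ker(T^3) sits inside U = span(e_1, e_2).
T = np.array([[0., 1., 5.],
              [0., 0., 2.],
              [0., 0., 7.]])
d = T.shape[0]
Td = np.linalg.matrix_power(T, d)

# Rows of Vt whose singular values are (near-)zero span ker(T^d).
_, s, Vt = np.linalg.svd(Td)
kernel_basis = Vt[s < 1e-9]

print(len(kernel_basis))                      # dim ker(T^d) = 2
print(np.allclose(kernel_basis[:, -1], 0.0))  # True: no e_3-component
```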
In the second case, let’s look for a vector of the form $e_d-u$ with $u\in U$. We want to choose $u$ so that $T^d(e_d-u)=0$. Since now $\lambda=0$, at the first application of $T$ we find $T(e_d-u)=v-T(u)$. Thus

$$T^d(e_d-u)=T^{d-1}\left(v-T(u)\right)=T^{d-1}(v)-T^d(u)$$
But the dimension of $U$ is $d-1$, and so by this point the sequence of images of powers of $T|_U$ has stabilized! That is,

$$\mathrm{im}\left((T|_U)^{d-1}\right)=\mathrm{im}\left((T|_U)^d\right)$$
and so we can find a $u\in U$ so that $T^d(u)=T^{d-1}(v)$. This gives us a vector $e_d-u$ in the kernel of $T^d$ that doesn’t lie in $U$, and the inductive step is complete.
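The construction in this second case can be sketched numerically (the $3\times 3$ matrix and all its entries are my own made-up example): with $0$ in the last diagonal slot, solve $T^d(u)=T^{d-1}(v)$ for $u$ constrained to $U$, and check that $e_d-u$ lies in $\ker\left(T^d\right)$ but not in $U$.

```python
import numpy as np

# Numeric sketch of the inductive construction on a made-up matrix.
# Here lambda = 0 sits in the last diagonal slot, and we look for
# u in U = span(e_1, e_2) with T^d(u) = T^(d-1)(v), where v = T(e_d).
T = np.array([[0., 1., 2.],
              [0., 1., 3.],
              [0., 0., 0.]])
d = T.shape[0]
e_d = np.array([0., 0., 1.])
v = T @ e_d                    # lies in U since the last diagonal entry is 0

# Solve T^d u = T^(d-1) v with u constrained to U (last coordinate zero),
# via least squares on the first d-1 columns of T^d.
Td = np.linalg.matrix_power(T, d)
rhs = np.linalg.matrix_power(T, d - 1) @ v
u_first, *_ = np.linalg.lstsq(Td[:, :d - 1], rhs, rcond=None)
u = np.append(u_first, 0.0)

w = e_d - u                    # the candidate generalized eigenvector
print(np.allclose(Td @ w, 0))  # True: w is in ker(T^d) but not in U
```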
As a final remark, notice that the only place we really used the fact that $\mathbb{F}$ is algebraically closed is when we picked a basis that would make the matrix of $T$ upper-triangular. Everything still goes through as long as we have an upper-triangular matrix, but over a general field a given linear transformation may have no such matrix.