## The Multiplicity of an Eigenvalue

We would like to define the multiplicity of an eigenvalue $\lambda$ of a linear transformation $T$ as the number of independent associated eigenvectors. That is, as the dimension of the kernel of $T-\lambda I_V$. Unfortunately, we saw that when we have repeated eigenvalues, sometimes this doesn't quite capture the right notion. In that example, the $1$-eigenspace has dimension $1$, but it seems from the upper-triangular matrix that the eigenvalue $1$ should have multiplicity $2$.
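To see the mismatch numerically, here is a small sketch (assuming the running example is the standard $2\times2$ upper-triangular matrix with a repeated $1$ on the diagonal):

```python
import numpy as np

# Illustrative example (my own choice): an upper-triangular matrix with the
# eigenvalue 1 repeated twice on the diagonal.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Dimension of the 1-eigenspace: dim ker(A - I) = n - rank(A - I).
eigenspace_dim = A.shape[0] - np.linalg.matrix_rank(A - np.eye(2))
print(eigenspace_dim)  # 1, even though 1 appears twice on the diagonal
```

So the naive definition counts only one independent eigenvector here.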

Indeed, we saw that if the entries along the diagonal of an upper-triangular matrix are $\lambda_1,\dots,\lambda_n$, then the characteristic polynomial is

$$(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_n)$$

Then we can use our definition of multiplicity for roots of polynomials to see that a given value of $\lambda$ has multiplicity equal to the number of times it shows up on the diagonal of an upper-triangular matrix.
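A quick numerical check of this fact, on a small upper-triangular matrix chosen here purely for illustration:

```python
import numpy as np

# A hypothetical upper-triangular matrix: diagonal entries 1, 1, 5, so the
# eigenvalue 1 should have multiplicity 2 and the eigenvalue 5 multiplicity 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 5.0]])

# The eigenvalues, with repetition, are exactly the diagonal entries.
# (Rounding guards against floating-point fuzz in the eigensolver.)
eigs = sorted(round(x.real) for x in np.linalg.eigvals(A))
print(eigs)  # [1, 1, 5]
```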

It turns out that generalized eigenspaces *do* capture this notion, and we have a way of calculating them to boot! That is, I'm asserting that the multiplicity of an eigenvalue $\lambda$ is both the number of times that $\lambda$ shows up on the diagonal of any upper-triangular matrix for $T$, and the number of independent generalized eigenvectors with eigenvalue $\lambda$ — which is $\dim\ker\left((T-\lambda I_V)^{\dim V}\right)$.
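We can check the assertion on the same sort of defective example: the plain eigenspace is too small, but the kernel of the $\dim V$-th power of $T-\lambda I_V$ has the full expected dimension. (The matrix below is my own illustrative choice, not one from the text.)

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # eigenvalue 1 appears twice on the diagonal
d = A.shape[0]

N = A - np.eye(d)                    # T - 1·I
P = np.linalg.matrix_power(N, d)     # (T - 1·I)^(dim V)
gen_dim = d - np.linalg.matrix_rank(P)
print(gen_dim)  # 2: the generalized eigenspace sees the full multiplicity
```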

So, let's fix a vector space $V$ of finite dimension $d$ over an algebraically closed field $\mathbb{F}$. Pick a linear transformation $T:V\rightarrow V$ and a basis with respect to which the matrix of $T$ is upper-triangular. We know such a matrix will exist because we're working over an algebraically closed base field. I'll prove the assertion for the eigenvalue $0$ — that the number of copies of $0$ on the diagonal of the matrix is the dimension of the kernel of $T^d$ — since for other eigenvalues we just replace $T$ with $T-\lambda I_V$ and do the exact same thing.

We'll prove this statement by induction on the dimension $d$ of $V$. The base case is easy: if $d=1$ then the kernel of $T$ has dimension $1$ if the upper-triangular matrix is $\begin{pmatrix}0\end{pmatrix}$, and has dimension $0$ for anything else.

For the inductive step, we're interested in the subspace spanned by the basis vectors $e_1$ through $e_{d-1}$. Let's call this subspace $U$. Now we can write out the matrix of $T$:

$$\begin{pmatrix}
\lambda_1 & & * & a_1\\
 & \ddots & & \vdots\\
0 & & \lambda_{d-1} & a_{d-1}\\
0 & \cdots & 0 & \lambda_d
\end{pmatrix}$$

We can see that every vector in $U$ — linear combinations of $e_1$ through $e_{d-1}$ — lands back in $U$. Meanwhile $T(e_d)=u'+\lambda_d e_d$, where the components of $u'=a_1e_1+\dots+a_{d-1}e_{d-1}$ are given in the last column. The fact that $U$ is invariant under the action of $T$ means that we can restrict $T$ to that subspace, getting the transformation $T|_U:U\rightarrow U$. Its matrix with respect to the obvious basis is

$$\begin{pmatrix}
\lambda_1 & & *\\
 & \ddots & \\
0 & & \lambda_{d-1}
\end{pmatrix}$$
The dimension of $U$ is less than that of $V$, so we can use our inductive hypothesis to conclude that $0$ shows up on the diagonal of this matrix $\dim\ker\left((T|_U)^{d-1}\right)$ times. But we saw yesterday that the sequence of kernels of powers of $T|_U$ has stabilized by this point (since $U$ has dimension $d-1$), so this is also $\dim\ker\left((T|_U)^d\right)$. The last diagonal entry $\lambda_d$ of the matrix of $T$ is either $0$ or not. If $\lambda_d\neq0$, we want to show that

$$\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)$$

On the other hand, if $\lambda_d=0$, we want to show that

$$\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)+1$$
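The stabilization fact invoked here, that the kernels of powers stop growing once the exponent reaches the dimension of the space, can be watched directly on a small example (a nilpotent shift matrix of my own choosing):

```python
import numpy as np

# Nilpotent shift on a 3-dimensional space: kernels of its powers grow by one
# dimension at each step and then freeze once the exponent hits dim V = 3.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

dims = [int(3 - np.linalg.matrix_rank(np.linalg.matrix_power(N, k)))
        for k in range(1, 5)]
print(dims)  # [1, 2, 3, 3]: no growth past the third power
```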

The inclusion-exclusion principle tells us that

$$\dim\ker\left(T^d\right)-\dim\left(\ker\left(T^d\right)\cap U\right)=\dim\left(\ker\left(T^d\right)+U\right)-\dim(U)$$

Since the dimension of the subspace $\ker\left(T^d\right)+U\subseteq V$ has to be less than or equal to $d$, and $\dim(U)=d-1$, the difference in dimensions on the right can only be either zero or one. And we also see that

$$\dim\left(\ker\left(T^d\right)\cap U\right)=\dim\ker\left((T|_U)^d\right)$$

since a vector of $U$ is killed by $T^d$ exactly when it's killed by $(T|_U)^d$. So if $\lambda_d\neq0$ we need to show that every vector in $\ker\left(T^d\right)$ actually lies in $U$, so the difference in dimensions is zero. On the other hand, if $\lambda_d=0$ we need to find a vector in $\ker\left(T^d\right)$ that's *not* in $U$, so the difference in dimensions has to be one.

The first case is easier. Any vector in $V$ but not in $U$ can be written uniquely as $ce_d+u$ for some nonzero scalar $c$ and some vector $u\in U$. When we apply the transformation $T$, we get $T(ce_d+u)=c\lambda_d e_d+\left(cu'+T|_U(u)\right)$. Since $\lambda_d\neq0$, the coefficient of $e_d$ is again nonzero. No matter how many times we apply $T$, we'll still have a nonzero vector left. Thus the kernel of $T^d$ is completely contained in $U$, and so we conclude $\dim\ker\left(T^d\right)=\dim\ker\left((T|_U)^d\right)$.
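Here is a small numerical illustration of this case, using a hypothetical upper-triangular matrix whose last diagonal entry is nonzero; the $e_d$ coefficient survives every application of $T$:

```python
import numpy as np

# Hypothetical 3x3 upper-triangular example with last diagonal entry 2 != 0.
T = np.array([[0.0, 1.0, 5.0],
              [0.0, 0.0, 7.0],
              [0.0, 0.0, 2.0]])

v = np.array([1.0, 1.0, 1.0])  # coefficient c = 1 on e_3, so v is not in U
for _ in range(3):             # apply T a total of d = 3 times
    v = T @ v

# The e_3 coefficient just gets multiplied by 2 each time: it ends up 2^3 = 8,
# so v cannot lie in ker(T^3).
print(v[-1])  # 8.0
```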

In the second case, let's look for a vector of the form $e_d+u$ with $u\in U$. We want to choose $u$ so that $T^d(e_d+u)=0$. At the first application of $T$ we find $T(e_d+u)=u'+T|_U(u)$, since $\lambda_d=0$. Thus

$$T^d(e_d+u)=(T|_U)^{d-1}(u')+(T|_U)^d(u)$$

But the dimension of $U$ is $d-1$, and so by this point the sequence of images of powers of $T|_U$ has stabilized! That is,

$$\mathrm{Im}\left((T|_U)^{d-1}\right)=\mathrm{Im}\left((T|_U)^d\right)$$

and so we *can* find a $u\in U$ so that $(T|_U)^d(u)=-(T|_U)^{d-1}(u')$. This gives us a vector $e_d+u$ in the kernel of $T^d$ that doesn't lie in $U$, and the inductive step is complete.
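The construction in this second case can be sketched numerically. The matrix below is a hypothetical example with $\lambda_d=0$; we solve for $u$ with a least-squares call, which succeeds exactly because the right-hand side lies in the stabilized image:

```python
import numpy as np

# Hypothetical 3x3 upper-triangular example whose last diagonal entry is 0.
T = np.array([[0.0, 1.0, 3.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 0.0]])
d = 3

TU = T[:2, :2]        # the restriction T|_U (upper-left block)
u_prime = T[:2, 2]    # u' = T(e_3), read off the last column; it lies in U

# Solve (T|_U)^d u = -(T|_U)^(d-1) u'.  A solution exists because the images
# of powers of T|_U have stabilized by exponent d - 1.
rhs = -np.linalg.matrix_power(TU, d - 1) @ u_prime
u, *_ = np.linalg.lstsq(np.linalg.matrix_power(TU, d), rhs, rcond=None)

v = np.concatenate([u, [1.0]])  # the vector e_3 + u, which is not in U
print(np.allclose(np.linalg.matrix_power(T, d) @ v, 0))  # True: v is in ker(T^3)
```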

As a final remark, notice that the only place we really used the fact that $\mathbb{F}$ is algebraically closed is when we picked a basis that would make the matrix of $T$ upper-triangular. Everything still goes through as long as we have an upper-triangular matrix, but over a general field a given linear transformation may have no such matrix.

hey john. it’s conor from back at yale. i just wanted to say thanks for these great posts. they are really helpful/fun…./nerdy. keep it up.

Comment by conor | February 19, 2009 |

Glad to hear they’re helping someone. I may be up for Zuckerman’s 60th. It should be easier to get time off from a real job than an academic one.

Comment by John Armstrong | February 19, 2009 |

[…] There has also been an interesting post about “generalized eigenvectors”, which I have been meaning to talk about. it’s on another blog: […]

Pingback by Happenings Feb 20 « Rip’s Applied Mathematics Blog | February 21, 2009 |

[…] powers . We’ve seen that is sufficiently large a power to take to check if is nilpotent. We’ve also seen that has an upper-triangular matrix with all zeroes along the diagonal — the single […]

Pingback by Nilpotent Transformations I « The Unapologetic Mathematician | February 26, 2009 |

[…] each one is invariant under . The dimension of the generalized eigenspace associated to is the multiplicity of , which is the number of times shows up on the diagonal of an upper-triangular matrix for . […]

Pingback by Jordan Normal Form « The Unapologetic Mathematician | March 4, 2009 |

[…] statement is parallel to the one about multiplicities of eigenvalues over algebraically closed fields. And we’ll use a similar proof. First, let’s define the polynomial to be if […]

Pingback by The Multiplicity of an Eigenpair « The Unapologetic Mathematician | April 8, 2009 |

I think in your images $Im(T^{d-1})$ and $Im(T^d)$ above, you really want $Im((T|_U)^{d-1})$ and $Im((T|_U)^d)$.

Comment by David | April 10, 2009 |

Good point, thanks.

Comment by John Armstrong | April 10, 2009 |

Hi, I’m from this post:

https://math.stackexchange.com/questions/2651744/axlers-ladr-algebraic-multiplicity-of-an-eigenvalue-is-the-number-of-times-it#

This provides an answer to Exercise 11 of Chapter 8.B in Sheldon Axler’s Linear Algebra Done Right 3rd edition. Thank you very much!

Comment by Sky | March 16, 2021 |

I’m glad that you found it helpful. To follow up on the discussion from MSX, Axler takes a very specific philosophical stance: avoid picking a basis until he absolutely has to. I generally agree, since the choice of a basis is artificial, and bogs the subject down in a lot of extra proofs to show that whatever we’re defining is independent of the choice.

The issue is that while there is a way to define determinants without a basis, it ends up leading through some even hairier theoretical wilds. When I learned out of Axler as an undergraduate, my professor supplemented Axler with the relevant section of Lang (his own advisor, as it happens) in order to manage it. It seems that the other MSX commenters are more used to an approach that picks a basis, defines the determinant, shows its independence, and moves on, which allows its use as a tool to prove this fact.

Comment by John Armstrong | March 16, 2021 |