# The Unapologetic Mathematician

## The Multiplicity of an Eigenvalue

We would like to define the multiplicity of an eigenvalue $\lambda$ of a linear transformation $T$ as the number of independent associated eigenvectors. That is, as the dimension of the kernel of $T-\lambda1_V$. Unfortunately, we saw that when we have repeated eigenvalues, sometimes this doesn’t quite capture the right notion. In that example, the ${1}$-eigenspace has dimension ${1}$, but it seems from the upper-triangular matrix that the eigenvalue ${1}$ should have multiplicity ${2}$.
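The earlier example isn't reproduced here, but a minimal stand-in for it (an assumed example, not necessarily the one from that post) can be checked numerically:

```python
import numpy as np

# Assumed stand-in for the earlier example: an upper-triangular matrix
# with the eigenvalue 1 appearing twice on the diagonal.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

# dim Ker(A) = (number of columns) - rank(A)
eigenspace_dim = 2 - np.linalg.matrix_rank(T - I)
print(eigenspace_dim)  # 1: only one independent eigenvector, despite 1 appearing twice
```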

Indeed, we saw that if the entries along the diagonal of an upper-triangular matrix are $\lambda_1,\dots,\lambda_d$, then the characteristic polynomial is

$\displaystyle\prod\limits_{k=1}^d(\lambda-\lambda_k)$

Then we can use our definition of multiplicity for roots of polynomials to see that a given value of $\lambda$ has multiplicity equal to the number of times it shows up on the diagonal of an upper-triangular matrix.
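As a quick sanity check of this, with an assumed example matrix (NumPy's `np.poly` computes the characteristic-polynomial coefficients of a matrix):

```python
import numpy as np

# Assumed example: upper-triangular with diagonal entries 1, 1, 4.
T = np.array([[1.0, 5.0, -2.0],
              [0.0, 1.0,  3.0],
              [0.0, 0.0,  4.0]])

# np.poly gives the coefficients of det(lambda*I - T); its roots
# should be the diagonal entries, counted with multiplicity.
roots = np.sort(np.roots(np.poly(T)).real)
print(roots)  # approximately [1, 1, 4]
```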

It turns out that generalized eigenspaces do capture this notion, and we have a way of calculating them to boot! That is, I’m asserting that the multiplicity of an eigenvalue $\lambda$ is both the number of times that $\lambda$ shows up on the diagonal of any upper-triangular matrix for $T$, and the number of independent generalized eigenvectors with eigenvalue $\lambda$ — which is $\dim\mathrm{Ker}\left((T-\lambda1_V)^{\dim(V)}\right)$.
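Here's a small numerical check of this assertion for the eigenvalue $0$, with an assumed $4\times4$ upper-triangular matrix (kernel dimensions computed via NumPy's `matrix_rank`):

```python
import numpy as np

# Assumed example: 0 appears twice on the diagonal, but the plain
# 0-eigenspace is only one-dimensional.
T = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0, 1.0],
              [0.0, 0.0, 0.0, 5.0]])
d = T.shape[0]

def dim_ker(A):
    """dim Ker(A) = number of columns minus rank."""
    return A.shape[1] - np.linalg.matrix_rank(A)

ker1 = dim_ker(T)                             # ordinary eigenvectors
kerd = dim_ker(np.linalg.matrix_power(T, d))  # generalized eigenvectors
print(ker1, kerd)  # 1 2 -- the generalized count matches the diagonal count
```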

So, let’s fix a vector space $V$ of finite dimension $d$ over an algebraically closed field $\mathbb{F}$. Pick a linear transformation $T:V\rightarrow V$ and a basis $\left\{e_i\right\}$ with respect to which the matrix of $T$ is upper-triangular. We know such a basis will exist because we’re working over an algebraically closed base field. I’ll prove the assertion for the eigenvalue $\lambda=0$ — that the number of copies of ${0}$ on the diagonal of the matrix is the dimension of the kernel of $T^d$ — since for other eigenvalues we just replace $T$ with $T-\lambda1_V$ and do the exact same thing.

We’ll prove this statement by induction on the dimension of $V$. The base case is easy: if $\dim(V)=1$ then the kernel of $T$ has dimension ${1}$ if the upper triangular matrix is $\begin{pmatrix}{0}\end{pmatrix}$, and has dimension ${0}$ for anything else.

For the inductive step, we’re interested in the subspace spanned by the basis vectors $e_1$ through $e_{d-1}$. Let’s call this subspace $U$. Now we can write out the matrix of $T$:

$\displaystyle\begin{pmatrix}\lambda_1&&*&t_1^d\\&\ddots&&\vdots\\&&\lambda_{d-1}&t_{d-1}^d\\{0}&&&\lambda_d\end{pmatrix}$

We can see that every vector in $U$ — linear combinations of $e_1$ through $e_{d-1}$ — lands back in $U$. Meanwhile $T(e_d)=\lambda_de_d+\bar{u}$, where $\bar{u}=t_1^de_1+\dots+t_{d-1}^de_{d-1}\in U$ is the vector whose components are given in the last column. The fact that $U$ is invariant under the action of $T$ means that we can restrict $T$ to that subspace, getting the transformation $T|_U:U\rightarrow U$. Its matrix with respect to the basis $\left\{e_1,\dots,e_{d-1}\right\}$ is

$\displaystyle\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_{d-1}\end{pmatrix}$

The dimension of $U$ is less than that of $V$, so we can use our inductive hypothesis to conclude that ${0}$ shows up on the diagonal of this matrix $\dim\left(\mathrm{Ker}\left((T|_U)^{d-1}\right)\right)$ times. But we saw yesterday that the sequence of kernels of powers of $T|_U$ has stabilized by this point (since $U$ has dimension $d-1$), so this is also $\dim\left(\mathrm{Ker}\left((T|_U)^d\right)\right)$. The last diagonal entry $\lambda_d$ is either ${0}$ or not. If $\lambda_d\neq0$, we want to show that
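The stabilization phenomenon invoked here is easy to watch numerically; a sketch with an assumed nilpotent example:

```python
import numpy as np

# Assumed example: a 3x3 nilpotent matrix, so the kernels of its
# powers grow strictly until they fill the whole space.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
d = T.shape[0]

# dim Ker(T^k) for k = 1, ..., d+1
dims = [int(d - np.linalg.matrix_rank(np.linalg.matrix_power(T, k)))
        for k in range(1, d + 2)]
print(dims)  # [1, 2, 3, 3]: stabilized once the power reaches dim V
```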

$\displaystyle\dim\left(\mathrm{Ker}(T^d)\right)=\dim\left(\mathrm{Ker}\left((T|_U)^d\right)\right)$

On the other hand, if $\lambda_d=0$, we want to show that

$\displaystyle\dim\left(\mathrm{Ker}(T^d)\right)=\dim\left(\mathrm{Ker}\left((T|_U)^d\right)\right)+1$

The inclusion-exclusion principle tells us that

$\displaystyle\begin{aligned}\dim\left(U+\mathrm{Ker}(T^d)\right)&=\dim(U)+\dim\left(\mathrm{Ker}(T^d)\right)-\dim\left(U\cap\mathrm{Ker}(T^d)\right)\\&=(d-1)+\dim\left(\mathrm{Ker}(T^d)\right)-\dim\left(\mathrm{Ker}\left((T|_U)^d\right)\right)\end{aligned}$

Since the dimension of this subspace of $V$ can be at most $d$, the difference of dimensions on the right can only be zero or one. And we also see that

$\displaystyle\mathrm{Ker}\left((T|_U)^d\right)=U\cap\mathrm{Ker}(T^d)\subseteq\mathrm{Ker}(T^d)$

So if $\lambda_d\neq0$ we need to show that every vector in $\mathrm{Ker}(T^d)$ actually lies in $U$, so that the difference in dimensions is zero. On the other hand, if $\lambda_d=0$ we need to find a vector in $\mathrm{Ker}(T^d)$ that’s not in $U$, so that the difference in dimensions has to be one.

The first case is easier. Any vector in $V$ but not in $U$ can be written uniquely as $u+xe_d$ for some nonzero scalar $x\in\mathbb{F}$ and some vector $u\in U$. When we apply the transformation $T$, we get $T(u)+x\bar{u}+x\lambda_de_d$. Since $\lambda_d\neq0$, the coefficient of $e_d$ is again nonzero, and it stays nonzero no matter how many times we apply $T$. So no vector outside $U$ can be killed by $T^d$: the kernel of $T^d$ is completely contained in $U$, and we conclude $\mathrm{Ker}(T^d)=\mathrm{Ker}\left((T|_U)^d\right)$.
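A numerical spot-check of this case, with an assumed $3\times3$ example whose last diagonal entry is nonzero:

```python
import numpy as np

# Assumed example: diagonal entries 0, 0, 7, so lambda_d = 7 != 0.
T = np.array([[0.0, 1.0, 4.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 7.0]])
d = T.shape[0]
TU = T[:d - 1, :d - 1]  # restriction of T to U = span(e_1, ..., e_{d-1})

def dim_ker(A):
    return A.shape[1] - np.linalg.matrix_rank(A)

kd = dim_ker(np.linalg.matrix_power(T, d))
ku = dim_ker(np.linalg.matrix_power(TU, d))
print(kd, ku)  # 2 2 -- Ker(T^d) and Ker((T|_U)^d) have the same dimension
```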

In the second case, let’s look for a vector of the form $u-e_d$. We want to choose $u\in U$ so that $T^d(u)=T^d(e_d)$. At the first application of $T$ we find $T(e_d)=\bar{u}\in U$. Thus

$\displaystyle T^d(e_d)=T^{d-1}(\bar{u})\in\mathrm{Im}\left(\left(T|_U\right)^{d-1}\right)$

But the dimension of $U$ is $d-1$, and so by this point the sequence of images of powers of $T|_U$ has stabilized! That is,

$\displaystyle T^d(e_d)\in\mathrm{Im}\left(\left(T|_U\right)^{d-1}\right)\subseteq\mathrm{Im}\left(\left(T|_U\right)^d\right)$

and so we can find a $u$ so that $T^d(u)=T^d(e_d)$. This gives us a vector $u-e_d$ in the kernel of $T^d$ that doesn’t lie in $U$, and the inductive step is complete.
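The construction in this case can also be carried out concretely; a sketch with an assumed example where $\lambda_d=0$, using least squares to pick one solution $u$:

```python
import numpy as np

# Assumed example: diagonal entries 2, 0, 0, so lambda_d = 0.
T = np.array([[2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
d = T.shape[0]
Td = np.linalg.matrix_power(T, d)
e_d = np.array([0.0, 0.0, 1.0])

# Solve T^d(u) = T^d(e_d) for u in U; the argument above guarantees a
# solution exists, and lstsq finds one representative in coordinates.
u_coords, *_ = np.linalg.lstsq(Td[:, :d - 1], Td @ e_d, rcond=None)
v = np.concatenate([u_coords, [0.0]]) - e_d  # the vector u - e_d, not in U

in_kernel = bool(np.allclose(Td @ v, 0.0))
print(in_kernel)  # True: u - e_d lies in Ker(T^d) but outside U
```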

As a final remark, notice that the only place we really used the fact that $\mathbb{F}$ is algebraically closed is when we picked a basis that would make $T$ upper-triangular. Everything still goes through as long as we have an upper-triangular matrix, but a given linear transformation may have no such matrix.

February 19, 2009 - Posted by | Algebra, Linear Algebra

1. hey john. it’s conor from back at yale. i just wanted to say thanks for these great posts. they are really helpful/fun…./nerdy. keep it up.

Comment by conor | February 19, 2009 | Reply

2. Glad to hear they’re helping someone. I may be up for Zuckerman’s 60th. It should be easier to get time off from a real job than an academic one.

Comment by John Armstrong | February 19, 2009 | Reply

7. I think in your images $Im(T^{d-1})$ and $Im(T^d)$ above, you really want $Im((T|_U)^{d-1})$ and $Im((T|_U)^d)$.

Comment by David | April 10, 2009 | Reply

8. Good point, thanks.

Comment by John Armstrong | April 10, 2009 | Reply