# The Unapologetic Mathematician

## Determining Eigenvalues

Yesterday, we defined eigenvalues, eigenvectors, and eigenspaces. But we didn’t talk about how to actually find them (though one commenter decided to jump the gun a bit). It turns out that determining the eigenspace for any given eigenvalue is the same sort of problem as determining the kernel.

Let’s say we’ve got a linear endomorphism $T:V\rightarrow V$ and a scalar value $\lambda$. We want to determine the subspace of $V$ consisting of those vectors $v$ satisfying the equation

$T(v)=\lambda v$

First, let’s adjust the right hand side. Instead of thinking of the scalar product of $\lambda$ and $v$, we can write it as the action of the transformation $\lambda1_V$, where $1_V$ is the identity transformation on $V$. That is, we have the equation

$T(v)=\left[\lambda1_V\right](v)$

Now we can do some juggling: subtract the right-hand side from both sides, and use linearity to combine the two linear transformations being evaluated at the same vector $v$:

$\left[T-\lambda1_V\right](v)=0$

And we find that the $\lambda$-eigenspace of $T$ is the kernel $\mathrm{Ker}(T-\lambda1_V)$.
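As a quick numerical sketch of this idea (mine, not from the post): if we pick a basis and treat the endomorphism as a concrete matrix, the $\lambda$-eigenspace is just the null space of $A-\lambda I$. The matrix $A$ and the candidate value $\lambda=2$ below are made-up examples.

```python
import numpy as np
from scipy.linalg import null_space

# A concrete endomorphism of R^2, written as a matrix in the standard basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam = 2.0  # a candidate scalar

# The lambda-eigenspace of A is Ker(A - lambda*I).
M = A - lam * np.eye(2)
basis = null_space(M)  # columns form an orthonormal basis of the kernel

print(basis.shape[1])  # 1 -- the kernel is one-dimensional, so lam = 2 is an eigenvalue
```

Here `null_space` does the actual work of solving the homogeneous system; any column it returns is an eigenvector for $\lambda=2$.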

Now, as I stated yesterday, for most scalars $\lambda$ these eigenspaces will be trivial, just as the kernel may be trivial. The interesting stuff happens when $\mathrm{Ker}(T-\lambda1_V)$ is nontrivial. In this case, we’ll call $\lambda$ an eigenvalue of the transformation $T$ (thus the eigenvalues of a transformation are exactly those scalars which correspond to nonzero eigenvectors). So how can we tell whether or not a kernel is trivial? Well, we know that the kernel of an endomorphism of a finite-dimensional space is trivial if and only if the endomorphism is invertible. And the determinant provides a test for invertibility!

So we can take the determinant $\det(T-\lambda1_V)$ and consider it as a function of $\lambda$. If we get the value ${0}$ for a particular $\lambda$, then $T-\lambda1_V$ fails to be invertible, its kernel — the $\lambda$-eigenspace of $T$ — is nontrivial, and $\lambda$ is an eigenvalue of $T$. Then we can use other tools to actually determine the eigenspace if we need to.
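The determinant test is easy to play with numerically. A small sketch (again my own example, using the same hypothetical matrix as before): we evaluate $\det(A-\lambda I)$ at a few values of $\lambda$ and see which ones give zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# det(A - lam*I) as a function of lam; its zeroes are exactly the eigenvalues of A.
def char_det(lam):
    return np.linalg.det(A - lam * np.eye(2))

print(char_det(2.0))  # ~0: lam = 2 is an eigenvalue
print(char_det(3.0))  # ~0: lam = 3 is an eigenvalue
print(char_det(1.0))  # nonzero: lam = 1 is not an eigenvalue
```

Since $A$ here is upper triangular, this just confirms that the eigenvalues are the diagonal entries; the point is that `char_det` is an honest function of $\lambda$, which is what the next post turns into the characteristic polynomial.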

January 27, 2009 - Posted by | Algebra, Linear Algebra

1. […] Given a linear endomorphism $T$ on a vector space of finite dimension, we’ve defined a function — $\det(T-\lambda1_V)$ — whose zeroes give exactly those field elements $\lambda$ so that the $\lambda$-eigenspace of $T$ is […]

Pingback by The Characteristic Polynomial « The Unapologetic Mathematician | January 28, 2009 | Reply

2. […] looks a lot like the one for an eigenvector. But finding them may not be as straightforward as our method for finding eigenvectors. In particular, we’re asking that the vector be in the kernel of not one transformation, but […]

Pingback by Determining Generalized Eigenvalues « The Unapologetic Mathematician | February 17, 2009 | Reply

3. Not to accuse you of being “trashy” or in “bad taste,” but some of the machinery you’ve already built up lets you talk about eigenvalues without talking about determinants following Axler’s philosophy: any linear transformation determines a finite-dimensional representation of $\mathbb{F}[X]$, and the kernel of the map is a principal ideal generated by the minimal polynomial of $T$ from which its eigenvalues can be read off. After proving some properties of generalized eigenvectors, it is then possible to define the characteristic polynomial and prove it has the usual properties.

(Don’t get me wrong; I love determinants. But I think the anti-determinant perspective is also valuable, not least because in infinite-dimensional spaces we no longer have a determinant and must find another way to compute eigenvalues.)

Comment by Qiaochu Yuan | June 7, 2009 | Reply

4. Yes, but I specifically dislike the route by which Axler’s approach ultimately leads to determinants. And his problem with determinants ultimately stems from their matrix representation. That’s why I went off on this whole side trip to properly define determinants independently of any choice of basis; the representation theory leads you through. Then, doing eigenvalues with the determinant as a tool, you can still point out the geometric significance as you go.

Comment by John Armstrong | June 7, 2009 | Reply