## Eigenvalues, Eigenvectors, and Eigenspaces

Okay, I’m back in Kentucky, and things should get back up to speed. For the near future I’ll be talking more about linear endomorphisms — transformations from a vector space to itself.

The absolute simplest thing that a linear transformation can do to a vector is to kill it off entirely. That is, given a linear transformation $T:V\rightarrow V$, it's possible that $T(v)=0$ for some vectors $v$. This is just what we mean by saying that $v$ is in the kernel $\mathrm{Ker}(T)$. We've seen before that the vectors in the kernel form a subspace of $V$.

What other simple things could $T$ do to the vector $v$? One possibility is that $T$ does nothing to $v$ at all. That is, $T(v)=v$. We can call this vector a “fixed point” of the transformation $T$. Notice that if $v$ is a fixed point, then so is any scalar multiple of $v$. Indeed, $T(cv)=cT(v)=cv$ by the linearity of $T$. Similarly, if $v$ and $w$ are both fixed points, then $T(v+w)=T(v)+T(w)=v+w$. Thus the fixed points of $T$ also form a subspace of $V$.

What else could happen? Well, notice that the two above cases are related. The condition that $v$ is in the kernel can be written $T(v)=0v$. The condition that it’s a fixed point can be written $T(v)=1v$. Each one says that the action of $T$ on $v$ is to multiply it by some fixed scalar. Let’s change that scalar and see what happens.

We’re now considering a linear transformation $T:V\rightarrow V$ and a vector $v\in V$ so that $T(v)=\lambda v$ for some scalar $\lambda$. That is, $T$ hasn’t changed the direction of $v$, but only its length. We call such a vector an “eigenvector” of $T$, and the corresponding scalar $\lambda$ its “eigenvalue”. In contexts where our vector space is a space of functions, it’s common (especially among quantum physicists) to use the term “eigenfunction” instead of eigenvector, and there are even weirder applications of the “eigen-” prefix, but these are almost always just special cases of eigenvectors.
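As a quick numerical sketch of the definition (the matrix here is a hypothetical example chosen purely for illustration, not anything from the post), we can watch a transformation stretch its eigenvectors without turning them:

```python
import numpy as np

# A linear transformation on R^2, represented as a matrix.
# (This particular matrix is just an illustrative choice.)
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# v = (1, 1) is an eigenvector: T stretches it by a factor of 3,
# changing its length but not its direction.
v = np.array([1.0, 1.0])
print(T @ v)   # [3. 3.], which is 3 * v, so the eigenvalue is 3

# w = (1, -1) is another eigenvector, this time with eigenvalue 1:
# it's a fixed point of T.
w = np.array([1.0, -1.0])
print(T @ w)   # [ 1. -1.], which is 1 * w
```

A generic vector, by contrast, gets both stretched and turned; it's exactly the eigenvectors that escape with only a change of length.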

Now it turns out that the eigenvectors associated to any particular eigenvalue $\lambda$ form a subspace of $V$. If we assume that $v$ and $w$ are both eigenvectors with eigenvalue $\lambda$, and that $c$ is another scalar, then we can check

$T(cv)=cT(v)=c\lambda v=\lambda(cv)$

and

$T(v+w)=T(v)+T(w)=\lambda v+\lambda w=\lambda(v+w)$

We call the subspace of eigenvectors with eigenvalue $\lambda$ the $\lambda$-eigenspace of $T$. Notice here that the $0$-eigenspace is the kernel of $T$, and the $1$-eigenspace is the subspace of fixed points. The $\lambda$-eigenspace makes sense for all scalars $\lambda$, but any given eigenspace might be trivial, just as the transformation might have a trivial kernel.
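To make the observation that the kernel is the eigenspace for the eigenvalue zero concrete, here is a small sketch with a made-up singular matrix (again, just an illustrative choice):

```python
import numpy as np

# A singular matrix: its second row is twice its first, so it has a
# nontrivial kernel -- equivalently, a nontrivial 0-eigenspace.
T = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# v = (2, -1) satisfies T(v) = 0 = 0*v, so it's an eigenvector
# with eigenvalue 0, i.e. a kernel vector.
v = np.array([2.0, -1.0])
print(T @ v)   # [0. 0.]

# For most other scalars the eigenspace is trivial. For instance,
# T - 7I is invertible (nonzero determinant), so (T - 7I)v = 0
# forces v = 0: the 7-eigenspace contains only the zero vector.
print(np.linalg.det(T - 7 * np.eye(2)))
```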

Now, all of this is basically definitional. What’s surprising is how much of the behavior of any linear transformation is caught up in the behavior of its eigenvalues. We’ll see this more and more as we go further.

Nice to have one I can ‘just read’ …

Comment by Avery Andrews | January 26, 2009 |

A lucid and concise description! Thanks.

Comment by diotimajsh | January 27, 2009 |

one minor point:

Normally you define an eigenvector to be just as you described but NOT $0$. This is because if you allow $v$ to be the null vector then you can find an eigenvector for every eigenvalue ($A0 = \lambda 0$ for each $A$ and $\lambda$).

And it might be nice to know that you can calculate those eigenvalues by solving $\det(A - xI) = 0$ for $x$ (where $A$ is a matrix representation of $T$ and $I$ is the identity matrix).

Of course this works because for an eigenvalue $\lambda$ with an eigenvector $v$ (not $0$) you can write

$Tv = \lambda v$, so $Tv - \lambda v = (T - \lambda\,\mathrm{id})v = 0$, and as $v$ is not $0$ you see that $T - \lambda\,\mathrm{id}$ is singular, so the determinant has to be $0$.

On the other hand, if $\det(A - \lambda I) = 0$ then $A - \lambda I$ is singular, and so you can find a $v \neq 0$ with $(A - \lambda I)v = 0$, so $Av = \lambda v$.
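The recipe in this comment can be sketched numerically. For a $2\times 2$ matrix, $\det(A - xI) = x^2 - \mathrm{tr}(A)\,x + \det(A)$; the matrix below is a hypothetical example chosen only to illustrate the computation:

```python
import numpy as np

# An illustrative 2x2 matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# For 2x2 matrices, det(A - x*I) = x^2 - trace(A)*x + det(A).
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]

# The eigenvalues are the roots of this characteristic polynomial.
roots = np.roots(coeffs)
print(sorted(roots))

# Cross-check against NumPy's own eigenvalue routine.
print(sorted(np.linalg.eigvals(A)))
```

Both lines print the same two values, as the singularity argument above predicts: a scalar makes $\det(A - \lambda I)$ vanish exactly when it admits a nonzero eigenvector.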

Comment by Ano Nym | January 27, 2009 |

Ano Nym: I said right in my post that any given eigenspace might be trivial, consisting of only the zero vector. It’s a mere convention whether or not zero is considered an eigenvector, since it doesn’t really change anything else down the road.

As for your second point, have a little patience, okay? Just accept the idea that I might know what I’m doing.

Comment by John Armstrong | January 27, 2009 |

[...] Yesterday, we defined eigenvalues, eigenvectors, and eigenspaces. But we didn’t talk about how to actually find them (though one commenter decided to jump the [...]

Pingback by Determining Eigenvalues « The Unapologetic Mathematician | January 27, 2009 |

[...] defined a function — — whose zeroes give exactly those field elements so that the -eigenspace of is nontrivial. Actually, we’ll switch it up a bit and use the function , which has the [...]

Pingback by The Characteristic Polynomial « The Unapologetic Mathematician | January 28, 2009 |

[...] the algebraically-closed ones). Let be a linear endomorphism on a vector space , and for , let be eigenvectors with corresponding eigenvalues . Further, assume that for . I claim that the are linearly [...]

Pingback by Distinct Eigenvalues « The Unapologetic Mathematician | February 4, 2009 |

[...] of all, since an eigenspace generalizes a kernel, let’s consider a situation where we repeat the eigenvalue [...]

Pingback by Generalized Eigenvectors « The Unapologetic Mathematician | February 16, 2009 |

[...] Our definition of a generalized eigenvector looks a lot like the one for an eigenvector. But finding them may not be as straightforward as our method for finding eigenvectors. In [...]

Pingback by Determining Generalized Eigenvalues « The Unapologetic Mathematician | February 17, 2009 |

[...] multiplicity of an eigenvalue of a linear transformation as the number of independent associated eigenvectors. That is, as the dimension of the kernel of . Unfortunately, we saw that when we have repeated [...]

Pingback by The Multiplicity of an Eigenvalue « The Unapologetic Mathematician | February 19, 2009 |

[...] that if we take a vector then its image is again in . This generalizes the nice situation about eigenspaces: that we have some control (if not as complete) over the image of a [...]

Pingback by Kernels of Polynomials of Transformations « The Unapologetic Mathematician | February 24, 2009 |

[...] we can calculate the characteristic polynomial of , whose roots are the eigenvalues of . For each eigenvalue , we can define the generalized eigenspace as the kernel , since if some [...]

Pingback by Jordan Normal Form « The Unapologetic Mathematician | March 4, 2009 |

[...] of an Eigenpair An eigenvalue of a linear transformation is the same thing as a root of the characteristic polynomial of . That [...]

Pingback by Eigenvectors of an Eigenpair « The Unapologetic Mathematician | April 3, 2009 |

[...] first let’s consider the eigenvalues of . If is an eigenvector we have for some scalar . Then we can [...]

Pingback by The Determinant of a Positive-Definite Transformation « The Unapologetic Mathematician | August 3, 2009 |

[...] Eigenvalues and Eigenvectors of Normal Transformations Let’s say we have a normal transformation . It turns out we can say some interesting things about its eigenvalues and eigenvectors. [...]

Pingback by Eigenvalues and Eigenvectors of Normal Transformations « The Unapologetic Mathematician | August 6, 2009 |

[...] Okay, today I want to nail down a lemma about the invariant subspaces (and, in particular, eigenspaces) of self-adjoint transformations. Specifically, the fact that the orthogonal complement of an [...]

Pingback by Invariant Subspaces of Self-Adjoint Transformations « The Unapologetic Mathematician | August 11, 2009 |

[...] transformation was the identity on the remaining -dimensional space. This tells us that all of our eigenvalues are , and the characteristic polynomial is , where . We can evaluate this on the transformation to [...]

Pingback by A Lemma on Reflections « The Unapologetic Mathematician | January 19, 2010 |

[...] root , either or . Indeed, since , we must have . As a reflection, breaks into a one-dimensional eigenspace with eigenvalue and another complementary eigenspace with eigenvalue . If contains the [...]

Pingback by Properties of Irreducible Root Systems II « The Unapologetic Mathematician | February 11, 2010 |

[...] is algebraically closed, we must be able to find an eigenvalue . Letting be this eigenvalue, we see that commutes with for all , and so Schur’s lemma [...]

Pingback by Endomorphism and Commutant Algebras « The Unapologetic Mathematician | October 1, 2010 |