The Unapologetic Mathematician

Mathematics for the interested outsider

Eigenvalues, Eigenvectors, and Eigenspaces

Okay, I’m back in Kentucky, and things should get back up to speed. For the near future I’ll be talking more about linear endomorphisms — transformations from a vector space to itself.

The absolute simplest thing that a linear transformation can do to a vector is to kill it off entirely. That is, given a linear transformation T:V\rightarrow V, it’s possible that T(v)=0 for some vectors v\in V. This is just what we mean by saying that v is in the kernel \mathrm{Ker}(T). We’ve seen before that the vectors in the kernel form a subspace of V.
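
For a concrete (if tiny) picture, here is a numerical sketch in numpy; the particular matrix and vector are my own illustration, not anything from the post itself:

```python
import numpy as np

# A transformation (written as a matrix) that kills a particular vector:
# T sends (1, 1) to the zero vector, so (1, 1) lies in Ker(T).
T = np.array([[1.0, -1.0],
              [1.0, -1.0]])
v = np.array([1.0, 1.0])

print(T @ v)  # [0. 0.] -- v is in the kernel of T
```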

What other simple things could T do to the vector v? One possibility is that T does nothing to v at all. That is, T(v)=v. We call such a vector v a “fixed point” of the transformation T. Notice that if v is a fixed point, then so is any scalar multiple of v. Indeed, T(cv)=cT(v)=cv by the linearity of T. Similarly, if v and w are both fixed points, then T(v+w)=T(v)+T(w)=v+w. Thus the fixed points of T also form a subspace of V.
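
A projection makes this concrete. The following sketch (again my own example, in numpy) checks the two closure properties just described:

```python
import numpy as np

# Projection onto the x-axis: it fixes every vector of the form (x, 0).
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])
v = np.array([3.0, 0.0])
w = np.array([-2.0, 0.0])
c = 5.0

print(np.allclose(T @ v, v))            # True: v is a fixed point
print(np.allclose(T @ (c * v), c * v))  # True: scalar multiples of fixed points are fixed
print(np.allclose(T @ (v + w), v + w))  # True: sums of fixed points are fixed
```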

What else could happen? Well, notice that the two above cases are related. The condition that v is in the kernel can be written T(v)=0v. The condition that it’s a fixed point can be written T(v)=1v. Each one says that the action of T on v is to multiply it by some fixed scalar. Let’s change that scalar and see what happens.

We’re now considering a linear transformation T and a vector v so that T(v)=\lambda v for some scalar \lambda. That is, T hasn’t changed the direction of v, but only its length. We call such a vector an “eigenvector” of T, and the corresponding scalar its “eigenvalue”. In contexts where our vector space is a space of functions, it’s common (especially among quantum physicists) to use the term “eigenfunction” instead of “eigenvector”, and you’ll see even weirder applications of the “eigen-” prefix, but these are almost always just special cases of eigenvectors.
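
Numerically, finding such pairs is exactly what numpy’s eig routine does. A minimal sketch, with a matrix of my own choosing whose eigenpairs are easy to verify by hand:

```python
import numpy as np

# A matrix whose eigenpairs can be checked directly:
#   T(1, 1)  = (3, 3)  = 3 * (1, 1)   -> eigenvalue 3
#   T(1, -1) = (1, -1) = 1 * (1, -1)  -> eigenvalue 1
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(T)
print(eigenvalues)  # [3. 1.] (the order numpy returns them in is not guaranteed)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # Each column of `eigenvectors` satisfies T(v) = lambda * v.
    print(np.allclose(T @ v, lam * v))  # True
```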

Now it turns out that the eigenvectors associated to any particular eigenvalue form a subspace of V. If we assume that v and w are both eigenvectors with eigenvalue \lambda, and that c is another scalar, then we can check

T(cv)=cT(v)=c\lambda v=\lambda cv

and

T(v+w)=T(v)+T(w)=\lambda v+\lambda w=\lambda(v+w)

so cv and v+w are again eigenvectors with eigenvalue \lambda.

We call the subspace of eigenvectors with eigenvalue \lambda the \lambda-eigenspace of T. Notice here that the 0-eigenspace is the kernel of T, and the 1-eigenspace is the subspace of fixed points. The eigenspace makes sense for all scalars \lambda, but any given eigenspace might be trivial, just as the transformation might have a trivial kernel.
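
Concretely, saying T(v)=\lambda v is the same as saying (T-\lambda I)(v)=0, where I is the identity transformation, so the \lambda-eigenspace is just \mathrm{Ker}(T-\lambda I) and can be computed like any other kernel. Here is a rough sketch along those lines; the helper function, the SVD approach, and the tolerance are my own choices, not anything from the post:

```python
import numpy as np

def eigenspace(T, lam, tol=1e-10):
    """Basis for the lambda-eigenspace of T: the null space of T - lam * I."""
    A = T - lam * np.eye(T.shape[0])
    # Null space via SVD: right-singular vectors whose singular values are (numerically) zero.
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns span the eigenspace; no columns means it is trivial

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(eigenspace(T, 3.0))  # one column, a multiple of (1, 1)
print(eigenspace(T, 1.0))  # one column, a multiple of (1, -1)
print(eigenspace(T, 5.0))  # no columns: 5 is not an eigenvalue, so its eigenspace is trivial
```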

Now, all of this is basically definitional. What’s surprising is how much of the behavior of any linear transformation is caught up in the behavior of its eigenvalues. We’ll see this more and more as we go further.

January 26, 2009 | Algebra, Linear Algebra
