The Unapologetic Mathematician

Mathematics for the interested outsider

Transformations with All Eigenvalues Distinct

When will a linear transformation T have a diagonal matrix? We know that this happens exactly when we can find a basis \{e_i\}_{i=1}^d of our vector space V so that each basis vector is an eigenvector: T(e_i)=\lambda_ie_i. But when does this happen? The full answer can be complicated, but there is one sufficient condition we can describe right away.

First, remember that we can always calculate the characteristic polynomial of T. This will be a polynomial of degree d — the dimension of V. Further, we know that a field element \lambda is an eigenvalue of T if and only if \lambda is a root of the characteristic polynomial. Since the degree of the polynomial is d, it can have no more than d distinct roots — and so T can have no more than d distinct eigenvalues. I want to consider the generic case, where the characteristic polynomial has exactly d distinct roots \lambda_1,\dots,\lambda_d.
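As a concrete numerical check (my own illustration, not part of the original argument), we can compute the characteristic polynomial of a small matrix and count its distinct roots. The matrix here is an arbitrary example chosen so that the roots come out distinct:

```python
import numpy as np

# An example 3x3 matrix (upper triangular, so its eigenvalues
# are just its diagonal entries -- chosen for easy checking).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

# np.poly(A) returns the coefficients of the characteristic
# polynomial det(tI - A), highest degree first.
coeffs = np.poly(A)
degree = len(coeffs) - 1          # equals d = dim V = 3

roots = np.sort(np.roots(coeffs).real)
print(degree)                     # 3
print(np.round(roots, 6))         # the eigenvalues 2, 3, 5 -- all distinct
```

Since the three roots are distinct, this matrix falls into the generic case discussed above.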

Now we have d distinct field elements \lambda_i, and for each one we can pick a nonzero vector e_i so that T(e_i)=\lambda_ie_i. I say that these vectors form a basis of V. By an earlier lemma, a collection of eigenvectors corresponding to distinct eigenvalues must be linearly independent! And d linearly independent vectors span a subspace of dimension d, which must be all of V. Thus, the e_i form a basis for V consisting of eigenvectors of T. With respect to this basis, the matrix of T is diagonal.
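The whole argument can be watched happening numerically (again, my own sketch, not from the post): picking one eigenvector per eigenvalue gives an invertible change-of-basis matrix P, and conjugating by it diagonalizes the transformation.

```python
import numpy as np

# Same example matrix as before: three distinct eigenvalues 2, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

# The columns of P are eigenvectors e_i, one per eigenvalue lambda_i.
eigenvalues, P = np.linalg.eig(A)

# Distinct eigenvalues => the e_i are linearly independent, so P is
# invertible and P^{-1} A P is the diagonal matrix of the lambda_i.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))  # True
```

The full rank of P is exactly the linear-independence claim from the lemma; invertibility of P is what lets us change to the eigenvector basis.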

This is good, but notice that there are still plenty of things that can go wrong. It’s entirely possible that two (or more!) of the eigenvalues are not distinct. Worse, we could be working over a field that isn’t algebraically closed, so there may not be d roots at all, even counting duplicates. But still, in the generic case we’ve got a diagonal matrix with respect to a well-chosen basis.
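Both failure modes can be exhibited with 2×2 examples (my own illustrations): a Jordan block has a repeated eigenvalue but too small an eigenspace, and a rotation of the real plane has a characteristic polynomial with no real roots at all.

```python
import numpy as np

# Repeated eigenvalue: J has characteristic polynomial (t-1)^2, so
# lambda = 1 is a double root.  The eigenspace ker(J - I) is only
# one-dimensional, so no basis of eigenvectors exists.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigenspace_dim = 2 - np.linalg.matrix_rank(J - np.eye(2))
print(eigenspace_dim)  # 1, not 2: J is not diagonalizable

# Not algebraically closed: over the reals, a 90-degree rotation R
# has characteristic polynomial t^2 + 1, which has no real roots.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.poly(R))  # coefficients [1, 0, 1], i.e. t^2 + 1
```

Over the complex numbers the rotation does have two distinct eigenvalues (i and -i) and is diagonalizable; it is only the failure of the real field to be algebraically closed that blocks the argument.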

February 10, 2009 Posted by | Algebra, Linear Algebra | 2 Comments