Here’s another little result that’s good over any field, algebraically-closed or not. We know that the linear maps from a vector space $V$ (of finite dimension $n$) to itself form an algebra over the base field $\mathbb{F}$. We can pick a basis and associate a matrix to each of these linear transformations. It turns out that the upper-triangular matrices form a subalgebra.
The easy part is to show that matrices of the form

$$\begin{pmatrix}t_{1,1}& &*\\ &\ddots& \\ 0& &t_{n,n}\end{pmatrix}$$

form a linear subspace of the algebra of all $n\times n$ matrices. Clearly if we add two of these matrices together, we still get zero everywhere below the diagonal, and the same goes for multiplying the matrix by a scalar.
The harder part is to show that the product of two such matrices is again upper-triangular. So let’s take two of them with entries $s_{i,j}$ and $t_{i,j}$. To make these upper-triangular, we’ll require that $s_{i,j}=0$ and $t_{i,j}=0$ for $i>j$. What we need to check is that the matrix entry

$$\sum\limits_{k=1}^ns_{i,k}t_{k,j}$$

of the product vanishes for $i>j$. But this matrix entry is a sum of terms as $k$ runs from $1$ to $n$, and each term is a product of one matrix entry from each matrix. The first factor $s_{i,k}$ can only be nonzero if $i\leq k$, while the second factor $t_{k,j}$ can only be nonzero if $k\leq j$. Thus their product can only be nonzero if $i\leq j$. And this means that all the nonzero entries of the product are on or above the diagonal.
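This closure property is easy to sanity-check numerically. Here’s a minimal sketch using NumPy (the particular matrices are arbitrary illustrative choices):

```python
import numpy as np

# Two 4x4 upper-triangular matrices: np.triu zeroes out everything
# below the diagonal, leaving arbitrary entries on and above it
S = np.triu(np.arange(1.0, 17.0).reshape(4, 4))
T = np.triu(np.ones((4, 4)))

# The (i, j) entry of the product is the sum of s_{i,k} * t_{k,j} over k
P = S @ T

# Every entry strictly below the diagonal of the product is zero,
# so the product is again upper-triangular
assert np.allclose(P, np.triu(P))
```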
Today I’d like to point out a little fact that applies over any field (not just the algebraically-closed ones). Let $T$ be a linear endomorphism on a vector space $V$, and for $1\leq i\leq n$ let $v_i$ be eigenvectors with corresponding eigenvalues $\lambda_i$. Further, assume that $\lambda_i\neq\lambda_j$ for $i\neq j$. I claim that the $v_i$ are linearly independent.
Suppose the collection $\{v_1,\dots,v_n\}$ is linearly dependent. Then for some $k$ we have a linear relation

$$v_k=c_1v_1+c_2v_2+\dots+c_{k-1}v_{k-1}$$

We can assume that $k$ is the smallest index so that we get such a relation involving only smaller indices.
Hit both sides of this equation by $T$, and use the eigenvalue properties to find

$$\lambda_kv_k=c_1\lambda_1v_1+c_2\lambda_2v_2+\dots+c_{k-1}\lambda_{k-1}v_{k-1}$$
On the other hand, we could just multiply the first equation by $\lambda_k$ to get

$$\lambda_kv_k=c_1\lambda_kv_1+c_2\lambda_kv_2+\dots+c_{k-1}\lambda_kv_{k-1}$$
Subtracting, we find the equation

$$0=c_1(\lambda_1-\lambda_k)v_1+c_2(\lambda_2-\lambda_k)v_2+\dots+c_{k-1}(\lambda_{k-1}-\lambda_k)v_{k-1}$$
Since $v_k\neq0$, at least one of the $c_i$ must be nonzero, and since $\lambda_i\neq\lambda_k$ the corresponding coefficient $c_i(\lambda_i-\lambda_k)$ is nonzero as well. Solving for the $v_i$ with the largest such index gives a relation involving only indices smaller than $k$, which would contradict the minimality of $k$ we assumed before. Thus there can be no such linear relation, and eigenvectors corresponding to distinct eigenvalues are linearly independent.
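We can also watch this fact numerically. A minimal sketch with NumPy (the matrix below is an arbitrary example chosen to have three distinct eigenvalues):

```python
import numpy as np

# A transformation with three distinct eigenvalues: 1, 2, and 3
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues are distinct...
assert len(set(np.round(eigenvalues, 8))) == 3
# ...so the eigenvectors are linearly independent: the matrix
# whose columns they form has full rank
assert np.linalg.matrix_rank(eigenvectors) == 3
```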
We know that every linear endomorphism $T$ of a vector space $V$ over an algebraically-closed field has a basis with respect to which its matrix is upper-triangular. That is, it looks like

$$\begin{pmatrix}t_{1,1}& &*\\ &\ddots& \\ 0& &t_{n,n}\end{pmatrix}$$
So we’re interested in matrices of this form, whether the base field is algebraically-closed or not. One thing that’s nice about upper-triangular matrices is that their determinants are pretty simple.
Remember that we can calculate the determinant by summing one term for each permutation in the symmetric group $S_n$. Each term is the product of one entry from each row of the matrix, with the column indices given by the permutation. But we can see that in the last row, we have to pick the last entry, or the whole term will be zero. Then in the next-to-last row, we may pick only from the last two entries. But we can’t pick the last one, since that column is locked up for the last row, so we must again pick the entry on the diagonal.
As we work backwards up the matrix, we find that the only possible way of picking a nonzero term is to always pick the diagonal element. That is, we only need to consider the identity permutation $e$. And then the determinant is simply the product of all these diagonal elements. That is,

$$\det\left(t_{i,j}\right)=\prod\limits_{i=1}^nt_{i,i}$$
Notice that the entries above the diagonal don’t matter at all!
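Here’s a quick numerical check of the formula, again a sketch with NumPy (the entries above the diagonal are arbitrary):

```python
import numpy as np

# An upper-triangular matrix with diagonal 2, 3, 4 and
# arbitrary entries above the diagonal
T = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  7.0],
              [0.0, 0.0,  4.0]])

# The determinant is the product of the diagonal entries,
# no matter what sits above the diagonal: 2 * 3 * 4 = 24
assert np.isclose(np.linalg.det(T), 2.0 * 3.0 * 4.0)
```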
One way this comes in handy is in finding the eigenvalues of $T$. “But wait!” you cry, “Didn’t we find the $t_{i,i}$ by looking for eigenvalues of $T$?” Not quite. We found them by looking for eigenvalues of a sequence of restrictions of $T$ to smaller and smaller quotient spaces. We have no reason to believe (yet) that these actually correspond to eigenvalues of $T$ itself.
But now we can easily find the matrix corresponding to $T-\lambda I$, since the matrix of the identity transformation is the same in every basis. We find

$$\begin{pmatrix}t_{1,1}-\lambda& &*\\ &\ddots& \\ 0& &t_{n,n}-\lambda\end{pmatrix}$$
This is again upper-triangular, so we can easily calculate the characteristic polynomial by taking its determinant. We find

$$\det(T-\lambda I)=\prod\limits_{i=1}^n(t_{i,i}-\lambda)$$
Then it’s easy to see that this will be zero exactly when $\lambda=t_{i,i}$ for some $i$. That is, the eigenvalues of $T$ are exactly the entries along the diagonal of an upper-triangular matrix for the transformation. Incidentally, this shows in passing that even though there may be many different upper-triangular matrices representing the same transformation (in different bases), they all have the same entries along the diagonal (possibly in different orders).
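Again a quick NumPy sketch: change the entries above the diagonal however you like, and the eigenvalues stay pinned to the diagonal (the matrix below is an arbitrary example):

```python
import numpy as np

T = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  7.0],
              [0.0, 0.0,  4.0]])

# The eigenvalues of an upper-triangular matrix are its diagonal entries
eigenvalues = np.linalg.eigvals(T)
assert np.allclose(sorted(eigenvalues), [2.0, 3.0, 4.0])

# Changing an entry above the diagonal leaves the eigenvalues unchanged
T[0, 2] = 100.0
assert np.allclose(sorted(np.linalg.eigvals(T)), [2.0, 3.0, 4.0])
```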
What does this assumption buy us? It says that the characteristic polynomial of a linear transformation $T$ is, like any polynomial over an algebraically closed field, guaranteed to have a root. Thus any linear transformation $T$ has an eigenvalue $\lambda_1$, as well as a corresponding eigenvector $v_1$ satisfying

$$T(v_1)=\lambda_1v_1$$
So let’s pick an eigenvector $v_1$ and take the one-dimensional subspace $V_1$ it spans. We can take the quotient space $V/V_1$ and restrict $T$ to act on it. Why? Because if $v$ and $w$ are two representatives of the same vector in the quotient space, then $w=v+cv_1$ for some scalar $c$. Then we find

$$T(w)=T(v+cv_1)=T(v)+cT(v_1)=T(v)+c\lambda_1v_1$$

which represents the same vector as $T(v)$.
Now the restriction of $T$ to $V/V_1$ is another linear endomorphism over an algebraically closed field, so its characteristic polynomial must have a root, and it must have an eigenvalue $\lambda_2$ with associated eigenvector $v_2$. But let’s be careful. Does this mean that $v_2$ is an eigenvector of $T$? Not quite. All we know is that

$$T(v_2)=\lambda_2v_2+c_{1,2}v_1$$

since vectors in the quotient space are only defined up to multiples of $v_1$.
We can proceed like this, pulling off one vector after another. Each time we find

$$T(v_j)=\lambda_jv_j+\sum\limits_{i=1}^{j-1}c_{i,j}v_i$$

The image of $v_j$ in the $j$th quotient space is a constant times $v_j$ itself, and back in $V$ we pick up a linear combination of the earlier vectors. Further, each vector is linearly independent of the ones that came before, since if it weren’t, then it would be the zero vector in its quotient space. This procedure only grinds to a halt when the number of vectors equals the dimension of $V$, for then the quotient space is trivial, and the linearly independent collection spans $V$. That is, we’ve come up with a basis.
So, what does $T$ look like in this basis? Look at the expansion above. We can set $t_{i,j}=c_{i,j}$ for all $i<j$. When $i=j$ we set $t_{j,j}=\lambda_j$. And in the remaining cases, where $i>j$, we set $t_{i,j}=0$. That is, the matrix looks like

$$\begin{pmatrix}\lambda_1& &*\\ &\ddots& \\ 0& &\lambda_n\end{pmatrix}$$
where the star above the diagonal indicates unknown matrix entries, and the zero below the diagonal indicates that all the entries in that region are zero. We call such a matrix “upper-triangular”, since its only nonzero entries are on or above the diagonal. What we’ve shown here is that over an algebraically-closed field, any linear transformation has a basis with respect to which the matrix of the transformation is upper-triangular. This is an important first step towards classifying these transformations.
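We can see the theorem in action numerically. The sketch below assumes a diagonalizable matrix so that NumPy’s eigendecomposition applies (the construction above doesn’t need that assumption): if $A=V\Lambda V^{-1}$ and $V=QR$ is a QR factorization, then $Q^{-1}AQ=R\Lambda R^{-1}$ is a product of upper-triangular matrices, hence upper-triangular.

```python
import numpy as np

# Rotation by 90 degrees: it has no real eigenvalues, so no real
# upper-triangular form exists, but over the algebraically closed
# field C it has eigenvalues i and -i and can be triangularized
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Diagonalize over C, then orthonormalize the eigenvector basis by QR.
# With A = V diag(w) V^{-1} and V = Q R, we get
# Q^{-1} A Q = R diag(w) R^{-1}: a product of upper-triangular
# matrices, hence upper-triangular, with the eigenvalues on its diagonal
w, V = np.linalg.eig(A)
Q, R = np.linalg.qr(V)
U = np.conj(Q.T) @ A @ Q

assert np.allclose(U, np.triu(U))   # upper-triangular in the new basis
assert np.allclose(np.diag(U), w)   # diagonal entries are the eigenvalues
```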