The Unapologetic Mathematician

Mathematics for the interested outsider

The Algebra of Upper-Triangular Matrices

Here’s another little result that’s good over any field, algebraically-closed or not. We know that the linear maps from a vector space V (of finite dimension d) to itself form an algebra over \mathbb{F}. We can pick a basis and associate a matrix to each of these linear transformations. It turns out that the upper-triangular matrices form a subalgebra.

The easy part is to show that matrices of the form

\displaystyle\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_d\end{pmatrix}

form a linear subspace of \mathrm{End}(V). Clearly if we add two of these matrices together, we still get zero everywhere below the diagonal, and the same goes for multiplying the matrix by a scalar.

The harder part is to show that the product of two such matrices is again upper-triangular. So let’s take two of them with entries s_i^j and t_i^j. To make these upper-triangular, we’ll require that s_i^j=0 and t_i^j=0 for i>j. What we need to check is that the (i,k) entry of the product, \sum_js_i^jt_j^k, vanishes for i>k. This entry is a sum of terms as j runs from {1} to d, and each term is a product of one entry from each matrix. The first factor can only be nonzero if i\leq j, while the second can only be nonzero if j\leq k. Thus a term can only be nonzero if i\leq k. And this means that all the nonzero entries of the product are on or above the diagonal.
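If you want to see this closure numerically, here’s a minimal NumPy sketch; the dimension, random seed, and helper names are my own choices, not anything from the argument above.

```python
import numpy as np

d = 5
rng = np.random.default_rng(0)

def random_upper_triangular(d):
    """Generate a random matrix and zero out everything below the diagonal."""
    return np.triu(rng.standard_normal((d, d)))

def is_upper_triangular(A):
    """True when every entry strictly below the diagonal vanishes."""
    return np.allclose(A, np.triu(A))

S = random_upper_triangular(d)
T = random_upper_triangular(d)

assert is_upper_triangular(S + T)    # closed under addition
assert is_upper_triangular(2.5 * S)  # closed under scalar multiplication
assert is_upper_triangular(S @ T)    # closed under matrix multiplication
```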

February 5, 2009 Posted by | Algebra, Linear Algebra | 1 Comment

Distinct Eigenvalues

Today I’d like to point out a little fact that applies over any field (not just the algebraically-closed ones). Let T be a linear endomorphism on a vector space V, and for 1\leq i\leq n, let v_i be eigenvectors with corresponding eigenvalues \lambda_i. Further, assume that \lambda_i\neq\lambda_j for i\neq j. I claim that the v_i are linearly independent.

Suppose the collection is linearly dependent. Then for some k we have a linear relation

\displaystyle v_k=c_1v_1+c_2v_2+...+c_{k-1}v_{k-1}

We can take k to be the smallest index for which such a relation holds, so that none of v_1 through v_{k-1} is a linear combination of the vectors before it.

Hit both sides of this equation by T, and use the eigenvalue properties to find

\displaystyle\lambda_kv_k=c_1\lambda_1v_1+c_2\lambda_2v_2+...+c_{k-1}\lambda_{k-1}v_{k-1}

On the other hand, we could just multiply the first equation by \lambda_k to get

\displaystyle\lambda_kv_k=c_1\lambda_kv_1+c_2\lambda_kv_2+...+c_{k-1}\lambda_kv_{k-1}

Subtracting, we find the equation

\displaystyle0=c_1(\lambda_k-\lambda_1)v_1+c_2(\lambda_k-\lambda_2)v_2+...+c_{k-1}(\lambda_k-\lambda_{k-1})v_{k-1}

Since v_k is nonzero (eigenvectors are nonzero by definition), at least one of the c_i must be nonzero, and since the eigenvalues are distinct, the corresponding coefficient c_i(\lambda_k-\lambda_i) is nonzero as well. So this is a nontrivial relation among v_1 through v_{k-1}, which contradicts the minimality of k we assumed before. Thus there can be no such linear relation, and eigenvectors corresponding to distinct eigenvalues are linearly independent.
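Here’s a quick numerical illustration of the claim, as a sketch; the particular matrix and its eigenvalues 1, 2, and 3 are arbitrary choices of mine.

```python
import numpy as np

# An endomorphism with three distinct eigenvalues: 1, 2, and 3.
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(T)  # columns of the second array

# Distinct eigenvalues, so the eigenvectors should be linearly independent:
# the matrix with the eigenvectors as its columns must have full rank.
assert len(set(np.round(eigenvalues, 8))) == 3
assert np.linalg.matrix_rank(eigenvectors) == 3
```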

February 4, 2009 Posted by | Algebra, Linear Algebra | 5 Comments

The Determinant of an Upper-Triangular Matrix

We know that every linear endomorphism T on a vector space V over an algebraically-closed field \mathbb{F} has a basis with respect to which its matrix is upper-triangular. That is, it looks like

\displaystyle\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_d\end{pmatrix}

So we’re interested in matrices of this form, whether the base field is algebraically-closed or not. One thing that’s nice about upper-triangular matrices is that their determinants are pretty simple.

Remember that we can calculate the determinant by summing one term for each permutation in the symmetric group S_d. Each term is the product of one entry from each row of the matrix, with no two entries from the same column. But we can see that in the last row, we have to pick the last entry, or the whole term will be zero. Then in the next to last row, we must pick one of the last two entries. But we can’t pick the last one, since that column is locked up for the last row, so we must again pick the entry on the diagonal.

As we work backwards up the matrix, we find that the only possible way of picking a nonzero term is to always pick the diagonal element. That is, we only need to consider the identity permutation \pi(k)=k, whose sign is +1. And then the determinant is simply the product of all these diagonal elements. That is,

\displaystyle\det\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_d\end{pmatrix}=\prod\limits_{k=1}^d\lambda_k

Notice that the entries above the diagonal don’t matter at all!
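As a sanity check of the formula, here’s a tiny NumPy sketch; the entries are arbitrary.

```python
import numpy as np

A = np.array([[2.0, 7.0, -1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0,  5.0]])

# The determinant should be 2 * 3 * 5 = 30, no matter what sits above
# the diagonal.
assert np.isclose(np.linalg.det(A), np.prod(np.diag(A)))
```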

One way this comes in handy is in finding the eigenvalues of T. “But wait!” you cry, “didn’t we find the \lambda_i by looking for eigenvalues of T?” Not quite. We found them by looking for eigenvalues of a sequence of maps induced by T on smaller and smaller quotient spaces. We have no reason to believe (yet) that these actually correspond to eigenvalues of T.

But now we can easily find the matrix corresponding to \lambda1_V-T, since the matrix of the identity transformation is the same in every basis. We find

\displaystyle\begin{pmatrix}\lambda-\lambda_1&&*\\&\ddots&\\{0}&&\lambda-\lambda_d\end{pmatrix}

This is again upper-triangular, so we can easily calculate the characteristic polynomial by taking its determinant. We find

\displaystyle\prod\limits_{k=1}^d(\lambda-\lambda_k)

Then it’s easy to see that this will be zero exactly when \lambda=\lambda_k for some k. That is, the eigenvalues of T are exactly the entries along the diagonal of an upper-triangular matrix for the transformation. Incidentally, this shows in passing that even though there may be many different upper-triangular matrices representing the same transformation (in different bases), they all have the same entries along the diagonal (possibly in different orders).
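In code, this reads: the eigenvalues of an upper-triangular matrix are exactly its diagonal entries. A minimal sketch, with arbitrary entries of my own choosing:

```python
import numpy as np

A = np.array([[1.0, 9.0,  8.0],
              [0.0, 4.0, -2.0],
              [0.0, 0.0,  6.0]])

# The entries above the diagonal play no role in the spectrum.
eigenvalues = np.linalg.eigvals(A)
assert np.allclose(np.sort(eigenvalues), np.sort(np.diag(A)))
```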

February 3, 2009 Posted by | Algebra, Linear Algebra | 4 Comments

Upper-Triangular Matrices

Until further notice, I’ll be assuming that the base field \mathbb{F} is algebraically closed, like the complex numbers \mathbb{C}.

What does this assumption buy us? It says that the characteristic polynomial of a linear transformation T is — like any polynomial over an algebraically closed field — guaranteed to have a root. Thus any linear transformation T has an eigenvalue \lambda_1, as well as a corresponding eigenvector e_1 satisfying

T(e_1)=\lambda_1e_1

So let’s pick an eigenvector e_1 and take the subspace \mathbb{F}e_1\subseteq V it spans. We can take the quotient space V/\mathbb{F}e_1, and T induces a linear map on it. Why is this well-defined? Because if we take two representatives v,w\in V of the same vector in the quotient space, then w=v+ce_1. Then we find

T(w)=T(v+ce_1)=T(v)+cT(e_1)=T(v)+c\lambda_1e_1

which represents the same vector as T(v).

Now the map induced by T on V/\mathbb{F}e_1 is another linear endomorphism over an algebraically closed field, so its characteristic polynomial must have a root, and it must have an eigenvalue \lambda_2 with associated eigenvector e_2. But let’s be careful. Does this mean that e_2 is an eigenvector of T? Not quite. All we know is that

T(e_2)=\lambda_2e_2+c_{1,2}e_1

since vectors in the quotient space are only defined up to multiples of e_1.

We can proceed like this, pulling off one vector e_i after another. Each time we find

T(e_i)=\lambda_ie_i+c_{i-1,i}e_{i-1}+c_{i-2,i}e_{i-2}+...+c_{1,i}e_1

That is, T(e_i) is \lambda_i times e_i itself, plus a linear combination of the earlier vectors. Further, each vector is linearly independent of the ones that came before, since if it weren’t, it would represent the zero vector in its quotient space, and eigenvectors are nonzero. This procedure only grinds to a halt when the number of vectors equals the dimension of V, for then the quotient space is trivial, and the linearly independent collection \{e_i\} spans V. That is, we’ve come up with a basis.

So, what does T look like in this basis? Look at the expansion above. We can set t_i^j=c_{i,j} for all i<j. When i=j we set t_i^i=\lambda_i. And in the remaining cases, where i>j, we set t_i^j=0. That is, the matrix looks like

\displaystyle\begin{pmatrix}\lambda_1&&*\\&\ddots&\\{0}&&\lambda_d\end{pmatrix}

where the star above the diagonal indicates unknown matrix entries, and the zero below the diagonal indicates that all the entries in that region are zero. We call such a matrix “upper-triangular”, since the only nonzero entries in the matrix are on or above the diagonal. What we’ve shown here is that over an algebraically-closed field, any linear transformation has a basis with respect to which the matrix of the transformation is upper-triangular. This is an important first step towards classifying these transformations.
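Numerically, this construction is realized by the Schur decomposition: over the complex numbers, any matrix A can be written as A=QUQ^* with Q unitary (the change of basis) and U upper-triangular. Here’s a sketch using SciPy’s standard schur routine; the example matrix, a rotation with no real eigenvalues, is my own choice.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # rotation by 90 degrees: no real eigenvalues

# Over the reals this matrix has no eigenvectors at all, but over the
# algebraically closed field C it can be triangularized.
U, Q = schur(A, output='complex')

assert np.allclose(np.tril(U, -1), 0)      # U is upper-triangular
assert np.allclose(Q @ U @ Q.conj().T, A)  # same map, different basis
```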

February 2, 2009 Posted by | Algebra, Linear Algebra | 20 Comments
