The Unapologetic Mathematician

Mathematics for the interested outsider

Flags

We’d like to have matrix-oriented versions of Engel’s theorem and Lie’s theorem, and to do that we’ll need flags. I’ve actually referred to flags long, long ago, but we’d better go through them now.

In its simplest form, a flag is simply a strictly-increasing sequence of subspaces \{V_k\}_{k=0}^n of a given finite-dimensional vector space V. And we almost always say that a flag starts with V_0=0 and ends with V_n=V. In the middle we have some other subspaces, each one strictly including the one below it. We say that a flag is “complete” if \dim(V_k)=k — and thus n=\dim(V). For example, the standard basis of \mathbb{F}^3 gives the complete flag 0\subseteq\mathbb{F}e_1\subseteq\mathbb{F}e_1+\mathbb{F}e_2\subseteq\mathbb{F}^3. For our current purposes all flags will be complete unless otherwise mentioned.

The useful thing about flags is that they’re a little more general and “geometric” than ordered bases. Indeed, given an ordered basis \{e_k\}_{k=1}^n we have a flag on V: define V_k to be the span of \{e_i\}_{i=1}^k. As a partial converse, given any (complete) flag we can come up with a not-at-all-unique basis: at each step let e_k\in V_k be any preimage of some nonzero vector in the one-dimensional quotient V_k/V_{k-1}.
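To make the basis-to-flag direction concrete, here is a small numerical sketch in numpy; the particular basis is made up for illustration. The dimension of V_k is just the rank of the matrix whose columns are the first k basis vectors, so a basis always yields a complete flag.

```python
import numpy as np

# Columns of B form a (made-up) ordered basis e_1, e_2, e_3 of V = R^3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# V_k is the span of the first k basis vectors; its dimension is the
# rank of the matrix whose columns are e_1, ..., e_k.
dims = [int(np.linalg.matrix_rank(B[:, :k])) if k > 0 else 0 for k in range(4)]
print(dims)  # [0, 1, 2, 3]: dim(V_k) = k, so the flag is complete
```

Any invertible matrix’s columns would do here; the ranks of the column prefixes always climb 0, 1, …, n.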

We say that an endomorphism of V “stabilizes” a flag if it sends each V_k back into itself. In fact, we saw something like this in the proof of Lie’s theorem: we built a complete flag on the subspace W_n, one basis element at a time, and then showed that each k\in K stabilizes that flag. More generally, we say a collection of endomorphisms stabilizes a flag if all the endomorphisms in the collection do.

So, what do Lie’s and Engel’s theorems tell us about flags? Well, Lie’s theorem tells us that if L\subseteq\mathfrak{gl}(V) is solvable then it stabilizes some flag in V. Equivalently, there is some basis with respect to which the matrices of all elements of L are upper-triangular. In other words, L is isomorphic to some subalgebra of \mathfrak{t}(\dim(V),\mathbb{F}). We see that not only is \mathfrak{t}(n,\mathbb{F}) solvable, it is in a sense the archetypal solvable Lie algebra.
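As a quick numerical sanity check on \mathfrak{t}(n,\mathbb{F}) being closed under the bracket, here is a numpy sketch with random matrices standing in for elements of \mathfrak{t}(3,\mathbb{R}): the bracket of two upper-triangular matrices is again upper triangular, and in fact its diagonal vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random upper-triangular matrices, standing in for elements of t(3, R).
X = np.triu(rng.standard_normal((3, 3)))
Y = np.triu(rng.standard_normal((3, 3)))

comm = X @ Y - Y @ X  # the bracket [X, Y]

# The bracket is again upper triangular, so t(3, R) is a subalgebra...
assert np.allclose(comm, np.triu(comm))
# ...and its diagonal vanishes, since (XY)_ii = X_ii * Y_ii = (YX)_ii
# for triangular matrices: [t, t] lands in the strictly upper triangulars.
assert np.allclose(np.diag(comm), 0.0)
```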

The proof is straightforward: Lie’s theorem tells us that L has a common eigenvector v_1\in V. We let this span the one-dimensional subspace V_1 and consider the action of L on the quotient W_1=V/V_1. Since we know that the image of L in \mathfrak{gl}(W_1) will again be solvable, we get a common eigenvector w_2\in W_1. Choosing a pre-image v_2\in V with w_2=v_2+\mathbb{F}v_1 we get our second basis vector. We can continue like this, building up a basis of V such that at each step we can write l(v_k)\in\lambda_k(l)v_k+V_{k-1} for all l\in L and some \lambda_k\in L^*.

For nilpotent L, the same is true — of course, nilpotent Lie algebras are automatically solvable — but Engel’s theorem tells us more: the functional \lambda must be zero, so the diagonal entries of the above matrices all vanish. We conclude that any nilpotent L is isomorphic to some subalgebra of \mathfrak{n}(\dim(V),\mathbb{F}). That is, not only is \mathfrak{n}(n,\mathbb{F}) nilpotent, it is the archetype of all nilpotent Lie algebras in just the same way as \mathfrak{t}(n,\mathbb{F}) is the archetypal solvable Lie algebra.
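The corresponding check for \mathfrak{n}(3,\mathbb{R}) — again a numpy sketch, with made-up strictly upper-triangular matrices — shows both faces of nilpotency: each element is nilpotent as a matrix, and iterated brackets die out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Strictly upper-triangular matrices, standing in for elements of n(3, R).
A = np.triu(rng.standard_normal((3, 3)), k=1)
B = np.triu(rng.standard_normal((3, 3)), k=1)
C = np.triu(rng.standard_normal((3, 3)), k=1)

# Each element is nilpotent as a matrix: every multiplication pushes the
# nonzero band one diagonal further out, so A^3 = 0.
assert np.allclose(np.linalg.matrix_power(A, 3), 0.0)

# Iterated brackets die out too: [A, B] lives on the far corner entry,
# and [[A, B], C] already vanishes in n(3, R), reflecting the fact that
# the lower central series reaches zero.
bracket = lambda X, Y: X @ Y - Y @ X
assert np.allclose(bracket(bracket(A, B), C), 0.0)
```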

More generally, if L is any solvable (nilpotent) Lie algebra and \phi:L\to\mathfrak{gl}(V) is any finite-dimensional representation of L, then we know that the image \phi(L) is a solvable (nilpotent) linear Lie algebra acting on V, and thus it must stabilize some flag of V. As a particular example, consider the adjoint action \mathrm{ad}:L\to\mathfrak{gl}(L); a subspace of L invariant under the adjoint action of L is just the same thing as an ideal of L, so we find that there must be some chain of ideals:

\displaystyle 0=I_0\subseteq I_1\subseteq\dots\subseteq I_{n-1}\subseteq I_n=L

where \dim(I_k)=k. Given such a chain, we can of course find a basis of L with respect to which the matrices of the adjoint action are all in \mathfrak{t}(\dim(L),\mathbb{F}) (\mathfrak{n}(\dim(L),\mathbb{F})).
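A tiny worked example, sketched in numpy: take the two-dimensional nonabelian Lie algebra with basis x, y and [x,y]=y (solvable but not nilpotent). In the ordered basis (y, x) the chain 0\subseteq\mathbb{F}y\subseteq L is a chain of ideals, and the adjoint matrices come out upper triangular, just as above.

```python
import numpy as np

# Adjoint matrices in the ordered basis (y, x), written out by hand:
ad_x = np.array([[1.0, 0.0],   # ad(x): y -> [x,y] = y,  x -> 0
                 [0.0, 0.0]])
ad_y = np.array([[0.0, -1.0],  # ad(y): y -> 0,  x -> [y,x] = -y
                 [0.0, 0.0]])

# Both matrices lie in t(2, R), as the chain of ideals predicts...
assert np.allclose(ad_x, np.triu(ad_x)) and np.allclose(ad_y, np.triu(ad_y))

# ...their bracket is strictly upper triangular, matching [t, t] = n...
comm = ad_x @ ad_y - ad_y @ ad_x
assert np.allclose(comm, np.triu(comm, k=1))

# ...and since ad is a homomorphism, [ad(x), ad(y)] = ad([x,y]) = ad(y).
assert np.allclose(comm, ad_y)
```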

In either case, we find that [L,L] is nilpotent. Indeed, if L is already nilpotent this is trivial. But if L is merely solvable, we see that the matrices of the commutators [\mathrm{ad}(x),\mathrm{ad}(y)] for x,y\in L lie in

\displaystyle [\mathfrak{t}(\dim(L),\mathbb{F}),\mathfrak{t}(\dim(L),\mathbb{F})]=\mathfrak{n}(\dim(L),\mathbb{F})

But since \mathrm{ad} is a homomorphism, this is the matrix of \mathrm{ad}([x,y]) acting on L, and its restriction to the subalgebra [L,L] is then nilpotent as well. Thus each element of [L,L] is ad-nilpotent, and Engel’s theorem then tells us that [L,L] is a nilpotent Lie algebra.

August 25, 2012 | Algebra, Lie Algebras

Lie’s Theorem

The lemma leading to Engel’s theorem boils down to the assertion that there is some common eigenvector for all the endomorphisms in a nilpotent linear Lie algebra L\subseteq\mathfrak{gl}(V) on a finite-dimensional nonzero vector space V. Lie’s theorem says that the same is true of solvable linear Lie algebras. Of course, in the nilpotent case the only possible eigenvalue was zero, so we may find things a little more complicated now. We will, however, have to assume that \mathbb{F} is algebraically closed and of characteristic zero, so that no multiple of the unit in \mathbb{F} is zero.

We will proceed by induction on the dimension of L using the same four basic steps as in the lemma: find an ideal K\subseteq L of codimension one, so we can write L=K+\mathbb{F}z for some z\in L\setminus K; find common eigenvectors for K; find a subspace of such common eigenvectors stabilized by L; find in that space an eigenvector for z.

First, solvability says that L properly includes [L,L], or else the derived series couldn’t even start heading towards 0. The quotient L/[L,L] must be abelian, with all brackets zero, so we can pick any subspace of this quotient with codimension one and it will be an ideal. The preimage of this subspace under the quotient projection is then an ideal K\subseteq L of codimension one.

Now, K is a subalgebra of L, so we know it’s also solvable. If K is zero, then L must be one-dimensional and abelian, in which case the proof is obvious. Otherwise, induction tells us that there’s a common eigenvector v\in V for the action of K — that is, there is some linear functional \lambda\in K^* defined by

\displaystyle k(v)=\lambda(k)v

Of course, v is not the only such eigenvector; we define the (nonzero) subspace W by

\displaystyle W=\{w\in V\vert\forall k\in K, k(w)=\lambda(k)w\}

Next we must show that L sends W back into itself. To see this, pick l\in L, k\in K, and w\in W, and check that

\displaystyle\begin{aligned}k(l(w))&=l(k(w))-[l,k](w)\\&=l(\lambda(k)w)-\lambda([l,k])w\\&=\lambda(k)l(w)-\lambda([l,k])w\end{aligned}

So for l(w) to lie in W — that is, for k(l(w))=\lambda(k)l(w) to hold — we need to verify that \lambda([l,k])=0. In the nilpotent case — Engel’s theorem — the functional \lambda was constantly zero, so this was easy, but it’s a bit harder here.

Fixing w\in W and l\in L, we pick n to be the first index where the collection \{l^i(w)\}_{i=0}^n is linearly dependent — the first one where we can express l^n(w) as a linear combination of the previous l^i(w). If we write W_i for the subspace spanned by the first i of these vectors, then the dimension of W_i grows one-by-one until we get to \dim(W_n)=n, and W_{n+i}=W_n from then on.
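Here is a numpy sketch of this step; the endomorphism l and the vector w below are made up purely for illustration. We stack the iterates w, l(w), l^2(w), \dots as columns and watch for the first rank drop, which is exactly the index n above.

```python
import numpy as np

# A made-up endomorphism l and starting vector w in R^4.
l = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 2.0]])
w = np.array([1.0, 1.0, 1.0, 0.0])

# Stack w, l(w), l^2(w), ... as columns until the rank stops growing:
# the first n with rank < n+1 is where l^n(w) depends on its predecessors.
vectors = [w]
while np.linalg.matrix_rank(np.column_stack(vectors + [l @ vectors[-1]])) == len(vectors) + 1:
    vectors.append(l @ vectors[-1])
n = len(vectors)
print(n)  # 3: here w, l(w), l^2(w) are independent, and l^3(w) = 0 depends on them
```

So for this particular l and w, \dim(W_n)=3 and W_{3+i}=W_3 from then on.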

I say that each of the W_i is invariant under each k\in K. Indeed, we can prove the congruence

\displaystyle k(l^i(w))\equiv\lambda(k)l^i(w)\quad\mod W_i

that is, k acts on l^i(w) by multiplication by \lambda(k), plus some “lower-order terms”. For i=0 this is the definition of \lambda; in general we have

\displaystyle\begin{aligned}k(l^i(w))&=k(l(l^{i-1}(w)))\\&=l(k(l^{i-1}(w)))-[l,k](l^{i-1}(w))\\&=\lambda(k)l^i(w)+l(w')-\lambda([l,k])l^{i-1}(w)-w''\end{aligned}

for some w',w''\in W_{i-1}, using the inductive hypothesis twice — once for k and once for [l,k]\in K. Since l sends W_{i-1} into W_i, every term after the first lies in W_i, which proves the congruence.

And so we conclude that, using the obvious basis of W_n, the action of k on this subspace takes the form of an upper-triangular matrix with \lambda(k) down the diagonal. The trace of this matrix is n\lambda(k). And in particular, the trace of the action of [l,k] on W_n is n\lambda([l,k]). But l and k both act as endomorphisms of W_n — the one by design and the other by the above proof — and the trace of any commutator is zero! Since the characteristic of \mathbb{F} is zero, n has an inverse, and we conclude that \lambda([l,k])=0.
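The traceless-commutator fact is easy to check numerically — a numpy sketch with random matrices, relying only on \mathrm{tr}(XY)=\mathrm{tr}(YX):

```python
import numpy as np

rng = np.random.default_rng(2)

# tr(XY) = tr(YX), so every commutator of endomorphisms is traceless.
X = rng.standard_normal((5, 5))
Y = rng.standard_normal((5, 5))
assert np.isclose(np.trace(X @ Y - Y @ X), 0.0)
```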

This establishes that the action of L sends W back into itself. We finish up by picking some eigenvector v_0\in W of z, which we know must exist because we’re working over an algebraically closed field. Incidentally, we can then extend \lambda to all of L by setting z(v_0)=\lambda(z)v_0.

August 25, 2012 | Algebra, Lie Algebras

