The Unapologetic Mathematician

Mathematics for the interested outsider

The Killing Form

We can now define a symmetric bilinear form \kappa on our Lie algebra L by the formula

\displaystyle\kappa(x,y)=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}(y))

It’s symmetric because the cyclic property of the trace lets us swap \mathrm{ad}(x) and \mathrm{ad}(y) and get the same value. It also satisfies another identity which is referred to as “associativity”, though it may not appear like the familiar version of that property at first:

\displaystyle\begin{aligned}\kappa([x,y],z)&=\mathrm{Tr}(\mathrm{ad}([x,y])\mathrm{ad}(z))\\&=\mathrm{Tr}([\mathrm{ad}(x),\mathrm{ad}(y)]\mathrm{ad}(z))\\&=\mathrm{Tr}(\mathrm{ad}(x)[\mathrm{ad}(y),\mathrm{ad}(z)])\\&=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}([y,z]))\\&=\kappa(x,[y,z])\end{aligned}

where we have used the trace identity from last time.

This is called the Killing form, named for Wilhelm Killing and not nearly so coincidentally as the Poynting vector. It will be very useful in studying the structure of Lie algebras.

First, though, we want to show that the definition is well-behaved. Specifically, if I\subseteq L is an ideal, then we can define \kappa_I to be the Killing form of I. It turns out that \kappa_I is just the same as \kappa, but restricted to take its arguments in I instead of all of L.

A lemma: if W\subseteq V is any subspace of a vector space and \phi:V\to V has its image contained in W, then the trace of \phi over V is the same as its trace over W. Indeed, take any basis of W and extend it to one of V; the matrix of \phi with respect to this basis has zeroes for all the rows that do not correspond to the basis of W, so the trace may as well just be taken over W.

Now the fact that I is an ideal means that for any x,y\in I the mapping \mathrm{ad}(x)\mathrm{ad}(y) is an endomorphism of L sending all of L into I. Thus its trace over I is the same as its trace over all of L, and the Killing form on I applied to x,y\in I is the same as the Killing form on L applied to the same two elements.
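
To make all of this concrete, here's a little computational sketch (assuming numpy is available) that builds the Killing form of \mathfrak{sl}(2) from the adjoint matrices in the standard basis (e,h,f), and spot-checks both symmetry and associativity:

```python
import numpy as np

# basis of sl(2): e, h, f with [h,e] = 2e, [h,f] = -2f, [e,f] = h
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(x, y):
    return x @ y - y @ x

def coords(m):
    # a traceless 2x2 matrix m decomposes as m[0,1]*e + m[0,0]*h + m[1,0]*f
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(x):
    # matrix of ad(x) = [x, -] with respect to the basis (e, h, f)
    return np.column_stack([coords(bracket(x, b)) for b in basis])

def kappa(x, y):
    return np.trace(ad(x) @ ad(y))

# the Gram matrix of the Killing form in this basis
print(np.array([[kappa(a, b) for b in basis] for a in basis]))
# [[0. 0. 4.]
#  [0. 8. 0.]
#  [4. 0. 0.]]

# spot-check symmetry and "associativity" on random elements
rng = np.random.default_rng(0)
def rand_elt():
    c = rng.standard_normal(3)
    return c[0] * e + c[1] * h + c[2] * f

x, y, z = rand_elt(), rand_elt(), rand_elt()
assert np.isclose(kappa(x, y), kappa(y, x))
assert np.isclose(kappa(bracket(x, y), z), kappa(x, bracket(y, z)))
```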

September 3, 2012 Posted by | Algebra, Lie Algebras | 5 Comments

Cartan’s Criterion

It’s obvious that if [L,L] is nilpotent then L will be solvable. And Engel’s theorem tells us that if each x\in[L,L] is ad-nilpotent, then [L,L] is itself nilpotent. We can now combine this with our trace criterion to get a convenient way of identifying solvable Lie algebras.

If L\subseteq\mathfrak{gl}(V) is a linear Lie algebra and \mathrm{Tr}(xy)=0 for all x\in[L,L] and y\in L, then L is solvable. We’d obviously like to use the trace criterion to show this, but we need a little massaging first.

The catch is that our M consists of all the x\in\mathfrak{gl}(V) such that \mathrm{ad}(x) sends L to [L,L]. Clearly L\subseteq M, but it may not be all of M; our hypothesis states that \mathrm{Tr}(xy)=0 for all y\in L, but the criterion needs it to hold for all y\in M.

To get there, we use the following calculation, which is a useful lemma in its own right:

\displaystyle\begin{aligned}\mathrm{Tr}([x,y]z)&=\mathrm{Tr}(xyz-yxz)\\&=\mathrm{Tr}(xyz)-\mathrm{Tr}(yxz)\\&=\mathrm{Tr}(xyz)-\mathrm{Tr}(xzy)\\&=\mathrm{Tr}(xyz-xzy)\\&=\mathrm{Tr}(x[y,z])\end{aligned}

Now, if x,y\in L — so [x,y]\in[L,L] — and z\in M then

\displaystyle\mathrm{Tr}([x,y]z)=\mathrm{Tr}(x[y,z])=\mathrm{Tr}([y,z]x)

But since z\in M we know that [y,z]\in[L,L]\subseteq L, which means that the hypothesis kicks in: [y,z]\in[L,L] and x\in L so \mathrm{Tr}([y,z]x)=0.

Since such brackets span [L,L], linearity gives \mathrm{Tr}(xz)=0 for every x\in[L,L] and z\in M, and the trace criterion then tells us that all x\in[L,L] are nilpotent endomorphisms, which makes them ad-nilpotent. Engel's theorem tells us that [L,L] is nilpotent, which means L is solvable.

We can also extend this out to abstract Lie algebras: if L is any Lie algebra such that \mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}(y))=0 for all x\in[L,L] and y\in L, then L is solvable. Indeed, we can apply the linear version to the image \mathrm{ad}(L)\subseteq\mathfrak{gl}(L) to see that this algebra is solvable. The kernel \mathrm{Ker}(\mathrm{ad}) is just the center Z(L), which is abelian and thus automatically solvable. Thus L/Z(L)\cong\mathrm{ad}(L) is a solvable quotient of L by a solvable ideal, so we know that L itself is solvable.
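
As a quick sanity check (a numpy sketch, not part of the proof) we can watch the hypothesis hold in the archetypal solvable linear Lie algebra of upper-triangular matrices, where every element of [L,L] is strictly upper-triangular:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def upper():
    # a random element of the upper-triangular algebra t(n, F)
    return np.triu(rng.standard_normal((n, n)))

def bracket(a, b):
    return a @ b - b @ a

for _ in range(100):
    x = bracket(upper(), upper())   # an element of [L, L]: strictly upper-triangular
    y = upper()                     # an arbitrary element of L
    assert abs(np.trace(x @ y)) < 1e-10
```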

September 1, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

A Trace Criterion for Nilpotence

We’re going to need another way of identifying nilpotent endomorphisms. Let A\subseteq B\subseteq\mathfrak{gl}(V) be two subspaces of endomorphisms on a finite-dimensional space V, and let M be the collection of x\in\mathfrak{gl}(V) such that \mathrm{ad}(x) sends B into A. If x\in M satisfies \mathrm{Tr}(xy)=0 for all y\in M then x is nilpotent.

The first thing we do is take the Jordan-Chevalley decomposition x=s+n, and fix a basis that diagonalizes s with eigenvalues a_i. We define E to be the \mathbb{Q}-subspace of \mathbb{F} spanned by the eigenvalues. If we can prove that this space is trivial, then all the eigenvalues of s must be zero, and thus s itself must be zero.

We proceed by showing that any linear functional f:E\to\mathbb{Q} must be zero. Taking one, we define y\in\mathfrak{gl}(V) to be the endomorphism whose matrix with respect to our fixed basis is diagonal: f(a_i)\delta_{ij}. If \{e_{ij}\} is the corresponding basis of \mathfrak{gl}(V) we can calculate that

\displaystyle\begin{aligned}\left[\mathrm{ad}(s)\right](e_{ij})&=(a_i-a_j)e_{ij}\\\left[\mathrm{ad}(y)\right](e_{ij})&=(f(a_i)-f(a_j))e_{ij}\end{aligned}

Now we can find some polynomial r(T) such that r(a_i-a_j)=f(a_i)-f(a_j); there is no ambiguity here since if a_i-a_j=a_k-a_l then the linearity of f implies that

\displaystyle\begin{aligned}f(a_i)-f(a_j)&=f(a_i-a_j)\\&=f(a_k-a_l)\\&=f(a_k)-f(a_l)\end{aligned}

Further, picking i=j we can see that r(0)=0, so r has no constant term. It should be apparent that \mathrm{ad}(y)=r\left(\mathrm{ad}(s)\right).
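
Concretely, r comes from Lagrange interpolation through the finitely many well-defined points (a_i-a_j, f(a_i)-f(a_j)). A quick sketch, assuming sympy, with made-up sample values standing in for those points:

```python
from sympy import interpolate, symbols

T = symbols('T')
# hypothetical (a_i - a_j, f(a_i) - f(a_j)) pairs; including (0, 0) forces r(0) = 0
points = [(0, 0), (1, 1), (-1, -1), (2, 1)]
r = interpolate(points, T)

assert all(r.subs(T, u) == v for u, v in points)
assert r.subs(T, 0) == 0   # no constant term
```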

Now, we know that \mathrm{ad}(s) is the semisimple part of \mathrm{ad}(x), so the Jordan-Chevalley decomposition lets us write it as a polynomial p\left(\mathrm{ad}(x)\right) with no constant term. But then we can write \mathrm{ad}(y)=r\left(p\left(\mathrm{ad}(x)\right)\right), which again has no constant term. Since \mathrm{ad}(x) maps B into A, so does \mathrm{ad}(y); that is, y\in M, and our hypothesis tells us that

\displaystyle\mathrm{Tr}(xy)=\sum\limits_{i=1}^{\dim V}a_if(a_i)=0

Hitting this with f, and using \mathbb{Q}-linearity to pull out the rational coefficients f(a_i), we find that the sum of the squares of the f(a_i) is also zero; but since these are rational numbers they must all be zero.

Thus the only possible \mathbb{Q}-linear functional on E is zero, meaning that E is trivial, all the eigenvalues of s are zero, and x=n is nilpotent, as asserted.

August 31, 2012 Posted by | Algebra, Lie Algebras, Linear Algebra | 1 Comment

Uses of the Jordan-Chevalley Decomposition

Now that we’ve given the proof, we want to mention a few uses of the Jordan-Chevalley decomposition.

First, we let A be any finite-dimensional \mathbb{F}-algebra — associative, Lie, whatever — and remember that \mathrm{End}_\mathbb{F}(A) contains the Lie algebra of derivations \mathrm{Der}(A). I say that if \delta\in\mathrm{Der}(A) then so are its semisimple part \sigma and its nilpotent part \nu; it’s enough to show that \sigma is.

Just like we decomposed V in the proof of the Jordan-Chevalley decomposition, we can break A down into the eigenspaces of \delta — or, equivalently, of \sigma. But this time we will index them by the eigenvalue: A_a consists of those x\in A such that \left[\delta-aI\right]^k(x)=0 for sufficiently large k.

Now we have the identity:

\displaystyle\left[\delta-(a+b)I\right]^n(xy)=\sum\limits_{i=0}^n\binom{n}{i}\left[\delta-aI\right]^{n-i}(x)\left[\delta-bI\right]^i(y)

which is easily verified. If a sufficiently large power of \delta-aI applied to x and a sufficiently large power of \delta-bI applied to y are both zero, then for sufficiently large n one or the other factor in each term will be zero, and so the entire sum is zero. Thus we verify that A_aA_b\subseteq A_{a+b}.
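
For a little extra confidence, here's a numerical spot-check of the identity (a numpy sketch) using the inner derivation \delta=\mathrm{ad}(d) on the associative algebra of 3\times3 matrices, with arbitrarily chosen scalars:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(5)
d, x, y = (rng.standard_normal((3, 3)) for _ in range(3))
a, b, n = 0.7, -1.3, 4                    # arbitrary scalars and exponent

def delta(m):
    # the inner derivation ad(d): m -> dm - md
    return d @ m - m @ d

def shifted_power(m, c, k):
    # apply (delta - c I)^k to m
    for _ in range(k):
        m = delta(m) - c * m
    return m

lhs = shifted_power(x @ y, a + b, n)
rhs = sum(comb(n, i) * shifted_power(x, a, n - i) @ shifted_power(y, b, i)
          for i in range(n + 1))
assert np.allclose(lhs, rhs)
```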

If we take x\in A_a and y\in A_b then xy\in A_{a+b}, and thus \sigma(xy)=(a+b)xy. On the other hand,

\displaystyle\begin{aligned}\sigma(x)y+x\sigma(y)&=axy+bxy\\&=(a+b)xy\end{aligned}

And thus \sigma satisfies the derivation property

\displaystyle\sigma(xy)=\sigma(x)y+x\sigma(y)

so \sigma and \nu are both in \mathrm{Der}(A).

For our second application, we note that, just as the adjoint of a nilpotent endomorphism is nilpotent, the adjoint of a semisimple endomorphism is semisimple. Indeed, if \{v_i\}_{i=1}^n is a basis of V such that the matrix of x is diagonal with eigenvalues \{a_i\}, then we let e_{ij} be the standard basis element of \mathfrak{gl}(n,\mathbb{F}), which is isomorphic to \mathfrak{gl}(V) using the basis \{v_i\}. It's a straightforward calculation to verify that

\displaystyle\left[\mathrm{ad}(x)\right](e_{ij})=(a_i-a_j)e_{ij}

and thus \mathrm{ad}(x) is diagonal with respect to this basis.
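
Here's that straightforward calculation done numerically: a numpy sketch that identifies \mathfrak{gl}(V) with row-major-flattened matrices, so the matrix of \mathrm{ad}(x) is the Kronecker expression x\otimes I-I\otimes x^T, and checks that for diagonal x it comes out diagonal with entries a_i-a_j:

```python
import numpy as np

a = np.array([2., 3., 7.])                # hypothetical eigenvalues a_i
x = np.diag(a)
I = np.eye(3)

ad_x = np.kron(x, I) - np.kron(I, x.T)    # matrix of m -> xm - mx on flattened m

diffs = (a[:, None] - a[None, :]).ravel() # a_i - a_j, indexed like e_{ij}
assert np.allclose(ad_x, np.diag(diffs))
```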

So now if x=x_s+x_n is the Jordan-Chevalley decomposition of x, then \mathrm{ad}(x_s) is semisimple and \mathrm{ad}(x_n) is nilpotent. They commute, since

\displaystyle\begin{aligned}\left[\mathrm{ad}(x_s),\mathrm{ad}(x_n)\right]&=\mathrm{ad}\left([x_s,x_n]\right)\\&=\mathrm{ad}(0)=0\end{aligned}

Since \mathrm{ad}(x)=\mathrm{ad}(x_s)+\mathrm{ad}(x_n) is the decomposition of \mathrm{ad}(x) into a semisimple and a nilpotent part which commute with each other, it is the Jordan-Chevalley decomposition of \mathrm{ad}(x).

August 30, 2012 Posted by | Algebra, Lie Algebras, Linear Algebra | 3 Comments

The Jordan-Chevalley Decomposition (proof)

We now give the proof of the Jordan-Chevalley decomposition. We let x have distinct eigenvalues \{a_i\}_{i=1}^k with multiplicities \{m_i\}_{i=1}^k, so the characteristic polynomial of x is

\displaystyle\prod\limits_{i=1}^k(T-a_i)^{m_i}

We set V_i=\mathrm{Ker}\left((x-a_iI)^{m_i}\right) so that V is the direct sum of these subspaces, each of which is fixed by x.

On the subspace V_i, x has the characteristic polynomial (T-a_i)^{m_i}. What we want is a single polynomial p(T) such that

\displaystyle\begin{aligned}p(T)&\equiv a_i\mod (T-a_i)^{m_i}\\p(T)&\equiv0\mod T\end{aligned}

That is, p(T) has no constant term, and for each i there is some k_i(T) such that

\displaystyle p(T)=(T-a_i)^{m_i}k_i(T)+a_i

Thus, if we evaluate p(x) on the V_i block we get a_i.

To do this, we will make use of a result that usually comes up in number theory called the Chinese remainder theorem. Unfortunately, I didn't have the foresight to cover number theory before Lie algebras, so I'll just give the statement: any system of congruences like the one above, where the moduli are relatively prime, has a common solution, which is unique modulo the product of the separate moduli. The moduli above are indeed relatively prime, unless 0 is an eigenvalue; in that case we just leave out the last congruence, since the congruence modulo (T-a_i)^{m_i} with a_i=0 already implies it. For example, the system

\displaystyle\begin{aligned}x&\equiv2\mod3\\x&\equiv3\mod4\\x&\equiv1\mod5\end{aligned}

has the solution 11, which is unique modulo 3\cdot4\cdot5=60. This is pretty straightforward to understand for integers, but it works as stated over any principal ideal domain — like \mathbb{F}[T] — and, suitably generalized, over any commutative ring.
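
The worked example, checked with sympy's built-in Chinese remainder solver; crt takes the list of moduli and the list of residues and returns the common solution along with the combined modulus:

```python
from sympy.ntheory.modular import crt

solution, modulus = crt([3, 4, 5], [2, 3, 1])
print(solution, modulus)   # 11 60
```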

So anyway, such a p exists, and it's the p we need to get the semisimple part of x. Indeed, on each block V_i, x_s=p(x) acts as the scalar a_i, so x_s differs from x by stripping off the off-diagonal elements. Then we can just set q(T)=T-p(T) and find x_n=q(x). Any two polynomials in x must commute — indeed we can simply calculate

\displaystyle\begin{aligned}x_sx_n&=p(x)q(x)\\&=q(x)p(x)\\&=x_nx_s\end{aligned}

Finally, if x:B\to A then so does any polynomial in x with no constant term (and p and q both have none), so the last assertion of the decomposition holds.

The only thing left is the uniqueness of the decomposition. Let's say that x=s+n is a different decomposition into a semisimple and a nilpotent part which commute with each other. Then we have x_s-s=n-x_n; since s and n commute with x, they commute with the polynomials x_s and x_n, so all four of these endomorphisms commute with each other. But the left-hand side is semisimple, since commuting diagonalizable endomorphisms are simultaneously diagonalizable, while the right-hand side is nilpotent, so its only possible eigenvalue is zero. An endomorphism that is both semisimple and nilpotent must be zero, and thus s=x_s and n=x_n.
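
The proof produces x_s as the polynomial p(x), but for a concrete matrix it's easier to read the two parts off the Jordan form directly. A sketch assuming sympy:

```python
from sympy import Matrix, zeros

x = Matrix([[3, 1, 0],
            [0, 3, 0],
            [0, 0, 5]])

P, J = x.jordan_form()                 # x = P J P^{-1}
D = Matrix.diag(*[J[i, i] for i in range(J.rows)])
x_s = P * D * P.inv()                  # semisimple part: J with its superdiagonal stripped
x_n = x - x_s                          # nilpotent part

assert x_s * x_n == x_n * x_s          # the parts commute
assert x_n ** x.rows == zeros(x.rows)  # x_n is nilpotent
```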

August 28, 2012 Posted by | Algebra, Linear Algebra | 1 Comment

The Jordan-Chevalley Decomposition

We recall that any linear endomorphism of a finite-dimensional vector space over an algebraically closed field can be put into Jordan normal form: we can find a basis such that its matrix is the sum of blocks that look like

\displaystyle\begin{pmatrix}\lambda&1&&&{0}\\&\lambda&1&&\\&&\ddots&\ddots&\\&&&\lambda&1\\{0}&&&&\lambda\end{pmatrix}

where \lambda is some eigenvalue of the transformation. We want a slightly more abstract version of this, and it hinges on the idea that matrices in Jordan normal form have an obvious diagonal part, and a bunch of entries just above the diagonal. This off-diagonal part is all in the upper-triangle, so it is nilpotent; the diagonalizable part we call “semisimple”. And what makes this particular decomposition special is that the two parts commute. Indeed, the block-diagonal form means we can carry out the multiplication block-by-block, and in each block one factor is a constant multiple of the identity, which clearly commutes with everything.
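
A quick numpy check of this observation on a single (hypothetical) Jordan block: the diagonal part and the part just above the diagonal commute, and the latter is nilpotent:

```python
import numpy as np

lam, n = 4.0, 5                                       # made-up eigenvalue and block size
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)    # a Jordan block
D = np.diag(np.diag(J))                               # semisimple (diagonal) part
N = J - D                                             # nilpotent part

assert np.allclose(D @ N, N @ D)                      # the two parts commute
assert np.allclose(np.linalg.matrix_power(N, n), 0)   # N is nilpotent
```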

More generally, we will have the Jordan-Chevalley decomposition of an endomorphism: any x\in\mathrm{End}(V) can be written uniquely as the sum x=x_s+x_n, where x_s is semisimple — diagonalizable — and x_n is nilpotent, and where x_s and x_n commute with each other.

Further, we will find that there are polynomials p(T) and q(T) — each with no constant term — such that p(x)=x_s and q(x)=x_n. And thus we will find that any endomorphism that commutes with x will also commute with both x_s and x_n.

Finally, if A\subseteq B\subseteq V is any pair of subspaces such that x:B\to A then the same is true of both x_s and x_n.

We will prove these next time, but let’s see that this is actually true of the Jordan normal form. The first part we’ve covered.

For the second, set aside the assertion about p and q; any endomorphism commuting with x either multiplies each block by a constant or shuffles similar blocks, and both of these operations commute with both x_s and x_n.

For the last part, we may as well assume that B=V, since otherwise we can just restrict to x\vert_B\in\mathrm{End}(B). If \mathrm{Im}(x)\subseteq A then the Jordan normal form shows us that any complementary subspace to A must be spanned by blocks with eigenvalue 0. In particular, it can only touch the last row of any such block. But none of these rows are in the range of either the diagonal or off-diagonal portions of the matrix.

August 28, 2012 Posted by | Algebra, Linear Algebra | 3 Comments

Flags

We’d like to have matrix-oriented versions of Engel’s theorem and Lie’s theorem, and to do that we’ll need flags. I’ve actually referred to flags long, long ago, but we’d better go through them now.

In its simplest form, a flag is simply a strictly-increasing sequence of subspaces \{V_k\}_{k=0}^n of a given finite-dimensional vector space. And we almost always say that a flag starts with V_0=0 and ends with V_n=V. In the middle we have some other subspaces, each one strictly including the one below it. We say that a flag is “complete” if \dim(V_k)=k — and thus n=\dim(V) — and for our current purposes all flags will be complete unless otherwise mentioned.

The useful thing about flags is that they’re a little more general and “geometric” than ordered bases. Indeed, given an ordered basis \{e_k\}_{k=1}^n we have a flag on V: define V_k to be the span of \{e_i\}_{i=1}^k. As a partial converse, given any (complete) flag we can come up with a not-at-all-unique basis: at each step let e_k be any preimage in V_k of some nonzero vector in the one-dimensional quotient V_k/V_{k-1}.

We say that an endomorphism of V “stabilizes” a flag if it sends each V_k back into itself. In fact, we saw something like this in the proof of Lie’s theorem: we built a complete flag on the subspace W_n, building the subspace up one basis element at a time, and then showed that each k\in K stabilized that flag. More generally, we say a collection of endomorphisms stabilizes a flag if all the endomorphisms in the collection do.

So, what do Lie’s and Engel’s theorems tell us about flags? Well, Lie’s theorem tells us that if L\subseteq\mathfrak{gl}(V) is solvable then it stabilizes some flag in V. Equivalently, there is some basis with respect to which the matrices of all elements of L are upper-triangular. In other words, L is isomorphic to some subalgebra of \mathfrak{t}(\dim(V),\mathbb{F}). We see that not only is \mathfrak{t}(n,\mathbb{F}) solvable, it is in a sense the archetypal solvable Lie algebra.

The proof is straightforward: Lie’s theorem tells us that L has a common eigenvector v_1\in V. We let this span the one-dimensional subspace V_1 and consider the action of L on the quotient W_1=V/V_1. Since we know that the image of L in \mathfrak{gl}(W_1) will again be solvable, we get a common eigenvector w_2\in W_1. Choosing a pre-image v_2\in V with w_2=v_2+\mathbb{F}v_1 we get our second basis vector. We can continue like this, building up a basis of V such that at each step we can write l(v_k)\in\lambda_k(l)v_k+V_{k-1} for all l\in L and some \lambda_k\in L^*.

For nilpotent L, the same is true — of course, nilpotent Lie algebras are automatically solvable — but Engel’s theorem tells us more: the functionals \lambda_k must all be zero, and so the diagonal entries of the above matrices are all zero. We conclude that any nilpotent L is isomorphic to some subalgebra of \mathfrak{n}(\dim(V),\mathbb{F}). That is, not only is \mathfrak{n}(n,\mathbb{F}) nilpotent, it is the archetype of all nilpotent Lie algebras in just the same way as \mathfrak{t}(n,\mathbb{F}) is the archetypal solvable Lie algebra.

More generally, if L is any solvable (nilpotent) Lie algebra and \phi:L\to\mathfrak{gl}(V) is any finite-dimensional representation of L, then we know that the image \phi(L) is a solvable (nilpotent) linear Lie algebra acting on V, and thus it must stabilize some flag of V. As a particular example, consider the adjoint action \mathrm{ad}:L\to\mathfrak{gl}(L); a subspace of L invariant under the adjoint action of L is just the same thing as an ideal of L, so we find that there must be some chain of ideals:

\displaystyle 0=I_0\subseteq I_1\subseteq\dots\subseteq I_{n-1}\subseteq I_n=L

where \dim(I_k)=k. Given such a chain, we can of course find a basis of L with respect to which the matrices of the adjoint action are all in \mathfrak{t}(\dim(L),\mathbb{F}) (\mathfrak{n}(\dim(L),\mathbb{F})).

In either case, we find that [L,L] is nilpotent. Indeed, if L is already nilpotent this is trivial. But if L is merely solvable, we see that the matrices of the commutators [\mathrm{ad}(x),\mathrm{ad}(y)] for x,y\in L lie in

\displaystyle [\mathfrak{t}(\dim(L),\mathbb{F}),\mathfrak{t}(\dim(L),\mathbb{F})]=\mathfrak{n}(\dim(L),\mathbb{F})

But since \mathrm{ad} is a homomorphism, this is the matrix of \mathrm{ad}([x,y]) acting on L, and obviously its action on the subalgebra [L,L] is nilpotent as well. Thus each element of [L,L] is ad-nilpotent, and Engel’s theorem then tells us that [L,L] is a nilpotent Lie algebra.
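
A numpy sketch of that last inclusion: the commutator of two upper-triangular matrices is strictly upper-triangular, and hence nilpotent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

for _ in range(100):
    a = np.triu(rng.standard_normal((n, n)))    # elements of t(n, F)
    b = np.triu(rng.standard_normal((n, n)))
    c = a @ b - b @ a                           # a commutator, living in [t, t]
    assert np.allclose(c, np.triu(c, k=1))      # strictly upper-triangular: in n(n, F)
    assert np.allclose(np.linalg.matrix_power(c, n), 0)
```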

August 25, 2012 Posted by | Algebra, Lie Algebras | Leave a comment

Lie’s Theorem

The lemma leading to Engel’s theorem boils down to the assertion that there is some common eigenvector for all the endomorphisms in a nilpotent linear Lie algebra L\subseteq\mathfrak{gl}(V) on a finite-dimensional nonzero vector space V. Lie’s theorem says that the same is true of solvable linear Lie algebras. Of course, in the nilpotent case the only possible eigenvalue was zero, so we may find things a little more complicated now. We will, however, have to assume that \mathbb{F} is algebraically closed and of characteristic zero, so that no nonzero integer multiple of the unit in \mathbb{F} is zero.

We will proceed by induction on the dimension of L using the same four basic steps as in the lemma: find an ideal K\subseteq L of codimension one, so we can write L=K+\mathbb{F}z for some z\in L\setminus K; find common eigenvectors for K; find a subspace of such common eigenvectors stabilized by L; find in that space an eigenvector for z.

First, solvability says that L properly includes [L,L], or else the derived series wouldn’t be able to even start heading towards 0. The quotient L/[L,L] must be abelian, with all brackets zero, so we can pick any subspace of this quotient with codimension one and it will be an ideal. The preimage of this subspace under the quotient projection will then be an ideal K\subseteq L of codimension one.

Now, K is a subalgebra of L, so we know it’s also solvable. If K is zero, then L must be one-dimensional abelian, and a single endomorphism has an eigenvector since \mathbb{F} is algebraically closed, so the theorem holds. Otherwise, induction tells us that there’s a common eigenvector v\in V for the action of K; that is, there is some linear functional \lambda\in K^* defined by

\displaystyle k(v)=\lambda(k)v

Of course, v is not the only such eigenvector; we define the (nonzero) subspace W by

\displaystyle W=\{w\in V\vert\forall k\in K, k(w)=\lambda(k)w\}

Next we must show that L sends W back into itself. To see this, pick l\in L and k\in K and check that

\displaystyle\begin{aligned}k(l(w))&=l(k(w))-[l,k](w)\\&=l(\lambda(k)w)-\lambda([l,k])w\\&=\lambda(k)l(w)-\lambda([l,k])w\end{aligned}

But if l(w)\in W, then we’d have k(l(w))=\lambda(k)l(w); we need to verify that \lambda([l,k])=0. In the nilpotent case — Engel’s theorem — the functional \lambda was constantly zero, so this was easy, but it’s a bit harder here.

Fixing w\in W and l\in L we pick n to be the first index where the collection \{l^i(w)\}_{i=0}^n is linearly dependent — the first one where we can express l^n(w) as a linear combination of all the previous l^i(w). If we write W_i for the subspace spanned by the first i of these vectors, then the dimension of W_i grows one-by-one until we get to \dim(W_n)=n, and W_{n+i}=W_n from then on.

I say that each of the W_i are invariant under each k\in K. Indeed, we can prove the congruence

\displaystyle k(l^i(w))\equiv\lambda(k)l^i(w)\quad\mod W_i

that is, k acts on l^i(w) by multiplication by \lambda(k), plus some “lower-order terms”. For i=0 this is the definition of \lambda; in general we have

\displaystyle\begin{aligned}k(l^i(w))&=k(l(l^{i-1}(w)))\\&=l(k(l^{i-1}(w)))-[l,k](l^{i-1}(w))\\&=\lambda(k)l^i(w)+l(w')-\lambda([l,k])l^{i-1}(w)-w''\end{aligned}

for some w',w''\in W_{i-1}.

And so we conclude that, using the obvious basis of W_n, the action of k on this subspace takes the form of an upper-triangular matrix with \lambda(k) down the diagonal. The trace of this matrix is n\lambda(k). In particular, the trace of the action of [l,k] on W_n is n\lambda([l,k]). But l and k both act as endomorphisms of W_n — the one by design and the other by the above proof — and the trace of any commutator is zero! Since n is a nonzero element of a field of characteristic zero, it has an inverse, and we conclude that \lambda([l,k])=0.

So the action of L does send W back into itself. We finish up by picking some eigenvector v_0\in W for the action of z, which we know must exist because we’re working over an algebraically closed field. Incidentally, we can then extend \lambda to all of L by using z(v_0)=\lambda(z)v_0.
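
Lie’s theorem in action for the archetypal solvable algebra (a numpy sketch): in \mathfrak{t}(n,\mathbb{F}) the first standard basis vector is a common eigenvector, and the functional \lambda just reads off the top-left entry:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
v = np.zeros(n)
v[0] = 1.0                                     # candidate common eigenvector

for _ in range(100):
    k = np.triu(rng.standard_normal((n, n)))   # an element of t(n, F)
    assert np.allclose(k @ v, k[0, 0] * v)     # k(v) = lambda(k) v, with lambda(k) = k_11
```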

August 25, 2012 Posted by | Algebra, Lie Algebras | 1 Comment

Engel’s Theorem

When we say that a Lie algebra L is nilpotent, another way of putting it is that for any sufficiently long sequence \{x_i\} of elements of L the nested adjoint

\displaystyle\mathrm{ad}(x_n)\left[\dots\mathrm{ad}(x_2)\left[\mathrm{ad}(x_1)[y]\right]\right]

is zero for all y\in L. In particular, applying \mathrm{ad}(x) enough times will eventually kill any element of L. That is, each x\in L is ad-nilpotent. It turns out that the converse is also true, which is the content of Engel’s theorem.

But first we prove this lemma: if L\subseteq\mathfrak{gl}(V) is a linear Lie algebra on a finite-dimensional, nonzero vector space V that consists of nilpotent endomorphisms, then there is some nonzero v\in V for which l(v)=0 for all l\in L.

If \dim(L)=1 then L is spanned by a single nilpotent endomorphism, which has only the eigenvalue zero, and must have an eigenvector v, proving the lemma in this case.

If K is any proper subalgebra of L then \mathrm{ad}(k)\in\mathrm{End}(L) is nilpotent for all k\in K; indeed, \mathrm{ad}(k) is the difference of the commuting operations of left- and right-multiplication by k, so a sufficiently high power of \mathrm{ad}(k) expands into terms that each contain a high power of the nilpotent k. We also get an everywhere-nilpotent action on the quotient vector space L/K. But since \dim(K)<\dim(L), the induction hypothesis gives us a nonzero vector x+K\in L/K that gets killed by every k\in K. But this means that [k,x]\in K for all k\in K, while x\notin K. That is, K is strictly contained in the normalizer N_L(K).

Now instead of just taking any subalgebra, let K be a maximal proper subalgebra in L. Since K is properly contained in N_L(K), we must have N_L(K)=L, and thus K is actually an ideal of L. If \dim(L/K)>1 then we could find an even larger subalgebra of L containing K, in contradiction to our assumption, so as vector spaces we can write L\cong K+\mathbb{F}z for any z\in L\setminus K.

Finally, let W\subseteq V consist of those vectors killed by all k\in K, which the inductive hypothesis tells us is a nonzero subspace. Since K is an ideal, L sends W back into itself: k(l(w))=l(k(w))-[l,k](w)=0. Picking a z\in L\setminus K as above, its action on W is nilpotent, so it must have an eigenvector w\in W with z(w)=0. Thus l(w)=0 for all l\in L=K+\mathbb{F}z.
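
A numpy illustration of the lemma on the strictly upper-triangular matrices, which consist of nilpotent endomorphisms and jointly kill the first standard basis vector:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
v = np.zeros(n)
v[0] = 1.0

for _ in range(100):
    l = np.triu(rng.standard_normal((n, n)), k=1)   # strictly upper-triangular: nilpotent
    assert np.allclose(l @ v, 0)                    # l(v) = 0 for every such l
```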

So, now, to Engel’s theorem. We take a Lie algebra L consisting of ad-nilpotent elements. Thus the algebra \mathrm{ad}(L)\subseteq\mathfrak{gl}(L) consists of nilpotent endomorphisms on the vector space L, and there is thus some nonzero z\in L for which [L,z]=0. That is, L has a nontrivial center — z\in Z(L).

The quotient L/Z(L) thus has a lower dimension than L, and it also consists of ad-nilpotent elements. By induction on the dimension of L we assume that L/Z(L) is actually nilpotent, which proves that L itself is nilpotent.

August 22, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

Facts About Solvability and Nilpotence

Solvability is an interesting property of a Lie algebra L, in that it tends to “infect” many related algebras. For one thing, all subalgebras and quotient algebras of L are also solvable. For the first count, it should be clear that if K\subseteq L then K^{(i)}\subseteq L^{(i)}. On the other hand, if L\to L/I is a quotient epimorphism then any element in (L/I)^{(i)} has a representative in L^{(i)}, so if the derived series of L bottoms out at 0 then so must the derived series of L/I.

As a sort of converse, suppose that L/I is a solvable quotient of L by a solvable ideal I; then L is itself solvable. Indeed, if (L/I)^{(n)}=0 and \pi:L\to L/I is the quotient epimorphism then \pi(L^{(n)})=0, as we saw above. That is, L^{(n)}\subseteq I, but since I is solvable this means that L^{(n)} — as a subalgebra — is solvable, and thus L is as well.

Finally, if I and J are solvable ideals of L then so is I+J. Here, we can use the third isomorphism theorem to establish an isomorphism (I+J)/J\cong I/(I\cap J). The right hand side is a quotient of I, and so it’s solvable, which makes (I+J)/J a solvable quotient by a solvable ideal, meaning that I+J is itself solvable.

As an application, let L be any Lie algebra and let S be a maximal solvable ideal, contained in no larger solvable ideal. If I is any other solvable ideal, then S+I is solvable as well, and it obviously contains S. But maximality then tells us that S+I=S, from which we conclude that I\subseteq S. Thus we conclude that the maximal solvable ideal S is unique; we call it the “radical” of L, written \mathrm{Rad}(L).

In the case that the radical of L is zero, we say that L is “semisimple”. In particular, a simple Lie algebra is semisimple, since the only ideals of L are itself and 0, and L is not solvable.

In general, the quotient L/\mathrm{Rad}(L) is semisimple, since if it had a solvable ideal it would have to be of the form I/\mathrm{Rad}(L) for some I\subseteq L containing \mathrm{Rad}(L). But if I/\mathrm{Rad}(L) is a solvable quotient of I by a solvable ideal, then I must be solvable, which means it must be contained in the radical of L. Thus the only solvable ideal of L/\mathrm{Rad}(L) is 0, as we said.

We also have some useful facts about nilpotent algebras. First off, just as for solvable algebras all subalgebras and quotient algebras of a nilpotent algebra are nilpotent. Even the proof is all but identical.

Next, if L/Z(L) — where Z(L) is the center of L — is nilpotent then L is as well. Indeed, to say that (L/Z(L))^n=0 is to say that L^n\subseteq Z(L) for some n. But then L^{n+1}=[L,L^n]\subseteq[L,Z(L)]=0.

Finally, if L\neq0 is nilpotent, then Z(L)\neq0. To see this, note that if L^{n+1} is the first term of the descending central series that equals zero, then 0\neq L^n\subseteq Z(L), since the brackets of everything in L^n with anything in L are all zero.
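
To put the definitions to work, here’s a numpy sketch that computes the dimensions along the derived series of a matrix Lie algebra from a spanning set, by repeatedly bracketing and re-spanning; run on the upper-triangular algebra \mathfrak{t}(2,\mathbb{F}) it bottoms out at zero, confirming solvability:

```python
import numpy as np

def span_basis(mats, tol=1e-10):
    # reduce a list of matrices to an orthonormal basis of their span (Gram-Schmidt)
    basis = []
    for m in mats:
        v = m.ravel().astype(float)
        for b in basis:
            v = v - (v @ b) * b
        if np.linalg.norm(v) > tol:
            basis.append(v / np.linalg.norm(v))
    return basis

def derived_series_dims(gens):
    # dimensions of L, [L,L], [[L,L],[L,L]], ... until the series hits zero
    n = gens[0].shape[0]
    dims, current = [], gens
    while True:
        basis = span_basis(current)
        dims.append(len(basis))
        if not basis:
            return dims
        mats = [b.reshape(n, n) for b in basis]
        current = [p @ q - q @ p for p in mats for q in mats]

# t(2, F) is spanned by E11, E12, E22
E11 = np.array([[1., 0.], [0., 0.]])
E12 = np.array([[0., 1.], [0., 0.]])
E22 = np.array([[0., 0.], [0., 1.]])
print(derived_series_dims([E11, E12, E22]))   # [3, 1, 0]: solvable
```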

August 21, 2012 Posted by | Algebra, Lie Algebras | 7 Comments