Nilpotent and Solvable Lie Algebras
There are two big types of Lie algebras that we want to take care of right up front, and both of them are defined similarly. We remember that if $I$ and $J$ are ideals of a Lie algebra $L$, then $[I,J]$ — the collection spanned by brackets of elements of $I$ and $J$ — is also an ideal of $L$. And since the bracket of any element of $I$ with any element of $J$ is back in $I$, we can see that $[I,J]\subseteq I$. Similarly we conclude $[I,J]\subseteq J$, so $[I,J]\subseteq I\cap J$.
Now, starting from $L$ we can build up a tower of ideals starting with $L^{(0)}=L$ and moving down by $L^{(i+1)}=[L^{(i)},L^{(i)}]$. We call this the “derived series” of $L$. If this tower eventually bottoms out at $0$ we say that $L$ is “solvable”. If $L$ is abelian we see that $L^{(1)}=[L,L]=0$, so $L$ is automatically solvable. At the other extreme, if $L$ is simple — and thus not abelian — the only possibility is $[L,L]=L$, so the derived series never gets down to $0$, and thus $L$ is not solvable.
We can build up another tower, again starting with $L^0=L$, but this time moving down by $L^{i+1}=[L,L^i]$. We call this the “lower central series” or “descending central series” of $L$. If this tower eventually bottoms out at $0$ we say that $L$ is “nilpotent”. Just as above we see that abelian Lie algebras are automatically nilpotent, while simple Lie algebras are never nilpotent.
It’s not too hard to see that $L^{(i)}\subseteq L^i$ for all $i$. Indeed, $L^{(0)}=L=L^0$ to start. Then if $L^{(i)}\subseteq L^i$ then $L^{(i+1)}=[L^{(i)},L^{(i)}]\subseteq[L,L^i]=L^{i+1}$, so the assertion follows by induction. Thus we see that any nilpotent algebra is solvable, but solvable algebras are not necessarily nilpotent.
As some explicit examples, we look back at the algebras $\mathfrak{t}(n,\mathbb{F})$ and $\mathfrak{n}(n,\mathbb{F})$ of upper-triangular and strictly upper-triangular matrices. The second, as we might guess, is nilpotent, and thus solvable. The first, though, is merely solvable.
First, let’s check that $\mathfrak{n}(n,\mathbb{F})$ is nilpotent. The obvious basis consists of all the matrix entries $e_{ij}$ with $i<j$, and we know that
$$[e_{ij},e_{kl}]=\delta_{jk}e_{il}-\delta_{li}e_{kj}$$
We have an obvious sense of the “level” of an element: the difference $j-i$, which is well-defined on each basis element. We can tell that the bracket of two basis elements gives either zero or another basis element whose level is the sum of the levels of the first two basis elements. The ideal $\mathfrak{n}(n,\mathbb{F})^1$ is spanned by all the basis elements of level $2$ and higher. The ideal $\mathfrak{n}(n,\mathbb{F})^2$ is then spanned by basis elements of level $3$ and higher. And so it goes, each $\mathfrak{n}(n,\mathbb{F})^i$ spanned by basis elements of level $i+1$ and higher. But this must run out soon enough, since the highest possible level is $n-1$. In terms of the matrix, elements of $\mathfrak{n}(n,\mathbb{F})$ are zero everywhere on or below the diagonal; elements of $\mathfrak{n}(n,\mathbb{F})^1$ are also zero one row above the diagonal; and so on, each step pushing the nonzero elements “off the edge” to the upper-right of the matrix. Thus $\mathfrak{n}(n,\mathbb{F})^{n-1}=0$, so $\mathfrak{n}(n,\mathbb{F})$ is nilpotent, and thus solvable as well.
Turning to $\mathfrak{t}(n,\mathbb{F})$, we already know that $[\mathfrak{t}(n,\mathbb{F}),\mathfrak{t}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F})$, which we just showed to be solvable! We see that $\mathfrak{t}(n,\mathbb{F})^{(i+1)}=\mathfrak{n}(n,\mathbb{F})^{(i)}$, which will eventually bottom out at $0$, thus $\mathfrak{t}(n,\mathbb{F})$ is solvable as well. However, we can also calculate that $[\mathfrak{t}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F})$, and so the lower central series of $\mathfrak{t}(n,\mathbb{F})$ stops at $\mathfrak{n}(n,\mathbb{F})$ after the first term and never reaches $0$. Thus this algebra is solvable, but not nilpotent.
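If you’d like to see this by machine, here’s a minimal computational sketch (assuming Python with numpy) that spans brackets of basis matrices and reports the dimensions along the lower central series of $\mathfrak{n}(4,\mathbb{F})$ and $\mathfrak{t}(4,\mathbb{F})$:

```python
import numpy as np
from itertools import product

n = 4

def e(i, j):
    """Basic matrix with a 1 in row i, column j."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

def span_dim(mats):
    """Dimension of the span of a list of matrices."""
    if not mats:
        return 0
    return np.linalg.matrix_rank(np.array([m.flatten() for m in mats]))

def lower_central_dims(basis):
    """Dimensions along L = L^0 >= L^1 >= ... where L^{i+1} = [L, L^i]."""
    dims, current = [span_dim(basis)], list(basis)
    while dims[-1] > 0:
        current = [bracket(a, b) for a, b in product(basis, current)]
        d = span_dim(current)
        if d == dims[-1]:      # the series has stabilized above zero
            break
        dims.append(d)
    return dims

n_basis = [e(i, j) for i in range(n) for j in range(n) if i < j]   # n(4,F)
t_basis = [e(i, j) for i in range(n) for j in range(n) if i <= j]  # t(4,F)

print(lower_central_dims(n_basis))  # [6, 3, 1, 0]: n(4,F) is nilpotent
print(lower_central_dims(t_basis))  # [10, 6]: stabilizes at n(4,F), so t(4,F) is not nilpotent
```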
An Explicit Example
Let’s pause and catch our breath with an actual example of some of the things we’ve been talking about. Specifically, we’ll consider $\mathfrak{sl}(2,\mathbb{F})$ — the special linear Lie algebra on a two-dimensional vector space. This is a nice example not only because it’s nicely representative of some general phenomena, but also because the algebra itself is three-dimensional, which helps keep clear the distinction between $\mathfrak{sl}(2,\mathbb{F})$ as a Lie algebra and the adjoint action of $\mathfrak{sl}(2,\mathbb{F})$ on itself, particularly since these are both thought of in terms of matrix multiplications.
Now, we know a basis for this algebra:
$$x=\begin{pmatrix}0&1\\0&0\end{pmatrix}\qquad h=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\qquad y=\begin{pmatrix}0&0\\1&0\end{pmatrix}$$
which we will take in this order. We want to check each of the brackets of these basis elements:
$$[h,x]=2x\qquad[h,y]=-2y\qquad[x,y]=h$$
Writing out each bracket of basis elements as a (unique) linear combination of basis elements specifies the bracket completely, by linearity. We call the coefficients the “structure constants” of $\mathfrak{sl}(2,\mathbb{F})$, and they determine the algebra up to isomorphism.
Okay, now we want to use this basis of the vector space $\mathfrak{sl}(2,\mathbb{F})$ and write down matrices for the action of $\mathrm{ad}$ on $\mathfrak{sl}(2,\mathbb{F})$:
$$\mathrm{ad}(x)=\begin{pmatrix}0&-2&0\\0&0&1\\0&0&0\end{pmatrix}\qquad\mathrm{ad}(h)=\begin{pmatrix}2&0&0\\0&0&0\\0&0&-2\end{pmatrix}\qquad\mathrm{ad}(y)=\begin{pmatrix}0&0&0\\-1&0&0\\0&2&0\end{pmatrix}$$
Now, both $\mathrm{ad}(x)$ and $\mathrm{ad}(y)$ are nilpotent. In the case of $\mathrm{ad}(x)$ we can see that it sends the line spanned by $y$ to the line spanned by $h$, the line spanned by $h$ to the line spanned by $x$, and the line spanned by $x$ to zero. So we can calculate the powers:
$$\mathrm{ad}(x)^2=\begin{pmatrix}0&0&-2\\0&0&0\\0&0&0\end{pmatrix}\qquad\mathrm{ad}(x)^3=0$$
and the exponential:
$$\exp(\mathrm{ad}(x))=1+\mathrm{ad}(x)+\frac{1}{2}\mathrm{ad}(x)^2=\begin{pmatrix}1&-2&-1\\0&1&1\\0&0&1\end{pmatrix}$$
Similarly we can calculate the exponential of $\mathrm{ad}(y)$:
$$\exp(\mathrm{ad}(y))=1+\mathrm{ad}(y)+\frac{1}{2}\mathrm{ad}(y)^2=\begin{pmatrix}1&0&0\\-1&1&0\\-1&2&1\end{pmatrix}$$
So now it’s a simple matter to write down the following element of $\mathrm{Int}(\mathfrak{sl}(2,\mathbb{F}))$:
$$\sigma=\exp(\mathrm{ad}(x))\exp(\mathrm{ad}(-y))\exp(\mathrm{ad}(x))=\begin{pmatrix}0&0&-1\\0&-1&0\\-1&0&0\end{pmatrix}$$
In other words, $\sigma(x)=-y$, $\sigma(h)=-h$, and $\sigma(y)=-x$.
We can also see that $x$ and $y$ are themselves nilpotent, as endomorphisms of the two-dimensional vector space $V$. We can calculate their exponentials:
$$\exp(x)=\begin{pmatrix}1&1\\0&1\end{pmatrix}\qquad\exp(-y)=\begin{pmatrix}1&0\\-1&1\end{pmatrix}$$
and the product:
$$s=\exp(x)\exp(-y)\exp(x)=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$$
It’s easy to check from here that conjugation by $s$ has the exact same effect as the action of $\sigma$:
$$sxs^{-1}=-y\qquad shs^{-1}=-h\qquad sys^{-1}=-x$$
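For anyone who wants to double-check the arithmetic, here’s a minimal sketch (assuming Python with numpy and scipy) that rebuilds the adjoint matrices from the structure constants and confirms that conjugation by $s$ agrees with $\sigma$:

```python
import numpy as np
from scipy.linalg import expm

x = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
y = np.array([[0., 0.], [1., 0.]])
basis = [x, h, y]

def bracket(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the ordered basis (x, h, y)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(a):
    """Matrix of ad(a) acting on sl(2), written in the basis (x, h, y)."""
    return np.column_stack([coords(bracket(a, b)) for b in basis])

# the inner automorphism sigma = exp(ad x) exp(ad -y) exp(ad x)
sigma = expm(ad(x)) @ expm(ad(-y)) @ expm(ad(x))

# conjugation by s = exp(x) exp(-y) exp(x), expressed in the same basis
s = expm(x) @ expm(-y) @ expm(x)
conj = np.column_stack([coords(s @ b @ np.linalg.inv(s)) for b in basis])

print(np.round(sigma, 6))        # ~ [[0,0,-1],[0,-1,0],[-1,0,0]]
print(np.allclose(sigma, conj))  # True: conjugation by s agrees with sigma
```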
This is a very general phenomenon: if $L\subseteq\mathfrak{gl}(V)$ is any linear Lie algebra and $x\in L$ is nilpotent, then conjugation by the exponential of $x$ is the same as applying the exponential of the adjoint of $x$:
$$\exp(x)y\exp(x)^{-1}=\bigl(\exp(\mathrm{ad}(x))\bigr)(y)$$
Indeed, considering $\mathrm{ad}(x)$, we can write it as $\mathrm{ad}(x)=\lambda_x-\rho_x$, where $\lambda_x$ and $\rho_x$ are left- and right-multiplication by $x$ in $\mathrm{End}(V)$. Since these two commute with each other and both are nilpotent we can write
$$\exp(\mathrm{ad}(x))=\exp(\lambda_x-\rho_x)=\exp(\lambda_x)\exp(-\rho_x)=\lambda_{\exp(x)}\rho_{\exp(-x)}$$
That is, the action of $\exp(\mathrm{ad}(x))$ is the same as left-multiplication by $\exp(x)$ followed by right-multiplication by $\exp(-x)$. All we need now is to verify that $\exp(-x)$ is the inverse of $\exp(x)$, but the expanded Leibniz identity from last time tells us that $\exp(x)\exp(-x)=\exp(x-x)=1$, thus proving our assertion.
We can also tell at this point that the nilpotency of $x$ and $y$ and that of $\mathrm{ad}(x)$ and $\mathrm{ad}(y)$ are not unrelated. Indeed, if $x$ is nilpotent then $\mathrm{ad}(x)$ is, too: since $\lambda_x$ and $\rho_x$ are commuting nilpotents, their difference — $\mathrm{ad}(x)=\lambda_x-\rho_x$ — is again nilpotent.
We must be careful to note that the converse is not true. Indeed, the identity transformation $1_V\in\mathfrak{gl}(V)$ is ad-nilpotent — it’s central, so $\mathrm{ad}(1_V)=0$ — but $1_V$ itself is certainly not nilpotent.
Automorphisms of Lie Algebras
Sorry for the delay; I’ve had a couple busy days. Here’s Thursday’s promised installment.
An automorphism of a Lie algebra $L$ is, as usual, an invertible homomorphism from $L$ onto itself, and the collection of all such automorphisms forms a group $\mathrm{Aut}(L)$.
One obviously useful class of examples arises when we’re considering a linear Lie algebra $L\subseteq\mathfrak{gl}(V)$. If $g$ is an invertible endomorphism of $V$ such that $gLg^{-1}=L$, then the map $x\mapsto gxg^{-1}$ is an automorphism of $L$. Clearly this happens for all invertible $g$ in the cases of $\mathfrak{gl}(V)$ and the special linear Lie algebra $\mathfrak{sl}(V)$ — the latter because the trace is invariant under a change of basis.
Now we’ll specialize to the (usual) case where no multiple of $1$ is zero in our field — that is, the characteristic is zero — and we consider an $x\in L$ for which $\mathrm{ad}(x)$ is “nilpotent”. That is, there’s some finite $n$ such that $\mathrm{ad}(x)^n=0$ — applying $\mathrm{ad}(x)$ sufficiently many times eventually kills off every element of $L$. In this case, we say that $x$ itself is “ad-nilpotent”.
In this case, we can define $\exp(\mathrm{ad}(x))$. How does this work? We use the power series expansion of the exponential:
$$\exp(\mathrm{ad}(x))=\sum\limits_{k=0}^{\infty}\frac{1}{k!}\mathrm{ad}(x)^k$$
We know that this series converges because eventually every term vanishes once $k\geq n$.
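Since the series is really a finite sum for a nilpotent operator, it’s easy to compute exactly; here’s a minimal sketch (assuming Python with numpy; the helper function is just for illustration) that adds up the terms until the powers die:

```python
import numpy as np

def exp_nilpotent(a):
    """Exponential of a nilpotent matrix via its (finite) power series."""
    dim = a.shape[0]
    result, term = np.eye(dim), np.eye(dim)
    for k in range(1, dim):      # a^dim = 0 for a nilpotent dim x dim matrix
        term = term @ a / k      # term is now a^k / k!
        result = result + term
    return result

# ad(x) for x in sl(2), written in the basis (x, h, y): a nilpotent 3x3 matrix
ad_x = np.array([[0., -2., 0.],
                 [0.,  0., 1.],
                 [0.,  0., 0.]])
print(exp_nilpotent(ad_x))   # [[1,-2,-1],[0,1,1],[0,0,1]]
```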
Now, I say that $\exp(\mathrm{ad}(x))\in\mathrm{Aut}(L)$. In fact, while this case is very useful, all we need from $\mathrm{ad}(x)$ is that it’s a nilpotent derivation $\delta$ of $L$. The product rule for derivations generalizes as:
$$\frac{\delta^n}{n!}(ab)=\sum\limits_{k=0}^{n}\left(\frac{\delta^k}{k!}a\right)\left(\frac{\delta^{n-k}}{(n-k)!}b\right)$$
So we can write
$$\exp(\delta)(ab)=\sum\limits_{n=0}^{\infty}\frac{\delta^n}{n!}(ab)=\sum\limits_{n=0}^{\infty}\sum\limits_{k=0}^{n}\left(\frac{\delta^k}{k!}a\right)\left(\frac{\delta^{n-k}}{(n-k)!}b\right)=\left(\sum\limits_{k=0}^{\infty}\frac{\delta^k}{k!}a\right)\left(\sum\limits_{l=0}^{\infty}\frac{\delta^l}{l!}b\right)=\exp(\delta)(a)\exp(\delta)(b)$$
That is, $\exp(\delta)$ preserves the multiplication of the algebra that $\delta$ is a derivation of. In particular, in terms of the Lie algebra $L$, we find that
$$\exp(\delta)([y,z])=[\exp(\delta)(y),\exp(\delta)(z)]$$
Since $\exp(\delta)$ preserves the bracket, it is an endomorphism of the Lie algebra $L$. It’s invertible by the usual formula $\exp(\delta)^{-1}=\exp(-\delta)$, which means it’s an automorphism of $L$.
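As a concrete illustration — a sketch assuming Python with numpy and scipy — we can check numerically that $\exp(\mathrm{ad}(x))$ preserves the bracket on $\mathfrak{sl}(2,\mathbb{F})$, using the structure constants from the explicit example above:

```python
import numpy as np
from scipy.linalg import expm

# ad(x) in the basis (x, h, y) of sl(2): a nilpotent derivation of sl(2)
ad_x = np.array([[0., -2., 0.],
                 [0.,  0., 1.],
                 [0.,  0., 0.]])

def bracket_coords(u, v):
    """Bracket on sl(2) in coordinates (x, h, y), from the structure constants."""
    ux, uh, uy = u
    vx, vh, vy = v
    return np.array([2*(uh*vx - ux*vh),   # coefficient of x
                     ux*vy - uy*vx,       # coefficient of h
                     2*(uy*vh - uh*vy)])  # coefficient of y

phi = expm(ad_x)                          # exponential of a nilpotent derivation
u, v = np.array([1., 2., 3.]), np.array([-1., 0., 5.])
print(np.allclose(phi @ bracket_coords(u, v),
                  bracket_coords(phi @ u, phi @ v)))   # True: phi preserves brackets
```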
Just like a derivation of the form $\mathrm{ad}(x)$ is called inner, an automorphism of the form $\exp(\mathrm{ad}(x))$ is called an inner automorphism, and the subgroup $\mathrm{Int}(L)$ they generate is a normal subgroup of $\mathrm{Aut}(L)$. Specifically, if $\phi\in\mathrm{Aut}(L)$ and $x\in L$ then we can calculate
$$\phi\,\mathrm{ad}(x)\,\phi^{-1}=\mathrm{ad}(\phi(x))$$
and thus
$$\phi\exp(\mathrm{ad}(x))\phi^{-1}=\exp\bigl(\phi\,\mathrm{ad}(x)\,\phi^{-1}\bigr)=\exp\bigl(\mathrm{ad}(\phi(x))\bigr)$$
so the conjugate of an inner automorphism is again inner.
Isomorphism Theorems for Lie Algebras
The category of Lie algebras may not be Abelian, but it has a zero object, kernels, and cokernels, which is enough to get the first isomorphism theorem, just like for rings. Specifically, if $\phi:L\to L'$ is any homomorphism of Lie algebras then we can factor it as follows:
$$L\twoheadrightarrow L/\mathrm{ker}(\phi)\cong\mathrm{im}(\phi)\hookrightarrow L'$$
That is, first we project down to the quotient of $L$ by the kernel of $\phi$, then we have an isomorphism from this quotient to the image of $\phi$, followed by the inclusion of the image as a subalgebra of $L'$.
There are actually two more isomorphism theorems which I haven’t made much mention of, though they hold in other categories as well. Since we’ll have use of them in our study of Lie algebras, we may as well get them down now.
The second isomorphism theorem says that if $I\subseteq J$ are both ideals of $L$, then $J/I$ is an ideal of $L/I$. Further, there is a natural isomorphism $(L/I)/(J/I)\cong L/J$. Indeed, if $x+I\in L/I$ and $j+I\in J/I$, then we can check that
$$[x+I,j+I]=[x,j]+I\in J/I$$
so $J/I$ is an ideal of $L/I$. As for the isomorphism, it’s straightforward from considering $I$ and $J$ as vector subspaces of $L$. Indeed, saying $x+I$ and $y+I$ are equivalent modulo $J/I$ in $L/I$ is to say that $(x-y)+I\in J/I$. But this means that $x-y=j+i$ for some $j\in J$ and $i\in I\subseteq J$, so $x-y\in J$, and thus $x$ and $y$ are equivalent modulo $J$ in $L$.
The third isomorphism theorem states that if $I$ and $J$ are any two ideals of $L$, then there is a natural isomorphism between $(I+J)/J$ and $I/(I\cap J)$ — we showed last time that both $I+J$ and $I\cap J$ are ideals. To see this, take $i+j$ and $i'+j'$ in $I+J$ and consider how they can be equivalent modulo $J$. First off, $j$ and $j'$ are immediately irrelevant, so we may as well just ask how $i$ and $i'$ can be equivalent modulo $J$. Well, this will happen if $i-i'\in J$, but we know that their difference is also in $I$, so $i-i'\in I\cap J$. That is, two elements of $I+J$ are equivalent modulo $J$ exactly when their $I$-parts are equivalent modulo $I\cap J$, which is the asserted isomorphism.
The Category of Lie Algebras is (not quite) Abelian
We’d like to see that the category of Lie algebras is Abelian. Unfortunately, it isn’t, but we can come close. It should be clear that it’s an $\mathbf{Ab}$-category, since the homomorphisms between any two Lie algebras form a vector space. Direct sums are also straightforward: the Lie algebra $L_1\oplus L_2$ is the direct sum as vector spaces, with $[x_1,x_2]=0$ for $x_1\in L_1$ and $x_2\in L_2$, and the regular brackets on $L_1$ and $L_2$ otherwise.
We’ve seen that the category of Lie algebras has a zero object and kernels; now we need cokernels. It would be nice to just say that if $\phi:L_1\to L_2$ is a homomorphism then $\mathrm{cok}(\phi)$ is the quotient of $L_2$ by the image of $\phi$, but this image may not be an ideal. Luckily, ideals have a few nice closure properties.
First off, if $I$ and $J$ are ideals of $L$, then $[I,J]$ — the subspace spanned by brackets of elements of $I$ and $J$ — is also an ideal. Indeed, we can check that
$$[[i,j],x]=[[i,x],j]+[i,[j,x]]$$
which is back in $[I,J]$, since $[i,x]\in I$ and $[j,x]\in J$. Similarly, the subspace sum $I+J$ is an ideal. And, most importantly for us now, the intersection $I\cap J$ is an ideal, since if $x\in I\cap J$ and $y\in L$ then both $[x,y]\in I$ and $[x,y]\in J$, so $[x,y]\in I\cap J$ as well. In fact, this is true of arbitrary intersections.
This is important, because it means we can always expand any subset $S\subseteq L$ to an ideal. We take all the ideals of $L$ that contain $S$ and intersect them. This will then be another ideal of $L$ containing $S$, and it is contained in all the others. And we know that the collection of ideals we’re intersecting is nonempty, since there’s always at least the ideal $L$ itself.
So while $\mathrm{im}(\phi)$ may not be an ideal of $L_2$, we can expand it to an ideal and take the quotient. The projection onto this quotient will be the largest epimorphism of $L_2$ that sends everything in $\mathrm{im}(\phi)$ to zero, so it will be the cokernel of $\phi$.
Where everything falls apart is normality. The very fact that we have ideals as a separate concept from subalgebras is the problem. Any subalgebra is the image of a monomorphism — the inclusion, if nothing else. But not all these subalgebras are themselves kernels of other morphisms; only those that are ideals have this property.
Still, the category is very nice, and these properties will help us greatly in what follows.
Ideals of Lie Algebras
As we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. I want to check, though, that this behaves in certain nice ways.
First off, there is a Lie algebra $0$. That is, the trivial vector space can be given a (unique) Lie algebra structure, and every Lie algebra $L$ has a unique homomorphism $0\to L$ and a unique homomorphism $L\to 0$. This is easy.
Also pretty easy is the fact that we have kernels. That is, if $\phi:L\to L'$ is a homomorphism, then the set $\mathrm{ker}(\phi)=\{x\in L\mid\phi(x)=0\}$ is a subalgebra of $L$. Indeed, it’s actually an “ideal” in pretty much the same sense as for rings. That is, if $x\in L$ and $y\in\mathrm{ker}(\phi)$ then $[x,y]\in\mathrm{ker}(\phi)$. And we can check that
$$\phi([x,y])=[\phi(x),\phi(y)]=[\phi(x),0]=0$$
proving that $\mathrm{ker}(\phi)$ is an ideal, and thus a Lie algebra in its own right.
Every Lie algebra $L$ has two trivial ideals: $0$ and $L$ itself. Another example is the “center” — in analogy with the center of a group — which is the collection $Z(L)$ of all $z\in L$ such that $[z,x]=0$ for all $x\in L$. That is, those for which the adjoint action $\mathrm{ad}(z)$ is the zero derivation — the kernel of $\mathrm{ad}$ — which is clearly an ideal.
If $Z(L)=L$ we say — again in analogy with groups — that $L$ is abelian; this is the case for the diagonal algebra $\mathfrak{d}(n,\mathbb{F})$, for instance. Abelian Lie algebras are rather boring; they’re just vector spaces with trivial brackets, so we can always decompose them by picking a basis — any basis — and getting a direct sum of one-dimensional abelian Lie algebras.
On the other hand, if the only ideals of $L$ are the trivial ones, and if $L$ is not abelian, then we say that $L$ is “simple”. These are very interesting, indeed.
As usual for rings, we can construct quotient algebras. If $I\subseteq L$ is an ideal, then we can define a Lie algebra structure on the quotient space $L/I$. Indeed, if $x+I$ and $y+I$ are equivalence classes modulo $I$, then we define
$$[x+I,y+I]=[x,y]+I$$
which is unambiguous since if $x'$ and $y'$ are two other representatives then $x'=x+i$ and $y'=y+j$ for some $i,j\in I$, and we calculate
$$[x',y']=[x+i,y+j]=[x,y]+\bigl([x,j]+[i,y]+[i,j]\bigr)$$
and everything in the parens on the right is in $I$.
Two last constructions in analogy with groups: the “normalizer” of a subspace $K\subseteq L$ is the subalgebra $N_L(K)=\{x\in L\mid[x,K]\subseteq K\}$. If $K$ is itself a subalgebra, this is the largest subalgebra of $L$ which contains $K$ as an ideal; if $K$ already is an ideal of $L$ then $N_L(K)=L$; if $N_L(K)=K$ we say that $K$ is “self-normalizing”.
The “centralizer” of a subset $X\subseteq L$ is the subalgebra $C_L(X)=\{x\in L\mid[x,X]=0\}$. This is a subalgebra, and in particular we can see that $C_L(L)=Z(L)$.
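To make the centralizer a bit more concrete, here’s a minimal sketch (assuming Python with numpy) that computes $C_{\mathfrak{gl}(2,\mathbb{F})}(X)$ as the null space of a linear map — first with $X$ all of $\mathfrak{gl}(2,\mathbb{F})$, giving the center, and then with $X$ the diagonal matrices:

```python
import numpy as np

n = 2

def e(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

gl_basis = [e(i, j) for i in range(n) for j in range(n)]   # basis of gl(2)

def bracket(a, b):
    return a @ b - b @ a

def centralizer_dim(generators):
    """dim of {z in gl(n) : [z, g] = 0 for all generators g}, via a null space."""
    blocks = []
    for g in generators:
        # z -> [z, g] is linear in z; its matrix has columns [b_k, g] flattened
        blocks.append(np.array([bracket(b, g).flatten() for b in gl_basis]).T)
    big = np.vstack(blocks)                     # stack the conditions for every generator
    return len(gl_basis) - np.linalg.matrix_rank(big)

diag_basis = [e(i, i) for i in range(n)]
print(centralizer_dim(gl_basis))    # 1: the center of gl(2) is the scalar matrices
print(centralizer_dim(diag_basis))  # 2: matrices commuting with all diagonal matrices are diagonal
```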
Derivations
When first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra $L$ act by derivations on $L$ itself. We can actually say a bit more about this.
First off, we need an algebra $A$ over a field $\mathbb{F}$. This doesn’t have to be associative, as our algebras commonly are; all we need is a bilinear map $A\times A\to A$. In particular, Lie algebras count.
Now, a derivation $\delta$ of $A$ is firstly a linear map from $A$ back to itself. That is, $\delta\in\mathrm{End}(A)$, where this is the algebra of endomorphisms of $A$ as a vector space over $\mathbb{F}$, not the endomorphisms of $A$ as an algebra. Instead of preserving the multiplication, we impose the condition that $\delta$ behave like the product rule:
$$\delta(ab)=\delta(a)b+a\delta(b)$$
It’s easy to see that the collection $\mathrm{Der}(A)\subseteq\mathrm{End}(A)$ is a vector subspace, but I say that it’s actually a Lie subalgebra, when we equip the space of endomorphisms with the usual commutator bracket. That is, if $\delta_1$ and $\delta_2$ are two derivations, I say that their commutator $[\delta_1,\delta_2]=\delta_1\delta_2-\delta_2\delta_1$ is again a derivation. This, we can check:
$$\begin{aligned}[\delta_1,\delta_2](ab)&=\delta_1\bigl(\delta_2(a)b+a\delta_2(b)\bigr)-\delta_2\bigl(\delta_1(a)b+a\delta_1(b)\bigr)\\&=\delta_1(\delta_2(a))b+\delta_2(a)\delta_1(b)+\delta_1(a)\delta_2(b)+a\delta_1(\delta_2(b))\\&\qquad-\delta_2(\delta_1(a))b-\delta_1(a)\delta_2(b)-\delta_2(a)\delta_1(b)-a\delta_2(\delta_1(b))\\&=[\delta_1,\delta_2](a)\,b+a\,[\delta_1,\delta_2](b)\end{aligned}$$
We’ve actually seen this before. We identified the vectors at a point $p$ on a manifold with the derivations of the (real) algebra of functions defined in a neighborhood of $p$, so we need to take the commutator of two derivations to be sure of getting a new derivation back.
So now we can say that the mapping that sends $x\in L$ to the endomorphism $\mathrm{ad}(x)=[x,\cdot]$ lands in $\mathrm{Der}(L)$ because of the Jacobi identity. We call this mapping $\mathrm{ad}:L\to\mathrm{Der}(L)$ the “adjoint representation” of $L$, and indeed it’s actually a homomorphism of Lie algebras. That is, $\mathrm{ad}([x,y])=[\mathrm{ad}(x),\mathrm{ad}(y)]$. The endomorphism on the left-hand side sends $z$ to $[[x,y],z]$, while on the right-hand side we get $[x,[y,z]]-[y,[x,z]]$. That these two are equal is yet another application of the Jacobi identity.
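Here’s a tiny numerical illustration (assuming Python with numpy): for matrices, $\mathrm{ad}([a,b])$ and $[\mathrm{ad}(a),\mathrm{ad}(b)]$ agree on any test element $c$, which is just the Jacobi identity in disguise:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(u, v):
    return u @ v - v @ u

a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

lhs = bracket(bracket(a, b), c)                               # ad([a,b]) applied to c
rhs = bracket(a, bracket(b, c)) - bracket(b, bracket(a, c))   # [ad(a), ad(b)] applied to c
print(np.allclose(lhs, rhs))   # True
```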
One last piece of nomenclature: derivations in the image of $\mathrm{ad}$ are called “inner”; all others are called “outer” derivations.
Orthogonal and Symplectic Lie Algebras
For the next three families of linear Lie algebras we equip our vector space $V$ with a bilinear form $B$. We’re going to consider the endomorphisms $f\in\mathfrak{gl}(V)$ such that
$$B(f(x),y)=-B(x,f(y))$$
If we pick a basis $\{e_i\}$ of $V$, then we have a matrix for the bilinear form
$$S_{ij}=B(e_i,e_j)$$
and one for the endomorphism
$$f(e_j)=\sum\limits_if_{ij}e_i$$
So the condition in terms of matrices in $\mathfrak{gl}(n,\mathbb{F})$ comes down to
$$\sum\limits_if_{ik}S_{ij}=-\sum\limits_iS_{ki}f_{ij}$$
or, more abstractly, $f^\top S=-Sf$.
So do these form a subalgebra of $\mathfrak{gl}(V)$? Linearity is easy; we must check that this condition is closed under the bracket. That is, if $f$ and $g$ both satisfy this condition, what about their commutator $[f,g]=fg-gf$?
$$\begin{aligned}B\bigl([f,g](x),y\bigr)&=B\bigl(f(g(x)),y\bigr)-B\bigl(g(f(x)),y\bigr)\\&=-B\bigl(g(x),f(y)\bigr)+B\bigl(f(x),g(y)\bigr)\\&=B\bigl(x,g(f(y))\bigr)-B\bigl(x,f(g(y))\bigr)\\&=-B\bigl(x,[f,g](y)\bigr)\end{aligned}$$
So this condition will always give us a linear Lie algebra.
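As a quick sanity check (a sketch assuming Python with numpy), we can generate elements satisfying the condition for a random symmetric form — for symmetric $S$ they are exactly the matrices $f$ with $Sf$ skew-symmetric — and verify that their commutator satisfies it too:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

S = rng.standard_normal((N, N))
S = S + S.T                                   # a (generically nondegenerate) symmetric form

def random_element():
    """Random f with f^T S = -S f: equivalently S f is skew, so f = S^{-1}(R - R^T)."""
    R = rng.standard_normal((N, N))
    return np.linalg.solve(S, R - R.T)

def satisfies(f):
    return np.allclose(f.T @ S, -S @ f)

f, g = random_element(), random_element()
comm = f @ g - g @ f
print(satisfies(f), satisfies(g), satisfies(comm))   # True True True
```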
We have three different families of these algebras. First, we consider the case where $\dim(V)=2l+1$ is odd, and we let $B$ be the symmetric, nondegenerate bilinear form with matrix
$$S=\begin{pmatrix}1&0&0\\0&0&I_l\\0&I_l&0\end{pmatrix}$$
where $I_l$ is the $l\times l$ identity matrix. If we write the matrix of our endomorphism in a similar form
$$f=\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}$$
our matrix conditions turn into
$$a=0\qquad c_1=-b_2^\top\qquad c_2=-b_1^\top\qquad q=-m^\top\qquad n=-n^\top\qquad p=-p^\top$$
From here it’s straightforward to count out $2l$ basis elements that satisfy the conditions on the first row and column, $\frac{l(l-1)}{2}$ that satisfy the antisymmetry for $n$, another $\frac{l(l-1)}{2}$ that satisfy the antisymmetry for $p$, and $l^2$ that satisfy the condition between $m$ and $q$, for a total of $2l^2+l$ basis elements. We call this Lie algebra the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l+1,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $B_l$.
Next up, in the case where $\dim(V)=2l$ is even we let the matrix of $B$ look like
$$S=\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}$$
A similar approach to that above gives a basis with $2l^2-l$ elements. We also call this the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $D_l$.
Finally, we again take an even-dimensional $V$ with $\dim(V)=2l$, but this time we use the skew-symmetric form
$$S=\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}$$
This time we get a basis with $2l^2+l$ elements. We call this the symplectic algebra of $V$, and write $\mathfrak{sp}(V)$ or $\mathfrak{sp}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $C_l$.
Along with the special linear Lie algebras, these form the “classical” Lie algebras. It’s a tedious but straightforward exercise to check that for any classical Lie algebra $L$, each basis element $x_k$ of $L$ can be written as a bracket of two other elements of $L$. That is, we have $x_k\in[L,L]$. Since every element of $L$ can be written as $\sum_kc_kx_k$ for some coefficients $c_k$, and since we know that $[L,L]$ is a subspace of $L$, this establishes that $L=[L,L]$ for all classical $L$.
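As a plausibility check on these counts, here’s a minimal sketch (assuming Python with numpy) that computes the dimension of $\{f:f^\top S=-Sf\}$ as a null space for each of the three forms above, with $l=2$:

```python
import numpy as np

def algebra_dim(S):
    """dim of {f : f^T S = -S f}, computed as the null space of a linear condition."""
    N = S.shape[0]
    rows = []
    for k in range(N * N):
        f = np.zeros((N, N))
        f[divmod(k, N)] = 1.0                     # k-th basic matrix e_ij
        rows.append((f.T @ S + S @ f).flatten())  # the condition applied to e_ij
    M = np.array(rows).T                          # columns indexed by the basic matrices
    return N * N - np.linalg.matrix_rank(M)

l = 2
I = np.eye(l)
Z = np.zeros((l, l))

S_odd = np.block([[np.ones((1, 1)), np.zeros((1, l)), np.zeros((1, l))],
                  [np.zeros((l, 1)), Z, I],
                  [np.zeros((l, 1)), I, Z]])
S_even = np.block([[Z, I], [I, Z]])
S_symp = np.block([[Z, I], [-I, Z]])

print(algebra_dim(S_odd))    # 10 = 2*l^2 + l  (B_l)
print(algebra_dim(S_even))   # 6  = 2*l^2 - l  (D_l)
print(algebra_dim(S_symp))   # 10 = 2*l^2 + l  (C_l)
```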
Special Linear Lie Algebras
More examples of Lie algebras! Today, an important family of linear Lie algebras.
Take a vector space $V$ with dimension $\dim(V)=l+1$ and start with $\mathfrak{gl}(V)$. Inside this, we consider the subalgebra of endomorphisms whose trace is zero, which we write $\mathfrak{sl}(V)$ and call the “special linear Lie algebra”. This is a subspace, since the trace is a linear functional on the space of endomorphisms:
$$\mathrm{tr}(ax+by)=a\,\mathrm{tr}(x)+b\,\mathrm{tr}(y)$$
so if two endomorphisms have trace zero then so do all their linear combinations. It’s a subalgebra by using the “cyclic” property of the trace:
$$\mathrm{tr}(xyz)=\mathrm{tr}(zxy)=\mathrm{tr}(yzx)$$
Note that this does not mean that endomorphisms can be arbitrarily rearranged inside the trace, which is a common mistake after seeing this formula. Anyway, this implies that
$$\mathrm{tr}([x,y])=\mathrm{tr}(xy)-\mathrm{tr}(yx)=0$$
so actually not only is the bracket of two endomorphisms in $\mathfrak{sl}(V)$ back in the subspace, the bracket of any two endomorphisms of $V$ lands in $\mathfrak{sl}(V)$. In other words: $[\mathfrak{gl}(V),\mathfrak{gl}(V)]\subseteq\mathfrak{sl}(V)$.
Choosing a basis, we will write the algebra as $\mathfrak{sl}(l+1,\mathbb{F})$. It should be clear that the dimension is $(l+1)^2-1$, since this is the kernel of a single linear functional on the $(l+1)^2$-dimensional $\mathfrak{gl}(l+1,\mathbb{F})$, but let’s exhibit a basis anyway. All the basic matrices $e_{ij}$ with $i\neq j$ are traceless, so they’re all in $\mathfrak{sl}(l+1,\mathbb{F})$. Along the diagonal, $\mathrm{tr}(e_{ii})=1$, so we need linear combinations that cancel each other out. It’s particularly convenient to define
$$h_i=e_{ii}-e_{i+1,i+1}$$
So we’ve got the $(l+1)^2$ basic matrices, but we take away the $l+1$ along the diagonal. Then we add back the $l$ new matrices $h_i$, getting $(l+1)^2-1$ matrices in our standard basis for $\mathfrak{sl}(l+1,\mathbb{F})$, verifying the dimension.
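Here’s a minimal sketch (assuming Python with numpy) that builds this standard basis for $l=2$, confirms the count, and spot-checks that a bracket of basis elements is again traceless:

```python
import numpy as np

l = 2
n = l + 1

def e(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

# standard basis of sl(l+1): off-diagonal e_ij plus h_i = e_ii - e_{i+1,i+1}
basis = [e(i, j) for i in range(n) for j in range(n) if i != j]
basis += [e(i, i) - e(i + 1, i + 1) for i in range(l)]

print(len(basis))                                                     # 8 = (l+1)^2 - 1
print(np.linalg.matrix_rank(np.array([b.flatten() for b in basis])))  # 8: they're independent
print(np.trace(basis[0] @ basis[3] - basis[3] @ basis[0]))            # 0.0: brackets stay traceless
```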
We sometimes refer to the isomorphism class of $\mathfrak{sl}(l+1,\mathbb{F})$ as $A_l$. Because reasons.
Linear Lie Algebras
So now that we’ve remembered what a Lie algebra is, let’s mention the most important ones: linear Lie algebras. These are ones that arise from linear transformations on vector spaces, ’cause mathematicians love them some vector spaces.
Specifically, let $V$ be a finite-dimensional vector space over $\mathbb{F}$, and consider the associative algebra of endomorphisms $\mathrm{End}(V)$ — linear transformations from $V$ back to itself. We can use the usual method of defining a bracket as a commutator:
$$[x,y]=xy-yx$$
to turn this into a Lie algebra. When considered as a Lie algebra like this, we call it the “general linear Lie algebra”, and write $\mathfrak{gl}(V)$. Many Lie algebras are written in the Fraktur typeface like this.
Any subalgebra of $\mathfrak{gl}(V)$ is called a “linear Lie algebra”, since it’s made up of linear transformations. It turns out that every finite-dimensional Lie algebra is isomorphic to a linear Lie algebra, but we reserve the “linear” term for those algebras which we’re actually thinking of as having linear transformations as elements.
Of course, since $V$ is a vector space over $\mathbb{F}$, we can pick a basis. If $V$ has dimension $n$, then there are $n$ elements in any basis, and so our endomorphisms correspond to the $n\times n$ matrices $\mathrm{Mat}_n(\mathbb{F})$. When we think of it in these terms, we often write $\mathfrak{gl}(n,\mathbb{F})$ for the general linear Lie algebra.
We can actually calculate the bracket structure explicitly in this case; bilinearity tells us that it suffices to write it down in terms of a basis. The standard basis of $\mathfrak{gl}(n,\mathbb{F})$ is $\{e_{ij}\}$, where $e_{ij}$ has a $1$ in the $i$th row and $j$th column and $0$ elsewhere. So we can calculate:
$$[e_{ij},e_{kl}]=e_{ij}e_{kl}-e_{kl}e_{ij}=\delta_{jk}e_{il}-\delta_{li}e_{kj}$$
where, as usual, $\delta_{ij}$ is the Kronecker delta: $1$ if the indices are the same and $0$ if they’re different.
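This formula is easy to spot-check by brute force; here’s a minimal sketch (assuming Python with numpy) comparing the commutator of every pair of basic matrices against the right-hand side:

```python
import numpy as np
from itertools import product

n = 3

def e(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def delta(i, j):
    return 1.0 if i == j else 0.0

ok = all(
    np.array_equal(e(i, j) @ e(k, l) - e(k, l) @ e(i, j),
                   delta(j, k) * e(i, l) - delta(l, i) * e(k, j))
    for i, j, k, l in product(range(n), repeat=4)
)
print(ok)   # True: the bracket formula holds for every pair of basic matrices
```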
We can now identify some important subalgebras of $\mathfrak{gl}(n,\mathbb{F})$. First, the strictly upper-triangular matrices $\mathfrak{n}(n,\mathbb{F})$ involve only the basis elements $e_{ij}$ with $i<j$. If $j=k$, so the first term in the above expression for the bracket shows up, then the second term cannot show up, and vice versa. Either way, we conclude that the bracket of two basis elements of $\mathfrak{n}(n,\mathbb{F})$ — and thus the bracket of any two elements of this subspace — involves only other basis elements of the subspace, which makes this a subalgebra.
Similarly, we conclude that the (non-strictly) upper-triangular matrices involving only $e_{ij}$ with $i\leq j$ also form a subalgebra $\mathfrak{t}(n,\mathbb{F})$. And, finally, the diagonal matrices involving only the $e_{ii}$ also form a subalgebra $\mathfrak{d}(n,\mathbb{F})$. This last one is interesting, in that the bracket on $\mathfrak{d}(n,\mathbb{F})$ is actually trivial, since any two diagonal matrices commute.
As vector spaces, we see that $\mathfrak{t}(n,\mathbb{F})=\mathfrak{d}(n,\mathbb{F})\oplus\mathfrak{n}(n,\mathbb{F})$. It’s easy to check that the bracket of a diagonal matrix and a strictly upper-triangular matrix is again strictly upper-triangular — we write $[\mathfrak{d}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})]\subseteq\mathfrak{n}(n,\mathbb{F})$ — and so we also have $[\mathfrak{t}(n,\mathbb{F}),\mathfrak{t}(n,\mathbb{F})]\subseteq\mathfrak{n}(n,\mathbb{F})$. This may seem a little like a toy example now, but it turns out to be surprisingly general; many subalgebras will relate to each other this way.