Sorry for the delay; I’ve had a couple busy days. Here’s Thursday’s promised installment.
An automorphism of a Lie algebra $L$ is, as usual, an invertible homomorphism from $L$ onto itself, and the collection of all such automorphisms forms a group $\mathrm{Aut}(L)$.
One obviously useful class of examples arises when we're considering a linear Lie algebra $L\subseteq\mathfrak{gl}(V)$. If $g$ is an invertible endomorphism of $V$ such that $gLg^{-1}=L$, then the map $x\mapsto gxg^{-1}$ is an automorphism of $L$. Clearly this happens for all invertible $g$ in the cases of $\mathfrak{gl}(V)$ and the special linear Lie algebra $\mathfrak{sl}(V)$; the latter because the trace is invariant under a change of basis.
Now we'll specialize to the (usual) case of characteristic zero, where no multiple of $1$ is zero, and we consider an $x\in L$ for which $\mathrm{ad}(x)$ is "nilpotent". That is, there's some finite $n$ such that $\mathrm{ad}(x)^n=0$: applying $\mathrm{ad}(x)$ sufficiently many times eventually kills off every element of $L$. In this case, we say that $x$ itself is "ad-nilpotent".
In this case, we can define $\exp(\mathrm{ad}(x))$. How does this work? We use the power series expansion of the exponential:

$$\exp(\mathrm{ad}(x))=\sum_{k=0}^{\infty}\frac{1}{k!}\mathrm{ad}(x)^k$$
We know that this series converges because every term vanishes once $k\geq n$.
Now, I say that $\exp(\mathrm{ad}(x))\in\mathrm{Aut}(L)$. In fact, while this case is very useful, all we need from $\mathrm{ad}(x)$ is that it's a nilpotent derivation $\delta$ of $L$. The product rule for derivations generalizes as:

$$\delta^n(ab)=\sum_{k=0}^{n}\binom{n}{k}\delta^k(a)\delta^{n-k}(b)$$
So we can write

$$\exp(\delta)(ab)=\sum_{n=0}^{\infty}\frac{1}{n!}\delta^n(ab)=\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{\delta^k(a)}{k!}\frac{\delta^{n-k}(b)}{(n-k)!}=\left(\sum_{k=0}^{\infty}\frac{\delta^k(a)}{k!}\right)\left(\sum_{m=0}^{\infty}\frac{\delta^m(b)}{m!}\right)=\exp(\delta)(a)\exp(\delta)(b)$$

where all the sums are really finite, since $\delta$ is nilpotent, so rearranging them poses no problem.
That is, $\exp(\delta)$ preserves the multiplication of the algebra that $\delta$ is a derivation of. In particular, in terms of the Lie algebra $L$, we find that

$$\exp(\delta)\left([y,z]\right)=\left[\exp(\delta)(y),\exp(\delta)(z)\right]$$
Since $\exp(\delta)$ preserves the bracket, we conclude that it is a Lie algebra homomorphism from $L$ to itself. It's invertible by the usual formula

$$\exp(\delta)^{-1}=\exp(-\delta)$$
which means it's an automorphism of $L$.
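To make this concrete, here's a computational sketch in Python, using exact rational arithmetic; it takes the standard basis $e$, $h$, $f$ of $\mathfrak{sl}(2)$ (a choice of example, not anything forced by the argument above) and checks both that the exponential series for $\exp(\mathrm{ad}(e))$ terminates and that the result preserves the bracket:

```python
from fractions import Fraction
from math import factorial

# 2x2 matrices as tuples of tuples, with exact Fraction entries
def mat(rows): return tuple(tuple(Fraction(v) for v in row) for row in rows)
def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))
def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))
def scale(c, a):
    return tuple(tuple(c * a[i][j] for j in range(2)) for i in range(2))
def bracket(a, b):  # the commutator [a,b] = ab - ba
    return add(mul(a, b), scale(-1, mul(b, a)))

ZERO = mat([[0, 0], [0, 0]])
e = mat([[0, 1], [0, 0]])
h = mat([[1, 0], [0, -1]])
f = mat([[0, 0], [1, 0]])

def ad_e(y):  # ad(e) acting on sl(2): y -> [e, y]
    return bracket(e, y)

# ad(e) is nilpotent: f -> h -> -2e -> 0, so ad(e)^3 = 0 on sl(2)
for y in (e, h, f):
    assert ad_e(ad_e(ad_e(y))) == ZERO

def exp_ad_e(y):  # the (finite!) series sum_k ad(e)^k(y) / k!
    total, term = ZERO, y
    for k in range(3):
        total = add(total, scale(Fraction(1, factorial(k)), term))
        term = ad_e(term)
    return total

# exp(ad e) preserves the bracket, so it's an automorphism of sl(2)
for x in (e, h, f):
    for y in (e, h, f):
        assert exp_ad_e(bracket(x, y)) == bracket(exp_ad_e(x), exp_ad_e(y))
print("exp(ad e) is an automorphism of sl(2)")
```

Exact fractions matter here: the $\frac{1}{k!}$ coefficients are honest rational numbers, and floating point would muddy the equality checks.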
Just like a derivation of the form $\mathrm{ad}(x)$ is called inner, an automorphism of the form $\exp(\mathrm{ad}(x))$ is called an inner automorphism, and the subgroup $\mathrm{Inn}(L)$ they generate is a normal subgroup of $\mathrm{Aut}(L)$. Specifically, if $\phi\in\mathrm{Aut}(L)$ and $x\in L$ then we can calculate

$$\phi\exp(\mathrm{ad}(x))\phi^{-1}=\exp\left(\phi\,\mathrm{ad}(x)\,\phi^{-1}\right)=\exp\left(\mathrm{ad}(\phi(x))\right)$$
so the conjugate of an inner automorphism is again inner.
The category of Lie algebras may not be Abelian, but it has a zero object, kernels, and cokernels, which is enough to get the first isomorphism theorem, just like for rings. Specifically, if $\phi:L\to L'$ is any homomorphism of Lie algebras then we can factor it as follows:

$$L\twoheadrightarrow L/\ker(\phi)\xrightarrow{\ \sim\ }\mathrm{im}(\phi)\hookrightarrow L'$$
That is, first we project down to the quotient of $L$ by the kernel of $\phi$, then we have an isomorphism from this quotient to the image of $\phi$, followed by the inclusion of the image as a subalgebra of $L'$.
There are actually two more isomorphism theorems which I haven’t made much mention of, though they hold in other categories as well. Since we’ll have use of them in our study of Lie algebras, we may as well get them down now.
The second isomorphism theorem says that if $I\subseteq J$ are both ideals of $L$, then $J/I$ is an ideal of $L/I$. Further, there is a natural isomorphism $(L/I)/(J/I)\cong L/J$. Indeed, if $j+I\in J/I$ and $x+I\in L/I$, then we can check that

$$[x+I,j+I]=[x,j]+I\in J/I$$
so $J/I$ is an ideal of $L/I$. As for the isomorphism, it's straightforward from considering $J/I$ and $L/I$ as quotient vector spaces. Indeed, saying $x+I$ and $y+I$ are equivalent modulo $J/I$ in $L/I$ is to say that $(x-y)+I\in J/I$. But this means that $x-y\in j+I$ for some $j\in J$, and since $I\subseteq J$ we get $x-y\in J$, so $x$ and $y$ are equivalent modulo $J$ in $L$.
The third isomorphism theorem states that if $I$ and $J$ are any two ideals of $L$, then there is a natural isomorphism between $(I+J)/J$ and $I/(I\cap J)$ (we showed last time that both $I+J$ and $I\cap J$ are ideals). To see this, take $i_1+j_1$ and $i_2+j_2$ in $I+J$ and consider how they can be equivalent modulo $J$. First off, $j_1$ and $j_2$ are immediately irrelevant, so we may as well just ask how $i_1$ and $i_2$ can be equivalent modulo $J$. Well, this will happen if $i_1-i_2\in J$, but we know that their difference is also in $I$, so $i_1-i_2\in I\cap J$.
We'd like to see that the category of Lie algebras is Abelian. Unfortunately, it isn't, but we can come close. It should be clear that it's an $\mathbf{Ab}$-category, since the homomorphisms between any two Lie algebras form a vector space. Direct sums are also straightforward: the Lie algebra $L_1\oplus L_2$ is the direct sum as vector spaces, with $[x,y]=0$ for $x\in L_1$ and $y\in L_2$, and the regular brackets on $L_1$ and $L_2$ otherwise.
We've seen that the category of Lie algebras has a zero object and kernels; now we need cokernels. It would be nice to just say that if $\phi:L_1\to L_2$ is a homomorphism then $\mathrm{cok}(\phi)$ is the quotient of $L_2$ by the image of $\phi$, but this image may not be an ideal. Luckily, ideals have a few nice closure properties.
First off, if $I$ and $J$ are ideals of $L$, then $[I,J]$, the subspace spanned by brackets of elements of $I$ and $J$, is also an ideal. Indeed, we can check that

$$[x,[i,j]]=[[x,i],j]+[i,[x,j]]$$

which is back in $[I,J]$. Similarly, the subspace sum $I+J$ is an ideal. And, most importantly for us now, the intersection $I\cap J$ is an ideal, since if $x\in I\cap J$ then both $[y,x]\in I$ and $[y,x]\in J$, so $[y,x]\in I\cap J$ as well. In fact, this is true of arbitrary intersections.
This is important, because it means we can always expand any subset $X\subseteq L$ to an ideal. We take all the ideals of $L$ that contain $X$ and intersect them. This will then be another ideal of $L$ containing $X$, and it is contained in all the others. And we know that this collection of ideals is nonempty, since there's always at least the ideal $L$ itself.
So while $\mathrm{im}(\phi)$ may not be an ideal of $L_2$, we can expand it to an ideal and take the quotient. The projection onto this quotient will be the largest epimorphism of $L_2$ that sends everything in $\mathrm{im}(\phi)$ to zero, so it will be the cokernel of $\phi$.
Where everything falls apart is normality. The very fact that we have ideals as a separate concept from subalgebras is the problem. Any subalgebra is the image of a monomorphism — the inclusion, if nothing else. But not all these subalgebras are themselves kernels of other morphisms; only those that are ideals have this property.
Still, the category is very nice, and these properties will help us greatly in what follows.
As we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. I want to check, though, that this behaves in certain nice ways.
First off, there is a Lie algebra $0$. That is, the trivial vector space can be given a (unique) Lie algebra structure, and every Lie algebra $L$ has a unique homomorphism $0\to L$ and a unique homomorphism $L\to 0$. This is easy.
Also pretty easy is the fact that we have kernels. That is, if $\phi:L_1\to L_2$ is a homomorphism, then the set $\ker(\phi)=\{x\in L_1\mid\phi(x)=0\}$ is a subalgebra of $L_1$. Indeed, it's actually an "ideal" in pretty much the same sense as for rings. That is, if $y\in\ker(\phi)$ and $x\in L_1$ then $[x,y]\in\ker(\phi)$. And we can check that

$$\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]=\left[\phi(x),0\right]=0$$
proving that $\ker(\phi)$ is an ideal, and thus a Lie algebra in its own right.
Every Lie algebra $L$ has two trivial ideals: $0$ and $L$ itself. Another example is the "center" $Z(L)$, in analogy with the center of a group, which is the collection of all $z\in L$ such that $[x,z]=0$ for all $x\in L$. That is, those $z$ for which the adjoint action $\mathrm{ad}(z)$ is the zero derivation; the center is the kernel of $\mathrm{ad}$, which is clearly an ideal.
If $Z(L)=L$ we say, again in analogy with groups, that $L$ is abelian; this is the case for the diagonal algebra $\mathfrak{d}(n,\mathbb{F})$, for instance. Abelian Lie algebras are rather boring; they're just vector spaces with trivial brackets, so we can always decompose them by picking a basis (any basis) and getting a direct sum of one-dimensional abelian Lie algebras.
On the other hand, if the only ideals of $L$ are the trivial ones, and if $L$ is not abelian, then we say that $L$ is "simple". These are very interesting, indeed.
As usual for rings, we can construct quotient algebras. If $I\subseteq L$ is an ideal, then we can define a Lie algebra structure on the quotient space $L/I$. Indeed, if $x+I$ and $y+I$ are equivalence classes modulo $I$, then we define

$$[x+I,y+I]=[x,y]+I$$
which is unambiguous since if $x+i_1$ and $y+i_2$ are two other representatives then $i_1\in I$ and $i_2\in I$, and we calculate

$$[x+i_1,y+i_2]=[x,y]+\left([x,i_2]+[i_1,y]+[i_1,i_2]\right)$$
and everything in the parens on the right is in $I$.
Two last constructions in analogy with groups: the "normalizer" of a subspace $K\subseteq L$ is the subalgebra $N_L(K)=\{x\in L\mid[x,K]\subseteq K\}$. If $K$ is a subalgebra, this is the largest subalgebra of $L$ which contains $K$ as an ideal; if $K$ already is an ideal of $L$ then $N_L(K)=L$; if $N_L(K)=K$ we say that $K$ is "self-normalizing".
The "centralizer" of a subset $X\subseteq L$ is the subalgebra $C_L(X)=\{x\in L\mid[x,X]=0\}$. This is a subalgebra, and in particular we can see that $C_L(L)=Z(L)$.
When first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra act by derivations on itself. We can actually say a bit more about this.
First off, we need an algebra $A$ over a field $\mathbb{F}$. This doesn't have to be associative, as our algebras commonly are; all we need is a bilinear map $A\times A\to A$. In particular, Lie algebras count.
Now, a derivation $\delta$ of $A$ is firstly a linear map from $A$ back to itself. That is, $\delta\in\mathrm{End}(A)$, where this is the algebra of endomorphisms of $A$ as a vector space over $\mathbb{F}$, not the endomorphisms of $A$ as an algebra. Instead of preserving the multiplication, we impose the condition that $\delta$ behave like the product rule:

$$\delta(ab)=\delta(a)b+a\delta(b)$$
It's easy to see that the collection $\mathrm{Der}(A)\subseteq\mathrm{End}(A)$ is a vector subspace, but I say that it's actually a Lie subalgebra, when we equip the space of endomorphisms with the usual commutator bracket. That is, if $\delta_1$ and $\delta_2$ are two derivations, I say that their commutator $[\delta_1,\delta_2]=\delta_1\delta_2-\delta_2\delta_1$ is again a derivation.
This, we can check:

$$\begin{aligned}[\delta_1,\delta_2](ab)&=\delta_1\left(\delta_2(a)b+a\delta_2(b)\right)-\delta_2\left(\delta_1(a)b+a\delta_1(b)\right)\\&=\delta_1\delta_2(a)b+\delta_2(a)\delta_1(b)+\delta_1(a)\delta_2(b)+a\delta_1\delta_2(b)\\&\qquad-\delta_2\delta_1(a)b-\delta_1(a)\delta_2(b)-\delta_2(a)\delta_1(b)-a\delta_2\delta_1(b)\\&=[\delta_1,\delta_2](a)b+a[\delta_1,\delta_2](b)\end{aligned}$$
We've actually seen this before. We identified the vectors at a point $p$ on a manifold with the derivations of the (real) algebra of functions defined in a neighborhood of $p$, so we need to take the commutator of two derivations to be sure of getting a new derivation back.
So now we can say that the mapping $\mathrm{ad}:L\to\mathrm{End}(L)$ that sends $x$ to the endomorphism $\mathrm{ad}(x)=[x,\cdot]$ lands in $\mathrm{Der}(L)$ because of the Jacobi identity. We call this mapping the "adjoint representation" of $L$, and indeed it's actually a homomorphism of Lie algebras. That is, $\mathrm{ad}([x,y])=[\mathrm{ad}(x),\mathrm{ad}(y)]$. The endomorphism on the left-hand side sends $z$ to $[[x,y],z]$, while on the right-hand side we get $[x,[y,z]]-[y,[x,z]]$. That these two are equal is yet another application of the Jacobi identity.
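Since this is an identity among matrices once we fix a linear Lie algebra, we can spot-check it numerically. Here's a sketch in Python (the choice of $\mathfrak{gl}(3)$ and the random integer test matrices are arbitrary) verifying that $\mathrm{ad}([x,y])$ and $[\mathrm{ad}(x),\mathrm{ad}(y)]$ agree as operators:

```python
import random

n = 3
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(n)] for i in range(n)]
def bracket(a, b):  # commutator in gl(3)
    return sub(mul(a, b), mul(b, a))

random.seed(1)
def rand_mat():
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

x, y = rand_mat(), rand_mat()
# ad([x,y]) = [ad(x), ad(y)] means: applied to any z,
#   [[x,y], z] = [x, [y,z]] - [y, [x,z]]
for _ in range(10):
    z = rand_mat()
    assert bracket(bracket(x, y), z) == sub(bracket(x, bracket(y, z)),
                                            bracket(y, bracket(x, z)))
print("ad is a homomorphism of Lie algebras on gl(3)")
```

Of course a finite check proves nothing by itself; the point is to see the rewritten Jacobi identity doing exactly the work the paragraph above describes.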
One last piece of nomenclature: derivations in the image of $\mathrm{ad}$ are called "inner"; all others are called "outer" derivations.
If we pick a basis $\{e_i\}$ of $V$, then we have a matrix for the bilinear form $f$:

$$B_{ij}=f(e_i,e_j)$$
and one for the endomorphism $x$:

$$x(e_j)=\sum_i x_{ij}e_i$$
So the condition $f(x(u),v)=-f(u,x(v))$ in terms of matrices comes down to

$$\sum_k x_{ki}B_{kj}=-\sum_k B_{ik}x_{kj}$$
or, more abstractly, $x^\top B=-Bx$.
So do these endomorphisms form a subalgebra of $\mathfrak{gl}(V)$? Linearity is easy; we must check that this condition is closed under the bracket. That is, if $x$ and $y$ both satisfy this condition, what about their commutator $[x,y]$? We calculate

$$[x,y]^\top B=(xy-yx)^\top B=y^\top x^\top B-x^\top y^\top B=-y^\top Bx+x^\top By=Byx-Bxy=-B[x,y]$$
So this condition will always give us a linear Lie algebra.
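We can watch this closure happen in an example. Here's a Python sketch taking $B$ to be the standard skew form in dimension four (the symplectic case below); the projection $a\mapsto a-B^{-1}a^\top B$ is just a convenient, easily checked way to manufacture solutions of $x^\top B=-Bx$ from arbitrary matrices, and the specific integer matrices are arbitrary choices:

```python
n = 4
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
def sub(a, b): return [[a[i][j] - b[i][j] for j in range(n)] for i in range(n)]
def neg(a): return [[-v for v in row] for row in a]
def transpose(a): return [list(row) for row in zip(*a)]
def bracket(a, b): return sub(mul(a, b), mul(b, a))

# the standard skew-symmetric form; note B*B = -I, so B^{-1} = -B
B = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 0, 0],
     [0, -1, 0, 0]]
Binv = neg(B)

def in_algebra(x):  # the defining condition x^T B = -B x
    return mul(transpose(x), B) == neg(mul(B, x))

def project(a):  # a - B^{-1} a^T B satisfies the condition (B is skew)
    return sub(a, mul(mul(Binv, transpose(a)), B))

a1 = [[1, 2, 0, 3], [0, 1, 4, 0], [2, 0, 1, 1], [0, 5, 0, 2]]
a2 = [[0, 1, 1, 0], [3, 0, 0, 2], [1, 1, 0, 4], [0, 2, 1, 0]]
x, y = project(a1), project(a2)
assert in_algebra(x) and in_algebra(y)
assert in_algebra(bracket(x, y))  # the condition is closed under the bracket
print("the commutator stays inside the algebra defined by B")
```

The same projection works verbatim when $B$ is symmetric and invertible, which is why it reappears in the dimension counts below.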
We have three different families of these algebras. First, we consider the case where $\dim(V)=2l+1$ is odd, and we let $f$ be the symmetric, nondegenerate bilinear form with matrix

$$B=\begin{pmatrix}1&0&0\\0&0&I_l\\0&I_l&0\end{pmatrix}$$
where $I_l$ is the $l\times l$ identity matrix. If we write the matrix of our endomorphism in a similar form

$$x=\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}$$
our matrix conditions turn into

$$a=0\qquad c_1=-b_2^\top\qquad c_2=-b_1^\top\qquad q=-m^\top\qquad n=-n^\top\qquad p=-p^\top$$
From here it's straightforward to count out $2l$ basis elements that satisfy the conditions on the first row and column, $\frac{l(l-1)}{2}$ that satisfy the antisymmetry for $n$, another $\frac{l(l-1)}{2}$ that satisfy the antisymmetry for $p$, and $l^2$ that satisfy the condition between $m$ and $q$, for a total of $2l^2+l$ basis elements. We call this Lie algebra the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l+1,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $B_l$.
Next up, in the case where $\dim(V)=2l$ is even we let the matrix of $f$ look like

$$B=\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}$$
A similar approach to that above gives a basis with $2l^2-l$ elements. We also call this the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $D_l$.
Finally, we again take an even-dimensional $V$, but this time we use the skew-symmetric form

$$B=\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}$$
This time we get a basis with $2l^2+l$ elements. We call this the symplectic algebra of $V$, and write $\mathfrak{sp}(V)$ or $\mathfrak{sp}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $C_l$.
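All three dimension counts can be verified by brute force. The condition $x^\top B=-Bx$ is linear in $x$, and (over the rationals) the algebra it cuts out is exactly the image of the projection $a\mapsto a-B^{-1}a^\top B$, which restricts to twice the identity on solutions; so the dimension is the rank of that projection. A Python sketch for $l=2$, with the three forms written out as explicit matrices:

```python
from fractions import Fraction

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]
def sub(a, b): return [[x - y for x, y in zip(r, s)] for r, s in zip(a, b)]
def neg(a): return [[-v for v in row] for row in a]
def transpose(a): return [list(r) for r in zip(*a)]

def rank(rows):  # Gaussian elimination over the rationals
    rows = [[Fraction(v) for v in r] for r in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        rows[rk] = [v / rows[rk][col] for v in rows[rk]]
        for r in range(len(rows)):
            if r != rk and rows[r][col] != 0:
                c = rows[r][col]
                rows[r] = [u - c * v for u, v in zip(rows[r], rows[rk])]
        rk += 1
    return rk

def algebra_dim(B, Binv):  # dim of {x : x^T B = -B x} = rank of the projection
    n = len(B)
    flat = []
    for i in range(n):
        for j in range(n):
            E = [[1 if (r, c) == (i, j) else 0 for c in range(n)]
                 for r in range(n)]
            P = sub(E, mul(mul(Binv, transpose(E)), B))
            flat.append([P[r][c] for r in range(n) for c in range(n)])
    return rank(flat)

# the forms for l = 2, so dim V = 5, 4, 4; the symmetric ones satisfy B^2 = I
B_odd  = [[1,0,0,0,0],[0,0,0,1,0],[0,0,0,0,1],[0,1,0,0,0],[0,0,1,0,0]]
B_even = [[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]]
B_sp   = [[0,0,1,0],[0,0,0,1],[-1,0,0,0],[0,-1,0,0]]

l = 2
assert algebra_dim(B_odd, B_odd) == 2*l*l + l     # B_2: dim o(5)  = 10
assert algebra_dim(B_even, B_even) == 2*l*l - l   # D_2: dim o(4)  = 6
assert algebra_dim(B_sp, neg(B_sp)) == 2*l*l + l  # C_2: dim sp(4) = 10
print("dimension formulas check out for l = 2")
```

Here `algebra_dim`, `rank`, and the projection trick are illustrative scaffolding, not anything canonical; but the three asserts agree with the counts $2l^2+l$, $2l^2-l$, and $2l^2+l$ above.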
Along with the special linear Lie algebras, these form the "classical" Lie algebras. It's a tedious but straightforward exercise to check that for any classical Lie algebra $L$, each basis element of $L$ can be written as a bracket of two other elements of $L$. That is, we have $L=[L,L]$. Since $L\subseteq\mathfrak{gl}(n,\mathbb{F})$ for some $n$, and since we know that $[\mathfrak{gl}(n,\mathbb{F}),\mathfrak{gl}(n,\mathbb{F})]\subseteq\mathfrak{sl}(n,\mathbb{F})$, this establishes that $L\subseteq\mathfrak{sl}(n,\mathbb{F})$ for all classical $L$.
Take a vector space $V$ with dimension $n$ and start with $\mathfrak{gl}(V)$. Inside this, we consider the subalgebra of endomorphisms whose trace is zero, which we write $\mathfrak{sl}(V)$ and call the "special linear Lie algebra". This is a subspace, since the trace is a linear functional on the space of endomorphisms:

$$\mathrm{tr}(ax+by)=a\,\mathrm{tr}(x)+b\,\mathrm{tr}(y)$$
so if two endomorphisms have trace zero then so do all their linear combinations. It's a subalgebra by using the "cyclic" property of the trace:

$$\mathrm{tr}(xy)=\mathrm{tr}(yx)$$
Note that this does not mean that endomorphisms can be arbitrarily rearranged inside the trace, which is a common mistake after seeing this formula; only cyclic permutations are allowed. Anyway, this implies that

$$\mathrm{tr}\left([x,y]\right)=\mathrm{tr}(xy)-\mathrm{tr}(yx)=0$$
so actually not only is the bracket of two endomorphisms in $\mathfrak{sl}(V)$ back in the subspace, the bracket of any two endomorphisms of $V$ lands in $\mathfrak{sl}(V)$. In other words: $[\mathfrak{gl}(V),\mathfrak{gl}(V)]\subseteq\mathfrak{sl}(V)$.
Choosing a basis, we will write the algebra as $\mathfrak{sl}(n,\mathbb{F})$. It should be clear that the dimension is $n^2-1$, since this is the kernel of a single linear functional on the $n^2$-dimensional $\mathfrak{gl}(n,\mathbb{F})$, but let's exhibit a basis anyway. All the basic matrices $e_{ij}$ with $i\neq j$ are traceless, so they're all in $\mathfrak{sl}(n,\mathbb{F})$. Along the diagonal, $\mathrm{tr}(e_{ii})=1$, so we need linear combinations that cancel each other out. It's particularly convenient to define

$$h_i=e_{ii}-e_{i+1,i+1}\qquad 1\leq i\leq n-1$$
So we've got the $n^2$ basic matrices, but we take away the $n$ along the diagonal. Then we add back the $n-1$ new matrices $h_i$, getting $n^2-1$ matrices in our standard basis for $\mathfrak{sl}(n,\mathbb{F})$, verifying the dimension.
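As a sanity check, we can build this standard basis and count it; a short Python sketch (the choice $n=4$ is arbitrary):

```python
n = 4

def unit(i, j):  # the basic matrix e_ij
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def sub(a, b):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(a, b)]

def trace(a):
    return sum(a[i][i] for i in range(n))

# off-diagonal basic matrices e_ij (i != j), plus h_i = e_ii - e_{i+1,i+1}
basis = [unit(i, j) for i in range(n) for j in range(n) if i != j]
basis += [sub(unit(i, i), unit(i + 1, i + 1)) for i in range(n - 1)]

assert len(basis) == n * n - 1           # dim sl(n) = n^2 - 1
assert all(trace(x) == 0 for x in basis)  # every element really is traceless
print("standard basis of sl(4):", len(basis), "traceless matrices")
```

For $n=4$ this gives the expected $15=4^2-1$ matrices.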
We sometimes refer to the isomorphism class of $\mathfrak{sl}(n+1,\mathbb{F})$ as $A_n$. Because reasons.
So now that we’ve remembered what a Lie algebra is, let’s mention the most important ones: linear Lie algebras. These are ones that arise from linear transformations on vector spaces, ’cause mathematicians love them some vector spaces.
Specifically, let $V$ be a finite-dimensional vector space over a field $\mathbb{F}$, and consider the associative algebra $\mathrm{End}(V)$ of endomorphisms, the linear transformations from $V$ back to itself. We can use the usual method of defining a bracket as a commutator:

$$[x,y]=xy-yx$$
to turn this into a Lie algebra. When considered as a Lie algebra like this, we call it the "general linear Lie algebra", and write $\mathfrak{gl}(V)$. Many Lie algebras are written in the Fraktur typeface like this.
Any subalgebra of $\mathfrak{gl}(V)$ is called a "linear Lie algebra", since it's made up of linear transformations. It turns out that every finite-dimensional Lie algebra is isomorphic to a linear Lie algebra, but we reserve the "linear" term for those algebras which we're actually thinking of as having linear transformations as elements.
Of course, since $V$ is a vector space over $\mathbb{F}$, we can pick a basis. If $V$ has dimension $n$, then there are $n$ elements in any basis, and so our endomorphisms correspond to the $n\times n$ matrices $\mathrm{Mat}_n(\mathbb{F})$. When we think of it in these terms, we often write $\mathfrak{gl}(n,\mathbb{F})$ for the general linear Lie algebra.
We can actually calculate the bracket structure explicitly in this case; bilinearity tells us that it suffices to write it down in terms of a basis. The standard basis of $\mathrm{Mat}_n(\mathbb{F})$ is $\{e_{ij}\}$, where $e_{ij}$ has a $1$ in the $i$th row and $j$th column and $0$ elsewhere. So we can calculate:

$$[e_{ij},e_{kl}]=e_{ij}e_{kl}-e_{kl}e_{ij}=\delta_{jk}e_{il}-\delta_{li}e_{kj}$$
where, as usual, $\delta_{ij}$ is the Kronecker delta: $1$ if the indices are the same and $0$ if they're different.
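This formula is easy to verify mechanically. Here's a Python sketch checking it against the honest matrix commutator for every quadruple of indices (with $n=3$, an arbitrary small choice):

```python
from itertools import product

n = 3

def unit(i, j):  # basic matrix e_ij
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
def addm(a, b): return [[x + y for x, y in zip(r, s)] for r, s in zip(a, b)]
def scale(c, a): return [[c * v for v in row] for row in a]
def bracket(a, b): return addm(mul(a, b), scale(-1, mul(b, a)))
def delta(i, j): return 1 if i == j else 0

for i, j, k, l in product(range(n), repeat=4):
    lhs = bracket(unit(i, j), unit(k, l))
    rhs = addm(scale(delta(j, k), unit(i, l)),
               scale(-delta(l, i), unit(k, j)))
    assert lhs == rhs
print("[e_ij, e_kl] = d_jk e_il - d_li e_kj verified for all indices, n = 3")
```

Checking all $3^4=81$ quadruples covers every case of the formula in this dimension, including the ones where both delta terms vanish or where $(i,j)=(k,l)$.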
We can now identify some important subalgebras of $\mathfrak{gl}(n,\mathbb{F})$. First, the strictly upper-triangular matrices $\mathfrak{n}(n,\mathbb{F})$ involve only the basis elements $e_{ij}$ with $i<j$. If $j=k$, so the first term in the above expression for the bracket shows up, then the second term cannot show up, and vice versa. Either way, we conclude that the bracket of two basis elements of $\mathfrak{n}(n,\mathbb{F})$ (and thus of any elements of this subspace) involves only other basis elements of the subspace, which makes this a subalgebra.
Similarly, we conclude that the (non-strictly) upper-triangular matrices, involving only $e_{ij}$ with $i\leq j$, also form a subalgebra $\mathfrak{t}(n,\mathbb{F})$. And, finally, the diagonal matrices, involving only the $e_{ii}$, also form a subalgebra $\mathfrak{d}(n,\mathbb{F})$. This last one is interesting, in that the bracket on $\mathfrak{d}(n,\mathbb{F})$ is actually trivial, since any two diagonal matrices commute.
As vector spaces, we see that $\mathfrak{t}(n,\mathbb{F})=\mathfrak{d}(n,\mathbb{F})+\mathfrak{n}(n,\mathbb{F})$. It's easy to check that the bracket of a diagonal matrix and a strictly upper-triangular matrix is again strictly upper-triangular, so we write $[\mathfrak{d}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})]\subseteq\mathfrak{n}(n,\mathbb{F})$, and so we also have $[\mathfrak{t}(n,\mathbb{F}),\mathfrak{t}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F})$. This may seem a little like a toy example now, but it turns out to be surprisingly general; many subalgebras will relate to each other this way.
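Both bracket relations are easy to watch in an example; a Python sketch with $n=4$ (the specific integer matrices are arbitrary members of the respective subalgebras):

```python
n = 4

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
def sub(a, b): return [[x - y for x, y in zip(r, s)] for r, s in zip(a, b)]
def bracket(a, b): return sub(mul(a, b), mul(b, a))

def strictly_upper(a):  # is a in n(4,F)? zero on and below the diagonal
    return all(a[i][j] == 0 for i in range(n) for j in range(n) if i >= j)

# arbitrary integer members of t(4,F), d(4,F), and n(4,F)
t1 = [[1, 2, 3, 4], [0, 5, 6, 7], [0, 0, 8, 9], [0, 0, 0, 10]]
t2 = [[2, 0, 1, 5], [0, 3, 0, 2], [0, 0, 1, 4], [0, 0, 0, 6]]
d  = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]
n1 = [[0, 1, 0, 2], [0, 0, 3, 0], [0, 0, 0, 4], [0, 0, 0, 0]]

assert strictly_upper(bracket(d, n1))   # [d, n] lands in n
assert strictly_upper(bracket(t1, t2))  # [t, t] lands in n
print("triangular bracket relations verified")
```

The second assert reflects the reason the diagonal drops out: the diagonal of a product of upper-triangular matrices depends only on the two diagonals, which commute, so a commutator of upper-triangular matrices has zero diagonal.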
Well it's been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I've already defined them over a year ago. So let's pick up with a recap: to make a Lie algebra we take a module, usually a vector space over a field $\mathbb{F}$, called $L$ and give it a bilinear operation which we write as $[x,y]$. We often require such operations to be associative, but this time we impose the following two conditions:

$$[x,x]=0\qquad\text{for all }x\in L$$

$$[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0\qquad\text{for all }x,y,z\in L$$
Now, as long as we're not working in a field where $1+1=0$ (and usually we're not) we can use bilinearity to rewrite the first condition:

$$0=[x+y,x+y]=[x,x]+[x,y]+[y,x]+[y,y]=[x,y]+[y,x]$$
so $[x,y]=-[y,x]$. This antisymmetry always holds, but we can only go the other way, recovering $[x,x]=0$, if the characteristic of $\mathbb{F}$ is not $2$, as stated above.
The second condition is called the "Jacobi identity", and antisymmetry allows us to rewrite it as:

$$[x,[y,z]]=[[x,y],z]+[y,[x,z]]$$
That is, bilinearity says that we have a linear mapping $\mathrm{ad}:L\to\mathrm{End}(L)$ that sends an element $x$ to the linear endomorphism $\mathrm{ad}(x)=[x,\cdot]$. And the Jacobi identity says that this actually lands in the subspace of "derivations": those which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare the rewritten identity to the product rule:

$$\frac{d}{dt}(fg)=\frac{df}{dt}g+f\frac{dg}{dt}$$
where $\mathrm{ad}(x)$ takes the place of $\frac{d}{dt}$, $y$ takes the place of $f$, and $z$ takes the place of $g$. And the operations are changed around, with multiplication of functions becoming the bracket. But you should see the similarity.
Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map $\phi:L\to L'$ that preserves the bracket:

$$\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]$$
We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called “homomorphic images” in the literature, and we’ll talk more about them later.
We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting.
And I'll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra $A$ can be given a bracket defined by

$$[x,y]=xy-yx$$
The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise.
There is a great source for generating many Lie algebras: associative algebras. Specifically, if we have an associative algebra $A$ we can build a Lie algebra on the same underlying vector space by letting the bracket be the "commutator" from $A$. That is, for any algebra elements $a$ and $b$ we define

$$[a,b]=ab-ba$$
In fact, this is such a common way of coming up with Lie algebras that many people think of the bracket as a commutator by definition.
Clearly this is bilinear and antisymmetric, but does it satisfy the Jacobi identity? Well, let's take three algebra elements $a$, $b$, and $c$ and form the double bracket

$$[a,[b,c]]=a(bc-cb)-(bc-cb)a=abc-acb-bca+cba$$
We can find the other orders just as easily:

$$[b,[c,a]]=b(ca-ac)-(ca-ac)b=bca-bac-cab+acb$$

$$[c,[a,b]]=c(ab-ba)-(ab-ba)c=cab-cba-abc+bac$$
and when we add these all up each term cancels against another term, leaving zero. Thus the commutator in an associative algebra does indeed act as a bracket.
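Rather than pushing the twelve-term cancellation around by hand, we can also let a computer confirm it in a concrete associative algebra; a Python sketch using $2\times 2$ integer matrices (the particular matrices are arbitrary choices) as the associative algebra:

```python
n = 2

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
def sub(a, b): return [[x - y for x, y in zip(r, s)] for r, s in zip(a, b)]
def addm(a, b): return [[x + y for x, y in zip(r, s)] for r, s in zip(a, b)]
def br(a, b):  # the commutator bracket [a,b] = ab - ba
    return sub(mul(a, b), mul(b, a))

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 1]]
c = [[2, 0], [5, 1]]

# [a,[b,c]] + [b,[c,a]] + [c,[a,b]] should be the zero matrix
jacobi = addm(addm(br(a, br(b, c)), br(b, br(c, a))), br(c, br(a, b)))
assert jacobi == [[0, 0], [0, 0]]
print("Jacobi identity holds for the matrix commutator")
```

A single numeric instance is no proof, of course, but it's a useful smoke test for the cancellation just carried out symbolically.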