The Unapologetic Mathematician

Mathematics for the interested outsider

Nilpotent and Solvable Lie Algebras

There are two big types of Lie algebras that we want to take care of right up front, and both of them are defined similarly. We remember that if I and J are ideals of a Lie algebra L, then [I,J] — the collection spanned by brackets of elements of I and J — is also an ideal of L. And since the bracket of any element of I with any element of L is back in I, we can see that [I,J]\subseteq I. Similarly we conclude [I,J]\subseteq J, so [I,J]\subseteq I\cap J.

Now, starting from L we can build up a tower of ideals starting with L^{(0)}=L and moving down by L^{(n+1)}=[L^{(n)},L^{(n)}]\subseteq L^{(n)}. We call this the “derived series” of L. If this tower eventually bottoms out at 0 we say that L is “solvable”. If L is abelian we see that L^{(1)}=[L,L]=0, so L is automatically solvable. At the other extreme, if L is simple — and thus not abelian — the only possibility is [L,L]=L, so the derived series never gets down to 0, and thus L is not solvable.

We can build up another tower, again starting with L^0=L, but this time moving down by L^{n+1}=[L,L^n]. We call this the “lower central series” or “descending central series” of L. If this tower eventually bottoms out at 0 we say that L is “nilpotent”. Just as above we see that abelian Lie algebras are automatically nilpotent, while simple Lie algebras are never nilpotent.

It’s not too hard to see that L^{(n)}\subseteq L^n for all n. Indeed, L^{(0)}=L=L^0 to start. Then if L^{(n)}\subseteq L^n\subseteq L then

\displaystyle\begin{aligned}L^{(n+1)}&=[L^{(n)},L^{(n)}]\\&\subseteq[L,L^n]\\&=L^{n+1}\end{aligned}

so the assertion follows by induction. Thus we see that any nilpotent algebra is solvable, but solvable algebras are not necessarily nilpotent.

As some explicit examples, we look back at the algebras \mathfrak{t}(n,\mathbb{F}) and \mathfrak{n}(n,\mathbb{F}). The second, as we might guess, is nilpotent, and thus solvable. The first, though, is merely solvable.

First, let’s check that L=\mathfrak{n}(n,\mathbb{F}) is nilpotent. The obvious basis consists of all the basic matrices e_{ij} with i<j, and we know that

\displaystyle[e_{ij},e_{kl}]=\delta_{jk}e_{il}-\delta_{li}e_{kj}
We have an obvious sense of the “level” of an element: the difference j-i, which is well-defined on each basis element. We can tell that the bracket of two basis elements gives either zero or another basis element whose level is the sum of the levels of the first two basis elements. The ideal L=L^0 is spanned by all the basis elements of level \geq1. The ideal L^1 is then spanned by basis elements of level \geq2. And so it goes, each L^i spanned by basis elements of level \geq i+1. But this must run out soon enough, since the highest possible level is n-1. In terms of the matrix, elements of L^0 are zero everywhere on or below the diagonal; elements of L^1 are also zero one row above the diagonal; and so on, each step pushing the nonzero elements “off the edge” to the upper-right of the matrix. Thus \mathfrak{n}(n,\mathbb{F}) is nilpotent, and thus solvable as well.
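This level bookkeeping is easy to spot-check numerically. Here’s a minimal NumPy sketch (the helpers `e`, `bracket`, and `level` are names of my own, just for illustration):

```python
import numpy as np

n = 5  # the size is arbitrary; any n works
rng = np.random.default_rng(0)

def e(i, j):
    """Basic matrix with a 1 in row i, column j (0-indexed)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

def level(m):
    """Lowest superdiagonal j - i carrying a nonzero entry; None if m = 0."""
    diags = [d for d in range(1, n) if np.any(np.diagonal(m, d) != 0)]
    return min(diags) if diags else None

# bracket of basis elements: level 1 plus level 2 gives level 3
c = bracket(e(0, 1), e(1, 3))
print(level(c))  # 3

# general strictly upper-triangular matrices: levels add, or the bracket dies
a = np.triu(rng.standard_normal((n, n)), 1)  # level >= 1
b = np.triu(rng.standard_normal((n, n)), 2)  # level >= 2
print(level(bracket(a, b)))  # >= 3, or None if the bracket vanishes
```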

Turning to L=\mathfrak{t}(n,\mathbb{F}), we already know that L^{(1)}=[L,L]=\mathfrak{n}(n,\mathbb{F}), which we just showed to be solvable! We see that L^{(n+1)}=\mathfrak{n}(n,\mathbb{F})^{(n)}, which will eventually bottom out at 0, thus L is solvable as well. However, we can also calculate that

\displaystyle L^2=[L,L^1]=\left[\mathfrak{t}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})\right]=\mathfrak{n}(n,\mathbb{F})

and so the lower central series of \mathfrak{t}(n,\mathbb{F}) stabilizes after the first term and never reaches 0. Thus this algebra is solvable, but not nilpotent.
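Both calculations can be watched happening in small dimensions. The following NumPy sketch (helper names are mine) tracks dimensions along the derived series of \mathfrak{n}(3,\mathbb{F}) and the lower central series of \mathfrak{t}(3,\mathbb{F}):

```python
import numpy as np
from itertools import product

n = 3

def e(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

def bracket(a, b):
    return a @ b - b @ a

def dim_span(mats):
    """Dimension of the subspace of matrices spanned by `mats`."""
    if not mats:
        return 0
    return np.linalg.matrix_rank(np.stack([m.flatten() for m in mats]))

def bracket_span(xs, ys):
    """A spanning set for [X, Y], given spanning sets for X and Y."""
    return [bracket(x, y) for x, y in product(xs, ys)]

nil = [e(0, 1), e(0, 2), e(1, 2)]        # n(3, F)
t = nil + [e(0, 0), e(1, 1), e(2, 2)]    # t(3, F)

# derived series of n(3, F): dimensions fall to 0, so it is solvable
d = nil
dims = [dim_span(d)]
for _ in range(3):
    d = bracket_span(d, d)
    dims.append(dim_span(d))
print(dims)  # [3, 1, 0, 0]

# lower central series of t(3, F): [t, t] = n, then [t, n] = n forever
c = t
cdims = [dim_span(c)]
for _ in range(4):
    c = bracket_span(t, c)
    cdims.append(dim_span(c))
print(cdims)  # [6, 3, 3, 3, 3]
```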

August 20, 2012 Posted by | Algebra, Lie Algebras | 4 Comments

An Explicit Example

Let’s pause and catch our breath with an actual example of some of the things we’ve been talking about. Specifically, we’ll consider L=\mathfrak{sl}(2,\mathbb{F}) — the special linear Lie algebra on a two-dimensional vector space. This is a nice example not only because it’s nicely representative of some general phenomena, but also because the algebra itself is three-dimensional, which helps keep clear the distinction between L as a Lie algebra and the adjoint action of L on itself, particularly since these are both thought of in terms of matrix multiplications.

Now, we know a basis for this algebra:

\displaystyle\begin{aligned}x&=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\\y&=\begin{pmatrix}0&0\\1&0\end{pmatrix}\\h&=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\end{aligned}

which we will take in this order. We want to check each of the brackets of these basis elements:

\displaystyle\begin{aligned}{}[h,x]&=2x\\ [h,y]&=-2y\\ [x,y]&=h\end{aligned}

Writing out each bracket of basis elements as a (unique) linear combination of basis elements specifies the bracket completely, by linearity. We call the coefficients the “structure constants” of L, and they determine the algebra up to isomorphism.
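If you want to verify these brackets by hand, a few lines of NumPy will do it; this is just a sketch with the three basis matrices written out explicitly:

```python
import numpy as np

x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def bracket(a, b):
    return a @ b - b @ a

# the three defining brackets
assert np.array_equal(bracket(h, x), 2 * x)
assert np.array_equal(bracket(h, y), -2 * y)
assert np.array_equal(bracket(x, y), h)
print("structure constants check out")
```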

Okay, now we want to use this basis of the vector space L and write down matrices for the adjoint actions \mathrm{ad}(x), \mathrm{ad}(y), and \mathrm{ad}(h) on L:

\displaystyle\begin{aligned}\mathrm{ad}(x)&=\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\\\mathrm{ad}(y)&=\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\\\mathrm{ad}(h)&=\begin{pmatrix}2&0&0\\ 0&-2&0\\ 0&0&0\end{pmatrix}\end{aligned}
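These matrices can be recomputed mechanically: the jth column of \mathrm{ad}(a) holds the coordinates of [a,b_j] in the ordered basis (x,y,h). A NumPy sketch, where the `coords` helper is my own shortcut (valid for traceless 2\times 2 matrices), reproduces the three matrices displayed above:

```python
import numpy as np

x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = (x, y, h)

def bracket(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the ordered basis (x, y, h)."""
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(a):
    # column j of ad(a) holds the coordinates of [a, basis_j]
    return np.column_stack([coords(bracket(a, b)) for b in basis])

print(ad(x))
print(ad(y))
print(ad(h))
```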

Now, both \mathrm{ad}(x) and \mathrm{ad}(-y) are nilpotent. In the case of x we can see that \mathrm{ad}(x) sends the line spanned by y to the line spanned by h, the line spanned by h to the line spanned by x, and the line spanned by x to zero. So we can calculate the powers:

\displaystyle\begin{aligned}\mathrm{ad}(x)^0&=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\\\mathrm{ad}(x)^1&=\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\\\mathrm{ad}(x)^2&=\begin{pmatrix}0&-2&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\\\mathrm{ad}(x)^3&=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\end{aligned}

and the exponential:

\displaystyle\begin{aligned}\exp(\mathrm{ad}(x))&=\frac{1}{0!}\mathrm{ad}(x)^0+\frac{1}{1!}\mathrm{ad}(x)^1+\frac{1}{2!}\mathrm{ad}(x)^2\\&=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}+\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}+\frac{1}{2}\begin{pmatrix}0&-2&0\\ 0&0&0\\ 0&0&0\end{pmatrix}\\&=\begin{pmatrix}1&-1&-2\\ 0&1&0\\ 0&1&1\end{pmatrix}\end{aligned}

Similarly we can calculate the exponential of \mathrm{ad}(-y):

\displaystyle\exp(\mathrm{ad}(-y))=\begin{pmatrix}1&0&0\\-1&1&-2\\ 1&0&1\end{pmatrix}
So now it’s a simple matter to write down the following element of \mathrm{Int}(L):

\displaystyle\sigma=\exp(\mathrm{ad}(x))\exp(\mathrm{ad}(-y))\exp(\mathrm{ad}(x))=\begin{pmatrix}0&-1&0\\-1&0&0\\ 0&0&-1\end{pmatrix}
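Since each of these matrices is nilpotent, the exponentials are finite sums, and the whole product can be checked numerically. A NumPy sketch (the `nil_exp` helper is my own):

```python
import numpy as np

ad_x = np.array([[0., 0., -2.], [0., 0., 0.], [0., 1., 0.]])
ad_y = np.array([[0., 0., 0.], [0., 0., 2.], [-1., 0., 0.]])

def nil_exp(m):
    """exp of a nilpotent matrix: the power series terminates."""
    out, term = np.eye(len(m)), np.eye(len(m))
    for k in range(1, len(m) + 1):
        term = term @ m / k
        out = out + term
    return out

sigma = nil_exp(ad_x) @ nil_exp(-ad_y) @ nil_exp(ad_x)
print(sigma.astype(int))  # the matrix of sigma: x -> -y, y -> -x, h -> -h
```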

In other words, \sigma(x)=-y, \sigma(y)=-x, and \sigma(h)=-h.

We can also see that x and -y are themselves nilpotent, as endomorphisms of the vector space \mathbb{F}^2. We can calculate their exponentials:

\displaystyle\begin{aligned}\exp(x)&=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\\\exp(-y)&=\begin{pmatrix}1&0\\-1&1\end{pmatrix}\end{aligned}

and the product:

\displaystyle s=\exp(x)\exp(-y)\exp(x)=\begin{pmatrix}0&1\\-1&0\end{pmatrix}

It’s easy to check from here that conjugation by s has the exact same effect as the action of \sigma: sls^{-1}=\sigma(l).
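Checking this is one line per basis element; here’s a NumPy sketch, with `conj` as an ad-hoc helper name:

```python
import numpy as np

x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

s = np.array([[0., 1.], [-1., 0.]])
s_inv = np.linalg.inv(s)

def conj(m):
    return s @ m @ s_inv

# conjugation by s realizes sigma: x -> -y, y -> -x, h -> -h
assert np.allclose(conj(x), -y)
assert np.allclose(conj(y), -x)
assert np.allclose(conj(h), -h)
print("conjugation by s matches the action of sigma")
```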

This is a very general phenomenon: if L\subseteq\mathfrak{gl}(V) is any linear Lie algebra and x\in L is nilpotent, then conjugation by the exponential of x is the same as applying the exponential of the adjoint of x.

Indeed, considering \mathrm{ad}(x)\in\mathrm{End}_\mathbb{F}(\mathrm{End}(V)), we can write it as

\displaystyle\mathrm{ad}(x)=\lambda_x-\rho_x
where \lambda_x and \rho_x are left- and right-multiplication by x in \mathrm{End}(V). Since these two commute with each other and both are nilpotent we can write

\displaystyle\exp(\mathrm{ad}(x))=\exp(\lambda_x-\rho_x)=\exp(\lambda_x)\exp(-\rho_x)=\lambda_{\exp(x)}\rho_{\exp(-x)}
That is, the action of \exp(\mathrm{ad}(x)) is the same as left-multiplication by \exp(x) followed by right-multiplication by \exp(-x). All we need now is to verify that \exp(-x) is the inverse of \exp(x), but the expanded Leibniz identity from last time tells us that \exp(x)\exp(-x)=\exp(x-x)=\exp(0)=1_V, thus proving our assertion.
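This identity is easy to test numerically for a random nilpotent x. The sketch below is illustrative only; it assumes NumPy’s row-major flattening, under which \lambda_x becomes the Kronecker product x\otimes I and \rho_x becomes I\otimes x^T:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = np.triu(rng.standard_normal((n, n)), 1)  # nilpotent: strictly upper triangular
m = rng.standard_normal((n, n))              # arbitrary test matrix

def nil_exp(a, steps):
    """exp of a nilpotent matrix via its terminating power series."""
    out, term = np.eye(len(a)), np.eye(len(a))
    for k in range(1, steps + 1):
        term = term @ a / k
        out = out + term
    return out

# ad(x) = lambda_x - rho_x as an n^2 x n^2 matrix on row-major vec(m):
# vec(x m) = (x kron I) vec(m),  vec(m x) = (I kron x^T) vec(m)
ad_x = np.kron(x, np.eye(n)) - np.kron(np.eye(n), x.T)

lhs = (nil_exp(ad_x, 2 * n) @ m.flatten()).reshape(n, n)
rhs = nil_exp(x, n) @ m @ nil_exp(-x, n)
print(np.allclose(lhs, rhs))  # True
```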

We can also tell at this point that the nilpotency of x and -y and that of \mathrm{ad}(x) and \mathrm{ad}(-y) are not unrelated: if x\in\mathfrak{gl}(V) is nilpotent then \mathrm{ad}(x)\in\mathrm{End}(\mathfrak{gl}(V)) is, too. Indeed, since \lambda_x and \rho_x are commuting nilpotents, their difference — \mathrm{ad}(x)=\lambda_x-\rho_x — is again nilpotent.

We must be careful to note that the converse is not true. Indeed, I_V\in\mathfrak{gl}(V) is ad-nilpotent, but I_V itself is certainly not nilpotent.

August 18, 2012 Posted by | Algebra, Lie Algebras | 1 Comment

Automorphisms of Lie Algebras

Sorry for the delay; I’ve had a couple busy days. Here’s Thursday’s promised installment.

An automorphism of a Lie algebra L is, as usual, an invertible homomorphism from L onto itself, and the collection of all such automorphisms forms a group \mathrm{Aut}(L).

One obviously useful class of examples arises when we’re considering a linear Lie algebra L\subseteq\mathfrak{gl}(V). If g\in\mathrm{GL}(V) is an invertible endomorphism of V such that gLg^{-1}=L then the map x\mapsto gxg^{-1} is an automorphism of L. Clearly this happens for all g in the cases of \mathfrak{gl}(V) and the special linear Lie algebra \mathfrak{sl}(V) — the latter because the trace is invariant under a change of basis.

Now we’ll specialize to the (usual) case where no positive multiple of 1\in\mathbb{F} is zero (that is, \mathbb{F} has characteristic zero), and we consider an x\in L for which \mathrm{ad}(x) is “nilpotent”. That is, there’s some finite n such that \mathrm{ad}(x)^n=0 — applying y\mapsto[x,y] sufficiently many times eventually kills off every element of L. In this case, we say that x itself is “ad-nilpotent”.

In this case, we can define \exp(\mathrm{ad}(x)). How does this work? We use the power series expansion of the exponential:

\displaystyle\exp(\mathrm{ad}(x))=\sum\limits_{k=0}^\infty\frac{1}{k!}\mathrm{ad}(x)^k
We know that this series converges, because every term with k\geq n vanishes: \mathrm{ad}(x)^k=0 for all such k.

Now, I say that \exp(\mathrm{ad}(x))\in\mathrm{Aut}(L). In fact, while this case is very useful, all we need from \mathrm{ad}(x) is that it’s a nilpotent derivation \delta of L. The product rule for derivations generalizes as:

\displaystyle\delta^n(xy)=\sum\limits_{k=0}^n\binom{n}{k}\delta^k(x)\delta^{n-k}(y)
So we can write

\displaystyle\begin{aligned}\exp(\delta)(xy)&=\sum\limits_{n=0}^\infty\frac{1}{n!}\delta^n(xy)\\&=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\frac{1}{k!(n-k)!}\delta^k(x)\delta^{n-k}(y)\\&=\left(\sum\limits_{k=0}^\infty\frac{1}{k!}\delta^k(x)\right)\left(\sum\limits_{m=0}^\infty\frac{1}{m!}\delta^m(y)\right)\\&=\exp(\delta)(x)\exp(\delta)(y)\end{aligned}
That is, \exp(\delta) preserves the multiplication of the algebra that \delta is a derivation of. In particular, in terms of the Lie algebra L, we find that

\displaystyle\exp(\delta)\left([x,y]\right)=\left[\exp(\delta)(x),\exp(\delta)(y)\right]
Since \exp(\delta):L\to L we conclude that this is an epimorphism of L. It’s invertible by the usual formula

\displaystyle\exp(\delta)^{-1}=\exp(-\delta)
which means it’s an automorphism of L.

Just like a derivation of the form \mathrm{ad}(x) is called inner, an automorphism of the form \exp(\mathrm{ad}(x)) is called an inner automorphism, and the subgroup \mathrm{Inn}(L) they generate is a normal subgroup of \mathrm{Aut}(L). Specifically, if \phi\in\mathrm{Aut}(L) and x\in L then we can calculate

\displaystyle\phi\circ\mathrm{ad}(x)\circ\phi^{-1}=\mathrm{ad}(\phi(x))

and thus

\displaystyle\phi\circ\exp(\mathrm{ad}(x))\circ\phi^{-1}=\exp\left(\mathrm{ad}(\phi(x))\right)
so the conjugate of an inner automorphism is again inner.
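We can check this numerically in the \mathfrak{sl}(2,\mathbb{F}) example: taking \phi=\sigma (which satisfies \sigma(x)=-y), conjugating the matrix of \mathrm{ad}(x) by \sigma should give the matrix of \mathrm{ad}(-y). A NumPy sketch:

```python
import numpy as np

# ad matrices of sl(2, F) in the ordered basis (x, y, h)
ad_x = np.array([[0., 0., -2.], [0., 0., 0.], [0., 1., 0.]])
ad_y = np.array([[0., 0., 0.], [0., 0., 2.], [-1., 0., 0.]])

# sigma = exp(ad x) exp(ad(-y)) exp(ad x): an automorphism with sigma(x) = -y
sigma = np.array([[0., -1., 0.], [-1., 0., 0.], [0., 0., -1.]])

# phi o ad(x) o phi^{-1} = ad(phi(x)); here phi(x) = -y, so we expect -ad(y)
lhs = sigma @ ad_x @ np.linalg.inv(sigma)
print(np.allclose(lhs, -ad_y))  # True
```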

August 18, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

Isomorphism Theorems for Lie Algebras

The category of Lie algebras may not be Abelian, but it has a zero object, kernels, and cokernels, which is enough to get the first isomorphism theorem, just like for rings. Specifically, if \phi:L\to L' is any homomorphism of Lie algebras then we can factor it as follows:

\displaystyle L\twoheadrightarrow L/\mathrm{Ker}(\phi)\cong\mathrm{Im}(\phi)\hookrightarrow L'

That is, first we project down to the quotient of L by the kernel of \phi, then we have an isomorphism from this quotient to the image of \phi, followed by the inclusion of the image as a subalgebra of L'.

There are actually two more isomorphism theorems which I haven’t made much mention of, though they hold in other categories as well. Since we’ll have use of them in our study of Lie algebras, we may as well get them down now.

The second isomorphism theorem says that if I\subseteq J are both ideals of L, then J/I is an ideal of L/I. Further, there is a natural isomorphism (L/I)/(J/I)\cong L/J. Indeed, if x+I\in L/I and j+I\in J/I, then we can check that

\displaystyle[x+I,j+I]=[x,j]+I\in J/I

so J/I is an ideal of L/I. As for the isomorphism, it’s straightforward from considering I and J as vector subspaces of L. Indeed, saying x+I and y+I are equivalent modulo J/I in L/I is to say that (x-y)+I\in J/I. But this means that x-y=j for some j\in J, so x and y are equivalent modulo J in L.

The third isomorphism theorem states that if I and J are any two ideals of L, then there is a natural isomorphism between (I+J)/J and I/(I\cap J) — we showed last time that both I+J and I\cap J are ideals. To see this, take i_1+j_1 and i_2+j_2 in I+J and consider how they can be equivalent modulo J. First off, j_1 and j_2 are immediately irrelevant, so we may as well just ask how i_1 and i_2 can be equivalent modulo J. Well, this will happen if i_1-i_2\in J, but we know that their difference is also in I, so i_1-i_2\in I\cap J.

August 15, 2012 Posted by | Algebra, Lie Algebras | 1 Comment

The Category of Lie Algebras is (not quite) Abelian

We’d like to see that the category of Lie algebras is Abelian. Unfortunately, it isn’t, but we can come close. It should be clear that it’s an \mathbf{Ab}-category, since the homomorphisms between any two Lie algebras form a vector space. Direct sums are also straightforward: the Lie algebra L\oplus L' is the direct sum as vector spaces, with [l,l']=0 for l\in L and l'\in L' and the regular brackets on L and L' otherwise.

We’ve seen that the category of Lie algebras has a zero object and kernels; now we need cokernels. It would be nice to just say that if \phi:L\to L' is a homomorphism then \mathrm{Cok}(\phi) is the quotient of L' by the image of \phi, but this image may not be an ideal. Luckily, ideals have a few nice closure properties.

First off, if I and J are ideals of L, then [I,J] — the subspace spanned by brackets of elements of I and J — is also an ideal. Indeed, we can check that [[i,j],x]=[[i,x],j]+[i,[j,x]] which is back in [I,J]. Similarly, the subspace sum I+J is an ideal. And, most importantly for us now, the intersection I\cap J is an ideal, since if i\in I\cap J then both [i,x]\in I and [i,x]\in J, so [i,x]\in I\cap J as well. In fact, this is true of arbitrary intersections.

This is important, because it means we can always expand any subset X\subseteq L to an ideal. We take all the ideals of L that contain X and intersect them. This will then be another ideal of L containing X, and it is contained in all the others. And we know there is at least one ideal to intersect, since the whole algebra L is always an ideal containing X.

So while \mathrm{Im}(\phi) may not be an ideal of L', we can expand it to an ideal and take the quotient. The projection onto this quotient will be the largest epimorphism of L' that sends everything in \mathrm{Im}(\phi) to zero, so it will be the cokernel of \phi.

Where everything falls apart is normality. The very fact that we have ideals as a separate concept from subalgebras is the problem. Any subalgebra is the image of a monomorphism — the inclusion, if nothing else. But not all these subalgebras are themselves kernels of other morphisms; only those that are ideals have this property.

Still, the category is very nice, and these properties will help us greatly in what follows.

August 14, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

Ideals of Lie Algebras

As we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. I want to check, though, that this behaves in certain nice ways.

First off, there is a Lie algebra 0. That is, the trivial vector space can be given a (unique) Lie algebra structure, and every Lie algebra has a unique homomorphism L\to0 and a unique homomorphism 0\to L. This is easy.

Also pretty easy is the fact that we have kernels. That is, if \phi:L\to L' is a homomorphism, then the set I=\left\{x\in L\vert\phi(x)=0\in L'\right\} is a subalgebra of L. Indeed, it’s actually an “ideal” in pretty much the same sense as for rings. That is, if x\in L and y\in I then [x,y]\in I. And we can check that

\displaystyle\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]=\left[\phi(x),0\right]=0

proving that \mathrm{Ker}(\phi)\subseteq L is an ideal, and thus a Lie algebra in its own right.

Every Lie algebra has two trivial ideals: 0\subseteq L and L\subseteq L. Another example is the “center” — in analogy with the center of a group — which is the collection Z(L)\subseteq L of all z\in L such that [x,z]=0 for all x\in L. That is, those for which the adjoint action \mathrm{ad}(z) is the zero derivation — the kernel of \mathrm{ad}:L\to\mathrm{Der}(L) — which is clearly an ideal.

If Z(L)=L we say — again in analogy with groups — that L is abelian; this is the case for the diagonal algebra \mathfrak{d}(n,\mathbb{F}), for instance. Abelian Lie algebras are rather boring; they’re just vector spaces with trivial brackets, so we can always decompose them by picking a basis — any basis — and getting a direct sum of one-dimensional abelian Lie algebras.

On the other hand, if the only ideals of L are the trivial ones, and if L is not abelian, then we say that L is “simple”. These are very interesting, indeed.

As usual for rings, we can construct quotient algebras. If I\subseteq L is an ideal, then we can define a Lie algebra structure on the quotient space L/I. Indeed, if x+I and y+I are equivalence classes modulo I, then we define

\displaystyle [x+I,y+I]=[x,y]+I

which is unambiguous since if x' and y' are two other representatives then x'=x+i and y'=y+j, and we calculate

\displaystyle [x',y']=[x+i,y+j]=[x,y]+\left([x,j]+[i,y]+[i,j]\right)

and everything in the parens on the right is in I.

Two last constructions in analogy with groups: the “normalizer” of a subspace K\subseteq L is the subalgebra N_L(K)=\left\{x\in L\vert[x,K]\subseteq K\right\}. This is the largest subalgebra of L which contains K as an ideal; if K already is an ideal of L then N_L(K)=L; if N_L(K)=K we say that K is “self-normalizing”.

The “centralizer” of a subset X\subseteq L is the subalgebra C_L(X)=\left\{x\in L\vert[x,X]=0\right\}. This is a subalgebra, and in particular we can see that Z(L)=C_L(L).

August 13, 2012 Posted by | Algebra, Lie Algebras | 4 Comments

Derivations


When first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra L act by derivations on L itself. We can actually say a bit more about this.

First off, we need an algebra A over a field \mathbb{F}. This doesn’t have to be associative, as our algebras commonly are; all we need is a bilinear map A\otimes A\to A. In particular, Lie algebras count.

Now, a derivation \delta of A is firstly a linear map from A back to itself. That is, \delta\in\mathrm{End}_\mathbb{F}(A), where this is the algebra of endomorphisms of A as a vector space over \mathbb{F}, not the endomorphisms as an algebra. Instead of preserving the multiplication, we impose the condition that \delta behave like the product rule:

\displaystyle\delta(ab)=\delta(a)b+a\delta(b)
It’s easy to see that the collection \mathrm{Der}(A)\subseteq\mathrm{End}_\mathbb{F}(A) is a vector subspace, but I say that it’s actually a Lie subalgebra, when we equip the space of endomorphisms with the usual commutator bracket. That is, if \delta and \partial are two derivations, I say that their commutator is again a derivation.

This, we can check:

\displaystyle\begin{aligned}{}[\delta,\partial](ab)&=\delta(\partial(ab))-\partial(\delta(ab))\\&=\delta(\partial(a)b+a\partial(b))-\partial(\delta(a)b+a\delta(b))\\&=\delta(\partial(a)b)+\delta(a\partial(b))-\partial(\delta(a)b)-\partial(a\delta(b))\\&=\delta(\partial(a))b+\partial(a)\delta(b)+\delta(a)\partial(b)+a\delta(\partial(b))\\&\quad-\partial(\delta(a))b-\delta(a)\partial(b)-\partial(a)\delta(b)-a\partial(\delta(b))\\&=[\delta,\partial](a)b+a[\delta,\partial](b)\end{aligned}
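One way to convince yourself of this computation is numerically; inner derivations of the matrix algebra make easy test cases. A NumPy sketch with ad-hoc helper names:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
a, b = rng.standard_normal((2, n, n))

def delta(m):    # inner derivation m -> a m - m a of the matrix algebra
    return a @ m - m @ a

def partial(m):  # another inner derivation, m -> b m - m b
    return b @ m - m @ b

def comm(m):     # the commutator [delta, partial]
    return delta(partial(m)) - partial(delta(m))

p, q = rng.standard_normal((2, n, n))
# product rule for the commutator: [d, p](PQ) = [d, p](P) Q + P [d, p](Q)
print(np.allclose(comm(p @ q), comm(p) @ q + p @ comm(q)))  # True
```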

We’ve actually seen this before. We identified the vectors at a point p on a manifold with the derivations of the (real) algebra of functions defined in a neighborhood of p, so we need to take the commutator of two derivations to be sure of getting a new derivation back.

So now we can say that the mapping that sends x\in L to the endomorphism y\mapsto[x,y] lands in \mathrm{Der}(L) because of the Jacobi identity. We call this mapping \mathrm{ad}:L\to\mathrm{Der}(L) the “adjoint representation” of L, and indeed it’s actually a homomorphism of Lie algebras. That is, \mathrm{ad}([x,y])=[\mathrm{ad}(x),\mathrm{ad}(y)]. The endomorphism on the left-hand side sends z\in L to [[x,y],z], while on the right-hand side we get [x,[y,z]]-[y,[x,z]]. That these two are equal is yet another application of the Jacobi identity.
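In coordinates this becomes a concrete matrix identity; for \mathfrak{sl}(2,\mathbb{F}), where [x,y]=h, it can be checked directly against the ad matrices in the basis (x,y,h):

```python
import numpy as np

ad_x = np.array([[0., 0., -2.], [0., 0., 0.], [0., 1., 0.]])
ad_y = np.array([[0., 0., 0.], [0., 0., 2.], [-1., 0., 0.]])
ad_h = np.array([[2., 0., 0.], [0., -2., 0.], [0., 0., 0.]])

# since [x, y] = h, the homomorphism property demands ad(h) = [ad(x), ad(y)]
print(np.array_equal(ad_x @ ad_y - ad_y @ ad_x, ad_h))  # True
```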

One last piece of nomenclature: derivations in the image of \mathrm{ad}:L\to\mathrm{Der}(L) are called “inner”; all others are called “outer” derivations.

August 10, 2012 Posted by | Algebra, Lie Algebras | 4 Comments

Orthogonal and Symplectic Lie Algebras

For the next three families of linear Lie algebras we equip our vector space V with a bilinear form B. We’re going to consider the endomorphisms f\in\mathfrak{gl}(V) such that

\displaystyle B(f(x),y)=-B(x,f(y))

If we pick a basis \{e_i\} of V, then we have a matrix for the bilinear form

\displaystyle B_{ij}=B(e_i,e_j)

and one for the endomorphism

\displaystyle f(e_i)=\sum\limits_jf_i^je_j

So the condition in terms of matrices in \mathfrak{gl}(n,\mathbb{F}) comes down to

\displaystyle\sum\limits_jf_i^jB_{jk}=-\sum\limits_jf_k^jB_{ij}

or, more abstractly, Bf=-f^TB.

So do these form a subalgebra of \mathfrak{gl}(V)? Linearity is easy; we must check that this condition is closed under the bracket. That is, if f and g both satisfy this condition, what about their commutator [f,g]?

\displaystyle\begin{aligned}B\left([f,g](x),y\right)&=B\left(f(g(x)),y\right)-B\left(g(f(x)),y\right)\\&=-B\left(g(x),f(y)\right)+B\left(f(x),g(y)\right)\\&=B\left(x,g(f(y))\right)-B\left(x,f(g(y))\right)\\&=-B\left(x,[f,g](y)\right)\end{aligned}

So this condition will always give us a linear Lie algebra.
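The closure computation can be spot-checked numerically. In this sketch I generate elements satisfying the condition by the projection f=m-B^{-1}m^TB (my own device, valid when B is symmetric or skew-symmetric) and test their commutator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = np.eye(n)[::-1].copy()  # a symmetric nondegenerate form (anti-diagonal 1s)

def project(m):
    """Send an arbitrary m to f = m - B^{-1} m^T B, which satisfies Bf = -f^T B."""
    return m - np.linalg.inv(B) @ m.T @ B

f = project(rng.standard_normal((n, n)))
g = project(rng.standard_normal((n, n)))

def ok(c):
    return np.allclose(B @ c, -c.T @ B)

c = f @ g - g @ f
print(ok(f), ok(g), ok(c))  # True True True
```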

We have three different families of these algebras. First, we consider the case where \mathrm{dim}(V)=2l+1 is odd, and we let B be the symmetric, nondegenerate bilinear form with matrix

\displaystyle\begin{pmatrix}1&0&0\\ 0&0&I_l\\ 0&I_l&0\end{pmatrix}

where I_l is the l\times l identity matrix. If we write the matrix of our endomorphism in a similar form

\displaystyle\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}

where a is a scalar, b_1 and b_2 are row vectors, c_1 and c_2 are column vectors, and m, n, p, and q are l\times l matrices, our matrix conditions turn into

\displaystyle a=0\qquad c_1=-b_2^T\qquad c_2=-b_1^T\qquad q=-m^T\qquad n=-n^T\qquad p=-p^T
From here it’s straightforward to count out 2l basis elements that satisfy the conditions on the first row and column, \frac{1}{2}(l^2-l) that satisfy the antisymmetry for p, another \frac{1}{2}(l^2-l) that satisfy the antisymmetry for n, and l^2 that satisfy the condition between m and q, for a total of 2l^2+l basis elements. We call this Lie algebra the orthogonal algebra of V, and write \mathfrak{o}(V) or \mathfrak{o}(2l+1,\mathbb{F}). Sometimes we refer to the isomorphism class of this algebra as B_l.

Next up, in the case where \mathrm{dim}(V)=2l is even we let the matrix of B look like

\displaystyle\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}
A similar approach to that above gives a basis with 2l^2-l elements. We also call this the orthogonal algebra of V, and write \mathfrak{o}(V) or \mathfrak{o}(2l,\mathbb{F}). Sometimes we refer to the isomorphism class of this algebra as D_l.

Finally, we again take an even-dimensional V, but this time we use the skew-symmetric form

\displaystyle\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}

This time we get a basis with 2l^2+l elements. We call this the symplectic algebra of V, and write \mathfrak{sp}(V) or \mathfrak{sp}(2l,\mathbb{F}). Sometimes we refer to the isomorphism class of this algebra as C_l.
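All three dimension counts can be verified by solving the linear condition Bf=-f^TB directly; this NumPy sketch (the `dim_algebra` helper is mine) computes the nullity of the condition for l=3:

```python
import numpy as np

def dim_algebra(B):
    """Dimension of {f : Bf = -f^T B}, via the rank of the linear condition."""
    n = len(B)
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            cols.append((B @ E + E.T @ B).flatten())
    M = np.column_stack(cols)  # the condition as an n^2 x n^2 linear map
    return n * n - np.linalg.matrix_rank(M)

l = 3
I = np.eye(l)
Z = np.zeros((l, l))
z = np.zeros((l, 1))

B_odd = np.block([[np.ones((1, 1)), z.T, z.T], [z, Z, I], [z, I, Z]])  # B_l
B_even = np.block([[Z, I], [I, Z]])                                    # D_l
B_symp = np.block([[Z, I], [-I, Z]])                                   # C_l

print(dim_algebra(B_odd), 2 * l**2 + l)   # 21 21
print(dim_algebra(B_even), 2 * l**2 - l)  # 15 15
print(dim_algebra(B_symp), 2 * l**2 + l)  # 21 21
```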

Along with the special linear Lie algebras, these form the “classical” Lie algebras. It’s a tedious but straightforward exercise to check that for any classical Lie algebra L, each basis element e of L can be written as a bracket of two other elements of L. That is, we have [L,L]=L. Since L\subseteq\mathfrak{gl}(V) for some V, and since we know that [\mathfrak{gl}(V),\mathfrak{gl}(V)]=\mathfrak{sl}(V), this establishes that L\subseteq\mathfrak{sl}(V) for all classical L.

August 9, 2012 Posted by | Algebra, Lie Algebras | Leave a comment

Special Linear Lie Algebras

More examples of Lie algebras! Today, an important family of linear Lie algebras.

Take a vector space V with dimension \mathrm{dim}(V)=l+1 and start with \mathfrak{gl}(V). Inside this, we consider the subalgebra of endomorphisms whose trace is zero, which we write \mathfrak{sl}(V) and call the “special linear Lie algebra”. This is a subspace, since the trace is a linear functional on the space of endomorphisms:

\displaystyle\mathrm{Tr}(\alpha f+\beta g)=\alpha\mathrm{Tr}(f)+\beta\mathrm{Tr}(g)
so if two endomorphisms have trace zero then so do all their linear combinations. It’s a subalgebra by using the “cyclic” property of the trace:

\displaystyle\mathrm{Tr}(fg)=\mathrm{Tr}(gf)
Note that this does not mean that endomorphisms can be arbitrarily rearranged inside the trace, which is a common mistake after seeing this formula. Anyway, this implies that

\displaystyle\mathrm{Tr}\left([f,g]\right)=\mathrm{Tr}(fg)-\mathrm{Tr}(gf)=0
so actually not only is the bracket of two endomorphisms in \mathfrak{sl}(V) back in the subspace, the bracket of any two endomorphisms of \mathfrak{gl}(V) lands in \mathfrak{sl}(V). In other words: \left[\mathfrak{gl}(V),\mathfrak{gl}(V)\right]=\mathfrak{sl}(V).

Choosing a basis, we will write the algebra as \mathfrak{sl}(l+1,\mathbb{F}). It should be clear that the dimension is (l+1)^2-1, since this is the kernel of a single linear functional on the (l+1)^2-dimensional \mathfrak{gl}(l+1,\mathbb{F}), but let’s exhibit a basis anyway. All the basic matrices e_{ij} with i\neq j are traceless, so they’re all in \mathfrak{sl}(l+1,\mathbb{F}). Along the diagonal, \mathrm{Tr}(e_{ii})=1, so we need linear combinations that cancel each other out. It’s particularly convenient to define

\displaystyle h_i=e_{ii}-e_{i+1,i+1}

So we’ve got the (l+1)^2 basic matrices, but we take away the l+1 along the diagonal. Then we add back the l new matrices h_i, getting (l+1)^2-1 matrices in our standard basis for \mathfrak{sl}(l+1,\mathbb{F}), verifying the dimension.
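A quick NumPy check of this count, building the basis just described for l=3:

```python
import numpy as np

l = 3
n = l + 1

def e(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

basis = [e(i, j) for i in range(n) for j in range(n) if i != j] \
      + [e(i, i) - e(i + 1, i + 1) for i in range(l)]

# every element is traceless, and they are linearly independent
assert all(np.trace(m) == 0 for m in basis)
rank = np.linalg.matrix_rank(np.stack([m.flatten() for m in basis]))
print(len(basis), rank, (l + 1) ** 2 - 1)  # 15 15 15
```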

We sometimes refer to the isomorphism class of \mathfrak{sl}(l+1,\mathbb{F}) as A_l. Because reasons.

August 8, 2012 Posted by | Algebra, Lie Algebras | 5 Comments

Linear Lie Algebras

So now that we’ve remembered what a Lie algebra is, let’s mention the most important ones: linear Lie algebras. These are ones that arise from linear transformations on vector spaces, ’cause mathematicians love them some vector spaces.

Specifically, let V be a finite-dimensional vector space over \mathbb{F}, and consider the associative algebra of endomorphisms \mathrm{End}(V) — linear transformations from V back to itself. We can use the usual method of defining a bracket as a commutator:

\displaystyle [x,y]=xy-yx

to turn this into a Lie algebra. When considered as a Lie algebra like this, we call it the “general linear Lie algebra”, and write \mathfrak{gl}(V). Many Lie algebras are written in the Fraktur typeface like this.

Any subalgebra of \mathfrak{gl}(V) is called a “linear Lie algebra”, since it’s made up of linear transformations. It turns out that every finite-dimensional Lie algebra is isomorphic to a linear Lie algebra, but we reserve the “linear” term for those algebras which we’re actually thinking of having linear transformations as elements.

Of course, since V is a vector space over \mathbb{F}, we can pick a basis. If V has dimension n, then there are n elements in any basis, and so our endomorphisms correspond to the n\times n matrices \mathrm{Mat}_n(\mathbb{F}). When we think of it in these terms, we often write \mathfrak{gl}(n,\mathbb{F}) for the general linear Lie algebra.

We can actually calculate the bracket structure explicitly in this case; bilinearity tells us that it suffices to write it down in terms of a basis. The standard basis of \mathfrak{gl}(n,\mathbb{F}) is \{e_{ij}\}_{i,j=1}^n, where e_{ij} has a 1 in the ith row and jth column and 0 elsewhere. So we can calculate:

\displaystyle [e_{ij},e_{kl}]=\delta_{jk}e_{il}-\delta_{li}e_{kj}

where, as usual, \delta_{ij} is the Kronecker delta: 1 if the indices are the same and 0 if they’re different.
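This formula is finite and checkable by brute force; a small NumPy sketch over all index quadruples for n=3:

```python
import numpy as np
from itertools import product

n = 3

def e(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

def delta(a, b):
    return 1.0 if a == b else 0.0

# verify [e_ij, e_kl] = delta_jk e_il - delta_li e_kj for all indices
ok = all(
    np.array_equal(
        e(i, j) @ e(k, l) - e(k, l) @ e(i, j),
        delta(j, k) * e(i, l) - delta(l, i) * e(k, j),
    )
    for i, j, k, l in product(range(n), repeat=4)
)
print(ok)  # True
```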

We can now identify some important subalgebras of \mathfrak{gl}(n,\mathbb{F}). First, the strictly upper-triangular matrices \mathfrak{n}(n,\mathbb{F}) involve only the basis elements e_{ij} with i<j. If i<j=k<l so the first term in the above expression for the bracket shows up, then the second term cannot show up, and vice versa. Either way, we conclude that the bracket of two basis elements of \mathfrak{n}(n,\mathbb{F}) — and thus any element of this subspace — involves only other basis elements of the subspace, which makes this a subalgebra.

Similarly, we conclude that the (non-strictly) upper-triangular matrices involving only e_{ij} with i\leq j also form a subalgebra \mathfrak{t}(n,\mathbb{F}). And, finally, the diagonal matrices involving only e_{ii} also form a subalgebra \mathfrak{d}(n,\mathbb{F}). This last one is interesting, in that the bracket on \mathfrak{d}(n,\mathbb{F}) is actually trivial, since any two diagonal matrices commute.

As vector spaces, we see that \mathfrak{t}(n,\mathbb{F})=\mathfrak{d}(n,\mathbb{F})+\mathfrak{n}(n,\mathbb{F}). It’s easy to check that the bracket of a diagonal matrix and a strictly upper-triangular matrix is again strictly upper-triangular — we write [\mathfrak{d}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F}) — and so we also have [\mathfrak{t}(n,\mathbb{F}),\mathfrak{t}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F}). This may seem a little like a toy example now, but it turns out to be surprisingly general; many subalgebras will relate to each other this way.

August 7, 2012 Posted by | Algebra, Lie Algebras | 7 Comments

