# The Unapologetic Mathematician

## Automorphisms of Lie Algebras

Sorry for the delay; I’ve had a couple busy days. Here’s Thursday’s promised installment.

An automorphism of a Lie algebra $L$ is, as usual, an invertible homomorphism from $L$ onto itself, and the collection of all such automorphisms forms a group $\mathrm{Aut}(L)$.

One obviously useful class of examples arises when we’re considering a linear Lie algebra $L\subseteq\mathfrak{gl}(V)$. If $g\in\mathrm{GL}(V)$ is an invertible endomorphism of $V$ such that $gLg^{-1}=L$ then the map $x\mapsto gxg^{-1}$ is an automorphism of $L$. Clearly this happens for all $g$ in the cases of $\mathfrak{gl}(V)$ and the special linear Lie algebra $\mathfrak{sl}(V)$ — the latter because the trace is invariant under a change of basis.

Now we’ll specialize to the (usual) case where no multiple of $1\in\mathbb{F}$ is zero (that is, where $\mathbb{F}$ has characteristic zero), and we consider an $x\in L$ for which $\mathrm{ad}(x)$ is “nilpotent”. That is, there’s some finite $n$ such that $\mathrm{ad}(x)^n=0$ — applying $y\mapsto[x,y]$ sufficiently many times eventually kills off every element of $L$. In this case, we say that $x$ itself is “ad-nilpotent”.

We can then define $\exp(\mathrm{ad}(x))$. How does this work? We use the power series expansion of the exponential:

$\displaystyle\exp(\mathrm{ad}(x))=\sum\limits_{k=0}^\infty\frac{\mathrm{ad}(x)^k}{k!}$

We know that this series converges because eventually every term vanishes once $\mathrm{ad}(x)^k=0$.
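
To see this concretely, here’s a quick numerical sketch (in Python with numpy, which is of course not part of the original discussion): for a nilpotent matrix the exponential series really does terminate, and its sum is invertible. The example matrix is just an arbitrary strictly upper-triangular one.

```python
import numpy as np

def exp_nilpotent(a):
    """Exponential of a nilpotent n x n matrix via the power series.

    Since a^n = 0, the series sum_k a^k / k! has only finitely many
    nonzero terms, so no convergence questions arise.
    """
    n = a.shape[0]
    term = np.eye(n)
    total = np.eye(n)
    for k in range(1, n):
        term = term @ a / k      # term is now a^k / k!
        total = total + term
    return total

# a strictly upper-triangular (hence nilpotent) example
a = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

E = exp_nilpotent(a)
```

Here $\exp(a)$ comes out upper-triangular with ones down the diagonal, so in particular it has determinant $1$ and is invertible.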

Now, I say that $\exp(\mathrm{ad}(x))\in\mathrm{Aut}(L)$. In fact, while this case is very useful, all we need from $\mathrm{ad}(x)$ is that it’s a nilpotent derivation $\delta$ of $L$. The product rule for derivations generalizes as:

$\displaystyle\frac{\delta^n}{n!}(xy)=\sum\limits_{i=0}^n\frac{1}{i!}\delta^i(x)\frac{1}{(n-i)!}\delta^{n-i}(y)$

So we can write

\displaystyle\begin{aligned}\exp(\delta)(x)\exp(\delta)(y)&=\left(\sum\limits_{i=0}^{n-1}\frac{\delta^i(x)}{i!}\right)\left(\sum\limits_{j=0}^{n-1}\frac{\delta^j(y)}{j!}\right)\\&=\sum\limits_{k=0}^{2n-2}\left(\sum\limits_{i=0}^k\frac{\delta^i(x)}{i!}\frac{\delta^{k-i}(y)}{(k-i)!}\right)\\&=\sum\limits_{k=0}^{2n-2}\frac{\delta^k(xy)}{k!}\\&=\sum\limits_{k=0}^{n-1}\frac{\delta^k(xy)}{k!}\\&=\exp(\delta)(xy)\end{aligned}

That is, $\exp(\delta)$ preserves the multiplication of the algebra that $\delta$ is a derivation of. Note that in the second-to-last step we can drop the terms with $n\leq k\leq2n-2$, since $\delta^k=0$ for all $k\geq n$. In particular, in terms of the Lie algebra $L$, we find that

$\displaystyle[\exp(\delta)(x),\exp(\delta)(y)]=\exp(\delta)([x,y])$

Since $\exp(\delta)$ maps $L$ back to itself, this makes it a homomorphism from $L$ to $L$. Writing $\exp(\delta)=1+\eta$, where $\eta=\delta+\frac{\delta^2}{2!}+\cdots$ is itself nilpotent, we see that it’s invertible by the usual formula

$\displaystyle(1+\eta)^{-1}=1-\eta+\eta^2-\cdots\pm\eta^{n-1}$

which means it’s an automorphism of $L$.
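
For a concrete sanity check, here’s a numpy sketch with the hypothetical example $x=e_{12}$ in $\mathfrak{sl}(2)$ (where $\mathrm{ad}(x)^3=0$): we apply $\exp(\mathrm{ad}(x))$ term by term and verify that it preserves brackets.

```python
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

def exp_ad(x, y, nmax=10):
    """Apply exp(ad(x)) to y by summing ad(x)^k(y) / k!.

    Assumes ad(x) is nilpotent, so the series terminates; nmax just
    bounds the loop.
    """
    total = np.zeros_like(y)
    term = y.astype(float)
    for k in range(1, nmax + 1):
        total = total + term
        term = bracket(x, term) / k   # term becomes ad(x)^k(y)/k!
    return total

# x = e_{12}, a nilpotent element of sl(2)
x = np.array([[0.0, 1.0], [0.0, 0.0]])

# two random traceless matrices
rng = np.random.default_rng(0)
y = rng.standard_normal((2, 2)); y = y - (np.trace(y) / 2) * np.eye(2)
z = rng.standard_normal((2, 2)); z = z - (np.trace(z) / 2) * np.eye(2)

lhs = exp_ad(x, bracket(y, z))             # exp(ad x)([y, z])
rhs = bracket(exp_ad(x, y), exp_ad(x, z))  # [exp(ad x)(y), exp(ad x)(z)]
```

For linear Lie algebras we also have the standard identity $\exp(\mathrm{ad}(x))(y)=e^xye^{-x}$, and since $x^2=0$ here, $e^x=1+x$, which gives an independent check on the computation.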

Just like a derivation of the form $\mathrm{ad}(x)$ is called inner, an automorphism of the form $\exp(\mathrm{ad}(x))$ is called an inner automorphism, and the subgroup $\mathrm{Inn}(L)$ they generate is a normal subgroup of $\mathrm{Aut}(L)$. Specifically, if $\phi\in\mathrm{Aut}(L)$ and $x\in L$ then we can calculate

\displaystyle\begin{aligned}\phi(\mathrm{ad}(x)(\phi^{-1}(y)))&=\phi([x,\phi^{-1}(y)])\\&=[\phi(x),y]\\&=\mathrm{ad}(\phi(x))(y)\end{aligned}

and thus

$\displaystyle\phi\exp(\mathrm{ad}(x))\phi^{-1}=\exp(\mathrm{ad}(\phi(x)))$

so the conjugate of an inner automorphism is again inner.

August 18, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

## Isomorphism Theorems for Lie Algebras

The category of Lie algebras may not be Abelian, but it has a zero object, kernels, and cokernels, which is enough to get the first isomorphism theorem, just like for rings. Specifically, if $\phi:L\to L'$ is any homomorphism of Lie algebras then we can factor it as follows:

$\displaystyle L\twoheadrightarrow L/\mathrm{Ker}(\phi)\cong\mathrm{Im}(\phi)\hookrightarrow L'$

That is, first we project down to the quotient of $L$ by the kernel of $\phi$, then we have an isomorphism from this quotient to the image of $\phi$, followed by the inclusion of the image as a subalgebra of $L'$.

There are actually two more isomorphism theorems which I haven’t made much mention of, though they hold in other categories as well. Since we’ll have use of them in our study of Lie algebras, we may as well get them down now.

The second isomorphism theorem says that if $I\subseteq J$ are both ideals of $L$, then $J/I$ is an ideal of $L/I$. Further, there is a natural isomorphism $(L/I)/(J/I)\cong L/J$. Indeed, if $x+I\in L/I$ and $j+I\in J/I$, then we can check that

$\displaystyle[x+I,j+I]=[x,j]+I\in J/I$

so $J/I$ is an ideal of $L/I$. As for the isomorphism, it’s straightforward from considering $I$ and $J$ as vector subspaces of $L$. Indeed, saying $x+I$ and $y+I$ are equivalent modulo $J/I$ in $L/I$ is to say that $(x-y)+I\in J/I$. But this means that $x-y=j$ for some $j\in J$, so $x$ and $y$ are equivalent modulo $J$ in $L$.

The third isomorphism theorem states that if $I$ and $J$ are any two ideals of $L$, then there is a natural isomorphism between $(I+J)/J$ and $I/(I\cap J)$ — we showed last time that both $I+J$ and $I\cap J$ are ideals. To see this, take $i_1+j_1$ and $i_2+j_2$ in $I+J$ and consider how they can be equivalent modulo $J$. First off, $j_1$ and $j_2$ are immediately irrelevant, so we may as well just ask how $i_1$ and $i_2$ can be equivalent modulo $J$. Well, this will happen if $i_1-i_2\in J$, but we know that their difference is also in $I$, so $i_1-i_2\in I\cap J$.

August 15, 2012 Posted by | Algebra, Lie Algebras | 1 Comment

## The Category of Lie Algebras is (not quite) Abelian

We’d like to see that the category of Lie algebras is Abelian. Unfortunately, it isn’t, but we can come close. It should be clear that it’s an $\mathbf{Ab}$-category, since the homomorphisms between any two Lie algebras form a vector space. Direct sums are also straightforward: the Lie algebra $L\oplus L'$ is the direct sum as vector spaces, with $[l,l']=0$ for $l\in L$ and $l'\in L'$ and the regular brackets on $L$ and $L'$ otherwise.

We’ve seen that the category of Lie algebras has a zero object and kernels; now we need cokernels. It would be nice to just say that if $\phi:L\to L'$ is a homomorphism then $\mathrm{Cok}(\phi)$ is the quotient of $L'$ by the image of $\phi$, but this image may not be an ideal. Luckily, ideals have a few nice closure properties.

First off, if $I$ and $J$ are ideals of $L$, then $[I,J]$ — the subspace spanned by brackets of elements of $I$ and $J$ — is also an ideal. Indeed, we can check that $[[i,j],x]=[[i,x],j]+[i,[j,x]]$ which is back in $[I,J]$. Similarly, the subspace sum $I+J$ is an ideal. And, most importantly for us now, the intersection $I\cap J$ is an ideal, since if $i\in I\cap J$ then both $[i,x]\in I$ and $[i,x]\in J$, so $[i,x]\in I\cap J$ as well. In fact, this is true of arbitrary intersections.

This is important, because it means we can always expand any subset $X\subseteq L$ to an ideal. We take all the ideals of $L$ that contain $X$ and intersect them. This will then be another ideal of $L$ containing $X$, and it is contained in all the others. And we know that this intersection is nonempty, since there’s always at least the ideal $L$.
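
The intersection description isn’t very computational, but for a small matrix Lie algebra the same ideal can be reached “from below”: grow the span of $X$ by bracketing with a basis of $L$ until the dimension stabilizes. A minimal numpy sketch, using $\mathfrak{gl}(2,\mathbb{F})$ and the single generator $e_{12}$ as assumed examples:

```python
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

def ideal_closure(gens, amb_basis, tol=1e-10):
    """Dimension of the smallest ideal of a matrix Lie algebra containing gens.

    amb_basis spans the ambient algebra L; we grow the span of the
    generators by bracketing with basis elements until it stabilizes.
    A sketch for small examples, not an efficient algorithm.
    """
    vecs = [g.flatten() for g in gens]
    while True:
        dim = np.linalg.matrix_rank(np.array(vecs), tol=tol)
        vecs += [bracket(b, v.reshape(b.shape)).flatten()
                 for b in amb_basis for v in list(vecs)]
        if np.linalg.matrix_rank(np.array(vecs), tol=tol) == dim:
            return dim

def e(i, j, n=2):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

gl2 = [e(i, j) for i in range(2) for j in range(2)]
dim_ideal = ideal_closure([e(0, 1)], gl2)  # ideal of gl(2) generated by e_{12}
```

The ideal generated by $e_{12}$ comes out three-dimensional, which is exactly $\mathfrak{sl}(2)$; by contrast the identity matrix, being central, generates only its own one-dimensional span.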

So while $\mathrm{Im}(\phi)$ may not be an ideal of $L'$, we can expand it to an ideal and take the quotient. The projection onto this quotient will be the largest epimorphism of $L'$ that sends everything in $\mathrm{Im}(\phi)$ to zero, so it will be the cokernel of $\phi$.

Where everything falls apart is normality. The very fact that we have ideals as a separate concept from subalgebras is the problem. Any subalgebra is the image of a monomorphism — the inclusion, if nothing else. But not all these subalgebras are themselves kernels of other morphisms; only those that are ideals have this property.

Still, the category is very nice, and these properties will help us greatly in what follows.

August 14, 2012 Posted by | Algebra, Lie Algebras | 3 Comments

## Ideals of Lie Algebras

As we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. I want to check, though, that this behaves in certain nice ways.

First off, there is a Lie algebra $0$. That is, the trivial vector space can be given a (unique) Lie algebra structure, and every Lie algebra has a unique homomorphism $L\to0$ and a unique homomorphism $0\to L$. This is easy.

Also pretty easy is the fact that we have kernels. That is, if $\phi:L\to L'$ is a homomorphism, then the set $I=\left\{x\in L\vert\phi(x)=0\in L'\right\}$ is a subalgebra of $L$. Indeed, it’s actually an “ideal” in pretty much the same sense as for rings. That is, if $x\in L$ and $y\in I$ then $[x,y]\in I$. And we can check that

$\displaystyle\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]=\left[\phi(x),0\right]=0$

proving that $\mathrm{Ker}(\phi)\subseteq L$ is an ideal, and thus a Lie algebra in its own right.

Every Lie algebra has two trivial ideals: $0\subseteq L$ and $L\subseteq L$. Another example is the “center” — in analogy with the center of a group — which is the collection $Z(L)\subseteq L$ of all $z\in L$ such that $[x,z]=0$ for all $x\in L$. That is, those for which the adjoint action $\mathrm{ad}(z)$ is the zero derivation — the kernel of $\mathrm{ad}:L\to\mathrm{Der}(L)$ — which is clearly an ideal.
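
Since the center is exactly the kernel of $\mathrm{ad}$, it’s computable by plain linear algebra: stack up the matrices of $\mathrm{ad}(b)$ over a basis and take the common null space. A numpy sketch using $\mathfrak{gl}(2,\mathbb{F})$ as the example, where the center should be the scalar matrices:

```python
import numpy as np

def ad_matrix(x):
    """Matrix of ad(x) = [x, -] acting on row-major-flattened matrices."""
    eye = np.eye(x.shape[0])
    # vec(xy) = kron(x, I) vec(y) and vec(yx) = kron(I, x^T) vec(y)
    return np.kron(x, eye) - np.kron(eye, x.T)

def e(i, j, n):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

n = 2
basis = [e(i, j, n) for i in range(n) for j in range(n)]

# z is central iff ad(b) z = 0 for every basis element b
M = np.vstack([ad_matrix(b) for b in basis])
_, s, vt = np.linalg.svd(M)
center = vt[np.abs(s) < 1e-10]  # null-space rows = flattened central elements
```

The null space is one-dimensional and spanned by (a multiple of) the identity matrix, as expected.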

If $Z(L)=L$ we say — again in analogy with groups — that $L$ is abelian; this is the case for the diagonal algebra $\mathfrak{d}(n,\mathbb{F})$, for instance. Abelian Lie algebras are rather boring; they’re just vector spaces with trivial brackets, so we can always decompose them by picking a basis — any basis — and getting a direct sum of one-dimensional abelian Lie algebras.

On the other hand, if the only ideals of $L$ are the trivial ones, and if $L$ is not abelian, then we say that $L$ is “simple”. These are very interesting, indeed.

As usual for rings, we can construct quotient algebras. If $I\subseteq L$ is an ideal, then we can define a Lie algebra structure on the quotient space $L/I$. Indeed, if $x+I$ and $y+I$ are equivalence classes modulo $I$, then we define

$\displaystyle [x+I,y+I]=[x,y]+I$

which is unambiguous since if $x'$ and $y'$ are two other representatives then $x'=x+i$ and $y'=y+j$, and we calculate

$\displaystyle [x',y']=[x+i,y+j]=[x,y]+\left([x,j]+[i,y]+[i,j]\right)$

and everything in the parens on the right is in $I$.

Two last constructions in analogy with groups: the “normalizer” of a subspace $K\subseteq L$ is the subalgebra $N_L(K)=\left\{x\in L\vert[x,K]\subseteq K\right\}$. If $K$ is a subalgebra, this is the largest subalgebra of $L$ which contains $K$ as an ideal; if $K$ already is an ideal of $L$ then $N_L(K)=L$; if $N_L(K)=K$ we say that $K$ is “self-normalizing”.

The “centralizer” of a subset $X\subseteq L$ is the subalgebra $C_L(X)=\left\{x\in L\vert[x,X]=0\right\}$. This is a subalgebra, and in particular we can see that $Z(L)=C_L(L)$.

August 13, 2012 Posted by | Algebra, Lie Algebras | 4 Comments

## Derivations

When first defining (or, rather, recalling the definition of) Lie algebras I mentioned that the bracket makes each element of a Lie algebra $L$ act by derivations on $L$ itself. We can actually say a bit more about this.

First off, we need an algebra $A$ over a field $\mathbb{F}$. This doesn’t have to be associative, as our algebras commonly are; all we need is a bilinear map $A\otimes A\to A$. In particular, Lie algebras count.

Now, a derivation $\delta$ of $A$ is firstly a linear map from $A$ back to itself. That is, $\delta\in\mathrm{End}_\mathbb{F}(A)$, where this is the algebra of endomorphisms of $A$ as a vector space over $\mathbb{F}$, not the endomorphisms as an algebra. Instead of preserving the multiplication, we impose the condition that $\delta$ behave like the product rule:

$\displaystyle\delta(ab)=\delta(a)b+a\delta(b)$

It’s easy to see that the collection $\mathrm{Der}(A)\subseteq\mathrm{End}_\mathbb{F}(A)$ is a vector subspace, but I say that it’s actually a Lie subalgebra, when we equip the space of endomorphisms with the usual commutator bracket. That is, if $\delta$ and $\partial$ are two derivations, I say that their commutator is again a derivation.

This, we can check:

\displaystyle\begin{aligned} [\delta,\partial](ab)=&\delta(\partial(ab))-\partial(\delta(ab))\\=&\delta(\partial(a)b+a\partial(b))-\partial(\delta(a)b+a\delta(b))\\=&\delta(\partial(a)b)+\delta(a\partial(b))-\partial(\delta(a)b)-\partial(a\delta(b))\\=&\delta(\partial(a))b+\partial(a)\delta(b)+\delta(a)\partial(b)+a\delta(\partial(b))\\&-\partial(\delta(a))b-\delta(a)\partial(b)-\partial(a)\delta(b)-a\partial(\delta(b))\\=&[\delta,\partial](a)b+a[\delta,\partial](b)\end{aligned}

We’ve actually seen this before. We identified the vectors at a point $p$ on a manifold with the derivations of the (real) algebra of functions defined in a neighborhood of $p$, so we need to take the commutator of two derivations to be sure of getting a new derivation back.
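
As a spot check of the computation above, take the polynomial algebra $\mathbb{F}[t]$, where every derivation has the form $h\mapsto p\,h'$ for some polynomial $p$. A small sympy sketch, with the coefficients $p$ and $q$ chosen arbitrarily:

```python
import sympy as sp

t = sp.symbols('t')

# two example derivations of F[t]; p and q are arbitrary choices
p = t**2 + 1
q = 3 * t

delta = lambda h: sp.expand(p * sp.diff(h, t))
partial = lambda h: sp.expand(q * sp.diff(h, t))

def comm(h):
    """The commutator [delta, partial] applied to h."""
    return sp.expand(delta(partial(h)) - partial(delta(h)))

# check the product rule for the commutator on concrete polynomials
f = t**3 - t
g = t**2 + 2

lhs = comm(f * g)
rhs = sp.expand(comm(f) * g + f * comm(g))
```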

So now we can say that the mapping that sends $x\in L$ to the endomorphism $y\mapsto[x,y]$ lands in $\mathrm{Der}(L)$ because of the Jacobi identity. We call this mapping $\mathrm{ad}:L\to\mathrm{Der}(L)$ the “adjoint representation” of $L$, and indeed it’s actually a homomorphism of Lie algebras. That is, $\mathrm{ad}([x,y])=[\mathrm{ad}(x),\mathrm{ad}(y)]$. The endomorphism on the left-hand side sends $z\in L$ to $[[x,y],z]$, while on the right-hand side we get $[x,[y,z]]-[y,[x,z]]$. That these two are equal is yet another application of the Jacobi identity.

One last piece of nomenclature: derivations in the image of $\mathrm{ad}:L\to\mathrm{Der}(L)$ are called “inner”; all others are called “outer” derivations.
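
The homomorphism property of $\mathrm{ad}$ is also easy to check numerically. Here’s a numpy sketch, writing $\mathrm{ad}(x)$ as a matrix acting on flattened $3\times3$ matrices; the Kronecker-product formula is a standard linear-algebra identity, not anything from the post itself.

```python
import numpy as np

def ad(x):
    """Matrix of ad(x) = [x, -] on row-major-flattened n x n matrices."""
    eye = np.eye(x.shape[0])
    return np.kron(x, eye) - np.kron(eye, x.T)

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 3, 3))

lhs = ad(x @ y - y @ x)              # ad([x, y])
rhs = ad(x) @ ad(y) - ad(y) @ ad(x)  # [ad(x), ad(y)]
```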

August 10, 2012 Posted by | Algebra, Lie Algebras | 4 Comments

## Orthogonal and Symplectic Lie Algebras

For the next three families of linear Lie algebras we equip our vector space $V$ with a bilinear form $B$. We’re going to consider the endomorphisms $f\in\mathfrak{gl}(V)$ such that

$\displaystyle B(f(x),y)=-B(x,f(y))$

If we pick a basis $\{e_i\}$ of $V$, then we have a matrix for the bilinear form

$\displaystyle B_{ij}=B(e_i,e_j)$

and one for the endomorphism

$\displaystyle f(e_i)=\sum\limits_jf_i^je_j$

So the condition in terms of matrices in $\mathfrak{gl}(n,\mathbb{F})$ comes down to

$\displaystyle\sum\limits_kB_{kj}f_i^k=-\sum_kf_j^kB_{ik}$

or, more abstractly, $Bf=-f^TB$.

So do these form a subalgebra of $\mathfrak{gl}(V)$? Linearity is easy; we must check that this condition is closed under the bracket. That is, if $f$ and $g$ both satisfy this condition, what about their commutator $[f,g]$?

\displaystyle\begin{aligned}B([f,g](x),y)&=B(f(g(x))-g(f(x)),y)\\&=B(f(g(x)),y)-B(g(f(x)),y)\\&=-B(g(x),f(y))+B(f(x),g(y))\\&=B(x,g(f(y)))-B(x,f(g(y)))\\&=-B(x,f(g(y))-g(f(y)))\\&=-B(x,[f,g](y))\end{aligned}

So this condition will always give us a linear Lie algebra.
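
We can test this closure numerically. One way to manufacture endomorphisms satisfying $Bf=-f^TB$ for a symmetric invertible $B$ is to take $f=B^{-1}A$ with $A$ antisymmetric; the particular form $B$ below is just a random assumed example (numpy sketch):

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)

# a random symmetric bilinear form, shifted to be comfortably invertible
B = rng.standard_normal((n, n))
B = B + B.T + 2 * n * np.eye(n)

def random_compatible(rng):
    """A random f with B f = -f^T B: take f = B^{-1} A, A antisymmetric."""
    A = rng.standard_normal((n, n))
    A = A - A.T
    return np.linalg.solve(B, A)

f = random_compatible(rng)
g = random_compatible(rng)
h = f @ g - g @ f                    # the commutator [f, g]
```

Both generators satisfy the condition by construction, and the point of the computation above is that their commutator does too.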

We have three different families of these algebras. First, we consider the case where $\mathrm{dim}(V)=2l+1$ is odd, and we let $B$ be the symmetric, nondegenerate bilinear form with matrix

$\displaystyle\begin{pmatrix}1&0&0\\ 0&0&I_l\\ 0&I_l&0\end{pmatrix}$

where $I_l$ is the $l\times l$ identity matrix. If we write the matrix of our endomorphism in a similar form

$\displaystyle\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}$

our matrix conditions turn into

\displaystyle\begin{aligned}a&=0\\c_1&=-b_2^T\\c_2&=-b_1^T\\q&=-m^T\\n&=-n^T\\p&=-p^T\end{aligned}

From here it’s straightforward to count out $2l$ basis elements that satisfy the conditions on the first row and column, $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $p$, another $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $n$, and $l^2$ that satisfy the condition between $m$ and $q$, for a total of $2l^2+l$ basis elements. We call this Lie algebra the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l+1,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $B_l$.

Next up, in the case where $\mathrm{dim}(V)=2l$ is even we let the matrix of $B$ look like

$\displaystyle\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}$

A similar approach to that above gives a basis with $2l^2-l$ elements. We also call this the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $D_l$.

Finally, we again take an even-dimensional $V$, but this time we use the skew-symmetric form

$\displaystyle\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}$

This time we get a basis with $2l^2+l$ elements. We call this the symplectic algebra of $V$, and write $\mathfrak{sp}(V)$ or $\mathfrak{sp}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $C_l$.
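
All three dimension counts can be verified mechanically: the condition $Bf=-f^TB$ is linear in $f$, so the dimension of each algebra is the nullity of the map $f\mapsto Bf+f^TB$ on $n\times n$ matrices. A numpy sketch with $l=3$:

```python
import numpy as np

def form_algebra_dim(B):
    """dim{ f : B f = -f^T B }, computed as the nullity of a linear map."""
    n = B.shape[0]
    cols = []
    for k in range(n):
        for l in range(n):
            f = np.zeros((n, n)); f[k, l] = 1.0
            cols.append((B @ f + f.T @ B).flatten())
    M = np.array(cols).T  # matrix of f -> Bf + f^T B on flattened matrices
    return n * n - np.linalg.matrix_rank(M)

l = 3
I = np.eye(l)
Z = np.zeros((l, l))
z = np.zeros((1, l))

B_odd = np.block([[np.eye(1), z, z], [z.T, Z, I], [z.T, I, Z]])  # o(2l+1)
B_even = np.block([[Z, I], [I, Z]])                              # o(2l)
B_symp = np.block([[Z, I], [-I, Z]])                             # sp(2l)
```

For $l=3$ this gives $21$, $15$, and $21$: the predicted $2l^2+l$, $2l^2-l$, and $2l^2+l$.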

Along with the special linear Lie algebras, these form the “classical” Lie algebras. It’s a tedious but straightforward exercise to check that for any classical Lie algebra $L$, each basis element $e$ of $L$ can be written as a bracket of two other elements of $L$. That is, we have $[L,L]=L$. Since $L\subseteq\mathfrak{gl}(V)$ for some $V$, and since we know that $[\mathfrak{gl}(V),\mathfrak{gl}(V)]=\mathfrak{sl}(V)$, this establishes that $L\subseteq\mathfrak{sl}(V)$ for all classical $L$.

August 9, 2012 Posted by | Algebra, Lie Algebras | 1 Comment

## Special Linear Lie Algebras

More examples of Lie algebras! Today, an important family of linear Lie algebras.

Take a vector space $V$ with dimension $\mathrm{dim}(V)=l+1$ and start with $\mathfrak{gl}(V)$. Inside this, we consider the subalgebra of endomorphisms whose trace is zero, which we write $\mathfrak{sl}(V)$ and call the “special linear Lie algebra”. This is a subspace, since the trace is a linear functional on the space of endomorphisms:

$\displaystyle\mathrm{Tr}(ax+by)=a\mathrm{Tr}(x)+b\mathrm{Tr}(y)$

so if two endomorphisms have trace zero then so do all their linear combinations. It’s a subalgebra by using the “cyclic” property of the trace:

$\displaystyle\mathrm{Tr}(xy)=\mathrm{Tr}(yx)$

Note that this does not mean that endomorphisms can be arbitrarily rearranged inside the trace, which is a common mistake after seeing this formula. Anyway, this implies that

\displaystyle\begin{aligned}\mathrm{Tr}\left([x,y]\right)&=\mathrm{Tr}(xy-yx)\\&=\mathrm{Tr}(xy)-\mathrm{Tr}(yx)=0\end{aligned}

so actually not only is the bracket of two endomorphisms in $\mathfrak{sl}(V)$ back in the subspace, the bracket of any two endomorphisms of $\mathfrak{gl}(V)$ lands in $\mathfrak{sl}(V)$. In other words: $\left[\mathfrak{gl}(V),\mathfrak{gl}(V)\right]=\mathfrak{sl}(V)$.

Choosing a basis, we will write the algebra as $\mathfrak{sl}(l+1,\mathbb{F})$. It should be clear that the dimension is $(l+1)^2-1$, since this is the kernel of a single linear functional on the $(l+1)^2$-dimensional $\mathfrak{gl}(l+1,\mathbb{F})$, but let’s exhibit a basis anyway. All the basic matrices $e_{ij}$ with $i\neq j$ are traceless, so they’re all in $\mathfrak{sl}(l+1,\mathbb{F})$. Along the diagonal, $\mathrm{Tr}(e_{ii})=1$, so we need linear combinations that cancel each other out. It’s particularly convenient to define

$\displaystyle h_i=e_{ii}-e_{i+1,i+1}$

So we’ve got the $(l+1)^2$ basic matrices, but we take away the $l+1$ along the diagonal. Then we add back the $l$ new matrices $h_i$, getting $(l+1)^2-1$ matrices in our standard basis for $\mathfrak{sl}(l+1,\mathbb{F})$, verifying the dimension.
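
A quick numpy sketch confirming the count: build the off-diagonal $e_{ij}$ together with the $h_i$, and check that they’re all traceless and linearly independent (shown here for $l=2$):

```python
import numpy as np

l = 2
n = l + 1

def e(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

# the off-diagonal basic matrices, plus the h_i = e_ii - e_{i+1,i+1}
basis = [e(i, j) for i in range(n) for j in range(n) if i != j]
basis += [e(i, i) - e(i + 1, i + 1) for i in range(l)]

# flattened basis vectors, for a rank (linear-independence) check
M = np.array([b.flatten() for b in basis])
```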

We sometimes refer to the isomorphism class of $\mathfrak{sl}(l+1,\mathbb{F})$ as $A_l$. Because reasons.

August 8, 2012 Posted by | Algebra, Lie Algebras | 5 Comments

## Linear Lie Algebras

So now that we’ve remembered what a Lie algebra is, let’s mention the most important ones: linear Lie algebras. These are ones that arise from linear transformations on vector spaces, ’cause mathematicians love them some vector spaces.

Specifically, let $V$ be a finite-dimensional vector space over $\mathbb{F}$, and consider the associative algebra of endomorphisms $\mathrm{End}(V)$ — linear transformations from $V$ back to itself. We can use the usual method of defining a bracket as a commutator:

$\displaystyle [x,y]=xy-yx$

to turn this into a Lie algebra. When considered as a Lie algebra like this, we call it the “general linear Lie algebra”, and write $\mathfrak{gl}(V)$. Many Lie algebras are written in the Fraktur typeface like this.

Any subalgebra of $\mathfrak{gl}(V)$ is called a “linear Lie algebra”, since it’s made up of linear transformations. It turns out that every finite-dimensional Lie algebra is isomorphic to a linear Lie algebra, but we reserve the “linear” term for those algebras which we’re actually thinking of having linear transformations as elements.

Of course, since $V$ is a vector space over $\mathbb{F}$, we can pick a basis. If $V$ has dimension $n$, then there are $n$ elements in any basis, and so our endomorphisms correspond to the $n\times n$ matrices $\mathrm{Mat}_n(\mathbb{F})$. When we think of it in these terms, we often write $\mathfrak{gl}(n,\mathbb{F})$ for the general linear Lie algebra.

We can actually calculate the bracket structure explicitly in this case; bilinearity tells us that it suffices to write it down in terms of a basis. The standard basis of $\mathfrak{gl}(n,\mathbb{F})$ is $\{e_{ij}\}_{i,j=1}^n$ which has a $1$ in the $i$th row and $j$th column and $0$ elsewhere. So we can calculate:

$\displaystyle [e_{ij},e_{kl}]=\delta_{jk}e_{il}-\delta_{li}e_{kj}$

where, as usual, $\delta_{ij}$ is the Kronecker delta: $1$ if the indices are the same and $0$ if they’re different.
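
The bracket formula is easy to verify by brute force; here’s a numpy sketch checking it over all index combinations for $n=3$:

```python
import numpy as np

n = 3

def e(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

def delta(i, j):
    return 1.0 if i == j else 0.0

def formula_holds():
    """Check [e_ij, e_kl] = d_jk e_il - d_li e_kj for all indices."""
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    lhs = e(i, j) @ e(k, l) - e(k, l) @ e(i, j)
                    rhs = delta(j, k) * e(i, l) - delta(l, i) * e(k, j)
                    if not np.allclose(lhs, rhs):
                        return False
    return True
```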

We can now identify some important subalgebras of $\mathfrak{gl}(n,\mathbb{F})$. First, the strictly upper-triangular matrices $\mathfrak{n}(n,\mathbb{F})$ involve only the basis elements $e_{ij}$ with $i<j$. If $j=k$, so the first term in the above expression for the bracket shows up, then the second term cannot show up (having both would require $i<j=k<l=i$, which is absurd), and vice versa. Either way, the term that survives — $e_{il}$ with $i<l$, or $e_{kj}$ with $k<j$ — lies in the subspace again. We conclude that the bracket of two basis elements of $\mathfrak{n}(n,\mathbb{F})$ — and thus of any two elements of this subspace — involves only other basis elements of the subspace, which makes this a subalgebra.

Similarly, we conclude that the (non-strictly) upper-triangular matrices involving only $e_{ij}$ with $i\leq j$ also form a subalgebra $\mathfrak{t}(n,\mathbb{F})$. And, finally, the diagonal matrices involving only $e_{ii}$ also form a subalgebra $\mathfrak{d}(n,\mathbb{F})$. This last one is interesting, in that the bracket on $\mathfrak{d}(n,\mathbb{F})$ is actually trivial, since any two diagonal matrices commute.

As vector spaces, we see that $\mathfrak{t}(n,\mathbb{F})=\mathfrak{d}(n,\mathbb{F})+\mathfrak{n}(n,\mathbb{F})$. It’s easy to check that the bracket of a diagonal matrix and a strictly upper-triangular matrix is again strictly upper-triangular — we write $[\mathfrak{d}(n,\mathbb{F}),\mathfrak{n}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F})$ — and so we also have $[\mathfrak{t}(n,\mathbb{F}),\mathfrak{t}(n,\mathbb{F})]=\mathfrak{n}(n,\mathbb{F})$. This may seem a little like a toy example now, but it turns out to be surprisingly general; many subalgebras will relate to each other this way.

August 7, 2012 Posted by | Algebra, Lie Algebras | 7 Comments

## Lie Algebras Revisited

Well it’s been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I’d already defined them over a year ago. So let’s pick up with a recap: a Lie algebra is a module — usually a vector space over a field $\mathbb{F}$ — which we call $L$, equipped with a bilinear operation written $[x,y]$. We often require such operations to be associative, but this time we instead impose the following two conditions:

\displaystyle\begin{aligned}{}[x,x]&=0\\ [x,[y,z]]+[y,[z,x]]+[z,[x,y]]&=0\end{aligned}

Now, as long as we’re not working in a field where $1+1=0$ — and usually we’re not — we can use bilinearity to rewrite the first condition:

\displaystyle\begin{aligned}0&=[x+y,x+y]\\&=[x,x]+[x,y]+[y,x]+[y,y]\\&=0+[x,y]+[y,x]+0\\&=[x,y]+[y,x]\end{aligned}

so $[y,x]=-[x,y]$. This antisymmetry always holds, but we can only go the other way (recovering $[x,x]=0$ from antisymmetry) if the characteristic of $\mathbb{F}$ is not $2$, as stated above.

The second condition is called the “Jacobi identity”, and antisymmetry allows us to rewrite it as:

$\displaystyle[x,[y,z]]=[[x,y],z]+[y,[x,z]]$

That is, bilinearity says that we have a linear mapping $x\mapsto[x,\underline{\hphantom{X}}]$ that sends an element $x\in L$ to a linear endomorphism in $\mathrm{End}(L)$. And the Jacobi identity says that this actually lands in the subspace $\mathrm{Der}(L)$ of “derivations” — those which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare to the product rule:

$\displaystyle\frac{d}{dt}\left(fg\right)=\frac{df}{dt}g+f\frac{dg}{dt}$

where $f$ takes the place of $y$, $g$ takes the place of $z$, and $\frac{d}{dt}$ takes the place of $x$. The operations are different, of course, but you should see the similarity.

Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map $\phi:L\to L'$ that preserves the bracket:

$\displaystyle\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]$

We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called “homomorphic images” in the literature, and we’ll talk more about them later.

We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting.

And I’ll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra $(A,\cdot)$ can be given a bracket defined by

$\displaystyle [x,y]=x\cdot y-y\cdot x$

The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise.
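
As a closing sanity check, both axioms really do hold for the commutator bracket on matrices; a quick numpy sketch with random $4\times4$ matrices:

```python
import numpy as np

def br(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 4, 4))

alternating = br(x, x)                                      # should be 0
jacobi = br(x, br(y, z)) + br(y, br(z, x)) + br(z, br(x, y))  # should be 0
```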

August 6, 2012 Posted by | Algebra, Lie Algebras | 7 Comments