The Unapologetic Mathematician

Mathematics for the interested outsider

The Submodule of Invariants

If V is a module of a Lie algebra L, there is one submodule that turns out to be rather interesting: the submodule V^0 of vectors v\in V such that x\cdot v=0 for all x\in L. We call these vectors “invariants” of L.

As an illustration of how interesting these are, consider the modules we looked at last time. What are the invariant linear maps \hom(V,W)^0 from one module V to another W? We consider the action of x\in L on a linear map f:

\displaystyle\left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v)=0

Or, in other words:

\displaystyle x\cdot f(v)=f(x\cdot v)

That is, a linear map f\in\hom(V,W) is invariant if and only if it intertwines the actions on V and W. In other words, \hom_\mathbb{F}(V,W)^0=\hom_L(V,W).

Next, consider the bilinear forms on L. Here we calculate

\displaystyle\begin{aligned}\left[y\cdot B\right](x,z)&=-B([y,x],z)-B(x,[y,z])\\&=B([x,y],z)-B(x,[y,z])=0\end{aligned}

That is, a bilinear form is invariant if and only if it is associative, in the sense that the Killing form is: B([x,y],z)=B(x,[y,z])
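As a concrete check (a small numpy sketch I'm adding here, not part of the original post), we can take the trace form B(a,b)=\mathrm{Tr}(ab) on \mathfrak{sl}(2,\mathbb{R}) and verify numerically that associativity and invariance under the adjoint action come to the same thing:

```python
import numpy as np

# Illustrative sketch (added): the trace form on sl(2,R), a stand-in invariant form.
rng = np.random.default_rng(0)

def rand_sl2():
    """A random traceless 2x2 real matrix, i.e. an element of sl(2,R)."""
    a = rng.standard_normal((2, 2))
    return a - (np.trace(a) / 2) * np.eye(2)

def comm(a, b):
    return a @ b - b @ a

def B(a, b):
    """The trace form B(a,b) = Tr(ab)."""
    return np.trace(a @ b)

for _ in range(5):
    a, b, c = rand_sl2(), rand_sl2(), rand_sl2()
    # associativity: B([a,b],c) = B(a,[b,c]) ...
    assert np.isclose(B(comm(a, b), c), B(a, comm(b, c)))
    # ... is the same condition as invariance: [b.B](a,c) = -B([b,a],c) - B(a,[b,c]) = 0
    assert np.isclose(-B(comm(b, a), c) - B(a, comm(b, c)), 0.0)
```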

September 21, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 19 Comments

More New Modules from Old

There are a few constructions we can make, starting with the ones from last time and applying them in certain special cases.

First off, if V and W are two finite-dimensional L-modules, then I say we can put an L-module structure on the space \hom(V,W) of linear maps from V to W. Indeed, we can identify \hom(V,W) with V^*\otimes W: if \{e_i\} is a basis for V and \{f_j\} is a basis for W, then we can set up the dual basis \{\epsilon^i\} of V^*, such that \epsilon^i(e_j)=\delta^i_j. Then the elements \{\epsilon^i\otimes f_j\} form a basis for V^*\otimes W, and each one can be identified with the linear map sending e_i to f_j and all the other basis elements of V to 0. Thus we have an inclusion V^*\otimes W\to\hom(V,W), and a simple dimension-counting argument suffices to show that this is an isomorphism.

Now, since we have an action of L on V we get a dual action on V^*. And because we have actions on V^* and W we get one on V^*\otimes W\cong\hom(V,W). What does this look like, explicitly? Well, we can write any such tensor as the sum of tensors of the form \lambda\otimes w for some \lambda\in V^* and w\in W. We calculate the action of x\cdot(\lambda\otimes w) on a vector v\in V:

\displaystyle\begin{aligned}\left[x\cdot(\lambda\otimes w)\right](v)&=\left[(x\cdot\lambda)\otimes w\right](v)+\left[\lambda\otimes(x\cdot w)\right](v)\\&=\left[x\cdot\lambda\right](v)w+\lambda(v)(x\cdot w)\\&=-\lambda(x\cdot v)w+x\cdot(\lambda(v)w)\\&=-\left[\lambda\otimes w\right](x\cdot v)+x\cdot\left[\lambda\otimes w\right](v)\end{aligned}

In general we see that \left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v). In particular, the space of linear endomorphisms on V is \hom(V,V), and so it gets an L-module structure like this.
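In matrix terms: if x acts on V by the matrix X_V and on W by X_W, and f has matrix F, then x\cdot f has matrix X_WF-FX_V. Here is a short numpy sketch of this (my own illustration, not from the original post), taking V to be the standard two-dimensional \mathfrak{sl}(2,\mathbb{R})-module and W the adjoint module, and checking the defining relation [x,y]\cdot f=x\cdot(y\cdot f)-y\cdot(x\cdot f):

```python
import numpy as np

# Illustrative sketch (added): the hom(V,W) action for sl(2,R); names chosen here.
rng = np.random.default_rng(1)

x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [x, y, h]

def comm(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (x, y, h)."""
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(m):
    """Matrix of ad(m) acting on sl(2,R) in the basis (x, y, h)."""
    return np.column_stack([coords(comm(m, e)) for e in basis])

# V = standard 2-dim module, W = adjoint 3-dim module; f: V -> W is a 3x2 matrix F.
def act(m, F):
    """[m.f](v) = m.f(v) - f(m.v), i.e. ad(m) F - F m in matrix form."""
    return ad(m) @ F - F @ m

F = rng.standard_normal((3, 2))
a = sum(c * e for c, e in zip(rng.standard_normal(3), basis))
b = sum(c * e for c, e in zip(rng.standard_normal(3), basis))

# the defining relation [a,b].f = a.(b.f) - b.(a.f)
assert np.allclose(act(comm(a, b), F), act(a, act(b, F)) - act(b, act(a, F)))
```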

The other case of interest is the space of bilinear forms on a module V. A bilinear form on V is, of course, a linear functional on V\otimes V. And thus this space can be identified with (V\otimes V)^*. How does x\in L act on a bilinear form B? Well, we can calculate:

\displaystyle\begin{aligned}\left[x\cdot B\right](v_1,v_2)&=\left[x\cdot B\right](v_1\otimes v_2)\\&=-B\left(x\cdot(v_1\otimes v_2)\right)\\&=-B\left((x\cdot v_1)\otimes v_2\right)-B\left(v_1\otimes(x\cdot v_2)\right)\\&=-B(x\cdot v_1,v_2)-B(v_1,x\cdot v_2)\end{aligned}

In particular, we can consider the case of bilinear forms on L itself, where L acts on itself by \mathrm{ad}. Here we read

\displaystyle\left[x\cdot B\right](v_1,v_2)=-B([x,v_1],v_2)-B(v_1,[x,v_2])

September 21, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 2 Comments

New Modules from Old

There are a few standard techniques we can use to generate new modules for a Lie algebra L from old ones. We’ve seen direct sums already, but here are a few more.

One way is to start with a module M and then consider its dual space M^*. I say that this can be made into an L-module by setting

\displaystyle\left[x\cdot\lambda\right](m)=-\lambda(x\cdot m)

for all x\in L, \lambda\in M^*, and m\in M. Bilinearity should be clear, so we just check the defining property of a module. That is, we take two Lie algebra elements x,y\in L and check

\displaystyle\begin{aligned}\left[[x,y]\cdot f\right](m)&=-f([x,y]\cdot m)\\&=-f(x\cdot(y\cdot m)-y\cdot(x\cdot m))\\&=-f(x\cdot(y\cdot m))+f(y\cdot(x\cdot m))\\&=\left[x\cdot f\right](y\cdot m)-\left[y\cdot f\right](x\cdot m)\\&=-\left[y\cdot(x\cdot f)\right](m)+\left[x\cdot(y\cdot f)\right](m)\\&=\left[x\cdot(y\cdot f)-y\cdot(x\cdot f)\right](m)\end{aligned}

so [x,y]\cdot f=x\cdot(y\cdot f)-y\cdot(x\cdot f) for all f\in M^*, as desired.
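In matrix terms, if x acts on M by the matrix X, this says that x acts on M^* by -X^\mathrm{T}. Here is a minimal numpy sketch of the bracket check (my own addition, not from the post); note that the identity holds for arbitrary matrices, which is exactly why the dual of any representation is again a representation:

```python
import numpy as np

# Illustrative sketch (added): the dual action as -X^T on arbitrary matrices.
rng = np.random.default_rng(2)

def comm(a, b):
    return a @ b - b @ a

# If x acts on M by X, then [x.l](m) = -l(x.m) says x acts on M* by -X^T.
def dual(X):
    return -X.T

X, Y = rng.standard_normal((2, 4, 4))

# the defining property: [x,y] acts on M* by the bracket of the dual actions
assert np.allclose(dual(comm(X, Y)), comm(dual(X), dual(Y)))
```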

Another way is to start with modules M and N and form their tensor product M\otimes N. Now we define a module structure on this space by

\displaystyle x\cdot m\otimes n=(x\cdot m)\otimes n + m\otimes(x\cdot n)

We check the defining property again. Calculate:

\displaystyle\begin{aligned}{}[x,y]\cdot m\otimes n&=([x,y]\cdot m)\otimes n+m\otimes([x,y]\cdot n)\\&=(x\cdot(y\cdot m)-y\cdot(x\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n)-y\cdot(x\cdot n))\\&=(x\cdot(y\cdot m))\otimes n-(y\cdot(x\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n))-m\otimes(y\cdot(x\cdot n))\end{aligned}

while

\displaystyle\begin{aligned}x\cdot(y\cdot m\otimes n)-y\cdot(x\cdot m\otimes n)=&x\cdot((y\cdot m)\otimes n+m\otimes(y\cdot n))-y\cdot((x\cdot m)\otimes n+m\otimes(x\cdot n))\\=&x\cdot((y\cdot m)\otimes n)+x\cdot(m\otimes(y\cdot n))-y\cdot((x\cdot m)\otimes n)-y\cdot(m\otimes(x\cdot n))\\=&(x\cdot(y\cdot m))\otimes n+(y\cdot m)\otimes(x\cdot n)+(x\cdot m)\otimes(y\cdot n)+m\otimes(x\cdot(y\cdot n))\\&-(y\cdot(x\cdot m))\otimes n-(x\cdot m)\otimes(y\cdot n)-(y\cdot m)\otimes(x\cdot n)-m\otimes(y\cdot(x\cdot n))\\=&(x\cdot(y\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n))-(y\cdot(x\cdot m))\otimes n-m\otimes(y\cdot(x\cdot n))\end{aligned}
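These two expressions agree, so the bracket condition holds. In matrix terms the action on M\otimes N is the Kronecker sum X_M\otimes I+I\otimes X_N of the actions on the factors; here is a small numpy sketch (my addition, not from the original post) repeating the check above numerically. As with the dual module, the identity holds for arbitrary matrices:

```python
import numpy as np

# Illustrative sketch (added): the tensor-product action as a Kronecker sum.
rng = np.random.default_rng(3)

def comm(a, b):
    return a @ b - b @ a

def tensor_action(Xm, Xn):
    """If x acts on M by Xm and on N by Xn, it acts on M (x) N by Xm (x) I + I (x) Xn."""
    return np.kron(Xm, np.eye(Xn.shape[0])) + np.kron(np.eye(Xm.shape[0]), Xn)

Xm, Ym = rng.standard_normal((2, 3, 3))   # actions of x and y on M
Xn, Yn = rng.standard_normal((2, 2, 2))   # actions of x and y on N

# [x,y] acting on M (x) N is the bracket of the actions of x and y
lhs = tensor_action(comm(Xm, Ym), comm(Xn, Yn))
rhs = comm(tensor_action(Xm, Xn), tensor_action(Ym, Yn))
assert np.allclose(lhs, rhs)
```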

These are useful, and they’re only just the beginning.

September 17, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 3 Comments

Reducible Modules

As might be surmised from irreducible modules, a reducible module M for a Lie algebra L is one that contains a nontrivial proper submodule — one other than 0 or M itself.

Now obviously if N\subseteq M is a submodule we can form the quotient M/N. This is the basic setup of a short exact sequence:

\displaystyle0\to N\to M\to M/N\to 0

The question is, does this sequence split? That is, can we write M as the direct sum of N and some other submodule isomorphic to M/N?

First of all, let’s be clear that direct sums of modules do make sense. Indeed, if A and B are L-modules then we can form an action on A\oplus B by defining it on each summand separately

\displaystyle\left[\phi_{A\oplus B}(x)\right](a,b)=\left(\left[\phi_A(x)\right](a),\left[\phi_B(x)\right](b)\right)

Clearly the usual subspace inclusions and projections between A, B, and A\oplus B intertwine these actions, so they’re the required module morphisms. Further, it’s clear that (A\oplus B)/A\cong B.

So, do all short exact sequences of representations split? No. Indeed, let \mathfrak{t}(n,\mathbb{F}) be the algebra of n\times n upper-triangular matrices, along with the obvious n-dimensional representation. If we let e_i be the basic column vector with a 1 in the ith row and 0 elsewhere, then the space spanned by e_1 forms a one-dimensional submodule, since every upper-triangular matrix sends this column vector back to a multiple of itself. But this submodule has no complement: a complementary submodule would be invariant under the nilpotent matrix with 1s just above the diagonal, and the restriction of a nilpotent transformation to an invariant subspace is again nilpotent, so the complement would have to contain a nonzero vector from that matrix's kernel. That kernel is exactly the span of e_1, which the complement misses, so this short exact sequence does not split.
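Here is a small numpy illustration of both halves of this example (my own sketch, not part of the original post), with n=4 over \mathbb{R}:

```python
import numpy as np

# Illustrative sketch (added): span(e1) is a submodule for t(n,R), but has no complement.
rng = np.random.default_rng(4)

n = 4
e1 = np.zeros(n); e1[0] = 1.0

# Every upper-triangular matrix sends e1 back to a multiple of itself,
# so span(e1) is a one-dimensional submodule.
for _ in range(5):
    t = np.triu(rng.standard_normal((n, n)))
    assert np.allclose(t @ e1, t[0, 0] * e1)

# The obstruction to splitting: the nilpotent shift matrix below is
# upper-triangular and its kernel is exactly span(e1), so any invariant
# complement would have to meet span(e1), which it cannot.
shift = np.eye(n, k=1)                       # 1s just above the diagonal
print(n - np.linalg.matrix_rank(shift))      # kernel dimension: 1
assert np.allclose(shift @ e1, 0.0)          # and e1 spans that kernel
```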

On the other hand, it may be the case for a module M that any nontrivial proper submodule N has a complementary submodule N'\subseteq M with M=N\oplus N'. In this case, N is either irreducible or it’s not; if not, then any proper nontrivial submodule of N will also be a proper nontrivial submodule of M, and we can continue taking smaller submodules until we get to an irreducible one, so we may as well assume that N is irreducible. Now the same sort of argument works for N', showing that if it’s not irreducible it can be decomposed into the direct sum of some irreducible submodule and some complement, which is another nontrivial proper submodule of M. At each step, the complement gets smaller and smaller, until we have decomposed M entirely into a direct sum of irreducible submodules.

If M is decomposable into a direct sum of irreducible submodules, we say that M is “completely reducible”, or “decomposable”, as we did when we were working with groups. Any module where any nontrivial proper submodule has a complement is thus completely reducible; conversely, complete reducibility implies that any nontrivial proper submodule has a complement. Indeed, a complement can always be chosen from among the summands of M: take a maximal collection of the irreducible summands whose direct sum meets the given submodule trivially; that direct sum is a complement, and the submodule itself is then isomorphic to the direct sum of the remaining summands.

September 16, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 1 Comment

Irreducible Modules

Sorry for the delay; it’s getting crowded around here again.

Anyway, an irreducible module for a Lie algebra L is a pretty straightforward concept: it’s a module M such that its only submodules are 0 and M. As usual, Schur’s lemma tells us that any morphism between two irreducible modules is either 0 or an isomorphism. And, as we’ve seen in other examples involving linear transformations, all automorphisms of an irreducible module are scalars times the identity transformation. This, of course, doesn’t depend on any choice of basis.

A one-dimensional module will always be irreducible, if it exists. And a unique — up to isomorphism, of course — one-dimensional module will always exist for simple Lie algebras. Indeed, if L is simple then we know that [L,L]=L. Any one-dimensional representation \phi:L\to\mathfrak{gl}(1,\mathbb{F}) must then have its image \phi(L)=\phi([L,L])=[\phi(L),\phi(L)] contained in [\mathfrak{gl}(1,\mathbb{F}),\mathfrak{gl}(1,\mathbb{F})]=\mathfrak{sl}(1,\mathbb{F}). But the only traceless 1\times1 matrix is the zero matrix. Setting \phi(x)=0 for all x\in L does indeed give a valid representation of L.

September 15, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 1 Comment

Lie Algebra Modules

It should be little surprise that we're interested in concrete actions of Lie algebras on vector spaces, like we were for groups. Given a Lie algebra L we define an L-module to be a vector space V equipped with a bilinear function L\times V\to V — often written (x,v)\mapsto x\cdot v — satisfying the relation

\displaystyle [x,y]\cdot v=x\cdot(y\cdot v)-y\cdot(x\cdot v)

Of course, this is the same thing as a representation \phi:L\to\mathfrak{gl}(V). Indeed, given a representation \phi we can define x\cdot v=[\phi(x)](v); given an action we can define a representation \phi(x)\in\mathfrak{gl}(V) by [\phi(x)](v)=x\cdot v. The above relation is exactly the statement that the bracket in L corresponds to the bracket in \mathfrak{gl}(V).
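To make the dictionary concrete, here is a short numpy sketch (my own illustration, not from the original post) using the adjoint module, where \mathfrak{sl}(2,\mathbb{R}) acts on itself by x\cdot v=[x,v], and recovering the representation \phi from the action by applying it to basis vectors:

```python
import numpy as np

# Illustrative sketch (added): the adjoint module of sl(2,R); x . v = [x, v].
x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [x, y, h]

def act(a, v):
    return a @ v - v @ a          # the module action; here, the bracket

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (x, y, h)."""
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def rep(a):
    """The representation matrix phi(a), read off column by column from the action."""
    return np.column_stack([coords(act(a, e)) for e in basis])

# The bracket in L corresponds to the bracket in gl(V):
assert np.allclose(rep(act(x, y)),                     # phi([x, y])
                   rep(x) @ rep(y) - rep(y) @ rep(x))  # [phi(x), phi(y)]
```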

Of course, the modules of a Lie algebra form a category. A homomorphism of L-modules is a linear map \phi:V\to W satisfying

\displaystyle\phi(x\cdot v)=x\cdot\phi(v)

We automatically get the concept of a submodule — a subspace sent back into itself by each x\in L — and a quotient module. In the latter case, we can see that if W\subseteq V is any submodule then we can define x\cdot(v+W)=(x\cdot v)+W. This is well-defined, since if v+w is any other representative of v+W then x\cdot(v+w)=x\cdot v+x\cdot w, and x\cdot w\in W, so x\cdot v and x\cdot(v+w) both represent the same coset (x\cdot v)+W in V/W.

Thus, every submodule can be seen as the kernel of some homomorphism: the projection V\to V/W. It should be clear that every homomorphism has a kernel, and a cokernel can be defined simply as the quotient of the target by the image. All we need to see that the category of L-modules is abelian is to show that every epimorphism is actually a quotient, but we know this is already true for the underlying vector spaces. Since the (vector space) kernel of an L-module map is an L-submodule, this is also true for L-modules.

September 12, 2012 Posted by | Algebra, Lie Algebras, Representation Theory | 2 Comments

All Derivations of Semisimple Lie Algebras are Inner

It turns out that all the derivations on a semisimple Lie algebra L are inner derivations. That is, they’re all of the form \mathrm{ad}(x) for some x\in L. We know that the homomorphism \mathrm{ad}:L\to\mathrm{Der}(L) is injective when L is semisimple. Indeed, its kernel is exactly the center Z(L), which we know is trivial. We are asserting that it is also surjective, and thus an isomorphism of Lie algebras.

If we set D=\mathrm{Der}(L) and I=\mathrm{Im}(\mathrm{ad}), we can see that [D,I]\subseteq I. Indeed, if \delta is any derivation and x\in L, then we can check that

\displaystyle\begin{aligned}\left[\delta,\mathrm{ad}(x)\right](y)&=\delta([\mathrm{ad}(x)](y))-[\mathrm{ad}(x)](\delta(y))\\&=\delta([x,y])-[x,\delta(y)]\\&=[\delta(x),y]+[x,\delta(y)]-[x,\delta(y)]\\&=[\mathrm{ad}(\delta(x))](y)\end{aligned}

This makes I\subseteq D an ideal, so the Killing form \kappa of I is the restriction to I\times I of the Killing form of D. Then we can define I^\perp\subseteq D to be the subspace orthogonal to I with respect to the Killing form of D. Since I\cong L is semisimple, \kappa is nondegenerate, which tells us that I\cap I^\perp=0. And since both I and I^\perp are ideals of D (the latter by the associativity of the Killing form), we have [I,I^\perp]\subseteq I\cap I^\perp=0.

Now, if \delta is an outer derivation — one not in I — we can assume that it is orthogonal to I, since otherwise we just have to use \kappa to project \delta onto I and subtract off that much to get another outer derivation that is orthogonal. But then we find that

\displaystyle\mathrm{ad}(\delta(x))=[\delta,\mathrm{ad}(x)]=0

since this bracket is contained in [I^\perp,I]=0. But the fact that \mathrm{ad} is injective means that \delta(x)=0 for all x\in L, and thus \delta=0. We conclude that I^\perp=0 and that I=D, and thus that \mathrm{ad} is onto, as asserted.
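As a numerical illustration of the result (a sketch I'm adding, not part of the original post), we can compute \dim\mathrm{Der}(L) directly for L=\mathfrak{sl}(2,\mathbb{R}) by writing out the linear conditions that define a derivation and counting solutions; the answer is 3, the dimension of L, which is exactly the dimension of the space of inner derivations:

```python
import numpy as np

# Illustrative sketch (added): count the derivations of sl(2,R) in coordinates.
# Bracket on coordinate vectors in the ordered basis (x, y, h), using the
# structure constants [x,y]=h, [h,x]=2x, [h,y]=-2y.
def bracket(u, v):
    ux, uy, uh = u
    vx, vy, vh = v
    return np.array([2 * (uh * vx - ux * vh),    # coefficient of x
                     -2 * (uh * vy - uy * vh),   # coefficient of y
                     ux * vy - uy * vx])         # coefficient of h

E = np.eye(3)   # coordinate vectors of x, y, h

# A linear map D is a derivation iff D[e_i,e_j] - [D e_i, e_j] - [e_i, D e_j] = 0
# for all basis pairs; build the linear system whose kernel is Der(L).
rows = []
for i in range(3):
    for j in range(i + 1, 3):
        cols = []
        for k in range(3):
            for l in range(3):
                Dkl = np.zeros((3, 3)); Dkl[k, l] = 1.0   # matrix unit
                defect = (Dkl @ bracket(E[i], E[j])
                          - bracket(Dkl @ E[i], E[j])
                          - bracket(E[i], Dkl @ E[j]))
                cols.append(defect)
        rows.append(np.array(cols).T)    # 3 equations in the 9 entries of D

A = np.vstack(rows)                      # 9 x 9 system
print(9 - np.linalg.matrix_rank(A))      # 3: every derivation of sl(2,R) is inner
```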

September 11, 2012 Posted by | Algebra, Lie Algebras | 8 Comments

Decomposition of Semisimple Lie Algebras

We say that a Lie algebra L is the direct sum of a collection of ideals L=I_1\oplus\dots\oplus I_n if it's the direct sum as a vector space. In particular, this implies that [I_i,I_j]\subseteq I_i\cap I_j=0 for i\neq j, meaning that the bracket of any two elements from different ideals is zero.

Now, if L is semisimple then there is a collection of ideals, each of which is simple as a Lie algebra in its own right, such that L is the direct sum of these simple ideals. Further, every such simple ideal of L is one in the collection — there’s no way to spread another simple ideal across two or more summands in this decomposition. And the Killing form of a summand is the restriction of the Killing form of L to that summand, as we expect for any ideal of a Lie algebra.

If I\subseteq L is any ideal then we can define the subspace I^\perp of vectors in L that are “orthogonal” to all the vectors in I with respect to the Killing form \kappa. The associativity of \kappa shows that I^\perp is also an ideal, just as we saw for the radical. Indeed, the radical of \kappa is just L^\perp. Anyhow, Cartan’s criterion again shows that the intersection I\cap I^\perp is solvable, but since L is semisimple this means I\cap I^\perp=0, and we can write L=I\oplus I^\perp.

So now we can use an induction on the dimension of L; if L has no nonzero proper ideal, it's already simple. Otherwise we can pick some nonzero proper ideal I to get L=I\oplus I^\perp, where each summand has a lower dimension than L. Any ideal of I is an ideal of L — the bracket with anything from I^\perp is zero — so I and I^\perp must be semisimple as well, or else there would be a nonzero solvable ideal of L. By induction, each one can be decomposed into simple ideals, so L can as well.

Now, if I is any simple ideal of L, then [I,L] is an ideal of I. It can't be zero, since if it were I would be contained in Z(L), which is zero. Thus, since I is simple, we must have [I,L]=I. But if we write L=L_1\oplus\dots\oplus L_n for the decomposition into simple ideals, then [I,L]=[I,L_1]\oplus\dots\oplus[I,L_n], so all but one of these brackets [I,L_i] must be zero, and that bracket must be I itself. But this means I\subseteq L_i for this simple summand, and — by the simplicity of L_i — I=L_i.

From this decomposition we conclude that for all semisimple L we have [L,L]=L, since [L_i,L_i]=L_i for each simple summand. Every ideal and every quotient of L must also be semisimple, since each must consist of some collection of the summands of L.

September 8, 2012 Posted by | Algebra, Lie Algebras | Leave a comment

Back to the Example

Let’s go back to our explicit example of L=\mathfrak{sl}(2,\mathbb{F}) and look at its Killing form. We first recall our usual basis:

\displaystyle\begin{aligned}x&=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\\y&=\begin{pmatrix}0&0\\1&0\end{pmatrix}\\h&=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\end{aligned}

which lets us write out matrices for the adjoint action:

\displaystyle\begin{aligned}\mathrm{ad}(x)&=\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\\\mathrm{ad}(y)&=\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\\\mathrm{ad}(h)&=\begin{pmatrix}2&0&0\\ 0&-2&0\\ 0&0&0\end{pmatrix}\end{aligned}

and from here it’s easy to calculate the Killing form. For example:

\displaystyle\begin{aligned}\kappa(x,y)&=\mathrm{Tr}\left(\mathrm{ad}(x)\mathrm{ad}(y)\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}2&0&0\\ 0&0&0\\ 0&0&2\end{pmatrix}\right)\\&=4\end{aligned}

We can similarly calculate all the other values of the Killing form on basis elements.

\displaystyle\begin{aligned}\kappa(x,x)&=0\\\kappa(x,y)=\kappa(y,x)&=4\\\kappa(x,h)=\kappa(h,x)&=0\\\kappa(y,y)&=0\\\kappa(y,h)=\kappa(h,y)&=0\\\kappa(h,h)&=8\end{aligned}

So we can write down the matrix of \kappa:

\displaystyle\begin{pmatrix}0&4&0\\4&0&0\\ 0&0&8\end{pmatrix}

And we can test this for degeneracy by taking its determinant to find -128. Since this is nonzero, we conclude that \kappa is nondegenerate, which we know means that \mathfrak{sl}(2,\mathbb{F}) is semisimple — at least in fields where 1+1\neq0.
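For anyone who wants to check this by machine, here is a short numpy sketch (my addition, not part of the original post) that rebuilds the adjoint matrices from the basis above, working over \mathbb{R}, and reproduces the matrix of \kappa and its determinant:

```python
import numpy as np

# Illustrative sketch (added): the Killing form of sl(2,R) in the basis (x, y, h).
x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [x, y, h]

def comm(a, b):
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix in the basis (x, y, h)."""
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(a):
    return np.column_stack([coords(comm(a, e)) for e in basis])

# The Killing form kappa(a, b) = Tr(ad(a) ad(b)) on the basis elements:
kappa = np.array([[np.trace(ad(a) @ ad(b)) for b in basis] for a in basis])
print(kappa)                  # [[0. 4. 0.] [4. 0. 0.] [0. 0. 8.]]
print(np.linalg.det(kappa))   # -128 (up to rounding): kappa is nondegenerate
```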

September 7, 2012 Posted by | Algebra, Lie Algebras | Leave a comment

The Radical of the Killing Form

The first and most important structural result using the Killing form regards its “radical”. We never really defined this before, but it's not hard: the radical of a bilinear form B on a vector space V is the subspace consisting of all v\in V such that B(v,w)=0 for all w\in V. That is, if we regard B as a linear map v\mapsto B(v,\underline{\hphantom{X}}), the radical is the kernel of this map. Thus we see that B is nondegenerate if and only if its radical is zero; we've only ever dealt much with nondegenerate bilinear forms, so we've never really had to consider the radical.

Now, the radical of the Killing form \kappa is more than just a subspace of L; the associative property tells us that it’s an ideal. Indeed, if s is in the radical and x,y\in L are any other two Lie algebra elements, then we find that

\displaystyle\kappa([s,x],y)=\kappa(s,[x,y])=0

thus [s,x] is in the radical as well.

We recall that there was another “radical” we’ve mentioned: the radical of a Lie algebra is its maximal solvable ideal. This is not necessarily the same as the radical of the Killing form, but we can see that the radical of the form is contained in the radical of the algebra. By definition, if x is in the radical of \kappa and y\in L is any other Lie algebra element we have

\displaystyle\kappa(x,y)=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}(y))=0

Cartan’s criterion then tells us that the radical of \kappa is solvable, and is thus contained in \mathrm{Rad}(L), the radical of the algebra. Immediately we conclude that if L is semisimple — if \mathrm{Rad}(L)=0 — then the Killing form must be nondegenerate.

It turns out that the converse is also true. In fact, the radical of \kappa contains all abelian ideals I\subseteq L. Indeed, if x\in I and y\in L then \mathrm{ad}(x)\mathrm{ad}(y):L\to I, and the square of this map sends L into [I,I]=0. So \mathrm{ad}(x)\mathrm{ad}(y) is nilpotent, and thus has trace zero, proving that \kappa(x,y)=0, and that x is contained in the radical of \kappa. So if the Killing form is nondegenerate its radical is zero, and there can be no nonzero abelian ideals of L. But the derived series of \mathrm{Rad}(L) eventually hits zero, and its last nonzero term is an abelian ideal of L. This can only work out if \mathrm{Rad}(L) is already zero, and thus L is semisimple.

So we have a nice condition for semisimplicity: calculate the Killing form and check that it’s nondegenerate.

September 6, 2012 Posted by | Algebra, Lie Algebras | 3 Comments