Well, it’s been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I already defined them over a year ago. So let’s pick up with a recap: a Lie algebra is a module — usually a vector space over a field $\mathbb{F}$ — called $L$, and we give it a bilinear operation which we write as $[x,y]$. We often require such operations to be associative, but this time we impose the following two conditions:

$\displaystyle [x,x]=0$

$\displaystyle [x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0$
Now, as long as we’re not working in a field where $1+1=0$ — and usually we’re not — we can use bilinearity to rewrite the first condition:

$\displaystyle 0=[x+y,x+y]=[x,x]+[x,y]+[y,x]+[y,y]=[x,y]+[y,x]$
so $[y,x]=-[x,y]$. This antisymmetry always holds, but we can only go the other way if the characteristic of $\mathbb{F}$ is not $2$, as stated above.
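To see the two conditions come apart in characteristic $2$, here’s a tiny check of my own (not from the original discussion), written in Python: on the field $\mathbb{F}_2$ itself, the ordinary product is a bilinear operation which is antisymmetric (since $-1=1$) and yet fails the condition $[x,x]=0$.

```python
# The "bracket" [a, b] = a*b on the two-element field F_2 = {0, 1}.
F2 = [0, 1]

def bracket(a, b):
    return (a * b) % 2

# Antisymmetry [b, a] = -[a, b] holds trivially, since -1 = 1 mod 2...
assert all(bracket(b, a) == (-bracket(a, b)) % 2 for a in F2 for b in F2)

# ...but the alternating condition [x, x] = 0 fails at x = 1.
assert bracket(1, 1) == 1
```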
The second condition is called the “Jacobi identity”, and antisymmetry allows us to rewrite it as:

$\displaystyle [x,[y,z]]=[[x,y],z]+[y,[x,z]]$
That is, bilinearity says that we have a linear mapping $\mathrm{ad}:L\to\mathrm{End}(L)$ that sends an element $x$ to the linear endomorphism $\mathrm{ad}(x):y\mapsto[x,y]$. And the Jacobi identity says that this actually lands in the subspace of “derivations” — those endomorphisms which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare to the product rule:

$\displaystyle D(fg)=D(f)g+fD(g)$
where $\mathrm{ad}(x)$ takes the place of $D$, $y$ takes the place of $f$, and $z$ takes the place of $g$. And the multiplications are replaced by brackets. But you should see the similarity.
Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map $\phi:L_1\to L_2$ that preserves the bracket:

$\displaystyle \phi([x,y])=[\phi(x),\phi(y)]$
We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called “homomorphic images” in the literature, and we’ll talk more about them later.
We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting.
And I’ll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra $A$ can be given a bracket defined by the commutator:

$\displaystyle [a,b]=ab-ba$
The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise.
Since Lie groups are groups, they have representations — homomorphisms to the general linear group $GL(V)$ of some vector space or another. But since $GL(V)$ is itself a Lie group, we can use this additional structure as well. And so we say that a representation of a Lie group should be not only a group homomorphism, but a smooth map of manifolds as well.
As a first example, we define a representation that every Lie group has: the adjoint representation. To define it, we start by defining conjugation by $g\in G$. As we might expect, this is the map $\tau_g:G\to G$ defined by $\tau_g(h)=ghg^{-1}$. This is a diffeomorphism from $G$ back to itself, and in particular it has the identity $e$ as a fixed point: $\tau_g(e)=geg^{-1}=e$. Thus the derivative sends the tangent space at the identity back to itself: $\tau_{g*e}:\mathcal{T}_eG\to\mathcal{T}_eG$. But we know that this tangent space is canonically isomorphic to the Lie algebra $\mathfrak{g}$. That is, $\tau_{g*e}\in GL(\mathfrak{g})$. So now we can define $\mathrm{Ad}:G\to GL(\mathfrak{g})$ by $\mathrm{Ad}(g)=\tau_{g*e}$. We call this the “adjoint representation” of $G$.
To get even more specific, we can consider the adjoint representation of $GL_n(\mathbb{R})$ on its Lie algebra $\mathfrak{gl}_n(\mathbb{R})$. I say that $\mathrm{Ad}(g)$ is just conjugation by $g$ itself. That is, if we view $GL_n(\mathbb{R})$ as an open subset of the algebra $M_n(\mathbb{R})$ of all $n\times n$ matrices, then we can identify $\mathcal{T}_IGL_n(\mathbb{R})\cong M_n(\mathbb{R})$. Conjugation by $g$ extends to a linear map $X\mapsto gXg^{-1}$ on all of $M_n(\mathbb{R})$, and the derivative of a linear map is that same map again under the canonical identifications; this means that $\tau_{g*I}$ and $X\mapsto gXg^{-1}$ are “the same” transformation, under this identification of these two vector spaces.
Put more simply: to calculate the adjoint action of $g\in GL_n(\mathbb{R})$ on the element of $\mathfrak{gl}_n(\mathbb{R})$ corresponding to the matrix $X$, it suffices to calculate the conjugate $gXg^{-1}$; then

$\displaystyle \mathrm{Ad}(g)X=gXg^{-1}$
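As a quick numerical sanity check (my own, using numpy, not part of the original discussion), we can verify that conjugation really does behave the way a representation on the Lie algebra should: it is linear in $X$, and it preserves the commutator.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 3)) + 3 * np.eye(3)  # shifted to be safely invertible
X = rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3))

def Ad(g, X):
    """The adjoint action of GL_n on gl_n: conjugation by g."""
    return g @ X @ np.linalg.inv(g)

def comm(A, B):
    return A @ B - B @ A

# Ad(g) is linear in X and preserves the bracket: Ad(g)[X,Y] = [Ad(g)X, Ad(g)Y].
assert np.allclose(Ad(g, 2 * X + Y), 2 * Ad(g, X) + Ad(g, Y))
assert np.allclose(Ad(g, comm(X, Y)), comm(Ad(g, X), Ad(g, Y)))
```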
Since $GL_n(\mathbb{R})$ is an open submanifold of $M_n(\mathbb{R})$, the tangent space of $GL_n(\mathbb{R})$ at any matrix $A$ is the same as the tangent space to $M_n(\mathbb{R})$ at $A$. And since $M_n(\mathbb{R})$ is (isomorphic to) a Euclidean space, we can identify $\mathcal{T}_AM_n(\mathbb{R})$ with $M_n(\mathbb{R})$ itself, using the canonical isomorphism between a Euclidean space and each of its tangent spaces. In particular, we can identify it with the tangent space at the identity matrix $I$, and thus with the Lie algebra of $GL_n(\mathbb{R})$:

$\displaystyle \mathfrak{gl}_n(\mathbb{R})=\mathcal{T}_IGL_n(\mathbb{R})\cong M_n(\mathbb{R})$
But this only covers the vector space structures. Since $M_n(\mathbb{R})$ is an associative algebra it automatically has a bracket: the commutator $[A,B]=AB-BA$. Is this the same as the bracket on $\mathfrak{gl}_n(\mathbb{R})$ under this vector space isomorphism? Indeed it is.
To see this, let $A$ be a matrix in $M_n(\mathbb{R})$ and assign $X_I=A$. This specifies the value of the vector field $X$ at the identity in $GL_n(\mathbb{R})$. We extend this to a left-invariant vector field by setting

$\displaystyle X_g=L_{g*}X_I=gA$
where we subtly slip from left-translation by $g$ within $GL_n(\mathbb{R})$ to left-multiplication by $g$ within the larger algebra $M_n(\mathbb{R})$ — a linear map, whose derivative at any point is itself. We do the same thing to go from another matrix $B$ to another left-invariant vector field $Y$ with $Y_g=gB$.
Now we have our hands on two left-invariant vector fields $X$ and $Y$ coming from two matrices $A$ and $B$. We will calculate the Lie bracket $[X,Y]$ — we know that it must be left-invariant — and verify that its value at the identity indeed corresponds to the commutator $AB-BA$.
Let $u^{ij}$ be the function sending an $n\times n$ matrix to its $(i,j)$ entry. We hit it with one of our vector fields:

$\displaystyle (Yu^{ij})(g)=Y_gu^{ij}=u^{ij}(gB)$
That is, $Yu^{ij}=u^{ij}\circ R_B$, where $R_B$ is right-multiplication by $B$. To apply the vector $X_I=A$ to this function, we must take its derivative at $I$ in the direction of $A$. If we consider the curve through $I$ defined by $c(t)=I+tA$ we find that

$\displaystyle X_I(Yu^{ij})=\frac{d}{dt}u^{ij}\big((I+tA)B\big)\Big|_{t=0}=\frac{d}{dt}u^{ij}(B+tAB)\Big|_{t=0}=u^{ij}(AB)$
Similarly, we find that $Y_I(Xu^{ij})=u^{ij}(BA)$. And thus

$\displaystyle [X,Y]_Iu^{ij}=X_I(Yu^{ij})-Y_I(Xu^{ij})=u^{ij}(AB)-u^{ij}(BA)=u^{ij}(AB-BA)$
Of course, for any tangent vector $v\in\mathcal{T}_IGL_n(\mathbb{R})$ we have the decomposition

$\displaystyle v=\sum_{i,j}v(u^{ij})\frac{\partial}{\partial u^{ij}}\bigg|_I$
Therefore, since we’ve calculated $[X,Y]_Iu^{ij}=u^{ij}(AB-BA)$, we know that the two vectors $[X,Y]_I$ and $AB-BA$ have all the same components, and thus are the same vector. And so we conclude that the Lie bracket on $\mathfrak{gl}_n(\mathbb{R})$ agrees with the commutator on $M_n(\mathbb{R})$, and thus that these two are isomorphic as Lie algebras.
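We can also watch the commutator emerge from the group structure numerically. The sketch below is my own (numpy, with a truncated Taylor series standing in for the matrix exponential): the group commutator $e^{tA}e^{tB}e^{-tA}e^{-tB}$ agrees with $e^{t^2(AB-BA)}$ up to terms of order $t^3$.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its Taylor series -- adequate for small matrices."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)); A /= np.linalg.norm(A)
B = rng.normal(size=(3, 3)); B /= np.linalg.norm(B)
t = 1e-2

# The group commutator of the flows of A and B...
P = expm(t * A) @ expm(t * B) @ expm(-t * A) @ expm(-t * B)
# ...equals exp(t^2 [A, B]) up to an O(t^3) error.
C = A @ B - B @ A
assert np.linalg.norm(P - expm(t * t * C)) < 100 * t**3  # O(t^3) agreement
assert np.linalg.norm(P - np.eye(3)) > t**3              # yet P is not trivially close to I
```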
One of the most important examples of a Lie group we’ve already seen: the general linear group $GL(V)$ of a finite-dimensional vector space $V$. Of course, for the vector space $\mathbb{R}^n$ this is the same as — or at least isomorphic to — the group $GL_n(\mathbb{R})$ of all invertible $n\times n$ real matrices, so that’s a Lie group we can really get our hands on. And if $V$ has dimension $n$, then $V\cong\mathbb{R}^n$, and thus $GL(V)\cong GL_n(\mathbb{R})$.
So, how do we know that it’s a Lie group? Well, obviously it’s a group, but what about the topology? The matrix group $GL_n(\mathbb{R})$ sits inside the algebra $M_n(\mathbb{R})$ of all $n\times n$ matrices, which is an $n^2$-dimensional vector space. Even better, it’s an open subset, which we can see by considering the (continuous) map $\det:M_n(\mathbb{R})\to\mathbb{R}$. Since $GL_n(\mathbb{R})$ is the preimage of $\mathbb{R}\setminus\{0\}$ — which is an open subset of $\mathbb{R}$ — $GL_n(\mathbb{R})$ is an open subset of $M_n(\mathbb{R})$.
So we can conclude that $GL_n(\mathbb{R})$ is an open submanifold of $M_n(\mathbb{R})$, which comes equipped with the standard differentiable structure on $\mathbb{R}^{n^2}$. Matrix multiplication is clearly smooth, since we can write each component of a product matrix $AB$ as a (quadratic) polynomial in the entries of $A$ and $B$. As for inversion, Cramer’s rule expresses the entries of the inverse matrix $A^{-1}$ as the quotient of a (degree $n-1$) polynomial in the entries of $A$ and the determinant of $A$. So long as $A$ is invertible the determinant is nonzero, and so this quotient of smooth functions is smooth at $A$.
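The Cramer’s-rule claim is easy to check numerically; here is a small illustration of my own in numpy. Each entry of the adjugate matrix is a cofactor, a degree-$(n-1)$ polynomial in the entries of $A$, and dividing by $\det A$ recovers the inverse.

```python
import numpy as np
from itertools import product

def adjugate(A):
    """Transpose of the cofactor matrix; each entry is a degree-(n-1)
    polynomial in the entries of A."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i, j in product(range(n), repeat=2):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Cramer's rule: A^{-1} = adj(A) / det(A), defined and smooth wherever det(A) != 0.
assert np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A))
```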
Since a Lie group $G$ is a smooth manifold we know that the collection $\mathfrak{X}(G)$ of vector fields forms a Lie algebra. But this is a big, messy object, because smoothness isn’t a very stringent requirement on a vector field. The value can’t vary too wildly from point to point, but it can still flop around a huge amount. What we really want is something more tightly controlled, and hopefully something related to the algebraic structure on $G$ to boot.
To this end, we consider the “left-invariant” vector fields on $G$. A vector field $X$ is left-invariant if the diffeomorphism $L_h:G\to G$ of left-translation intertwines $X$ with itself for all $h\in G$. That is, $X$ must satisfy $L_{h*}\circ X=X\circ L_h$; or to put it another way: $X_{hg}=L_{h*}X_g$. This is a very strong condition indeed, for any left-invariant vector field is determined by its value at the identity $e\in G$. Just set $g=e$ and find that

$\displaystyle X_h=L_{h*}X_e$
The really essential thing for our purposes is that the left-invariant vector fields form a Lie subalgebra. That is, if $X$ and $Y$ are left-invariant vector fields, then so are their sum $X+Y$, scalar multiples $cX$ — where $c$ is a constant and not a function varying as we move around $G$ — and their bracket $[X,Y]$. And indeed left-invariance of sums and scalar multiples is obvious, using the formula $X_h=L_{h*}X_e$ and the fact that $L_{h*}$ is linear on individual tangent spaces. As for brackets, this follows from the lemma we proved when we first discussed maps intertwining vector fields.
So given a Lie group $G$ we get a Lie algebra we’ll write as $\mathfrak{g}$. In general Lie groups are written with capital Roman letters, while their corresponding Lie algebras are written with the corresponding lowercase fraktur letters. When $G$ has dimension $n$, $\mathfrak{g}$ also has dimension $n$ — this time as a vector space — since each vector field in $\mathfrak{g}$ is uniquely determined by a single vector in $\mathcal{T}_eG$.
We should keep in mind that while $\mathfrak{g}$ is canonically isomorphic to $\mathcal{T}_eG$ as a vector space, the Lie algebra structure comes not from that tangent space itself, but from the way left-invariant vector fields interact with each other.
And of course there’s the glaring asymmetry that we’ve chosen left-invariant vector fields instead of right-invariant vector fields. Indeed, we could have set everything up in terms of right-invariant vector fields and the right-translation diffeomorphisms $R_h:g\mapsto gh$. But it turns out that the inversion diffeomorphism $i:g\mapsto g^{-1}$ interchanges left- and right-invariant vector fields, and so we end up in the same place anyway.
How does the inversion $i$ act on vector fields? We recognize that $i^{-1}=i$, and find that it sends the vector field $X$ to $i_*\circ X\circ i$. Now if $X$ is left-invariant then $L_{h*}\circ X=X\circ L_h$ for all $h\in G$. We can then calculate

$\displaystyle R_{h*}\circ(i_*\circ X\circ i)=i_*\circ L_{h^{-1}*}\circ X\circ i=i_*\circ X\circ L_{h^{-1}}\circ i=(i_*\circ X\circ i)\circ R_h$
where the identities $R_{h*}\circ i_*=i_*\circ L_{h^{-1}*}$ and $L_{h^{-1}}\circ i=i\circ R_h$ reflect the simple group equations $g^{-1}h=(h^{-1}g)^{-1}$ and $h^{-1}g^{-1}=(gh)^{-1}$, respectively. Thus we conclude that if $X$ is left-invariant then $i_*\circ X\circ i$ is right-invariant. The proof of the converse is similar.
The one thing that’s left is proving that if $X$ and $Y$ are left-invariant then their right-invariant images have the same bracket. This will follow from the fact that $i$ intertwines $X$ with its image $i_*\circ X\circ i$, and intertwining maps send brackets to brackets; but rather than prove this now we’ll just push ahead and use left-invariant vector fields.
Now we come to one of the most broadly useful and fascinating structures in all of mathematics: Lie groups. These are objects which are both smooth manifolds and groups in a compatible way. The fancy way to say it is, of course, that a Lie group is a group object in the category of smooth manifolds.
To be a little more explicit, a Lie group $G$ is a smooth $n$-dimensional manifold equipped with a multiplication $m:G\times G\to G$ and an inversion $i:G\to G$ which satisfy all the usual group axioms (wow, it’s been a while since I wrote that stuff down) and are also smooth maps between manifolds. Of course, when we write $G\times G$ we mean the product manifold.
We can use these to construct some other useful maps. For instance, if $h$ is any particular element of $G$ we know that we have a smooth inclusion $G\to G\times G$ defined by $g\mapsto(h,g)$. Composing this with the multiplication map we get a smooth map $L_h:G\to G$ defined by $L_h(g)=hg$, which we call “left-translation by $h$”. Similarly we get a smooth right-translation $R_h(g)=gh$.
There is a great source for generating many Lie algebras: associative algebras. Specifically, if we have an associative algebra $A$ we can build a Lie algebra on the same underlying vector space by letting the bracket be the “commutator” from $A$. That is, for any algebra elements $a$ and $b$ we define

$\displaystyle [a,b]=ab-ba$
In fact, this is such a common way of coming up with Lie algebras that many people think of the bracket as a commutator by definition.
Clearly this is bilinear and antisymmetric, but does it satisfy the Jacobi identity? Well, let’s take three algebra elements and form the double bracket

$\displaystyle [a,[b,c]]=a(bc-cb)-(bc-cb)a=abc-acb-bca+cba$
We can find the other orders just as easily:

$\displaystyle [b,[c,a]]=b(ca-ac)-(ca-ac)b=bca-bac-cab+acb$

$\displaystyle [c,[a,b]]=c(ab-ba)-(ab-ba)c=cab-cba-abc+bac$
and when we add these all up each term cancels against another term, leaving zero. Thus the commutator in an associative algebra does indeed act as a bracket.
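We can watch that cancellation happen numerically; here is a quick check of my own in Python, using the algebra of real matrices under the commutator:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = (rng.normal(size=(4, 4)) for _ in range(3))

def bracket(x, y):
    """The commutator bracket inherited from the associative product."""
    return x @ y - y @ x

# The three cyclic double brackets cancel pairwise, summing to zero.
jacobi = bracket(a, bracket(b, c)) + bracket(b, bracket(c, a)) + bracket(c, bracket(a, b))
assert np.allclose(jacobi, np.zeros((4, 4)))
```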
One more little side trip before we proceed with the differential geometry: Lie algebras. These are like “regular” associative algebras in that we take a module (often a vector space) and define a bilinear operation on it. This much is covered at the top of the post on algebras.
The difference is that instead of insisting that the operation be associative, we impose different conditions. Also, instead of writing our operation like a multiplication (and using the word “multiplication”), we will write it as $[x,y]$ and call it the “bracket” of $x$ and $y$. Now, our first condition is that the bracket be antisymmetric:

$\displaystyle [y,x]=-[x,y]$
Secondly, and more importantly, we demand that the bracket should satisfy the “Jacobi identity”:

$\displaystyle [x,[y,z]]=[[x,y],z]+[y,[x,z]]$
What this means is that the operation of “bracketing with $x$” acts like a derivation on the Lie algebra: we can apply it to the bracket $[y,z]$ by first applying it to $y$ and bracketing the result with $z$, then bracketing $y$ with the result of applying the operation to $z$, and adding the two together.
This condition is often stated in the equivalent form

$\displaystyle [x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0$
It’s a nice exercise to show that (assuming antisymmetry) these two equations are indeed equivalent. This form of the Jacobi identity is neat in the way it shows a rotational symmetry among the three algebra elements, but I feel that it misses the deep algebraic point about why the Jacobi identity is so important: it makes for an algebra that acts on itself by derivations of its own structure.
It turns out that we already know of an example of a Lie algebra: the cross product of vectors in $\mathbb{R}^3$. Indeed, take three vectors $u$, $v$, and $w$ and try multiplying them out in all three orders:

$\displaystyle u\times(v\times w)=v(u\cdot w)-w(u\cdot v)$

$\displaystyle v\times(w\times u)=w(v\cdot u)-u(v\cdot w)$

$\displaystyle w\times(u\times v)=u(w\cdot v)-v(w\cdot u)$
and add the results together to see that you always get zero, thus satisfying the Jacobi identity.
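Here is that check carried out with numpy’s cross product (a verification of my own, not from the original discussion):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 4.0])
w = np.array([2.0, -3.0, 1.0])

# Each triple product expands by the "BAC minus CAB" rule...
assert np.allclose(np.cross(u, np.cross(v, w)), v * np.dot(u, w) - w * np.dot(u, v))

# ...and the three cyclic orders sum to zero: the Jacobi identity
# for the bracket [u, v] = u x v on R^3.
total = (np.cross(u, np.cross(v, w))
         + np.cross(v, np.cross(w, u))
         + np.cross(w, np.cross(u, v)))
assert np.allclose(total, np.zeros(3))
```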
Sorry for the break last Friday.
As long as we’re in the neighborhood — so to speak — we may as well define the concept of a “local ring”. This is a commutative ring which contains a unique maximal ideal. Equivalently, it’s one in which the sum of any two noninvertible elements is again noninvertible.
Why are these conditions equivalent? Well, if we have noninvertible elements $a$ and $b$ with $a+b$ invertible, then these elements generate proper principal ideals $(a)$ and $(b)$. If we add these two ideals, we must get the whole ring, for the sum contains $a+b$, and so must contain $(a+b)^{-1}(a+b)=1$, and thus the whole ring. Thus $a$ and $b$ cannot both be contained within the same maximal ideal, and so we would have to have two distinct maximal ideals.
Conversely, if the sum of any two noninvertible elements is itself noninvertible, then the noninvertible elements form an ideal. And this ideal must be maximal, for if we throw in any other (invertible) element, it would suddenly contain the entire ring.
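A small finite example (my own, not from the original discussion) makes this concrete: $\mathbb{Z}/8\mathbb{Z}$ is a local ring whose noninvertible elements are exactly the even residues, and the check below confirms they are closed under addition.

```python
from math import gcd

n = 8
# An element of Z/nZ is invertible exactly when it is coprime to n.
noninvertible = {a for a in range(n) if gcd(a, n) != 1}
assert noninvertible == {0, 2, 4, 6}  # the unique maximal ideal (2)

# The sum of any two noninvertible elements is again noninvertible,
# so Z/8Z is a local ring.
assert all((a + b) % n in noninvertible for a in noninvertible for b in noninvertible)
```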
Why do we care? Well, it turns out that for any manifold $M$ and point $p\in M$ the algebra $\mathcal{O}_p$ of germs of functions at $p$ is a local ring. And in fact this is pretty much the reason for the name “local” ring: it is a ring of functions that’s completely localized to a single point.
To see that this is true, let’s consider which germs are invertible. I say that a germ represented by a function $f$ is invertible if and only if $f(p)\neq0$. Indeed, if $f(p)=0$, then the germ of $f$ is certainly not invertible. On the other hand, if $f(p)\neq0$, then continuity tells us that there is some neighborhood $U$ of $p$ on which $f$ never vanishes. Restricting to this neighborhood if necessary, we have a representative of the germ which never takes the value zero. And thus we can define the function $g(q)=1/f(q)$ for $q\in U$, which represents the multiplicative inverse to the germ of $f$.
With this characterization of the invertible germs in hand, it should be clear that any two noninvertible germs represented by $f$ and $g$ must have $f(p)=g(p)=0$. Thus $f(p)+g(p)=0$, and the germ of $f+g$ is again noninvertible. Since the sum of any two noninvertible germs is itself noninvertible, the algebra of germs is local, and its unique maximal ideal $\mathfrak{m}_p$ consists of those germs which vanish at $p$.
Incidentally, we once characterized maximal ideals as those for which the quotient is a field. So which field is it in this case? It’s not hard to see that $\mathcal{O}_p/\mathfrak{m}_p\cong\mathbb{R}$ — any germ is sent to its value at $p$, which is just a real number.
First let’s mention a few more general results about Kostka numbers.
Among all the partitions of $n$, it should be clear that $(n)\trianglerighteq\mu$ for every partition $\mu$. Thus the Kostka number $K_{(n)\mu}$ is not automatically zero. In fact, I say that it’s always $1$. Indeed, the shape is a single row with $n$ entries, and the content $\mu$ gives us a list of numbers, possibly with some repeats. There’s exactly one way to arrange this list into weakly increasing order along the single row, giving $K_{(n)\mu}=1$.
On the other extreme, $\lambda\trianglerighteq(1^n)$ for every partition $\lambda$, so $K_{\lambda(1^n)}$ might be nonzero. The shape is given by $\lambda$, and the content $(1^n)$ gives one entry of each value from $1$ to $n$. There are no possible entries to repeat, and so any semistandard tableau with content $(1^n)$ is actually standard. Thus $K_{\lambda(1^n)}=f^\lambda$ — the number of standard tableaux of shape $\lambda$.
This means that we can decompose the module $M^{(1^n)}$:

$\displaystyle M^{(1^n)}=\bigoplus_\lambda K_{\lambda(1^n)}S^\lambda=\bigoplus_\lambda f^\lambda S^\lambda$
But $f^\lambda=\dim(S^\lambda)$, which means each irreducible $S_n$-module shows up here with a multiplicity equal to its dimension. That is, $M^{(1^n)}$ is always the left regular representation.
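For a small case we can verify this dimension count by brute force (a Python sketch of my own): with $n=4$, counting the standard tableaux of each shape and summing the squares of the counts gives $4!=24$, the dimension of the regular representation.

```python
from itertools import permutations
from math import factorial

def count_standard_tableaux(shape):
    """Count standard Young tableaux of a given shape by brute force:
    try every placement of 1..n and keep those whose rows and columns
    both strictly increase."""
    n = sum(shape)
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    count = 0
    for values in permutations(range(1, n + 1)):
        T = dict(zip(cells, values))
        rows_ok = all(T[r, c] < T[r, c + 1] for r, c in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

shapes = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
dims = [count_standard_tableaux(s) for s in shapes]
print(dims)  # [1, 3, 2, 3, 1]
assert sum(d * d for d in dims) == factorial(4)
```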
Okay, now let’s look at a full example for a single choice of $\mu$. Specifically, let $\mu=(2,2,1)$. That is, we’re looking for semistandard tableaux of various shapes, all with two entries of value $1$, two of value $2$, and one of value $3$. There are five shapes $\lambda$ with $\lambda\trianglerighteq(2,2,1)$: namely $(5)$, $(4,1)$, $(3,2)$, $(3,1,1)$, and $(2,2,1)$. For each one, we will look for all the ways of filling it with the required content.
Counting the semistandard tableaux of each shape, we find the Kostka numbers: one of shape $(5)$, two each of shapes $(4,1)$ and $(3,2)$, and one each of shapes $(3,1,1)$ and $(2,2,1)$. Thus we get the decomposition

$\displaystyle M^{(2,2,1)}=S^{(5)}\oplus2S^{(4,1)}\oplus2S^{(3,2)}\oplus S^{(3,1,1)}\oplus S^{(2,2,1)}$
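These counts are small enough to verify by brute force (a Python sketch of my own): enumerate the distinct fillings of each shape with content $(2,2,1)$, keep those whose rows weakly increase and whose columns strictly increase, and read off the Kostka numbers.

```python
from itertools import permutations

def kostka(shape, content):
    """Count semistandard tableaux of the given shape and content by
    brute force over the distinct arrangements of the entry multiset."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    entries = [v + 1 for v, mult in enumerate(content) for _ in range(mult)]
    count = 0
    for fill in set(permutations(entries)):
        T = dict(zip(cells, fill))
        rows_ok = all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

mu = (2, 2, 1)
shapes = [(5,), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1)]
print([kostka(s, mu) for s in shapes])  # [1, 2, 2, 1, 1]
```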