## Lie groups, Lie algebras, and representations

I’m eventually going to get comments from Adams, Vogan, and Zuckerman about Kazhdan-Lusztig-Vogan polynomials, but I want to give a brief overview of some of the things I know will be involved. Most of this stuff I’ll cover more thoroughly at some later point.

One side note: I’m not going to link to the popular press articles. I’m glad they’re there, but they’re *awful* as far as the math goes. Science writers have this delusion that people are just incapable of understanding mathematics, and completely dumb things down to the point of breaking them. Either that or they don’t bother to put in even the minimum time to get a sensible handle on the concepts like they do for physics.

Okay, so a Lie group is a group of continuous transformations, like rotations of an object in space. The important thing is that the underlying set has the structure of a manifold, which is a space that “locally looks like” regular $n$-dimensional space. The surface of the Earth is curved into a sphere, as we know, but close up it looks flat. That’s what being a manifold is all about. The group operations — composition and inversion — have to behave “smoothly”, so that they respect the manifold structure.

One important thing you can do with a Lie group is find subgroups with nice structures. Some of the nicest are the one-dimensional subgroups passing through the identity element. Since close up the group looks like $n$-dimensional space, let’s get really close and stand on the identity. Now we can pick a direction and start walking that way, never turning. As we go, we trace out a path in the group. Let’s say that after $t$ minutes of elapsed time we’re at $g(t)$. If we’ve done things right, we have the extremely nice property that $g(s)g(t) = g(s+t)$. That is, we can multiply group elements along our path by adding the time parameters. We call this sort of thing a “1-parameter subgroup”, and there’s one of them for each choice of direction and speed we leave the identity with.
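To make that multiplication-by-adding-times property concrete, here’s a small sketch (my own illustration, not from the post) using the simplest Lie group around: rotations of the plane. The path $g(t)$ is just “rotate by angle $t$”, and composing two rotations adds their angles.

```python
import math

def rot(t):
    """Rotation of the plane by angle t: a point on a 1-parameter
    subgroup of the Lie group of 2D rotations."""
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(a, b):
    """Multiply two 2x2 matrices (composition of transformations)."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, eps=1e-12):
    """Compare matrices up to floating-point error."""
    return all(abs(a[i][j] - b[i][j]) < eps
               for i in range(2) for j in range(2))

# The defining property of a 1-parameter subgroup: g(s) g(t) = g(s + t).
s, t = 0.7, 1.9
assert close(matmul(rot(s), rot(t)), rot(s + t))
```

Walking off in a different “direction” here would mean rotating at a different rate, say $g(t)$ = rotate by $2t$, which is exactly the “one for each choice of direction and speed” statement.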

So what if we start combining these subgroups? Let’s pick two and call them $g$ and $h$. In general they won’t commute with each other. To see this, get a ball and put it on the table. Mark the point at the top of the ball so you can keep track of it. Now, roll the ball to the right by 90°, then away from you by 90°, then to your left by 90°, then towards you by 90°. The point isn’t back where it started; it’s pointing right at you! Try it again, but make each turn 45°. Again, the point isn’t back at the top of the ball. If you do this for all different angles, you’ll trace out a curve of rotations, which is another 1-parameter subgroup! We can measure how much two subgroups fail to commute by getting a third subgroup out of them. And since 1-parameter subgroups correspond to vectors (directions and speeds) at the identity of the group, we can just calculate on those vectors. The set of vectors equipped with this structure is called a Lie algebra. Given two vectors $X$ and $Y$, we write the resulting vector as $[X, Y]$. This bracket satisfies a few properties: it’s bilinear, it’s antisymmetric (so $[X, Y] = -[Y, X]$), and it obeys the Jacobi identity.
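Here’s a sketch (again my own, not the post’s) of the ball experiment in miniature. For rotations of space, the vectors at the identity are the “infinitesimal generators” of rotation about each axis, and the bracket of the x-generator with the y-generator turns out to be the z-generator: combining two non-commuting directions produces a genuinely third one.

```python
def matmul(a, b):
    """Multiply two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(x, y):
    """Lie bracket [X, Y] = XY - YX of two square matrices."""
    xy, yx = matmul(x, y), matmul(y, x)
    n = len(x)
    return [[xy[i][j] - yx[i][j] for j in range(n)] for i in range(n)]

# Infinitesimal generators of rotations about the x-, y-, and z-axes.
Lx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
Ly = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
Lz = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# The failure of x- and y-rotations to commute is a z-rotation:
assert bracket(Lx, Ly) == Lz
```

This is the Lie algebra of the rotation group, and the relation $[L_x, L_y] = L_z$ (with its cyclic cousins) is the algebraic shadow of the rolling-ball trick.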

Lie algebras are what we really want to understand.

So now I’m going to skip a bunch and just say that we can put Lie algebras together like we make direct products of groups, only now we call them direct sums. In fact, for many purposes all Lie algebras can be broken into a finite direct sum of a bunch of “simple” Lie algebras that can’t be broken up any more. Think about breaking a number into its prime factors. If we understand all the simple Lie algebras, then (in theory) we understand all the “semisimple” Lie algebras, which are sums of simple ones.
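As a tiny illustration of “direct sum” (my own sketch; I’m using the standard fact that the rotation algebra can be realized as 3-vectors with the cross product as the bracket), the bracket on a direct sum acts componentwise, so the two summands never interact, which is why understanding the simple pieces is enough:

```python
def cross(u, v):
    """The cross product makes R^3 into the Lie algebra of rotations."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def direct_sum_bracket(x, y):
    """Bracket on a direct sum of two copies of the rotation algebra:
    each component brackets independently of the other."""
    return (cross(x[0], y[0]), cross(x[1], y[1]))

# Elements of the direct sum are pairs; the summands never mix.
x = ((1, 0, 0), (0, 0, 1))
y = ((0, 1, 0), (1, 0, 0))
assert direct_sum_bracket(x, y) == ((0, 0, 1), (0, 1, 0))
```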

And amazingly, we *do* know all the simple Lie algebras! I’m not remotely going to go into this now, but at a cursory glance the Wikipedia article on root systems seems to be not completely insane. The upshot is that we’ve got four infinite families of simple Lie algebras and five weird ones. Of the five weird ones, $E_8$ is the biggest. Its root system (see the Wikipedia article) consists of 240 vectors living in an eight-dimensional space. This is the thing that, projected onto a plane, you’ve probably seen in all of the popular press articles looking like a big lace circle.
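Those 240 vectors are concrete enough to enumerate. Here’s a sketch using the standard construction of the root system: the vectors $\pm e_i \pm e_j$ with two nonzero integer coordinates, plus the vectors with all eight coordinates equal to $\pm\frac{1}{2}$ and an even number of minus signs.

```python
from itertools import combinations, product

roots = []

# Integer roots: ±e_i ± e_j for i < j.
# There are C(8,2) = 28 pairs and 4 sign choices each: 112 roots.
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0.0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))

# Half-integer roots: every coordinate ±1/2, with an even number of
# minus signs.  That's 2^7 = 128 roots.
for signs in product((0.5, -0.5), repeat=8):
    if sum(1 for s in signs if s < 0) % 2 == 0:
        roots.append(signs)

assert len(roots) == 112 + 128 == 240

# Every root has squared length 2.
assert all(abs(sum(c * c for c in v) - 2) < 1e-12 for v in roots)
```

The famous lacy picture is just these 240 points projected down to a well-chosen plane.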

So we already understood $E_8$? What’s the big deal now? Well, it’s one thing to know it’s there, and another thing entirely to know how to work with such a thing. What we’d really like to know is how $E_8$ can act on other mathematical structures. In particular, we’d like to know how it can act as linear transformations on a vector space. Any vector space $V$ comes equipped with a Lie algebra $\mathfrak{gl}(V)$: take the vector space of all linear transformations from $V$ to itself and make a bracket by $[A, B] = AB - BA$ (verify for yourself that this satisfies the requirements of being a Lie algebra). So what we’re interested in is functions from $E_8$ to $\mathfrak{gl}(V)$ that preserve all the Lie algebra structure. Such a function is called a representation.
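If you don’t feel like verifying by hand that $[A, B] = AB - BA$ satisfies the requirements, here’s a quick sketch of the check on some small matrices: antisymmetry, and the Jacobi identity (which holds identically for this bracket, since all the $ABC$-type terms cancel in pairs).

```python
def matmul(a, b):
    """Multiply two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(a, b):
    """Add two square matrices."""
    n = len(a)
    return [[a[i][j] + b[i][j] for j in range(n)] for i in range(n)]

def bracket(a, b):
    """The bracket [A, B] = AB - BA on linear transformations."""
    ab, ba = matmul(a, b), matmul(b, a)
    n = len(a)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 1]]

# Antisymmetry: [A, B] = -[B, A].
assert bracket(A, B) == [[-x for x in row] for row in bracket(B, A)]

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0.
total = madd(bracket(A, bracket(B, C)),
             madd(bracket(B, bracket(C, A)), bracket(C, bracket(A, B))))
assert total == [[0, 0], [0, 0]]
```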

And this, as I understand it, is where the Kazhdan-Lusztig-Vogan polynomials come in. Root systems and Dynkin diagrams are the essential tools for classifying Lie algebras. These polynomials are essential for understanding the structures of *representations* of Lie algebras. I’m not ready to go into how they do that, and when I am it will probably be at a somewhat higher level than I usually use here, but hopefully lower than the technical accounts available in Adams’ paper and Baez’ explanation.