I’ve had a flood of incoming people in the past couple days, and have even been linked from the article in The New York Times (or at least in their list of blogs commenting on the news). As I said before, their coverage is pretty superficial, and I’ve counted half a dozen errors in their picture captions alone.
One of the main reasons I write this weblog is that I believe anyone can follow the basic ideas of even the most bleeding-edge mathematics. Few mathematicians write for the generally interested lay audience (“GILA”) the way physicists tend to, and when mathematics does make it into the popular press, journalists don’t even make the effort they make for physics to get what they do say right.
My uncle, no mathematician he but definitely a GILA member, emailed me to mention he’d read that mathematicians had “solved E8”, but had no idea what it meant. Mostly he was asking if I knew Adams (I do), but I responded with a high-level overview of what they were doing and why. I’m going to post here what I told him. It’s designed to be pretty self-contained, and has been refined from a few days of explaining the ideas to other nonmathematicians.
Oh, and I’m not above link-baiting. If you find this coherent and illuminating, please pass the link to this post around. If there’s something that I’ve horribly screwed up in here, please let me know and I’ll try to smooth it over while keeping it accessible. I’m also trying to explain the ideas at a somewhat higher level (though not in full technicality) within the category “Atlas of Lie Groups”. If you want to know more, please keep watching there.
[UPDATE: I now also have another post trying to answer the "what's it good for?" question. That response starts at the fourth paragraph: "I also want to...".]
I understand not knowing what the news reports mean, because most of them are pretty horrible. It’s possible to give a stripped-down explanation, but the popular press doesn’t seem to want to bother.
A group is a collection of symmetries. A nice example is the set of all symmetries of a square: you can flip it over left-to-right, flip it up-to-down, or rotate it by quarter turns. This group isn’t “simple” because there are smaller groups sitting inside it [yes, it's a bit more than that, as readers here should know. --ed] — you could forget the flips and just consider the subgroup of rotations. All groups can be built up from simple groups that have no smaller ones sitting inside them, so those are the ones we really want to understand. Think of it sort of like breaking a number into its prime factors.
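To make the square example concrete, here's a quick sketch in plain Python (my own illustration, nothing from the Atlas project): each symmetry is written as a permutation of the corner labels 0, 1, 2, 3, and we generate the whole group by composing the basic moves.

```python
# Symmetries of the square as permutations of the corner labels 0-3.

def compose(p, q):
    """Do q first, then p."""
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)
rot90 = (1, 2, 3, 0)   # quarter turn (a 4-cycle of the corners)
flip = (3, 2, 1, 0)    # a flip, swapping corners 0<->3 and 1<->2

# Generate the whole group by closing under composition.
elements = {identity}
changed = True
while changed:
    changed = False
    for a in list(elements):
        for gen in (rot90, flip):
            c = compose(a, gen)
            if c not in elements:
                elements.add(c)
                changed = True

# The rotations alone form a smaller group sitting inside.
rot180 = compose(rot90, rot90)
rot270 = compose(rot90, rot180)
rotations = {identity, rot90, rot180, rot270}
```

Running this, `elements` has all eight symmetries of the square, while the four `rotations` are closed under composition on their own, which is exactly the "smaller group sitting inside" mentioned above.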
The kinds of groups this project is concerned with are called Lie groups (pronounced “lee”) after the Norwegian mathematician Sophus Lie. They’re made up of continuous transformations like rotations of an object in 3-dimensional space. Again, the Lie groups we’re really interested in are the simple ones that can’t be broken down into smaller ones.
A hundred years ago, Élie Cartan and others came up with a classification of all these simple Lie groups. There are four infinite families, like rotations in spaces of various dimensions or square matrices of various sizes with determinant 1 (if you remember any matrix algebra). These are called $A_n$, $B_n$, $C_n$, and $D_n$. There are also five extras that don’t fit into those four families, called $G_2$, $F_4$, $E_6$, $E_7$, and $E_8$. That last one is the biggest: it takes three numbers to describe a rotation in 3-D space, but 248 numbers to describe an element of $E_8$.
Classifying the groups is all well and good, but they’re still hard to work with. We want to know how these groups can act as symmetries of various objects. In particular, we want to find ways of assigning a matrix to each element of a group so that if you take two transformations in the group and do them one after the other, the matrix corresponding to that combination is the product of the matrices corresponding to the two transformations. We call this a “matrix representation” of the group. Again, some representations can be broken into simpler pieces, and we’re concerned with the simple ones that can’t be broken down anymore.
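The defining property of a matrix representation can be checked by hand in a tiny case. Here's a sketch with an example of my own (rotations of the plane, not anything to do with $E_8$): doing two rotations one after the other matches multiplying their matrices.

```python
# Rotations of the plane, represented by 2x2 matrices.
import math

def rotation_matrix(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.9
# multiplying the two matrices...
product = matmul(rotation_matrix(a), rotation_matrix(b))
# ...gives the matrix of the composite rotation
composite = rotation_matrix(a + b)
matches = all(abs(product[i][j] - composite[i][j]) < 1e-12
              for i in range(2) for j in range(2))
```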
What the Atlas project is trying to do is build up a classification of all the simple representations of all the simple Lie groups, and the hardest chunk is $E_8$, which has now been solved.
As another part of preparing for the digestion of the result, I need to talk about flag varieties. You’ll need at least some linear algebra to follow from this point.
A flag in a vector space is a chain of nested subspaces of specified dimensions. In three-dimensional space, for instance, one kind of flag is a choice of some plane through the origin and a line through the origin sitting inside that plane. Another kind is just a choice of a plane through the origin. The space of all flags of a given kind in a vector space can be described by solving a certain collection of polynomial equations, which makes it a “variety”. It’s sort of like a manifold, but there can be places where a variety intersects itself, or comes to a point, or has a sharp kink. In those places it doesn’t look like $n$-dimensional space.
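The nesting condition is easy to test computationally. A sketch, with made-up numbers of my own: describe each subspace by a list of spanning vectors; the line sits inside the plane exactly when stacking its vectors onto the plane's doesn't raise the rank.

```python
# Checking that a line through the origin sits inside a plane through
# the origin, i.e. that they form a flag in 3-dimensional space.

def rank(rows):
    """Row rank by Gaussian elimination with a small tolerance."""
    M = [row[:] for row in rows]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][col]) > 1e-9),
                     None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

line = [[1.0, 2.0, 0.0]]                     # a line through the origin
plane = [[1.0, 2.0, 0.0], [0.0, 0.0, 1.0]]   # a plane containing it
is_flag = rank(plane + line) == rank(plane)  # nested: rank doesn't grow
```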
Flag varieties and Lie groups have a really interesting interaction. I’ll try to do the simplest example justice, and the rest are sort of similar. We take an $n$-dimensional vector space $V$ and consider the group $SL(V)$ of linear transformations $T$ with $\det(T) = 1$. Clearly this group acts on $V$. If we pick a basis of $V$ we can represent each transformation as an $n\times n$ matrix. Then there’s a subgroup $B$ of “upper triangular” matrices of the form

$\begin{pmatrix}a_{1,1}&a_{1,2}&\cdots&a_{1,n}\\0&a_{2,2}&\cdots&a_{2,n}\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&a_{n,n}\end{pmatrix}$

with zeroes everywhere below the diagonal. It’s straightforward to check that the product of two such matrices is again of this form, and that their determinants (the products of their diagonal entries) are always $1$. Of course, if we choose a different basis, the transformations in this subgroup are no longer in this upper triangular form. We’ll have a different subgroup of upper triangular matrices. The subgroups corresponding to different bases are related, though — they’re conjugate!
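Both of those claims can be spot-checked numerically. A quick sketch of my own, in plain Python: multiply two random upper triangular matrices whose diagonals each multiply to 1, and confirm the product is still upper triangular with determinant 1.

```python
# Upper triangular matrices with determinant 1 are closed under products.
import random

n = 4

def random_upper():
    """Upper triangular with determinant (= product of diagonal) 1."""
    A = [[random.uniform(-1.0, 1.0) if j > i else 0.0 for j in range(n)]
         for i in range(n)]
    diag_prod = 1.0
    for i in range(n - 1):
        A[i][i] = random.uniform(0.5, 2.0)
        diag_prod *= A[i][i]
    A[n - 1][n - 1] = 1.0 / diag_prod   # force the product to 1
    return A

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A, B = random_upper(), random_upper()
C = matmul(A, B)
still_upper = all(abs(C[i][j]) < 1e-12 for i in range(n) for j in range(i))
det_C = 1.0
for i in range(n):
    det_C *= C[i][i]    # determinant of an upper triangular matrix
```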
Corresponding to each basis we also have a flag. It consists of the line spanned by the first basis element, contained in the plane spanned by the first two elements, contained in… and so on. So why do we care about this flag? Because the subgroup of upper triangular matrices with respect to this basis fixes this flag! The special line is sent back into itself, the special plane back into itself, and so on. In fact, the group $SL(V)$ acts on the flag variety “transitively” (there’s only one orbit), and the stabilizer of the flag corresponding to a basis is the subgroup of upper triangular matrices with respect to that basis! The upshot is that we can describe the flag variety from the manifold of $SL(V)$ by picking a basis, getting the subgroup $B$ of upper triangular matrices, and identifying elements of $SL(V)$ that “differ” by an element of $B$. The subgroup $B$ is not normal in $SL(V)$, so we can’t form a quotient group, but there’s still a space of cosets: $SL(V)/B$.
So studying the flag variety in $V$ ends up telling us about the relationship between the group $SL(V)$ and its subgroup $B$. In general, if we have a Lie group $G$ and a subgroup $B$ satisfying a certain condition, we can study the relation between these two by studying a certain related variety of flags.
I’m eventually going to get comments from Adams, Vogan, and Zuckerman about Kazhdan-Lusztig-Vogan polynomials, but I want to give a brief overview of some of the things I know will be involved. Most of this stuff I’ll cover more thoroughly at some later point.
One side note: I’m not going to link to the popular press articles. I’m glad they’re there, but they’re awful as far as the math goes. Science writers have this delusion that people are just incapable of understanding mathematics, and completely dumb things down to the point of breaking them. Either that or they don’t bother to put in even the minimum time to get a sensible handle on the concepts like they do for physics.
Okay, so a Lie group is a group of continuous transformations, like rotations of an object in space. The important thing is that the underlying set has the structure of a manifold, which is a space that “locally looks like” regular $n$-dimensional space. The surface of the Earth is curved into a sphere, as we know, but close up it looks flat. That’s what being a manifold is all about. The group operations — the composition and the inversion — have to behave “smoothly” to preserve the manifold structure.
One important thing you can do with a Lie group is find subgroups with nice structures. Some of the nicest are the one-dimensional subgroups passing through the identity element. Since close up the group looks like $n$-dimensional space, let’s get really close and stand on the identity. Now we can pick a direction and start walking that way, never turning. As we go, we trace out a path in the group. Let’s say that after $t$ minutes of elapsed time we’re at $g(t)$. If we’ve done things right, we have the extremely nice property that $g(s)g(t) = g(s+t)$. That is, we can multiply group elements along our path by adding the time parameters. We call this sort of thing a “1-parameter subgroup”, and there’s one of them for each choice of direction and speed we leave the origin with.
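For matrix groups, the "walking" above is the matrix exponential: $g(t) = \exp(tX)$ for a fixed direction $X$. A sketch with an example of my own (a 2×2 rotation generator), verifying the key property $g(s)g(t) = g(s+t)$ numerically:

```python
# A 1-parameter subgroup built by exponentiating t times a fixed matrix.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(X, terms=30):
    """Matrix exponential by its power series (fine for small matrices)."""
    n = len(X)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # running sum
    T = [[float(i == j) for j in range(n)] for i in range(n)]  # X^k / k!
    for k in range(1, terms):
        T = [[sum(T[i][m] * X[m][j] for m in range(n)) / k for j in range(n)]
             for i in range(n)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

X = [[0.0, -1.0], [1.0, 0.0]]   # the direction we leave the identity in

def g(t):
    return expm([[t * x for x in row] for row in X])

s, t = 0.4, 1.1
lhs = matmul(g(s), g(t))        # multiply two points on the path...
rhs = g(s + t)                  # ...or just add the time parameters
one_parameter = all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
                    for i in range(2) for j in range(2))
```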
So what if we start combining these subgroups? Let’s pick two and call them $g(t)$ and $h(t)$. In general they won’t commute with each other. To see this, get a ball and put it on the table. Mark the point at the top of the ball so you can keep track of it. Now roll the ball to the right by 90°, then away from you by 90°, then to your left by 90°, then towards you by 90°. The point isn’t back where it started — it’s pointing right at you! Try it again, but make each turn 45°. Again, the point isn’t back at the top of the ball. If you do this for all different angles, you’ll trace out a curve of rotations, which is another 1-parameter subgroup! We can measure how much two subgroups fail to commute by getting a third subgroup out of them. And since 1-parameter subgroups correspond to vectors (directions and speeds) at the identity of the group, we can just calculate on those vectors. The set of vectors equipped with this structure is called a Lie algebra. Given two vectors $X$ and $Y$ we write the resulting vector as $[X, Y]$. This bracket satisfies a few properties.
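The ball-rolling story has an exact matrix counterpart, which I'll sketch with the standard generators of 3-D rotations (integer matrices, so the arithmetic is exact): the bracket of the generators for rolling about two horizontal axes is the generator for spinning about the vertical axis.

```python
# For matrix Lie algebras the bracket is the commutator [X, Y] = XY - YX.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(len(X))]
            for i in range(len(X))]

Lx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]   # infinitesimal rotation, x-axis
Ly = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]   # infinitesimal rotation, y-axis
Lz = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]   # infinitesimal rotation, z-axis
```

The brackets cycle: `bracket(Lx, Ly)` comes out equal to `Lz`, and similarly for the other two pairs, which is the "third subgroup" the rolling experiment produces.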
Lie algebras are what we really want to understand.
So now I’m going to skip a bunch and just say that we can put Lie algebras together like we make direct products of groups, only now we call them direct sums. In fact, for many purposes all Lie algebras can be broken into a finite direct sum of a bunch of “simple” Lie algebras that can’t be broken up any more. Think about breaking a number into its prime factors. If we understand all the simple Lie algebras, then (in theory) we understand all the “semisimple” Lie algebras, which are sums of simple ones.
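For matrix Lie algebras, the direct sum can be pictured very concretely: put the two summands into blocks on the diagonal. A toy sketch of my own, showing that brackets then happen blockwise, so the two summands never interact:

```python
# Direct sum of two matrix Lie algebras as block-diagonal matrices.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(len(X))]
            for i in range(len(X))]

def direct_sum(A, B):
    n, m = len(A), len(B)
    return ([row + [0] * m for row in A] +
            [[0] * n + row for row in B])

# two elements from each of two (toy) matrix Lie algebras
X1, Y1 = [[0, 1], [0, 0]], [[0, 0], [1, 0]]
X2, Y2 = [[1, 0], [0, -1]], [[0, 1], [0, 0]]

# bracketing the block matrices...
blockwise = bracket(direct_sum(X1, X2), direct_sum(Y1, Y2))
# ...is the same as bracketing in each summand separately
separately = direct_sum(bracket(X1, Y1), bracket(X2, Y2))
```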
And amazingly, we do know all the simple Lie algebras! I’m not remotely going to go into this now, but at a cursory glance the Wikipedia article on root systems seems to be not completely insane. The upshot is that we’ve got four infinite families of Lie algebras and five weird ones. Of the five weird ones, $E_8$ is the biggest. Its root system (see the Wikipedia article) consists of 240 vectors living in an eight-dimensional space. This is the thing that, projected onto a plane, you’ve probably seen in all of the popular press articles looking like a big lace circle.
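Those 240 vectors can actually be written down in a few lines, using the standard construction of the $E_8$ root system: 112 roots of the form $\pm e_i \pm e_j$ with $i < j$, plus 128 roots with every coordinate $\pm\tfrac{1}{2}$ and an even number of minus signs.

```python
# Enumerating the E8 root system.
from itertools import combinations, product

roots = []
# 112 roots: +/-e_i +/- e_j for i < j
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))
# 128 roots: all coordinates +/-1/2, with an even number of minus signs
for signs in product((0.5, -0.5), repeat=8):
    if sum(1 for s in signs if s < 0) % 2 == 0:
        roots.append(signs)

# 240 roots plus the 8 dimensions of the underlying space: 248 in all,
# matching the count of numbers needed to describe an element of E8
dim_e8 = len(roots) + 8
```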
So we already understood $E_8$? What’s the big deal now? Well, it’s one thing to know it’s there, and another thing entirely to know how to work with such a thing. What we’d really like to know is how $E_8$ can act on other mathematical structures. In particular, we’d like to know how it can act as linear transformations on a vector space. Any vector space $V$ comes equipped with a Lie algebra $\mathfrak{gl}(V)$: take the vector space of all linear transformations from $V$ to itself and make a bracket by $[X, Y] = XY - YX$ (verify for yourself that this satisfies the requirements of being a Lie algebra). So what we’re interested in is functions from $E_8$ to $\mathfrak{gl}(V)$ that preserve all the Lie algebra structure.
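One of the requirements to verify there is the Jacobi identity, $[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0$. Here's a spot check of my own that the commutator bracket satisfies it, using random integer matrices so the arithmetic is exact:

```python
# Spot-checking the Jacobi identity for the bracket [X, Y] = XY - YX.
import random

n = 3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def random_matrix():
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

X, Y, Z = random_matrix(), random_matrix(), random_matrix()
jacobi = madd(bracket(X, bracket(Y, Z)),
              madd(bracket(Y, bracket(Z, X)),
                   bracket(Z, bracket(X, Y))))
zero = [[0] * n for _ in range(n)]
```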
And this, as I understand it, is where the Kazhdan-Lusztig-Vogan polynomials come in. Root systems and Dynkin diagrams are the essential tools for classifying Lie algebras. These polynomials are essential for understanding the structures of representations of Lie algebras. I’m not ready to go into how they do that, and when I am it will probably be at a somewhat higher level than I usually use here, but hopefully lower than the technical accounts available in Adams’ paper and Baez’ explanation.
A team of 18 mathematicians working on the Atlas of Lie Groups and Representations has finished computing the Kazhdan-Lusztig-Vogan polynomial for the exceptional Lie group $E_8$. There’s a good explanation by John Baez at The n-Category Café (warning: technical).
I feel a sort of connection to this project, mostly socially. For one thing, it may seem odd, but my advisor does Lie algebra representations — not knot theory like I do — which is very closely related to the theory of Lie groups. His first graduate student, Jeff Adams, led the charge for $E_8$. Dr. Adams was one of the best professors I had as an undergrad, and I probably owe my category-friendliness to his style in teaching the year-long graduate abstract algebra course I took, as well as his course on the classical groups. That approach of his probably has something to do with his being a student of Zuckerman’s. And around we go.