As another part of preparing for the digestion of the result, I need to talk about flag varieties. You’ll need at least some linear algebra to follow from this point.
A flag in a vector space is a chain of nested subspaces of specified dimensions. In three-dimensional space, for instance, one kind of flag is a choice of some plane through the origin and a line through the origin sitting inside that plane. Another kind is just a choice of a plane through the origin. The space of all flags of a given kind in a vector space can be described by solving a certain collection of polynomial equations, which makes it a “variety”. It’s sort of like a manifold, but there can be places where a variety intersects itself, or comes to a point, or has a sharp kink. In those places it doesn’t look like $n$-dimensional space.
Flag varieties and Lie groups have a really interesting interaction. I’ll try to do the simplest example justice, and the rest are sort of similar. We take a vector space $V$ and consider the group $SL(V)$ of linear transformations with determinant $1$. Clearly this group acts on $V$. If we pick a basis of $V$ we can represent each transformation as an $n\times n$ matrix. Then there’s a subgroup of “upper triangular” matrices of the form

$\begin{pmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{pmatrix} \qquad adf = 1$

(writing the $3\times 3$ case; in general, all the entries below the diagonal are zero and the diagonal entries multiply to $1$).
Check that the product of two such matrices is again of this form, and that their determinants are always $1$. Of course, if we choose a different basis, the transformations in this subgroup are no longer in this upper triangular form. We’ll have a different subgroup of upper triangular matrices. The subgroups corresponding to different bases are related, though — they’re conjugate!
Corresponding to each basis we also have a flag. It consists of the line spanned by the first basis element, contained in the plane spanned by the first two elements, contained in… and so on. So why do we care about this flag? Because the subgroup of upper triangular matrices with respect to this basis fixes this flag! The special line is sent back into itself, the special plane back into itself, and so on. In fact, the group acts on the flag variety “transitively” (there’s only one orbit) and the stabilizer of the flag corresponding to a basis is the subgroup of upper triangular matrices with respect to that basis! The upshot is that we can describe the flag variety from the manifold $SL(V)$ by picking a basis, getting the subgroup $B$ of upper triangular matrices, and identifying elements of $SL(V)$ that “differ” by an element of $B$. The subgroup $B$ is not normal in $SL(V)$, so we can’t form a quotient group, but there’s still a space of cosets: $SL(V)/B$.
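As a quick numerical sanity check of the stabilizer claim (a minimal sketch; the matrix `T` and the helper name are my own illustrations, not from the text), here’s the observation that an upper triangular matrix sends each subspace of the standard flag back into itself:

```python
import numpy as np

# The standard flag in R^3: span(e1) ⊂ span(e1, e2) ⊂ R^3.
# Column j of an upper triangular matrix lies in span(e1, ..., ej),
# so each coordinate subspace is mapped into itself.
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 0.25]])

def preserves_standard_flag(M):
    """Check that M maps span(e1, ..., ek) into itself for every k."""
    n = M.shape[0]
    for k in range(1, n):
        # Entries below row k in the first k columns must vanish.
        if not np.allclose(M[k:, :k], 0.0):
            return False
    return True

print(preserves_standard_flag(T))    # upper triangular: True
print(preserves_standard_flag(T.T))  # lower triangular: False
```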
So studying the flag variety in $V$ ends up telling us about the relationship between the group $SL(V)$ and its subgroup $B$. In general, if we have a Lie group $G$ and a subgroup $B$ satisfying a certain condition, we can study the relation between these two by studying a certain related variety of flags.
I’m eventually going to get comments from Adams, Vogan, and Zuckerman about Kazhdan-Lusztig-Vogan polynomials, but I want to give a brief overview of some of the things I know will be involved. Most of this stuff I’ll cover more thoroughly at some later point.
One side note: I’m not going to link to the popular press articles. I’m glad they’re there, but they’re awful as far as the math goes. Science writers have this delusion that people are just incapable of understanding mathematics, and completely dumb things down to the point of breaking them. Either that or they don’t bother to put in even the minimum time to get a sensible handle on the concepts like they do for physics.
Okay, so a Lie group is a group of continuous transformations, like rotations of an object in space. The important thing is that the underlying set has the structure of a manifold, which is a space that “locally looks like” regular $n$-dimensional space. The surface of the Earth is curved into a sphere, as we know, but close up it looks flat. That’s what being a manifold is all about. The group operations — the composition and the inversion — have to behave “smoothly” to preserve the manifold structure.
One important thing you can do with a Lie group is find subgroups with nice structures. Some of the nicest are the one-dimensional subgroups passing through the identity element. Since close up the group looks like $n$-dimensional space, let’s get really close and stand on the identity. Now we can pick a direction and start walking that way, never turning. As we go, we trace out a path in the group. Let’s say that after $t$ minutes of elapsed time we’re at $g(t)$. If we’ve done things right, we have the extremely nice property that $g(s)g(t) = g(s+t)$. That is, we can multiply group elements along our path by adding the time parameters. We call this sort of thing a “1-parameter subgroup”, and there’s one of them for each choice of direction and speed we leave the origin with.
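The additivity law is easy to see concretely. A small sketch (my own example, not from the text) using rotations of the plane as the 1-parameter subgroup:

```python
import numpy as np

def g(t):
    """A one-parameter subgroup of rotations: rotate the plane by angle t."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

s, t = 0.7, 1.9
# Multiplying along the path adds the time parameters: g(s) g(t) = g(s + t).
assert np.allclose(g(s) @ g(t), g(s + t))
# g(0) is the identity, and g(-t) undoes g(t).
assert np.allclose(g(0), np.eye(2))
assert np.allclose(g(t) @ g(-t), np.eye(2))
print("one-parameter subgroup laws verified")
```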
So what if we start combining these subgroups? Let’s pick two and call them $g$ and $h$. In general they won’t commute with each other. To see this, get a ball and put it on the table. Mark the point at the top of the ball so you can keep track of it. Now, roll the ball to the right by 90°, then away from you by 90°, then to your left by 90°, then towards you by 90°. The point isn’t back where it started, it’s pointing right at you! Try it again but make each turn 45°. Again, the point isn’t back at the top of the ball. If you do this for all different angles, you’ll trace out a curve of rotations, which is another 1-parameter subgroup! We can measure how much two subgroups fail to commute by getting a third subgroup out of them. And since 1-parameter subgroups correspond to vectors (directions and speeds) at the identity of the group, we can just calculate on those vectors. The set of vectors equipped with this structure is called a Lie algebra. Given two vectors $x$ and $y$ we write the resulting vector as $[x, y]$. This satisfies a few properties.
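The ball-rolling experiment can be done numerically (a sketch with my own variable names; the generators are the standard ones for rotations in three dimensions). Rolling around the four-step loop does not bring the marked point home, and on the vectors at the identity the failure to commute is exactly the matrix commutator:

```python
import numpy as np

def Rx(t):  # rotation about the x-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):  # rotation about the y-axis by angle t
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

t = np.pi / 2
# Roll right, away, left, toward: the marked point does NOT return to the top.
loop = Ry(-t) @ Rx(-t) @ Ry(t) @ Rx(t)
top = np.array([0.0, 0.0, 1.0])
print(loop @ top)   # not (0, 0, 1): the rotations fail to commute

# On the vectors at the identity, the failure is measured by the bracket.
X = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])   # generator of Rx
Y = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])   # generator of Ry
Z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])   # generator of Rz
bracket = X @ Y - Y @ X
assert np.allclose(bracket, Z)   # [X, Y] = Z: a third direction appears
```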
Lie algebras are what we really want to understand.
So now I’m going to skip a bunch and just say that we can put Lie algebras together like we make direct products of groups, only now we call them direct sums. In fact, for many purposes all Lie algebras can be broken into a finite direct sum of a bunch of “simple” Lie algebras that can’t be broken up any more. Think about breaking a number into its prime factors. If we understand all the simple Lie algebras, then (in theory) we understand all the “semisimple” Lie algebras, which are sums of simple ones.
And amazingly, we do know all the simple Lie algebras! I’m not remotely going to go into this now, but at a cursory glance the Wikipedia article on root systems seems to be not completely insane. The upshot is that we’ve got four infinite families of Lie algebras and five weird ones. Of the five weird ones, $E_8$ is the biggest. Its root system (see the Wikipedia article) consists of 240 vectors living in an eight-dimensional space. This is the thing that, projected onto a plane, you’ve probably seen in all of the popular press articles looking like a big lace circle.
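You can even list those 240 vectors yourself. In the standard coordinates (a sketch; the variable names are mine) they come in two batches: vectors with two entries $\pm 1$, and vectors with all entries $\pm\frac{1}{2}$ and an even number of minus signs:

```python
from itertools import combinations, product
from fractions import Fraction

roots = []

# First batch: all vectors with two entries ±1 and the rest 0.
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(Fraction(x) for x in v))

# Second batch: all entries ±1/2, with an even number of minus signs.
half = Fraction(1, 2)
for signs in product((1, -1), repeat=8):
    if signs.count(-1) % 2 == 0:
        roots.append(tuple(s * half for s in signs))

print(len(roots))   # 240 = 112 + 128
# Every root has squared length 2.
assert all(sum(x * x for x in v) == 2 for v in roots)
```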
So we already understood $E_8$? What’s the big deal now? Well, it’s one thing to know it’s there, and another thing entirely to know how to work with such a thing. What we’d really like to know is how $E_8$ can act on other mathematical structures. In particular, we’d like to know how it can act as linear transformations on a vector space. Any vector space $V$ comes equipped with a Lie algebra $\mathfrak{gl}(V)$: take the vector space of all linear transformations from $V$ to itself and make a bracket by $[A, B] = AB - BA$ (verify for yourself that this satisfies the requirements of being a Lie algebra). So what we’re interested in is functions from $E_8$ to $\mathfrak{gl}(V)$ that preserve all the Lie algebra structure.
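The “verify for yourself” part can be spot-checked numerically (a sketch; random matrices stand in for arbitrary elements of $\mathfrak{gl}(V)$):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

def br(X, Y):
    """The bracket on gl(V): [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Antisymmetry: [X, Y] = -[Y, X]
assert np.allclose(br(A, B), -br(B, A))
# Bilinearity in the first slot (the second slot is analogous)
assert np.allclose(br(2 * A + B, C), 2 * br(A, C) + br(B, C))
# The Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jac = br(A, br(B, C)) + br(B, br(C, A)) + br(C, br(A, B))
assert np.allclose(jac, 0)
print("gl(V) bracket axioms verified")
```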
And this, as I understand it, is where the Kazhdan-Lusztig-Vogan polynomials come in. Root systems and Dynkin diagrams are the essential tools for classifying Lie algebras. These polynomials are essential for understanding the structures of representations of Lie algebras. I’m not ready to go into how they do that, and when I am it will probably be at a somewhat higher level than I usually use here, but hopefully lower than the technical accounts available in Adams’ paper and Baez’ explanation.
At long last I come to quandles. I know there are some readers who have been waiting for this, but I wanted to at least get through a bunch of group theory before I introduced them, because they tend to feel a bit weirder so it’s good to warm up before jumping into them.
The story of quandles really begins back in the late ’50s and early ’60s when John Conway and Gavin Wraith considered the wrack and ruin that’s left when you violently rip away the composition from a group and just leave behind its conjugation action. This is a set with an operation $x \triangleright y = xyx^{-1}$, and it’s already a quandle. The part of the structure they considered, though, has lost its ‘w’ and become known as a “rack”.
In 1982, David Joyce independently discovered these structures while a student under Peter Freyd working on knot theory with a very categorical flavor (hmm… sounds familiar). He called them “quandles” because he wanted a word that didn’t mean anything else already, and when the term popped into his head he just liked the sound of it. There are other names for similar structures, but “quandle” is the one that really took hold: there are a lot of unusual algebraic structures that aren’t good for much but their own interest, and “quandle” was the term the knot theorists picked up and ran with.
Actually, after hearing one of my talks Dr. Freyd mentioned that Joyce had come up with a lot of good things while a student, but he (Freyd) never thought much would come of quandles. In the end quandles have become the biggest thing to come out of his (Joyce’s) thesis.
Okay, so let’s get down to work. There are three axioms for the structure of a quandle, and I’ll go through them in the reverse of the usual order for reasons that will become apparent. We start with a set with two operations, written $x \triangleright y$ and $x \triangleleft y$.
The third and most important axiom is that $\triangleright$ distributes over itself: $x \triangleright (y \triangleright z) = (x \triangleright y) \triangleright (x \triangleright z)$. A set with one operation satisfying this property is called a “shelf”, leading to Alissa Crans’ calling the property “shelf-distributivity”. No, I’m not going to let her live down making such an awful pun, mostly because she beat me to it. We can verify that conjugation in a group satisfies this property:

$x \triangleright (y \triangleright z) = x(yzy^{-1})x^{-1} = (xyx^{-1})(xzx^{-1})(xyx^{-1})^{-1} = (x \triangleright y) \triangleright (x \triangleright z)$
The second axiom is that the two operations undo each other: $x \triangleright (x \triangleleft y) = y = x \triangleleft (x \triangleright y)$. Some authors just focus on the one operation $\triangleright$ and insist that for every $a$ and $b$ the equation $a \triangleright x = b$ have a unique solution $x$. Our second operation is just what gives you back that solution: $x = a \triangleleft b$. A shelf satisfying (either form of) this axiom is called a rack. We again verify this for conjugation, using conjugation by the inverse, $x \triangleleft y = x^{-1}yx$, as our second operation:

$x \triangleright (x \triangleleft y) = x(x^{-1}yx)x^{-1} = y \qquad x \triangleleft (x \triangleright y) = x^{-1}(xyx^{-1})x = y$
Finally, the first axiom is that $x \triangleright x = x$. Indeed, for a group we have $x \triangleright x = xxx^{-1} = x$. This axiom makes a rack into a quandle.
One more specialization comes in handy: we call a quandle “involutory” if $x \triangleright (x \triangleright y) = y$. Equivalently (by the second axiom), $x \triangleright y = x \triangleleft y$. That is, $x$ acts on $y$ by some sort of reflection, and acting twice restores the original.
As a bit of practice, check that in a rack the second operation is also self-distributive. That is, $x \triangleleft (y \triangleleft z) = (x \triangleleft y) \triangleleft (x \triangleleft z)$. Also verify that if we start with an abelian group $A$ (writing group composition as addition), the operation $x \triangleright y = 2x - y$ makes the set of elements of $A$ into an involutory quandle.
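Both exercises, and all three axioms for the conjugation quandle, can be brute-force checked on small examples (a sketch with my own helper names, using the symmetric group $S_3$ and the abelian group $\mathbb{Z}/7$):

```python
from itertools import permutations, product

# Conjugation quandle on the symmetric group S3 (permutations as tuples).
def compose(p, q):          # (p ∘ q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def tri(x, y):              # x ▷ y = x y x⁻¹  (conjugation)
    return compose(compose(x, y), inverse(x))

def tl(x, y):               # x ◁ y = x⁻¹ y x  (conjugation by the inverse)
    return compose(compose(inverse(x), y), x)

S3 = list(permutations(range(3)))

for x, y, z in product(S3, repeat=3):
    # Axiom 3: ▷ distributes over itself...
    assert tri(x, tri(y, z)) == tri(tri(x, y), tri(x, z))
    # ...and so does the second operation (the exercise above).
    assert tl(x, tl(y, z)) == tl(tl(x, y), tl(x, z))
for x, y in product(S3, repeat=2):
    # Axiom 2: the two operations undo each other.
    assert tri(x, tl(x, y)) == y and tl(x, tri(x, y)) == y
for x in S3:
    assert tri(x, x) == x   # Axiom 1

# The involutory quandle x ▷ y = 2x − y on the abelian group Z/7.
n = 7
t = lambda x, y: (2 * x - y) % n
assert all(t(x, t(x, y)) == y for x in range(n) for y in range(n))
print("quandle axioms verified")
```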
A team of 18 mathematicians working on the Atlas of Lie Groups and Representations has completed computing the Kazhdan-Lusztig-Vogan polynomial for the exceptional Lie group $E_8$. There’s a good explanation by John Baez at The n-Category Café (warning: technical).
I feel a sort of connection to this project, mostly socially. For one thing, it may seem odd but my advisor does Lie algebra representations — not knot theory like I do — which is very closely related to the theory of Lie groups. His first graduate student, Jeff Adams, led the charge for $E_8$. Dr. Adams was one of the best professors I had as an undergrad, and I probably owe my category-friendliness to his style in teaching the year-long graduate abstract algebra course I took, as well as his course on the classical groups. That approach of his probably has something to do with his being a student of Zuckerman’s. And around we go.
I’m back from Ohio at the College Perk again. The place looks a lot different in daylight. Anyhow, since the last few days have been a little short on the exposition, I thought I’d cover integers.
Okay, we’ve covered that the natural numbers are a commutative ordered monoid. We can add numbers, but we’re used to subtracting numbers too. The problem is that we can’t subtract with just the natural numbers — they aren’t a group. What could we do with something like $3 - 5$?
Well, let’s just throw it in. In fact, let’s just throw in a new element for every possible subtraction of natural numbers. And since we can get back any natural number by subtracting zero from it, let’s just throw out all the original numbers and just work with these differences. We’re looking at the set of all pairs $(a, b)$ of natural numbers, thinking of the pair $(a, b)$ as the difference “$a - b$”.
Oops, now we’ve overdone it. Clearly some of these differences should be the same. In particular, $(a, b)$ should be the same as $(a+1, b+1)$. If we repeat this relation we can see that $(a, b)$ should be the same as $(a+c, b+c)$, where we’re using the definition of addition of natural numbers from last time. We can clean this up and write all of these in one fell swoop by defining the equivalence relation: $(a, b) \sim (c, d)$ if and only if $a + d = b + c$. After checking that this is indeed an equivalence relation, we can pass to the set of equivalence classes and call these the integers $\mathbb{Z}$.
Now we have to add structure to this set. We define an order on the integers by $(a, b) \leq (c, d)$ if and only if $a + d \leq b + c$. The caveat here is that we have to check that if we replace a pair with an equivalent pair we get the same answer. Let’s say $(a, b) \sim (a', b')$, $(c, d) \sim (c', d')$, and $(a, b) \leq (c, d)$. Then

$a' + d' + b + c = a + d + b' + c' \leq b + c + b' + c'$

so $a' + d' \leq b' + c'$, and thus $(a', b') \leq (c', d')$. The equality uses the equivalences we assumed, and the inequality uses the one we assumed. Throughout we’re using the associativity and commutativity of addition. That the assumed inequality gives the displayed one follows because addition of natural numbers preserves order, and cancelling the $b + c$ at the last step is allowed because addition of natural numbers is cancellative.
We get an addition as well. We define $(a, b) + (c, d) = (a + c, b + d)$. It’s important to note here that the addition on the left is how we’re defining the sum of two pairs, and those on the right are additions of natural numbers we already know how to do. Now if $(a, b) \sim (a', b')$ and $(c, d) \sim (c', d')$ we see

$(a + c) + (b' + d') = (a + b') + (c + d') = (b + a') + (d + c') = (b + d) + (a' + c')$
so $(a + c, b + d) \sim (a' + c', b' + d')$. Addition of integers doesn’t depend on which representative pairs we use. It’s easy now to check that this addition is associative and commutative, that $(0, 0)$ is an additive identity, that $(a, b) + (b, a) \sim (0, 0)$ (giving additive inverses), and that addition preserves the order structure. All this together makes $\mathbb{Z}$ into an ordered abelian group.
Now we can relate the integers back to the natural numbers. Since the integers are a group, they’re also a monoid. We can give a monoid homomorphism embedding $\mathbb{N} \to \mathbb{Z}$. Send the natural number $n$ to the integer represented by $(n, 0)$. We call the nonzero integers of this form “positive”, and their inverses, of the form $(0, n)$, “negative”. We can verify that $(m, 0) + (n, 0) = (m + n, 0)$ and that $(m, 0) \leq (n, 0)$ exactly when $m \leq n$. Check that every integer has a unique representative pair with $0$ on one side or the other, so each is either positive, negative, or zero. From now on we’ll just write $n$ for the integer represented by $(n, 0)$ and $-n$ for the one represented by $(0, n)$, as we’re used to.
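The whole construction above is concrete enough to run. A minimal sketch (function names are mine), with integers as pairs of naturals and everything defined exactly as in the text:

```python
# Integers as equivalence classes of pairs (a, b) of naturals,
# with (a, b) ~ (c, d) exactly when a + d = b + c.

def equiv(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def leq(p, q):
    (a, b), (c, d) = p, q
    return a + d <= b + c

def embed(n):               # the monoid homomorphism from the naturals
    return (n, 0)

def negate(p):              # swapping the pair gives the additive inverse
    a, b = p
    return (b, a)

# (2, 5) and (0, 3) both represent -3.
assert equiv((2, 5), (0, 3))
# 2 + (-3) = -1:  (2, 0) + (0, 3) ~ (0, 1)
assert equiv(add(embed(2), negate(embed(3))), (0, 1))
# The order agrees with the usual one: -3 <= 2.
assert leq((0, 3), embed(2))
# Every element plus its negation is equivalent to zero.
assert equiv(add((4, 1), negate((4, 1))), embed(0))
print("integer construction verified")
```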
I’ve posted the slides for my talk. There are a few typos that I noticed as I was speaking, but nothing that makes it incomprehensible. I actually started the lecture on page 10 to save time, and because the run-up is pretty standard material.
Still no wireless, so I’ll again jot a little something about the noteworthy talks.
Louis Kauffman gave a talk about an invariant of “virtual” knots and links, which are described in his paper that I linked to the other day. The invariant is an extension of the Kauffman bracket, the same bracket I extend to tangles. An obvious question is how to do both extensions, getting functors on the category of virtual tangles.
Heather Dye, Kauffman’s former student, then spoke on virtual homotopy. This may well be related to Allison Henrich’s work on Legendrian virtual knots. It’s all tangled (har) up together.
I gave my talk after that. I’ll make a separate post with the link to my slides.
After lunch, Alexander Shumakovitch gave a very clear (though not yet complete) combinatorial categorification of HOMFLY evaluations. There’s a parameter in his theory, and setting it to 2 gives back the combinatorial version of Khovanov homology. Setting it to higher values should correspond to what Josh Sussan — currently finishing his Ph.D. here at Yale — has done in the representation-theoretic picture.
The last talk that really grabbed me was Michael “Cap” Khoury’s explanation of a new definition for the Alexander-Conway polynomial. The really interesting thing here is that it really looks like he’s realizing it as some sort of representable functor on some sort of category. Almost, but not quite. We talked for a bit about what’s missing, and I don’t think it’s impossible to push it a bit and get that last lousy point.
Beannachtaí na Féile Pádraig (St. Patrick’s Day blessings).
In honor of the day, I’d like to post a passage of Finnegans Wake. There’s a beautiful section from pages 293 to 299 “explaining” Euclid I.1. It’s as good a place as any to start in on “the book of Doublends Jined”, so if this sort of thing intrigues you I hope you’ll get a copy and go from here. If nothing else I hope that my own exegeses are somewhat easier to follow.
This area of the text is particularly… texty. I’ll do my best to match the original as exactly as possible. Individual pages will be separated by hard rules. The passage itself follows the jump.
Never believe anyone who says that drinking never helps anything, for tonight over many a pint o’ Guinness I hit upon the solution to a problem that’s been nagging at me for some time. The answer is so incredibly simple that I’m feeling stupid for not thinking of it before. So here it is:
Cospans in the comma category of quandles over a given quandle Q from the free quandle on $m$ letters to the free quandle on $n$ letters categorify the extension of link colorings by Q to tangles.
This gives me a new marker to aim at. I can explain this, but it will require more preliminaries before it’s really accessible to the GILA (Generally Interested Lay Audience) as yet. Those who know quandles — a topic I’ll be covering early next week — and category theory and some knot theory should be able to piece together the meaning now. For the rest of you… stay tuned.
I don’t have wireless access here in the lecture hall, so I can’t “live blog”. I’m writing notes on the lectures I find noteworthy.
David Radford spoke about something called the Hennings Invariant of a finite-dimensional Hopf algebra. I’ve always liked his style, since he manages to boil down a lot of complicated algebraic structure to what’s essential for the application at hand. He also describes it incredibly clearly. His lectures are very accessible to a grad student who has a basic background in algebra, which is more than I can say about many algebraists. I think it should be clear why I think this is a Good Thing.
I’ll get to Hopf algebras in more depth eventually, but for now let me say this: they’re very much like groups, but using somewhat heavier machinery. In the long run, groups and Hopf algebras both work off of a very similar structure.
Pat Gilmer gave a talk on “congruence” and “similarity” of 3-manifolds. A 3-manifold is a space that looks close-up like three-dimensional space, like the surface of the Earth looks flat since we’re so close to it. These two concepts he’s pushing are equivalence relations. Two 3-manifolds may be different, but might still be “similar” or “congruent” if they’re related by certain modifications, called “surgeries”. Congruence was evidently studied about ten years ago and Gilmer reinvented it himself along with similarity. He’s particularly interested in how certain well-known invariants of 3-manifolds change as you apply these surgeries.
One interesting thing this brings to mind is the fact that we can get any 3-manifold from the 3-sphere (the surface of the Earth is a *2*-sphere) by cutting out a bunch of bagel-shaped regions that might be knotted, twisting up the boundaries of the parts we cut out, and putting them back in. This means that there’s a connection between knot theory and 3-manifold theory. In fact, a very large portion of mathematicians calling themselves knot theorists are really more interested in 3-manifolds and just use knot theory as a tool.
After lunch, Carmen Caprau gave her talk about an $\mathfrak{sl}(2)$ tangle homology. It manages to fix a big problem I’ve had with Khovanov’s homology theory — it tends to screw up the signs. Knot homology theories are really big business these days. Most people I know who are on the job market and work directly with these sorts of things have jobs nailed down, and Carmen is no exception. Good luck to her.
Scott Carter talked about cohomology in symmetric monoidal categories with products and coproducts. This extends the stuff he has done with Alissa Crans, Mohammed Elhamdadi, Pedro Lopes, and Masahico Saito. The last version of this talk I saw, at last Spring’s Knots In Washington, was some of the nicest theory I’ve seen. Now they’re taking this abstract setup describing Hochschild homology theories (which try to capture the underlying essence of associativity), and “dualizing” all the diagrams to get some sort of topological invariant. I always love mixing up notation and subject matter, and this is very much in that Kauffman-esque spirit. Hopefully there will be an updated version of their paper on the arXiv soon.
Maciej Niebrzydowski spoke on homology of dihedral quandles, which he worked on with his advisor, Jozef Przytycki. I’ll leave this alone since I’m almost ready to talk about quandles in full detail.