This is a little to the side of my usual topics, but I wanted to mention the passing of Kurt Vonnegut. So it goes.
The direct product $G\times H$ of abelian groups $G$ and $H$ works as we expect it to, because the elements coming from $G$ and from $H$ already commute inside $G\times H$. The free product of $G$ and $H$ as groups gives $G*H$ as before, but now this is not an abelian group. Let’s consider the property that defined free products a little more closely. Here’s the diagram.
We want to read it slightly differently now. The new condition is that for any abelian group $A$ and homomorphisms $f:G\rightarrow A$ and $g:H\rightarrow A$ there is a unique homomorphism of abelian groups from the free product to $A$ making the diagram commute. We know that there’s a unique homomorphism from $G*H$ already, but we need to “abelianize” this group. How do we do that?
We just move to the quotient of $G*H$ by its commutator subgroup, of course! Recall that any homomorphism to an abelian group factors uniquely through this quotient: $G*H\rightarrow(G*H)/[G*H,G*H]\rightarrow A$. So now $(G*H)/[G*H,G*H]$ is an abelian group with a unique homomorphism to $A$ making the diagram commute; it works as a free product in the context of abelian groups. This sort of thing feels odd at first, but you get used to it: when you change the context of a property (here from all groups to abelian groups) the implications change too.
Okay, so $(G*H)/[G*H,G*H]$ is like the free product $G*H$, but we’ve thrown in relations making everything commute. We started with abelian groups $G$ and $H$, so all we’ve really added is that elements coming from the two different groups commute with each other. And that gives us back (wait for it..) the direct product! When we restrict our attention to abelian groups, direct products and free products are the same thing. Since this is such a nice thing to happen, and because we change all our notation when we look at abelian groups anyhow, we call this group the “direct sum” of the abelian groups $G$ and $H$, and write it $G\oplus H$.
Now I didn’t really talk about this much before in the context of groups, but I’m going to need it shortly. We can take the direct sum of more than two groups at a time. I’ll leave it to you to verify that the groups $(G_1\oplus G_2)\oplus G_3$ and $G_1\oplus(G_2\oplus G_3)$ are isomorphic (use the universal property), so we can more or less unambiguously talk about the direct sum of any finite collection of abelian groups. Infinite collections (which we’ll need soon) are a bit weirder.
Let’s say we have an infinite set $S$ and for each of its elements $s$ an abelian group $G_s$. We can define the infinite direct sum $\bigoplus_{s\in S}G_s$ as the collection of all “$S$-tuples” $(g_s)$ where $g_s\in G_s$ for all $s\in S$, and where all but a finite number of the $g_s$ are the zero element in their respective groups. This satisfies something like the free product’s universal property — each $G_s$ has a homomorphism $\iota_s:G_s\rightarrow\bigoplus_{s\in S}G_s$, and so on — but with an infinite number of groups on the top of the diagram: one for every element of $S$.
The direct product $\prod_{s\in S}G_s$, on the other hand, satisfies something like the product condition, but with an infinite number of groups down on the bottom of the diagram. Each of the $G_s$ comes with a homomorphism $\pi_s:\prod_{s\in S}G_s\rightarrow G_s$, and so on. We can realize this property with the collection of all $S$-tuples, whether there are a finite number of nonzero entries or not.
What’s really interesting here is that for finite collections of groups the free product comes with an epimorphism onto the direct product. Now for infinite collections of abelian groups, the free product (direct sum) comes with a monomorphism into the direct product. The free product was much bigger before, but now it’s much smaller. When all these weird little effects begin to confuse me, I find it’s best just to plug my ears and go back to the universal properties. They will never steer you wrong.
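To make the finite-support condition concrete, here’s a toy Python sketch of elements of $\bigoplus_{s\in S}\mathbb{Z}$ as dictionaries with finite support. The names `add` and `iota` are my own for this illustration, not standard notation.

```python
# Model an element of the direct sum of copies of Z, indexed by any set S,
# as a dict from indices to nonzero integers; omitted keys are zero, so the
# finite dict itself enforces the finite-support condition.

def add(x, y):
    """Componentwise addition of two finite-support tuples."""
    result = dict(x)
    for s, g in y.items():
        total = result.get(s, 0) + g
        if total == 0:
            result.pop(s, None)  # keep the representation canonical
        else:
            result[s] = total
    return result

def iota(s, g):
    """The canonical injection of the s-th summand into the direct sum."""
    return {s: g} if g != 0 else {}

# The tuple (3, 0, 0, 0, 0, -1, 0, 0, ...) indexed by the natural numbers:
x = add(iota(0, 3), iota(5, -1))

# An element of the direct *product* may have infinitely many nonzero
# entries, which no finite dict can hold -- exactly why the direct sum sits
# inside the direct product as a proper subgroup when the index set is
# infinite.
```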
Today I want to set out an incredibly important example of a ring. This example (and its variations) comes up over and over and over again throughout mathematics.
Let’s start with an abelian group $G$. Now consider all the linear functions from $G$ back to itself. Remember that “linear function” is just another term for “abelian group homomorphism” — it’s a function that preserves the addition — and that we call such homomorphisms from a group to itself “endomorphisms”.
As with composition of functions in general, this set has the structure of a monoid. We can compose linear functions by, well, composing them. First do one, then do the other. We define the operation by $[f\circ g](x)=f(g(x))$ and verify that the composition is again a linear function:

$[f\circ g](x+y)=f(g(x+y))=f(g(x)+g(y))=f(g(x))+f(g(y))=[f\circ g](x)+[f\circ g](y)$
This composition is associative, and the function that sends every element of $G$ to itself is an identity, so we do have a monoid.
Less obvious, though, is the fact that we can add such functions. Just add the values! Define $[f+g](x)=f(x)+g(x)$. We check that this is another endomorphism:

$[f+g](x+y)=f(x+y)+g(x+y)=f(x)+f(y)+g(x)+g(y)=f(x)+g(x)+f(y)+g(y)=[f+g](x)+[f+g](y)$

where the middle step uses the fact that $G$ is abelian.
Now this addition is associative. Further, the function $0$ sending every element of $G$ to the zero element of $G$ is an additive identity, and the function $[-f](x)=-f(x)$ is an additive inverse of $f$. The collection of endomorphisms with this addition becomes an abelian group.
So we have two structures: an abelian group and a monoid. Do they play well together? Indeed!
$[f\circ(g+h)](x)=f(g(x)+h(x))=f(g(x))+f(h(x))=[(f\circ g)+(f\circ h)](x)$

$[(f+g)\circ h](x)=f(h(x))+g(h(x))=[(f\circ h)+(g\circ h)](x)$

showing that composition distributes over addition on both sides.
So the endomorphisms of an abelian group $G$ form a ring with unit. We call this ring $\mathrm{End}(G)$, and like I said it will come up everywhere, so it’s worth internalizing.
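As a concrete (and tiny) instance, the endomorphisms of $\mathbb{Z}/12$ are exactly the maps “multiply by $k$”, and we can spot-check the ring axioms in Python. This is just a sketch; the `Endo` class is a throwaway name of my own.

```python
# Endomorphisms of the cyclic group Z/N are exactly "multiply by k" maps,
# so End(Z/N) is isomorphic to the ring Z/N itself.  A quick check of the
# ring structure on these maps.

N = 12  # working with the abelian group G = Z/12

class Endo:
    """The endomorphism x -> k*x (mod N) of the abelian group Z/N."""
    def __init__(self, k):
        self.k = k % N
    def __call__(self, x):
        return (self.k * x) % N
    def __add__(self, other):      # pointwise addition of functions
        return Endo(self.k + other.k)
    def __matmul__(self, other):   # composition: first other, then self
        return Endo(self.k * other.k)
    def __eq__(self, other):
        return self.k == other.k

f, g, h = Endo(5), Endo(7), Endo(3)
# composition distributes over addition on both sides:
assert f @ (g + h) == f @ g + f @ h
assert (f + g) @ h == f @ h + g @ h
# and as functions, (f+g)(x) really is f(x) + g(x) in Z/N:
assert all((f + g)(x) == (f(x) + g(x)) % N for x in range(N))
```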
There’s a new edition of John Baez’ This Week’s Finds in Mathematical Physics. He talks about places to read up on Felix Klein’s Erlangen Programme, which is a great application of group theory.
More interesting to me (and hopefully to readers here) is his continuing “Tale of Groupoidification”. He really fleshes out this extension of the notion of group actions, and ties it into the concept of spans. I hope it’s not gushing too much to say that spans are one of the most amazingly useful inventions ever, and I’ll be talking about them a lot more once I’ve laid down enough foundations to handle them properly. It’s not exaggerating to say that I owe the lion’s share of progress on my own research program to spans and their dual notion, cospans.
I’ve posted my notes for the first of Zuckerman’s lectures. Hopefully my handwriting isn’t too awful for you. I’ve never been very good with that pen-and-paper stuff.
I’m trying to explain this pretty comprehensibly, but I do have to use some terms most mathematicians know without defining them. I’ve got plans to get to them eventually in the main stream of my writings, but for now the exegesis sits at a middle level. Anyhow, there’s a lot to unpack here, so I’ll put it behind the jump.
Well, now that I’ve built up a bit of a readership, I think I’ll try fishing a bit. I’ve got a problem I’d like to work on, but part of the theory as I understand it is “magic” to me, so I can’t really figure how to do what I want with it.
So, if you are (or someone you know is) knowledgeable about representation and character varieties, or about flat connections on $3$-manifolds, and are interested in collaborating on a problem in knot theory, please drop me a line.
Just like we had for groups, there is an isomorphism theorem for rings. In fact, the demonstration goes much the same as it did there.
Any subring $S$ of a ring $R$ comes equipped with an inclusion homomorphism $\iota:S\rightarrow R$. Any quotient of a ring $R$ by an ideal $I$ comes with a projection homomorphism $\pi:R\rightarrow R/I$. For both of these, just construct the inclusion or projection homomorphism for the underlying abelian groups and check that it preserves the multiplication. Just like before, the inclusion is a monomorphism, the projection is an epimorphism, and the kernel of the projection — the set of elements of $R$ that get sent to zero — is the ideal $I$.
Now given any ring homomorphism $f:R\rightarrow S$, the kernel $\mathrm{Ker}(f)$ is an ideal. Indeed, if $f(a)=0$ and $f(b)=0$ then

$f(a+b)=f(a)+f(b)=0$

and for any $r$ in $R$

$f(ra)=f(r)f(a)=0$
$f(ar)=f(a)f(r)=0$
So the set of elements that get sent to zero is closed under addition and under left and right multiplication by any element of the ring.
Also, the image $\mathrm{Im}(f)$ of $f$ is a subring of $S$. Given elements $f(a)$ and $f(b)$ in the image of $f$ we have $f(a)+f(b)=f(a+b)$ and $f(a)f(b)=f(ab)$, so sums and products of elements of the image are again in the image.
Now we want to take $f$ and make an isomorphism $\bar{f}:R/\mathrm{Ker}(f)\rightarrow\mathrm{Im}(f)$. For any coset $a+\mathrm{Ker}(f)$ in $R/\mathrm{Ker}(f)$, pick a representative element $a$ and define $\bar{f}(a+\mathrm{Ker}(f))=f(a)$. Clearly this lands in $\mathrm{Im}(f)$, but does it really define a homomorphism from $R/\mathrm{Ker}(f)$? Indeed it does, because any other representative of the same coset looks like $a+k$ for some element $k$ of the kernel. Then

$f(a+k)=f(a)+f(k)=f(a)+0=f(a)$
so we get the same answer — the value of $\bar{f}$ doesn’t depend on the choice of representative we make.
Is $\bar{f}$ a monomorphism? Yes, because if $\bar{f}(a+\mathrm{Ker}(f))=0$ then $f(a)=0$, so $a$ is a representative of the coset $\mathrm{Ker}(f)$, which takes the place of $0$ in $R/\mathrm{Ker}(f)$. Is it an epimorphism? Yes, because every element of $\mathrm{Im}(f)$ comes from some element $a$ of $R$, so we can hit it by taking $\bar{f}(a+\mathrm{Ker}(f))=f(a)$.
Putting it all together, we can factor any homomorphism $f:R\rightarrow S$ into the composition of an epimorphism $\pi:R\rightarrow R/\mathrm{Ker}(f)$, an isomorphism $\bar{f}:R/\mathrm{Ker}(f)\rightarrow\mathrm{Im}(f)$, and a monomorphism $\iota:\mathrm{Im}(f)\rightarrow S$. All homomorphisms of rings work this way: factor out some kernel, then send the quotient isomorphically to some subring of the target ring. Again, all the interesting stuff really happens in the first step. Studying homomorphisms from a given ring really comes down to studying its possible ideals. In particular, if a ring has no ideals but the whole ring itself and the ideal consisting only of $0$ we call it “simple”. Every homomorphic image of a simple ring is either zero or isomorphic to the whole ring itself.
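Here’s a small Python sanity check of the factorization for one concrete homomorphism I chose for illustration: $f:\mathbb{Z}\rightarrow\mathbb{Z}/12$ with $f(a)=4a\bmod 12$. It preserves sums and products (though not the unit of $\mathbb{Z}/12$: its image $\{0,4,8\}$ has unit $4$), its kernel is $3\mathbb{Z}$, and it factors as epimorphism, then isomorphism, then monomorphism.

```python
# Sketch: factoring the homomorphism f : Z -> Z/12, f(a) = 4a mod 12,
# through its kernel.  Ker(f) = 3Z, and Im(f) = {0, 4, 8}.

def f(a):
    return (4 * a) % 12

def project(a):
    """Epimorphism Z -> Z/3Z; represent the coset a + 3Z by a mod 3."""
    return a % 3

def iso(coset):
    """Isomorphism Z/3Z -> Im(f): send the coset of a to f(a)."""
    return (4 * coset) % 12

def include(x):
    """Monomorphism Im(f) -> Z/12: just the inclusion of the subring."""
    return x

# f really is the composite of the three maps:
for a in range(-20, 20):
    assert f(a) == include(iso(project(a)))
```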
As I said above, this really looks a lot like what we did for groups, and there’s actually a very good reason why that I want to put off a while longer. Essentially, the fact that we have an isomorphism theorem like this doesn’t depend on the “groupiness” or the “ringiness” of the objects we’re studying, but on deeper structure shared by both groups and rings — or rather shared by group and ring homomorphisms.
The new Carnival of Mathematics post is up over at Science and Reason, continuing my efforts to become the blogosphere’s go-to guy for publicizing the Atlas project.
This afternoon in the graduate student seminar, Joshua Sussan broke down parts of the famous paper of Bernstein, Frenkel, and Khovanov that launched knot homology theories: A categorification of the Temperley-Lieb algebra and Schur quotients of $U(\mathfrak{sl}_2)$ via projective and Zuckerman functors. Yes, Zuckerman is my advisor, and yes, this also ties back into the stuff I’ve been talking about.
One particular bit of self-promotion on this point: often Khovanov homology — either this original representation-theoretic approach, a later sheaf-theoretic approach, or Khovanov and Rozansky’s combinatorial version — is called a categorification of the Jones polynomial, or of the bracket polynomial. I’ve mentioned the bracket before, specifically in relation to my talk on bracket extensions. In fact, Khovanov homology on tangles categorifies one of my bracket-extending functors, built from the standard $2$-dimensional representation of the $q$-deformed enveloping algebra $U_q(\mathfrak{sl}_2)$ and the canonical pairing from its tensor square to the trivial representation.
Don’t get me wrong. Khovanov homology is a truly brilliant idea, but I hold out hope that there’s some other categorification that makes it clear what the topological content of the Kauffman bracket polynomial is.
Often enough we’re going to see the following situation. There are three abelian groups — $A$, $B$, and $C$ — and a function $f:A\times B\rightarrow C$ that is linear in each variable. What does this mean?
Well, “linear” just means “preserves addition” as in a homomorphism, but I don’t mean that $f$ is a homomorphism from $A\times B$ to $C$. That would mean the following equation held:

$f(a_1+a_2,b_1+b_2)=f(a_1,b_1)+f(a_2,b_2)$
Instead, I want to say that if we fix one variable, the remaining function of the other variable is a homomorphism. That is,

$f(a_1+a_2,b)=f(a_1,b)+f(a_2,b)$
$f(a,b_1+b_2)=f(a,b_1)+f(a,b_2)$

We call such a function “bilinear”.
The tensor product is a construction we use to replace bilinear functions by linear functions. It is defined by yet another universal property: a tensor product of $A$ and $B$ is an abelian group $T$ with a bilinear function $t:A\times B\rightarrow T$ so that for every other bilinear function $f:A\times B\rightarrow C$ there is a unique linear function $\bar{f}:T\rightarrow C$ so that $f=\bar{f}\circ t$. Like all objects defined by universal properties, the tensor product is automatically unique up to isomorphism if it exists.
So here’s how to construct one. I claim that $A\otimes B$ has a presentation by generators and relations as an abelian group. For generators take all elements of $A\times B$, and for relations take all the elements $(a_1+a_2,b)-(a_1,b)-(a_2,b)$ and $(a,b_1+b_2)-(a,b_1)-(a,b_2)$ for all $a$, $a_1$, and $a_2$ in $A$ and $b$, $b_1$, and $b_2$ in $B$.
By the properties of such presentations, any function of the generators — of $A\times B$ — defines a linear function on $A\otimes B$ if and only if it satisfies the relations. That is, if we apply a function $f$ to each relation and get $0$ every time, then $f$ defines a unique linear function on the presented group. So what does that look like here?

$f(a_1+a_2,b)-f(a_1,b)-f(a_2,b)=0$
$f(a,b_1+b_2)-f(a,b_1)-f(a,b_2)=0$

These are exactly the bilinearity conditions from before.
So a bilinear function $f:A\times B\rightarrow C$ gives rise to a linear function $\bar{f}:A\otimes B\rightarrow C$, just as we want.
Usually we’ll write the tensor product of $A$ and $B$ as $A\otimes B$, and the required bilinear function as $(a,b)\mapsto a\otimes b$.
Now, why am I throwing this out now? Because we’ve already seen one example. It’s a bit trivial now, but catching it while it’s small will help see the relation to other things later. The distributive law for a ring says that multiplication is a bilinear function on the underlying abelian group! That is, we can view a ring as an abelian group $R$ equipped with a linear map $R\otimes R\rightarrow R$ for multiplication.
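As a quick illustration (my own spot-check over a small range, not a proof), here’s the bilinearity of integer multiplication verified in Python, along with the failure of multiplication to be a homomorphism from the product group $\mathbb{Z}\times\mathbb{Z}$:

```python
# The distributive laws say that multiplication m(a, b) = a*b in a ring is
# linear in each variable separately.  Check both bilinearity conditions
# exhaustively over a small range of integers.

def m(a, b):
    return a * b

rng = range(-5, 6)
for a1 in rng:
    for a2 in rng:
        for b in rng:
            assert m(a1 + a2, b) == m(a1, b) + m(a2, b)  # left slot linear
            assert m(b, a1 + a2) == m(b, a1) + m(b, a2)  # right slot linear

# But m is NOT a homomorphism from the product group Z x Z to Z:
assert m(1 + 1, 1 + 1) != m(1, 1) + m(1, 1)  # 4 != 2
```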
Even though Andrew Wiles was speaking at the Branford College master’s tea today, I didn’t go. Zuckerman was giving the first of two or three lectures on this whole KLV thing. And I actually took notes!
Unfortunately the scanner in the computer lab was being evil, so I can’t post them quite yet. I’ll definitely have them by Monday, though, and after that I’ll try to explain what I wrote. They should already be more than readable to mathematicians.
What I do have is pictures of a conceptual diagram we constructed on the blackboard in his office the other day. I managed to get it mostly into three parts: 1 2 3 (~700KB each). I apologize for the quality of the middle one — I couldn’t use a flash without washing out the board entirely. It should still be readable. The third cuts off some lists of names associated with the topics they’re next to. The second list is “Jantzen, Vogan, Speh”, while in the first list “H.C.” is Harish-Chandra and “Z.” is Zuckerman.
As for what all this means, that’s what these lectures are to explain more thoroughly. Here we see the entire subject circling “characters”, which are certain functions on the groups we’re interested in. Properly defining them was the subject of today’s lecture. In the lower left is a list of examples of the sorts of groups we’re interested in — $E_8$ is the now-(in)famous one. To the right of the diagram is the statement that two special classes of representations, the “standard” and “irreducible” ones, are related in a certain way. On the right is the recipe for computing the irreducible representations into which the Atlas project’s computation fits.