Sorry for missing yesterday. I had this written up but completely forgot to post it while getting prepared for next week’s trip back to a city. Speaking of which, I’ll be heading off for the week, and I’ll just give things here a rest until the beginning of December. Except for the Samples, and maybe an I Made It or so…
Okay, let’s say we have a group $G$. This gives us a cocommutative Hopf algebra $\mathbb{F}[G]$. Thus the category of representations of $\mathbb{F}[G]$ is monoidal — symmetric, even — and has duals. Let’s consider these structures a bit more closely.
We start with two representations $V$ and $W$. We use the comultiplication on $\mathbb{F}[G]$ to give us an action on the tensor product $V \otimes W$. Specifically, we find

$g \cdot (v \otimes w) = (g \cdot v) \otimes (g \cdot w)$

That is, we make two copies of the group element $g$, use one copy to act on the first tensorand, and use the other to act on the second tensorand. If $V$ and $W$ came from actions of $G$ on sets, then this is just what you’d expect from linearizing the product of the $G$-actions.
Symmetry is straightforward. We just use the twist on the underlying vector spaces, and it’s automatically an intertwiner of the actions, so it defines a morphism between the representations.
Duals, though, take a bit of work. Remember that the antipode of $\mathbb{F}[G]$ sends group elements to their inverses. So if we start with a representation $\rho: \mathbb{F}[G] \to \mathrm{End}(V)$ we calculate its dual representation on $V^*$:

$\rho^*(g) = \rho(g^{-1})^*$

Composing linear maps from the right reverses the order of multiplication from that in the group, but taking the inverse of $g$ reverses it again, and so we have a proper action again: $\rho^*(gh) = \rho(h^{-1}g^{-1})^* = \left(\rho(h^{-1})\rho(g^{-1})\right)^* = \rho(g^{-1})^*\rho(h^{-1})^* = \rho^*(g)\rho^*(h)$.
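The calculations above can be checked numerically. Here’s a minimal sketch using the permutation representation of $S_3$ on $\mathbb{R}^3$; the helper names (`perm_matrix`, `tensor_action`, `dual_action`) are mine, purely for illustration.

```python
import itertools
import numpy as np

def perm_matrix(p):
    """Matrix of the permutation p, sending basis vectors e_i -> e_{p(i)}."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

# The six elements of S_3, acting on R^3 by permutation matrices.
S3 = [perm_matrix(p) for p in itertools.permutations(range(3))]

def tensor_action(g):
    # g acts on V (x) W by rho(g) (x) rho(g): the Kronecker product.
    return np.kron(g, g)

def dual_action(g):
    # g acts on V* by the dual (transpose) of rho(g^{-1}): the antipode
    # S(g) = g^{-1} followed by taking the dual map.
    return np.linalg.inv(g).T

# Both assignments are again group homomorphisms: check on all pairs.
for g in S3:
    for h in S3:
        assert np.allclose(tensor_action(g @ h), tensor_action(g) @ tensor_action(h))
        assert np.allclose(dual_action(g @ h), dual_action(g) @ dual_action(h))
print("tensor and dual actions are homomorphisms")
```

The dual check works because transposing reverses the order of matrix products and inverting reverses it back, exactly as in the prose above.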
One thing I don’t think I’ve mentioned is that the category of vector spaces over a field $\mathbb{F}$ is symmetric. Indeed, given vector spaces $V$ and $W$ we can define the “twist” map $\tau_{V,W}: V \otimes W \to W \otimes V$ by setting $\tau_{V,W}(v \otimes w) = w \otimes v$ and extending linearly.
Now we know that an algebra is commutative if we can swap the inputs to the multiplication and get the same answer. That is, if $\mu(a \otimes b) = \mu(b \otimes a)$. Or, more succinctly: $\mu \circ \tau = \mu$. Dually, a coalgebra is cocommutative if $\tau \circ \Delta = \Delta$.
The group algebra $\mathbb{F}[G]$ of a group $G$ is a cocommutative Hopf algebra. Indeed, since $\Delta(g) = g \otimes g$, we can twist this either way and get the same answer.
So what does cocommutativity buy us? It turns out that the category of representations of a cocommutative bialgebra is not only monoidal, but it’s also symmetric! Indeed, given representations $V$ and $W$, we have the tensor product representations on $V \otimes W$, and on $W \otimes V$. To twist them we define the natural transformation $\sigma_{V,W}: V \otimes W \to W \otimes V$ to be the twist of the underlying vector spaces: $\sigma_{V,W}(v \otimes w) = w \otimes v$.
We just need to verify that this actually intertwines the two representations. If we act first and then twist we find

$\tau\left(h \cdot (v \otimes w)\right) = \tau\left(h_{(1)}v \otimes h_{(2)}w\right) = h_{(2)}w \otimes h_{(1)}v$

On the other hand, if we twist first and then act we find

$h \cdot \tau(v \otimes w) = h \cdot (w \otimes v) = h_{(1)}w \otimes h_{(2)}v$

It seems there’s a problem: in general these don’t agree. Ah! but we haven’t used cocommutativity yet! Now we write

$h_{(1)} \otimes h_{(2)} = h_{(2)} \otimes h_{(1)}$
Again, remember that this doesn’t mean that the two tensorands are always equal, but only that the results after (implicitly) summing up are equal. Anyhow, that’s enough for us. It shows that the twist on the underlying vector spaces actually does intertwine the two representations, as we wanted. Thus the category of representations is symmetric.
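As a concrete check of this intertwining, here’s a sketch with the cyclic group $C_3$, using a rotation representation on $\mathbb{R}^2$ and a permutation representation on $\mathbb{R}^3$; the variable names are illustrative choices of mine.

```python
import numpy as np

# Two representations of the cyclic group C_3:
#   rho: rotation by 120 degrees on R^2
#   sigma: cyclic-shift permutation matrices on R^3
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])
P = np.roll(np.eye(3), 1, axis=0)           # cyclic shift e_i -> e_{i+1}
rho   = [np.linalg.matrix_power(R, k) for k in range(3)]
sigma = [np.linalg.matrix_power(P, k) for k in range(3)]

# Twist tau: V (x) W -> W (x) V, sending e_i (x) f_j to f_j (x) e_i.
n, m = 2, 3
T = np.zeros((m * n, n * m))
for i in range(n):
    for j in range(m):
        T[j * n + i, i * m + j] = 1.0

# Intertwining: tau . (rho(g) (x) sigma(g)) = (sigma(g) (x) rho(g)) . tau
for k in range(3):
    lhs = T @ np.kron(rho[k], sigma[k])
    rhs = np.kron(sigma[k], rho[k]) @ T
    assert np.allclose(lhs, rhs)
print("twist intertwines the two tensor-product actions")
```

For a group element the two Sweedler factors are literally equal copies of $g$, so the cocommutativity step in the prose is automatic here.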
It took us two posts, but we showed that the category of representations of a Hopf algebra has duals. This is on top of our earlier result that the category of representations of any bialgebra is monoidal. Let’s look at this a little more conceptually.
Earlier, we said that a bialgebra is a comonoid object in the category of algebras over $\mathbb{F}$. But let’s consider this category itself. We also said that an algebra is a category enriched over $\mathbf{Vect}(\mathbb{F})$, but with only one object. So we should really be thinking about the category of algebras as a full sub-2-category of the 2-category of categories enriched over $\mathbf{Vect}(\mathbb{F})$.
So what’s a comonoid object in this 2-category? When we defined comonoid objects we used a model category. Now let’s augment it to a 2-category in the easiest way possible: just add an identity 2-morphism to every morphism!
But the 2-category language gives us a bit more flexibility. Instead of demanding that the comultiplication morphism satisfy coassociativity on the nose, we can add a “coassociator” 2-morphism to our model 2-category. Similarly, we dispense with the left and right counit laws and add left and right counit 2-morphisms. Then we insist that these 2-morphisms satisfy pentagon and triangle identities dual to those we defined when we talked about monoidal categories.
What we’ve built up here is a model 2-category for weak comonoid objects in a 2-category. Then any weak comonoid object is given by a 2-functor from this 2-category to the appropriate target 2-category. Similarly we can define a weak monoid object as a 2-functor from the opposite model 2-category to an appropriate target 2-category.
So, getting a little closer to Earth, we have in hand a comonoid object in the 2-category of categories enriched over $\mathbf{Vect}(\mathbb{F})$ — our algebra $A$. But remember that a 2-category is just a category enriched over categories. That is, between $A$ (considered as a one-object enriched category) and $\mathbf{Vect}(\mathbb{F})$ we have a hom-category $\hom\left(A, \mathbf{Vect}(\mathbb{F})\right)$, which is exactly the category of representations of $A$. The entry in the first slot is described by a 2-functor from the model 2-category of weak comonoid objects to the 2-category of categories enriched over $\mathbf{Vect}(\mathbb{F})$. This hom-functor is contravariant in the first slot (like all hom-functors), and so the result is described by a 2-functor from the opposite of our model 2-category. That is, it’s a weak monoid object in the 2-category of all categories. And this is just a monoidal category!
This is yet another example of the way that hom objects inherit structure from their second variable, and inherit opposite structure from their first variable. I’ll leave it to you to verify that a monoidal category with duals is similarly a weak group object in the 2-category of categories, and that this is why a Hopf algebra — a (weak) cogroup object in the 2-category of categories enriched over $\mathbf{Vect}(\mathbb{F})$ — has dual representations.
Now that we have a coevaluation for vector spaces, let’s make sure that it intertwines the actions of a Hopf algebra. Then we can finish showing that the category of representations of a Hopf algebra has duals.
Take a representation $\rho: H \to \mathrm{End}(V)$, and pick a basis $\{e_i\}$ of $V$ and the dual basis $\{\epsilon^i\}$ of $V^*$. We define the map $\eta: \mathbb{F} \to V^* \otimes V$ by $\eta(1) = \epsilon^i \otimes e_i$. Now $H$ acts on $\mathbb{F}$ through the counit, so if we use the action of $h$ on $\mathbb{F}$ before transferring to $V^* \otimes V$, we get $\epsilon(h)\,\epsilon^i \otimes e_i$. Be careful not to confuse the counit $\epsilon$ with the basis covectors $\epsilon^i$.
On the other hand, if we transfer first, we must calculate

$h \cdot \left(\epsilon^i \otimes e_i\right) = \left(\epsilon^i \circ \rho(S(h_{(1)}))\right) \otimes \rho(h_{(2)})e_i$

Now let’s use the fact that we’ve got this basis sitting around to expand out both $\rho(S(h_{(1)}))$ and $\rho(h_{(2)})$ as matrices. We’ll just take on matrix indices on the right for our notation. Then we continue the calculation above:

$\rho(S(h_{(1)}))^i_j\,\epsilon^j \otimes \rho(h_{(2)})^k_i\,e_k = \rho(h_{(2)})^k_i\,\rho(S(h_{(1)}))^i_j\,\epsilon^j \otimes e_k = \rho\left(h_{(2)}S(h_{(1)})\right)^k_j\,\epsilon^j \otimes e_k = \epsilon(h)\,\epsilon^j \otimes e_j$

where the last step uses the antipode property. (Strictly, the factors come out in the order $h_{(2)}S(h_{(1)})$; in the cocommutative examples we care about, we may freely swap the Sweedler factors to get $h_{(1)}S(h_{(2)}) = \epsilon(h)1$.)
And so the coevaluation map does indeed intertwine the two actions of $H$. Together with the evaluation map, it provides the duality on the category of representations of a Hopf algebra that we were looking for.
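In coordinates this calculation collapses nicely. If we identify an element $\sum f \otimes v$ of $V^* \otimes V$ with the matrix $\sum v f^T$, the coevaluation becomes the identity matrix, and for a group element acting invertibly the intertwining check is one line of conjugation. A sketch (the matrix `rho_g` is just an illustrative choice):

```python
import numpy as np

# Identify sum_i f_i (x) v_i in V* (x) V with the matrix sum_i v_i f_i^T.
# The coevaluation eta(1) = eps^i (x) e_i is then the identity matrix,
# and a group element g acts on V* (x) V by
#     M |-> rho(g) M rho(g)^{-1}
# (rho(g^{-1})^T on the covector slot, rho(g) on the vector slot).
rho_g = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])    # a 3-cycle acting on R^3

coev = np.eye(3)                    # eta(1) = sum_i eps^i (x) e_i
acted = rho_g @ coev @ np.linalg.inv(rho_g)
assert np.allclose(acted, coev)     # g . eta(1) = eta(1) = eta(g . 1)
print("coevaluation intertwines the action of g")
```

Since $H$ acts on $\mathbb{F}$ by the counit, and $\epsilon(g) = 1$ for a group element, both sides of the intertwining condition are just the identity matrix.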
Okay, I noticed that I never really gave the definition of the coevaluation when I introduced categories with duals, because you need some linear algebra. Well, now we have some linear algebra, so let’s do it.
Let $V$ be a finite-dimensional vector space with dual space $V^*$. Then if we have a basis $\{e_i\}$ of $V$ we immediately get a dual basis $\{\epsilon^i\}$ for $V^*$ (yet another $\epsilon$ to keep straight), defined by $\epsilon^i(e_j) = \delta^i_j$. We now define a map $\eta: \mathbb{F} \to V^* \otimes V$ by setting $\eta(1) = \epsilon^i \otimes e_i$. That is, we take the tensor product of each dual basis element with its corresponding basis element, and add them all up (summation convention).
But this seems to depend on which basis we started with. What if we used a different basis $\{f_j\}$ and dual basis $\{\varphi^j\}$? We know that there is a change of basis matrix $X$ with $f_j = X^i_j e_i$, so let’s see how this works on the dual basis.

The dual basis is defined by the fact that $\varphi^j(f_k) = \delta^j_k$. So we use this new expression for $f_k$ to write $\varphi^j\left(X^i_k e_i\right) = X^i_k\,\varphi^j(e_i) = \delta^j_k$. That is, the matrix $\varphi^j(e_i)$ must be the inverse matrix to $X$, which we’ll write as $\left(X^{-1}\right)^j_i$. But now we can check

$\left(X^{-1}\right)^j_i \epsilon^i(f_k) = \left(X^{-1}\right)^j_i \epsilon^i\left(X^l_k e_l\right) = \left(X^{-1}\right)^j_i X^i_k = \delta^j_k$

And so we find that $\varphi^j = \left(X^{-1}\right)^j_i \epsilon^i$ when we change bases.
Now we can use the same definition for $\eta$ above with our new basis. We set $\eta(1) = \varphi^j \otimes f_j$, and then substitute our expressions in terms of the old bases:

$\varphi^j \otimes f_j = \left(X^{-1}\right)^j_i \epsilon^i \otimes X^k_j e_k = X^k_j \left(X^{-1}\right)^j_i\,\epsilon^i \otimes e_k = \delta^k_i\,\epsilon^i \otimes e_k = \epsilon^i \otimes e_i$

which is what we got before. That is, this map actually doesn’t depend on the basis we chose!
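In coordinates, this basis-independence is just the statement that $F F^{-1} = I$ for any invertible change-of-basis matrix $F$. A quick numerical sketch (the random basis is purely illustrative):

```python
import numpy as np

# The coevaluation built from a basis {f_j} (the columns of an invertible
# matrix F) and its dual basis {phi^j} (the rows of F^{-1}) is
#     sum_j f_j (phi^j)^T = F F^{-1} = I
# under the matrix identification of V* (x) V -- independent of F.
rng = np.random.default_rng(0)
F = rng.normal(size=(4, 4))         # a random basis of R^4 (columns)
F_inv = np.linalg.inv(F)            # rows are the dual basis covectors

coev = sum(np.outer(F[:, j], F_inv[j, :]) for j in range(4))
assert np.allclose(coev, np.eye(4))
print("coevaluation is basis-independent")
```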
Okay, now does this coevaluation — along with the evaluation map from before — actually satisfy the conditions for a duality? First, let’s start with a vector written out in terms of a basis: $v = v^j e_j$. Now we use the coevaluation to send it to $v^j e_j \otimes \epsilon^i \otimes e_i$. Next we evaluate on the first two tensorands to find $v^j \epsilon^i(e_j)\,e_i = v^i e_i = v$. So we do indeed have the identity here. Verifying the other condition is almost the same, starting from an arbitrary covector $f = f_i \epsilon^i$.
So now we know that the category of finite-dimensional vector spaces has duals. Tomorrow we can promote this to a duality on the category of finite-dimensional representations of a Hopf algebra.
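The “snake” identity we just verified can also be checked numerically with an index contraction; this sketch uses `einsum`, with the coevaluation entering as a Kronecker delta.

```python
import numpy as np

# Sketch of (ev (x) 1) . (1 (x) coev) = id_V in coordinates.  The
# coevaluation contributes delta[i, k] in the slots (V*, V), and the
# evaluation pairs the incoming vector with the V* slot.
n = 4
rng = np.random.default_rng(1)
v = rng.normal(size=n)

coev = np.eye(n)                          # sum_i eps^i (x) e_i as delta[j, i]
# v |-> v (x) eps^i (x) e_i, then evaluate the first two tensorands:
snake_v = np.einsum('j,ji->i', v, coev)   # eps^i(v) e_i
assert np.allclose(snake_v, v)
print("snake identity holds")
```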
We’ve seen that the category of representations of a bialgebra is monoidal. What do we get for Hopf algebras? What does an antipode buy us? Duals! At least when we restrict to finite-dimensional representations.
Again, we base things on the underlying category of vector spaces. Given a representation $\rho: H \to \mathrm{End}(V)$, we want to find a representation on the dual space $V^*$. And it should commute with the natural transformations which make up the dual structure.
Easy enough! We just take the dual of each map to find $\rho^*(h) = \rho(h)^*$. But no, this can’t work. Duality reverses the order of composition. We need an antiautomorphism $s$ to reverse the multiplication on $H$. Then we can define $\rho^*(h) = \rho(s(h))^*$.
The antiautomorphism we’ll use will be the antipode $S$. Now to make these representations actual duals, we’ll need natural transformations $\eta_V: \mathbb{F} \to V^* \otimes V$ and $\epsilon_V: V \otimes V^* \to \mathbb{F}$. This latter natural transformation is not to be confused with the counit of the Hopf algebra. Given a representation on the finite-dimensional vector space $V$, we’ll just use the $\eta_V$ and $\epsilon_V$ that come from the duality on the category of finite-dimensional vector spaces.
Thus we find that $\epsilon_V$ is the pairing $v \otimes f \mapsto f(v)$. Does this commute with the actions of $H$? On the one side, we calculate

$h \cdot (v \otimes f) = \rho(h_{(1)})v \otimes \left(f \circ \rho(S(h_{(2)}))\right)$

Then we apply the evaluation to find

$f\left(\rho(S(h_{(2)}))\,\rho(h_{(1)})\,v\right) = f\left(\rho\left(S(h_{(2)})h_{(1)}\right)v\right) = \epsilon(h)\,f(v)$
Which is the same as the result we’d get by applying the “unit” action $\epsilon(h)$ after evaluating. Notice how we used the definition of the dual map, the fact that $\rho$ is a representation, and the antipodal property in obtaining this result. (Strictly, the Sweedler factors appear here in the order $S(h_{(2)})h_{(1)}$; in the cocommutative examples we care about, we may swap them to apply the antipode law $S(h_{(1)})h_{(2)} = \epsilon(h)1$.)
This much works whether or not $V$ is a finite-dimensional vector space. The other direction, though, needs more work, especially since I waved my hands at it when I used the category of finite-dimensional vector spaces as the motivating example of a category with duals. Tomorrow I’ll define this map.
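For a group element acting invertibly, the evaluation check above reduces to $f^T \rho(g)^{-1} \rho(g) v = f^T v$. A small numerical sketch (the matrix and vectors are arbitrary illustrative choices):

```python
import numpy as np

# Evaluation: v (x) f |-> f(v).  Acting first by a group element g
# (rho(g) on V, rho(g^{-1})^T on V*) and then evaluating gives
#     f^T rho(g)^{-1} rho(g) v = f(v),
# which is evaluation followed by the trivial (counit) action on F.
rho_g = np.array([[1., 1.],
                  [0., 1.]])         # an invertible action on R^2
rng = np.random.default_rng(2)
v, f = rng.normal(size=2), rng.normal(size=2)

acted_v = rho_g @ v                          # action on the vector slot
acted_f = np.linalg.inv(rho_g).T @ f         # dual action on the covector slot
assert np.isclose(acted_f @ acted_v, f @ v)  # evaluation is intertwined
print("evaluation intertwines the action of g")
```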
Let’s say we have two algebra representations $\rho: A \to \mathrm{End}(V)$ and $\sigma: A \to \mathrm{End}(W)$. These are morphisms in the category of $\mathbb{F}$-algebras, and so of course we can take their tensor product $\rho \otimes \sigma$. But this is not a representation of the same algebra. It’s a representation of the tensor square of the algebra:

$\rho \otimes \sigma: A \otimes A \to \mathrm{End}(V) \otimes \mathrm{End}(W) \to \mathrm{End}(V \otimes W)$
Ah, but if we have a way to send $A$ to $A \otimes A$ (an algebra homomorphism, that is), then we can compose it with this tensor product to get a representation of $A$. And that’s exactly what the comultiplication $\Delta$ does for us. We abuse notation slightly and write:

$\rho \otimes \sigma: A \xrightarrow{\Delta} A \otimes A \xrightarrow{\rho \otimes \sigma} \mathrm{End}(V) \otimes \mathrm{End}(W) \to \mathrm{End}(V \otimes W)$

where the homomorphism of this representation is the comultiplication followed by the tensor product of the two homomorphisms, followed by the equivalence of algebras $\mathrm{End}(V) \otimes \mathrm{End}(W) \cong \mathrm{End}(V \otimes W)$.
Notice here that the underlying vector space of the tensor product of two representations is the tensor product of their underlying vector spaces $V \otimes W$. That is, if we think (as many approaches to representation theory do) of the vector space as fundamental and the homomorphism as extra structure, then this is saying we can put the structure of a representation on the tensor product of the vector spaces.
Which leads us to the next consideration. For the tensor product to be a monoidal structure we need an associator. And the underlying linear map on vector spaces must clearly be the old associator for $\mathbf{Vect}(\mathbb{F})$. We just need to verify that it commutes with the action of $A$.
So let’s consider three representations $\rho$, $\sigma$, and $\pi$ on $U$, $V$, and $W$. Given an algebra element $a$ and vectors $u$, $v$, and $w$, we have the action on $(U \otimes V) \otimes W$:

$a \cdot \left((u \otimes v) \otimes w\right) = \left(\rho(a_{(1)(1)})u \otimes \sigma(a_{(1)(2)})v\right) \otimes \pi(a_{(2)})w$

On the other hand, if we associate the other way we have the action on $U \otimes (V \otimes W)$:

$a \cdot \left(u \otimes (v \otimes w)\right) = \rho(a_{(1)})u \otimes \left(\sigma(a_{(2)(1)})v \otimes \pi(a_{(2)(2)})w\right)$

where we have used the Sweedler notation to write out the comultiplications of $a$. But now we can use the coassociativity of the comultiplication — along with the fact that, as algebra homomorphisms, the representations are linear maps — to show that the associator on $\mathbf{Vect}(\mathbb{F})$ intertwines these actions, and thus acts as an associator for the category of representations of $A$ as well.
We also need a unit object, and similar considerations to those above tell us it should be based on the vector space unit object $\mathbb{F}$. That is, we need a homomorphism $A \to \mathrm{End}(\mathbb{F})$. But linear maps from the base field to itself (considered as a one-dimensional vector space) are just multiplications by field elements! That is, the algebra $\mathrm{End}(\mathbb{F})$ is just the field $\mathbb{F}$ itself, and we need a homomorphism $A \to \mathbb{F}$. This is precisely what the counit $\epsilon$ provides! I’ll leave it to you to verify that the left and right unit maps from vector spaces intertwine the relevant representations.
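Putting the pieces of this post together, here’s a sketch of the tensor product representation of the group algebra $\mathbb{F}[C_3]$, with algebra elements as coefficient vectors and multiplication as cyclic convolution; all function names are mine, purely for illustration.

```python
import numpy as np

# An element of F[C_3] is a coefficient vector (a_0, a_1, a_2) on the
# basis {e, g, g^2}.  Since Delta(g^k) = g^k (x) g^k, the tensor-product
# representation of a general element is  a |-> sum_k a_k rho(g^k) (x) sigma(g^k).
P = np.roll(np.eye(3), 1, axis=0)                     # generator of C_3 on R^3
rho   = [np.linalg.matrix_power(P, k) for k in range(3)]
sigma = [np.linalg.matrix_power(P, 2 * k % 3) for k in range(3)]  # another rep

def tensor_rep(a):
    """Action of the algebra element a on V (x) W via the comultiplication."""
    return sum(ak * np.kron(rho[k], sigma[k]) for k, ak in enumerate(a))

def convolve(a, b):
    """Multiplication in F[C_3]: convolution of coefficients, indices mod 3."""
    c = np.zeros(3)
    for i in range(3):
        for j in range(3):
            c[(i + j) % 3] += a[i] * b[j]
    return c

# The tensor representation turns algebra multiplication into matrix product.
a, b = np.array([1., 2., 0.]), np.array([0., 1., 3.])
assert np.allclose(tensor_rep(convolve(a, b)), tensor_rep(a) @ tensor_rep(b))
print("the comultiplication makes V (x) W a representation of F[C_3]")
```

Note that the check runs on general linear combinations, not just on group elements, which is exactly where the comultiplication (rather than the naive diagonal) is doing the work.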
As we work with coalgebras, we’ll need a nice way to write out the comultiplication of an element. In the group algebra we’ve been using as an example, we just have $\Delta(g) = g \otimes g$, but not all elements are so cleanly sent to two copies of themselves. And other comultiplications in other coalgebras aren’t even defined so nicely on any basis. So we introduce the so-called “Sweedler notation”. If you didn’t like the summation convention, you’re going to hate this.
Okay, first of all, we know that the comultiplication of an element $c$ is an element of the tensor square $C \otimes C$. Thus it can be written as a finite sum

$\Delta(c) = \sum_{k=1}^n a_k \otimes b_k$

Now, this uses two whole new letters, $a$ and $b$, which might be really awkward to come up with in practice. Instead, let’s call them $c_{(1)}$ and $c_{(2)}$, to denote the first and second factors of the comultiplication. We’ll also move the summation indices to superscripts, just to get them out of the way:

$\Delta(c) = \sum_{k=1}^n c_{(1)}^k \otimes c_{(2)}^k$

The whole index-summing thing is a bit awkward, especially because the number of summands is different for each coalgebra element $c$. Let’s just say we’re adding up all the terms we need to for a given $c$:

$\Delta(c) = \sum c_{(1)} \otimes c_{(2)}$

Then if we’re really pressed for space we can just write $\Delta(c) = c_{(1)} \otimes c_{(2)}$. Since we don’t use a subscript in parentheses for anything else, we remember that this is implicitly a summation.
Let’s check out the counit laws in this notation. Now they read $\epsilon(c_{(1)})c_{(2)} = c = c_{(1)}\epsilon(c_{(2)})$. Or, more expansively:

$\sum \epsilon(c_{(1)})\,c_{(2)} = c = \sum c_{(1)}\,\epsilon(c_{(2)})$

Similarly, the coassociativity condition now reads

$\sum (c_{(1)})_{(1)} \otimes (c_{(1)})_{(2)} \otimes c_{(2)} = \sum c_{(1)} \otimes (c_{(2)})_{(1)} \otimes (c_{(2)})_{(2)}$

In the Sweedler notation we’ll write both of these equal sums as

$\sum c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$

Or more simply as $c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$.
As a bit more practice, let’s write out the condition that a linear map $f: C \to D$ between coalgebras is a coalgebra morphism. The answer is that $f$ must satisfy

$f(c)_{(1)} \otimes f(c)_{(2)} = f(c_{(1)}) \otimes f(c_{(2)})$

(along with the counit condition $\epsilon(f(c)) = \epsilon(c)$). Notice that there are implied summations here. We are not asserting that all the summands are equal, and definitely not that $f(c)_{(1)} = f(c_{(1)})$ (for instance). Sweedler notation hides a lot more than the summation convention ever did, but it’s still possible to expand it back out to a proper summation-heavy format when we need to.
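One way to internalize the notation is to make the hidden sums explicit in code. This sketch models elements of $\mathbb{F}[C_3]$ as coefficient dictionaries and returns the Sweedler summands of $\Delta(c)$ as explicit triples; the names are illustrative choices of mine.

```python
# Sweedler's implicit sums made explicit for the group algebra of C_3.
# An algebra element is a dict {k: coefficient} on the basis g^k, and
# Delta(g^k) = g^k (x) g^k, so Delta of a general element is a list of
# summands (coefficient, c1, c2) -- exactly the terms "c_(1) (x) c_(2)".
def delta(c):
    return [(coeff, k, k) for k, coeff in c.items()]

def counit(k):
    return 1.0          # eps(g^k) = 1 for every group element

def left_counit_law(c):
    """Compute sum eps(c_(1)) c_(2), which should give back c."""
    out = {}
    for coeff, c1, c2 in delta(c):
        out[c2] = out.get(c2, 0.0) + coeff * counit(c1)
    return out

c = {0: 2.0, 1: -1.0, 2: 0.5}       # the element 2e - g + (1/2)g^2
assert left_counit_law(c) == c
print("counit law: sum eps(c_(1)) c_(2) = c")
```

The point of the triples is that each summand carries both Sweedler factors at once; asserting anything about a single factor in isolation would be exactly the mistake warned about above.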
One more piece of structure we need. We take a bialgebra $H$, and we add an “antipode” $S$, which behaves sort of like an inverse operation. Then what we have is a Hopf algebra.
An antipode will be a linear map $S: H \to H$ on the underlying vector space. Here’s what we mean by saying that an antipode “behaves like an inverse”. In formulas, we write that:

$S(h_{(1)})\,h_{(2)} = \epsilon(h)1 = h_{(1)}\,S(h_{(2)})$

On either side, first we comultiply an algebra element to split it into two parts. Then we use $S$ on one or the other part before multiplying them back together. In the center, this is the same as first taking the counit to get a field element, and then multiplying that by the unit of the algebra.
By now it shouldn’t be a surprise that the group algebra $\mathbb{F}[G]$ is also a Hopf algebra. Specifically, we set $S(g) = g^{-1}$. Then we can check the “left inverse” law:

$S(g_{(1)})\,g_{(2)} = S(g)\,g = g^{-1}g = e = \epsilon(g)e$
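The same check can be run mechanically on all of $\mathbb{F}[C_n]$, not just on basis elements. A sketch, with elements as coefficient dictionaries on the powers $g^k$ (the function names are mine):

```python
# Checking the "left inverse" antipode law S(c_(1)) c_(2) = eps(c) e
# in the group algebra of the cyclic group C_n.
n = 5

def delta(c):                        # Delta(g^k) = g^k (x) g^k
    return [(coeff, k, k) for k, coeff in c.items()]

def antipode_law_lhs(c):
    """mu . (S (x) 1) . Delta: multiply S(c_(1)) = g^{-k} by c_(2) = g^k."""
    out = {}
    for coeff, c1, c2 in delta(c):
        k = (-c1 + c2) % n           # S(g^{c1}) g^{c2} = g^{c2 - c1}
        out[k] = out.get(k, 0.0) + coeff
    return out

def antipode_law_rhs(c):
    """iota . eps: eps(c) times the algebra unit e = g^0."""
    return {0: sum(c.values())}

c = {0: 1.0, 2: 3.0, 4: -2.0}
assert antipode_law_lhs(c) == antipode_law_rhs(c)
print("S(c_(1)) c_(2) = eps(c) e")
```

Every Sweedler summand of a group-algebra element collapses to the identity $g^0$, weighted by its coefficient, which is exactly $\epsilon(c)$ times the unit.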
One thing that we should point out: this is not a group object in the category of vector spaces over $\mathbb{F}$. A group object needs the diagonal we get from the finite products on the target category. But in the category of vector spaces we pointedly do not use the categorical product as our monoidal structure. There is no “diagonal” for the tensor product.
Instead, we move to the category of coalgebras over $\mathbb{F}$. Now each coalgebra comes with its own comultiplication $\Delta$, which stands in for the diagonal. In the case of $\mathbb{F}[G]$ we’ve been considering, this comultiplication is clearly related to the diagonal on the underlying set of the group $G$. In fact, it’s not going too far to say that “linearizing” a set naturally brings along a coalgebra structure on top of the vector space structure we usually consider. But many coalgebras, bialgebras, and Hopf algebras are not such linearized sets.
In the category of coalgebras over $\mathbb{F}$, a Hopf algebra is a group object, so long as we use the comultiplications and counits that come with the coalgebras instead of the ones that come from the categorical product structure. Dually, we can characterize a Hopf algebra as a cogroup object in the category of algebras over $\mathbb{F}$, subject to a similar caveat. It is this cogroup structure that will be important moving forwards.
In yesterday’s post I used the group algebra $\mathbb{F}[G]$ of a group $G$ as an example of a coalgebra. In fact, more is true.
A bialgebra is a vector space equipped with both the structure of an algebra and the structure of a coalgebra, where these two structures are “compatible” in a certain sense. The traditional definitions usually consist of laying out the algebra maps and relations, then the coalgebra maps and relations. Then they state that the algebra structure preserves the coalgebra structure and that the coalgebra structure preserves the algebra structure, and they note that really you only need to require one of these last two conditions because they turn out to be equivalent.
In fact, our perspective allows this equivalence to come to the fore. The algebra structure makes the bialgebra a monoid object in the category of vector spaces over $\mathbb{F}$. Then a compatible coalgebra structure makes it a comonoid object in the category of algebras over $\mathbb{F}$. Or in the other order, we have a monoid object in the category of comonoid objects in the category of vector spaces over $\mathbb{F}$. And these describe essentially the same things because internalizations commute!
Okay, let’s be explicit about what we mean by “compatibility”. This just means that — on the one side — the coalgebra maps are not just linear maps between the underlying vector spaces, but actually are algebra homomorphisms. On the other side, it means that the algebra maps are actually coalgebra homomorphisms.
Multiplication and comultiplication being compatible actually mean the same thing. Take two algebra elements and multiply them, then comultiply the result. Alternatively, comultiply each of them, and then multiply corresponding factors of the result. We should get the same answer whether we multiply or comultiply first. That is: $\Delta \circ \mu = (\mu \otimes \mu) \circ (1 \otimes \tau \otimes 1) \circ (\Delta \otimes \Delta)$, where $\tau$ is the twist map, exchanging two factors.
Let’s check this condition for the group algebra $\mathbb{F}[G]$:

$\Delta(gh) = (gh) \otimes (gh) = (g \otimes g)(h \otimes h) = \Delta(g)\Delta(h)$
Similarly, if we multiply two algebra elements and then take the counit, it should be the same as the product (in $\mathbb{F}$) of the counits of the elements. Dually, the comultiplication of the algebra unit should be two copies of the algebra unit, and the counit of the algebra unit should be the unit in $\mathbb{F}$. It’s straightforward to verify that these hold for $\mathbb{F}[G]$.
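The compatibility condition is easy to verify mechanically for $\mathbb{F}[C_3]$: comultiplying a product should match multiplying comultiplications factorwise. A sketch, with tensor-square elements stored as coefficient matrices (the names are illustrative):

```python
import numpy as np

# Elements of F[C_3] are length-3 coefficient vectors on {e, g, g^2};
# elements of the tensor square are 3x3 coefficient matrices M[i, j]
# on the basis g^i (x) g^j.
n = 3

def mult(a, b):                       # convolution: (g^i)(g^j) = g^{i+j}
    c = np.zeros(n)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

def delta(a):                         # Delta(g^k) = g^k (x) g^k
    return np.diag(a)

def mult_tensor(M, N):                # factorwise product on F[G] (x) F[G]
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    out[(i + k) % n, (j + l) % n] += M[i, j] * N[k, l]
    return out

a, b = np.array([1., 2., 0.]), np.array([0., -1., 3.])
assert np.allclose(delta(mult(a, b)), mult_tensor(delta(a), delta(b)))
print("Delta is an algebra homomorphism on F[C_3]")
```

Note that `mult_tensor` silently performs the twist from the condition above: it multiplies first factors with first factors and second with second, which is exactly $(\mu \otimes \mu)(1 \otimes \tau \otimes 1)$.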