## Representations of Hopf Algebras I

We’ve seen that the category of representations of a bialgebra is monoidal. What do we get for Hopf algebras? What does an antipode buy us? Duals! At least when we restrict to finite-dimensional representations.

Again, we base things on the underlying category of vector spaces. Given a representation $\rho: H \to \mathrm{End}(V)$, we want to find a representation $\rho^*: H \to \mathrm{End}(V^*)$ on the dual space. And it should commute with the natural transformations which make up the dual structure.

Easy enough! We just take the dual of each map $\rho(a)$ to find $\rho(a)^*: V^* \to V^*$. But no, this can't work. Duality reverses the order of composition: $(\rho(a)\rho(b))^* = \rho(b)^*\rho(a)^*$. We need an antiautomorphism $S: H \to H$ to reverse the multiplication on $H$. Then we can define $\rho^*(a) = \rho(S(a))^*$.

The antiautomorphism we'll use will be the antipode. Now to make these representations actual duals, we'll need natural transformations $\eta_V: \mathbb{F} \to V \otimes V^*$ and $\epsilon_V: V^* \otimes V \to \mathbb{F}$. This natural transformation $\epsilon_V$ is not to be confused with the counit $\epsilon$ of the Hopf algebra. Given a representation on the finite-dimensional vector space $V$, we'll just use the $\eta_V$ and $\epsilon_V$ that come from the duality on the category of finite-dimensional vector spaces.

Thus we find that $\epsilon_V$ is the pairing $\epsilon_V(\lambda \otimes v) = \lambda(v)$. Does this commute with the actions of $H$? On the one side, we calculate the action of $a \in H$ on $V^* \otimes V$:

$$a \cdot (\lambda \otimes v) = \sum \rho^*\left(a_{(1)}\right)(\lambda) \otimes \rho\left(a_{(2)}\right)(v) = \sum \rho\left(S\left(a_{(1)}\right)\right)^*(\lambda) \otimes \rho\left(a_{(2)}\right)(v)$$

Then we apply the evaluation to find

$$\sum \lambda\left(\rho\left(S\left(a_{(1)}\right)\right)\rho\left(a_{(2)}\right)(v)\right) = \lambda\left(\rho\left(\sum S\left(a_{(1)}\right)a_{(2)}\right)(v)\right) = \lambda\left(\rho(\epsilon(a)1)(v)\right) = \epsilon(a)\lambda(v)$$

which is the same as the result we'd get by applying the "unit" action $\epsilon(a)$ after evaluating. Notice how we used the definition of the dual map, the fact that $\rho$ is a representation, and the antipodal property $\sum S\left(a_{(1)}\right)a_{(2)} = \epsilon(a)1$ in obtaining this result.

This much works whether or not $V$ is a finite-dimensional vector space. The other direction, though, needs more work, especially since I waved my hands at it when I used the category of finite-dimensional vector spaces as the motivating example of a category with duals. Tomorrow I'll define this map.
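This calculation is easy to check numerically for a group algebra, where the antipode is $S(g) = g^{-1}$ and the dual representation sends $g$ to $\rho(g^{-1})^\top$. Here is a minimal numpy sketch (the group, representation, and vectors are my own illustrative choices, not from the post):

```python
import numpy as np

# A representation of Z/4 on R^2: the generator acts by a 90-degree rotation.
r = np.array([[0.0, -1.0], [1.0, 0.0]])
rho = {k: np.linalg.matrix_power(r, k) for k in range(4)}

# On the group algebra the antipode is S(g) = g^{-1}, so the dual
# representation is rho*(g) = rho(g^{-1})^T.
rho_dual = {k: rho[(-k) % 4].T for k in range(4)}

lam = np.array([2.0, -1.0])   # a covector in V*
v   = np.array([3.0,  5.0])   # a vector in V

for k in range(4):
    # Act by g on lambda (x) v using Delta(g) = g (x) g, then evaluate.
    acted = (rho_dual[k] @ lam) @ (rho[k] @ v)
    # The trivial ("unit") representation acts by the counit eps(g) = 1,
    # so acting first and evaluating second should change nothing.
    assert np.isclose(acted, lam @ v)
```

Since $\rho(g^{-1})^\top \lambda$ paired with $\rho(g)v$ is $\lambda^\top \rho(g^{-1})\rho(g) v = \lambda^\top v$, the loop runs without tripping any assertion.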

## Representations of Bialgebras

What’s so great about bialgebras? Their categories of representations are monoidal!

Let’s say we have two algebra representations $\rho: A \to \mathrm{End}(V)$ and $\sigma: A \to \mathrm{End}(W)$. These are morphisms in the category of $\mathbb{F}$-algebras, and so of course we can take their tensor product $\rho \otimes \sigma$. But this is *not* a representation of the same algebra. It’s a representation of the tensor square of the algebra:

$$\rho \otimes \sigma: A \otimes A \to \mathrm{End}(V) \otimes \mathrm{End}(W) \cong \mathrm{End}(V \otimes W)$$

Ah, but if we have a way to send $A$ to $A \otimes A$ (an algebra homomorphism, that is), then we can compose it with this tensor product to get a representation of $A$. And that’s exactly what the comultiplication $\Delta$ does for us. We abuse notation slightly and write:

$$\rho \otimes \sigma: A \to \mathrm{End}(V \otimes W)$$

where the homomorphism of this representation is the comultiplication $\Delta$ followed by the tensor product of the two homomorphisms, followed by the equivalence of algebras $\mathrm{End}(V) \otimes \mathrm{End}(W) \cong \mathrm{End}(V \otimes W)$.

Notice here that the underlying vector space of the tensor product of two representations is the tensor product $V \otimes W$ of their underlying vector spaces. That is, if we think (as many approaches to representation theory do) of the vector space as fundamental and the homomorphism as extra structure, then this is saying we can put the structure of a representation on the tensor product of the vector spaces.

Which leads us to the next consideration. For the tensor product to be a monoidal structure we need an associator. And the underlying linear map on vector spaces must clearly be the old associator for the category of vector spaces. We just need to verify that it commutes with the action of $A$.

So let’s consider three representations $\rho$, $\sigma$, and $\tau$, acting on the vector spaces $U$, $V$, and $W$ respectively. Given an algebra element $a$ and vectors $u \in U$, $v \in V$, and $w \in W$, we have the action

$$\left[(\rho \otimes \sigma) \otimes \tau\right](a)\left((u \otimes v) \otimes w\right) = \sum \left(\rho\left(a_{(1)(1)}\right)(u) \otimes \sigma\left(a_{(1)(2)}\right)(v)\right) \otimes \tau\left(a_{(2)}\right)(w)$$

On the other hand, if we associate the other way we have the action

$$\left[\rho \otimes (\sigma \otimes \tau)\right](a)\left(u \otimes (v \otimes w)\right) = \sum \rho\left(a_{(1)}\right)(u) \otimes \left(\sigma\left(a_{(2)(1)}\right)(v) \otimes \tau\left(a_{(2)(2)}\right)(w)\right)$$

where we have used the Sweedler notation $\Delta(a) = \sum a_{(1)} \otimes a_{(2)}$ to write out the comultiplications of $a$. But now we can use the coassociativity of the comultiplication — along with the fact that, as algebra homomorphisms, the representations are linear maps — to show that the associator on vector spaces intertwines these actions, and thus acts as an associator for the category of representations of $A$ as well.
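For a group algebra, where $\Delta(g) = g \otimes g$, the tensor product of representations is just the Kronecker product of matrices, and we can check both the homomorphism property and the associativity claim concretely. A small numpy sketch (the particular group and representations are my own choices):

```python
import numpy as np

# Representations of Z/4, realized concretely so we can compute.
# Delta(g) = g (x) g on the group algebra, so the tensor product
# representation acts by the Kronecker product of matrices.
r = np.array([[0.0, -1.0], [1.0, 0.0]])               # generator as 90-degree rotation
rho   = {k: np.linalg.matrix_power(r, k) for k in range(4)}
sigma = {k: np.array([[1j ** k]]) for k in range(4)}  # a one-dimensional rep
tau   = rho                                           # reuse rho as a third rep

def tensor(f, g):
    """Tensor product of two representations of the same group."""
    return {k: np.kron(f[k], g[k]) for k in range(4)}

# The tensor product is again a representation: it respects multiplication.
rs = tensor(rho, sigma)
for j in range(4):
    for k in range(4):
        assert np.allclose(rs[j] @ rs[k], rs[(j + k) % 4])

# In coordinates the Kronecker product is strictly associative, so the
# two ways of bracketing a triple tensor product give the same matrices
# on the nose; the associator intertwines the actions trivially here.
left  = tensor(tensor(rho, sigma), tau)
right = tensor(rho, tensor(sigma, tau))
for k in range(4):
    assert np.allclose(left[k], right[k])
```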

We also need a unit object, and similar considerations to those above tell us it should be based on the vector space unit object $\mathbb{F}$. That is, we need a homomorphism $A \to \mathrm{End}(\mathbb{F})$. But linear maps from the base field to itself (considered as a one-dimensional vector space) are just multiplications by field elements! That is, the algebra $\mathrm{End}(\mathbb{F})$ is just the field $\mathbb{F}$ itself, and we need a homomorphism $\epsilon: A \to \mathbb{F}$. This is precisely what the counit provides! I’ll leave it to you to verify that the left and right unit maps from vector spaces intertwine the relevant representations.

## Bialgebras

In yesterday’s post I used the group algebra of a group as an example of a coalgebra. In fact, more is true.

A bialgebra is a vector space equipped with both the structure of an algebra and the structure of a coalgebra, with the two structures “compatible” in a certain sense. The traditional definitions usually consist of laying out the algebra maps and relations, then the coalgebra maps and relations. Then they state that the algebra structure preserves the coalgebra structure and that the coalgebra structure preserves the algebra structure, and they note that really you only need to require one of these last two conditions, because they turn out to be equivalent.

In fact, our perspective allows this equivalence to come to the fore. The algebra structure makes the bialgebra a monoid object in the category of vector spaces over $\mathbb{F}$. Then a compatible coalgebra structure makes it a comonoid object in the category of *algebras* over $\mathbb{F}$. Or, in the other order, we have a monoid object in the category of comonoid objects in the category of vector spaces over $\mathbb{F}$. And these describe essentially the same things because internalizations commute!

Okay, let’s be explicit about what we mean by “compatibility”. This just means that — on the one side — the coalgebra maps are not just linear maps between the underlying vector spaces, but actually are algebra homomorphisms. On the other side, it means that the algebra maps are actually coalgebra homomorphisms.

The multiplication preserving the comultiplication and the comultiplication preserving the multiplication actually mean the same thing. Take two algebra elements and multiply them, then comultiply the result. Alternatively, comultiply each of them, and then multiply corresponding factors of the result. We should get the same answer whether we multiply or comultiply first. That is: $\Delta \circ \mu = (\mu \otimes \mu) \circ (1 \otimes \tau \otimes 1) \circ (\Delta \otimes \Delta)$, where $\tau$ is the twist map, exchanging two factors.

Let’s check this condition for the group algebra $\mathbb{F}[G]$:

$$\Delta(e_g e_h) = \Delta(e_{gh}) = e_{gh} \otimes e_{gh} = (e_g \otimes e_g)(e_h \otimes e_h) = \Delta(e_g)\Delta(e_h)$$

Similarly, if we multiply two algebra elements and then take the counit, it should be the same as the product (in $\mathbb{F}$) of the counits of the elements. Dually, comultiplying the algebra unit should give two copies of the algebra unit — $\Delta(1) = 1 \otimes 1$ — and the counit of the algebra unit should be the unit in $\mathbb{F}$. It’s straightforward to verify that these hold for $\mathbb{F}[G]$.
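We can also let a computer do the verification for a small group algebra. The sketch below (all names are mine) stores an element of $\mathbb{F}[\mathbb{Z}/3]$ as a dictionary from group elements to coefficients, and checks that comultiplying a product agrees with multiplying the comultiplications:

```python
from itertools import product

# The group algebra F[Z/3], elements stored as {group element: coefficient}.
N = 3

def mul(x, y):
    """Multiply two group-algebra elements (convolution over Z/3)."""
    out = {}
    for (g, a), (h, b) in product(x.items(), y.items()):
        k = (g + h) % N
        out[k] = out.get(k, 0) + a * b
    return out

def comul(x):
    """Delta(e_g) = e_g (x) e_g, extended linearly; keys are pairs."""
    return {(g, g): a for g, a in x.items()}

def mul2(x, y):
    """Multiplication on the tensor square: (a(x)b)(c(x)d) = ac (x) bd."""
    out = {}
    for ((g1, g2), a), ((h1, h2), b) in product(x.items(), y.items()):
        k = ((g1 + h1) % N, (g2 + h2) % N)
        out[k] = out.get(k, 0) + a * b
    return out

x = {0: 2, 1: -1}          # 2e_0 - e_1
y = {1: 3, 2: 5}           # 3e_1 + 5e_2

# Compatibility: comultiply the product, or multiply the comultiplications.
assert comul(mul(x, y)) == mul2(comul(x), comul(y))

# The counit eps(e_g) = 1 is an algebra homomorphism too.
eps = lambda z: sum(z.values())
assert eps(mul(x, y)) == eps(x) * eps(y)
```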

## Coalgebras

Okay, back to business. We’re about to need a little more algebraic structure floating around. This is something that’s always present, but many approaches don’t explicitly mention it until much later. Since I’m taking a categorical view of things, it’s easier to show what’s really going on right away.

Remember that an $\mathbb{F}$-algebra is a monoid object in the category of vector spaces over $\mathbb{F}$. Dually, an $\mathbb{F}$-*co*algebra is a *co*monoid object in the category of vector spaces over $\mathbb{F}$. That’s all well and good, but what’s a comonoid object? We’ve mentioned them before, but let’s be more explicit this time around.

Remember that a monoid object was a functor from a certain category we cooked up to mirror the axioms of a monoid. We gave the category objects corresponding to the natural numbers, with the object $n$ corresponding to lists of $n$ monoid elements. We have a map $\mu: 2 \to 1$ corresponding to multiplication, and a map $\iota: 0 \to 1$ picking out the unit in the monoid.

So a *co*monoid object will be a functor from the *dual* of this category! That is, we’ve still got all the same objects, but now we have a “*co*multiplication” arrow $\Delta: 1 \to 2$, and a “*co*unit” arrow $\epsilon: 1 \to 0$.

Now, the model category describing monoid objects isn’t just objects and arrows. We also have the relations that make a monoid a monoid: the associative law $\mu \circ (\mu \otimes 1) = \mu \circ (1 \otimes \mu)$, and the left and right unit laws $\mu \circ (\iota \otimes 1) = 1 = \mu \circ (1 \otimes \iota)$.

Dually, we must have dual relations for comonoid objects. We have a *co*associative law $(\Delta \otimes 1) \circ \Delta = (1 \otimes \Delta) \circ \Delta$, and left and right *co*unit laws $(\epsilon \otimes 1) \circ \Delta = 1 = (1 \otimes \epsilon) \circ \Delta$.

We could write these down in terms of commuting diagrams, but it’s even more instructive to look at “string diagrams” like we did before. This makes the sense of what’s going on all the clearer.

So a coalgebra is a comonoid object in the category of vector spaces over $\mathbb{F}$. That is, it’s a vector space $C$, equipped with a linear comultiplication $\Delta: C \to C \otimes C$ and a linear counit $\epsilon: C \to \mathbb{F}$, which satisfy the coassociative and counit laws. I’ll admit that this seems an extremely quirky structure to discuss, so an example is in order. The one we care most about right now is the group algebra. Yes, it turns out to also be a coalgebra!

To really wrap our heads around it, let’s start with a finite group $G$. Then we get a finite-dimensional vector space $\mathbb{F}[G]$, with a basis $\{e_g\}$ indexed by the elements of $G$. Let’s forget, for the moment, that we have a multiplication and a unit. Instead, we define the comultiplication by $\Delta(e_g) = e_g \otimes e_g$ for each basis element. We also define the counit by $\epsilon(e_g) = 1$ for each element $g \in G$. Both of these maps extend by linearity.

Now, let’s check the coassociative property. It suffices to check it on basis elements, because the extensions by linearity have to agree. In this case we have

$$\left[(\Delta \otimes 1) \circ \Delta\right](e_g) = \Delta(e_g) \otimes e_g = (e_g \otimes e_g) \otimes e_g = e_g \otimes (e_g \otimes e_g) = e_g \otimes \Delta(e_g) = \left[(1 \otimes \Delta) \circ \Delta\right](e_g)$$

Similarly, we can check the right counit law:

$$\left[(1 \otimes \epsilon) \circ \Delta\right](e_g) = e_g \otimes \epsilon(e_g) = e_g \otimes 1 \cong e_g$$

and the left counit law is similar. Thus these maps do indeed describe the structure of a coalgebra.
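These laws can be checked mechanically on general elements, not just basis elements. A small sketch (conventions and names are mine), again storing elements of $\mathbb{F}[\mathbb{Z}/3]$ as coefficient dictionaries and tensor factors as tuple keys:

```python
# The group coalgebra F[Z/3]: elements are {g: coefficient} dictionaries.
def comul(x):
    """Delta(e_g) = e_g (x) e_g, extended by linearity."""
    return {(g, g): a for g, a in x.items()}

def comul_left(x2):
    """Apply Delta (x) 1 to an element of the tensor square."""
    return {((g, g), h): a for (g, h), a in x2.items()}

def comul_right(x2):
    """Apply 1 (x) Delta to an element of the tensor square."""
    return {(g, (h, h)): a for (g, h), a in x2.items()}

x = {0: 4, 1: -2, 2: 7}

# Coassociativity, after flattening ((g,h),k) and (g,(h,k)) to triples.
left  = {(g, h, k): a for ((g, h), k), a in comul_left(comul(x)).items()}
right = {(g, h, k): a for (g, (h, k)), a in comul_right(comul(x)).items()}
assert left == right

# Right counit law: (1 (x) eps) o Delta recovers x, since eps(e_g) = 1.
assert {g: a for (g, _h), a in comul(x).items()} == x
```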

## The Category of Representations

Now let’s narrow back in to representations of algebras, and the special case of representations of groups, but with an eye to the categorical interpretation. So, representations are functors. And this immediately leads us to the category of such functors. The objects, recall, are functors, while the morphisms are natural transformations. Now let’s consider what, exactly, a natural transformation consists of in this case.

Let’s say we have representations $\rho: A \to \mathrm{End}(V)$ and $\sigma: A \to \mathrm{End}(W)$. That is, we have functors $F_\rho$ and $F_\sigma$ with $F_\rho(*) = V$, $F_\sigma(*) = W$ — where $*$ is the single object of $A$, when it’s considered as a category — and the given actions on morphisms. We want to consider a natural transformation $\phi: F_\rho \to F_\sigma$.

Such a natural transformation consists of a list of morphisms indexed by the objects of the category $A$. But $A$ has only one object: $*$. Thus we only have one morphism, $\phi_*: V \to W$, which we will just call $\phi$.

Now we must impose the naturality condition. For each arrow $a$ in $A$ we ask that the diagram

$$\begin{array}{ccc} V & \xrightarrow{\rho(a)} & V \\ \phi \downarrow & & \downarrow \phi \\ W & \xrightarrow{\sigma(a)} & W \end{array}$$

commute. That is, we want $\phi \circ \rho(a) = \sigma(a) \circ \phi$ for every algebra element $a$. We call such a transformation an “intertwiner” of the representations. These intertwiners are the morphisms in the category $\mathbf{Rep}(A)$ of representations of $A$. If we want to be more particular about the base field, we might also write $\mathbf{Rep}_\mathbb{F}(A)$.

Here’s another way of putting it. Think of $\phi$ as a “translation” from $V$ to $W$. If $\phi$ is an isomorphism of vector spaces, for instance, it could be a change of basis. We want to take a transformation from the algebra and apply it, and we also want to translate. We could first apply the transformation in $V$, using the representation $\rho$, and then translate to $W$. Or we could first translate from $V$ to $W$ and then apply the transformation, now using the representation $\sigma$. Our condition is that either order gives the same result, no matter which element of $A$ we’re considering.
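Here is the naturality square checked numerically for a toy example of my own devising: two representations of $\mathbb{Z}/2$ on $\mathbb{R}^2$, with the change of basis that diagonalizes the coordinate swap as the intertwiner.

```python
import numpy as np

# Two representations of Z/2 = {0, 1} on R^2 (both choices are mine):
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
rho   = {0: np.eye(2), 1: swap}                  # swap the two coordinates
sigma = {0: np.eye(2), 1: np.diag([1.0, -1.0])}  # negate the second coordinate

# Candidate intertwiner: the change of basis diagonalizing the swap.
phi = np.array([[1.0, 1.0], [1.0, -1.0]])

# Naturality: translate-then-transform equals transform-then-translate.
for g in (0, 1):
    assert np.allclose(phi @ rho[g], sigma[g] @ phi)
```

Since $\phi$ is invertible here, it is in fact an isomorphism in the category of representations, exhibiting $\rho$ and $\sigma$ as equivalent.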

## Category Representations

We’ve seen how group representations are special kinds of algebra representations. But even more general than that is the representation of a category.

A group is a special monoid, within which each element is invertible. And a monoid is just a category with a single object. Similarly, an $\mathbb{F}$-algebra is just like a monoid, but enriched over the category of vector spaces over $\mathbb{F}$. That is, it’s a one-object category with an $\mathbb{F}$-bilinear composition. It makes sense to regard both of these structures as categories of sorts. A representation will then be a functor from one of these categories.

The clear target category is the category of vector spaces over $\mathbb{F}$. So what’s a functor from, say, a group $G$ (considered as a category) to this target? First, the single object of the source category picks out some object $V$. That is, $V$ is a vector space over $\mathbb{F}$. Then for each arrow in $G$ — each group element $g$ — we have an arrow $\rho(g): V \to V$. Since $g$ has to be invertible, this $\rho(g)$ must be invertible — an element of $\mathrm{GL}(V)$.

What about an algebra $A$? Now our source category and our target category are both enriched over vector spaces. It only makes sense, then, for us to consider $\mathbb{F}$-linear functors. Such a functor again picks out a single vector space $V$ for the single object of $A$ (considered as a category). Every arrow $a$ in $A$ gets sent to an arrow $\rho(a): V \to V$. This mapping $\rho$ is linear over the field $\mathbb{F}$.

So what do category representations get us? Well, one thing is this: consider a combinatorial graph — a collection of “vertices” with some directed “edges” joining them. A path in the graph is a sequence of directed edges joined tip-to-tail, and the collection of all paths in the graph constitutes the “path category” of the graph (exercise: identify the identity paths). A representation of this path category is what mathematicians call a “quiver representation”, and they’re big business.
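Functoriality is what makes the path category tick: a path acts by the composite of the linear maps assigned to its edges, and the identity path at a vertex acts by the identity map. A minimal sketch of a quiver representation (the quiver, dimensions, and matrices are my own invented example):

```python
import numpy as np

# A tiny quiver: vertices 'a', 'b', 'c' with edges a->b and b->c.
# A representation assigns a vector space (here just its dimension)
# to each vertex and a linear map to each edge.
dims  = {'a': 2, 'b': 3, 'c': 1}
edges = {('a', 'b'): np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]]),
         ('b', 'c'): np.array([[1.0, -1.0, 1.0]])}

def rep_of_path(path):
    """Functoriality: a path acts by composing its edge maps in order."""
    m = np.eye(dims[path[0]])        # the identity path at the source vertex
    for s, t in zip(path, path[1:]):
        m = edges[(s, t)] @ m
    return m

# The composite path a->b->c goes to the product of the edge maps.
assert np.allclose(rep_of_path(['a', 'b', 'c']),
                   edges[('b', 'c')] @ edges[('a', 'b')])
```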

More interesting to me is this: the category of tangles (or of oriented tangles, of framed tangles, or of framed, oriented tangles). This is a monoidal category with duals, as is the category of finite-dimensional vector spaces, and so it only makes sense to ask that our functors respect those structures as well. We don’t ask that they send the braiding to the symmetry on vector spaces, since that would trivialize the structure.

So what is a representation of the category of tangles? It is my contention that this is nothing but a knot invariant, viewed in a more natural habitat. A little more generally, knot invariants are the restrictions to knots (and links) of functors defined on the category of tangles, which can often (always?) be decategorified — or otherwise rendered down — into representations of this category. This is my work: to translate existing knot theoretical ideas into this algebraic language, where I believe they find a better home.

## The Sum of Subspaces

We know what the direct sum of two vector spaces is. We define it abstractly, without reference to the internal structure of each space. It’s sort of like the disjoint union of sets, and in fact a basis for a direct sum is the disjoint union of bases for the summands.

Let’s use universal properties to prove this! We consider the direct sum $V \oplus W$, and we have a basis $A$ for $V$ and a basis $B$ for $W$. But remember that the whole point of a basis is that vector spaces are free modules.

That is, there is a forgetful functor from vector spaces over $\mathbb{F}$ to $\mathbf{Set}$, sending a vector space to its underlying set. This functor has a left adjoint $F$ which assigns to any set $S$ the vector space $F(S)$ of formal linear combinations of elements of $S$. This is the free vector space on the basis $S$, and when we choose the basis $A$ for a vector space $V$ we are actually choosing an isomorphism $V \cong F(A)$.

Okay. So we’re really considering the direct sum $F(A) \oplus F(B)$, and we’re asserting that it is isomorphic to $F(A \uplus B)$. But we just said that constructing a free vector space is a functor, and this functor has a right adjoint. And we know that any functor that has a right adjoint preserves colimits! The disjoint union of sets is a coproduct, and the direct sum of vector spaces is a biproduct, which means it’s also a coproduct. Thus we have our isomorphism. Neat!
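The point about *disjoint* union is worth making concrete. A formal linear combination can be modeled as a dictionary from basis elements to coefficients, and the direct sum tags each basis element by the summand it came from, so shared labels never collide. A quick sketch (the encoding is my own):

```python
# Formal linear combinations as dicts from basis elements to coefficients.
def direct_sum(x, y):
    """An element of F(A) (+) F(B), viewed inside F(A ⊎ B): tag each
    basis element by which summand it came from."""
    out = {('A', a): c for a, c in x.items()}
    out.update({('B', b): c for b, c in y.items()})
    return out

v = {'x': 1, 'y': 2}   # an element of F({x, y})
w = {'x': 5}           # an element of F({x}) -- same label, different set!

# The disjoint union keeps both copies of 'x' separate, so the result
# lives in a 3-dimensional space, as dim V + dim W predicts.
assert direct_sum(v, w) == {('A', 'x'): 1, ('A', 'y'): 2, ('B', 'x'): 5}
```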

But not all unions of sets are disjoint. Sometimes the sets share elements, and the easiest way for this to happen is for them to both be subsets of some larger set. Then the union of the two subsets has to take this overlap into account. And since subspaces of a larger vector space may intersect nontrivially, their sum as subspaces has to take this into account.

First, here’s a definition in terms of the vectors themselves: given two subspaces $U$ and $W$ of a larger vector space $V$, the sum $U + W$ will be the subspace consisting of all vectors that can be written in the form $u + w$ for $u \in U$ and $w \in W$. Notice that there’s no uniqueness requirement here, and that’s because if $U$ and $W$ overlap in anything other than the trivial subspace we can add a vector in that overlap to $u$ and subtract it from $w$, getting a different decomposition. This is precisely the situation a direct sum avoids.

Alternatively, let’s consider the collection of all subspaces of $V$. This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power set of a set, except that only those subsets of $V$ which are subspaces get included.

Now it turns out that, like the power set, this poset is actually a lattice. The meet is the intersection of subspaces, but the join isn’t their union. Indeed, the union of subspaces usually isn’t a subspace at all! What do we use instead? The sum, of course! It’s easiest to verify this with the algebraic definition of a lattice.

The lattice does have a top element (the whole space $V$) and a bottom element (the trivial subspace $\mathbf{0}$). It’s even modular! Indeed, let $U$, $W$, and $X$ be subspaces with $U \subseteq X$. Then on the one hand we consider $U + (W \cap X)$, which is the collection of all vectors $u + w$, where $u \in U$, $w \in W$, and $w \in X$. On the other hand we consider $(U + W) \cap X$, which is the collection of all vectors $u + w$, where $u \in U$, $w \in W$, and $u + w \in X$. I’ll leave it to you to show how these two conditions are equivalent.

Unfortunately, the lattice isn’t distributive. I could work this out directly, but it’s easier to just notice that complements aren’t unique, whereas in a bounded distributive lattice complements (when they exist) are unique. Just consider three subspaces of $\mathbb{F}^2$: $X$ has all vectors of the form $(x, 0)$, $Y$ has all of the form $(0, y)$, and $D$ has all of the form $(d, d)$. Then $X \vee D = \mathbb{F}^2 = Y \vee D$, and $X \wedge D = \mathbf{0} = Y \wedge D$, but $X \neq Y$. So both $X$ and $Y$ are complements of $D$.
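We can verify this example with a little linear algebra, computing joins as ranks of stacked generators and meets via the dimension formula $\dim U + \dim W = \dim(U + W) + \dim(U \cap W)$. A numpy sketch (names are mine):

```python
import numpy as np

# Subspaces of R^2 given as column spans.
X = np.array([[1.0], [0.0]])   # the x-axis
Y = np.array([[0.0], [1.0]])   # the y-axis
D = np.array([[1.0], [1.0]])   # the diagonal

def join_dim(*spaces):
    """Dimension of the sum: rank of the stacked generators."""
    return np.linalg.matrix_rank(np.hstack(spaces))

# Both X and Y join with D to give the whole plane...
assert join_dim(X, D) == 2 and join_dim(Y, D) == 2

# ...and both meet D trivially, by the dimension formula
# dim(U ∩ W) = dim U + dim W - dim(U + W).
assert 1 + 1 - join_dim(X, D) == 0
assert 1 + 1 - join_dim(Y, D) == 0

# So X and Y are two different complements of D, which rules out
# distributivity of the subspace lattice.
assert not np.allclose(X, Y)
```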

This is all well and good, but it’s starting to encroach on Todd’s turf, so I’ll back off a bit. The important bit here is that the sum behaves like a least-upper-bound.

In categorical terms, this means that the sum is a coproduct in the lattice of subspaces (considered as a category). Don’t get confused here! Direct sums are coproducts in the category of all vector spaces, while sums are coproducts in the category (lattice) of subspaces of a given vector space. These are completely different categories, so don’t go confusing coproducts in one with those in the other.

In this case, all we mean by saying this is a categorical coproduct is that we have a description of the sum of two subspaces which doesn’t refer to the elements of the subspaces at all. The sum is the smallest subspace of which contains both and . It is the “smallest” in the sense that any other subspace containing both and must contain . This description from the outside of the subspaces will be useful when we don’t want to get our hands dirty with actual vectors.

## More on the C-G Eversion

Some people had trouble grabbing the whole 50MB file that I posted, so Scott Carter broke it into pieces. He also included these comments:

The red, blue, and purple curves on the large (distorted) spherical objects at the bottom of each page of the eversion are the preimages of the folds (color coded, of course) and the double decker sets. Since at each time the sphere is immersed, it may have double and triple points. Each arc of double points lifts to a pair of arcs on the ambient sphere, and each triple point lifts to three points on the ambient sphere. These lifts are the “decker sets.”

They are obtained via Gauss-Morse codes. Pick a base point and orientation on each curve in a movie. These are chosen consistently from one still to the next. Label the double points and the optima and read the labels as they are encountered upon a single journey around the curve. The labels, too, are chosen consistently from one still to the next. Write these down for each curve in a movie, and connect the letters in the words as the curves change according to the basic changes that occur in each of the movie scenes. These curves then are instructions on how to immerse the ambient sphere to create the illustrations.

Sarah’s thesis computes that the fold set is an annulus, the double point set is the connected sum of three projective planes, and the double decker set is the connected orientation double cover: a genus 2 surface.

So here are the pieces:

- Immersed spheres as movies (2.2 MB)
- The basic movie moves (3.4 MB)
- The eversion from the red side to the quadruple point (19 MB)
- Half of the eversion from the quadruple point halfway to the blue side (24 MB)
- The other half of the eversion from the quadruple point halfway to the blue side (17 MB)

~~There’s a glitch in part 4, so I’ll post that as soon as I can.~~

## The Carter-Gelsinger Eversion

I’ve mentioned *Outside In* before. That video shows a way of turning a sphere inside out. It’s simpler than the first explicit eversions to be discovered, but the simplicity is connected to a high degree of symmetry. This leads to very congested parts of the movie, where it’s very difficult to see what’s going on. Further, many quadruple points — where four sections of the surface pass through the same point — occur simultaneously, and even higher degree points occur. We need a simpler version.

What would constitute “simple” for us, then? We want as few multiple points as possible, and as few at a time as possible. In fact, it would be really nice if we could write it down algebraically, in some sense. But what sense?

Go back to the diagrammatics of braided monoidal categories with duals. There we could draw knots and links to represent morphisms from the monoidal identity object to itself. And topologically deformed versions of the same knot encoded the same morphism. This is the basic idea of the category of tangles.

But if we shift our perspective a bit, we consider the *2*-category of tangles. Instead of saying that deformations are “the same” tangle, we consider explicit 2-isomorphisms between tangles. We’ve got basic 2-isomorphisms for each of the Reidemeister moves, and a couple to create or cancel caps and cups in pairs (duality) and to pull crossings past caps or cups (naturality). Just like we can write out any link diagram in terms of a small finite collection of basic tangles, we can write out any link diagram isotopy in terms of a small finite collection of basic moves.

What does a link diagram isotopy describe? Links (in our picture) are described by collections of points moving around in the plane. As we stack up pictures of these planes the points trace out a link. So now we’ve got links moving around in space. As we stack up pictures of these spaces, the links trace out linked *surfaces* in *four*-dimensional space. And we can describe any such surface in terms of a small collection of basic 2-morphisms in the braided monoidal 2-category of 2-tangles. These are analogous to the basic cups, caps, and crossings for tangles.

Of course the natural next step is to consider how to deform 2-tangles into each other. And we again have a small collection of basic 3-morphisms that can be used to describe any morphisms of 2-tangles. These are analogous to the Reidemeister moves. Any deformation of a surface (which is written in terms of the basic 2-morphisms) can be written out in terms of these basic 3-morphisms.

We can simplify our picture a bit. Instead of knotting surfaces in four-dimensional space, let’s just let them intersect each other in three-dimensional space. To do this, we need to use a *symmetric* monoidal 3-category with duals, since there’s no distinction between two types of crossings.

And now we come back to eversions. We write the sphere as a 2-dimensional cup followed by a 2-dimensional cap. Since we have duals, we can consider one side to be “painted red” and one side “painted blue”. One way of writing the sphere has the outside painted red, and the other has the outside painted blue. An eversion in our language will be an explicit list of 3-morphisms that run from one of these spheres to the other.

Scott Carter and Sarah Gelsinger have now created just such an explicit list of directions to evert a sphere. And, what’s more, they’ve rendered pictures of it! Here, for the first time in public, is a 50MB PDF file showing the Carter-Gelsinger eversion.

First they illustrate the basic pieces of a diagram of knotted surfaces (pp. 1-4). Then they illustrate the basic 2-morphisms that build up surfaces (pp. 5-6), and write out a torus as an example (p. 7). Then come a few more basic 2-morphisms that involve self-intersections (pp. 8-9) and a more complicated immersed sphere (pp. 10-11). Each of these is written out also as a “movie” of self-intersecting loops in the plane. Next come the “movie moves” — the 3-morphisms connecting the 2-morphism “movies” (pp. 12-17). These are the basic pieces that let us move from one immersed surface to another.

Finally, the eversion itself, consisting of the next 79 pages. Each one consists of an immersed sphere, rendered in a number of different ways. On the left is a movie of immersed plane curves. On the top are three views of the sphere as a whole — a “solid” view on the right, a sketch of the double-point curves in the middle, and a “see-through” view on the left. The largest picture on each page is a more schematic view I don’t want to say too much about.

The important thing to see here is that between each two frames of this movie is exactly one movie move. Everything here is rendered into pictures, but we could write out the movie on each page as a sequence of 2-morphisms from the top of the page to the bottom. Then moving from one page to the next we trace out a sequence of 3-morphisms, writing out the eversion explicitly in terms of the basic 3-morphisms. As an added bonus, there’s only ever one quadruple point — where we pass from Red 26 to Blue 53 — and no higher degree points.

I’d like to thank Scott for not only finishing off this rendering he’s been promising for ages, but for allowing me to host its premiere weblog appearance. I, for one, am looking forward to the book, although I’m not sure this one will be better than the movie.

*[UPDATE]* Some people have been having trouble with the whole 50MB PDF (and more people might as the Carnival comes to see this page). Scott Carter broke the file up into five pieces, and I’ve put them up here in a new post. ~~There’s a glitch in part 4, but I’ll have that one up as soon as I can.~~

## The Splitting Lemma

Evidently I never did this one when I was talking about abelian categories. Looks like I have to go back and patch this now.

We start with a short exact sequence:

$$0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$$

A large class of examples of such sequences are provided by the split-exact sequences:

$$0 \to A \xrightarrow{\iota_A} A \oplus C \xrightarrow{\pi_C} C \to 0$$

where these arrows are those from the definition of the biproduct. But in this case we’ve also got other arrows: $\pi_A: A \oplus C \to A$ and $\iota_C: C \to A \oplus C$ that satisfy certain relations — among them $\pi_A \circ \iota_A = 1_A$ and $\pi_C \circ \iota_C = 1_C$.

The lemma says that we can go the other direction too. If we have one arrow $h: B \to A$ so that $h \circ f = 1_A$, then everything else falls into place, and $B \cong A \oplus C$. Similarly, a single arrow $k: C \to B$ so that $g \circ k = 1_C$ will “split” the sequence. We’ll just prove the first one, since the second goes more or less the same way.

Just like with diagram chases, we’re going to talk about “elements” of objects as if the objects are abelian groups. Of course, we don’t really mean “elements”, but the exact same semantic switch works here.

So let’s consider an element $b \in B$ and write it as $b = f(h(b)) + \left(b - f(h(b))\right)$. Clearly $f(h(b))$ lands in $\mathrm{Im}(f)$. We can also check

$$h\left(b - f(h(b))\right) = h(b) - h(f(h(b))) = h(b) - h(b) = 0$$

so $b - f(h(b)) \in \mathrm{Ker}(h)$. That is, any element of $B$ can be written as the sum of an element of $\mathrm{Im}(f)$ and an element of $\mathrm{Ker}(h)$. But these two intersect trivially. That is, if $b = f(a)$ and $h(b) = 0$ then $0 = h(f(a)) = a$, and so $b = f(a) = 0$. This shows that $B \cong \mathrm{Im}(f) \oplus \mathrm{Ker}(h)$. Thus we can write every $b \in B$ uniquely as $b = f(a) + b'$ with $b' \in \mathrm{Ker}(h)$.

Now consider an element $c \in C$. By exactness, there must be some $b = f(a) + b'$ so that $g(b) = c$. But $g(f(a)) = 0$, so $g(b') = c$. That is, we have a unique $b' \in \mathrm{Ker}(h)$ with $g(b') = c$ — unique because any element of $\mathrm{Ker}(g) = \mathrm{Im}(f)$ lying in $\mathrm{Ker}(h)$ is zero, as we just saw. This shows that $\mathrm{Ker}(h) \cong C$. It’s straightforward to show that also $\mathrm{Im}(f) \cong A$. Thus we have split the sequence: $B \cong A \oplus C$.
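The decomposition in the proof can be watched in coordinates for a short exact sequence of finite-dimensional vector spaces. Here is a numpy sketch (all the maps are my own illustrative choices, with $A = \mathbb{R}$, $B = \mathbb{R}^2$, $C = \mathbb{R}$):

```python
import numpy as np

# A short exact sequence 0 -> A -> B -> C -> 0 of vector spaces.
f = np.array([[1.0], [2.0]])    # f: A -> B, injective
g = np.array([[2.0, -1.0]])     # g: B -> C, surjective, with g o f = 0
assert np.allclose(g @ f, 0)

# A left inverse h: B -> A with h o f = 1_A splits the sequence.
h = np.array([[1.0, 0.0]])
assert np.allclose(h @ f, np.eye(1))

# Every b decomposes as f(h(b)) + (b - f(h(b))), with the second
# piece in Ker(h); g identifies Ker(h) with C, so B ~= A (+) C.
b = np.array([[3.0], [7.0]])
b_im  = f @ (h @ b)             # the Im(f) part
b_ker = b - b_im                # the Ker(h) part
assert np.allclose(h @ b_ker, 0)
assert np.allclose(b_im + b_ker, b)
assert np.allclose(g @ b, g @ b_ker)   # g only sees the Ker(h) part
```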