The Unapologetic Mathematician

Mathematics for the interested outsider

Braided Monoidal Categories with Duals

As one last example along these lines, let’s throw all these structures in together. We start with a monoidal category, and we want it to have both a braiding and duals. Naturally, they’ll have to play well together.

Now I’ll want to go back and tweak one of the things I said about duals. I insisted that the duality functor satisfy (M^*)^*\cong M, but we can actually weaken it slightly. What I defined before we can call a “right dual”, since it provides a pairing M\otimes M^*\rightarrow\mathbf{1}. Instead of insisting that (M^*)^*\cong M, we can put in “left duals” by hand. This will be another contravariant functor M\rightarrow {}^*M, and arrows \overline{\epsilon_M}:{}^*M\otimes M\rightarrow\mathbf{1} and \overline{\eta_M}:\mathbf{1}\rightarrow M\otimes{}^*M satisfying the “mirror images” of the identities for \eta and \epsilon.

What does this all have to do with a braiding? Well, we can use a braiding to turn a left dual into a right dual and vice versa. Let’s say we’ve got (right) dual maps \eta_M:\mathbf{1}\rightarrow M^*\otimes M and \epsilon_M:M\otimes M^*\rightarrow\mathbf{1}, along with braiding maps \beta_{M,N}:M\otimes N\rightarrow N\otimes M. Now we can define left dual maps by \overline{\eta_M}=\beta_{M,M^*}^{-1}\circ\eta_M:\mathbf{1}\rightarrow M\otimes M^* and \overline{\epsilon_M}=\epsilon_M\circ\beta_{M^*,M}:M^*\otimes M\rightarrow\mathbf{1}. These will automatically satisfy the left dual axioms. You can either try to show this now using the algebra or hold off a bit until we have a better tool to attack such identities.

Okay, so a braided monoidal category with duals has a braiding and has right duals. Since we define left duals in terms of these structures they’ll automatically play well together. From these we can further build a morphism for each object called the “balancing”. It’s defined by \theta_M=(1_M\otimes\epsilon_M)\circ(\beta_{M,M}\otimes1_{M^*})\circ(1_M\otimes\overline{\eta_M}). We call an object M “unframed” if \theta_M=1_M.
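As a sanity check on the formula, here is a small numerical sketch (my own, not from the post): in finite-dimensional vector spaces with the symmetric braiding given by the swap map, every object turns out to be unframed, so computing \theta_M as a matrix on flattened tensor coordinates should give the identity.

```python
import numpy as np

# Model M = k^n, with the symmetric braiding given by the swap.
# eta_bar(1) = sum_i e_i (x) e_i^* has the same coordinates as vec(I),
# since the swap fixes the "diagonal" tensor.
n = 2
I = np.eye(n)
eta_bar = np.eye(n).reshape(n * n, 1)   # 1 -> M (x) M^*
eps = np.eye(n).reshape(1, n * n)       # M (x) M^* -> 1, evaluation pairing

# beta_{M,M}: the swap on M (x) M, as a matrix on flattened coordinates
swap = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        swap[j * n + i, i * n + j] = 1

# theta_M = (1 (x) epsilon) o (beta (x) 1) o (1 (x) eta_bar)
theta = np.kron(I, eps) @ np.kron(swap, I) @ np.kron(I, eta_bar)
print(np.allclose(theta, I))            # True: the object is unframed
```

In a genuinely braided (non-symmetric) setting the analogous computation produces a nontrivial twist, which is exactly what the framed tangle categories below keep track of.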

I know that there’s a lot of stuff here and it’s hard to remember it all. We need a better way to think about these things. Just like we had for braided categories and categories with duals, we have a diagrammatic rendering of the free braided monoidal category with duals on one object.

First I’ll deal with the free braided monoidal category with duals on one self-dual, unframed object. That is, we start with one object and say it’s its own (left and right) dual and for whom the balancing is the identity. Then we build all the other objects as tensor powers of this one. Then we throw in the braiding morphisms and the duality morphisms and insist they satisfy all the relevant relations. And what we get is the category \mathcal{T}ang of tangles. Here’s an example:
Sample Tangle

As usual we read this from bottom to top. Notice that this means we can read off the algebraic notation just by going top to bottom, left to right, to get (\beta_{1,1}\otimes\epsilon_1)\circ(1_1\otimes\beta_{1,1}\otimes1_1)\circ(\beta_{1,1}\otimes\eta_1):2\rightarrow2. The unframed condition tells us that we can untwist that loop to the right and still have the same tangle.

What if we didn’t ask that our generator be unframed? Then we get the category \mathcal{F}r\mathcal{T}ang of framed tangles, which differs only in that we can’t untwist loops like the one above. It’s like working with knot diagrams where all we can use are Reidemeister moves 2 and 3.

What if our generator is unframed, but isn’t self-dual? Then we get the category \mathcal{OT}ang of oriented tangles. All we do now is label each strand of our tangle diagram with an orientation, like we did in the case of oriented Temperley-Lieb diagrams.

What if our generator is neither unframed nor self-dual? Then we get the category \mathcal{F}r\mathcal{OT}ang of framed, oriented tangles.

Each of these leads to a diagrammatic way of looking at the respective sorts of categories. For example, go back and take the definition of left duals in terms of right duals and write it out as a diagram of framed, oriented tangles. Then write down the condition it must satisfy as an equation between framed oriented tangles. Finally, verify that it satisfies the equation by finding a sequence of Reidemeister moves 2 and 3 from one side to the other (and invoke the definition of right duals at some point).

The category of tangles is very simply described in combinatorial and algebraic terms, but you might not yet have noticed how general it is. To illustrate this, I give with no further comment an example of a tangle from 0 to 0:
Tangle from 0 to 0


July 13, 2007 Posted by | Category theory, Knot theory | 2 Comments

The Temperley-Lieb Category

Okay, after last week’s shake-ups I’m ready to get back into the swing of things. I mentioned yesterday something called the “Temperley-Lieb Category”, and it just so happens we’re right on schedule to explain it properly.

We’ve seen the category of braids and how the braided coherence theorem makes it the “free braided monoidal category on one object”. That is, it has exactly the structure needed for a braided monoidal category — no more, no less — and if we pick any object in another such category \mathcal{C} we get a unique functor from \mathcal{B}raid to \mathcal{C}.

So of course we want the same sort of thing for monoidal categories with duals. We’ll even draw the same sorts of pictures. A point on a horizontal line will be our generating object, but we also need a dual object. So specifically we’ll think of the object as being “going up through the point” and the dual as “going down through the point”. Then we can draw cups and caps to connect an upward line and a downward line and interpret it as a duality map. Notice, though, that we can’t make any curves cross each other because we have no braiding! Here’s an example of such a Temperley-Lieb diagram:
Oriented Temperley-Lieb diagram

Again, we read this from bottom to top, and from left to right. On the bottom line we have a downward line followed by an upward line, which means we start at the object X^*\otimes X. Then we pass through a cap, which corresponds to the transformation \epsilon_{X^*}. Then we go through a cup (\eta_{X^*}) to get to X\otimes X^*, and another cup to get to X\otimes X^*\otimes X\otimes X^*. A cap in the middle (\epsilon_{X^*}) is followed by a cup (\eta_X), and then another pair of caps (\epsilon_X\otimes\epsilon_X). Then we have a cup \eta_{X^*} and another \eta_X to end up at X\otimes X^*\otimes X\otimes X^*.

We could simplify this a bit by cancelling two cup/cap pairs using the equations we imposed on the natural transformations \eta and \epsilon. In fact, this is probably a much easier way to remember what those equations mean. The equations tell us in algebraic terms that we can cancel off a neighboring cup and cap, while the topology of the diagram says that we can straighten out a zig-zag.
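To make the bookkeeping concrete, here is a rough computational sketch (my own encoding, not from the post): a Temperley-Lieb diagram stored as a perfect matching on its boundary points, with composition by gluing. Planarity isn't checked, but it's enough to watch a cup cancel against a cap.

```python
# A Temperley-Lieb diagram from m bottom points to n top points, stored as
# a perfect matching on the boundary points.

def diagram(bot, top, pairs):
    """Build a diagram from endpoint pairs like (('b', 0), ('t', 1))."""
    m = {}
    for a, b in pairs:
        m[a], m[b] = b, a
    return {'bot': bot, 'top': top, 'match': m}

def strand():
    """Identity on one strand (the generating object)."""
    return diagram(1, 1, [(('b', 0), ('t', 0))])

def cup():
    """eta: 0 -> 2, joining the two top points."""
    return diagram(0, 2, [(('t', 0), ('t', 1))])

def cap():
    """epsilon: 2 -> 0, joining the two bottom points."""
    return diagram(2, 0, [(('b', 0), ('b', 1))])

def tensor(d1, d2):
    """Monoidal product: set d2 to the right of d1."""
    def sh(p):
        side, i = p
        return (side, i + (d1['bot'] if side == 'b' else d1['top']))
    m = dict(d1['match'])
    m.update({sh(k): sh(v) for k, v in d2['match'].items()})
    return {'bot': d1['bot'] + d2['bot'], 'top': d1['top'] + d2['top'], 'match': m}

def compose(g, f):
    """g after f (diagrams read bottom to top): glue f's top to g's bottom.
    Returns (diagram, number of closed loops created by the gluing)."""
    assert f['top'] == g['bot']
    visited = set()
    def walk(which, pt):
        while True:
            pt = (f if which == 'f' else g)['match'][pt]
            side, i = pt
            if which == 'f' and side == 't':      # cross the seam upward
                visited.add(i); which, pt = 'g', ('b', i)
            elif which == 'g' and side == 'b':    # cross the seam downward
                visited.add(i); which, pt = 'f', ('t', i)
            else:                                 # reached an outer endpoint
                return ('b', i) if which == 'f' else ('t', i)
    pairs, done = [], set()
    starts = [('f', ('b', i)) for i in range(f['bot'])] + \
             [('g', ('t', i)) for i in range(g['top'])]
    for which, pt in starts:
        lbl = ('b', pt[1]) if which == 'f' else ('t', pt[1])
        if lbl not in done:
            end = walk(which, pt)
            pairs.append((lbl, end))
            done.update({lbl, end})
    loops = 0
    for i in range(f['top']):                     # seam points on no strand
        if i not in visited:                      # lie on closed loops
            loops, j = loops + 1, i
            while True:
                visited.add(j)
                j = f['match'][('t', j)][1]       # arc through the bottom diagram
                visited.add(j)
                j = g['match'][('b', j)][1]       # arc through the top diagram
                if j == i:
                    break
    return diagram(f['bot'], g['top'], pairs), loops

# The zig-zag (1 (x) epsilon) o (eta (x) 1): 1 -> 1 straightens to a strand,
zig, _ = compose(tensor(strand(), cap()), tensor(cup(), strand()))
print(zig['match'][('b', 0)])                     # ('t', 0)
# while a cap on top of a cup closes up into a single free loop.
print(compose(cap(), cup())[1])                   # 1
```

The loop count in the last line is exactly the sort of data that diagrams from 0 to 0 carry; in applications one usually assigns each closed loop a scalar value.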

Incidentally, one feature that’s missing from this diagram is that it’s entirely possible to have an arc (pointing either way) start at the bottom of the diagram and leave at the top.

Now if we have any category \mathcal{C} with duals and an object C we can build a unique functor from the category \mathcal{OTL} of oriented Temperley-Lieb diagrams to \mathcal{C} sending the upwards-oriented line to the object C. It sends the above diagram (for example) to the morphism

C^*\otimes C\rightarrow C\otimes C^*\otimes C\otimes C^*.

Another useful category is the free monoidal category with duals on a single self-dual object. This is the Temperley-Lieb category \mathcal{TL} which looks just the same as \mathcal{OTL} with one crucial difference: since the object X is its own dual, we can’t tell the difference between the two different directions a line could go. Up and down are the same thing. In the algebra this might seem a little odd, but in the diagram all it means is we get to drop the little arrows that tell us which way to go.

And now if we have any category \mathcal{C} with duals and any self-dual object C=C^* we have a unique functor from \mathcal{TL} to \mathcal{C} sending the strand to C. This is how Temperley-Lieb diagrams are turned into (categorified) \mathfrak{sl}_2 representations in Khovanov homology.

July 12, 2007 Posted by | Category theory, Knot theory | 7 Comments

Categories with Duals

Now that we’ve got monoidal categories to categorify the notion of a monoid, we should consider what the proper analogue of an inverse is. What we’ll do is define a “dual object”, which is sort of halfway a left inverse and halfway a right inverse.

So, we say a monoidal category \mathcal{M} “has duals” if for every object M there is a “dual object” M^* and natural transformations \eta_M:\mathbf{1}\rightarrow M^*\otimes M and \epsilon_M:M\otimes M^*\rightarrow\mathbf{1}. We do not ask that these be isomorphisms, since for the best examples they’re actually not.

Instead we ask that the transformations be compatible in the sense that the following equations hold:

\lambda_M\circ(\epsilon_M\otimes1_M)\circ\alpha_{M,M^*,M}^{-1}\circ(1_M\otimes\eta_M)\circ\rho_M^{-1}=1_M

\rho_{M^*}\circ(1_{M^*}\otimes\epsilon_M)\circ\alpha_{M^*,M,M^*}\circ(\eta_M\otimes1_{M^*})\circ\lambda_{M^*}^{-1}=1_{M^*}
The first one means (reading right to left) that if we create a copy of the identity on the right with \rho^{-1}, then hit it with \eta to create M^*\otimes M, then associate, then hit the M\otimes M^* on the left with \epsilon to get the identity, then remove it with \lambda, it’s just the same as doing nothing at all! And there’s a similar interpretation of the other equation. I can hear you furiously scratching your heads and saying “huh?” right now, but I’ll come back to this with another viewpoint later.

Anyhow, we also want this duality to be a functor — a contravariant functor even — which you should have guessed when I called \eta and \epsilon natural transformations. So we need for every f:M\rightarrow N an arrow f^*:N^*\rightarrow M^* so that (f\circ g)^*=g^*\circ f^* and 1_M^*=1_{M^*}. There are also naturality conditions for \eta and \epsilon that you should be able to write down if you think of this as a covariant functor (\underline{\hphantom{X}})^*:\mathcal{M}^\mathrm{op}\rightarrow\mathcal{M}. The functor should also be monoidal, in the sense that (M\otimes N)^*\cong N^*\otimes M^* (why this form? write out the condition explicitly).

The motivating example here is the category \mathbf{FinVect}_k of finite-dimensional vector spaces over a field k. Here we know that tensor product over k gives a monoidal structure, and we’ll use the dual space of linear functionals as our functor. Indeed, if we set M^*=\hom_{\mathbf{Vect}_k}(M,k) we clearly have a contravariant functor. We can verify that it’s monoidal in the proper sense, and that the pairing gives a natural transformation M\otimes M^*\rightarrow\mathbf{1}=k. The other natural transformation required is also there, but we don’t yet have the tools needed (unless you’ve taken some linear algebra on your own). Still, you can keep this example in mind.
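In the finite-dimensional case we can even check the compatibility equations numerically. A sketch (my own coordinates, not from the post): flattening tensor products with the Kronecker product makes the associator the identity on coordinates, so the first equation reduces to a matrix identity.

```python
import numpy as np

# M = k^n; eta(1) = sum_i e_i^* (x) e_i and epsilon is the evaluation
# pairing, both written as matrices on flattened tensor coordinates.
n = 3
eta = np.eye(n).reshape(n * n, 1)        # 1 -> M^* (x) M
eps = np.eye(n).reshape(1, n * n)        # M (x) M^* -> 1

# (epsilon (x) 1_M) o (1_M (x) eta) should be the identity on M
zigzag = np.kron(eps, np.eye(n)) @ np.kron(np.eye(n), eta)
print(np.allclose(zigzag, np.eye(n)))    # True
```

Note that \eta and \epsilon are far from invertible as matrices (they aren't even square), which is why we only ask for this compatibility rather than for isomorphisms.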

Now one thing that should be bugging you is that we have arrows M\otimes M^*\rightarrow\mathbf{1} and \mathbf{1}\rightarrow M^*\otimes M, but there are two more it seems we should have. To handle this, we’ll insist that (M^*)^*\cong M — that duality “is its own inverse” — so that we can just use \eta_{M^*} and \epsilon_{M^*} as the other natural transformations we need. This does indeed hold in our motivating example, but we again won’t prove it completely until later.

Note that there’s no reason to assume that \epsilon_{M^*}\circ\eta_M=1_\mathbf{1} or \epsilon_M\circ\eta_{M^*}=1_\mathbf{1}, so these still aren’t isomorphisms. If they are, though, then we really do have natural isomorphisms to replace the left and right inverse rules in the definition of a group. If you’re feeling up to it, try to state (prove?) a coherence theorem that says what diagrams must commute to ensure they all do.

July 7, 2007 Posted by | Category theory | 10 Comments

The Braided Coherence Theorem

I find myself up earlier than expected this morning, so I thought I’d bang out the proof I’d promised of the “coherence theorem” for braided monoidal categories.

Recall that we’re considering a braided monoidal category \mathcal{M} with underlying “ordinary category” \mathcal{M}_0 — the same category, just forgetting that it’s braided and monoidal. Then the statement is that the category of braided monoidal functors from \mathcal{B}raid to \mathcal{M} is equivalent to \mathcal{M}_0.

To make our lives a bit easier, we’ll use the strictification theorem to pass from the (weak) monoidal category \mathcal{M} to a monoidally equivalent strict monoidal category \mathcal{S}, with underlying category \mathcal{S}_0. Now the braiding on \mathcal{M} induces (via the equivalence) one on \mathcal{S}. The new statement is that the category of strict braided monoidal functors from \mathcal{B}raid to \mathcal{S} is isomorphic to \mathcal{S}_0. Then the result we really want (about \mathcal{M}) will follow.

One direction is easy: we take a functor F and evaluate it at the object 1 to get an object F(1)\in\mathcal{S}_0. A monoidal natural transformation \eta:F\rightarrow G has, in particular, the component \eta_1:F(1)\rightarrow G(1), which makes “evaluate at 1” a functor.

For the other direction, we’ll pick an object S of \mathcal{S}_0 and use it to construct a strictly braided monoidal functor from \mathcal{B}raid to \mathcal{S} that sends 1 to S, and then show that an arrow f:S\rightarrow S' in \mathcal{S}_0 induces a natural transformation between the functors built on S and S'. We’ll see that everything in sight is unique, so this construction actually provides an “on the nose” inverse functor to “evaluate at 1“.

So, we’ve got an object S and we set F_S(1)=S. Then for F_S to be monoidal we must set F_S(n)=S^{\otimes n}, since n is 1^{\otimes n} (remember that the “tensor power” is defined just like regular exponentiation: take the tensor product of n copies of the object in question). This completely defines the functor F_S on objects.

Now any morphism in \mathcal{B}raid is an element of some braid group B_n. Let’s look at simply crossing the left strand over the right in B_2: this is \beta_{1,1}, so to have a strictly braided functor we must set F_S(\beta_{1,1})=\beta_{S,S}. In B_n, the crossings of one strand over the one to its right generate the whole group. If we cross strand i over strand i+1 we’ll call the generator \sigma_i. In terms of the category \mathcal{B}raid, we can write \sigma_i as 1_{i-1}\otimes\beta_{1,1}\otimes1_{n-i-1}, so monoidality forces us to set F_S(\sigma_i)=1_{S^{\otimes i-1}}\otimes\beta_{S,S}\otimes1_{S^{\otimes n-i-1}}.

We do have to check that the relations of the braid group are satisfied. But if |i-j|>1 then we have F_S(\sigma_i\sigma_j)=F_S(\sigma_j\sigma_i) because the monoidal product in \mathcal{S} is a functor. And we see that F_S(\sigma_i\sigma_{i+1}\sigma_i)=F_S(\sigma_{i+1}\sigma_i\sigma_{i+1}) because this is exactly the Yang–Baxter relation, which follows from the hexagon identities (degenerated to triangles in the strict case) and the naturality of the braiding on \mathcal{S}!
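As a quick illustration (my own example, not the post's), take the symmetric monoidal category of finite sets with \beta_{S,S} the swap of factors. Then F_S(\sigma_i) swaps adjacent tensor factors, and we can check both braid group relations by brute force:

```python
from itertools import product

def sigma(i):
    """F_S(sigma_i): swap the i-th and (i+1)-st tensor factors (0-indexed)."""
    def f(xs):
        xs = list(xs)
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
        return tuple(xs)
    return f

def equal(f, g, n):
    """Compare two maps on all n-tuples from a 2-element set S."""
    return all(f(xs) == g(xs) for xs in product((0, 1), repeat=n))

comp = lambda f, g: (lambda xs: f(g(xs)))
s0, s1, s2 = sigma(0), sigma(1), sigma(2)

# far-apart generators commute (|i - j| > 1):
print(equal(comp(s0, s2), comp(s2, s0), 4))                      # True
# the Yang-Baxter relation: s0 s1 s0 = s1 s0 s1
print(equal(comp(s0, comp(s1, s0)), comp(s1, comp(s0, s1)), 4))  # True
```

Of course a symmetric braiding loses the crossing information (s0 composed with itself is the identity here), which is exactly why the braid groups, and not just the symmetric groups, appear in the general theorem.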

So once we pick the single object S=F_S(1) everything else about F_S is uniquely determined. If we have an arrow f:S\rightarrow S' then we must set the component \eta_1 of the induced natural transformation to be f:F_S(1)\rightarrow F_{S'}(1). Then monoidality forces \eta_n:F_S(n)\rightarrow F_{S'}(n) to be f^{\otimes n}. The construction is indeed a functor sending \mathcal{S}_0 to the category of strict braided monoidal functors from \mathcal{B}raid to \mathcal{S}, and it is clearly inverse to evaluation at 1.

July 6, 2007 Posted by | Category theory, Knot theory | Leave a comment

The Category of Braids

Before I jet off for a week of knot homology — where I’m looking forward to seeing Scott Morrison, Jacob Rasmussen, and Ben Webster of the Secret Blogging Seminar — I thought I’d finally start tying in (sorry) this category theory stuff with knot theory.

As I said yesterday, the analogue of commutativity for a monoidal category is called a braiding. The name comes from a deep connection between braided monoidal categories and braid groups. Specifically, there is a (strict) braided monoidal category of braids: \mathcal{B}raid. The objects of this category are the natural numbers, and \hom_{\mathcal{B}raid}(m,n) is the group B_n if m=n and empty otherwise. We think of a braid with n strands as taking a collection of n points and shuffling them around, and we draw a diagram like this:
Sample Braid
As a side note, I always intend diagrams like these to be read from bottom to top, though other authors go the other way.

First, the monoidal structure. We don’t need any of the structural isomorphisms because we’re making a strict monoidal structure, but we do need a bifunctor \underline{\hphantom{X}}\otimes\underline{\hphantom{X}}. On the objects it’s just addition. For morphisms we need a way of taking a braid with m strands and one with n strands and making a braid with m+n strands. We’ll just stand both of our braids next to each other like this:
The Monoidal Product of Braids

Now we need a braiding. That is, for each pair of objects (m,n) we need a morphism (a braid) from m+n to n+m that “commutes” with all the other morphisms. We’ll just pass the m strands from the left over the n strands from the right. Here’s what I mean for m=3 and n=2:
A Sample Braiding

This is natural because if we stick on a braid below the leftmost three strands on the bottom, the Reidemeister moves will let us pull it up and over the other two strands until it’s sitting on top of the three strands at the right on the top. Similarly, a braid can be pulled along the undercrossing strands. Since our monoidal structure is strict, the hexagon identities degenerate into triangles, and the proof is just the following diagram:
The Hexagon Identity for Braids

So we indeed have a strict braided monoidal category. It turns out to be extremely important because of the following theorem of Joyal and Street:

If \mathcal{M} is a braided monoidal category with underlying category \mathcal{M}_0 (forget the monoidal structure and the braiding to just have a regular old category), then the category of braided monoidal functors from \mathcal{B}raid to \mathcal{M} is equivalent to \mathcal{M}_0.

This is sort of like a coherence theorem for braided monoidal categories: for each natural isomorphism built from \alpha, \lambda, \rho, and \beta there is an “underlying braid”, and two such isomorphisms are equal if and only if they have the same underlying braid. I’ll defer the proof of this for now, but you should think about it a bit. The details aren’t that bad, but the basic idea just leaps out at you after a bit.

July 3, 2007 Posted by | Category theory, Knot theory | 2 Comments

Braidings and Symmetries

So we’ve got monoidal categories that categorify the notion of monoids. In building up, we had to weaken things, refusing to talk about “equalities” of objects. Instead we replaced them with (natural) isomorphisms and then talked about equalities of morphisms. Now we’re ready to specialize a bit.

Many monoids we’re interested in are commutative: ab=ba for all elements a and b of our monoid M. So what does this look like up at the category level? We learned our lesson with associativity, so we already know we should look for a natural isomorphism with components \beta_{A,B}:A\otimes B\rightarrow B\otimes A to swap objects around the monoidal product.

Of course, there’s a coherence condition:
The Hexagon Identity
If we start with (A\otimes B)\otimes C and want to get to B\otimes (C\otimes A), we could pull A past B, associate, and then pull A past C. We could also associate, pull A past B\otimes C, and associate again. And, naturally, we want these two to be the same. There’s another coherence condition we’ll need, but first let’s notice something.

Since \beta_{A,B}:A\otimes B\rightarrow B\otimes A is an isomorphism, there’s an inverse \beta_{A,B}^{-1}:B\otimes A\rightarrow A\otimes B. Of course, we also have \beta_{B,A}:B\otimes A\rightarrow A\otimes B sitting around. Now the obvious thing is for these two to be the same, but actually that’s not what we want. In fact, there are very good reasons to allow them to be different. So in general we’ll have two different ways of pulling A past B, either with a \beta or with a \beta^{-1}, and both of them should satisfy the hexagon identity shown above. That’s our second coherence condition.

We call such a natural transformation satisfying (both forms of) the hexagon identity a “braiding”, and a monoidal category equipped with one a “braided monoidal category”, for a very good reason I’ll talk about tomorrow (hint). Now if by chance the braiding \beta is its own inverse, we call it a “symmetry”, and call the category a “symmetric monoidal category”.

We say that a monoidal functor F:\mathcal{C}\rightarrow\mathcal{D} is braided (symmetric) if its monoidal structure plays well with both braidings. That is, if F_{(B,A)}\circ\beta_{F(A),F(B)}=F(\beta_{A,B})\circ F_{(A,B)}. Notice that if F is strictly monoidal this just says that F(\beta_{A,B})=\beta_{F(A),F(B)}.

Both of these kinds of categories give back commutative monoids when we decategorify. This tells us that even though commutativity is relatively straightforward down at the level of sets, categorification lets us tease apart a subtle distinction and get a better insight into what’s “really going on”.

We know that any category with finite products (or coproducts) can use them as a monoidal structure. As an exercise, use the universal properties of products to come up with a braiding, then show that it’s symmetric.

Another exercise: show that any braiding satisfies \rho_A=\lambda_A\circ\beta_{A,\mathbf{1}} and \lambda_A=\rho_A\circ\beta_{\mathbf{1},A}. That is, when we have a braiding the left and right units are determined by each other and the braiding.

July 2, 2007 Posted by | Category theory | 6 Comments

The “Strictification” Theorem

I want to get in another post in the current line before I head off to Portugal on Tuesday. This theorem is another heavy one to prove, so if the proof of the Coherence Theorem was overwhelming you might just want to skim this one and remember the result: Any monoidal category \mathcal{M} is monoidally equivalent to a strict monoidal category \mathcal{S} — one whose objects form an honest monoid without any “up to isomorphism” dodging about. This will justify the usual laxity with which we treat associativity and unit laws in monoidal categories, writing A\otimes B\otimes C\otimes\cdots without regard to parentheses and such.

When we naively write down a monoidal product like this we’re treating it as a list of objects of \mathcal{M}, and we take the product of two lists just by concatenation. So let’s start by defining the collection of objects of \mathcal{S} to be the free monoid on the objects of \mathcal{M}, which is exactly this collection of lists. We’ll write a typical list as A=[A_1,A_2,...,A_m]. Then we have the concatenation product \left[A_1,...,A_m\right]\cdot\left[B_1,...,B_n\right]=\left[A_1,...,A_m,B_1,...,B_n\right] and the empty list \left[\right] for an identity object. All the monoid rules hold “on the nose” already, so we don’t need to worry about associators and all.

We don’t have any arrows in our category yet, but let’s press on a bit first to define a map F:\mathcal{S}\rightarrow \mathcal{M}. If this has any hope of being the object function of a monoidal functor we’ll need F([])=\mathbf{1}. We also need to figure out how F should deal with a monoidal product. What we’re going to do is just choose to make the result have all its parentheses on the right. We define F([A_1,A_2,...])=A_1\otimes F([A_2,...]) to get (for example) F([A_1,A_2,A_3,A_4])=A_1\otimes(A_2\otimes(A_3\otimes A_4)).
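The recursion defining F on objects is easy to mimic in code. A little sketch (the names are mine): lists of object names go to fully right-parenthesized products, with the empty list going to the unit.

```python
def F(objs, unit="1"):
    """Send a list of objects to the right-parenthesized monoidal product."""
    if not objs:
        return unit                        # F([]) = 1
    if len(objs) == 1:
        return objs[0]                     # F([A]) = A
    return f"({objs[0]} x {F(objs[1:])})"  # F([A, rest]) = A (x) F(rest)

print(F(["A1", "A2", "A3", "A4"]))         # (A1 x (A2 x (A3 x A4)))
```

The point of the construction is that concatenating lists and then applying F differs from applying F and then taking the product only by a reparenthesization, and the Coherence Theorem supplies a unique isomorphism between the two.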

Now we’ll go back and add arrows to \mathcal{S} in just such a way as to make F into a monoidal functor. Given two lists A and B in \mathcal{S} I’ll define \hom_\mathcal{S}(A,B)=\hom_\mathcal{M}(F(A),F(B)). Notice that this makes the functor F fully faithful by definition, and that every object M in \mathcal{M} is F([M]), so we’re well on our way to having an equivalence of categories!

What’s missing now? Well, we still don’t know quite how to take the monoidal product of arrows in \mathcal{S}, so it’s still not quite a monoidal category yet. We can use the isomorphisms of hom-sets we just set up to do this. Given f:A\rightarrow C and g:B\rightarrow D in \mathcal{S} their monoidal product f\cdot g:A\cdot B\rightarrow C\cdot D is an arrow in \hom_\mathcal{M}(F(A\cdot B),F(C\cdot D)). We define it to be the composite:
F(A\cdot B)\rightarrow F(A)\otimes F(B)\rightarrow F(C)\otimes F(D)\rightarrow F(C\cdot D)
Here the middle arrow is the monoidal product of f and g in \mathcal{M}, and the outer two arrows are the (unique!) ways to change parentheses as needed. The uniqueness follows from the Coherence Theorem, so we have a well-defined monoidal product on \mathcal{S}.

So now F is a monoidal functor. We take F_* to be the identity F([])=\mathbf{1}, and we let F_{(A,B)}:F(A)\otimes F(B)\rightarrow F(A\cdot B) be the (unique!) arrow moving all the parentheses to the right. The Coherence Theorem gives us this uniqueness, and also handles the diagrams required for a monoidal functor.

Now for the other direction we’ll just go ahead and define G:\mathcal{M}\rightarrow\mathcal{S} by setting G(A)=[A] and G(f)=f. We also define G_*:[]\rightarrow[\mathbf{1}] by noticing that G_*\in\hom_\mathcal{M}(\mathbf{1},\mathbf{1}), so we can just use the identity. We define G_{(A,B)}:[A,B]\rightarrow[A\otimes B] similarly.

I’ll leave the cleanup to you: G does satisfy the coherence conditions for a monoidal functor, F\circ G:\mathcal{M}\rightarrow\mathcal{M} is the identity functor, and there is a monoidal natural isomorphism from G\circ F to the identity functor on \mathcal{S}. It’s all pretty straightforward, though you’ll have to appeal to the Coherence Theorem a couple times like we have above.

July 1, 2007 Posted by | Category theory | 3 Comments

Monoidal Functors and Natural Transformations

Now that we’ve got monoidal categories as the category-level analogue of monoids we need the category-level analogue of monoid homomorphisms. Recall that a monoid homomorphism is a function from the underlying set of one monoid to that of another that preserves the monoid compositions and identities. Since functors are the category-level version of functions, the natural thing to do is to ask for functors that preserve the monoidal category structure.

Of course it can’t be just that simple. As usual, we want to only ask for things to work out “up to isomorphism”. That is, given monoidal categories \mathcal{C} and \mathcal{D} — I’ll write \otimes and \mathbf{1} for both monoidal products and both identities and trust context to make clear which is which — we don’t want to insist that a functor F:\mathcal{C}\rightarrow\mathcal{D} satisfies F(\mathbf{1})=\mathbf{1} and F(A\otimes B)=F(A)\otimes F(B) and so on. Instead we want natural isomorphisms to replace these equations.

So there should be an isomorphism F_*:\mathbf{1}\rightarrow F(\mathbf{1}) in \mathcal{D}. As specified objects of \mathcal{D}, both \mathbf{1} and F(\mathbf{1}) are given by functors from the category \bullet to \mathcal{D}, and an isomorphism between the objects is automatically a natural isomorphism between these functors.

There should also be a natural isomorphism F_{(--)}:F(\underline{\hphantom{X}})\otimes F(\underline{\hphantom{X}})\rightarrow F(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}). That is, for every pair of objects A and B of \mathcal{C} there’s an isomorphism F_{(A,B)}:F(A)\otimes F(B)\rightarrow F(A\otimes B) in \mathcal{D}, and these commute with arrows A\rightarrow A' and B\rightarrow B' in \mathcal{C}.

And, strictly speaking, we need such a natural isomorphism for every functor built from \underline{\hphantom{X}}\otimes\underline{\hphantom{X}} and \mathbf{1}. For example, there should be a natural isomorphism F_{(-((1-)-))}:F(\underline{\hphantom{X}})\otimes((F(\mathbf{1})\otimes F(\underline{\hphantom{X}}))\otimes F(\underline{\hphantom{X}}))\rightarrow F(\underline{\hphantom{X}}\otimes((\mathbf{1}\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}})). And, of course, all these isomorphisms should fit together well. This makes for a lot of data, and how are we going to manage it all?

The Coherence Theorem will come to our rescue! There’s a unique natural isomorphism built from \alpha’s, \lambda’s, and \rho’s taking (say) \underline{\hphantom{X}}\otimes((\mathbf{1}\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}}) to \underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}) in \mathcal{C} (call it \gamma) and another unique such isomorphism in \mathcal{D} (call it \delta). So we can build the above isomorphism F_{(-((1-)-))} as F(\gamma^{-1})\circ F_{(-(--))}\circ\delta. Now all we need is the natural isomorphisms for completely right-parenthesized products with no slots filled with identity objects.

And we can build these from the natural isomorphisms F_{(--)} and F_* (and make sure all the isomorphisms fit together well) — as long as these satisfy a few coherence conditions to make sure they play well with the associator and other structural isomorphisms:
Coherence Conditions for Monoidal Functors

Similarly to the terminology for monoidal categories, if the natural isomorphisms F_* and F_{(--)} are actually identities (and thus so are all the others built from them) then we call F a “strict monoidal functor”. Alternatively, some prefer to call what I’ve defined above a “weak monoidal functor”.

We aren’t quite done yet, though. At the level of sets we have sets and functions between them. Up at the category level we have categories, functors, and now also natural transformations, so we should also focus on the natural transformations that preserve monoidal structures. A monoidal natural transformation between monoidal functors \eta:F\rightarrow G is a natural transformation satisfying G_{(A,B)}\circ(\eta_A\otimes\eta_B)=\eta_{A\otimes B}\circ F_{(A,B)} and G_*=\eta_\mathbf{1}\circ F_*.

Now we have all the analogous tools for monoidal categories as we used for just plain categories to define equivalence. We say that two monoidal categories \mathcal{C} and \mathcal{D} are monoidally equivalent if there are monoidal functors F:\mathcal{C}\rightarrow\mathcal{D} and G:\mathcal{D}\rightarrow\mathcal{C} with monoidal natural isomorphisms \eta:1_\mathcal{C}\rightarrow G\circ F and \epsilon:F\circ G\rightarrow1_\mathcal{D}.

It’s useful to verify at this point that the set of isomorphism classes of objects of a monoidal category \mathcal{C} form a monoid C with \otimes as its composition and (the isomorphism class of) \mathbf{1} as identity, a monoidal functor F:\mathcal{C}\rightarrow\mathcal{D} induces a homomorphism of monoids f:C\rightarrow D, and an equivalence between \mathcal{C} and \mathcal{D} induces an isomorphism between the monoids C and D. These are all pretty straightforward, and they’re good for getting used to the feel of how equalities weaken to isomorphisms as we move from sets to categories.

June 30, 2007 Posted by | Category theory | 1 Comment

Mac Lane’s Coherence Theorem

Okay, as I promised yesterday, I’m going to prove the Coherence Theorem for monoidal categories today. That is, any two natural transformations built from \alpha, \lambda, and \rho between any two functors built from \underline{\hphantom{X}}\otimes\underline{\hphantom{X}} and \mathbf{1} are equal once we know the pentagon and triangle identities are satisfied.

Here’s an example. Brace yourself for the diagram.
A Diagram for the Coherence Theorem

Okay, the outer pentagon is the pentagon identity for the quadruple (A,B,\mathbf{1},\mathbf{1}). The triangles in the upper-right and on the left are the triangle identity. The quadrilaterals are all naturality squares for \alpha and \rho. And the last triangle near the center is the identity we’re proving. It says that \rho_{A\otimes B}=(1_A\otimes\rho_B)\circ\alpha_{A,B,\mathbf{1}}. There’s a similar diagram to show that \lambda_A\otimes1_B=\lambda_{A\otimes B}\circ\alpha_{\mathbf{1},A,B}, which you should find.

Now we’re ready to really get down to work. We can get any functor built from \underline{\hphantom{X}}\otimes\underline{\hphantom{X}} and \mathbf{1} by writing down a parenthesized sequence of multiplications like \underline{\hphantom{X}}\otimes((\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}))\otimes\underline{\hphantom{X}}) and filling some of the open slots with \mathbf{1}.

First, let’s get rid of those identity objects. Each of them shows up either to the right or to the left of some product symbol, so we can use either a \rho or a \lambda to remove it, but there’s some choice involved here. For example, near the \mathbf{1} we’re trying to get rid of, the functor might look like (\underline{\hphantom{X}}\otimes\mathbf{1})\otimes\underline{\hphantom{X}}. We can either use a \rho right away, or we can hit it with an associator and then use a \lambda. The triangle identity says that these two ways of removing the \mathbf{1} are the same. Similarly, if nearby the functor looks like (\mathbf{1}\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}} or (\underline{\hphantom{X}}\otimes\underline{\hphantom{X}})\otimes\mathbf{1} we have some leeway, but the two new triangles we proved above say we’ll get the same natural transformation to remove the \mathbf{1} either way. The upshot is that any two natural transformations we use to reach a functor without any slots filled with identity objects are the same.

So now we just have to deal with functors built from monoidal products. Each of them is just a different way to parenthesize a multiplication of some number of variables, and the associator (or its inverse) lets us move between different ways of parenthesizing the same number of variables. One convenient way of parenthesizing is to have all pairs of parentheses at the end, like \underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}))), and what we’re going to do is try to make every functor look like this.
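To make this concrete, here’s a quick Python sketch of the normal form we’re aiming for, representing a parenthesized product as nested 2-tuples (a representation I’m inventing just for illustration): every parenthesization flattens to the same fully right-parenthesized shape.

```python
def right_associate(t):
    """Flatten a nested-tuple product, then rebuild it with all
    parentheses gathered to the right."""
    def leaves(u):
        if isinstance(u, tuple):
            return leaves(u[0]) + leaves(u[1])
        return [u]
    xs = leaves(t)
    out = xs[-1]
    for x in reversed(xs[:-1]):
        out = (x, out)
    return out

# Two different parenthesizations of a*b*c*d*e land on the same normal form:
v = ((('a', 'b'), 'c'), ('d', 'e'))
w = ('a', (('b', 'c'), ('d', 'e')))
assert right_associate(v) == right_associate(w) == ('a', ('b', ('c', ('d', 'e'))))
```
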

Now imagine a big diagram with all the different ways of parenthesizing a product of n terms as its vertices, and draw an arrow from one vertex to another if you can go from the first to the second by applying one associator. When n=4, this is just the pentagon diagram. Try actually drawing the one for n=5 (there should be 14 vertices). We’re allowed to walk forwards or backwards along the arrows, since the associator is an isomorphism. Every path is a natural isomorphism between the two functors at its ends, since it’s a composition of natural isomorphisms. What the Coherence Theorem says is that any two paths between the same two points give the same natural isomorphism.
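If you want to check the vertex count without drawing, here’s a little Python enumeration (again treating a parenthesization as nested 2-tuples): it recovers the five vertices of the pentagon for n=4 and the fourteen for n=5, following the Catalan numbers.

```python
def parenthesizations(xs):
    """All ways to fully parenthesize the sequence xs with a binary product,
    as nested 2-tuples."""
    if len(xs) == 1:
        return [xs[0]]
    out = []
    # Split at every position, then parenthesize each side recursively.
    for i in range(1, len(xs)):
        for left in parenthesizations(xs[:i]):
            for right in parenthesizations(xs[i:]):
                out.append((left, right))
    return out

# n = 4 gives the five vertices of the pentagon diagram...
assert len(parenthesizations(('a', 'b', 'c', 'd'))) == 5
# ...and n = 5 gives the 14 vertices mentioned above.
assert len(parenthesizations(('a', 'b', 'c', 'd', 'e'))) == 14
```
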

Let’s say that we have a path from a vertex V in this diagram to a vertex W, which might consist of some forward arrows and some reverse arrows. Each time we change directions from forward to reverse we can pick a path using only forward arrows from the turning point to the vertex R, which has all its parentheses to the right. Here’s a sketch of what I’m talking about:
Another Diagram for the Coherence Theorem
Each arrow here is a path consisting of only forward arrows. We start with the path along the top. At the two points where we change from walking forwards along the arrows to walking backwards, we pick a path made up of only forward arrows to the point R. Now the path we started with is the same as walking forwards a step from V, down to R, back the way we just came (undoing the isomorphism), forwards to the next juncture, down to R and back, and on to W.

It seems we’ve made our lives more complicated by adding in these side-trips to R, just to undo them immediately, but something really nice happens. There are two paths here from the middle point of our original path down to R, and each of them consists only of forward arrows. If these two paths are the same, I can cut out that whole middle loop. Then our path walks straight down from V to R along forward arrows, then back up to W along backward arrows. If there are more changing points in the original path, then there are just more loops to cut out. So if any two paths from a vertex to R consisting of only forward arrows give the same isomorphism, then every path from V to W is the same as one passing through R. And even better, the paths from V and W to R themselves consist entirely of forward arrows, so there’s only one of each of them. That is, if we can show that any two paths made of forward arrows from an arbitrary vertex V to the fixed vertex R give the same natural isomorphism, then we’re done.

Now here’s where it gets a bit messy. We’ll need to use inductions on both the number of terms in our functors and on how close they are to R. We know the Coherence Theorem works up to n=4 terms, because that’s the pentagon identity. We’ll prove it for n terms assuming it’s known for fewer than n terms and work our way up. Similarly, for a fixed n we’ll start by establishing the result for the vertices closest to R and work our way out.

Let’s start with a functor V, which is the product of functors V_1 and V_2. Let \beta and \gamma be two arrows out of V to functors X and Y, respectively, which then continue on along paths to R. We need to find a Z and paths from X and Y to Z to make the square commute. Since X and Y are both closer than V to R, we can assume that both paths from X to R in the diagram give the same isomorphism, as do both paths from Y to R. Then since Z is even closer to R we can assume there’s only one transformation from Z to R, so the result follows for V. Here’s another sketch:
Yet Another Diagram for the Coherence Theorem

If \beta and \gamma are already the same there’s nothing to be done. Pick Z=X=Y and go home. Otherwise there are three possibilities for \beta:

  • \beta applies an associator inside V_1
  • \beta applies an associator inside V_2
  • V_1=V_{11}\otimes V_{12}, and \beta:(V_{11}\otimes V_{12})\otimes V_2\rightarrow V_{11}\otimes(V_{12}\otimes V_2) is an associator

and the same three for \gamma.

If \beta and \gamma both act within V_1 or within V_2 then we’re done because of the induction on the number of terms. If \beta acts in one factor and \gamma in the other then they commute because \otimes is a functor of two variables. All that’s left is the case where one of the two (say \beta) is the top-level associator and the other acts inside V_1 or V_2.

If \gamma acts inside V_2, sending it to V_2', then we get (1_{V_{11}}\otimes(1_{V_{12}}\otimes\gamma))\circ\alpha_{V_{11},V_{12},V_2}=\alpha_{V_{11},V_{12},V_2'}\circ((1_{V_{11}}\otimes1_{V_{12}})\otimes\gamma) by the naturality of \alpha. A similar statement holds if \gamma acts inside V_{11} or V_{12}. Finally, if \gamma is itself the associator \alpha_{V_{111},V_{112},V_{12}}\otimes1_{V_2} applied at the top of V_1, then \beta and \gamma are actually the first steps around the pentagon identity, so we can use the far vertex of the pentagon as our Z.

And after all these cases, we’re done. If two paths start out differently from V we can guide them back together so that the square where they differed commutes, and we can keep doing this until we get to R. Thus there’s only one natural isomorphism built from associators taking a functor to the fully right-parenthesized functor on the same number of terms.

Thus there’s only one natural isomorphism built from associators between any two functors built from monoidal products on the same number of terms.

Thus any two natural transformations built from \alpha, \lambda, and \rho between any two functors built from \underline{\hphantom{X}}\otimes\underline{\hphantom{X}} and \mathbf{1} are equal, given that the pentagon and the triangle identities are satisfied.

Now breathe.

June 29, 2007 Posted by | Category theory | 15 Comments

Monoidal categories

We know that monoids are one of the most basic algebraic structures on which many others are built. Naturally, they’re one of the first concepts we want to categorify. That is, we want to consider a category with some extra structure making its objects behave like a monoid.

So let’s charge ahead and try to write down what this means. We need some gadget that takes two objects and spits out another. The natural thing to use here is a functor \underline{\hphantom{X}}\otimes\underline{\hphantom{X}}:\mathcal{C}\times\mathcal{C}\rightarrow\mathcal{C}. We’re using the same symbol we did for tensor products — and for a good reason — but we don’t need it to be that operation.

Now we need this functor to satisfy a couple of rules to make it like a monoid multiplication. It should be associative, so (A\otimes B)\otimes C=A\otimes(B\otimes C) for all objects A, B, and C in \mathcal{C}. There should be an “identity” object \mathbf{1} so that A\otimes\mathbf{1}=A=\mathbf{1}\otimes A for all objects A\in\mathcal{C}.

We know that the natural numbers \mathbb{N} form a monoid under multiplication with 1 as the identity, and we know that the category \mathbf{FinSet} of finite sets categorifies the natural numbers with Cartesian products standing in for multiplication. So let’s look at it to verify that everything works out. We use \times as our monoidal structure and see that (A\times B)\times C=A\times(B\times C)… but it doesn’t really. On the left we have the set \{((a,b),c)|a\in A,b\in B,c\in C\}, and on the right we have the set \{(a,(b,c))|a\in A,b\in B,c\in C\}, and these are not the same set. What happened?
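You can watch this failure happen in a few lines of Python (with small made-up sets): the two iterated products are different sets of tuples, though the evident reassociation is a bijection between them.

```python
from itertools import product

A, B, C = {'a'}, {0, 1}, {'x', 'y'}

left  = set(product(set(product(A, B)), C))   # elements look like ((a, b), c)
right = set(product(A, set(product(B, C))))   # elements look like (a, (b, c))

# The two sets are not literally equal...
assert left != right
# ...but the evident reassociation is a bijection between them:
alpha = {((a, b), c): (a, (b, c)) for ((a, b), c) in left}
assert set(alpha.values()) == right
assert len(alpha) == len(left) == len(right)
```
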

The problem is that the results are not the same, but are only isomorphic. The monoid conditions are equations

  • (a\cdot b)\cdot c=a\cdot(b\cdot c)
  • a\cdot1=a=1\cdot a

So when we categorify the concept we need to replace these by natural isomorphisms

  • \alpha_{A,B,C}:(A\otimes B)\otimes C\rightarrow A\otimes(B\otimes C)
  • \lambda_A:\mathbf{1}\otimes A\rightarrow A
  • \rho_A:A\otimes\mathbf{1}\rightarrow A

These say that while the results of the two functors on either side of the arrow might not be the same, they are isomorphic. Even better, the isomorphism should commute with arrows in \mathcal{C}, as described by the naturality squares. For instance, if we have an arrow f:A_1\rightarrow A_2 in \mathcal{C} then we can apply it before or after \lambda: \lambda_{A_2}\circ(1_\mathbf{1}\otimes f)=f\circ\lambda_{A_1} as arrows from \mathbf{1}\otimes A_1 to A_2.
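Here’s that naturality square checked concretely in \mathbf{FinSet}, in a Python sketch with a made-up arrow f: \lambda is just projection away from the singleton factor, and applying f before or after \lambda gives the same answer.

```python
from itertools import product

star = ('*',)                   # a singleton playing the role of the object 1
A1 = {1, 2, 3}
A2 = {'x', 'y'}
f = {1: 'x', 2: 'y', 3: 'x'}    # an arbitrary arrow f : A1 -> A2, as a dict

def lam(pair):
    """lambda_A : 1 x A -> A, projection away from the singleton."""
    return pair[1]

# Naturality square: lambda_{A2} o (1_1 x f) = f o lambda_{A1}
for p in product(star, A1):
    assert lam((p[0], f[p[1]])) == f[lam(p)]
```
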

As a side note, the isomorphism \alpha is often called the “associator”, but I don’t know of a similarly catchy name for the other two isomorphisms. When we’ve “weakened” the definition of a monoidal category like this we sometimes call the result a “weak monoidal category”. Alternatively — and this is the convention I prefer — we call these the monoidal categories, and the above definition with equalities instead of just isomorphisms gives “strict monoidal categories”.

Unfortunately, we’re not quite done with revising our definition yet. We’ll be taking our tensor products and identity objects and stringing them together to make new functors, and similarly we’ll be using these natural isomorphisms to relate these functors, but we need to make sure that the relationship doesn’t depend on how we build it from the basic natural isomorphisms. An example should help make this clearer.

Pentagon Diagram

This is the pentagon diagram. The vertices of the pentagon are the five different ways of parenthesizing a product of four different objects. The edges are single steps, each using one associator. Around the left, we apply the associator to the first three factors and leave D alone (use the identity arrow 1_D), then we apply the associator to A, B\otimes C, and D, and finally we apply the associator to the last three factors and leave A alone. Around the right, we apply the associator twice, first to A\otimes B, C, and D, and then to A, B, and C\otimes D. So we have two different natural isomorphisms from ((\underline{\hphantom{X}}\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}} to \underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}})). And we have to insist that they’re the same.
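For what it’s worth, here’s the pentagon checked element-by-element for the Cartesian product in \mathbf{FinSet} (a Python sketch with small made-up sets): both routes around the pentagon send (((a,b),c),d) to (a,(b,(c,d))).

```python
from itertools import product

def alpha(t):
    """The associator ((x, y), z) -> (x, (y, z)), componentwise."""
    (x, y), z = t
    return (x, (y, z))

A, B, C, D = {'a'}, {0, 1}, {'x'}, {True, False}

for t in product(product(product(A, B), C), D):   # elements (((a,b),c),d)
    # Around the right: the associator twice.
    right_way = alpha(alpha(t))
    # Around the left: associator on the first three factors, then in the
    # middle, then on the last three factors.
    (abc, d) = t
    step1 = (alpha(abc), d)        # alpha_{A,B,C} x 1_D
    step2 = alpha(step1)           # alpha_{A, BxC, D}
    (a, (bc, dd)) = step2
    step3 = (a, alpha((bc, dd)))   # 1_A x alpha_{B,C,D}
    assert step3 == right_way
```
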

Here’s another example:
Triangle Diagram

This triangle diagram is read the same as the pentagon above: we have two different natural transformations from (\underline{\hphantom{X}}\otimes\mathbf{1})\otimes\underline{\hphantom{X}} to \underline{\hphantom{X}}\otimes\underline{\hphantom{X}}, and we insist that they be the same.
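The triangle can be checked in \mathbf{FinSet} the same way (another Python sketch with made-up sets): going around through the associator and then killing the \mathbf{1} with \lambda gives the same pair as using \rho directly.

```python
from itertools import product

star = ('*',)                    # singleton standing in for the object 1
A, B = {1, 2}, {'x', 'y'}

def alpha(t):
    """The associator ((x, y), z) -> (x, (y, z)), componentwise."""
    (x, y), z = t
    return (x, (y, z))

# Triangle identity: (1_A x lambda_B) o alpha_{A,1,B} = rho_A x 1_B
for t in product(product(A, star), B):
    (a, s), b = t
    x, (s2, y) = alpha(t)        # associate, then project away the singleton
    assert (x, y) == (a, b)      # same as applying rho_A x 1_B directly
```
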

What’s happened is we’ve replaced equations at the level of sets with (natural) isomorphisms at the level of the category, but these isomorphisms are now subject to new equations. We’ve seen two examples of these new equations, and it turns out that all the others follow from these two. I’ll defer the justification of this “coherence theorem” until later.

For now, let’s go back to \mathbf{FinSet}. We can use the universal property of the product to give an arrow \alpha_{A,B,C}:(A\times B)\times C\rightarrow A\times(B\times C), and we can verify that these form the components of a natural isomorphism. Similarly, we can use the singleton * as an identity object and determine isomorphisms \lambda_A:*\times A\rightarrow A and \rho_A:A\times *\rightarrow A. They do indeed satisfy the pentagon and triangle identities above, making \mathbf{FinSet} into a monoidal category.

In fact, you could establish the associator and other isomorphisms for \mathbf{FinSet} by looking at the elements of the sets and defining particular functions, but if we do it all by the universal properties of products and terminal objects we get a great generalization: any category with finite products (in particular, pairwise products and a terminal object) can use them as a monoidal structure. Dually, any category with finite coproducts can use them as a monoidal structure.
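As a sketch of the dual statement, here’s the disjoint union (the coproduct in \mathbf{FinSet}) as a monoidal product in Python, with tags to keep the summands apart; the empty set plays the role of \mathbf{1} up to the evident isomorphism. The tagging convention here is just one choice of implementation.

```python
# Disjoint union of finite sets: tag each element with which summand it
# came from, so overlapping elements never collide.
def disjoint_union(A, B):
    return {(0, a) for a in A} | {(1, b) for b in B}

A, B = {'x'}, {'x', 'y'}
U = disjoint_union(A, B)

# Sizes add (coproduct), unlike the ordinary union where 'x' would collide:
assert len(U) == len(A) + len(B)
# The empty set is the identity object up to the evident isomorphism:
assert {a for (_, a) in disjoint_union(A, set())} == A
```
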

For any ring R, the category R\mathbf{-mod-}R of all R-R bimodules has a monoidal structure given by \otimes_R, and because of this monoidal categories are often called “tensor categories” and the monoidal structure a tensor product.

June 28, 2007 Posted by | Category theory | 22 Comments