I don’t know about you, but all this algebraic notation starts to blur together. Wouldn’t it be nice if we could just draw pictures?
Well luckily for us we can! Just like we had diagrams for braided categories, categories with duals, and braided categories with duals, we have certain diagrammatics to help us talk about monoid objects.
First off, we think of our generating object as a point on a line. As we tensor copies of this object together, we just add more points. Then our morphisms will be diagrams in the plane. At the bottom of the diagram is the incoming object — a bunch of marked points — and at the top is the outgoing object — another bunch of marked points. In between, we have morphisms we can build from the two basic pieces we added: multiplication and unit.
See? For multiplication, two points come in. They move together and multiply, leaving one point to go out. For the unit, a point comes “out of nowhere” to leave the diagram.
As before, we set two diagrams side-by-side for the monoidal product and stack them top-to-bottom for composition. Now, what do those associativity and identity relations look like?
Neat! Associativity just means we can pull the branch in the middle to either side of the threefold multiplication, while identity means we can absorb a dangling free end.
I haven’t bothered to render a diagram for symmetry, but we can draw it by just having lines cross through each other. The naturality of the symmetry means that we can pull any morphism from one side of a crossing line to the other.
And now what about comonoid objects? We’ve got diagrams to talk about them too!
Here’s a comultiplication and a counit. We just flip the multiplication and unit upside-down to dualize them. And we do the same thing for the coassociativity and coidentity relations.
The one thing we have to take careful note of here is that everything in sight is strict. These diagrams don’t make any distinction between $(A\otimes B)\otimes C$ and $A\otimes(B\otimes C)$; or between $\mathbf{1}\otimes A$, $A$, and $A\otimes\mathbf{1}$.
Another example we’ve seen already is that a ring with unit is a monoid object in $\mathbf{Ab}$ — the category of abelian groups with the tensor product of abelian groups as the monoidal structure. Similarly, given a commutative ring $R$, a monoid object in the category $R\text{-}\mathbf{mod}$ with the tensor product of $R$-modules as its monoidal structure is an $R$-algebra with unit. For extra credit, how would we get rings and $R$-algebras without units?
Here’s one we haven’t seen (and which I’ll talk more about later): given any category $\mathcal{C}$, the category $\mathcal{C}^\mathcal{C}$ of “endofunctors” has a monoidal structure given by composition of functors from $\mathcal{C}$ to itself. This is the one I was thinking of that doesn’t have a symmetry, by the way. A monoid object in this category — a “monad” — consists of a functor $T:\mathcal{C}\rightarrow\mathcal{C}$ along with natural transformations $\mu:T\circ T\rightarrow T$ and $\eta:1_\mathcal{C}\rightarrow T$. These turn out to be all sorts of useful in homology theory, and also in theoretical computer science. In fact, the programming language Haskell makes extensive and explicit use of them.
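To make this concrete, here’s a small sketch in Python (my own illustration, not from the original post) of a monoid object in the endofunctor category, using the list functor. Haskell programmers will recognize $\mu$ and $\eta$ as `join` and `return`.

```python
# Sketch: a monoid object in the endofunctor category -- a monad --
# modeled concretely with the "list" endofunctor on sets.

def fmap(f, xs):          # the list functor acting on morphisms
    return [f(x) for x in xs]

def mu(xss):              # multiplication mu : T(T(X)) -> T(X), i.e. "join"
    return [x for xs in xss for x in xs]

def eta(x):               # unit eta : X -> T(X), i.e. "return"
    return [x]

# The monoid-object laws, checked on sample data:
xsss = [[[1, 2], [3]], [[4]]]
assert mu(mu(xsss)) == mu(fmap(mu, xsss))          # associativity
xs = [1, 2, 3]
assert mu(eta(xs)) == xs == mu(fmap(eta, xs))      # left/right unit
```

The two asserts are exactly the associativity and identity relations, with composition of functors playing the role of the monoidal product.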
And now for a really interesting class of examples. Let’s say we start with a monoidal category $\mathcal{M}$ with monoidal structure $\otimes$. We immediately get a monoidal structure on the opposite category $\mathcal{M}^{\mathrm{op}}$. Just define $A\otimes^{\mathrm{op}}B=A\otimes B$ for objects. For morphisms we take $f^{\mathrm{op}}\in\hom_{\mathcal{M}^{\mathrm{op}}}(A,B)$ and $g^{\mathrm{op}}\in\hom_{\mathcal{M}^{\mathrm{op}}}(C,D)$ (which are in $\hom_\mathcal{M}(B,A)$ and $\hom_\mathcal{M}(D,C)$, respectively), and define $f^{\mathrm{op}}\otimes^{\mathrm{op}}g^{\mathrm{op}}=(f\otimes g)^{\mathrm{op}}$, which is in $\hom_{\mathcal{M}^{\mathrm{op}}}(A\otimes C,B\otimes D)$.

So what’s a monoid object in $\mathcal{M}^{\mathrm{op}}$? It’s a contravariant functor from $\mathrm{Th}(\mathbf{Mon})$ to $\mathcal{M}$. Equivalently, we can write it as a covariant functor from $\mathrm{Th}(\mathbf{Mon})^{\mathrm{op}}$ to $\mathcal{M}$. It will be easier to just write down explicitly what this opposite category is.

So we need to take $\mathrm{Th}(\mathbf{Mon})$ and reverse all the arrows. It’s enough to just reverse the arrows we threw in to generate the category, and their composites will be reversed as well. We’ll also have to dualize the relations we imposed to make everything work out right. So we’ll have an arrow $\Delta:M\rightarrow M\otimes M$ called comultiplication and another arrow $\epsilon:M\rightarrow\mathbf{1}$ called the counit. These we require to satisfy the coassociative condition $(\Delta\otimes 1_M)\circ\Delta=(1_M\otimes\Delta)\circ\Delta$ and the left and right coidentity conditions $(\epsilon\otimes 1_M)\circ\Delta=1_M=(1_M\otimes\epsilon)\circ\Delta$.

Now a functor from this category to another monoidal category $\mathcal{C}$ picks out an object $C$ and arrows (reusing the names) $\Delta:C\rightarrow C\otimes C$ and $\epsilon:C\rightarrow\mathbf{1}$ satisfying coassociativity and coidentity conditions. We call such an object with extra structure a “comonoid object” in $\mathcal{C}$. In $\mathbf{Set}$ we call them “comonoids”. In $\mathbf{Ab}$ we call them “corings” (with counit), in $R\text{-}\mathbf{mod}$ we call them “coalgebras” (with counit), and in $\mathcal{C}^\mathcal{C}$ we call them “comonads”. In general, we call this new model category $\mathrm{Th}(\mathbf{Mon})^{\mathrm{op}}=\mathrm{Th}(\mathbf{Comon})$ — the “theory of comonoids”.
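For a concrete instance (my own sketch, not from the post): in $\mathbf{Set}$ with the cartesian product as monoidal structure, every set carries a comonoid structure, with the diagonal map as comultiplication and the unique function to a one-point set as counit.

```python
# Sketch: every set is a comonoid in (Set, x): comultiplication is the
# diagonal map and the counit is the unique function to a one-point set.

STAR = ()  # the single element of our chosen one-point set

def delta(x):        # comultiplication: C -> C x C
    return (x, x)

def epsilon(x):      # counit: C -> {*}
    return STAR

x = 42
a, b = delta(x)
# coassociativity, with tuples flattened to stay "strict":
assert delta(a) + (b,) == (a,) + delta(b) == (x, x, x)
# left and right coidentity: discarding one factor recovers x
assert b == x == a
```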
Now it’s time to start getting into the fun things we can do with monoidal categories. For my first trick, I’m going to build a neat monoidal category and show you what we can do with it.
Any monoidal category has an “identity” object $\mathbf{1}$, so to make it a bit more interesting let’s throw in a single non-identity object $M$. Then we get for free all the monoidal products built with $\mathbf{1}$ and $M$. Let’s make our lives easier by saying our category is strict. Then all our objects look like $M^{\otimes n}$ — the monoidal product of $n$ copies of $M$. We can see that $M^{\otimes 0}=\mathbf{1}$, and that $M^{\otimes m}\otimes M^{\otimes n}=M^{\otimes(m+n)}$.
This is all well and good, but we still don’t really have much going on here. All the morphisms in sight are identities. We don’t even have associators or unit isomorphisms because our category is strict. So let’s throw in a couple morphisms, and of course all the other ones we can build from them.
First let’s make our category symmetric. That is, we’ll add a “twist” $\tau:M\otimes M\rightarrow M\otimes M$ that swaps the copies of $M$. We’ll insist that it satisfy $\tau\circ\tau=1_{M\otimes M}$. We can then build a braiding by swapping the copies of $M$ one at a time. This seems a little silly at first glance. If $M$ had any additional structure — if it was a set, for instance — this would be clearly useful. As it stands, though, the use isn’t apparent. Don’t worry, we’ll get to it.

Next, let’s add a morphism $\eta:\mathbf{1}\rightarrow M$. From this we can get a bunch of other morphisms. For example, $\eta\otimes 1_M:M\rightarrow M\otimes M$ or $1_M\otimes\eta\otimes 1_M:M\otimes M\rightarrow M\otimes M\otimes M$. We can use this one to increase the number of copies of $M$ in a product in many different ways, depending on where we stick the new copy of $M$.

But we could also add a new copy of $M$ in one place and use the symmetric structure to move it to a different place. For example, instead of adding a copy on the right with $1_M\otimes\eta$, we could instead use $\eta\otimes 1_M$ to add a copy on the left and then swap the two. Notice that both $1_M\otimes\eta$ and $\tau\circ(\eta\otimes 1_M)$ go from $M$ to $M\otimes M$, and the naturality of $\tau$ says that these two are really the same. So, adding a new copy of $M$ and then moving it around immediately to another position is the same as just adding it in the new position right away.

Now let’s add a way to reduce the number of copies. We’ll use a morphism $\mu:M\otimes M\rightarrow M$. Of course, we get for free such compositions as $\mu\otimes 1_M:M^{\otimes 3}\rightarrow M^{\otimes 2}$ and $1_M\otimes\mu:M^{\otimes 3}\rightarrow M^{\otimes 2}$. There will be some equalities arising from the naturality of $\tau$, but nothing too important yet.

So let’s throw in a few more equalities. Let’s say that $\mu\circ(\mu\otimes 1_M)=\mu\circ(1_M\otimes\mu)$ and that $\mu\circ(\eta\otimes 1_M)=1_M=\mu\circ(1_M\otimes\eta)$. And of course there are other equalities we can build from these. The whole thing should start looking a bit familiar by this point.
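Here’s a quick sanity check (my own sketch) of these relations interpreted in $\mathbf{Set}$: send the generating object to the set of strings, $\mu$ to concatenation, and $\eta$ to the function picking out the empty string.

```python
# Sketch: interpreting the generators in Set with S = strings,
# mu = concatenation, and eta : {*} -> S picking out the empty string.

def mu(pair):
    a, b = pair
    return a + b

def eta(star):
    return ""

a, b, c = "x", "y", "z"
# mu . (mu x 1) == mu . (1 x mu)
assert mu((mu((a, b)), c)) == mu((a, mu((b, c)))) == "xyz"
# mu . (eta x 1) == 1 == mu . (1 x eta)
assert mu((eta(()), a)) == a == mu((a, eta(())))
```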
Okay, so we’ve got ourselves a strict monoidal category with a bunch of objects and a few morphisms satisfying some equations. So what? Well, let’s start looking at symmetric monoidal functors from this category into other symmetric monoidal categories.
The first monoidal category we’ll look at is $\mathbf{Set}$, which uses the cartesian product as its monoidal structure. What does a monoidal functor $F$ look like? Well, $F(M)$ is some set $S$, and by monoidality we see that $F(M^{\otimes n})=S^n$ — the cartesian product of $n$ copies of $S$. In particular, $F(\mathbf{1})=S^0=\{*\}$: a set with a single element.

The symmetry for $\mathbf{Set}$ is the natural isomorphism $\tau_{A,B}:A\times B\rightarrow B\times A$ defined by $\tau_{A,B}(a,b)=(b,a)$. In particular, we get $F(\tau)(s_1,s_2)=(s_2,s_1)$.

The morphism $\eta$ now becomes a function $e:\{*\}\rightarrow S$, which picks out a particular point of $S$. Let’s call this point $e$, just like the function that picks it out.
The morphism $\mu$ is now a function $m:S\times S\rightarrow S$. The equations that we imposed in our category must still apply here: $m\circ(m\times 1_S)=m\circ(1_S\times m)$ and $m\circ(e\times 1_S)=1_S=m\circ(1_S\times e)$. Since we’re in the category of sets, let’s just write these all out as functions and see what they do to particular elements.

The first equation is between two functions with source $S\times S\times S$, so let’s pick an arbitrary element $(a,b,c)$ to follow. The left side of the equation sends this to $m(m(a,b),c)$, while the right sends it to $m(a,m(b,c))$. The equation now reads $m(m(a,b),c)=m(a,m(b,c))$. But that’s just the associative law for a composition! The second equation is between three functions that all have source $S$. Starting with an arbitrary element $a$ we read off the equation $m(e,a)=a=m(a,e)$. And that’s the left and right unit law for a composition!
So what we see here is that $S$ gets the structure of a monoid. And given any monoid $(S,m,e)$ we can construct such a symmetric monoidal functor with $F(M)=S$ and sending $\mu$ and $\eta$ to the multiplication and identity functions.

Can we do better? Sure we can. Let’s say we’ve got a homomorphism $f:S\rightarrow T$ between two monoids. We can consider this to be a function between their underlying sets. Immediately we get functions $f^n:S^n\rightarrow T^n$ as well, applying $f$ to each entry of the product. This is clearly symmetric. Saying that $f$ preserves the multiplication of these monoids is just the same as saying that $f\circ m_S=m_T\circ(f\times f)$, which is the naturality square for $\mu$. Similarly, preserving the identities is the same as making the naturality square for $\eta$ commute. So a monoid homomorphism is the same as a natural transformation between these functors!
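As a toy check (my own example): take $f=\mathrm{len}$ from the monoid of strings under concatenation to the natural numbers under addition. The homomorphism conditions are exactly the two naturality squares.

```python
# Sketch: a monoid homomorphism f = len from (strings, concatenation, "")
# to (naturals, +, 0), checked against the naturality squares.

f = len

def m_S(a, b): return a + b   # multiplication in the source monoid
def m_T(x, y): return x + y   # multiplication in the target monoid

a, b = "foo", "quux"
# square for mu:  f(m_S(a, b)) == m_T(f(a), f(b))
assert f(m_S(a, b)) == m_T(f(a), f(b)) == 7
# square for eta: f sends the identity element to the identity element
assert f("") == 0
```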
Let’s back up a bit and give our toy category a better name. Let’s call it $\mathrm{Th}(\mathbf{Mon})$ — the “theory of monoids”. What we’ve just seen is that our familiar category of monoids is “really” the category of symmetric monoidal functors from the “theory of monoids” to sets. We now slightly shift our terminology and instead of calling such a set-with-extra-structure a “monoid”, we call it a “monoid object in $\mathbf{Set}$“.

And now the road is clear to generalize. Given any symmetric monoidal category $\mathcal{C}$ we can take the category of “monoid objects in $\mathcal{C}$“.

[UPDATE]: On reflection, the symmetric property isn’t really essential. That is, we can just consider the category of monoidal functors from $\mathrm{Th}(\mathbf{Mon})$ to $\mathcal{C}$. In fact, there’s one example I’ll be getting to that doesn’t have a symmetry. In general, though, when the target category has a symmetry we’ll usually ask that our functors preserve that structure as well.
[UPDATE]: You know what? Scrap that whole symmetry bit altogether. Sometimes the target category will have symmetry and sometimes that will be helpful, but it’s just not worth it in the general theory. I’m almost sorry I brought it up in the first place.
We can easily see that limits commute with each other, as do colimits. If we have a functor $F:\mathcal{J}_1\times\mathcal{J}_2\rightarrow\mathcal{C}$, then we can take the limit either all at once, or one variable at a time: $\lim_{\mathcal{J}_1\times\mathcal{J}_2}F\cong\lim_{\mathcal{J}_1}\lim_{\mathcal{J}_2}F\cong\lim_{\mathcal{J}_2}\lim_{\mathcal{J}_1}F$. That is, if the category $\mathcal{C}$ has $\mathcal{J}$-limits, then the functor $\lim_\mathcal{J}:\mathcal{C}^\mathcal{J}\rightarrow\mathcal{C}$ preserves all other limits.
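A down-to-earth instance (my own example): in a totally ordered set, viewed as a category, limits of finite diagrams are minima, and “limits commute” says a minimum over a grid can be taken all at once, row-by-row, or column-by-column.

```python
# Sketch: in a totally ordered set viewed as a category, limits are
# minima. "Limits commute": the min over a grid can be taken all at
# once or one variable at a time, in either order.

grid = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]

all_at_once   = min(x for row in grid for x in row)
rows_then_col = min(min(row) for row in grid)
cols_then_row = min(min(row[j] for row in grid) for j in range(3))

assert all_at_once == rows_then_col == cols_then_row == 1
```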
But now we know that limit functors are right adjoints. And it turns out that any functor which has a left adjoint (and thus is a right adjoint) preserves all limits. Dually, any functor which has a right adjoint (and thus is a left adjoint) preserves all colimits.
First we need to note that we can compose adjunctions. That is, if we have adjunctions $F_1\dashv G_1:\mathcal{A}\rightarrow\mathcal{B}$ and $F_2\dashv G_2:\mathcal{B}\rightarrow\mathcal{C}$ then we can put them together to get an adjunction $(F_2\circ F_1)\dashv(G_1\circ G_2):\mathcal{A}\rightarrow\mathcal{C}$. Indeed, we have $\hom_\mathcal{C}(F_2(F_1(A)),C)\cong\hom_\mathcal{B}(F_1(A),G_2(C))\cong\hom_\mathcal{A}(A,G_1(G_2(C)))$.

We also need to note that adjoints are unique up to natural isomorphism. That is, if $F\dashv G$ and $F\dashv G'$ then there is a natural isomorphism $G\cong G'$. This is essentially because adjunctions are determined by universal arrows, and universal arrows are unique up to isomorphism.
Okay, now we can get to work. We start with an adjunction $F\dashv G:\mathcal{C}\rightarrow\mathcal{D}$. Given another (small) category $\mathcal{J}$ we can build the functor categories $\mathcal{C}^\mathcal{J}$ and $\mathcal{D}^\mathcal{J}$. It turns out we get an adjunction here too. Define $F^\mathcal{J}(T)=F\circ T$ for each functor $T:\mathcal{J}\rightarrow\mathcal{C}$. The unit $\eta$ induces a unit $\eta^\mathcal{J}$ with $\eta^\mathcal{J}_T=\eta T$. We can similarly define $G^\mathcal{J}$ and $\epsilon^\mathcal{J}$, and show that they determine an adjunction $F^\mathcal{J}\dashv G^\mathcal{J}:\mathcal{C}^\mathcal{J}\rightarrow\mathcal{D}^\mathcal{J}$.

Now let’s say that $\mathcal{C}$ and $\mathcal{D}$ both have $\mathcal{J}$-limits. Then we have an adjunction $\Delta\dashv\lim_\mathcal{J}:\mathcal{C}\rightarrow\mathcal{C}^\mathcal{J}$ and a similar one for $\mathcal{D}$. We can thus form the composite adjunctions $(F^\mathcal{J}\circ\Delta)\dashv(\lim_\mathcal{J}\circ\,G^\mathcal{J}):\mathcal{C}\rightarrow\mathcal{D}^\mathcal{J}$ and $(\Delta\circ F)\dashv(G\circ\lim_\mathcal{J}):\mathcal{C}\rightarrow\mathcal{D}^\mathcal{J}$.

So what is $F^\mathcal{J}\circ\Delta$? Well, $\Delta(C)$ is the functor that sends every object of $\mathcal{J}$ to $C$ and every morphism to $1_C$. Then composing this with $F^\mathcal{J}$ gives the functor that sends every object of $\mathcal{J}$ to $F(C)$ and every morphism to $1_{F(C)}$. That is, we get $\Delta(F(C))$. So $F^\mathcal{J}\circ\Delta=\Delta\circ F$. But these are the two left adjoints listed above. Thus the two right adjoints listed above are both right adjoint to the same functor, and therefore must be naturally isomorphic! We have $G(\lim_\mathcal{J}T)\cong\lim_\mathcal{J}(G\circ T)$ for every functor $T:\mathcal{J}\rightarrow\mathcal{D}$. And thus $G$ preserves $\mathcal{J}$-limits.
When considering limits, we started by talking about the diagonal functor $\Delta:\mathcal{C}\rightarrow\mathcal{C}^\mathcal{J}$. This assigns to an object $C$ the “constant” functor $\Delta(C)$ that sends each object of $\mathcal{J}$ to $C$ and each morphism of $\mathcal{J}$ to $1_C$.

Then towards the end of our treatment of limits we showed that taking limits is a functor. That is, if each functor $F$ from $\mathcal{J}$ to $\mathcal{C}$ has a limit $\lim_\mathcal{J}F$, then $\lim_\mathcal{J}$ is a functor from $\mathcal{C}^\mathcal{J}$ to $\mathcal{C}$. Dually, if every such functor has a colimit $\mathrm{colim}_\mathcal{J}F$, then $\mathrm{colim}_\mathcal{J}$ is also a functor.
And now we can fit these into the language of adjoints: when it exists, the limit functor is right adjoint to the diagonal functor. Dually, the colimit functor is left adjoint to the diagonal functor when it exists. I’ll handle directly the case of colimits, but the limit statements and proofs are straightforward dualizations.
So we definitely have a well-defined functor $\Delta:\mathcal{C}\rightarrow\mathcal{C}^\mathcal{J}$. By assumption we have for each functor $F:\mathcal{J}\rightarrow\mathcal{C}$ an object $\mathrm{colim}_\mathcal{J}F$. If we look at the third entry in our list of ways to specify an adjunction, all we need now is a universal arrow $\eta_F:F\rightarrow\Delta(\mathrm{colim}_\mathcal{J}F)$. But this is exactly how we defined colimits! Now the machinery we set up yesterday takes over and promotes this collection of universal arrows into the unit of an adjunction $\mathrm{colim}_\mathcal{J}\dashv\Delta$.

For thoroughness’ sake: the unit of this adjunction is the colimiting cocone, considered as a natural transformation from $F$ to the constant functor on the colimiting object. The counit of this adjunction is just the identity arrow on $C$, because the colimit of the constant functor $\Delta(C)$ is just the constant value $C$. The “quasi-inverse” conditions state that $(\epsilon\,\mathrm{colim}_\mathcal{J})\circ(\mathrm{colim}_\mathcal{J}\eta)$ is the identity natural transformation on $\mathrm{colim}_\mathcal{J}$, and that $(\Delta\epsilon)\circ(\eta\Delta)$ is the identity natural transformation on $\Delta$, both of which are readily checked.

And our original definition of an adjoint here reads that $\hom_\mathcal{C}(\mathrm{colim}_\mathcal{J}F,C)\cong\hom_{\mathcal{C}^\mathcal{J}}(F,\Delta(C))$. That is, for each cocone to $C$ on $F$ (one of the natural transformations on the right) there is a unique arrow from the colimiting object of $F$ to $C$.
The unit of an adjunction $F\dashv G$ picks out, for each object $X$, an arrow $\eta_X:X\rightarrow G(F(X))$. This arrow is an object in the comma category $(X\downarrow G)$. And, amazingly enough, it’s an initial object in that category. Given any other object $B$ and arrow $f:X\rightarrow G(B)$ we need to find an arrow $g:F(X)\rightarrow B$ in $\mathcal{D}$ so that $f=G(g)\circ\eta_X$. Since $\epsilon_B\circ F(f)$ is an arrow from $F(X)$ to $B$, the obvious guess is $g=\epsilon_B\circ F(f)$. Then we can calculate:

$G(\epsilon_B\circ F(f))\circ\eta_X=G(\epsilon_B)\circ G(F(f))\circ\eta_X=G(\epsilon_B)\circ\eta_{G(B)}\circ f=f$

where the second equality uses the naturality of $\eta$ and the third uses the “quasi-inverse” condition we discussed yesterday.
So, an adjunction $F\dashv G$ means that for each and every object $X\in\mathcal{C}$ the component $\eta_X$ of the unit gives a universal arrow from $X$ to $G$. Dually, for every object $Y\in\mathcal{D}$ the component $\epsilon_Y$ of the counit gives a couniversal arrow from $F$ to $Y$.

On the other hand, let’s say we start out with a functor $G:\mathcal{D}\rightarrow\mathcal{C}$ and for each $X\in\mathcal{C}$ an object $F(X)\in\mathcal{D}$ and an arrow $\eta_X:X\rightarrow G(F(X))$ that is universal from $X$ to $G$. Then given an arrow $f:X\rightarrow Y$ we can build an arrow $\eta_Y\circ f:X\rightarrow G(F(Y))$. By the universality of $\eta_X$ there is then a unique arrow $F(f):F(X)\rightarrow F(Y)$ so that $G(F(f))\circ\eta_X=\eta_Y\circ f$. It’s straightforward now to show that $X\mapsto F(X)$ and $f\mapsto F(f)$ are the object and morphism functions of a functor $F:\mathcal{C}\rightarrow\mathcal{D}$, and that $\eta:1_\mathcal{C}\rightarrow G\circ F$ is a natural transformation.

Now, say we have functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$ and a natural transformation $\eta:1_\mathcal{C}\rightarrow G\circ F$ so that each $\eta_X$ is universal from $X$ to $G$. Given an arrow $f:X\rightarrow G(B)$, there is (by universality of $\eta_X$) a unique arrow $g:F(X)\rightarrow B$ so that $f=G(g)\circ\eta_X$. This sets up a bijection $\hom_\mathcal{D}(F(X),B)\cong\hom_\mathcal{C}(X,G(B))$ defined by $g\mapsto G(g)\circ\eta_X$. This construction is natural in $X$ because $\eta$ is, and it’s natural in $B$ because $G$ is a functor. And so this data is enough to define an adjunction $F\dashv G$.

Dually, we can start with a functor $F:\mathcal{C}\rightarrow\mathcal{D}$ and for each $Y\in\mathcal{D}$ an object $G(Y)\in\mathcal{C}$ and an arrow $\epsilon_Y:F(G(Y))\rightarrow Y$ couniversal from $F$ to $Y$. Then we can build $G$ up into a functor and $\epsilon$ up into a natural transformation with each component a couniversal arrow. And this is enough to define an adjunction $F\dashv G$.

And, of course, we know that giving a universal arrow from $X$ to $G$ is equivalent to giving a representation of the functor $\hom_\mathcal{C}(X,G(-)):\mathcal{D}\rightarrow\mathbf{Set}$, and dually.
So we have quite a long list of ways to specify an adjunction $F\dashv G$:

- Functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$ and a natural isomorphism $\phi:\hom_\mathcal{D}(F(X),Y)\rightarrow\hom_\mathcal{C}(X,G(Y))$
- Functors $F$ and $G$ and natural transformations $\eta:1_\mathcal{C}\rightarrow G\circ F$ and $\epsilon:F\circ G\rightarrow 1_\mathcal{D}$ satisfying $(\epsilon F)\circ(F\eta)=1_F$ and $(G\epsilon)\circ(\eta G)=1_G$
- A functor $G:\mathcal{D}\rightarrow\mathcal{C}$ and for each object $X\in\mathcal{C}$ an object $F(X)\in\mathcal{D}$ and a universal arrow $\eta_X:X\rightarrow G(F(X))$
- A functor $F:\mathcal{C}\rightarrow\mathcal{D}$ and for each object $Y\in\mathcal{D}$ an object $G(Y)\in\mathcal{C}$ and a couniversal arrow $\epsilon_Y:F(G(Y))\rightarrow Y$
- A functor $G:\mathcal{D}\rightarrow\mathcal{C}$ and for each object $X\in\mathcal{C}$ a representation of the functor $\hom_\mathcal{C}(X,G(-))$
- A functor $F:\mathcal{C}\rightarrow\mathcal{D}$ and for each object $Y\in\mathcal{D}$ a representation of the functor $\hom_\mathcal{D}(F(-),Y)$
- Functors $F$ and $G$ and a natural transformation $\eta:1_\mathcal{C}\rightarrow G\circ F$ so that each component $\eta_X$ is universal from $X$ to $G$
- Functors $F$ and $G$ and a natural transformation $\epsilon:F\circ G\rightarrow 1_\mathcal{D}$ so that each component $\epsilon_Y$ is couniversal from $F$ to $Y$
Last time we took an adjunction and came up with two natural transformations, weakened versions of the natural isomorphisms defining an equivalence. Today we’ll see how to go back the other way.
So let’s say we have an adjunction $F\dashv G$ given by natural isomorphism $\phi:\hom_\mathcal{D}(F(X),Y)\rightarrow\hom_\mathcal{C}(X,G(Y))$. Remember that we defined the unit and counit by $\eta_X=\phi(1_{F(X)})$ and $\epsilon_Y=\phi^{-1}(1_{G(Y)})$. We can take either one of these and reverse-engineer it. For instance, given an arrow $g:F(X)\rightarrow Y$ in $\mathcal{D}$ we can calculate

$\phi(g)=\phi(g\circ 1_{F(X)})=G(g)\circ\phi(1_{F(X)})=G(g)\circ\eta_X$

so once we know the unit of the adjunction we can calculate $\phi$ from it. Notice how we use the naturality of $\phi$ in the second equality.

Dually, we can determine $\phi^{-1}$ in terms of the counit. Given $f:X\rightarrow G(Y)$ in $\mathcal{C}$, we calculate:

$\phi^{-1}(f)=\phi^{-1}(1_{G(Y)}\circ f)=\phi^{-1}(1_{G(Y)})\circ F(f)=\epsilon_Y\circ F(f)$

so we can also determine the natural isomorphism of hom-sets in terms of the counit.
Of course, since we can determine the same isomorphism (technically the isomorphism and its inverse) from either the unit or the counit, they must be related. So what do these equations really tell us?
For this we have to go back to the way we compose natural transformations. The obvious way is where we have natural transformations $\alpha:F\rightarrow G$ and $\beta:G\rightarrow H$ between three functors from $\mathcal{C}$ to $\mathcal{D}$. We put them together to get $\beta\circ\alpha:F\rightarrow H$.

Less obviously, we can consider functors $F_1$ and $G_1$ from $\mathcal{C}$ to $\mathcal{D}$, functors $F_2$ and $G_2$ from $\mathcal{D}$ to $\mathcal{E}$, and natural transformations $\alpha:F_1\rightarrow G_1$ and $\beta:F_2\rightarrow G_2$. We can put these together to get $\beta*\alpha:F_2\circ F_1\rightarrow G_2\circ G_1$, defined by $(\beta*\alpha)_X=\beta_{G_1(X)}\circ F_2(\alpha_X)$ or $(\beta*\alpha)_X=G_2(\alpha_X)\circ\beta_{F_1(X)}$ (exercise: show that these two composites are equal).

Now what we need here is this “horizontal” composite. Let’s go back to the adjunction $F\dashv G$ and take the natural transformations $1_F:F\rightarrow F$ and $\eta:1_\mathcal{C}\rightarrow G\circ F$. The components of their horizontal composite $1_F*\eta:F\rightarrow F\circ G\circ F$ are then given by $F(\eta_X)$. Similarly, if we take the natural transformations $\epsilon:F\circ G\rightarrow 1_\mathcal{D}$ and $1_F$, their horizontal composite $\epsilon*1_F:F\circ G\circ F\rightarrow F$ has components given by $\epsilon_{F(X)}$. Now the “vertical” composite of these two has components $\epsilon_{F(X)}\circ F(\eta_X)$. And the above formula for the adjunction isomorphism in terms of the counit tells us that this is $\phi^{-1}(\eta_X)=\phi^{-1}(\phi(1_{F(X)}))=1_{F(X)}$.

To put it at a bit of a higher level, if we start with the functor $F=F\circ 1_\mathcal{C}$, use the unit to turn it into the functor $F\circ G\circ F$, then use the counit to move back to $1_\mathcal{D}\circ F=F$, the composite natural transformation is just the identity transformation on $F$. Similarly, we can show that the composite taking $G$ to $G\circ F\circ G$ and back to $G$ is the identity transformation on $G$.

Inherent in this is also the converse statement. If we have natural transformations $\eta:1_\mathcal{C}\rightarrow G\circ F$ and $\epsilon:F\circ G\rightarrow 1_\mathcal{D}$ satisfying these two identities, then we can use the above formulae to define a natural isomorphism $\phi$ in terms of $\eta$ and its inverse in terms of $\epsilon$. Thus an adjunction is determined by a unit and a counit satisfying these “quasi-inverse” relations.
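For reference, the two “quasi-inverse” relations are often written compactly as the triangle (or zig-zag) identities:

```latex
(\epsilon F)\circ(F\eta) = 1_F
\qquad\text{and}\qquad
(G\epsilon)\circ(\eta G) = 1_G
```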
If you’re up to it, try to see where we’ve seen these quasi-inverse relations before in a completely different context. I’ll be coming back to this later.
Let’s say we have an adjunction $F\dashv G$. That is, functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$ and a natural isomorphism $\phi:\hom_\mathcal{D}(F(X),Y)\rightarrow\hom_\mathcal{C}(X,G(Y))$.

Last time I drew an analogy between equivalences and adjunctions. In the case of an equivalence, we have natural isomorphisms $1_\mathcal{C}\rightarrow G\circ F$ and $F\circ G\rightarrow 1_\mathcal{D}$. This presentation seems oddly asymmetric, and now we’ll see why by moving these structures to the case of an adjunction.

So let’s set $Y=F(X)$ like we did to show that an equivalence is an adjunction. The natural isomorphism is now $\phi:\hom_\mathcal{D}(F(X),F(X))\rightarrow\hom_\mathcal{C}(X,G(F(X)))$. Now usually this doesn’t give us much, but there’s one of these hom-sets that we know has a morphism in it: if $Y=F(X)$ then $1_{F(X)}\in\hom_\mathcal{D}(F(X),F(X))$. Then $\phi(1_{F(X)})$ is an arrow in $\mathcal{C}$ from $X$ to $G(F(X))$.

We’ll call this arrow $\eta_X$. Doing this for every object $X$ gives us all the components of a natural transformation $\eta:1_\mathcal{C}\rightarrow G\circ F$. For this, we need to show the naturality condition $G(F(f))\circ\eta_X=\eta_Y\circ f$ for each arrow $f:X\rightarrow Y$. This is a straightforward calculation:

$G(F(f))\circ\eta_X=G(F(f))\circ\phi(1_{F(X)})=\phi(F(f)\circ 1_{F(X)})=\phi(1_{F(Y)}\circ F(f))=\phi(1_{F(Y)})\circ f=\eta_Y\circ f$

using the definition of $\eta$ and the naturality of $\phi$ in each variable.
This natural transformation $\eta$ is called the “unit” of the adjunction $F\dashv G$. Dually we can set $X=G(Y)$ and extract an arrow $\epsilon_Y=\phi^{-1}(1_{G(Y)}):F(G(Y))\rightarrow Y$ for each object $Y$ and assemble them into a natural transformation $\epsilon:F\circ G\rightarrow 1_\mathcal{D}$ called the “counit”. If both of these natural transformations are natural isomorphisms, then we have an equivalence.
For a particular example, let’s look at this in the case of the free-monoid functor $F:\mathbf{Set}\rightarrow\mathbf{Mon}$ as the left adjoint to the underlying-set functor $U:\mathbf{Mon}\rightarrow\mathbf{Set}$. The unit will give an arrow $\eta_X:X\rightarrow U(F(X))$, which here is just the inclusion of the generators (elements of $X$) as elements of the underlying set of the free monoid. The counit, on the other hand, will give an arrow $\epsilon_M:F(U(M))\rightarrow M$. That is, we take all elements of the monoid $M$ and use them as generators of a new free monoid — write out “words” where each “letter” is a whole element of $M$. Then to take such a word and send it to an element of $M$, we just take all the letters and multiply them together as elements of $M$. Since we gave a description of $\phi$ last time for this case, it’s instructive to sit down and work through the definitions of $\eta$ and $\epsilon$ to show that they do indeed give these arrows.
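To make this concrete (my own sketch, with lists standing in for free monoids): the unit wraps a generator as a one-letter word, and the counit multiplies a word out; one of the “quasi-inverse” relations then checks out on the nose.

```python
# Sketch: the free monoid F(X) modeled as lists over X. eta includes a
# generator as a one-letter word; epsilon multiplies a word of monoid
# elements out. Specialized to the free monoid itself, multiplication
# is concatenation of words.

def eta(x):                       # unit: X -> U(F(X))
    return [x]

def F(f):                         # F on morphisms: apply f letterwise
    return lambda word: [f(x) for x in word]

def epsilon_free(word_of_words):  # counit at a free monoid: concatenate
    out = []
    for w in word_of_words:
        out += w
    return out

# quasi-inverse relation: epsilon_{F(X)} . F(eta_X) = identity on F(X)
w = ["a", "b", "c"]
assert epsilon_free(F(eta)(w)) == w
```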
Today I return to the discussion of universals, limits, representability, and related topics. The last piece of this puzzle is the notion of an adjunction. I’ll give a definition and examples today and work out properties later.
An adjunction between categories $\mathcal{C}$ and $\mathcal{D}$ consists of a pair of functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$ and a natural isomorphism $\phi:\hom_\mathcal{D}(F(-),-)\rightarrow\hom_\mathcal{C}(-,G(-))$. Notice that the functors on either side of $\phi$ go from $\mathcal{C}^{\mathrm{op}}\times\mathcal{D}$ to $\mathbf{Set}$, so each component $\phi_{X,Y}:\hom_\mathcal{D}(F(X),Y)\rightarrow\hom_\mathcal{C}(X,G(Y))$ is a bijection of sets. We say that $F$ is “left adjoint” to $G$, and conversely that $G$ is “right adjoint” to $F$, and we write $F\dashv G$.
Now, we have been seeing these things all along our trip so far, but without mentioning them as such. For instance, we have all the “free” constructions:
- the free monoid on a set
- the free group on a set
- the free group on a monoid
- the semigroup ring
- the free ring on an abelian group
- the free module on a set
- the free algebra on a module
and maybe more that I’ve mentioned, but don’t recall.
These all have a very similar form in their definitions. For instance, the free monoid $F(S)$ on a set $S$ is characterized by the following universal property: every function $f$ from $S$ into the underlying set of a monoid $M$ extends uniquely to a monoid homomorphism $\bar{f}:F(S)\rightarrow M$. If we write the underlying set of $M$ as $U(M)$, we easily see that $U:\mathbf{Mon}\rightarrow\mathbf{Set}$ is a functor. The condition then is that every element of the hom-set $\hom_\mathbf{Set}(S,U(M))$ corresponds to exactly one element of the hom-set $\hom_\mathbf{Mon}(F(S),M)$, and every monoid homomorphism $F(S)\rightarrow M$ restricts to a function on $S$. That is, for every set $S$ and monoid $M$ we have an isomorphism of sets $\hom_\mathbf{Mon}(F(S),M)\cong\hom_\mathbf{Set}(S,U(M))$.

Now, given a function $f:S\rightarrow T$ from a set $S$ to a set $T$ we can consider $T$ to be a subset of the free monoid on itself, giving a function $S\rightarrow U(F(T))$. This extends to a unique monoid homomorphism $F(f):F(S)\rightarrow F(T)$. This construction preserves identities and compositions, making $F$ into a functor from $\mathbf{Set}$ to $\mathbf{Mon}$.

If we have a function $g:S'\rightarrow S$ and a monoid homomorphism $h:M\rightarrow M'$ then we can build functions $\hom_\mathbf{Mon}(F(S),M)\rightarrow\hom_\mathbf{Mon}(F(S'),M')$ and $\hom_\mathbf{Set}(S,U(M))\rightarrow\hom_\mathbf{Set}(S',U(M'))$. The isomorphisms $\hom_\mathbf{Mon}(F(S),M)\cong\hom_\mathbf{Set}(S,U(M))$ and $\hom_\mathbf{Mon}(F(S'),M')\cong\hom_\mathbf{Set}(S',U(M'))$ commute with these arrows, so they form the components of a natural isomorphism between the two functors. This proves that the free monoid functor $F$ is a left adjoint to the forgetful functor $U$.
All the other examples listed above go exactly the same way, giving left adjoints to all the forgetful functors.
As a slightly different example, we have a forgetful functor $U:\mathbf{Ab}\rightarrow\mathbf{Grp}$ that takes an abelian group and “forgets” that it’s abelian, leaving just a group. Conversely, we can take any group $G$ and take the quotient by its commutator subgroup to get an abelian group $G^{\mathrm{ab}}=G/[G,G]$. This satisfies the property that for any group homomorphism $f:G\rightarrow A$ from $G$ to an abelian group $A$ (considered as just a group) there is a unique homomorphism of abelian groups $\bar{f}:G^{\mathrm{ab}}\rightarrow A$. Thus it turns out that “abelianization” of a group is left adjoint to the forgetful functor from abelian groups to groups.
There are more explicit examples we’ve seen, but I’ll leave them to illustrate some particular properties of adjoints. Take note, though, that not all adjunctions involve forgetful functors like these examples have.
An adjunction between two categories can be seen as a weaker version of an equivalence. An equivalence given by functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$ tells us that both $F$ and $G$ are fully faithful, so $\hom_\mathcal{D}(F(X),F(X'))\cong\hom_\mathcal{C}(X,X')$. Now let’s put $X'=G(Y)$ to find that $\hom_\mathcal{C}(X,G(Y))\cong\hom_\mathcal{D}(F(X),F(G(Y)))\cong\hom_\mathcal{D}(F(X),Y)$, where the last isomorphism uses the natural isomorphism $F\circ G\cong 1_\mathcal{D}$. So every equivalence is an adjunction.
I’m exhausted from spending all morning and much of the afternoon purchasing a new (to me) car. As a result, I’ll just forward you to the excellent notes that Miguel Carrión Alvarez took in John Baez’ seminar on quantum gravity back in fall and winter of 2000-1.
Specifically, pay attention to the diagrammatics. He’s talking mostly about finite-dimensional vector spaces over the field of complex numbers, but most everything applies to a general (braided) monoidal category (with duals). Also, he draws his diagrams from top to bottom, while (as I keep reminding you) I write mine from bottom to top to make it easier to read off the algebraic notation.
We’ve already seen some of the basic pieces as braid, Temperley-Lieb, and tangle diagrams, but here each arc in the diagram carries a label from the objects of a category, and usually an arrow. We can move to a dual object by reversing the arrow or changing the label.
Morphisms can be put in boxes, with the incoming object in the bottom and the outgoing one at the top. The naturality for the dual morphisms basically says we can slide a morphism up over a cup or down under a cap to get its dual. Also, often a morphism will have a number of incoming or outgoing strands, which means that the incoming object is the tensor product of the objects on the incoming strands.
A braiding is written as a crossing (lower-left over upper-right), and the inverse of the braiding is written as the other kind of crossing. Naturality means that we can pull a morphism along a strand through a crossing.
There’s a lot more to the notes than just the diagrammatics, though. If you’re up to it, I highly recommend giving it all a look. If not, just look for the pictures and read the sections around them for the explanations. I’ll be back on Monday with more exposition.