Now that we know how to transform adjoints, we can talk about whole families of adjoints parametrized by some other category. That is, for each object of the parametrizing category we’ll have an adjunction , and for each morphism of we’ll have a transformation of the adjunctions.
Let’s actually approach this from a slightly different angle. Say we have a functor , and that for each the functor has a right adjoint . Then I claim that there is a unique way to make into a functor from to so that the bijection is natural in all three variables. Note that must be contravariant in here to make the composite functors have the same variance in .
If we hold fixed, the bijection is already natural in and . Let’s hold and fixed and see how to make it natural in . The components are already given in the setup, so we can’t change them. What we need are functions and for each arrow .
For naturality to hold, we need . But from what we saw last time this just means that the pair of natural transformations forms a conjugate pair from to . And this lets us define uniquely in terms of , the counit of , and the unit of by using the first of the four listed equalities.
From here, it’s straightforward to show that this definition of how acts on morphisms of makes it functorial in both variables, proving the claim. We can also flip back to the original viewpoint to define an adjunction between categories and parametrized by the category as a functor from to the category of adjunctions between those two categories.
And now we go back to adjoints. Like every other structure out there, we want to come up with some analogue of a homomorphism between two adjunctions. Let’s consider the adjunctions and , and try to find a good notion of a transformation from the first to the second.
We’ll proceed by considering an adjunction to consist of the pair of categories with the functors giving extra structure. Down in the land of groups and rings and such, we’d consider sets with extra structure and functions that preserved that structure. So here, naturally, we want to consider functors which preserve this extra structure. That is, a map of adjunctions consists of a pair of functors and . These must preserve the structure in that and .
But hold up a second, we’ve forgotten something else that goes into an adjunction: the isomorphism . Here’s a diagram showing how the map of adjunctions should play nicely with them:
Equivalently we can specify an adjunction by its unit and counit. In this case the compatibility in question is a pair of equations of natural transformations: and .
What if we’re looking at two different adjunctions between the same pair of categories? Well then we may as well try to use the appropriate identity functors for and . But then it’s sort of silly to insist that on the nose, and similarly for . Instead, as we do so often, let’s weaken this equality to just a natural transformation.
We’ll say that a pair of natural transformations and are “conjugate” if . This is equivalent, in terms of the unit and counit, to any one of the following four equalities:
Now it’s easily verified that given a pair of categories we can build a category whose objects are adjunctions and whose morphisms are conjugate pairs of natural transformations, which we write out in full as . We compose conjugate pairs in this category in the obvious way, which we write .
On the other hand, if we have a pair and another , then we can form the composite , which we’ll write as . Notice the similarity of this situation with the two different compositions of natural transformations between functors.
There’s another approach to the theory of monoids which finds more direct application in topology and homology theory (which, yes, I’ll get to eventually) — the “simplicial category” . Really it’s an isomorphic category to , but some people think better in these other terms. I personally like the direct focus on the algebra, coupled with the diagrammatics so reminiscent of knot theory, but for thoroughness’ sake I’ll describe the other approach.
Note that the objects of correspond exactly with the natural numbers. Each object is the monoidal product of some number of copies of the generating object . We’re going to focus here on the model of given by the ordinal numbers. That is, the object corresponds to the ordinal number , which is a set of elements with its unique (up to isomorphism) total order. In fact, we’ve been implicitly thinking about an order all along. When we draw our diagrams, the objects consist of a set of marked points along the upper or lower edge of the diagram, which we can read in order from left to right.
Let’s pick a specific representation of each ordinal to be concrete about this. The ordinal will be represented by the set of natural numbers from to with the usual order relation. The monoidal structure will just be addition — .
The morphisms between ordinals are functions which preserve the order. A function between ordinals satisfies this property if whenever in then in . Note that we can send two different elements of to the same element of , just as long as we don’t pull them past each other.
So what sorts of functions do we have to play with? Well, we have a bunch of functions from to that skip some element of the image. For instance, we could send to by sending to , skipping , sending to , and sending to . We’ll say for the function that skips in its image. The above function is then . For a fixed , the index can run from to .
We also have a bunch of functions from to that repeat one element of the image. For example, we could send to by sending to , and both to , and to . We’ll say for the function that repeats in its image. The above function is then . Again, for a fixed , the index can run from to .
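If you like to see these maps in code, here’s a quick sketch in Python. The names `delta` and `sigma` and the exact indexing convention are mine (different sources index these maps slightly differently), but the idea — a “skip” map and a “repeat” map, both order-preserving — is exactly what’s described above.

```python
def delta(i):
    """The order-preserving injection n -> n+1 whose image skips i."""
    return lambda k: k if k < i else k + 1

def sigma(i):
    """The order-preserving surjection n+1 -> n sending both i and i+1 to i."""
    return lambda k: k if k <= i else k - 1

def is_order_preserving(f, m):
    """Check that f is monotone on the ordinal {0, ..., m-1}."""
    return all(f(j) <= f(k) for j in range(m) for k in range(j, m))

# Every skip map delta_i : n -> n+1 (0 <= i <= n) is order-preserving...
assert all(is_order_preserving(delta(i), n)
           for n in range(5) for i in range(n + 1))
# ...and so is every repeat map sigma_i : n+1 -> n (0 <= i <= n-1).
assert all(is_order_preserving(sigma(i), n + 1)
           for n in range(1, 5) for i in range(n))
```

Note that “skipping” never pulls two elements past each other, which is why these maps stay monotone.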
Notice in particular that “skipping” and “repeating” are purely local properties of the function. For instance, is the unique function from (the empty set) to , which clearly skips . Then can be written as , since it leaves the numbers from to alone, sticks in a new , and then just nudges over everything from (the old) to . Similarly, is the unique function from to that sends both elements in its domain to . Then all the other can be written as .
Now every order-preserving function is determined by the set of elements of the range that are actually in the image of the function along with the set of elements of its domain where it does not increase. That is, if we know where it skips and where it repeats, we know the whole function. This tells us that we can write any function as a composition of and functions. These basic functions satisfy a few identities:
- If then .
- If then .
- If then .
- If or then .
- If then .
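Rather than checking these identities by hand, we can brute-force them over small ordinals. This is a sketch under one standard convention (delta_i skips i, sigma_i repeats i, and composition reads right-to-left); the three families of equations below are the usual simplicial identities relating faces to faces, degeneracies to degeneracies, and the two to each other.

```python
def delta(i):
    """Skip map n -> n+1: image omits i."""
    return lambda k: k if k < i else k + 1

def sigma(i):
    """Repeat map n+1 -> n: sends both i and i+1 to i."""
    return lambda k: k if k <= i else k - 1

def compose(f, g):
    """(f . g)(k) = f(g(k)) -- apply g first, then f."""
    return lambda k: f(g(k))

def equal_on(f, g, m):
    return all(f(k) == g(k) for k in range(m))

for n in range(5):
    # Faces: delta_j . delta_i = delta_i . delta_{j-1}  when i < j
    for i in range(n + 1):
        for j in range(n + 2):
            if i < j:
                assert equal_on(compose(delta(j), delta(i)),
                                compose(delta(i), delta(j - 1)), n)
    # Degeneracies: sigma_j . sigma_i = sigma_i . sigma_{j+1}  when i <= j
    for i in range(n + 1):
        for j in range(n):
            if i <= j:
                assert equal_on(compose(sigma(j), sigma(i)),
                                compose(sigma(i), sigma(j + 1)), n + 2)
    # Mixed: sigma_j . delta_i has three cases
    for i in range(n + 2):
        for j in range(n + 1):
            lhs = compose(sigma(j), delta(i))
            if i < j:
                rhs = compose(delta(i), sigma(j - 1))
            elif i in (j, j + 1):
                rhs = lambda k: k          # skipping then repeating cancels
            else:
                rhs = compose(delta(i - 1), sigma(j))
            assert equal_on(lhs, rhs, n + 1)
```

The middle case of the mixed identity — skip an element and then immediately collapse it — is the one that turns into the unit laws for the monoid object below.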
We could check all these by hand, and if you like that sort of thing you’re welcome to it. Instead, I’ll just assume we’ve checked the second one for and the fourth one for .
What’s so special about those conditions? Well, notice that takes two copies of to one copy, and that the second relation becomes the associativity condition for this morphism. Then also takes zero copies to one copy, and the fourth relation becomes the left and right identity conditions. That is, with these two morphisms is a monoid object in this category! Now we can verify all the other relations by using our diagrams rather than a lot of messy calculations!
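We can make this monoid object completely concrete. Taking ordinal sum as the monoidal product, the multiplication is the repeat map from the two-element ordinal to the one-element ordinal, and the unit is the unique (empty) map from the zero ordinal. This is a sketch with my own names; the two assertions are exactly the associativity and unit conditions, read as equations between order-preserving functions.

```python
def as_tuple(f, m):
    """Tabulate f on the ordinal {0, ..., m-1}."""
    return tuple(f(k) for k in range(m))

def osum(f, m, n_cod, g):
    """Ordinal sum of f: m -> n_cod and g: p -> q, giving m+p -> n_cod+q."""
    return lambda k: f(k) if k < m else n_cod + g(k - m)

mu = lambda k: 0          # sigma_0 : 2 -> 1, sends both elements to 0
eta = lambda k: k         # delta_0 : 0 -> 1, the empty function
ident = lambda k: k       # identity on the ordinal 1

# Associativity: mu . (mu + id) = mu . (id + mu) as maps 3 -> 1.
lhs = lambda k: mu(osum(mu, 2, 1, ident)(k))
rhs = lambda k: mu(osum(ident, 1, 1, mu)(k))
assert as_tuple(lhs, 3) == as_tuple(rhs, 3) == (0, 0, 0)

# Unit laws: mu . (eta + id) = id = mu . (id + eta) as maps 1 -> 1.
left = lambda k: mu(osum(eta, 0, 1, ident)(k))
right = lambda k: mu(osum(ident, 1, 1, eta)(k))
assert as_tuple(left, 1) == as_tuple(right, 1) == (0,)
```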
We can also go back the other way, breaking any of our diagrams into basic pieces and translating each piece into one of the or functions. The category of ordinal numbers not only contains a monoid object, it is actually isomorphic to the “theory of monoids” category — it contains the “universal” monoid object.
So why bother with this new formulation at all? Well, for one thing it’s always nice to see the same structure instantiated in many different ways. Now we have it built from the ground up as , we have it implemented as a subcategory of , we have it as the category of ordinal numbers, and thus we also have it as a full subcategory of — the category of all small categories (why?).
There’s another reason, though, which won’t really concern us for a while yet. The morphisms and turn out to be very well-known to topologists as “face” and “degeneracy” maps when working with shapes they call “simplicial complexes”. Not only is this a wonderful oxymoron, it’s the source of the term “simplicial category”. If you know something about topology or homology, you can probably see how these different views start to tie together. If not, don’t worry — I’ll get back to this stuff.
For any monoid object we have an associative law for the multiplication: . This basically says that the two different ways of multiplying together three inputs to give one output are the same. Let’s call the result . In fact, we might go so far as to say , , and even .
This generalizes a lot. We want to say that there’s a unique way (called ) to multiply together inputs. The usual way is to pick some canonical form and show that everything can be reduced to that form. This ends up being a lot like the Coherence Theorem. In fact, if we take a monoid object in the category of small categories, this is the Coherence Theorem for a strict monoidal category.
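Here’s the claim made tangible in a specific monoid. In Python we can generate every full parenthesization of an n-fold product and check that they all collapse to a single value — string concatenation makes a good test case since it’s associative but not commutative, so any bug in the bracketing would show up.

```python
def products(xs):
    """The set of values of all fully-parenthesized products of xs, in order."""
    if len(xs) == 1:
        return {xs[0]}
    results = set()
    for split in range(1, len(xs)):          # choose the top-level split
        for a in products(xs[:split]):
            for b in products(xs[split:]):
                results.add(a + b)           # the monoid multiplication
    return results

# Fourteen parenthesizations of five factors, one surviving value.
assert products(["a", "b", "c", "d", "e"]) == {"abcde"}
```

Of course this only checks one monoid; the point of the argument above is that proving it once in the theory of monoids proves it for all of them at once.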
But there’s an easier way than walking through that big proof again, and it uses our diagrammatic approach! The first thing we need to realize is that if we can show this rule holds in , then it will hold for all monoid objects. That’s why the “theory of monoids” category is so nice — it exactly encodes the structure of a monoid. Anything that is true for all monoids can be proved by just looking at this category and proving it there!
So how do we show that the general associative law holds in ? Now we need to notice that the functor that makes into a monoid object is faithful. That is, if two Temperley-Lieb diagrams in the image are the same, then they must come from the same morphism in . But if two diagrams are equivalent they differ by either sliding loops and arcs around in the plane — which uses the monoidal structure to pull cups or caps past each other — or by using the zig-zag identities — which encode the left and right identity laws. Thus any equalities that hold in the image of the functor must come from equalities already present in !
Now any way of multiplying together inputs to give one output is a morphism in , which will be sent to a diagram in . It’s not too hard to see that there’s really only one of these diagrams that could be in the image of the functor (up to equivalence of diagrams). So all such multiplications are sent to the same diagram. By the faithfulness above, this means that they were all the same morphism in to begin with, and we’re done.
By the way, you should try playing around with the oriented Temperley-Lieb diagrams to verify the claim I made of uniqueness. Try to work out exactly what diagrams are in , and then which ones can possibly be in the image of the functor. Playing with the diagrams like this should give you a much better intuition for how they work. If nothing else, drawing a bunch of pictures is a lot more fun than algebra homework from back in school.
Let’s pick up with the diagrams for monoid objects from yesterday. In fact, let’s draw the multiplication and unit diagrams again, but this time let’s make the lines really thick.
Now we’re looking at something more like a region of the plane than a curve. We really don’t need all that inside part, so let’s rub it out and just leave the outline. Of course, whenever we go from a blob to its outline we like to remember where the blob was. We do this by marking a direction on the outline so if we walk in that direction the blob would be on our left. Those of you who have taken multivariable calculus probably have a vague recollection of this sort of thing. Don’t worry, though — we’re not doing calculus here.
Okay, now the outline diagrams look like this:
That’s odd. These diagrams look an awful lot like Temperley-Lieb diagrams. And in fact they are! In fact, we get a functor from to that sends to . That is, a downward-oriented strand next to an upward-oriented strand makes a monoid object on !
But to be sure of this, we need to check that the associativity and identity relations hold. Here’s associativity:
Well that’s pretty straightforward. It’s just sliding the arcs around in the plane. How about the identity relations?
The right identity relation holds because of one of the “zig-zag” relations for duals, and the left identity relation holds because of the other!
Now you should be able to find a comonoid object in in a very similar way.
I don’t know about you, but all this algebraic notation starts to blur together. Wouldn’t it be nice if we could just draw pictures?
Well, luckily for us we can! Just like we had diagrams for braided categories, categories with duals, and braided categories with duals, we have certain diagrammatics to help us talk about monoid objects.
First off, we think of our generating object as a point on a line. As we tensor copies of this object together, we just add more points. Then our morphisms will be diagrams in the plane. At the bottom of the diagram is the incoming object — a bunch of marked points — and at the top is the outgoing object — another bunch of marked points. In between, we have morphisms we can build from the two basic pieces we added: multiplication and unit.
See? For multiplication, two points come in. They move together and multiply, leaving one point to go out. For the unit, a point comes “out of nowhere” to leave the diagram.
As before, we set two diagrams side-by-side for the monoidal product and stack them top-to-bottom for composition. Now, what do those associativity and identity relations look like?
Neat! Associativity just means we can pull the branch in the middle to either side of the threefold multiplication, while identity means we can absorb a dangling free end.
I haven’t bothered to render a diagram for symmetry, but we can draw it by just having lines cross through each other. The naturality of the symmetry means that we can pull any morphism from one side of a crossing line to the other.
And now what about comonoid objects? We’ve got diagrams to talk about them too!
Here’s a comultiplication and a counit. We just flip the multiplication and unit upside-down to dualize them. And we do the same thing for the coassociativity and coidentity relations.
The one thing we have to take careful note of here is that everything in sight is strict. These diagrams don’t make any distinction between and ; or between , , and .
Another example we’ve seen already is that a ring with unit is a monoid object in — the category of abelian groups with the tensor product of abelian groups as the monoidal structure. Similarly, given a commutative ring , a monoid object in the category with tensor product of -modules as its monoidal structure is a -algebra with unit. For extra credit, how would we get rings and -algebras without units?
Here’s one we haven’t seen (and which I’ll talk more about later): given any category , the category of “endofunctors” has a monoidal structure given by composition of functors from to itself. This is the one I was thinking of that doesn’t have a symmetry, by the way. A monoid object in this category consists of a functor along with natural transformations and . These turn out to be all sorts of useful in homology theory, and also in theoretical computer science. In fact, the programming language Haskell makes extensive and explicit use of them.
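For the programmers: here’s a sketch of the most familiar such monoid object, the list monad, written in Python rather than Haskell. The multiplication is “flatten one layer of nesting” and the unit wraps an element in a singleton list; the names `fmap`, `join`, and `unit` are the conventional ones, but the implementation here is just for illustration.

```python
def fmap(f, xs):
    """The action of the list endofunctor on a morphism f."""
    return [f(x) for x in xs]

def join(xss):
    """The multiplication T(T(X)) -> T(X): flatten one layer."""
    return [x for xs in xss for x in xs]

def unit(x):
    """The unit X -> T(X): wrap in a singleton."""
    return [x]

xsss = [[[1, 2], [3]], [[], [4, 5]]]    # an element of T(T(T(X)))
xs = [1, 2, 3]                          # an element of T(X)

# Associativity: flattening the inner layers first agrees with
# flattening the outer layers first.
assert join(fmap(join, xsss)) == join(join(xsss))

# Left and right unit laws.
assert join(unit(xs)) == xs
assert join(fmap(unit, xs)) == xs
```

These are exactly the associativity and unit conditions for a monoid object, with composition of functors standing in for the monoidal product.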
And now for a really interesting class of examples. Let’s say we start with a monoidal category with monoidal structure . We immediately get a monoidal structure on the opposite category . Just define for objects. For morphisms we take and (which are in and , respectively), and define , which is in .
So what’s a monoid object in ? It’s a contravariant functor from to . Equivalently, we can write it as a covariant functor from to . It will be easier to just write down explicitly what this opposite category is.
So we need to take and reverse all the arrows. It’s enough to just reverse the arrows we threw in to generate the category, and their composites will be reversed as well. We’ll also have to dualize the relations we imposed to make everything work out right. So we’ll have an arrow called comultiplication and another arrow called the counit. These we require to satisfy the coassociative condition and the left and right coidentity conditions .
Now a functor from this category to another monoidal category picks out an object and arrows (reusing the names) and satisfying coassociativity and coidentity conditions. We call such an object with extra structure a “comonoid object” in . In we call them “comonoids”. In we call them “corings” (with counit), in we call them “coalgebras” (with counit), and in we call them “comonads”. In general, we call this new model category — the “theory of comonoids”.
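One comonoid we can check right away: under the cartesian product, every set carries a comonoid structure, with the diagonal as comultiplication and the unique map to a one-point set as counit. A quick elementwise sketch in Python (where the pairings below are equal only up to reassociation, since tuples in code are not strictly associative):

```python
def comult(x):
    """The diagonal X -> X x X."""
    return (x, x)

def counit(x):
    """The unique map X -> 1; the one-point set is modeled as ()."""
    return ()

for x in ["a", 7, (1, 2)]:
    a, b = comult(x)
    # Coassociativity: both composites produce three copies of x.
    assert (comult(a), b) == ((x, x), x)
    assert (a, comult(b)) == (x, (x, x))
    # Counit laws: deleting either copy leaves x behind.
    assert (counit(a), b) == ((), x)
    assert (a, counit(b)) == (x, ())
```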
I’m adding another new link, this time to God Plays Dice. This one is run by a mysterious and shadowy figure known only as “The Probabilist”. I don’t know why, though. There’s a lot of great stuff here, very accessible to a general audience. In fact, it’s rather like another direction I could have gone with this site six months ago, but I think “The Probabilist” does a better job of it than I would have.
So let this also be a call for “The Probabilist” to unmask and accept credit for this work! I’ve already figured out the secret, and I imagine others have as well, so we’re all just waiting for the other shoe to drop. However, I will respect “The Probabilist”’s pseudonymity, however little I understand it.
Now it’s time to start getting into the fun things we can do with monoidal categories. For my first trick, I’m going to build a neat monoidal category and show you what we can do with it.
Any monoidal category has an “identity” object , so to make it a bit more interesting let’s throw in a single non-identity object . Then we get for free all the monoidal products built with and . Let’s make our lives easier by saying our category is strict. Then all our objects look like — the monoidal product of copies of . We can see that , and that .
This is all well and good, but we still don’t really have much going on here. All the morphisms in sight are identities. We don’t even have associators or unit isomorphisms because our category is strict. So let’s throw in a couple morphisms, and of course all the other ones we can build from them.
First let’s make our category symmetric. That is, we’ll add a “twist” that swaps the copies of . We’ll insist that it satisfy . We can then build a braiding by swapping the copies of one at a time. This seems a little silly at first glance. If had any additional structure — if it were a set, for instance — this would clearly be useful. As it stands, though, the use isn’t apparent. Don’t worry, we’ll get to it.
Next, let’s add a morphism . From this we can get a bunch of other morphisms. For example, or . We can use this one to increase the number of copies of in a product in many different ways, depending on where we stick the new copy of .
But we could also add a new copy of in one place and use the symmetric structure to move it to a different place. For example, instead of adding a copy on the right with , we could instead use to add a copy on the left and then swap the two. Notice also that and , which means that these two morphisms are and . The naturality of says that these two are really the same. So, adding a new copy of and then moving it around immediately to another position is the same as just adding it in the new position right away.
Now let’s add a way to reduce the number of copies. We’ll use a morphism . Of course, we get for free such compositions as and . There will be some equalities arising from the naturality of , but nothing too important yet.
So let’s throw in a few more equalities. Let’s say that and that . And of course there are other equalities we can build from these. The whole thing should start looking a bit familiar by this point.
Okay, so we’ve got ourselves a strict monoidal category with a bunch of objects and a few morphisms satisfying some equations. So what? Well, let’s start looking at symmetric monoidal functors from into other symmetric monoidal categories.
The first monoidal category we’ll look at is , which uses the cartesian product as its monoidal structure. What does a monoidal functor look like? Well, is some set , and by monoidality we see that — the cartesian product of copies of . In particular, : a set with a single element.
The symmetry for is the natural isomorphism defined by . In particular, we get .
The morphism now becomes , which picks out a particular point of . Let’s call this point , just like the function that picks it out.
The morphism is now a function . The equations that we imposed in must still apply here: and . Since we’re in the category of sets, let’s just write these all out as functions and see what they do to particular elements.
The first equation is between two functions with source , so let’s pick an arbitrary element to follow. The left side of the equation sends this to , while the right sends it to . The equation now reads . But that’s just the associative law for a composition! The second equation is between three functions that all have source . Starting with an arbitrary element we read off the equation . And that’s the left and right unit law for a composition!
So what we see here is that gets the structure of a monoid. And given any monoid we can construct such a symmetric monoidal functor with and sending and to the multiplication and identity functions.
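Following the elementwise reading above, a concrete example of the resulting structure — a set with an associative multiplication and a distinguished identity element. Here are the residues mod 6 under multiplication, with the axioms checked exhaustively:

```python
M = range(6)

def m(a, b):
    """The function underlying the image of the multiplication morphism."""
    return (a * b) % 6

e = 1   # the element picked out by the image of the unit morphism

# Associativity: (ab)c = a(bc) for all elements.
assert all(m(m(a, b), c) == m(a, m(b, c))
           for a in M for b in M for c in M)

# Identity: ea = a = ae for all elements.
assert all(m(e, a) == a == m(a, e) for a in M)
```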
Can we do better? Sure we can. Let’s say we’ve got a homomorphism between two monoids . We can consider this to be a function between their underlying sets. Immediately we get as well, applying to each entry of the product. This is clearly symmetric. Saying that preserves the multiplication of these monoids is just the same as saying that , which is the naturality square for . Similarly, preserving the identities is the same as making the naturality square for commute. So a monoid homomorphism is the same as a natural transformation between these functors!
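A concrete instance of this correspondence: the length function is a homomorphism from strings under concatenation to natural numbers under addition, and its two naturality squares are exactly the preservation of multiplication and of the identity.

```python
def phi(s):
    """The underlying function of the homomorphism: string length."""
    return len(s)

samples = ["", "a", "xyz", "hello"]
for s in samples:
    for t in samples:
        # Naturality against multiplication: phi(s t) = phi(s) + phi(t).
        assert phi(s + t) == phi(s) + phi(t)

# Naturality against the unit: the empty string goes to 0.
assert phi("") == 0
```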
Let’s back up a bit and give our toy category a better name. Let’s call it — the “theory of monoids”. What we’ve just seen is that our familiar category of monoids is “really” the category of symmetric monoidal functors from the “theory of monoids” to sets. We now slightly shift our terminology and instead of calling such a set-with-extra-structure a “monoid”, we call it a “monoid object in “.
And now the road is clear to generalize. Given any symmetric monoidal category we can take the category of “monoid objects in “.
[UPDATE]: On reflection, the symmetric property isn’t really essential. That is, we can just consider the category of monoidal functors from to . In fact, there’s one example I’ll be getting to that doesn’t have a symmetry. In general, though, when the target category has a symmetry we’ll usually ask that our functors preserve that structure as well.
[UPDATE]: You know what? Scrap that whole symmetry bit altogether. Sometimes the target category will have symmetry and sometimes that will be helpful, but it’s just not worth it in the general theory. I’m almost sorry I brought it up in the first place.
We can easily see that limits commute with each other, as do colimits. If we have a functor , then we can take the limit either all at once, or one variable at a time: . That is, if the category has -limits, then the functor preserves all other limits.
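A toy instance of limits commuting, in the special case of a totally ordered set viewed as a category: limits there are just infima, and the infimum over a grid can be taken all at once or one variable at a time, in either order.

```python
grid = [[3, 1, 4],
        [1, 5, 9],
        [2, 6, 5]]

# The limit over the whole (product) index category...
all_at_once = min(x for row in grid for x in row)

# ...equals the iterated limits, taken in either order.
rows_first = min(min(row) for row in grid)
cols_first = min(min(row[j] for row in grid) for j in range(3))

assert all_at_once == rows_first == cols_first == 1
```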
But now we know that limit functors are right adjoints. And it turns out that any functor which has a left adjoint (and thus is a right adjoint) preserves all limits. Dually, any functor which has a right adjoint (and thus is a left adjoint) preserves all colimits.
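Here’s a down-to-earth special case of that preservation principle, at the level of posets: for any function, the direct image between power sets (ordered by inclusion) is left adjoint to the preimage. Limits in a power set are intersections and colimits are unions, so the preimage (a right adjoint) should preserve intersections, and the direct image (a left adjoint) should preserve unions — but can fail on intersections. A sketch with a hypothetical three-element function:

```python
from itertools import combinations

h = {1: "a", 2: "a", 3: "b"}   # a function A -> B, as a dict

def image(S):
    """Direct image: the left adjoint."""
    return {h[x] for x in S}

def preimage(T):
    """Preimage: the right adjoint."""
    return {x for x in h if h[x] in T}

def subsets(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# The adjunction itself: image(S) <= T  iff  S <= preimage(T).
assert all((image(S) <= T) == (S <= preimage(T))
           for S in subsets({1, 2, 3}) for T in subsets({"a", "b"}))

S1, S2 = {1, 3}, {2, 3}
T1, T2 = {"a"}, {"a", "b"}

# The right adjoint preserves limits (intersections).
assert preimage(T1 & T2) == preimage(T1) & preimage(T2)

# The left adjoint preserves colimits (unions)...
assert image(S1 | S2) == image(S1) | image(S2)
# ...but need not preserve intersections.
assert image(S1 & S2) != image(S1) & image(S2)
```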
First we need to note that we can compose adjunctions. That is, if we have adjunctions and then we can put them together to get an adjunction . Indeed, we have
We also need to note that adjoints are unique up to natural isomorphism. That is, if and then there is a natural isomorphism . This is essentially because adjunctions are determined by universal arrows, and universal arrows are unique up to isomorphism.
Okay, now we can get to work. We start with an adjunction . Given another (small) category we can build the functor categories and . It turns out we get an adjunction here too. Define for each functor . The unit induces a unit . We can similarly define and , and show that they determine an adjunction
Now let’s say that and both have -limits. Then we have an adjunction and a similar one for . We can thus form the composite adjunctions
So what is ? Well, is the functor that sends every object of to and every morphism to . Then composing this with gives the functor that sends every object of to and every morphism to . That is, we get . So . But these are the two left adjoints listed above. Thus the two right adjoints listed above are both right adjoint to the same functor, and therefore must be naturally isomorphic! We have for every functor . And thus preserves -limits.