The Unapologetic Mathematician

Mathematics for the interested outsider

Adjoints with parameters

Now that we know how to transform adjoints, we can talk about whole families of adjoints parametrized by some other category. That is, for each object P of the parametrizing category \mathcal{P} we’ll have an adjunction F_P\dashv G_P:\mathcal{C}\rightarrow\mathcal{D}, and for each morphism of \mathcal{P} we’ll have a transformation of the adjunctions.

Let’s actually approach this from a slightly different angle. Say we have a functor F:\mathcal{C}\times\mathcal{P}\rightarrow\mathcal{D}, and that for each P\in\mathcal{P} the functor F(\underline{\hphantom{X}},P):\mathcal{C}\rightarrow\mathcal{D} has a right adjoint G(P,\underline{\hphantom{X}}). Then I claim that there is a unique way to make G into a functor from \mathcal{P}^\mathrm{op}\times\mathcal{D} to \mathcal{C} so that the bijection \phi_{C,P,D}:\hom_\mathcal{D}(F(C,P),D)\rightarrow\hom_\mathcal{C}(C,G(P,D)) is natural in all three variables. Note that G must be contravariant in P here: \hom_\mathcal{D}(F(C,P),D) is contravariant in P, so \hom_\mathcal{C}(C,G(P,D)) must be as well if the bijection is to be natural.
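A familiar instance to keep in mind (a minimal Haskell sketch, with names of my own choosing): take F(C,P)=C\times P in \mathbf{Set}, so that G(P,D) is the set of functions from P to D and \phi is just currying, with P riding along as the parameter.

```haskell
-- The parametrized adjunction F(C,P) = (C,P)  -|  G(P,D) = P -> D in Set/Hask:
-- phi_{C,P,D} : Hom(F(C,P), D) -> Hom(C, G(P,D)), natural in all three variables.
phi :: ((c, p) -> d) -> c -> (p -> d)
phi f c x = f (c, x)

-- and its inverse, so phi really is a bijection
phiInv :: (c -> p -> d) -> (c, p) -> d
phiInv g (c, x) = g c x
```

Of course phi and phiInv are nothing but curry and uncurry; the point of the claim above is that the whole family is also natural in the parameter P.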

If we hold P fixed, the bijection is already natural in C and D. Let’s hold C and D fixed and see how to make it natural in P. The components \phi_P:\hom_\mathcal{D}(F(C,P),D)\rightarrow\hom_\mathcal{C}(C,G(P,D)) are already given in the setup, so we can’t change them. What we need are functions \hom_\mathcal{D}(F(1_C,p),1_D):\hom_\mathcal{D}(F(C,P'),D)\rightarrow\hom_\mathcal{D}(F(C,P),D) and \hom_\mathcal{C}(1_C,G(p,1_D)):\hom_\mathcal{C}(C,G(P',D))\rightarrow\hom_\mathcal{C}(C,G(P,D)) for each arrow p:P\rightarrow P'.

For naturality to hold, we need \hom_\mathcal{C}(1_C,G(p,1_D))\circ\phi_{P'}=\phi_P\circ\hom_\mathcal{D}(F(1_C,p),1_D). But from what we saw last time this just means that the pair of natural transformations (F(1_C,p),G(p,1_D)) forms a conjugate pair from F(\underline{\hphantom{X}},P)\dashv G(P,\underline{\hphantom{X}}) to F(\underline{\hphantom{X}},P')\dashv G(P',\underline{\hphantom{X}}). And this lets us define G(p,1_D) uniquely in terms of F(1_C,p), the counit \epsilon' of F(\underline{\hphantom{X}},P')\dashv G(P',\underline{\hphantom{X}}), and the unit \eta of F(\underline{\hphantom{X}},P)\dashv G(P,\underline{\hphantom{X}}) by using the first of the four listed equalities.
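To see this recipe in action in the currying example above (another small Haskell sketch, names mine): F(1_C,p) just applies p:P\rightarrow P' in the parameter slot, and the recipe produces G(p,1_D) as precomposition with p.

```haskell
-- F(1_C, p) for the currying family: act on the parameter slot.
fMap :: (p -> p') -> (c, p) -> (c, p')
fMap p (c, x) = (c, p x)

-- G(p, 1_D), built by the first equality: start with g in G(P',D), apply the unit
-- eta of F(-,P) -| G(P,-), push F(1_C,p) through, then collapse with the counit
-- epsilon' of F(-,P') -| G(P',-).
gMap :: (p -> p') -> (p' -> d) -> (p -> d)
gMap p g = \x ->
  let (g', x') = fMap p (g, x)  -- eta, then F(1_C,p), evaluated at x
  in g' x'                      -- epsilon'
-- which simplifies to:  gMap p g = g . p
```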

From here, it’s straightforward to show that this definition of how G acts on morphisms of \mathcal{P} makes it functorial in both variables, proving the claim. We can also flip back to the original viewpoint to define an adjunction between categories \mathcal{C} and \mathcal{D} parametrized by the category \mathcal{P} as a functor from \mathcal{P} to the category \mathbf{Adj}(\mathcal{C},\mathcal{D}) of adjunctions between those two categories.

July 31, 2007 Posted by | Category theory | 4 Comments

Transformations of Adjoints

And now we go back to adjoints. Like every other structure out there, we want to come up with some analogue of a homomorphism between two adjunctions. Let’s consider the adjunctions F\dashv G:\mathcal{C}\rightarrow\mathcal{D} and F'\dashv G':\mathcal{C}'\rightarrow\mathcal{D}', and try to find a good notion of a transformation from the first to the second.

We’ll proceed by considering an adjunction to consist of the pair of categories (\mathcal{C},\mathcal{D}) with the functors giving extra structure. Down in the land of groups and rings and such, we’d consider sets with extra structure and functions that preserved that structure. So here, naturally, we want to consider functors which preserve this extra structure. That is, a map of adjunctions consists of a pair of functors K:\mathcal{C}\rightarrow\mathcal{C}' and L:\mathcal{D}\rightarrow\mathcal{D}'. These must preserve the structure in that K\circ G=G'\circ L and L\circ F=F'\circ K.

But hold up a second, we’ve forgotten something else that goes into an adjunction: the isomorphism \phi:\hom_\mathcal{D}(F(C),D)\rightarrow\hom_\mathcal{C}(C,G(D)). Here’s a diagram showing how the map of adjunctions should play nicely with these isomorphisms:
Coherence for Adjunction Map

Equivalently we can specify an adjunction by its unit and counit. In this case the compatibility in question is a pair of equations of natural transformations: 1_K\circ\eta=\eta'\circ1_K and 1_L\circ\epsilon=\epsilon'\circ1_L.

What if we’re looking at two different adjunctions between the same pair of categories? Well then we may as well try to use the appropriate identity functors for K and L. But then it’s sort of silly to insist that G=1_\mathcal{C}\circ G=G'\circ1_\mathcal{D}=G' on the nose, and similarly for F and F'. Instead, as we do so often, let’s weaken this equality to just a natural transformation.

We’ll say that a pair of natural transformations \sigma:F\rightarrow F' and \tau:G'\rightarrow G are “conjugate” if \hom_\mathcal{C}(1_C,\tau_D)\circ\phi'_{C,D}=\phi_{C,D}\circ\hom_\mathcal{D}(\sigma_C,1_D). This is equivalent, in terms of the unit and counit, to any one of the following four equalities:

  • \tau=(1_G\circ\epsilon')\cdot(1_G\circ\sigma\circ1_{G'})\cdot(\eta\circ1_{G'})
  • \sigma=(\epsilon\circ1_{F'})\cdot(1_F\circ\tau\circ1_{F'})\cdot(1_F\circ\eta')
  • \epsilon\cdot(1_F\circ\tau)=\epsilon'\cdot(\sigma\circ1_{G'})
  • (1_G\circ\sigma)\cdot\eta=(\tau\circ1_{F'})\cdot\eta'

Now it’s easily verified that given a pair of categories (\mathcal{C},\mathcal{D}) we can build a category whose objects are adjunctions F\dashv G:\mathcal{C}\rightarrow\mathcal{D} and whose morphisms are conjugate pairs of natural transformations, which we write out in full as (\sigma,\tau):(F,G,\phi)\rightarrow(F',G',\phi'):\mathcal{C}\rightarrow\mathcal{D}. We compose conjugate pairs in this category componentwise, which we write (\sigma',\tau')\cdot(\sigma,\tau)=(\sigma'\cdot\sigma,\tau\cdot\tau'); note that the \tau components compose in the reverse order, since they point backwards.

On the other hand, if we have a pair (\sigma,\tau):(F,G,\phi)\rightarrow(F',G',\phi'):\mathcal{C}\rightarrow\mathcal{D} and another (\bar{\sigma},\bar{\tau}):(\bar{F},\bar{G},\bar{\phi})\rightarrow(\bar{F}',\bar{G}',\bar{\phi}'):\mathcal{D}\rightarrow\mathcal{E}, then we can form the composite (\bar{\sigma}\circ\sigma,\tau\circ\bar{\tau}):(\bar{F}\circ F,G\circ\bar{G},\bar{\phi}\cdot\phi)\rightarrow(\bar{F}'\circ F',G'\circ\bar{G}',\bar{\phi}'\cdot\phi'):\mathcal{C}\rightarrow\mathcal{E}, which we’ll write as (\bar{\sigma},\bar{\tau})\circ(\sigma,\tau). Notice the similarity of this situation with the two different compositions of natural transformations between functors.

July 30, 2007 Posted by | Category theory | 3 Comments

The Simplicial Category

There’s another approach to the theory of monoids which finds more direct application in topology and homology theory (which, yes, I’ll get to eventually) — the “simplicial category” \mathbf{\Delta}. Really it’s a category isomorphic to \mathrm{Th}(\mathbf{Mon}), but some people think better in these other terms. I personally like the direct focus on the algebra, coupled with the diagrammatics so reminiscent of knot theory, but for thoroughness’ sake I’ll describe the other approach.

Note that the objects of \mathrm{Th}(\mathbf{Mon}) correspond exactly with the natural numbers. Each object is the monoidal product of some number of copies of the generating object M. We’re going to focus here on the model of \mathbb{N} given by the ordinal numbers. That is, the object M^{\otimes n} corresponds to the ordinal number \mathbf{n}, which is a set of n elements with its unique (up to isomorphism) total order. In fact, we’ve been implicitly thinking about an order all along. When we draw our diagrams, the objects consist of a set of marked points along the upper or lower edge of the diagram, which we can read in order from left to right.

Let’s pick a specific representation of each ordinal to be concrete about this. The ordinal \mathbf{n} will be represented by the set of natural numbers from 0 to n-1 with the usual order relation. The monoidal structure will just be addition — \mathbf{m}\otimes\mathbf{n}=\mathbf{m+n}.

The morphisms between ordinals are functions which preserve the order. A function f:X\rightarrow Y between ordinals satisfies this property if whenever i\leq j in X then f(i)\leq f(j) in Y. Note that we can send two different elements of X to the same element of Y, just as long as we don’t pull them past each other.

So what sorts of functions do we have to play with? Well, we have a bunch of functions from \mathbf{n} to \mathbf{n+1} that skip some element of the image. For instance, we could send \mathbf{3} to \mathbf{4} by sending 0 to 0, skipping 1, sending 1 to 2, and sending 2 to 3. We’ll say \delta^n_i:\mathbf{n}\rightarrow\mathbf{n+1} for the function that skips i in its image. The above function is then \delta^3_1. For a fixed n, the index i can run from 0 to n.

We also have a bunch of functions from \mathbf{n+1} to \mathbf{n} that repeat one element of the image. For example, we could send \mathbf{4} to \mathbf{3} by sending 0 to 0, 1 and 2 both to 1, and 3 to 2. We’ll say \sigma^n_i:\mathbf{n+1}\rightarrow\mathbf{n} for the function that repeats i in its image. The above function is then \sigma^3_1. Again, for a fixed n, the index i can run from 0 to n-1.
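To make these concrete, here is one way to encode the generators (a Haskell sketch; I leave n implicit and simply act on natural numbers, trusting the caller to stay in range).

```haskell
-- delta i encodes delta^n_i : n -> n+1, the order-preserving injection that skips i.
delta :: Int -> Int -> Int
delta i k = if k < i then k else k + 1

-- sigma i encodes sigma^n_i : n+1 -> n, the order-preserving surjection that repeats i.
sigma :: Int -> Int -> Int
sigma i k = if k <= i then k else k - 1

-- e.g.  map (delta 1) [0,1,2]   == [0,2,3]    -- the delta^3_1 from above
--       map (sigma 1) [0,1,2,3] == [0,1,1,2]  -- the sigma^3_1 from above
```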

Notice in particular that “skipping” and “repeating” are purely local properties of the function. For instance, \delta^0_0 is the unique function from \mathbf{0} (the empty set) to \mathbf{1}, which clearly skips 0\in\mathbf{1}. Then \delta^n_i can be written as 1_i\otimes\delta^0_0\otimes1_{n-i}, since it leaves the numbers from 0 to i-1 alone, sticks in a new i, and then just nudges over everything from (the old) i to n. Similarly, \sigma^1_0 is the unique function from \mathbf{2} to \mathbf{1} that sends both elements in its domain to 0\in\mathbf{1}. Then all the other \sigma^n_i can be written as 1_i\otimes\sigma^1_0\otimes1_{n-i-1}.

Now every order-preserving function is determined by the set of elements of the range that are actually in the image of the function along with the set of elements of its domain where it does not increase. That is, if we know where it skips and where it repeats, we know the whole function. This tells us that we can write any function as a composition of \delta and \sigma functions. These basic functions satisfy a few identities:

  • If i\leq j then \delta^{n+1}_i\circ\delta^n_j=\delta^{n+1}_{j+1}\circ\delta^n_i.
  • If i\leq j then \sigma^{n-1}_j\circ\sigma^n_i=\sigma^{n-1}_i\circ\sigma^n_{j+1}.
  • If i<j then \sigma^n_j\circ\delta^n_i=\delta^{n-1}_i\circ\sigma^{n-1}_{j-1}.
  • If i=j or i=j+1 then \sigma^n_j\circ\delta^n_i=1.
  • If i>j+1 then \sigma^n_j\circ\delta^n_i=\delta^{n-1}_{i-1}\circ\sigma^{n-1}_j.

We could check all these by hand, and if you like that sort of thing you’re welcome to it. Instead, I’ll just assume we’ve checked the second one for n=2 and the fourth one for n=1.
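Or we can let a machine do the drudge work. Assuming the delta and sigma encodings from the sketch above, here is a brute-force spot-check of all five relations, for a given n and every pair of indices allowed by the side conditions.

```haskell
-- Compare two functions on the finite domain {0, ..., n-1}.
eqOn :: Int -> (Int -> Int) -> (Int -> Int) -> Bool
eqOn n f g = all (\k -> f k == g k) [0 .. n - 1]

-- Spot-check the five relations for one n (a finite check, not a proof).
checkRelations :: Int -> Bool
checkRelations n = and $ concat
  [ [ eqOn n     (delta i . delta j) (delta (j+1) . delta i) | j <- [0..n],   i <- [0..j]   ]
  , [ eqOn (n+1) (sigma j . sigma i) (sigma i . sigma (j+1)) | j <- [0..n-2], i <- [0..j]   ]
  , [ eqOn n     (sigma j . delta i) (delta i . sigma (j-1)) | j <- [1..n-1], i <- [0..j-1] ]
  , [ eqOn n     (sigma j . delta i) id                      | j <- [0..n-1], i <- [j, j+1] ]
  , [ eqOn n     (sigma j . delta i) (delta (i-1) . sigma j) | j <- [0..n-2], i <- [j+2..n] ]
  ]
-- e.g. checkRelations 5 == True
```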

What’s so special about those conditions? Well, notice that \sigma^1_0:\mathbf{1}\otimes\mathbf{1}\rightarrow\mathbf{1} takes two copies of \mathbf{1} to one copy, and that the second relation becomes the associativity condition for this morphism. Then also \delta^0_0:\mathbf{0}\rightarrow\mathbf{1} takes zero copies to one copy, and the fourth relation becomes the left and right identity conditions. That is, \mathbf{1} with these two morphisms is a monoid object in this category! Now we can verify all the other relations by using our diagrams rather than a lot of messy calculations!

We can also go back the other way, breaking any of our diagrams into basic pieces and translating each piece into one of the \delta or \sigma functions. The category of ordinal numbers not only contains a monoid object, it is actually isomorphic to the “theory of monoids” category \mathrm{Th}(\mathbf{Mon}) — it contains the “universal” monoid object.

So why bother with this new formulation at all? Well, for one thing it’s always nice to see the same structure instantiated in many different ways. Now we have it built from the ground up as \mathrm{Th}(\mathbf{Mon}), we have it implemented as a subcategory of \mathcal{OTL}, we have it as the category of ordinal numbers, and thus we also have it as a full subcategory of \mathbf{Cat} — the category of all small categories (why?).

There’s another reason, though, which won’t really concern us for a while yet. The morphisms \delta^n_i and \sigma^n_i turn out to be very well-known to topologists as “face” and “degeneracy” maps when working with shapes they call “simplicial complexes”. Not only is this a wonderful oxymoron, it’s the source of the term “simplicial category”. If you know something about topology or homology, you can probably see how these different views start to tie together. If not, don’t worry — I’ll get back to this stuff.

July 28, 2007 Posted by | Category theory | 5 Comments

The General Associative Law

For any monoid object we have an associative law for the multiplication: \mu\circ(\mu\otimes1_X)=\mu\circ(1_X\otimes\mu). This basically says that the two different ways of multiplying together three inputs to give one output are the same. Let’s call the result \mu_3:X^{\otimes3}\rightarrow X. In fact, we might go so far as to say \mu_2=\mu:X^{\otimes2}\rightarrow X, \mu_1=1_X:X^{\otimes1}\rightarrow X, and even \mu_0:X^{\otimes0}\rightarrow X for the unit morphism \mathbf{1}\rightarrow X of the monoid object.
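In the familiar case of an ordinary monoid in \mathbf{Set}, the general associative law is just the fact that any way of parenthesizing an n-fold product collapses to a single canonical one. A quick Haskell illustration (not part of the proof below):

```haskell
import Data.Monoid (Sum (..))

-- The canonical n-fold multiplication mu_n for an ordinary monoid:
-- mu_0 is mempty, mu_1 is the identity, mu_2 is (<>), and so on.
muN :: Monoid x => [x] -> x
muN = mconcat

-- Any parenthesization gives the same answer, e.g.
--   (Sum 1 <> Sum 2) <> Sum 3 == Sum 1 <> (Sum 2 <> Sum 3) == muN [Sum 1, Sum 2, Sum 3]
```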

This generalizes a lot. We want to say that there’s a unique way (called \mu_n) to multiply together n inputs. The usual way is to pick some canonical form and show that everything can be reduced to that form. This ends up being a lot like the Coherence Theorem. In fact, if we take a monoid object in the category \mathbf{Cat} of small categories, this is the Coherence Theorem for a strict monoidal category.

But there’s an easier way than walking through that big proof again, and it uses our diagrammatic approach! The first thing we need to realize is that if we can show this rule holds in \mathrm{Th}(\mathbf{Mon}), then it will hold for all monoid objects. That’s why the “theory of monoids” category is so nice — it exactly encodes the structure of a monoid. Anything that is true for all monoids can be proved by just looking at this category and proving it there!

So how do we show that the general associative law holds in \mathrm{Th}(\mathbf{Mon})? Now we need to notice that the functor that makes \downarrow\otimes\uparrow into a monoid object is faithful. That is, if two Temperley-Lieb diagrams in the image are the same, then they must come from the same morphism in \mathrm{Th}(\mathbf{Mon}). But if two diagrams are equivalent they differ by either sliding arcs around in the plane — which uses the monoidal structure to pull cups or caps past each other — or by using the zig-zag identities — which encode the left and right identity laws. Thus any equalities that hold in the image of the functor must come from equalities already present in \mathrm{Th}(\mathbf{Mon})!

Now any way of multiplying together n inputs to give one output is a morphism f:M^{\otimes n}\rightarrow M in \mathrm{Th}(\mathbf{Mon}), which will be sent to a diagram F(f):(\downarrow\otimes\uparrow)^{\otimes n}\rightarrow\downarrow\otimes\uparrow in \mathcal{OTL}. It’s not too hard to see that there’s really only one of these diagrams that could be in the image of the functor (up to equivalence of diagrams). So all such multiplications are sent to the same diagram. By the faithfulness above, this means that they were all the same morphism in \mathrm{Th}(\mathbf{Mon}) to begin with, and we’re done.

By the way, you should try playing around with the oriented Temperley-Lieb diagrams to verify the claim I made of uniqueness. Try to work out exactly what diagrams are in \hom_\mathcal{OTL}((\downarrow\otimes\uparrow)^{\otimes n},\downarrow\otimes\uparrow), and then which ones can possibly be in the image of the functor. Playing with the diagrams like this should give you a much better intuition for how they work. If nothing else, drawing a bunch of pictures is a lot more fun than algebra homework from back in school.

July 27, 2007 Posted by | Category theory | Leave a comment

More monoid diagrams

Let’s pick up with the diagrams for monoid objects from yesterday. In fact, let’s draw the multiplication and unit diagrams again, but this time let’s make the lines really thick.

Thick Multiplication and Unit Diagram

Now we’re looking at something more like a region of the plane than a curve. We really don’t need all that inside part, so let’s rub it out and just leave the outline. Of course, whenever we go from a blob to its outline we like to remember where the blob was. We do this by marking a direction on the outline so if we walk in that direction the blob would be on our left. Those of you who have taken multivariable calculus probably have a vague recollection of this sort of thing. Don’t worry, though — we’re not doing calculus here.

Okay, now the outline diagrams look like this:

Multiplication and Unit Diagram Outlines

That’s odd. These diagrams look an awful lot like Temperley-Lieb diagrams. And in fact they are! We get a functor from \mathrm{Th}(\mathbf{Mon}) to \mathcal{OTL} that sends M to \downarrow\otimes\uparrow. That is, a downward-oriented strand next to an upward-oriented strand makes a monoid object in \mathcal{OTL}!

But to be sure of this, we need to check that the associativity and identity relations hold. Here’s associativity:

Associativity T-L Diagram

Well that’s pretty straightforward. It’s just sliding the arcs around in the plane. How about the identity relations?

Identity T-L Diagram

The right identity relation holds because of one of the “zig-zag” relations for duals, and the left identity relation holds because of the other!

Now you should be able to find a comonoid object in \mathcal{OTL} in a very similar way.

July 26, 2007 Posted by | Category theory | 3 Comments

Diagrammatics for Monoid Objects

I don’t know about you, but all this algebraic notation starts to blur together. Wouldn’t it be nice if we could just draw pictures?

Well luckily for us we can! Just like we had diagrams for braided categories, categories with duals, and braided categories with duals, we have certain diagrammatics to help us talk about monoid objects.

First off, we think of our generating object as a point on a line. As we tensor copies of this object together, we just add more points. Then our morphisms will be diagrams in the plane. At the bottom of the diagram is the incoming object — a bunch of marked points — and at the top is the outgoing object — another bunch of marked points. In between, we have morphisms we can build from the two basic pieces we added: multiplication and unit.

Multiplication and Unit Diagrams

See? For multiplication, two points come in. They move together and multiply, leaving one point to go out. For the unit, a point comes “out of nowhere” to leave the diagram.

As before, we set two diagrams side-by-side for the monoidal product and stack them top-to-bottom for composition. Now, what do those associativity and identity relations look like?

Associativity and Identity Diagrams

Neat! Associativity just means we can pull the branch in the middle to either side of the threefold multiplication, while identity means we can absorb a dangling free end.

I haven’t bothered to render a diagram for symmetry, but we can draw it by just having lines cross through each other. The naturality of the symmetry means that we can pull any morphism from one side of a crossing line to the other.

And now what about comonoid objects? We’ve got diagrams to talk about them too!

Comultiplication and Counit Diagrams

Here’s a comultiplication and a counit. We just flip the multiplication and unit upside-down to dualize them. And we do the same thing for the coassociativity and coidentity relations.

Coassociativity and Coidentity Diagrams

The one thing we have to take careful note of here is that everything in sight is strict. These diagrams don’t make any distinction between (M\otimes M)\otimes M and M\otimes(M\otimes M); or between M, \mathbf{1}\otimes M, and M\otimes\mathbf{1}.

July 25, 2007 Posted by | Category theory | 2 Comments

Examples of Monoid Objects

It’s all well and good to define monoid objects, but it’s better to see that they subsume a lot of useful concepts. The basic case is, of course, that a monoid object in \mathbf{Set} is a monoid.

Another example we’ve seen already is that a ring with unit is a monoid object in \mathbf{Ab} — the category of abelian groups with the tensor product of abelian groups as the monoidal structure. Similarly, given a commutative ring K, a monoid object in the category K\mathbf{-mod} with tensor product of K-modules as its monoidal structure is a K-algebra with unit. For extra credit, how would we get rings and K-algebras without units?

Here’s one we haven’t seen (and which I’ll talk more about later): given any category \mathcal{C}, the category of “endofunctors” \mathcal{C}^\mathcal{C} has a monoidal structure given by composition of functors from \mathcal{C} to itself. This is the one I was thinking of that doesn’t have a symmetry, by the way. A monoid object in this category consists of a functor T:\mathcal{C}\rightarrow\mathcal{C} along with natural transformations \mu:T\circ T\rightarrow T and \eta:1_\mathcal{C}\rightarrow T. These turn out to be all sorts of useful in homology theory, and also in theoretical computer science. In fact, the programming language Haskell makes extensive and explicit use of them.
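For a quick concrete sketch, the list functor carries exactly this structure: \mu flattens a list of lists and \eta forms singletons.

```haskell
-- The list functor as a monoid object in the endofunctor category, i.e. a monad:
-- mu : T(T a) -> T a  and  eta : a -> T a.
muList :: [[a]] -> [a]
muList = concat

etaList :: a -> [a]
etaList x = [x]

-- Associativity:  muList . muList == muList . fmap muList   (as functions [[[a]]] -> [a])
-- Unit laws:      muList . etaList == id == muList . fmap etaList   (as functions [a] -> [a])
```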

And now for a really interesting class of examples. Let’s say we start with a monoidal category \mathcal{C} with monoidal structure \otimes. We immediately get a monoidal structure \otimes^\mathrm{op} on the opposite category \mathcal{C}^\mathrm{op}. Just define A\otimes^\mathrm{op}B=A\otimes B for objects. For morphisms we take f:A\rightarrow C and g:B\rightarrow D (which are in \hom_{\mathcal{C}^\mathrm{op}}(C,A) and \hom_{\mathcal{C}^\mathrm{op}}(D,B), respectively), and define f\otimes^\mathrm{op}g=f\otimes g, which is in \hom_{\mathcal{C}^\mathrm{op}}(C\otimes^\mathrm{op}D,A\otimes^\mathrm{op}B).

So what’s a monoid object in \mathcal{C}^\mathrm{op}? It’s a contravariant functor from \mathrm{Th}(\mathbf{Mon}) to \mathcal{C}. Equivalently, we can write it as a covariant functor from \mathrm{Th}(\mathbf{Mon})^\mathrm{op} to \mathcal{C}. It will be easier to just write down explicitly what this opposite category is.

So we need to take \mathrm{Th}(\mathbf{Mon}) and reverse all the arrows. It’s enough to just reverse the arrows we threw in to generate the category, and their composites will be reversed as well. We’ll also have to dualize the relations we imposed to make everything work out right. So we’ll have an arrow \Delta:M\rightarrow M\otimes M called comultiplication and another arrow \epsilon:M\rightarrow\mathbf{1} called the counit. These we require to satisfy the coassociative condition (\Delta\otimes1_M)\circ\Delta=(1_M\otimes\Delta)\circ\Delta and the left and right coidentity conditions (\epsilon\otimes1_M)\circ\Delta=1_M=(1_M\otimes\epsilon)\circ\Delta.

Now a functor from this category to another monoidal category picks out an object C\in\mathcal{C} and arrows (reusing the names) \epsilon:C\rightarrow\mathbf{1} and \Delta:C\rightarrow C\otimes C satisfying coassociativity and coidentity conditions. We call such an object with extra structure a “comonoid object” in \mathcal{C}. In \mathbf{Set} we call them “comonoids”. In \mathbf{Ab} we call them “corings” (with counit), in K\mathbf{-mod} we call them “coalgebras” (with counit), and in \mathcal{C}^\mathcal{C} we call them “comonads”. In general, we call this new model category \mathrm{Th}(\mathbf{CoMon}) — the “theory of comonoids”.
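One easy source of examples: in a cartesian monoidal category like \mathbf{Set}, every object is a comonoid in exactly one way, with \Delta the diagonal map and \epsilon the unique map to the one-point set. In Haskell terms (a tiny sketch, names mine):

```haskell
-- Comultiplication Delta : C -> C x C, the diagonal map.
comult :: c -> (c, c)
comult x = (x, x)

-- Counit epsilon : C -> 1, the unique map to the terminal object.
counit :: c -> ()
counit _ = ()

-- Coassociativity: duplicating and then duplicating either copy gives three copies of x
-- either way (up to reassociation); the coidentity laws say discarding a copy gives x back.
```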

July 24, 2007 Posted by | Category theory | 4 Comments

Tell it to Einstein!

I’m adding another new link, this time to God Plays Dice. This one is run by a mysterious and shadowy figure known only as “The Probabilist”. I don’t know why, though. There’s a lot of great stuff here, very accessible to a general audience. In fact, it’s rather like another direction I could have gone with this site six months ago, but I think “The Probabilist” does a better job of it than I would have.

So let this also be a call for “The Probabilist” to unmask and accept credit for this work! I’ve already figured out the secret, and I imagine others have as well, so we’re all just waiting for the other shoe to drop. However, I will respect “The Probabilist”’s pseudonymity, however little I understand it.

July 24, 2007 Posted by | Uncategorized | 1 Comment

Monoid Objects

Now it’s time to start getting into the fun things we can do with monoidal categories. For my first trick, I’m going to build a neat monoidal category \mathcal{M} and show you what we can do with it.

Any monoidal category has an “identity” object \mathbf{1}, so to make it a bit more interesting let’s throw in a single non-identity object M. Then we get for free all the monoidal products built with M and \mathbf{1}. Let’s make our lives easier by saying our category is strict. Then all our objects look like M^{\otimes n} — the monoidal product of n copies of M. We can see that \mathbf{1}=M^{\otimes0}, and that M^{\otimes n_1}\otimes M^{\otimes n_2}=M^{\otimes(n_1+n_2)}.

This is all well and good, but we still don’t really have much going on here. All the morphisms in sight are identities. We don’t even have associators or unit isomorphisms because our category is strict. So let’s throw in a couple morphisms, and of course all the other ones we can build from them.

First let’s make our category symmetric. That is, we’ll add a “twist” \tau:M\otimes M\rightarrow M\otimes M that swaps the copies of M. We’ll insist that it satisfy \tau^2=1_{M\otimes M}. We can then build a braiding \beta_{M^{\otimes n_1},M^{\otimes n_2}}:M^{\otimes n_1}\otimes M^{\otimes n_2}\rightarrow M^{\otimes n_2}\otimes M^{\otimes n_1} by swapping the copies of M one at a time. This seems a little silly at first glance. If M had any additional structure — if it was a set, for instance — this would be clearly useful. As it stands, though, the use isn’t apparent. Don’t worry, we’ll get to it.

Next, let’s add a morphism e:\mathbf{1}\rightarrow M. From this we can get a bunch of other morphisms. For example, 1_M\otimes e:M\otimes\mathbf{1}\rightarrow M\otimes M or e\otimes1_M:\mathbf{1}\otimes M\rightarrow M\otimes M. We can use this one to increase the number of copies of M in a product in many different ways, depending on where we stick the new copy of M.

But we could also add a new copy of M in one place and use the symmetric structure to move it to a different place. For example, instead of adding a copy on the right with 1_M\otimes e, we could instead use \tau\circ(e\otimes1_M) to add a copy on the left and then swap the two. Notice also that 1_M=\beta_{\mathbf{1},M} and \tau=\beta_{M,M}, which means that these two morphisms are (1_M\otimes e)\circ\beta_{\mathbf{1},M} and \beta_{M,M}\circ(e\otimes1_M). The naturality of \beta says that these two are really the same. So, adding a new copy of M and then moving it around immediately to another position is the same as just adding it in the new position right away.

Now let’s add a way to reduce the number of copies. We’ll use a morphism m:M\otimes M\rightarrow M. Of course, we get for free such compositions as (m\otimes1_M)\circ(1_M\otimes\tau):M^{\otimes3}\rightarrow M^{\otimes2} and m\circ(e\otimes1_M):M\rightarrow M. There will be some equalities arising from the naturality of \beta, but nothing too important yet.

So let’s throw in a few more equalities. Let’s say that m\circ(m\otimes1_M)=m\circ(1_M\otimes m) and that m\circ(e\otimes1_M)=1_M=m\circ(1_M\otimes e). And of course there are other equalities we can build from these. The whole thing should start looking a bit familiar by this point.

Okay, so we’ve got ourselves a strict monoidal category \mathcal{M} with a bunch of objects and a few morphisms satisfying some equations. So what? Well, let’s start looking at symmetric monoidal functors from \mathcal{M} into other symmetric monoidal categories.

The first monoidal category we’ll look at is \mathbf{Set}, which uses the cartesian product as its monoidal structure. What does a monoidal functor F:\mathcal{M}\rightarrow\mathbf{Set} look like? Well, F(M) is some set X, and by monoidality we see that F(M^{\otimes n})=X^{\times n} — the cartesian product of n copies of X. In particular, F(\mathbf{1})=\{*\}: a set with a single element.

The symmetry for \mathbf{Set} is the natural isomorphism \beta_{A,B}:A\times B\rightarrow B\times A defined by \beta_{A,B}: (a,b)\mapsto(b,a). In particular, we get F(\tau)=\beta_{X,X}: (x_1,x_2)\mapsto(x_2,x_1).

The morphism e:\mathbf{1}\rightarrow M now becomes F(e)=1:\{*\}\rightarrow X, which picks out a particular point of X. Let’s call this point 1, just like the function that picks it out.

The morphism m:M\otimes M\rightarrow M is now a function F(m)=\mu:X\times X\rightarrow X. The equations that we imposed in \mathcal{M} must still apply here: \mu\circ(\mu\times1_X)=\mu\circ(1_X\times\mu) and \mu\circ(1\times1_X)=1_X=\mu\circ(1_X\times1). Since we’re in the category of sets, let’s just write these all out as functions and see what they do to particular elements.

The first equation is between two functions with source X\times X\times X, so let’s pick an arbitrary element (x_1,x_2,x_3) to follow. The left side of the equation sends this to \mu(\mu(x_1,x_2),x_3), while the right sends it to \mu(x_1,\mu(x_2,x_3)). The equation now reads \mu(\mu(x_1,x_2),x_3)=\mu(x_1,\mu(x_2,x_3)). But that’s just the associative law for a composition! The second equation is between three functions that all have source X. Starting with an arbitrary element x we read off the equation \mu(1,x)=x=\mu(x,1). And that’s the left and right unit law for a composition!

So what we see here is that X=F(M) gets the structure of a monoid. And given any monoid X we can construct such a symmetric monoidal functor with F(M)=X and sending m and e to the multiplication and identity functions.
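Packaging this up as data makes the dictionary explicit (a hypothetical Haskell sketch, with names of my own choosing):

```haskell
-- A monoid object in Set/Hask: a set X together with the images of m and e.
data MonoidObject x = MonoidObject
  { mu   :: (x, x) -> x   -- F(m), the multiplication
  , unit :: () -> x       -- F(e), picking out the identity element
  }

-- The imposed equations are exactly the monoid laws:
--   mu (mu (x1, x2), x3) == mu (x1, mu (x2, x3))
--   mu (unit (), x) == x == mu (x, unit ())

-- Example: the additive monoid of integers.
addMonoid :: MonoidObject Integer
addMonoid = MonoidObject { mu = uncurry (+), unit = const 0 }
```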

Can we do better? Sure we can. Let’s say we’ve got a homomorphism between two monoids f:X_1\rightarrow X_2. We can consider this to be a function between their underlying sets. Immediately we get f^{\times n}:X_1^{\times n}\rightarrow X_2^{\times n} as well, applying f to each entry of the product. This is clearly symmetric. Saying that f preserves the multiplication of these monoids is just the same as saying that f\circ F_1(m)=F_2(m)\circ(f\times f), which is the naturality square for m. Similarly, preserving the identities is the same as making the naturality square for e commute. So a monoid homomorphism is the same as a natural transformation between these functors!

Let’s back up a bit and give our toy category a better name. Let’s call it \mathrm{Th}(\mathbf{Mon}) — the “theory of monoids”. What we’ve just seen is that our familiar category \mathbf{Mon} of monoids is “really” the category \mathbf{Set}^{\mathrm{Th}(\mathbf{Mon})} of symmetric monoidal functors from the “theory of monoids” to sets. We now slightly shift our terminology and instead of calling such a set-with-extra-structure a “monoid”, we call it a “monoid object in \mathbf{Set}”.

And now the road is clear to generalize. Given any symmetric monoidal category \mathcal{C} we can take the category \mathcal{C}^{\mathrm{Th}(\mathbf{Mon})} of “monoid objects in \mathcal{C}”.

[UPDATE]: On reflection, the symmetric property isn’t really essential. That is, we can just consider the category of monoidal functors from \mathrm{Th}(\mathbf{Mon}) to \mathcal{C}. In fact, there’s one example I’ll be getting to that doesn’t have a symmetry. In general, though, when the target category has a symmetry we’ll usually ask that our functors preserve that structure as well.

[UPDATE]: You know what? Scrap that whole symmetry bit altogether. Sometimes the target category will have symmetry and sometimes that will be helpful, but it’s just not worth it in the general theory. I’m almost sorry I brought it up in the first place.

July 23, 2007 Posted by | Category theory | 13 Comments

Adjoints Preserve Limits

We can easily see that limits commute with each other, as do colimits. If we have a functor F:\mathcal{J}_1\times\mathcal{J}_2\rightarrow\mathcal{C}, then we can take the limit \varprojlim_{\mathcal{J}_1\times\mathcal{J}_2}F either all at once, or one variable at a time: \varprojlim_{\mathcal{J}_1}\varprojlim_{\mathcal{J}_2}F=\varprojlim_{\mathcal{J}_2}\varprojlim_{\mathcal{J}_1}F. That is, if the category \mathcal{C} has \mathcal{J}-limits, then the functor \varprojlim_\mathcal{J}:\mathcal{C}^\mathcal{J}\rightarrow\mathcal{C} preserves all other limits.

But now we know that limit functors are right adjoints. And it turns out that any functor which has a left adjoint (and thus is a right adjoint) preserves all limits. Dually, any functor which has a right adjoint (and thus is a left adjoint) preserves all colimits.
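Before the proof, a down-to-earth instance (a small Haskell illustration): in \mathbf{Set} the functor \hom(A,\underline{\hphantom{X}}) is right adjoint to \underline{\hphantom{X}}\times A, so it preserves products, which is just the familiar bijection \hom(A,X\times Y)\cong\hom(A,X)\times\hom(A,Y).

```haskell
-- (a -> -) is a right adjoint (to (-, a)), so it preserves the product, which is a limit:
split :: (a -> (x, y)) -> (a -> x, a -> y)
split f = (fst . f, snd . f)

combine :: (a -> x, a -> y) -> (a -> (x, y))
combine (g, h) = \v -> (g v, h v)
```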

First we need to note that we can compose adjunctions. That is, if we have adjunctions F_1\dashv F_2:\mathcal{C}\rightarrow\mathcal{D} and G_1\dashv G_2:\mathcal{D}\rightarrow\mathcal{E} then we can put them together to get an adjunction G_1\circ F_1\dashv F_2\circ G_2:\mathcal{C}\rightarrow\mathcal{E}. Indeed, we have
\hom_\mathcal{E}(G_1(F_1(C)),E)\cong\hom_\mathcal{D}(F_1(C),G_2(E))\cong\hom_\mathcal{C}(C,F_2(G_2(E)))
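For instance (another currying sketch in Haskell), composing the adjunctions (\underline{\hphantom{X}}\times A)\dashv\hom(A,\underline{\hphantom{X}}) and (\underline{\hphantom{X}}\times B)\dashv\hom(B,\underline{\hphantom{X}}) in \mathbf{Set} gives ((\underline{\hphantom{X}}\times A)\times B)\dashv\hom(A,\hom(B,\underline{\hphantom{X}})), and the composite bijection is just currying twice.

```haskell
-- The composite bijection Hom(((C,A),B), E) <-> Hom(C, A -> B -> E).
phi2 :: (((c, a), b) -> e) -> c -> a -> b -> e
phi2 f c x y = f ((c, x), y)

phi2Inv :: (c -> a -> b -> e) -> ((c, a), b) -> e
phi2Inv g ((c, x), y) = g c x y
```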

We also need to note that adjoints are unique up to natural isomorphism. That is, if F\dashv G_1:\mathcal{C}\rightarrow\mathcal{D} and F\dashv G_2:\mathcal{C}\rightarrow\mathcal{D} then there is a natural isomorphism G_1\cong G_2. This is essentially because adjunctions are determined by universal arrows, and universal arrows are unique up to isomorphism.

Okay, now we can get to work. We start with an adjunction F\dashv G:\mathcal{C}\rightarrow\mathcal{D}. Given another (small) category \mathcal{J} we can build the functor categories \mathcal{C}^\mathcal{J} and \mathcal{D}^\mathcal{J}. It turns out we get an adjunction here too. Define F^\mathcal{J}(S)=F\circ S for each functor S:\mathcal{J}\rightarrow\mathcal{C}. The unit \eta:1_\mathcal{C}\rightarrow G\circ F induces a unit \eta^\mathcal{J}_S=\eta\circ1_S:S\rightarrow G\circ F\circ S. We can similarly define G^\mathcal{J} and \epsilon^\mathcal{J}, and show that they determine an adjunction F^\mathcal{J}\dashv G^\mathcal{J}:\mathcal{C}^\mathcal{J}\rightarrow\mathcal{D}^\mathcal{J}.

Now let’s say that \mathcal{C} and \mathcal{D} both have \mathcal{J}-limits. Then we have an adjunction \Delta\dashv\varprojlim_\mathcal{J}:\mathcal{C}\rightarrow\mathcal{C}^\mathcal{J} and a similar one for \mathcal{D}. We can thus form the composite adjunctions
F^\mathcal{J}\circ\Delta\dashv\varprojlim_\mathcal{J}\circ G^\mathcal{J}:\mathcal{C}\rightarrow\mathcal{D}^\mathcal{J}
\Delta\circ F\dashv G\circ\varprojlim_\mathcal{J}:\mathcal{C}\rightarrow\mathcal{D}^\mathcal{J}

So what is F^\mathcal{J}(\Delta(C))? Well, \Delta(C) is the functor that sends every object of \mathcal{J} to C and every morphism to 1_C. Then composing this with F gives the functor that sends every object of \mathcal{J} to F(C) and every morphism to 1_{F(C)}. That is, we get \Delta(F(C)). So F^\mathcal{J}\circ\Delta=\Delta\circ F. But these are the two left adjoints listed above. Thus the two right adjoints listed above are both right adjoint to the same functor, and therefore must be naturally isomorphic! We have \varprojlim_\mathcal{J}(G\circ T)\cong G(\varprojlim_\mathcal{J}T) for every functor T:\mathcal{J}\rightarrow\mathcal{D}. And thus G preserves \mathcal{J}-limits.

July 21, 2007 Posted by | Category theory | 2 Comments
