The Unapologetic Mathematician

Mathematics for the interested outsider

Internalizations commute

Okay, back in the saddle. We know about monoid objects and group objects, and also the slight variant of abelian group objects. These are functors (preserving some sort of product structure) from the toy categories \mathrm{Th}(\mathbf{Mon}), \mathrm{Th}(\mathbf{Grp}), and \mathrm{Th}(\mathbf{Ab}) to our target category \mathcal{C}, respectively. We could also write out all the properties of (unital) rings in terms of a category \mathrm{Th}(\mathbf{Ring}) — the “theory of rings” — and then use this to define “ring objects”.

We know, though, that a ring is a monoid object in the category \mathbf{Ab} of abelian groups. And the category of abelian groups is the category of abelian group objects in \mathbf{Set}. That is, \mathbf{Ab}\cong\mathbf{Set}^{\mathrm{Th}(\mathbf{Ab})}. And then we get monoid objects (rings) by considering (\mathbf{Set}^{\mathrm{Th}(\mathbf{Ab})})^{\mathrm{Th}(\mathbf{Mon})}.

But what is a functor from one category into the category of functors from a second category into a third? What is (\mathcal{A}^\mathcal{B})^\mathcal{C}? This is just a functor from the product category \mathcal{C}\times\mathcal{B} to \mathcal{A}! Thus (\mathcal{A}^\mathcal{B})^\mathcal{C}\cong\mathcal{A}^{\mathcal{C}\times\mathcal{B}}. This should look familiar — it says that (to whatever extent it’s well-defined) the category of categories is a closed category!

Now we also know that the product of two categories is “weakly symmetric” — \mathcal{B}\times\mathcal{C} is equivalent to \mathcal{C}\times\mathcal{B}. And thus the categories of functors from either one of these to a third category are equivalent — \mathcal{A}^{\mathcal{C}\times\mathcal{B}}\cong\mathcal{A}^{\mathcal{B}\times\mathcal{C}}. Putting this together with the “currying” above we find that (\mathcal{A}^\mathcal{B})^\mathcal{C}\cong(\mathcal{A}^\mathcal{C})^\mathcal{B}.
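In \mathbf{Set} this “currying” is just the familiar exponential law for function sets. Here’s a quick Python sketch (an illustration I’m adding, not part of the argument above) that counts finite function sets to confirm the cardinalities agree:

```python
from itertools import product

def functions(domain, codomain):
    """All functions between finite sets, each represented as a dict."""
    domain, codomain = list(domain), list(codomain)
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

A, B, C = {0, 1, 2}, {'x', 'y'}, {True, False}

# |A| = 3, |B| = 2, |C| = 2, so both sides should have 3^(2*2) = 81 elements.
lhs = functions([(c, b) for c in C for b in B], A)   # A^(C×B)
b_to_a = functions(B, A)                             # A^B: 3^2 = 9 functions
rhs = functions(C, range(len(b_to_a)))               # (A^B)^C, indexing A^B by number
assert len(lhs) == len(rhs) == 81
```

The assertion is just the set-level shadow of (\mathcal{A}^\mathcal{B})^\mathcal{C}\cong\mathcal{A}^{\mathcal{C}\times\mathcal{B}}.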

So let’s bring this back to internalizations. We find that (\mathbf{Set}^{\mathrm{Th}(\mathbf{Ab})})^{\mathrm{Th}(\mathbf{Mon})} and (\mathbf{Set}^{\mathrm{Th}(\mathbf{Mon})})^{\mathrm{Th}(\mathbf{Ab})} are equivalent. A monoid object in \mathbf{Ab} is equivalent to an abelian group object in \mathbf{Mon}.

And we saw evidence of this before. We’ve taken abelian groups and built free monoid objects on them, and we’ve taken monoids and built free abelian group objects on them. And the rings we get from each case are isomorphic to each other.

This whole setup generalizes. For any two algebraic structures we can describe by “theory of” categories, we can add both structures to an object of a category \mathcal{C} in either order, and the processes of “internalizing” the two structures into the category \mathcal{C} commute with each other.

Now, for an exercise you can work out the naïve approach mentioned above. Write down the definition of \mathrm{Th}(\mathbf{Ring}) from first principles, just like we did for monoids and groups and such. Then show that the resulting category is equivalent to \mathrm{Th}(\mathbf{Mon})\times\mathrm{Th}(\mathbf{Ab}).

August 10, 2007 Posted by | Category theory | 2 Comments

Group objects

Just like we have monoid objects, we can construct a category called \mathrm{Th}(\mathbf{Grp}), which encodes the notion of a “group object”.

Groups are a lot like monoids, but we’ll need to be able to do a few more things to describe groups than we needed for monoids. So let’s start with all the same setup as for monoid objects, but let’s make the monoidal structure in our toy category be an actual cartesian structure. That is, we start with an object G and we build up all the “powers”, but now we insist that they be built from categorical products rather than just some arbitrary monoidal structure. Then G^{\times n} is the categorical product of n copies of G. Not only does this work like a monoidal structure, but it comes equipped with a bunch of extra arrows. For example, there are arrows to “project out” a copy of G from a product, and every object X has a unique arrow t_X from X to the terminal object — the product of zero copies of G.

More importantly it has an arrow \Delta:G\rightarrow G\times G called the “diagonal” that is defined as the unique arrow satisfying \pi_1\circ\Delta=1_G=\pi_2\circ\Delta. That is, it makes an “identical copy” of G. For instance, in the context of sets this is the function \Delta:S\rightarrow S\times S defined by \Delta(s)=(s,s).

Now we do everything like we did for monoid objects. There’s a morphism m:G\times G\rightarrow G, and one e:G^{\otimes0}\rightarrow G, and these satisfy the identity and associativity relations. Now we also throw in an arrow i:G\rightarrow G satisfying m\circ(i\times1_G)\circ\Delta=e\circ t_G=m\circ(1_G\times i)\circ\Delta. That is, we can start with an “element” of G and split it into two copies. Then we can hit either copy with i and leave the other alone. Then we can multiply together the copies. Either choice of which one to hit with i will give us the exact same result as if we’d just “forgotten” the original element of G by passing to the terminal object, and then created a copy of the identity element with e.

Wow, that looks complicated. Well, let’s take a functor from this category to \mathbf{Set} that preserves products. Then what does the equation say in terms of elements of sets? We read off m(x,i(x))=e=m(i(x),x). That is, the product of x and i(x) on either side is just the single element in the image of the arrow described by e — the identity element of the monoid. But this is the condition that i(x) be the inverse of x. So we’re just saying that (when we read the condition in sets) every element of our monoid has an inverse, which makes it into a group! Now a group object in any other category \mathcal{C} is a product-preserving functor from \mathrm{Th}(\mathbf{Grp}) to \mathcal{C}.
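To make this concrete, here’s a minimal Python sketch (names and representation are mine, purely for illustration) of a group object in \mathbf{Set} — the integers mod 5 under addition — with each structural arrow written out as a plain function:

```python
# A group object in Set, sketched concretely: G = Z/5 under addition.
n = 5
G = set(range(n))

def m(pair):          # multiplication m : G × G → G
    x, y = pair
    return (x + y) % n

def e(_):             # unit e : {*} → G, picking out the identity element
    return 0

def i(x):             # inverse arrow i : G → G
    return (-x) % n

def diagonal(x):      # the diagonal Δ : G → G × G
    return (x, x)

def t(_):             # t_G : G → {*}, the unique arrow to the terminal object
    return '*'

# The inverse axiom m∘(i×1)∘Δ = e∘t = m∘(1×i)∘Δ, checked elementwise:
for x in G:
    assert m((i(x), x)) == e(t(x)) == m((x, i(x))) == 0
```

Reading the loop in words: hitting either copy of x with i and multiplying gives the same thing as forgetting x and producing the identity, which is exactly the equation m(x,i(x))=e=m(i(x),x) above.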

We can do even better. Since the monoidal structure in \mathrm{Th}(\mathbf{Grp}) is cartesian, it comes with a symmetry. The twist \tau_{A,B}:A\times B\rightarrow B\times A is defined as the unique arrow satisfying \pi_1\circ\tau_{A,B}=\pi_2 and \pi_2\circ\tau_{A,B}=\pi_1. In sets, this means \tau_{A,B}(a,b)=(b,a). Now we can add the relation m\circ\tau_{G,G}=m to our category. In sets this reads that m(x,y)=m(y,x), which says that the multiplication is commutative. The resulting category is \mathrm{Th}(\mathbf{Ab}) — the “theory of abelian groups” — and an “abelian group object” in a category \mathcal{C} is a product-preserving functor in \mathcal{C}^{\mathrm{Th}(\mathbf{Ab})}.

August 4, 2007 Posted by | Category theory | 13 Comments

Free Algebras

Let’s work an explicit example from start to finish to illustrate these free monoid objects a little further. Consider the category K\mathbf{-mod} of modules over the commutative ring K, with tensor product over K as its monoidal structure. We know that monoid objects here are K-algebras with units.

Before we can invoke our theorem to cook up free K-algebras, we need to verify the hypotheses. First of all, K\mathbf{-mod} is symmetric. Well, remember that the tensor product is defined so that K-bilinear functions from A\times B to C are in bijection with K-linear functions from A\otimes B to C. So we’ll define the function T(a,b)=b\otimes a. Now there is a unique function \tau:A\otimes B\rightarrow B\otimes A which sends a\otimes b to b\otimes a. Naturality and symmetry are straightforward from here.

Now we need to know that K\mathbf{-mod} is closed. Again, this goes back to the definition of tensor products. The set \hom_K(A\otimes B,C) consists of K-linear functions from A\otimes B to C, which correspond to K-bilinear functions from A\times B to C. Now we can use the same argument we did for sets to see such a function as a K-linear function from A to the K-module \hom_K(B,C). Remember here that every module over K is both a left and a right module because K is commutative. That is, we have a bijection \hom_K(A\otimes B,C)\cong\hom_K(A,\hom_K(B,C)). Naturality is easy to check, so we conclude that K\mathbf{-mod} is indeed closed.

Finally, we need to see that K\mathbf{-mod} has countable coproducts. The direct sum of modules gives us our coproducts (though not products, since our index set is infinite). And since K\mathbf{-mod} is closed, the tensor product preserves all of these coproducts.

At last, the machinery of our free monoid object theorem creaks to life and says that the free K-algebra on a K-module A is \bigoplus\limits_n(A^{\otimes n}). And we see that this is exactly how we constructed the free ring on an abelian group! In fact, that’s a special case of this construction because abelian groups are \mathbb{Z}-modules and rings are \mathbb{Z}-algebras.
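As a sanity check on the construction, here’s a rough Python sketch of this free algebra on a free K-module with a two-element basis. Elements of \bigoplus_n A^{\otimes n} are finite linear combinations of “words” in the basis, and multiplication concatenates words bilinearly. (The representation is improvised for illustration; nothing here is forced by the post.)

```python
from collections import defaultdict

def tensor_mult(p, q):
    """Multiply two tensor-algebra elements, each a dict {word: coefficient},
    where a word is a tuple of basis labels. Concatenation, extended bilinearly."""
    result = defaultdict(int)
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            result[w1 + w2] += c1 * c2
    return dict(result)

one = {(): 1}                  # the unit: the empty word, living in A^{⊗0} ≅ K
p = {('a',): 2, ('b',): 1}     # 2a + b, living in A = A^{⊗1}
q = {('a', 'b'): 3}            # 3(a⊗b), living in A^{⊗2}

assert tensor_mult(one, p) == p                                  # unit law
assert tensor_mult(p, q) == {('a', 'a', 'b'): 6, ('b', 'a', 'b'): 3}
```

With integer coefficients this is exactly the free ring on the free abelian group with basis \{a,b\}, matching the remark that the \mathbb{Z}-module case recovers the free ring construction.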

August 3, 2007 Posted by | Category theory | 1 Comment

Free Monoid Objects

When we have an algebraic concept described as a set with extra structure, the morphisms between such structured sets are usually structure-preserving functions between the underlying sets. This gives us a “forgetful” functor which returns the underlying sets and functions. Then as we saw, we often have a left adjoint to this forgetful functor giving the “free” structure generated by a set.

But now that we’re talking about monoid objects we’re trying not to think about sets. A monoid object in \mathcal{C} is a monoidal functor from \mathrm{Th}(\mathbf{Mon}) to \mathcal{C}, and a “homomorphism” of such monoid objects is a monoidal natural transformation. But the object part of such a functor is specified by one object of \mathcal{C} — the image of M\in\mathrm{Th}(\mathbf{Mon}) — which we can reasonably call the “underlying object” of the monoid object. Similarly, a natural transformation will be specified by a morphism between the underlying objects (subject to naturality conditions, of course). That is, we have a “forgetful functor” from monoid objects in \mathcal{C} to \mathcal{C} itself. And a reasonable notion of a “free” monoid object will be a left adjoint to this functor.

Now, if the monoidal category \mathcal{C} has coproducts indexed by the natural numbers, and if the functors C\otimes\underline{\hphantom{X}} and \underline{\hphantom{X}}\otimes C preserve these coproducts for all objects C\in\mathcal{C}, then the forgetful functor above will have a left adjoint. To say that the monoidal structure preserves these coproducts is to say that the following “distributive laws” hold:

\coprod\limits_n(A\otimes B_n)\cong A\otimes\biggl(\coprod\limits_nB_n\biggr)
\coprod\limits_n(A_n\otimes B)\cong\biggl(\coprod\limits_nA_n\biggr)\otimes B

For any object C\in\mathcal{C} we can define the “free monoid object on C” to be \coprod\limits_nC^{\otimes n}, equipped with certain multiplication and unit morphisms. For the unit, we will use the inclusion morphism C^{\otimes0}\rightarrow\coprod\limits_nC^{\otimes n} that comes for free with the coproduct. The multiplication will take a bit more work.

Given any natural numbers m and n, the object C^{\otimes m}\otimes C^{\otimes n} is canonically isomorphic to C^{\otimes m+n}, which then includes into \coprod\limits_kC^{\otimes k} using the coproduct morphisms. But this object also includes into \coprod\limits_{i,j}(C^{\otimes i}\otimes C^{\otimes j}), which is isomorphic to \biggl(\coprod\limits_iC^{\otimes i}\biggr)\otimes\biggl(\coprod\limits_jC^{\otimes j}\biggr). Thus by the universal property of coproducts there is a unique morphism \mu:\biggl(\coprod\limits_iC^{\otimes i}\biggr)\otimes\biggl(\coprod\limits_jC^{\otimes j}\biggr)\rightarrow\biggl(\coprod\limits_kC^{\otimes k}\biggr). This is our multiplication.

Proving that these two morphisms satisfy the associativity and identity relations is straightforward, though somewhat tedious. Thus we have a monoid object in \mathcal{C}. The inclusion C=C^{\otimes1}\rightarrow\coprod\limits_nC^{\otimes n} defines a universal arrow from C to the forgetful functor, and so we have an adjunction.

So what does this look like in \mathbf{Set}? The free monoid object on a set S will consist of the coproduct (disjoint union) of the sets of ordered n-tuples of elements of S. The unit will be the unique {0}-tuple (), and I’ll leave it to you to verify that the multiplication defined above becomes concatenation in this context. And thus we recover the usual notion of a free monoid.
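Here’s a small Python sketch of that free monoid in \mathbf{Set}, together with its universal property: any map from the generating set into a monoid extends uniquely to a monoid homomorphism. (The target monoid and the map f below are arbitrary choices for illustration.)

```python
# Free monoid on a set S: words are tuples of elements of S,
# the unit is the empty tuple, and multiplication is concatenation.
def unit():
    return ()

def mult(s, t):
    return s + t     # tuple concatenation

def extend(f, op, e):
    """Extend f : S → M to the unique monoid homomorphism from the free
    monoid on S into the monoid (M, op, e)."""
    def hom(word):
        result = e
        for s in word:
            result = op(result, f(s))
        return result
    return hom

# Illustration: extend len : strings → (int, +, 0).
h = extend(len, lambda a, b: a + b, 0)
assert h(unit()) == 0                                  # unit goes to unit
assert h(mult(('ab', 'c'), ('defg',))) == h(('ab', 'c')) + h(('defg',))
```

The two assertions are just the statement that h is a homomorphism; uniqueness holds because every word is a product of one-letter words.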

One thing I slightly glossed over is showing that \mathbf{Set} satisfies the hypotheses of our construction. It works here for the same reason it will work in many other contexts: \mathbf{Set} is a closed category. Given any closed category with countable coproducts, the functor C\otimes\underline{\hphantom{X}} has a right adjoint by definition. And thus it preserves all colimits which might exist. In particular, it preserves the countable coproducts, which is what the construction requires. The other functor preserves these coproducts as well because the category is symmetric — tensoring by C on the left and tensoring by C on the right are naturally isomorphic. Thus we have free monoid objects in any closed category with countable coproducts.

August 2, 2007 Posted by | Category theory | 2 Comments

Closed Categories

A closed category is a symmetric monoidal category \mathcal{C} where each functor \underline{\hphantom{X}}\otimes B has a specified right adjoint called an “exponential”: (\underline{\hphantom{X}})^B. By what we said yesterday, this means that there is a unique way to sew all these adjoints parametrized by \mathcal{C} together into a functor.

The canonical example of such a category is the category of sets with the cartesian product as its monoidal structure. For each set B we need an adjunction \hom_\mathbf{Set}(A\times B,C)\cong\hom_\mathbf{Set}(A,C^B). That is, functions taking a pair of elements — one from A and one from B — to an element of C should be in bijection with functions from A to C^B. And indeed we have such an adjunction: C^B is the set \hom_\mathbf{Set}(B,C) of all functions from B to C.

Let’s say this a bit more concretely. If we consider the set of natural numbers \mathbb{N} we have the function +, which takes two numbers and gives back a third: if we stick in 1 and 2 we get back 1+2=3. But what if we just stick in the first number of the pair? Then what we get back is a function that will take the second number and give us the sum: if we just feed in 1 we get back the function x\mapsto1+x. That is, we can see addition either as a function \mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N} taking a pair of numbers to a number, or we can see it as a function \mathbb{N}\rightarrow\mathbb{N}^\mathbb{N}, taking a number to a function from numbers to numbers.

More generally, if f:A\times B\rightarrow C we can define \bar{f}(a)=f_a, which is defined by f_a(b)=f(a,b)\in C. That is, \bar{f} takes an element of A and gives back a function from B to C. We call this process of turning functions of many variables into functions of a single variable at a time “currying”, after Haskell Curry. It turns out to be phenomenally useful for discussing theories of computation, and forms part of the basis of functional programming.
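Since currying is so central here, a tiny Python sketch may help (Python functions stand in for the set maps above):

```python
def curry(f):
    """Turn f : A × B → C into the function A → (B → C)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """The inverse: turn g : A → (B → C) back into A × B → C."""
    return lambda a, b: g(a)(b)

add = lambda x, y: x + y
add1 = curry(add)(1)       # the function x ↦ 1 + x from the example above
assert add1(2) == 3
assert uncurry(curry(add))(1, 2) == add(1, 2) == 3
```

The pair curry/uncurry is exactly the adjunction bijection \hom(A\times B,C)\cong\hom(A,C^B), written in code.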

Closed categories are an attempt to mirror this currying procedure in other categories. In general, if the monoidal structure in question is the categorical product (which is always symmetric) then we say the category is “cartesian closed”. Most such categories still look a lot like this example in sets, with morphisms given by functions preserving structure and the exponential given by an appropriate analogue of the \hom functor.

Here’s an example, though, of a cartesian closed category that looks rather different. It requires the notion of a “predicate calculus”, but not very much of it. Basically, if you have a rough idea of what such a thing is you’ll be fine.

Okay, there’s a category \mathcal{F} whose objects are well-formed formulas in the calculus, and whose arrows are proofs. The product in this category is \wedge, read “and”. If we know P\wedge Q then we can derive P and we can derive Q, so there are two arrows. On the other hand, if R implies both P and Q, then R implies P\wedge Q. This is just a rough sketch that \wedge is the categorical product, but it’s good enough for now.

Anyhow, the exponential is \implies, read “implies”. This is subtler than it seems. In the last paragraph I was saying things like “P implies Q“, but this is not what I mean here. The formula P\implies Q is the statement within the calculus that there exists a proof starting from P and ending with Q. Writing out “implies” as I’m discussing the structure is a high-level view from outside the system. Writing \implies in the structure itself is a low-level view, “internal” to the category. “P implies Q” is a statement about the category, while “P\implies Q” is a statement within the category.

Now if P\wedge Q implies R we have (by definition) a proof starting from P\wedge Q and ending with R. Then we can use this proof to devise a proof starting from P and ending with Q\implies R. That is, P\wedge Q implies R if and only if P implies Q\implies R. This shows that Q\implies\underline{\hphantom{X}} is right adjoint to P\wedge\underline{\hphantom{X}}, as we claimed.
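At the cruder level of truth values (rather than proofs), the adjunction leaves a shadow we can machine-check: (P\wedge Q)\implies R and P\implies(Q\implies R) have identical truth tables. A quick Python check, with material implication standing in for \implies:

```python
from itertools import product

# Material implication on truth values.
implies = lambda p, q: (not p) or q

# (P ∧ Q) → R and P → (Q → R) agree on every assignment:
for P, Q, R in product([True, False], repeat=3):
    assert implies(P and Q, R) == implies(P, implies(Q, R))
```

This is only the decategorified statement — the category \mathcal{F} keeps track of the proofs themselves, not just whether a proof exists — but it’s the same exponential law.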

August 1, 2007 Posted by | Category theory | 16 Comments

Adjoints with parameters

Now that we know how to transform adjoints, we can talk about whole families of adjoints parametrized by some other category. That is, for each object P of the parametrizing category \mathcal{P} we’ll have an adjunction F_P\dashv G_P:\mathcal{C}\rightarrow\mathcal{D}, and for each morphism of \mathcal{P} we’ll have a transformation of the adjunctions.

Let’s actually approach this from a slightly different angle. Say we have a functor F:\mathcal{C}\times\mathcal{P}\rightarrow\mathcal{D}, and that for each P\in\mathcal{P} the functor F(\underline{\hphantom{X}},P):\mathcal{C}\rightarrow\mathcal{D} has a right adjoint G(P,\underline{\hphantom{X}}). Then I claim that there is a unique way to make G into a functor from \mathcal{P}^\mathrm{op}\times\mathcal{D} to \mathcal{C} so that the bijection \phi_{C,P,D}:\hom_\mathcal{D}(F(C,P),D)\rightarrow\hom_\mathcal{C}(C,G(P,D)) is natural in all three variables. Note that G must be contravariant in P here to make the composite functors have the same variance in P.

If we hold P fixed, the bijection is already natural in C and D. Let’s hold C and D fixed and see how to make it natural in P. The components \phi_P:\hom_\mathcal{D}(F(C,P),D)\rightarrow\hom_\mathcal{C}(C,G(P,D)) are already given in the setup, so we can’t change them. What we need are functions \hom_\mathcal{D}(F(1_C,p),1_D):\hom_\mathcal{D}(F(C,P'),D)\rightarrow\hom_\mathcal{D}(F(C,P),D) and \hom_\mathcal{C}(1_C,G(p,1_D)):\hom_\mathcal{C}(C,G(P',D))\rightarrow\hom_\mathcal{C}(C,G(P,D)) for each arrow p:P\rightarrow P'.

For naturality to hold, we need \hom_\mathcal{C}(1_C,G(p,1_D))\circ\phi_{P'}=\phi_P\circ\hom_\mathcal{D}(F(1_C,p),1_D). But from what we saw last time this just means that the pair of natural transformations (F(1_C,p),G(p,1_D)) forms a conjugate pair from F(\underline{\hphantom{X}},P)\dashv G(P,\underline{\hphantom{X}}) to F(\underline{\hphantom{X}},P')\dashv G(P',\underline{\hphantom{X}}). And this lets us define G(p,1_D) uniquely in terms of F(1_C,p), the counit \epsilon' of F(\underline{\hphantom{X}},P')\dashv G(P',\underline{\hphantom{X}}), and the unit \eta of F(\underline{\hphantom{X}},P)\dashv G(P,\underline{\hphantom{X}}) by using the first of the four listed equalities.

From here, it’s straightforward to show that this definition of how G acts on morphisms of \mathcal{P} makes it functorial in both variables, proving the claim. We can also flip back to the original viewpoint to define an adjunction between categories \mathcal{C} and \mathcal{D} parametrized by the category \mathcal{P} as a functor from \mathcal{P} to the category \mathbf{Adj}(\mathcal{C},\mathcal{D}) of adjunctions between those two categories.

July 31, 2007 Posted by | Category theory | 4 Comments

Transformations of Adjoints

And now we go back to adjoints. Like every other structure out there, we want to come up with some analogue of a homomorphism between two adjunctions. Let’s consider the adjunctions F\dashv G:\mathcal{C}\rightarrow\mathcal{D} and F'\dashv G':\mathcal{C}'\rightarrow\mathcal{D}', and try to find a good notion of a transformation from the first to the second.

We’ll proceed by considering an adjunction to consist of the pair of categories (\mathcal{C},\mathcal{D}) with the functors giving extra structure. Down in the land of groups and rings and such, we’d consider sets with extra structure and functions that preserved that structure. So here, naturally, we want to consider functors which preserve this extra structure. That is, a map of adjunctions consists of a pair of functors K:\mathcal{C}\rightarrow\mathcal{C}' and L:\mathcal{D}\rightarrow\mathcal{D}'. These must preserve the structure in that K\circ G=G'\circ L and L\circ F=F'\circ K.

But hold up a second, we’ve forgotten something else that goes into an adjunction: the isomorphism \phi:\hom_\mathcal{D}(F(C),D)\rightarrow\hom_\mathcal{C}(C,G(D)). Here’s a diagram showing how the map of adjunctions should play nicely with them:
Coherence for Adjunction Map

Equivalently we can specify an adjunction by its unit and counit. In this case the compatibility in question is a pair of equations of natural transformations: 1_K\circ\eta=\eta'\circ1_K and 1_L\circ\epsilon=\epsilon'\circ1_L.

What if we’re looking at two different adjunctions between the same pair of categories? Well then we may as well try to use the appropriate identity functors for K and L. But then it’s sort of silly to insist that G=1_\mathcal{C}\circ G=G'\circ1_\mathcal{D}=G' on the nose, and similarly for F'. Instead, as we do so often, let’s weaken this equality to just a natural transformation.

We’ll say that a pair of natural transformations \sigma:F\rightarrow F' and \tau:G'\rightarrow G are “conjugate” if \hom_\mathcal{C}(1_C,\tau_D)\circ\phi'_{C,D}=\phi_{C,D}\circ\hom_\mathcal{D}(\sigma_C,1_D). This is equivalent, in terms of the unit and counit, to any one of the following four equalities:

  • \tau=(1_G\circ\epsilon')\cdot(1_G\circ\sigma\circ1_{G'})\cdot(\eta\circ1_{G'})
  • \sigma=(\epsilon\circ1_{F'})\cdot(1_F\circ\tau\circ1_{F'})\cdot(1_F\circ\eta')
  • \epsilon\cdot(1_F\circ\tau)=\epsilon'\cdot(\sigma\circ1_{G'})
  • (1_G\circ\sigma)\cdot\eta=(\tau\circ1_{F'})\cdot\eta'

Now it’s easily verified that given a pair of categories (\mathcal{C},\mathcal{D}) we can build a category whose objects are adjunctions F\dashv G:\mathcal{C}\rightarrow\mathcal{D} and whose morphisms are conjugate pairs of natural transformations, which we write out in full as (\sigma,\tau):(F,G,\phi)\rightarrow(F',G',\phi'):\mathcal{C}\rightarrow\mathcal{D}. We compose conjugate pairs in this category in the obvious way, which we write (\sigma',\tau')\cdot(\sigma,\tau).

On the other hand, if we have a pair (\sigma,\tau):(F,G,\phi)\rightarrow(F',G',\phi'):\mathcal{C}\rightarrow\mathcal{D} and another (\bar{\sigma},\bar{\tau}):(\bar{F},\bar{G},\bar{\phi})\rightarrow(\bar{F}',\bar{G}',\bar{\phi}'):\mathcal{D}\rightarrow\mathcal{E}, then we can form the composite (\bar{\sigma}\circ\sigma,\tau\circ\bar{\tau}):(\bar{F}\circ F,G\circ\bar{G},\bar{\phi}\cdot\phi)\rightarrow(\bar{F}'\circ F',G'\circ\bar{G}',\bar{\phi}'\cdot\phi'):\mathcal{C}\rightarrow\mathcal{E}, which we’ll write as (\bar{\sigma},\bar{\tau})\circ(\sigma,\tau). Notice the similarity of this situation with the two different compositions of natural transformations between functors.

July 30, 2007 Posted by | Category theory | 3 Comments

The Simplicial Category

There’s another approach to the theory of monoids which finds more direct application in topology and homology theory (which, yes, I’ll get to eventually) — the “simplicial category” \mathbf{\Delta}. Really it’s a category isomorphic to \mathrm{Th}(\mathbf{Mon}), but some people think better in these other terms. I personally like the direct focus on the algebra, coupled with the diagrammatics so reminiscent of knot theory, but for thoroughness’ sake I’ll describe the other approach.

Note that the objects of \mathrm{Th}(\mathbf{Mon}) correspond exactly with the natural numbers. Each object is the monoidal product of some number of copies of the generating object M. We’re going to focus here on the model of \mathbb{N} given by the ordinal numbers. That is, the object M^{\otimes n} corresponds to the ordinal number \mathbf{n}, which is a set of n elements with its unique (up to isomorphism) total order. In fact, we’ve been implicitly thinking about an order all along. When we draw our diagrams, the objects consist of a set of marked points along the upper or lower edge of the diagram, which we can read in order from left to right.

Let’s pick a specific representation of each ordinal to be concrete about this. The ordinal \mathbf{n} will be represented by the set of natural numbers from {0} to n-1 with the usual order relation. The monoidal structure will just be addition — \mathbf{m}\otimes\mathbf{n}=\mathbf{m+n}.

The morphisms between ordinals are functions which preserve the order. A function f:X\rightarrow Y between ordinals satisfies this property if whenever i\leq j in X then f(i)\leq f(j) in Y. Note that we can send two different elements of X to the same element of Y, just as long as we don’t pull them past each other.

So what sorts of functions do we have to play with? Well, we have a bunch of functions from \mathbf{n} to \mathbf{n+1} that skip some element of the image. For instance, we could send \mathbf{3} to \mathbf{4} by sending {0} to {0}, skipping 1, sending 1 to 2, and sending 2 to 3. We’ll say \delta^n_i:\mathbf{n}\rightarrow\mathbf{n+1} for the function that skips i in its image. The above function is then \delta^3_1. For a fixed n, the index i can run from {0} to n.

We also have a bunch of functions from \mathbf{n+1} to \mathbf{n} that repeat one element of the image. For example, we could send \mathbf{4} to \mathbf{3} by sending {0} to {0}, 1 and 2 both to 1, and 3 to 2. We’ll say \sigma^n_i:\mathbf{n+1}\rightarrow\mathbf{n} for the function that repeats i in its image. The above function is then \sigma^3_1. Again, for a fixed n, the index i can run from {0} to n-1.

Notice in particular that “skipping” and “repeating” are purely local properties of the function. For instance, \delta^0_0 is the unique function from \mathbf{0} (the empty set) to \mathbf{1}, which clearly skips 0\in\mathbf{1}. Then \delta^n_i can be written as 1_i\otimes\delta^0_0\otimes1_{n-i}, since it leaves the numbers from {0} to i-1 alone, sticks in a new i, and then just nudges over everything from (the old) i to n. Similarly, \sigma^1_0 is the unique function from \mathbf{2} to \mathbf{1} that sends both elements in its domain to 0\in\mathbf{1}. Then all the other \sigma^n_i can be written as 1_i\otimes\sigma^1_0\otimes1_{n-i-1}.

Now every order-preserving function is determined by the set of elements of the range that are actually in the image of the function along with the set of elements of its domain where it does not increase. That is, if we know where it skips and where it repeats, we know the whole function. This tells us that we can write any function as a composition of \delta and \sigma functions. These basic functions satisfy a few identities:

  • If i\leq j then \delta^{n+1}_i\circ\delta^n_j=\delta^{n+1}_{j+1}\circ\delta^n_i.
  • If i\leq j then \sigma^{n-1}_j\circ\sigma^n_i=\sigma^{n-1}_i\circ\sigma^n_{j+1}.
  • If i<j then \sigma^n_j\circ\delta^n_i=\delta^{n-1}_i\circ\sigma^{n-1}_{j-1}.
  • If i=j or i=j+1 then \sigma^n_j\circ\delta^n_i=1.
  • If i>j+1 then \sigma^n_j\circ\delta^n_i=\delta^{n-1}_{i-1}\circ\sigma^{n-1}_j.

We could check all these by hand, and if you like that sort of thing you’re welcome to it. Instead, I’ll just assume we’ve checked the second one for n=2 and the fourth one for n=1.
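For those who’d rather let a machine do the drudgery, here’s a Python sketch that checks all five identities on small ordinals. It treats \delta_i and \sigma_i uniformly as functions on natural numbers (superscripts left implicit), which works because skipping and repeating are local, as noted above:

```python
# δ_i skips i in its image; σ_i sends both i and i+1 to i.
def delta(i):
    return lambda k: k if k < i else k + 1

def sigma(i):
    return lambda k: k if k <= i else k - 1

def agree(f, g, size):
    return all(f(k) == g(k) for k in range(size))

c = lambda f, g: (lambda k: f(g(k)))   # composition f∘g

n = 4
for i in range(n + 2):                 # δ^{n+1}_i ∘ δ^n_j = δ^{n+1}_{j+1} ∘ δ^n_i
    for j in range(n + 1):
        if i <= j:
            assert agree(c(delta(i), delta(j)), c(delta(j + 1), delta(i)), n)
for i in range(n):                     # σ^{n-1}_j ∘ σ^n_i = σ^{n-1}_i ∘ σ^n_{j+1}
    for j in range(n - 1):
        if i <= j:
            assert agree(c(sigma(j), sigma(i)), c(sigma(i), sigma(j + 1)), n + 1)
for i in range(n + 1):                 # the three mixed relations
    for j in range(n):
        if i < j:
            assert agree(c(sigma(j), delta(i)), c(delta(i), sigma(j - 1)), n)
        elif i in (j, j + 1):
            assert agree(c(sigma(j), delta(i)), lambda k: k, n)
        else:                          # i > j + 1
            assert agree(c(sigma(j), delta(i)), c(delta(i - 1), sigma(j)), n)
```

Of course a check on small ordinals isn’t a proof, but the identities are “local” enough that nothing new happens for larger n.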

What’s so special about those conditions? Well, notice that \sigma^1_0:\mathbf{1}\otimes\mathbf{1}\rightarrow\mathbf{1} takes two copies of \mathbf{1} to one copy, and that the second relation becomes the associativity condition for this morphism. Then also \delta^0_0:\mathbf{0}\rightarrow\mathbf{1} takes zero copies to one copy, and the fourth relation becomes the left and right identity conditions. That is, \mathbf{1} with these two morphisms is a monoid object in this category! Now we can verify all the other relations by using our diagrams rather than a lot of messy calculations!

We can also go back the other way, breaking any of our diagrams into basic pieces and translating each piece into one of the \delta or \sigma functions. The category of ordinal numbers not only contains a monoid object, it is actually isomorphic to the “theory of monoids” category — it contains the “universal” monoid object.

So why bother with this new formulation at all? Well, for one thing it’s always nice to see the same structure instantiated in many different ways. Now we have it built from the ground up as \mathrm{Th}(\mathbf{Mon}), we have it implemented as a subcategory of \mathcal{OTL}, we have it as the category of ordinal numbers, and thus we also have it as a full subcategory of \mathbf{Cat} — the category of all small categories (why?).

There’s another reason, though, which won’t really concern us for a while yet. The morphisms \delta^n_i and \sigma^n_i turn out to be very well-known to topologists as “face” and “degeneracy” maps when working with shapes they call “simplicial complexes”. Not only is this a wonderful oxymoron, it’s the source of the term “simplicial category”. If you know something about topology or homology, you can probably see how these different views start to tie together. If not, don’t worry — I’ll get back to this stuff.

July 28, 2007 Posted by | Category theory | 5 Comments

The General Associative Law

For any monoid object we have an associative law for the multiplication: \mu\circ(\mu\otimes1_X)=\mu\circ(1_X\otimes\mu). This basically says that the two different ways of multiplying together three inputs to give one output are the same. Let’s call the result \mu_3:X^{\otimes3}\rightarrow X. In fact, we might go so far as to say \mu_2=\mu:X^{\otimes2}\rightarrow X, \mu_1=1_X:X^{\otimes1}\rightarrow X, and even \mu_0=1:X^{\otimes0}\rightarrow X.
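In \mathbf{Set} we can see this concretely: for an associative multiplication, every way of parenthesizing n inputs gives the same answer, so \mu_n is well defined. Here’s a small Python sketch (my own illustration) that enumerates every parenthesization of a product and confirms they all agree:

```python
def all_products(xs, op):
    """The set of results of multiplying the tuple xs with the binary
    operation op, one candidate per parenthesization."""
    if len(xs) == 1:
        return {xs[0]}
    results = set()
    for split in range(1, len(xs)):
        for left in all_products(xs[:split], op):
            for right in all_products(xs[split:], op):
                results.add(op(left, right))
    return results

# Strings under concatenation form a monoid: all five parenthesizations
# of four inputs collapse to a single result.
assert all_products(('a', 'b', 'c', 'd'), lambda x, y: x + y) == {'abcd'}
```

If op were not associative the returned set would generally have more than one element, which is exactly what the general associative law rules out.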

This generalizes a lot. We want to say that there’s a unique way (called \mu_n) to multiply together n inputs. The usual way is to pick some canonical form and show that everything can be reduced to that form. This ends up being a lot like the Coherence Theorem. In fact, if we take a monoid object in the category \mathbf{Cat} of small categories, this is the Coherence Theorem for a strict monoidal category.

But there’s an easier way than walking through that big proof again, and it uses our diagrammatic approach! The first thing we need to realize is that if we can show this rule holds in \mathrm{Th}(\mathbf{Mon}), then it will hold for all monoid objects. That’s why the “theory of monoids” category is so nice — it exactly encodes the structure of a monoid. Anything that is true for all monoids can be proved by just looking at this category and proving it there!

So how do we show that the general associative law holds in \mathrm{Th}(\mathbf{Mon})? Now we need to notice that the functor that makes \downarrow\otimes\uparrow into a monoid object is faithful. That is, if two Temperley-Lieb diagrams in the image are the same, then they must come from the same morphism in \mathrm{Th}(\mathbf{Mon}). But if two diagrams are equivalent they differ by either sliding arcs and loops around in the plane — which uses the monoidal structure to pull cups or caps past each other — or by using the zig-zag identities — which encode the left and right identity laws. Thus any equalities that hold in the image of the functor must come from equalities already present in \mathrm{Th}(\mathbf{Mon})!

Now any way of multiplying together n inputs to give one output is a morphism f:M^{\otimes n}\rightarrow M in \mathrm{Th}(\mathbf{Mon}), which will be sent to a diagram F(f):(\downarrow\otimes\uparrow)^{\otimes n}\rightarrow\downarrow\otimes\uparrow in \mathcal{OTL}. It’s not too hard to see that there’s really only one of these diagrams that could be in the image of the functor (up to equivalence of diagrams). So all such multiplications are sent to the same diagram. By the faithfulness above, this means that they were all the same morphism in \mathrm{Th}(\mathbf{Mon}) to begin with, and we’re done.

By the way, you should try playing around with the oriented Temperley-Lieb diagrams to verify the claim I made of uniqueness. Try to work out exactly what diagrams are in \hom_\mathcal{OTL}((\downarrow\otimes\uparrow)^{\otimes n},\downarrow\otimes\uparrow), and then which ones can possibly be in the image of the functor. Playing with the diagrams like this should give you a much better intuition for how they work. If nothing else, drawing a bunch of pictures is a lot more fun than algebra homework from back in school.

July 27, 2007 Posted by | Category theory | Leave a comment

More monoid diagrams

Let’s pick up with the diagrams for monoid objects from yesterday. In fact, let’s draw the multiplication and unit diagrams again, but this time let’s make the lines really thick.

Thick Multiplication and Unit Diagram

Now we’re looking at something more like a region of the plane than a curve. We really don’t need all that inside part, so let’s rub it out and just leave the outline. Of course, whenever we go from a blob to its outline we like to remember where the blob was. We do this by marking a direction on the outline so if we walk in that direction the blob would be on our left. Those of you who have taken multivariable calculus probably have a vague recollection of this sort of thing. Don’t worry, though — we’re not doing calculus here.

Okay, now the outline diagrams look like this:

Multiplication and Unit Diagram Outlines

That’s odd. These diagrams look an awful lot like Temperley-Lieb diagrams. And in fact they are! In fact, we get a functor from \mathrm{Th}(\mathbf{Mon}) to \mathcal{OTL} that sends M to \downarrow\otimes\uparrow. That is, a downward-oriented strand next to an upward-oriented strand makes a monoid object in \mathcal{OTL}!

But to be sure of this, we need to check that the associativity and identity relations hold. Here’s associativity:

Associativity T-L Diagram

Well that’s pretty straightforward. It’s just sliding the arcs around in the plane. How about the identity relations?

Identity T-L Diagram

The right identity relation holds because of one of the “zig-zag” relations for duals, and the left identity relation holds because of the other!

Now you should be able to find a comonoid object in \mathcal{OTL} in a very similar way.

July 26, 2007 Posted by | Category theory | 3 Comments
