The Unapologetic Mathematician

Mathematics for the interested outsider

Duality terminology

I should have mentioned this before, but usually dual notions are marked by the prefix “co-“. As an example, we have “well-powered” and “co-well-powered” categories.

Another example: We know through Erdős that “a mathematician is a device for turning coffee into theorems”. It thus follows by duality that a comathematician is a device for turning cotheorems into ffee.

May 31, 2007 Posted by | Category theory | 3 Comments

The Opposite Category

One of the most interesting general facts about categories is how ubiquitous the notion of duality is. Pretty much everything has a “mirror image”, and the mirror is the opposite category.

Given a category \mathcal{C}, with objects {\rm Ob}(\mathcal{C}) and morphisms {\rm Mor}(\mathcal{C}), we can construct the “opposite category” \mathcal{C}^{\rm op} simply enough. In fact, it has the exact same objects and morphisms as \mathcal{C} does. The difference comes in how they relate to each other.

Remember that we had two functions, s(m) and t(m) assigning the “source” and “target” objects to any arrow. To get the opposite category we just swap them. Given a morphism, its source in \mathcal{C}^{\rm op} is its target in \mathcal{C}, and vice versa. Of course, now we have to swap the order of composition. If we have f:A\rightarrow B and g:B\rightarrow C in \mathcal{C}, then we get f:B\rightarrow A and g:C\rightarrow B in \mathcal{C}^{\rm op}. In \mathcal{C} the composition g\circ f:A\rightarrow C is defined, but in \mathcal{C}^{\rm op} the composition f\circ g:C\rightarrow A is defined.
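
As a concrete (if anachronistic) illustration, here is a minimal Python sketch of this construction; the class and function names are my own invention, not anything standard. A small category is stored as raw data (arrows with their sources and targets, plus a composition table), and its opposite simply swaps the source and target assignments and reads the composition table in the other order.

```python
class SmallCategory:
    """A small category given by raw data."""
    def __init__(self, objects, arrows, src, tgt, compose):
        self.objects = objects    # set of object names
        self.arrows = arrows      # set of arrow names
        self.src = src            # dict: arrow -> its source object
        self.tgt = tgt            # dict: arrow -> its target object
        # dict: (g, f) -> g∘f, defined exactly when src[g] == tgt[f]
        self.compose = compose

def opposite(cat):
    """Same objects and arrows; sources and targets swapped, and the
    composite of f and g in the opposite is g∘f from the original."""
    op_compose = {(f, g): h for (g, f), h in cat.compose.items()}
    return SmallCategory(cat.objects, cat.arrows,
                         src=cat.tgt, tgt=cat.src, compose=op_compose)
```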

Most definitions we give will automatically come with a “dual” definition, which we get by reversing all the arrows like this. For example, monos and epis are dual notions, as are subobjects and quotient objects. Just write down one definition in terms of morphisms, reverse all the morphisms (and the order of composition), and you get the other.

Theorems work this way too. If you dualize the hypotheses and the conclusions, then you can dualize each step of the proof to prove the dual theorem. I can prove that any injection is monic, so it follows immediately by duality that any surjection is epic. Many texts actually completely omit even the statements of dual notions and theorems once they define the opposite category, but I’ll try to be explicit about the duals (though of course I won’t need to give the proofs).

Another place duality comes up is in defining “contravariant” functors. This is just a functor F:\mathcal{C}^{\rm op}\rightarrow\mathcal{D}. It sends each object of \mathcal{C} to an object of \mathcal{D}, and sends a morphism f:A\rightarrow B in \mathcal{C} to a morphism F(f):F(B)\rightarrow F(A) in \mathcal{D}. See how the direction of the image morphism flipped? Early on, contravariant and regular (“covariant”) functors were treated somewhat separately, but really they’re just the same thing once you take the opposite category into account. Sometimes, though, it’s easier to think in terms of contravariant functors rather than mentally flipping all the arrows.

I’ll close with an example of a contravariant functor we’ve seen before. Consider a ring R with unit and a left module M over R. That is, M is an object in the category R-{\bf mod}. We can construct the dual module M^*, which is now an object in the category {\bf mod}-R of right R-modules. I say that this is a contravariant functor. We’ve specified how the dual module construction behaves on objects, but we need to see how it behaves on morphisms. This is what makes it functorial.

So let’s say we have two left R-modules M and N, and an R-module homomorphism f:M\rightarrow N. Since we want this to be a contravariant functor we need to find a morphism f^*:N^*\rightarrow M^*. But notice that M^*=\hom_{R-{\bf mod}}(M,R), and similarly for N. Then we have the composition of R-module homomorphisms \circ:\hom_{R-{\bf mod}}(N,R)\otimes\hom_{R-{\bf mod}}(M,N)\rightarrow\hom_{R-{\bf mod}}(M,R). If \nu is a linear functional on N, then we get \nu\circ f:M\rightarrow R as a linear functional on M. We can define f^*(\nu)=\nu\circ f.

Now, is this construction functorial? We have to check that it preserves identities and compositions. For identities it’s simple: 1_N^*(\nu)=\nu\circ1_N=\nu, so every linear functional on N gets sent back to itself. For compositions we have to be careful. The order has to switch around because this is a contravariant functor. We take f:M\rightarrow N and g:N\rightarrow L and check (g\circ f)^*(\lambda)=\lambda\circ g\circ f=g^*(\lambda)\circ f=f^*(g^*(\lambda))=(f^*\circ g^*)(\lambda), as it should.
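
If it helps to see the arrows move, here is a tiny Python sketch of this precomposition, with ordinary functions standing in for module homomorphisms and a couple of made-up toy maps; it just checks (g\circ f)^*=f^*\circ g^* on a few sample inputs.

```python
def dual(f):
    """f^* sends a functional nu to nu∘f: precomposition with f."""
    return lambda nu: (lambda m: nu(f(m)))

# Toy maps between copies of Z^2, written as plain functions on pairs.
f = lambda m: (m[0] + m[1], m[1])      # f : M -> N
g = lambda n: (n[0], 3 * n[1])         # g : N -> L
lam = lambda l: 2 * l[0] + 5 * l[1]    # a functional lambda on L

gf = lambda m: g(f(m))                 # the composite g∘f : M -> L
lhs = dual(gf)(lam)                    # (g∘f)^*(lambda)
rhs = dual(f)(dual(g)(lam))            # (f^* ∘ g^*)(lambda)
assert all(lhs(m) == rhs(m) for m in [(1, 0), (0, 1), (2, -3)])
```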

May 31, 2007 Posted by | Category theory | 7 Comments

Solution to the ARML Scrimmage Power Question

Today I’ll give my own solution to the power question I posted last week. I’m going to present it as I always wished I could have: as a whole with answers to the specific questions spun off as they naturally arise, rather than separated part by part.

First I’ll restate the setup in more natural language. This is really a question about certain endomorphisms of vector spaces over the field \mathbb{Z}_2. This is the quotient of the ring of integers by the maximal ideal generated by 2. If you haven’t seen it before, you should be able to construct it from what I’ve said so far.

Anyhow, the “arrangements” are just vectors in the vector space \mathbb{Z}_2^n. Such a space comes with a basis \{e_i\}_{i=1}^n, where e_i has a 1 in place i and 0 elsewhere. Each of these spaces of course has the identity endomorphism I_n, and it also has a “left rotation” endomorphism L_n sending e_i to e_{i-1} and e_1 to e_n, and its inverse R_n — right rotation. The transformation we are concerned with is T_n=I_n+L_n.

Since we are given bases we are justified in writing out matrices for transformations, and even for vectors (because V\cong\hom(\mathbb{Z}_2,V)). The transformation T_n has the matrix (t_{i,j}) where t_{i,j} is 1 if j=i or j\equiv i+1\pmod{n} (the index wraps around), and 0 otherwise. A vector of length n will be an n\times1 matrix.
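
For readers who want to experiment, here is a small Python sketch of the setup (the names are just for illustration): an arrangement is a list of 0s and 1s, and T_n adds each entry to its left-rotated neighbour, mod 2. The later sketches in this post reuse these definitions.

```python
def rotate_left(a):
    """L_n: entry i of the result is entry i+1 of a (indices wrap around)."""
    return a[1:] + a[:1]

def T(a):
    """T_n = I_n + L_n over Z_2: add each entry to its left neighbour, mod 2."""
    return [(x + y) % 2 for x, y in zip(a, rotate_left(a))]

print(T([1, 0, 1, 1]))   # prints [1, 1, 0, 0]
```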

Numbers 1 and 2 are just calculations, so I’ll omit them.

We can write T_2 as the matrix \begin{pmatrix}1&1\\1&1\end{pmatrix}, so T_2^2 has matrix \begin{pmatrix}1&1\\1&1\end{pmatrix}\begin{pmatrix}1&1\\1&1\end{pmatrix}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix}, which sends every vector to the zero vector. This is part 4.

Now one interesting property we can see straight off. We can tell whether there are an even or an odd number of ones in a given arrangement by adding up all the entries. That is, we can take the product with the 1\times n matrix p consisting of all ones. If we first transform an arrangement a, then measure the number of ones, this is like taking the product pT_na. But each column of the matrix for T_n has exactly two ones in it, so the product pT_n consists of all zeroes, and thus pT_na is always zero. That is, the image of any arrangement after a transformation always has an even number of ones. That’s number 10.
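
Continuing the Python sketch above (reusing T), here is a brute-force check of this parity claim for n=6:

```python
from itertools import product

# Every arrangement in the image of T_6 has an even number of ones.
assert all(sum(T(list(a))) % 2 == 0 for a in product([0, 1], repeat=6))
```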

What arrangements are fixed by the transformation? This amounts to solving the equation
a=T_na=I_na+L_na=a+L_na
so we must have 0=L_na, and applying R_n to both sides gives a=R_n0=0. The zero vector is the only fixed point.

What vectors get sent to this fixed point? This is the kernel of the transformation — the vectors a such that 0=a+L_na. Equivalently, these satisfy a=L_na (why?). Thus all the entries must be the same, and the vector must consist of all zeroes or all ones. That’s number 9.
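
Again reusing the sketch above, here is a brute-force computation of the kernel for n=5:

```python
from itertools import product

# Only the all-zeroes and all-ones arrangements are sent to zero.
n = 5
kernel = [list(a) for a in product([0, 1], repeat=n) if T(list(a)) == [0] * n]
assert kernel == [[0] * n, [1] * n]
```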

Now we see that if T_n^ka=0 for some k\geq1, then T_n^{k-1}a is in the kernel of T_n, and is thus either all zeroes or all ones. But if n is odd, the vector consisting of all ones has an odd number of ones, so by number 10 it is not in the image of T_n. Thus if we don’t start with a vector in the kernel we’ll never land in the kernel, and we’ll never get to the vector of all zeroes. That’s number 11. As a special case we have number 3.

Let’s consider T_n^2. We expand this as (I_n+L_n)(I_n+L_n)=I_n+L_n+L_n+L_n^2=I_n+L_n^2, since we’re working over \mathbb{Z}_2. Similarly, if we square this we get I_n+L_n^4. In fact, we have that T_n^{2^k}=I_n+L_n^{2^k}. Indeed, this is true by definition for k=0, and if it’s true for k then
T_n^{2^{k+1}}=(T_n^{2^k})^2=(I_n+L_n^{2^k})^2=I_n+(L_n^{2^k})^2=I_n+L_n^{2^{k+1}}
so the claim is true by induction.
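
Here is a quick randomized check of this identity, reusing T from the sketch above; iterating T_n for 2^k steps should agree with adding the rotation by 2^k places.

```python
import random

def iterate_T(a, times):
    for _ in range(times):
        a = T(a)
    return a

def rotate_left_by(a, k):
    k %= len(a)
    return a[k:] + a[:k]

n, k = 12, 3
for _ in range(20):
    a = [random.randint(0, 1) for _ in range(n)]
    assert iterate_T(a, 2 ** k) == [(x + y) % 2
                                    for x, y in zip(a, rotate_left_by(a, 2 ** k))]
```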

This means that after 2^k transformations a vector is sent to the sum (in \mathbb{Z}_2) of itself with its left-rotation by 2^k places. Thus if 2^k|n we can divide the entries in a into 2^k vectors of length \frac{n}{2^k} each — just pick the entries spaced out by separations of 2^k. Then T_n^{2^k} acts on a the same way T_{\frac{n}{2^k}} acts on each of the subvectors, since the shift by 2^k places sends each subvector into itself, rotating it by one spot. That’s number 7. Parts 5 and 6 are now special cases.

Also, if n=2^k then L_n^{2^k}=I_n, so T_n^{2^k}=0, so 2^k transformations send every vector of length 2^k to zero. That’s number 8.
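
A check of this for n=8, reusing iterate_T from the previous sketch:

```python
from itertools import product

# Eight applications of T_8 send every arrangement of length 8 to zero.
n = 8
assert all(iterate_T(list(a), n) == [0] * n for a in product([0, 1], repeat=n))
```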

Finally, let n be any even number that is not a power of 2, and write n=2^km with m odd and m>1. The result of T_n^{q2^k}a is the same as applying T_m^q to each of the 2^k subvectors described above. But now each subvector has odd length m. If a has a single 1 in it, then one of these subvectors must contain it. By number 11 this subvector can never be sent to zero, so T_n^{q2^k}a is never zero. And if T_n^ra were ever zero then T_n^{q2^k}a would be zero for q large enough, which we’ve just seen never happens. That’s part 12.
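
And one last check, for n=6: starting from a single 1, the orbit under T_6 is eventually periodic and never contains the zero vector (again reusing T from the sketch above).

```python
n = 6
a = [1] + [0] * (n - 1)
seen = set()
while tuple(a) not in seen:     # the orbit is finite, so this terminates
    seen.add(tuple(a))
    a = T(a)
assert (0,) * n not in seen     # the zero arrangement never shows up
```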

May 31, 2007 Posted by | Uncategorized | 2 Comments

Equivalence of categories

By analogy with all the examples we’ve seen in groups, rings, and modules, we say that two categories \mathcal{C} and \mathcal{D} are isomorphic if there are functors F:\mathcal{C}\rightarrow\mathcal{D} and G:\mathcal{D}\rightarrow\mathcal{C} so that F\circ G is the identity functor on \mathcal{D} and G\circ F is the identity functor on \mathcal{C}.

Unfortunately, that’s not really as useful as it is in more mundane structures. The problem is that isomorphism is really strict, and categories are more naturally sort of loose and free-flowing. We have to weaken the notion of isomorphism to get something really useful. I’ll try to motivate this with one example. It’s sort of simple, but it gets to the heart of the problem pretty quickly.

Remember the category \mathbf{1} with one object and its identity morphism. We draw it as \bullet, taking the identity as given. Let’s also consider a category with two objects and two non-identity morphisms, one going in either direction: \bullet\rightleftarrows\bullet. Call the objects 1 and 2, and the four arrows f_{1,1}, f_{1,2}, f_{2,1}, f_{2,2}, where f_{i,j} goes from i to j. Then we have f_{1,2}\circ f_{2,1}=f_{2,2} and f_{2,1}\circ f_{1,2}=f_{1,1}. That is, the two non-identity arrows are inverses to each other, and the two objects are thus isomorphic.

Now since the two objects in our toy category are isomorphic, they are “essentially the same”. The category should (and does) behave pretty much the same as the category \mathbf{1}, but they’re clearly not isomorphic as categories.

Here’s another way they’re “the same” category. Since each hom-set in our toy category has at most one morphism, we can consider it as a preorder. We have the set \{1,2\} with 1\leq2 and 2\leq1 in the order. If we (as usual) move to a partial order, we identify these two points and just get the unique partial order on one point — the category \mathbf{1}!

Okay, so if they’re not isomorphic, what are they? The key here is that the two objects in our toy category are isomorphic, but not identical. The definition of isomorphism of categories requires that (for example) G(F(C)) is exactly C again. But really should it matter if we don’t get back C itself, but just an object isomorphic to it? Not really. And we also shouldn’t care if we get different results for every object in \mathcal{C} as long as they’re all isomorphic to the object we stick into G\circ F, and as long as these isomorphisms all fit together nicely with morphisms between the objects.

The upshot of all this is that we do know how to weaken the concept of isomorphism of categories: we just change the functor equalities in the definition to natural isomorphisms of functors! That is, we have natural isomorphisms \eta:1_\mathcal{C}\rightarrow G\circ F and \epsilon:F\circ G\rightarrow1_\mathcal{D}. Let’s unpack this all for our toy category.

The functor F from \bullet\rightleftarrows\bullet to \bullet is the only one possible: send both objects to the single object in \mathbf{1} and all the arrows to its identity. For a functor G the other way we have two choices. Let’s send the single object of \mathbf{1} to the object 1. The identity arrow must then go to f_{1,1}. Now we need natural isomorphisms.

To find these, let’s write down the compositions explicitly: F\circ G sends the single object of \mathbf{1} to the object 1 in our toy category, which then gets sent back to the single object in \mathbf{1}. Its identity arrow just comes along for the ride. Thus F\circ G is exactly the identity functor on \mathbf{1}, so we can just use its identity natural transformation as our \epsilon.

On the other hand, G\circ F sends both objects of our toy category to the object 1, and all four arrows to the identity arrow f_{1,1}. This is clearly not the identity functor on our category, so we need to find a natural isomorphism. That is, we need to find isomorphisms \eta_1:1\rightarrow G(F(1))=1 and \eta_2:2\rightarrow G(F(2))=1 so that the two nontrivial naturality squares commute. Clearly we can only pick \eta_1=f_{1,1} and \eta_2=f_{2,1}. Now if we write out the naturality squares for f_{1,2} and f_{2,1} (do it!) we see that they do commute, so \eta is a natural isomorphism.
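
If you would rather let a machine check the squares, here is a small Python verification of exactly this computation; encoding the arrow f_{i,j} as the tuple ("f", i, j) is just one convenient choice.

```python
# The toy category: objects 1, 2 and one arrow ("f", i, j) : i -> j for each pair.
objects = [1, 2]
src = {("f", i, j): i for i in objects for j in objects}
tgt = {("f", i, j): j for i in objects for j in objects}

def compose(g, f):
    """g∘f: since every hom-set has exactly one arrow, just match endpoints."""
    assert src[g] == tgt[f]
    return ("f", src[f], tgt[g])

# G∘F sends both objects to 1 and every arrow to f_{1,1}.
def GF_obj(c): return 1
def GF_arr(f): return ("f", 1, 1)

# Components of eta: eta_1 = f_{1,1} and eta_2 = f_{2,1}.
eta = {1: ("f", 1, 1), 2: ("f", 2, 1)}

# Naturality: eta_j ∘ f = (G∘F)(f) ∘ eta_i for every arrow f : i -> j.
for f in src:
    i, j = src[f], tgt[f]
    assert compose(eta[j], f) == compose(GF_arr(f), eta[i])
```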

When we have two functors running back and forth between two categories so that their composites are naturally isomorphic to the respective identities like this, we say that they furnish an “equivalence”, and that the categories are “equivalent”. Every isomorphism of categories is clearly an equivalence, but there are many examples of categories which are equivalent but not isomorphic. This is a much more natural notion than isomorphism for saying categories are “essentially the same”.

May 30, 2007 Posted by | Category theory | 9 Comments

Special kinds of morphisms, subobjects, and quotient objects

Last week I was using the word “invertible” as if it were perfectly clear. Well, it should be, more or less, since categories are pretty similar to monoids, but I should be a bit more explicit. First, though, there are a few other kinds of morphisms we should know.

We want to motivate these definitions from what we know of sets, but the catch is that sets are actually pretty special. Some properties turn out to be the same when applied to sets, though they can be different in other categories.

First of all, let’s look at injective functions. Remember that these are functions f:X\rightarrow Y where f(x_1)=f(x_2) implies x_1=x_2. That is, distinct inputs produce distinct outputs. Now (as long as X is nonempty) we can build a function g:Y\rightarrow X as follows: if y=f(x) for some x\in X we define g(y)=x. This is well-defined because at most one x can work, by injectivity. Then all the other elements of Y we just send to some arbitrary element of X. Now the composition g\circ f is the identity function on X because g(f(x))=x for all x\in X. We say that the function f has a (non-unique) “left inverse”.
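
Here is a short Python sketch of this construction for finite sets; the function names are made up, and the "arbitrary element" is just whichever element of X we see first.

```python
def left_inverse(f, X, Y):
    """Given an injective f : X -> Y with X nonempty, build g : Y -> X
    with g(f(x)) = x.  Elements of Y outside the image of f all go to
    one arbitrary element of X."""
    default = next(iter(X))
    table = {f(x): x for x in X}       # well-defined because f is injective
    return lambda y: table.get(y, default)

f = lambda x: x * x                    # injective on {1, 2, 3}
X, Y = {1, 2, 3}, {1, 4, 9, 7}
g = left_inverse(f, X, Y)
assert all(g(f(x)) == x for x in X)    # g∘f is the identity on X
```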

Now since f has a left inverse g there’s something else that happens: if we have two functions h_1 and h_2 from any set Z to X, and if f\circ h_1=f\circ h_2, then h_1=g\circ f\circ h_1=g\circ f\circ h_2=h_2. That is, f is “left cancellable”.

Now in any category \mathcal{C} we say a morphism f is a “monomorphism” (or “a mono”, or “f is monic”) if it is left cancellable, whether or not the cancellation comes from a left inverse as above. If f has a left inverse we say f is “injective” or that it is “an injection”. By the same argument as above, every injection is monic, but in general not all monos are injective. In \mathbf{Set} the two concepts are the same.

Similarly, a surjective function f has a right inverse g, and is thus right cancellable. We say in general that a right cancellable morphism is an “epimorphism” (or “an epi”, or “f is epic”). If the right cancellation comes from a right inverse, we say that f is “surjective”, or that it is “a surjection”. Again, every surjection is epic, but not all epis are surjective. In \mathbf{Set} the two concepts are again the same.

If a morphism is both monic and epic then we call it a “bimorphism”, and it can be cancelled from either side. If it is both injective and surjective we call it an “isomorphism”. All isomorphisms are bimorphisms, but not all bimorphisms are isomorphisms. If f is an isomorphism, then we can show (try it) that the left and right inverses are not only unique, but are the same, and we call the (left and right) inverse f^{-1}. When I said “invertible” last week I meant that such an inverse exists.

We’ve already seen these terms in other categories. In groups and rings we have monomorphisms and epimorphisms, which are monos and epis in the categories \mathbf{Grp} and \mathbf{Ring}.

Now recall that any subset T of a set S comes with an injective function T\rightarrow S “including” T into S. Similarly, subgroups and subrings come with “inclusion” monomorphisms. We generalize this concept and define a “subobject” of an object C in a category \mathcal{C} to be a monomorphism S\rightarrow C. In the same way we generalize quotient groups and quotient rings by defining a “quotient object” of C to be an epimorphism C\rightarrow Q.

Notice that we define a subobject to be an arrow, and we allow any monomorphism. Consider the function f:\{a,b\}\rightarrow\{1,2,3\} defined by f(a)=1 and f(b)=3. It seems odd at first, but we say that this is a subobject of \{1,2,3\}. The important thing here is that we don’t define these concepts in terms of elements of sets, but in terms of arrows and their relations to each other. We “can’t tell the difference” between \{a,b\} and \{1,3\} since they are isomorphic as sets. If we just look at the arrow f and the usual inclusion arrow of \{1,3\}\subseteq\{1,2,3\}, they pick out the same subset of \{1,2,3\}, so we may as well consider them to be the same subobject.

Let’s be a little more general here. Let f_1:S_1\rightarrow C and f_2:S_2\rightarrow C be two subobjects of C. We say that f_1 “factors through” f_2 if there is an arrow g:S_1\rightarrow S_2 so that f_1=f_2\circ g. If we take the class of all subobjects of C (all monomorphisms into C) we can give it the structure of a preorder by saying f_1\leq f_2 if f_1 factors through f_2. It should be straightforward to verify that this is a preorder.

Now we can turn this preorder into a partial order as usual by identifying any two subobjects which factor through each other. If f_1=f_2\circ g_2 and f_2=f_1\circ g_1 then f_1=f_1\circ g_1\circ g_2. Since f_1 is monic we can cancel it from the left and find that 1_{S_1}=g_1\circ g_2. Similarly, we find that 1_{S_2}=g_2\circ g_1. That is, g_1 and g_2 are inverses of each other, and so S_1 and S_2 are isomorphic as subobjects of C. Conversely, if S_1 and S_2 are isomorphic subobjects then f_1 and f_2 factor through each other by an isomorphism g:S_1\rightarrow S_2. This gives us a partial order on (equivalence classes of) subobjects of C. If the class of equivalence classes of subobjects is actually a set (rather than a proper class) for every object C, we say that our category is “well-powered”.

The preceding two paragraphs can be restated in terms of quotient objects. Just switch the directions of all the arrows and the orders of all the compositions. We get a partial order on (equivalence classes of) quotient objects of C. If the class of equivalence classes is a set for each object C, then we say that the category is “co-well-powered”.

It should be noted that even though isomorphic subobjects come with an isomorphism between their objects, just having an isomorphism between the objects is not enough. One toy example is given in the comments below. Another is to consider two distinct one-element subsets of a given set. Clearly the object for each is a singleton, and all singletons are isomorphic, but the two subsets are not isomorphic as subobjects.

As an exercise, consider the category \mathbf{CRing} of commutative rings with unit and determine the partial order on the set of quotient objects of \mathbb{Z}.

May 29, 2007 Posted by | Category theory | 20 Comments

New This Week’s Finds

Now, back to the math. John Baez has posted a new This Week’s Finds. He continues his “Tale of Groupoidification”. It features a great comparison of group actions on sets and those on vector spaces (which we’ll get to soon enough). Even better, he gives a translation “for students trying to learn a little basic category theory” into the language of categories.

May 29, 2007 Posted by | Uncategorized | Leave a comment

Natural Transformations and Functor Categories

So we know about categories and functors describing transformations between categories. Now we come to transformations between functors — natural transformations.

This is really what category theory was originally invented for, and the terminology predates the theory. Certain homomorphisms were called “natural”, but there really wasn’t a good notion of what “natural” meant. In the process of trying to flesh that out it became clear that we were really talking about transformations between the values of two different constructions from the same source, and those constructions became functors. Then in order to rigorously define what a functor was, categories were introduced.

Given two functors F:\mathcal{C}\rightarrow\mathcal{D} and G:\mathcal{C}\rightarrow\mathcal{D}, a natural transformation \eta:F\rightarrow G is a collection of arrows \eta_C:F(C)\rightarrow G(C) in \mathcal{D} indexed by the objects of \mathcal{C}. The condition of naturality is that the following square commutes for every arrow f:A\rightarrow B in \mathcal{C}:

\begin{matrix}F(A)&\rightarrow^{\eta_A}&G(A)\\\downarrow^{F(f)}&&\downarrow^{G(f)}\\F(B)&\rightarrow^{\eta_B}&G(B)\end{matrix}

The vertical arrows come from applying the two functors to the arrow f, and the horizontal arrows are the components of the natural transformation.

If we have three functors F, G, and H from \mathcal{C} to \mathcal{D} and natural transformations \eta:F\rightarrow G and \xi:G\rightarrow H we get a natural transformation \xi\circ\eta:F\rightarrow H with components (\xi\circ\eta)_C=\xi_C\circ\eta_C. Indeed, we can just stack the naturality squares beside each other:

\begin{matrix}F(A)&\rightarrow^{\eta_A}&G(A)&\rightarrow^{\xi_A}&H(A)\\\downarrow^{F(f)}&&\downarrow^{G(f)}&&\downarrow^{H(f)}\\F(B)&\rightarrow^{\eta_B}&G(B)&\rightarrow^{\xi_B}&H(B)\end{matrix}

and the outer square commutes because both the inner ones do.

Every functor comes with the identity natural transformation 1_F, whose components are all identity morphisms. Clearly it acts as the identity for the above composition of natural transformations.

A natural transformation is invertible for the above composition if and only if each component is invertible as an arrow in \mathcal{D}. In this case we call it a “natural isomorphism”. We say two functors are “naturally isomorphic” if there is a natural isomorphism between them.

All of this certainly looks like we’re talking about a category, but again the set-theoretic constraints often work against us. There are, however, times where we really do have a category. If \mathcal{C} is small, then all the set theory works out and we get an honest category of functors from \mathcal{C} to \mathcal{D}. We will usually denote this category as \mathcal{D}^\mathcal{C}. Its objects are functors from \mathcal{C} to \mathcal{D}, and its morphisms are natural transformations between such functors.

And now we can explain the notation \mathcal{C}^\mathbf{2} for the category of arrows. This is the category of functors from \mathbf{2} to \mathcal{C}! What is a functor from \mathbf{2} to \mathcal{C}? Remember that \mathbf{2} is the category with two objects A and B and one non-trivial arrow f:A\rightarrow B. Thus a functor F:\mathbf{2}\rightarrow\mathcal{C} is defined by an arrow F(f):F(A)\rightarrow F(B), and there’s exactly one functor for every arrow in \mathcal{C}.

Now let’s say we have two such functors F(f):F(A)\rightarrow F(B) and G(f):G(A)\rightarrow G(B). A natural transformation \eta:F\rightarrow G consists of morphisms \eta_A:F(A)\rightarrow G(A) and \eta_B:F(B)\rightarrow G(B) so that the naturality square commutes. But this is the same thing we used to define morphisms in the arrow category, just with some different notation!
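
To make this concrete in \mathbf{Set}, here is a tiny Python check: a functor from \mathbf{2} to \mathbf{Set} is just a function between two finite sets (a dict below), and a natural transformation between two such functors is a pair of functions making the one naturality square commute. The particular sets and maps are invented for illustration.

```python
# Two functors 2 -> Set, i.e. two functions between finite sets.
F_f = {"a": 1, "b": 2}             # F(f) : F(A) = {a, b} -> F(B) = {1, 2, 3}
G_f = {"x": "p", "y": "q"}         # G(f) : G(A) = {x, y} -> G(B) = {p, q}

# A candidate natural transformation: one component for each object of 2.
eta_A = {"a": "x", "b": "y"}       # eta_A : F(A) -> G(A)
eta_B = {1: "p", 2: "q", 3: "q"}   # eta_B : F(B) -> G(B)

# Naturality is exactly the square eta_B ∘ F(f) = G(f) ∘ eta_A.
assert all(eta_B[F_f[a]] == G_f[eta_A[a]] for a in F_f)
```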

Natural transformations and functor categories show up absolutely everywhere once you know to look for them. We’ll be seeing a lot more examples as we go on.

May 26, 2007 Posted by | Category theory | 7 Comments

Comma Categories

Another useful example of a category is a comma category. The term comes from the original notation which has since fallen out of favor because, as Saunders MacLane put it, “the comma is already overworked”.

We start with three categories \mathcal{A}, \mathcal{B}, and \mathcal{C}, and two functors F:\mathcal{A}\rightarrow\mathcal{C} and G:\mathcal{B}\rightarrow\mathcal{C}. The objects of the comma category (F\downarrow G) are triples (A,x,B) where A is an object of \mathcal{A}, B is an object of \mathcal{B}, and x is an arrow F(A)\rightarrow G(B) in \mathcal{C}. The morphisms are pairs (f,g) — with f an arrow in \mathcal{A} and g an arrow in \mathcal{B} — making the following square commute:

\begin{matrix}F(A_1)&\rightarrow^{F(f)}&F(A_2)\\\downarrow^{x_1}&&\downarrow^{x_2}\\G(B_1)&\rightarrow^{G(g)}&G(B_2)\end{matrix}

So what? Well, let’s try picking F to be the functor A:\mathbf{1}\rightarrow\mathcal{A} sending the single object of \mathbf{1} to the object A\in\mathcal{A}. Then let G be the identity functor on \mathcal{A}. Now an object of (A\downarrow1_\mathcal{A}) is an arrow f:A\rightarrow B, where B is any object of \mathcal{A}. A morphism is then a triangle:

\begin{matrix}A&&\\\downarrow^{f}&\searrow^{f'}&\\B&\rightarrow^{g}&B'\end{matrix}

Work out for yourself the category (1_\mathcal{A}\downarrow A).

Here’s another example: the category (1_\mathcal{A}\downarrow 1_\mathcal{A}). Verify that this is exactly the arrow category \mathcal{A}^\mathbf{2}.

And another: check that given objects A and B in \mathcal{A}, the category (A\downarrow B) is the discrete category (set) \hom_\mathcal{A}(A,B).

Neat!

May 26, 2007 Posted by | Category theory | 11 Comments

Ave, Borcherds

I know I’m a few days late in this, but I’d like to extend a welcome to Richard Borcherds, newest blatherer.

May 25, 2007 Posted by | Uncategorized | Leave a comment

Cardinals and Ordinals as Categories

We can import all of what we’ve said about cardinal numbers and ordinal numbers into categories.

For cardinals, it’s actually not that interesting. Remember that every cardinal number is just an equivalence class of sets under bijection. So, given a cardinal number we can choose a representative set S and turn it into a category \mathcal{S}. We let {\rm Ob}(\mathcal{S})=S and just give every object its identity morphism — there are no other morphisms at all in this category. We call this sort of thing a “discrete category”.

For ordinal numbers it gets a little more interesting. First, remember that a preorder can be seen as a category with objects the elements of the preorder, one morphism from a to b if a\leq b, and no morphism from a to b otherwise.

It turns out that (small) categories like this are exactly the preorders. Let \mathcal{P} be a small category with a set of objects P and no more than one morphism between any two objects. We define a preorder relation \leq by saying a\leq b if \hom_\mathcal{P}(a,b) is nonempty. The existence of identities shows that a\leq a, and the composition shows that if a\leq b and b\leq c then a\leq c, so this really is a preorder.
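
As a toy check, here is a Python sketch that reads the preorder off of such a category, given its (at most one-element) hom-sets; the hom data below is made up, and the asserts just confirm reflexivity and transitivity for this example, mirroring the argument from identities and composition.

```python
# Hom-sets of a thin category on objects {1, 2, 3}: at most one arrow each.
has_arrow = {(1, 1), (2, 2), (3, 3),     # the identities
             (1, 2), (2, 3), (1, 3)}     # 1 -> 2 -> 3 and the composite

leq = lambda a, b: (a, b) in has_arrow   # a <= b iff hom(a, b) is nonempty

objs = [1, 2, 3]
assert all(leq(a, a) for a in objs)                       # identities: reflexive
assert all(leq(a, c)                                      # composition: transitive
           for a in objs for b in objs for c in objs
           if leq(a, b) and leq(b, c))
```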

If now we require that no morphism (other than the identity) has an inverse, we get a partial order. Indeed, if a\leq b and b\leq a then we have arrows f:a\rightarrow b and g:b\rightarrow a. We compose these to get g\circ f:a\rightarrow a and f\circ g:b\rightarrow b. These must be the respective identities because there’s only the one arrow from any object to itself. But we’re also requiring that no non-identity morphism be invertible, so a has to be the same object as b.

Now we add to this that for every distinct pair of objects either \hom_\mathcal{P}(a,b) or \hom_\mathcal{P}(b,a) is nonempty. They can’t both be — that would lead to an inverse — but we insist that one of them is. Here we have total orders, where either a\leq b or b\leq a.

Finally, a technical condition I haven’t completely defined (but I will soon): we require that every nonempty “full subcategory” of \mathcal{P} have an “initial object”. This is the translation of the “well-ordered” criterion into the language of categories. Categories satisfying all of these conditions are the same as well-ordered sets.

Okay, that got a bit out there. We can turn back from the theory and actually get our hands on some finite ordinals. Remember that every finite ordinal number is determined up to isomorphism by its cardinality, so we just need to give definitions suitable for each natural number.

We define the category \mathbf{0} to have no objects and no morphisms. Easy.

We define the category \mathbf{1} to have a single object and its identity morphism. If we take the identity morphisms as given we can draw it like this: \bullet.

We define the category \mathbf{2} to have two objects and their identity morphisms as well as an arrow from one object to the other (but not another arrow back the other way). Again taking identities as given we get \bullet\rightarrow\bullet.

Things start to get interesting at \mathbf{3}. This category has three objects and three (non-identity) morphisms. We can draw it like this:
\begin{matrix}\bullet&\rightarrow&\bullet\\&\searrow&\downarrow\\&&\bullet\end{matrix}
The composite of the horizontal and vertical arrows is the diagonal arrow.

Try now for yourself drawing out the next few ordinal numbers as categories. Most people get by without thinking of them too much, but it’s helpful to have them at hand for certain purposes. In fact, we’ve already used \mathbf{2}!

Now let’s be sure that this eventually covers all the natural numbers. We can take our 0 to be the category \mathbf{0}. Then for a successor “function” we can take any category and add a new object, its identity morphism, and a morphism from every old object to the new one. With a little thought, this is the translation into categories of the proof that well-ordered finite sets give a model of the natural numbers, and the same proof holds true here.
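
Here is a small Python sketch of this successor construction on the finite ordinals viewed as categories, recording an arrow simply as the pair (i, j) whenever i\leq j; the names are invented for illustration.

```python
def ordinal(n):
    """The ordinal n as a category: objects 0..n-1, one arrow i -> j iff i <= j."""
    objects = list(range(n))
    arrows = [(i, j) for i in objects for j in objects if i <= j]
    return objects, arrows

def successor(objects, arrows):
    """Add a new top object, its identity, and an arrow from every old object."""
    new = len(objects)
    return objects + [new], arrows + [(i, new) for i in objects] + [(new, new)]

o4_objects, o4_arrows = ordinal(4)
s_objects, s_arrows = successor(*ordinal(3))
assert s_objects == o4_objects and set(s_arrows) == set(o4_arrows)
```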

May 24, 2007 Posted by | Category theory | 6 Comments