## Hiatus

I think I’m done with categories *qua* categories for now, and am ready to move on to another subject for a little while. But before I do, I’m going to take a little break and finally get down to this restructuring of the subjects listed on the right. It’ll also give me some time to catch up on some stuff in the real world that needs doing.

The RSS feed will probably be going nuts with updates to posts as I crawl through the archives and relabel things. My apologies in advance.

*[UPDATE]:* I’ve finished the group theory archives. Unfortunately, the “category” bar on the right seems to not indent nested categories in this theme. Sort of annoying…

*[UPDATE]:* There seems to be a bug with subcategories on WordPress today, and I’m not the only one having it. So I’ll have to hold off on refining the rings and categories topics until later. In the meantime, I’ve reworked the sidebars a bit, including a search field!

## Chalk is a “Feelie”

Okay, so I’ll pile on with the interactive fiction chatter. I really should, since I’ve been playing IF games since I was a wee lad.

First someone pointed out that a long calculation is like a computer game where you have to save and keep backtracking to your saved states. Isabel at *God Plays Dice* then drew the more specific connection to interactive fiction. Then Mark at *Inductio Ex Machina* contributed this sample transcript of such a “game”.

I’d like to point out an unintended analogy here. It’s pretty well accepted within the IF community that any puzzle should be solvable at a first pass. That is, if you’ve done everything right you’ll have all you need to solve a given puzzle without guessing, failing, and backtracking. In fact, it’s the height of bad writing to include a puzzle that requires you to attempt it and fail to gain information needed to pass.

I think that the same holds true in mathematics. If you find that you *must* do a hard calculation with attendant backtracking, you’re asking the wrong question. When properly viewed, the solution to any problem should be inherent in the problem itself. Of course, it might be more convenient in context to bash your head against a wall than to look for the hidden doorway, but it’s really not the best way to go about things in the long run. I come back to my favorite passage from Grothendieck’s *Récoltes et Semailles*.

Prenons par exemple la tâche de démontrer un théorème qui reste hypothétique (à quoi, pour certains, semblerait se réduire le travail mathématique). Je vois deux approches extrêmes pour s’y prendre. L’une est celle du

marteau et du burin, quand le problème posé est vu comme une grosse noix, dure et lisse, dont il s’agit d’atteindre l’intérieur, la chair nourricière protégée par la coque. Le principe est simple: on pose le tranchant du burin contre la coque, et on tape fort. Au besoin, on recommence en plusieurs endroits différents, jusqu’à ce que la coque se casse — et on est content. Cette approche est surtout tentante quand la coque présente des aspérités ou protubérances, par où “la prendre”. Dans certains cas, de tels “bouts” par où prendre la noix sautent aux yeux, dans d’autres cas, il faut la retourner attentivement dans tous les sens, la prospecter avec soin, avant de trouver un point d’attaque. Le cas le plus difficile est celui où la coque est d’une rotondité et d’une dureté parfaite et uniforme. On a beau taper fort, le tranchant du burin patine et égratigne à peine la surface — on finit par se lasser à la tâche. Parfois quand même on finit par y arriver, à force de muscle et d’endurance.

Je pourrais illustrer la deuxième approche, en gardant l’image de la noix qu’il s’agit d’ouvrir. La première parabole qui m’est venue à l’esprit tantôt, c’est qu’on plonge la noix dans un liquide émollient, de l’eau simplement pourquoi pas, de temps en temps on frotte pour qu’elle pénètre mieux, pour le reste on laisse faire le temps. La coque s’assouplit au fil des semaines et des mois — quand le temps est mûr, une pression de la main suffit, la coque s’ouvre comme celle d’un avocat mûr à point ! Ou encore, on laisse mûrir la noix sous le soleil et sous la pluie et peut-être aussi sous les gelées de l’hiver. Quand le temps est mûr c’est une pousse délicate sortie de la substantifique chair qui aura percé la coque, comme en se jouant — ou pour mieux dire, la coque se sera ouverte d’elle-même, pour lui laisser passage.

L’image qui m’était venue il y a quelques semaines était différente encore, la chose inconnue qu’il s’agit de connaître m’apparaissait comme quelque étendue de terre ou de marnes compactes, réticente à se laisser pénétrer. On peut s’y mettre avec des pioches ou des barres à mine ou même des marteaux-piqueurs: c’est la première approche, celle du “burin” (avec ou sans marteau). L’autre est celle de la

mer. La mer s’avance insensiblement et sans bruit, rien ne semble se casser rien ne bouge l’eau est si loin on l’entend à peine… Pourtant elle finit par entourer la substance rétive, celle-ci peu à peu devient une presqu’île, puis une île, puis un îlot, qui finit par être submergé à son tour, comme s’il s’était finalement dissous dans l’océan s’étendant à perte de vue…

Le lecteur qui serait tant soit peu familier avec certains de mes travaux n’aura aucune difficulté à reconnaître lequel de ces deux modes d’approche est “le mien” — et j’ai eu occasion déjà dans la première partie de Récoltes et Semailles de m’expliquer à ce sujet, dans un contexte quelque peu différent. C’est “l’approche de la mer”, par submersion, absorption, dissolution — celle où, quand on n’est très attentif, rien ne semble se passer à aucun moment: chaque chose à chaque moment est si évidente, et surtout, si naturelle, qu’on se ferait presque scrupule souvent de la noter noir sur blanc, de peur d’avoir l’air de combiner, au lieu de taper sur un burin comme tout le monde… C’est pourtant là l’approche que je pratique d’instinct depuis mon jeune âge, sans avoir vraiment eu à l’apprendre jamais.

In case you haven’t yet passed your French language qualifier, I’ll give a rough translation.

Take, for example, the task of proving a theorem. I see two extreme approaches one could take. The first is that of

hammer and chisel, wherein the problem posed is seen as a large nut, hard and smooth, which contains a nourishing meat protected by the shell. The principle is simple: one puts the edge of the chisel against the shell and hits it hard. If necessary, one tries again in many different places, until the shell cracks — and one is happy. This approach is especially appealing when the shell shows a rough or bumpy patch where it can be grasped. In some cases, such places to grab the nut jump to the eye. In other cases, one must turn it over carefully in every direction and search with care before finding a point of attack. The most difficult case is that where the shell is perfectly round and evenly firm. When hit strongly, the edge of the chisel just scratches the surface — one ends up merely tired. Sometimes the nut will finally crack through mere strength and stamina.

I can illustrate the second approach with the same metaphor of a nut to be opened. The first image that comes to mind is to immerse the nut in some softening liquid — water, for instance — and to rub it from time to time to allow the water to penetrate better, but otherwise to leave it alone. Over weeks and months, the shell softens — when the time is right, a squeeze of the hand is sufficient, and the shell opens like a perfectly ripe avocado! Or again, one can leave the nut out in the sun and the rain and even through the icy winter. When the time is right, it is a delicate shoot, sprung from the nourishing meat, that pierces the shell as if in play — or to say it better, the shell opens of itself to let it through.

The picture that came to me recently was again different. The unknown thing one is trying to understand seems to me like a stretch of land or a patch of hard earth, reluctant to be dug into. One might go at it with picks or mining tools, or even with jackhammers: this is the first approach, that of the “chisel” (with or without hammer). The other is that of the

sea. The sea advances imperceptibly and noiselessly. Nothing seems to break, nothing moves, the water is so far off one scarcely hears it… Yet eventually it surrounds the stubborn substance, which little by little becomes a peninsula, then an island, then an islet, until finally it too is submerged, as if dissolved into the ocean stretching as far as the eye can see…

The reader who is familiar with some of my work will have no difficulty determining which of these two approaches is “mine” — and I have had occasion already in the first part of these “Reapings and Sowings” to explain myself on this subject, in a slightly different context. It is “the approach of the sea”, by submersion, absorption, dissolution — the one where, if one does not pay close attention, nothing seems to happen at any given moment: everything is at each moment so evident, and above all so natural, that one would almost feel scruples about writing it down in black and white, for fear of seeming to contrive, rather than banging away with a chisel like everyone else… Yet this is the approach I have practiced by instinct since I was young, without ever really having had to learn it.

As for the title of this post, a feelie is a physical object — often some document — that was packaged with a game and contained information crucial to some puzzle you’d need to solve. That is, if you didn’t buy the game and get the feelie, you couldn’t get past a certain point. It provided a crude level of copy-protection back in the good old days, under the pretense of extending the game experience (more common in non-IF games was asking the user to type in some specified word from the documentation). Thus, a feelie was all too often a hack — a puzzle relying on one was awkward and inelegant, pulling you out of the experience of the game rather than immersing you in it as was hoped.

Blackboards full of equations serve the same obscuring purpose. True understanding never lies in a calculation. The chalk on the board should not be a map, but a lens, and the mathematics is not in the equations, but behind them.

## Chain Homotopies and Homology

Sorry about going AWOL yesterday, but I got bogged down in writing another exam.

Okay, so we’ve set out chain homotopies as our 2-morphisms in the category of chain complexes in an abelian category. We also know that each of these 2-morphisms is an isomorphism, so decategorifying amounts to saying that two chain maps are “the same” if they are chain-homotopic.

Many interesting properties of chain maps are invariant under chain homotopies, which means that they descend to properties of this decategorified version. Alternatively, some properties are defined by 2-functors, which means that if we apply a chain homotopy we change our answer by a 2-morphism in the target 2-category, which must itself be an isomorphism. I like to call these “homotopy covariants”, rather than “invariants”. Anyway, the decategorification of such a property is an invariant, and what I said before applies.

The big one of these properties we’re going to be interested in is the induced map on homology. Let’s consider chain complexes $A$ and $B$, chain maps $f$ and $g$ from $A$ to $B$, and let’s say there’s a chain homotopy $h$ from $f$ to $g$. The chain maps induce maps $f_*$ and $g_*$ on the homologies. I assert that $f_* = g_*$.

To see this, first notice that passing to the induced map is linear. That is, $(f - g)_* = f_* - g_*$. So all we really need to show is that a null-homotopic map induces the zero map on homology. But if $h$ makes the chain map $f$ null-homotopic, then $f_n = \partial^B_{n+1} \circ h_n + h_{n-1} \circ \partial^A_n$. When we restrict to the kernel of $\partial^A_n$, this just becomes $f_n = \partial^B_{n+1} \circ h_n$, which clearly lands in the image of $\partial^B_{n+1}$, which is zero in homology, as we wanted to show.
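To make the key step concrete, here is a small numerical sketch in Python. The complex, the homotopy, and all names are hypothetical choices of mine; the point is just that for any cycle $z$, a null-homotopic map sends $z$ to the boundary of $h(z)$.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Differentials of a made-up complex C_2 -> C_1 -> C_0 (each term Z^2).
d2 = [[0, 1], [0, 0]]
d1 = [[0, 1], [0, 0]]
assert matmul(d1, d2) == [[0, 0], [0, 0]]  # d∘d = 0

# An arbitrary homotopy h_n : C_n -> C_{n+1}, and the null-homotopic map
# it defines in degree 1: f_1 = d2∘h1 + h0∘d1.
h1 = [[1, 2], [3, 4]]
h0 = [[5, 6], [7, 8]]
f1 = madd(matmul(d2, h1), matmul(h0, d1))

# z is a cycle (d1 z = 0), so f_1 z = d2(h1 z) lies in the image of d2.
z = [1, 0]
assert matvec(d1, z) == [0, 0]
assert matvec(f1, z) == matvec(d2, matvec(h1, z))
print(matvec(f1, z))  # [3, 0]: a boundary, hence zero in homology
```

So on homology classes the map does nothing, which is exactly the assertion that a null-homotopic map induces the zero map.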

Now if we have chain maps $f: A \to B$ and $g: B \to A$ along with chain homotopies from $g \circ f$ to $1_A$ and from $f \circ g$ to $1_B$, we say that $A$ and $B$ are “homotopy equivalent”. Then the induced maps on homology $f_*$ and $g_*$ are inverses of each other, and so the homologies of $A$ and $B$ are isomorphic.

This passage from covariance to invariance is the basis for why Khovanov homology works. We start with a 2-category of tangles (which I’ll eventually explain more thoroughly). Then we pick a ring $R$ and consider the 2-category of chain complexes over the abelian category of $R$-modules. We construct a 2-functor that picks a chain complex for each number of free ends, a chain map for each tangle, and a chain homotopy for each ambient isotopy of tangles. Then two isotopic tangles are assigned homotopic chain maps — the chain map is a “tangle covariant”. When we pass to homology, we get tangle invariants, which turn out to be related to well-known knot invariants.

## Chain Homotopies

We’ve defined chain complexes in an abelian category, and chain maps between them, to form an $\mathbf{Ab}$-category. Today, we define chain homotopies between chain maps, which gives us a 2-category.

First, we say that a chain map $f: A \to B$, given by components $f_n$, is “null-homotopic” if we have arrows $h_n: A_n \to B_{n+1}$ such that $f_n = \partial^B_{n+1} \circ h_n + h_{n-1} \circ \partial^A_n$. Here’s the picture:

In particular, the zero chain map with $f_n = 0$ for all $n$ is null-homotopic — just pick $h_n = 0$.
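Going the other way, here is a toy Python check (with 1×1 matrices written as scalars, on a complex of my own choosing) that the *identity* chain map on the exact complex $0 \to \mathbb{Z} \xrightarrow{1} \mathbb{Z} \to 0$ is also null-homotopic:

```python
# The only nonzero differential d1 : C_1 -> C_0, and the identity chain map.
d1 = 1
f1, f0 = 1, 1

# Candidate homotopy: h0 : C_0 -> C_1 is the identity, h1 : C_1 -> C_2 is 0.
h0, h1 = 1, 0

# Check f_n = d_{n+1}∘h_n + h_{n-1}∘d_n in each degree:
assert f1 == 0 * h1 + h0 * d1   # degree 1: d2 = 0
assert f0 == d1 * h0 + 0        # degree 0: h_{-1} = 0
print("the identity on this exact complex is null-homotopic")
```

A complex whose identity map is null-homotopic is contractible, which squares with this complex having vanishing homology.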

Now we say that chain maps $f$ and $g$ are homotopic if $f - g$ is null-homotopic. That is, $f_n - g_n = \partial^B_{n+1} \circ h_n + h_{n-1} \circ \partial^A_n$. We call the collection $h = \{h_n\}$ a chain homotopy from $f$ to $g$. Then a chain map is null-homotopic if and only if it is homotopic to the zero chain map. We can easily check that this is an equivalence relation. Any chain map $f$ is homotopic to itself because $f - f = 0$ and the zero chain map is null-homotopic. If $f$ and $g$ are homotopic by a chain homotopy $h$, then $-h$ is a chain homotopy from $g$ to $f$. Finally, if $f - g$ is null-homotopic by $h$ and $g - k$ is null-homotopic by $h'$, then $f - k$ is null-homotopic by $h + h'$.

Another way to look at this is to note that the chain maps from $A$ to $B$ form an abelian group, and the null-homotopic maps form a subgroup. Then two chain maps are homotopic if and only if they differ by a null-homotopic chain map, which leads us to consider the quotient of the group of chain maps by this subgroup. We will be interested in properties of chain maps which are invariant under chain homotopies — properties that only depend on this quotient group.

In the language of category theory, the homotopies are 2-morphisms. Given 1-morphisms (chain maps) $f$, $g$, and $k$ from $A$ to $B$, a homotopy $h$ from $f$ to $g$, and a homotopy $h'$ from $g$ to $k$, we compose them by simply adding the corresponding components to get a homotopy $h + h'$ from $f$ to $k$.

On the other hand, if we have 1-morphisms $f$ and $g$ from $A$ to $B$, 1-morphisms $f'$ and $g'$ from $B$ to $C$, and 2-morphisms $h: f \Rightarrow g$ and $h': f' \Rightarrow g'$, then we can “horizontally” compose these chain homotopies to get a homotopy from $f' \circ f$ to $g' \circ g$ with components $h'_n \circ f_n + g'_{n+1} \circ h_n$. Indeed, we calculate

$$\begin{aligned}
\partial^C_{n+1} \circ (h'_n \circ f_n + g'_{n+1} \circ h_n) &+ (h'_{n-1} \circ f_{n-1} + g'_n \circ h_{n-1}) \circ \partial^A_n \\
&= (\partial^C_{n+1} \circ h'_n + h'_{n-1} \circ \partial^B_n) \circ f_n + g'_n \circ (\partial^B_{n+1} \circ h_n + h_{n-1} \circ \partial^A_n) \\
&= (f'_n - g'_n) \circ f_n + g'_n \circ (f_n - g_n) \\
&= f'_n \circ f_n - g'_n \circ g_n
\end{aligned}$$

where we have used the chain map conditions $\partial^B_n \circ f_n = f_{n-1} \circ \partial^A_n$ and $\partial^C_{n+1} \circ g'_{n+1} = g'_n \circ \partial^B_{n+1}$ in the first step.

We could also have used the components $f'_{n+1} \circ h_n + h'_n \circ g_n$ and done a similar calculation. In fact, it turns out that the two horizontal composites are *themselves* homotopic in a sense, and so we consider them to be equivalent. If we pay attention to this homotopy between homotopies, we get a structure analogous to the tensorator. I’ll leave you to verify the exchange identity on your own, which will establish the 2-categorical structure.

One thing about this structure that’s important to note is that *every 2-morphism is an isomorphism*. That is, if two chain maps are homotopic, they are isomorphic as 1-morphisms. Thus if we decategorify this structure by replacing 1-morphisms by isomorphism classes of 1-morphisms, we are just passing from chain maps to homotopy classes of chain maps. In other words, we pass from the abelian group of chain maps to its quotient by the null-homotopic subgroup.

## Chain maps

As promised, something lighter.

Okay, a couple weeks ago I defined a chain complex to be a sequence $\cdots \to C_{n+1} \xrightarrow{\partial_{n+1}} C_n \xrightarrow{\partial_n} C_{n-1} \to \cdots$ with the property that $\partial_n \circ \partial_{n+1} = 0$. The maps $\partial_n$ are called the “differentials” of the sequence. As usual, these are the objects of a category, and we now need to define the morphisms.
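As a concrete illustration, here is a Python sketch that computes homology dimensions over $\mathbb{Q}$ by plain rank computations, using $\dim H_n = \dim\ker\partial_n - \operatorname{rank}\partial_{n+1}$. The particular complex, the boundary of a triangle, is an example of my own choosing:

```python
from fractions import Fraction

def rank(M):
    """Rank of a rational matrix by Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][col] for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                A[i] = [a - A[i][col] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# d1 for the boundary of a triangle: 3 edges -> 3 vertices (a circle).
# Columns are the edges v0->v1, v1->v2, v2->v0; rows are v0, v1, v2.
d1 = [[-1, 0, 1],
      [1, -1, 0],
      [0, 1, -1]]

r1 = rank(d1)
betti0 = 3 - r1        # dim C_0 minus the rank of the incoming differential
betti1 = 3 - r1        # dim ker d1; there is no d2 to quotient by
print(betti0, betti1)  # 1 1
```

The answer, one connected component and one loop, is what we expect for a circle.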

Consider chain complexes $A$ and $B$. We will write the differentials on $A$ as $\partial^A_n$ and those on $B$ as $\partial^B_n$. A chain map $f: A \to B$ is a collection of arrows $f_n: A_n \to B_n$ that commute with the differentials. That is, $f_{n-1} \circ \partial^A_n = \partial^B_n \circ f_n$. That these form the morphisms of an $\mathbf{Ab}$-category should be clear.
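In matrix terms, the condition is just that each square commutes in every degree. A quick Python sketch, with a pair of complexes and candidate maps invented for the purpose:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Both complexes are Z^2 --d--> Z^2 with the same differential.
dA = [[0, 1], [0, 0]]
dB = [[0, 1], [0, 0]]

# A candidate chain map: f1 in degree 1, f0 in degree 0.
f1 = [[2, 0], [0, 3]]
f0 = [[3, 0], [0, 5]]
assert matmul(f0, dA) == matmul(dB, f1)   # f0∘dA = dB∘f1: a chain map

# Tweaking the degree-0 component breaks the commutation, so the
# condition is a real constraint.
g0 = [[1, 0], [0, 1]]
assert matmul(g0, dA) != matmul(dB, f1)
print("f is a chain map; replacing f0 by g0 breaks it")
```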

Given two chain complexes with zero differentials — like those arising as homologies — any collection of maps $f_n$ will constitute a chain map. These trivial complexes form a full $\mathbf{Ab}$-subcategory of the category of all chain complexes.

We already know how the operation of “taking homology” acts on a chain complex. It turns out to have a nice action on chain maps as well. Let’s write $Z_n(A)$ for the kernel of $\partial^A_n$ and $B_n(A)$ for the image of $\partial^A_{n+1}$, and similarly for $B$. Now if we take a member $x$ of $A_n$ (in the sense of our diagram chasing rules) so that $\partial^A_n x = 0$, then clearly $\partial^B_n f_n x = f_{n-1} \partial^A_n x = 0$. That is, if we restrict $f_n$ to $Z_n(A)$, it factors through $Z_n(B)$. Similarly, if there is a $y$ with $x = \partial^A_{n+1} y$, then $f_n x = f_n \partial^A_{n+1} y = \partial^B_{n+1} f_{n+1} y$, and thus the restriction of $f_n$ to $B_n(A)$ factors through $B_n(B)$.

So we can restrict to get an arrow $Z_n(A) \to Z_n(B)$ which sends the whole subobject $B_n(A)$ into the subobject $B_n(B)$. Thus we can pass to the homology objects to get arrows $H_n(A) \to H_n(B)$. That is, we have a chain map from the homology of $A$ to the homology of $B$. Further, it’s straightforward to show that this construction is $\mathbf{Ab}$-functorial — it preserves addition and composition of chain maps, along with zero maps and identity maps.

## A New Definition of Spans

I was trying to add in duals to the theory, and I ran into some trouble. The fix seems to be something I’d considered a little earlier for a possible generalization, but it seems that duals force the issue. We need to tweak our definition of spans.

Now, most of the definition is fine as it stands. Our objects are just the same as those of $\mathcal{C}$, and our 1-morphisms are spans $A \leftarrow X \rightarrow B$. When I first (re)invented these things, I stopped at this level. I handled the associativity not with 2-morphisms, but by defining two spans $A \leftarrow X \rightarrow B$ and $A \leftarrow Y \rightarrow B$ to be equivalent if there was an isomorphism $X \to Y$ making the two triangles commute. Then I said that the 1-morphisms were equivalence classes of spans, giving me a 1-category.

Now $\mathcal{C}$ still sits inside this 1-category of equivalence classes of spans. Indeed, if we use the same inclusion as before, the only way two arrows of $\mathcal{C}$ can yield isomorphic spans (and thus equal equivalence classes) is for them to be equal already in $\mathcal{C}$.

But we want to pay attention to those 2-morphisms, and that’s where things start to get interesting. See, those arrows between apexes are nice as 2-morphisms, but they’re not very… “spanny”. Instead, let’s define a 2-morphism from the span $A \leftarrow X \rightarrow B$ to the span $A \leftarrow Y \rightarrow B$ to be *itself* a span $X \leftarrow S \rightarrow Y$ whose legs commute with the legs of the source and target spans. Here’s the picture:

Now, we handle the associativity at the 2-morphism level by passing to equivalence classes the same way we did at the 1-morphism level, and say that a 2-morphism is really an equivalence class of spans. This makes the vertical composition of 2-morphisms just the same pullback construction as for the composition of 1-morphisms.

The horizontal composition of these 2-morphisms gets tricky. Here’s another picture:

Here we have our two 2-morphisms, and we’ve already pulled back to get the 1-morphisms for the source and target of the composition. We’ll write a single letter for each of these composites to keep the diagrams readable, and similarly on the other side.

Now we can pull back the diagram to get an object with arrows to the apexes of the two 2-morphism spans. If we follow these arrows down to the composite source and target, the universal property of the pullback gives us a unique arrow from this object to the apex of the composite source 1-morphism. Similarly, we have a unique arrow from it to the apex of the composite target. These arrows make the required squares commute, and so define a span which is our composite 2-morphism.

When we compose two spans, again we only have associativity up to isomorphism. In the 1-category of equivalence classes, this becomes an equality, so we’re fine. Before, we made this isomorphism a 2-morphism between the two composite spans. Now we can make this isomorphism into one leg of a span 2-morphism, and everything works out as before. The exchange identity for the two compositions of 2-morphisms also works out, but it’s even more complicated than the definition of horizontal composition.

Seriously, does anyone know of a tool that will render commutative diagrams in 3-D, like with Java or something? This is getting ridiculous.

Anyhow, I think now I can throw away the request that the monoidal structures on $\mathcal{C}$ play nice with the pullbacks. Unfortunately, it’s getting a lot more complicated now, and I have other real-world obligations to attend to. So I think I’ll back-burner this discussion and move back to something old, rather than spend too much time working this stuff out live as I have been doing.

## Braidings on Span 2-Categories

Now that we can add a monoidal structure to our 2-category of spans, we want to add something like a braiding.

So, what’s a braided monoidal 2-category? Again, we need some data:

- For any objects $A$ and $B$, an equivalence $R_{A,B}: A \otimes B \to B \otimes A$ called the “braiding” of $A$ and $B$.
- For any 1-morphism $f: A \to A'$ and object $B$, a 2-isomorphism $R_{f,B}: R_{A',B} \circ (f \otimes 1_B) \Rightarrow (1_B \otimes f) \circ R_{A,B}$ called the braiding of $f$ and $B$.
- For any object $A$ and morphism $g: B \to B'$, a 2-isomorphism $R_{A,g}: R_{A,B'} \circ (1_A \otimes g) \Rightarrow (g \otimes 1_A) \circ R_{A,B}$ called the braiding of $A$ and $g$.
- For any objects $A$, $B$, and $C$, 2-isomorphisms $R_{(A|B,C)}: (1_B \otimes R_{A,C}) \circ (R_{A,B} \otimes 1_C) \Rightarrow R_{A,B \otimes C}$ and $R_{(A,B|C)}: (R_{A,C} \otimes 1_B) \circ (1_A \otimes R_{B,C}) \Rightarrow R_{A \otimes B,C}$ called the “braiding coherence 2-morphisms”.

We define the braiding to be an equivalence rather than an isomorphism because we don’t want to ask it to be exactly invertible. That is, there will be another 1-morphism $\bar{R}_{A,B}: B \otimes A \to A \otimes B$ with 2-isomorphisms $\bar{R}_{A,B} \circ R_{A,B} \Rightarrow 1_{A \otimes B}$ and $R_{A,B} \circ \bar{R}_{A,B} \Rightarrow 1_{B \otimes A}$. The 2-isomorphisms braiding objects with 1-morphisms act in the place of naturality relations, allowing us to pull 1-morphisms back and forth through the object braiding. And then the hexagon identities from the definition of a braiding in a 1-category are now weakened to the braiding coherence 2-morphisms.

So, where are we going to find this structure for our bicategory $\mathbf{Span}(\mathcal{C})$? Well, let’s assume we have a braiding $\sigma$ on $\mathcal{C}$. Then we’ll just define $R_{A,B}$ to be the span $A \otimes B \xleftarrow{1} A \otimes B \xrightarrow{\sigma_{A,B}} B \otimes A$. This is actually a 1-isomorphism, using the obvious inverse.
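In the category of finite sets with cartesian product as the monoidal structure, this braiding span can be written down explicitly. A Python sketch (the sets `A` and `B` are arbitrary examples of mine): the left leg is the identity, the right leg is the swap, and composing with the evident inverse span gives back the identity span.

```python
from itertools import product

def compose(span1, span2):
    # A span is (apex_elements, left_map, right_map); compose by pullback.
    A1, l1, r1 = span1
    A2, l2, r2 = span2
    apex = [(p, q) for p in A1 for q in A2 if r1[p] == l2[q]]
    return apex, {pq: l1[pq[0]] for pq in apex}, {pq: r2[pq[1]] for pq in apex}

def braiding(A, B):
    # The span A×B <=id= A×B =swap=> B×A.
    AB = list(product(A, B))
    return AB, {p: p for p in AB}, {(a, b): (b, a) for (a, b) in AB}

A, B = [0, 1], ['s', 't']
b1 = braiding(A, B)
b2 = braiding(B, A)              # the evident inverse
apex, left, right = compose(b1, b2)

# The composite is isomorphic to the identity span on A×B.
assert len(apex) == len(A) * len(B)
assert all(left[p] == right[p] for p in apex)
print(len(apex))  # 4
```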

As we saw before with the tensorator, it turns out that we can set all of the braidings of 1-morphisms with objects to be the appropriate identities, as well as all the braiding coherence 2-morphisms. Requiring that the monoidal product on $\mathcal{C}$ preserve pullbacks turns out to be an extremely powerful condition!

And now what are the conditions that make this data into a braided monoidal 2-category?

- For 1-morphisms and , we have

- For any 1-morphisms and , 2-morphism , and object , we have

- For any 1-morphisms and , 2-morphism , and object , we have

- For any 1-morphisms and and object , we have

- For any 1-morphisms and and object , we have

- For any objects , , and , and 1-morphism , we have

- There are five more conditions like the last one, with the 1-morphism in other slots and different associations of the three terms.
- For any objects , , , , we have

- There are two more conditions like the last one, corresponding to different ways of associating the four terms.
- For any objects , , and , we have

- The braidings and the braiding coherence 2-morphisms are each the identity whenever one of their slots is filled with the unit object.

Now in our current situation, almost all the structure here is trivial, and we have proofs to match! Specifically, all of the 2-morphisms that show up here are just the identities on various 1-morphisms. That is, they move between different ways of writing the same 1-morphism. For example, the first condition just reduces to saying that the composite of three applications of the identity 2-morphism on a certain span is the same as three other applications of the identity on the same span. And on and on they go, identities on identities, and there’s ultimately nothing to do here.

So the upshot is that if we have a braiding on $\mathcal{C}$, then $\mathbf{Span}(\mathcal{C})$ is a braided monoidal 2-category. Dually, $\mathbf{CoSpan}(\mathcal{C})$ gets this structure if $\mathcal{C}$ is a braided monoidal category with pushouts preserved by the monoidal product.

## Monoidal Structures on Span 2-Categories II

As I just stated in my update to yesterday’s post, I’ve given the data for a monoidal structure on the 2-category $\mathbf{Span}(\mathcal{C})$. Now we need some conditions on the data.

- For any object $A$, the maps $A \otimes -$ and $- \otimes A$ are 2-functors.
- For any object, 1-morphism, or 2-morphism $x$, we have $\mathbf{1} \otimes x = x = x \otimes \mathbf{1}$.
- For any object, 1-morphism, or 2-morphism $x$, and any objects $A$ and $B$, we have the equalities $(A \otimes B) \otimes x = A \otimes (B \otimes x)$, $(A \otimes x) \otimes B = A \otimes (x \otimes B)$, and $(x \otimes A) \otimes B = x \otimes (A \otimes B)$.
- For any 1-morphisms , , and , we have the equalities
- For any objects and we have
- For any 1-morphisms and , we have and
- For any 1-morphisms , , and and any 2-morphism , we have
- For any 1-morphisms , , and and any 2-morphism , we have
- For any 1-morphisms , , and , we have
- For any 1-morphisms , , and , we have

Okay, a bunch of conditions. Notice here that we have stated a bunch of equalities. Most of them are at the level of 2-morphisms, and everything at that level in a 2-category *should* hold on the nose. But some of them are equalities between 1-morphisms, which our philosophy says we should weaken.

Really what we’re laying out here is a *semistrict* monoidal 2-category, as described in *Higher Dimensional Algebra I*. Just like for monoidal categories, there’s a “coherence theorem” that tells us that once we specify a number of relations, all the others we want will follow. There’s also something like a “strictification theorem”, but now we can’t just wipe away *all* the structure morphisms. We can only “semi-strictify” an arbitrary monoidal 2-category. So that’s what we’re doing here and avoiding all the extra conditions that would be required if we had associators and other such things floating around.

Okay, enough of what everyone (who’s crazy enough to work on this stuff) already knows. Let’s get down to what (as far as I can tell) is being worked out for the first time as I’m writing it down here.

For condition 1, let’s just consider tensoring on the left with an object, since tensoring on the right is almost exactly the same. It clearly preserves identity 1-morphisms, since the identity 1-morphism on $B$ is the span $B \xleftarrow{1} B \xrightarrow{1} B$, and we know that $1_A \otimes 1_B = 1_{A \otimes B}$ in the monoidal structure on $\mathcal{C}$. Similarly, an identity 2-morphism in $\mathbf{Span}(\mathcal{C})$ is given by an identity arrow in $\mathcal{C}$, and tensoring it on the left with $A$ gives back the appropriate identity arrow.

Does tensoring on the left with $A$ preserve all three compositions? Sure. It preserves vertical composition of 2-morphisms straight off. It preserves composition of 1-morphisms because we’re assuming that the tensor product on $\mathcal{C}$ preserves pullbacks. The horizontal composition of 2-morphisms is a little trickier, partly because I was never very explicit about how to compose 2-morphisms like this. Here’s how it looks:

Start with spans $A \leftarrow X \rightarrow B$, $A \leftarrow X' \rightarrow B$, $B \leftarrow Y \rightarrow C$, and $B \leftarrow Y' \rightarrow C$. Throw in arrows $x: X \to X'$ and $y: Y \to Y'$ making the appropriate triangles commute. Then pull back the composable pairs to get $X \times_B Y$ and $X' \times_B Y'$, respectively. We can follow the arrow from $X \times_B Y$ to $X$ and then by $x$ on to $X'$. Similarly we can get an arrow from $X \times_B Y$ to $Y'$. If we compose these with the arrows from $X'$ and $Y'$ down to $B$, the square clearly commutes, so by the universal property of the pullback there is a unique arrow $X \times_B Y \to X' \times_B Y'$. This is the horizontal composite of $x$ and $y$. From here it’s straightforward to see that if I tensor everything in sight on the left with some object, everything will be preserved. And so we have shown condition 1.

Conditions 2 and 3 are all but trivial, since all we do to any object, 1-morphism, or 2-morphism to tensor it with an object is to invoke the monoidal product down in $\mathcal{C}$, where these associativity and unit constraints hold automatically. Similarly, condition 5 also follows immediately from the monoidal structure on $\mathcal{C}$.

If we look back at what we did last time, we see that we *could* have set up our pullbacks as in the diagram I showed then. In fact, I’m coming to think I was being overly cautious to even bring that up. That is, we can just take the tensorator to be the identity 2-morphism on the appropriate span. This immediately satisfies conditions 4 and 6. The remaining conditions 7, 8, 9, and 10 also fall in line once you write out the compositions in terms of the spans. But I won’t kid you: they look *ugly*. I took a picture of the diagram for condition 7. Above the center is the left side of the equation, and below the center is the right. I’ll eventually TeX these up, but for now suffice it to say that if you actually draw out these span diagrams and set the tensorator to be the identity, everything works out smoothly.

So *now* we’ve proven that the data we laid out yesterday does, in fact, constitute the structure of a monoidal 2-category.

## Monoidal Structures on Span 2-Categories

Now we want to take our 2-categories of spans and add some 2-categorical analogue of a monoidal structure to them.

Here’s what we need:

- An object $\mathbf{1}$ called the unit object.
- For objects $A$ and $B$, an object $A \otimes B$.
- For an object $A$ and a 1-morphism $f$, 1-morphisms $A \otimes f$ and $f \otimes A$.
- For an object $A$ and a 2-morphism $\alpha$, 2-morphisms $A \otimes \alpha$ and $\alpha \otimes A$.
- For 1-morphisms $f: A \to B$ and $g: C \to D$, a 2-morphism $\otimes_{f,g}: (f \otimes D) \circ (A \otimes g) \Rightarrow (B \otimes g) \circ (f \otimes C)$ called the “tensorator”.

Notice that instead of defining the tensor product as a functor, we define its action on a single object and a single 1-morphism (in either order). Then if we have two 1-morphisms, we have two ways of applying first one on one side of the tensor product, then the other on the other side. To say that $\otimes$ is a functor would say that these two are equal, but we want to weaken this to say that there is some 2-morphism from one to the other.

Now let’s assume that we’ve got a regular monoidal structure on our category $\mathcal{C}$, and further that this monoidal structure preserves the pullbacks we’re assuming exist in $\mathcal{C}$. That is, if $P_1$ is a pullback of the diagram $X_1 \rightarrow Z_1 \leftarrow Y_1$ and $P_2$ is a pullback of the diagram $X_2 \rightarrow Z_2 \leftarrow Y_2$, then $P_1 \otimes P_2$ will be a pullback of the diagram $X_1 \otimes X_2 \rightarrow Z_1 \otimes Z_2 \leftarrow Y_1 \otimes Y_2$.
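In the category of finite sets with cartesian product, this preservation condition can be checked directly. A Python sketch (all four functions are arbitrary choices of mine): the pullback of the tensored diagram is in natural bijection with the product of the two pullbacks.

```python
from itertools import product

def pullback(f, g):
    # Pullback of X --f--> Z <--g-- Y in finite sets, maps as dicts.
    return {(x, y) for x in f for y in g if f[x] == g[y]}

f1 = {'a': 0, 'b': 0, 'c': 1}        # X1 -> Z1
g1 = {'p': 0, 'q': 1}                # Y1 -> Z1
f2 = {1: 'u', 2: 'v'}                # X2 -> Z2
g2 = {3: 'u', 4: 'u'}                # Y2 -> Z2

P1, P2 = pullback(f1, g1), pullback(f2, g2)

# The tensored (cartesian-product) maps on the product sets.
fprod = {(x1, x2): (f1[x1], f2[x2]) for x1 in f1 for x2 in f2}
gprod = {(y1, y2): (g1[y1], g2[y2]) for y1 in g1 for y2 in g2}
P12 = pullback(fprod, gprod)

# The evident bijection ((x1,x2),(y1,y2)) <-> ((x1,y1),(x2,y2)).
assert {((x1, y1), (x2, y2)) for ((x1, x2), (y1, y2)) in P12} \
       == set(product(P1, P2))
print(len(P1), len(P2), len(P12))  # 3 2 6
```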

So what does this mean for $\mathbf{Span}(\mathcal{C})$? Well, the monoidal structure on $\mathcal{C}$ gives us a unit object $\mathbf{1}$ and monoidal product objects $A \otimes B$. If we have a span $B \leftarrow X \rightarrow C$ and an object $A$, we can form the spans $A \otimes B \leftarrow A \otimes X \rightarrow A \otimes C$ and $B \otimes A \leftarrow X \otimes A \rightarrow C \otimes A$. If we have spans $B \leftarrow X \rightarrow C$ and $B \leftarrow Y \rightarrow C$ and an arrow $x: X \to Y$ making the triangles commute, then the arrow $1_A \otimes x$ makes the corresponding triangles for the left-tensored spans commute, and similarly the arrow $x \otimes 1_A$ does for the right-tensored ones. And so we have our monoidal products of objects with 1- and 2-morphisms.

When we take spans $A \leftarrow X \rightarrow B$ and $C \leftarrow Y \rightarrow D$, we can form the following two composite spans:

where we use the assumption that the monoidal product preserves pullbacks to show that the squares in these diagrams are indeed pullback squares.

As we’ve drawn them, these two spans are the same. However, remember that the pullback in $\mathcal{C}$ is only defined up to isomorphism. That is, when we define the pullback as a functor, we choose some isomorphism class of cones, and these diagrams say that the pullbacks we’ve drawn are isomorphic to those defined by the pullback functor. But that means that whatever the “real” pullbacks of the two composites are, they’re both isomorphic to the apex we’ve drawn, and those isomorphisms play nicely with the other arrows we’ve drawn. And so there will be some isomorphism between the “real” pullbacks that makes the required triangles commute, giving us our tensorator.

Therefore what we have shown is this: given a monoidal category $\mathcal{C}$ with pullbacks such that the monoidal structure preserves those pullbacks, we get the data for the structure of a (weak) monoidal 2-category on $\mathbf{Span}(\mathcal{C})$. Dually, we can show that given a monoidal category $\mathcal{C}$ with *pushouts*, such that the monoidal structure preserves *them*, we get the data for a monoidal 2-category $\mathbf{CoSpan}(\mathcal{C})$.

*[UPDATE]:* In my hurry to get to my second class, I overstated myself. I should have said that we have the *data* of the monoidal structure. The next post contains the conditions the data must satisfy.

## Spans and Cospans II

There’s something we need to note about spans that will come in extremely handy as we start trying to add structure to our categories of spans.

Remember that we’re starting with a category $\mathcal{C}$ with pullbacks, and from this we construct the weak 2-category $\mathbf{Span}(\mathcal{C})$. It turns out that we can find $\mathcal{C}$ inside $\mathbf{Span}(\mathcal{C})$. First of all, we consider $\mathcal{C}$ as a 2-category itself by our usual trick of considering a set as a category — to every morphism in $\mathcal{C}$, just add one identity 2-morphism and nothing else.

Now we’re going to need an inclusion 2-functor. We could just look for it to preserve compositions up to some natural 2-morphism, but we can actually get preservation on the nose. Just send the arrow $f: A \to B$ to the span $A \leftarrow A \rightarrow B$, where the left arrow is the identity on $A$ and the right arrow is $f$. The composite of two such spans looks like this:

Clearly the identity arrow in $\mathcal{C}$ is sent to the identity span. Since there are only identity 2-morphisms in $\mathcal{C}$, their image under the inclusion is obvious.
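Here is a sketch of this inclusion for spans of finite sets in Python (the arrows `f` and `g` are made-up examples): an arrow goes to the span whose left leg is the identity, and composing two included spans by pullback gives a span isomorphic to the included composite.

```python
def compose(span1, span2):
    # A span is (apex_elements, left_map, right_map), maps as dicts.
    A1, l1, r1 = span1
    A2, l2, r2 = span2
    apex = [(a, b) for a in A1 for b in A2 if r1[a] == l2[b]]  # pullback
    return apex, {p: l1[p[0]] for p in apex}, {p: r2[p[1]] for p in apex}

def include(f):
    # The inclusion of an arrow f: X -> Y as the span X <=id= X =f=> Y.
    X = list(f)
    return X, {x: x for x in X}, {x: f[x] for x in X}

f = {'x1': 'y1', 'x2': 'y1'}
g = {'y1': 'z1'}
apex, left, right = compose(include(f), include(g))
gf = {x: g[f[x]] for x in f}

# The composite is isomorphic, via (x, f(x)) |-> x, to include(g∘f).
iso = {p: p[0] for p in apex}
assert sorted(iso.values()) == sorted(f)              # bijection on apexes
assert all(left[p] == iso[p] for p in apex)           # left legs agree
assert all(right[p] == gf[iso[p]] for p in apex)      # right legs agree
print(sorted(iso.values()))  # ['x1', 'x2']
```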

This inclusion has a number of nice properties. First off, it’s always faithful. Clearly two such spans are the same only if the original arrows were the same. But we can even go further and assert that if two spans in the image of the inclusion are even related by a 2-morphism, they have to come from the same arrow in $\mathcal{C}$. Indeed, here’s the diagram:

The only possible arrow in the middle that makes the left triangle commute is the identity, and then the right triangle can only commute if the two original arrows are equal.

On the other hand, the inclusion is almost *never* full, even if we only ask for “essential” fullness. Indeed, if there’s any non-identity arrow, then we could use it on the left of a span to make something that can’t be in the image. And if we ask only that every span have a 2-morphism from something in the image, we find the diagram:

Here we can always pick the right-hand arrow to make the right side commute, as long as we can find an arrow in the middle that makes the left side commute. But that would mean that every arrow $l$ in $\mathcal{C}$ has an arrow $m$ with $l \circ m = 1$. That is, every single arrow in $\mathcal{C}$ would have to be a (split) surjection, which is far too much to ask. And at the level of 2-morphisms there’s not nearly enough in the image to be full.

So the category sits inside the 2-category as a sub-2-category. This means (roughly) that if we get some structure on there must be a corresponding structure on itself by restriction. Then we can turn around and try to extend structures on to the whole of . I’ve worked out some of the basic examples, which I’ll start in on tomorrow. But some of the interesting ones I’ll be working out as I write them up, so this should prove interesting indeed.