When considering limits, we started by talking about the diagonal functor $\Delta:\mathcal{C}\to\mathcal{C}^{\mathcal{J}}$. This assigns to an object $c\in\mathcal{C}$ the “constant” functor $\Delta(c)$ that sends each object of $\mathcal{J}$ to $c$ and each morphism of $\mathcal{J}$ to $1_c$.
Then towards the end of our treatment of limits we showed that taking limits is a functor. That is, if each functor $F:\mathcal{J}\to\mathcal{C}$ has a limit $\lim F$, then $\lim$ is a functor from $\mathcal{C}^{\mathcal{J}}$ to $\mathcal{C}$. Dually, if every such functor has a colimit $\mathrm{colim}\,F$, then $\mathrm{colim}:\mathcal{C}^{\mathcal{J}}\to\mathcal{C}$ is also a functor.
And now we can fit these into the language of adjoints: when it exists, the limit functor is right adjoint to the diagonal functor, $\Delta\dashv\lim$. Dually, the colimit functor is left adjoint to the diagonal functor when it exists: $\mathrm{colim}\dashv\Delta$. I’ll handle directly the case of colimits, but the limit statements and proofs are straightforward dualizations.
So we definitely have a well-defined functor $\Delta:\mathcal{C}\to\mathcal{C}^{\mathcal{J}}$. By assumption we have for each functor $F:\mathcal{J}\to\mathcal{C}$ an object $\mathrm{colim}\,F\in\mathcal{C}$. If we look at the third entry in our list of ways to specify an adjunction, all we need now is a universal arrow $\eta_F:F\to\Delta(\mathrm{colim}\,F)$. But this is exactly how we defined colimits! Now the machinery we set up yesterday takes over and promotes this collection of universal arrows into the unit of an adjunction $\mathrm{colim}\dashv\Delta$.
For thoroughness’ sake: the unit of this adjunction is the colimiting cocone, considered as a natural transformation from $F$ to the constant functor $\Delta(\mathrm{colim}\,F)$ on the colimiting object. The counit of this adjunction is just the identity arrow on $c$, because the colimit of the constant functor $\Delta(c)$ is just the constant value $c$. The “quasi-inverse” conditions state that $\epsilon\,\mathrm{colim}\circ\mathrm{colim}\,\eta$ is the identity natural isomorphism on $\mathrm{colim}$, and that $\Delta\epsilon\circ\eta\Delta$ is the identity natural isomorphism on $\Delta$, both of which are readily checked.
And our original definition of an adjoint here reads that $\mathrm{Hom}_{\mathcal{C}}(\mathrm{colim}\,F,c)\cong\mathrm{Hom}_{\mathcal{C}^{\mathcal{J}}}(F,\Delta(c))$. That is, for each cocone on $F$ to $c$ (one of the natural transformations on the right) there is a unique arrow from the colimiting object of $F$ to $c$.
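As a tiny concrete check on this bijection, the simplest colimit is a coproduct: the colimit of a discrete two-object diagram in $\mathbf{Set}$ is the disjoint union, and a cocone is just a pair of functions into the target. Here is a minimal Python sketch of the bijection between cocones and arrows out of the colimit; all names (`inl`, `from_cocone`, …) are illustrative, not from any library.

```python
def inl(x):
    """Inclusion of the first summand into the disjoint union (a leg of the colimiting cocone)."""
    return ("L", x)

def inr(y):
    """Inclusion of the second summand."""
    return ("R", y)

def from_cocone(f, g):
    """Turn a cocone (f: A -> C, g: B -> C) into the unique arrow A + B -> C."""
    def h(t):
        tag, v = t
        return f(v) if tag == "L" else g(v)
    return h

def to_cocone(h):
    """Recover the cocone from an arrow out of the coproduct (the inverse bijection)."""
    return (lambda x: h(inl(x)), lambda y: h(inr(y)))
```

For instance, `from_cocone(str, len)` is the unique map out of the disjoint union of a set of numbers and a set of lists agreeing with `str` on the first summand and `len` on the second.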
The unit of an adjunction $F\dashv G$ picks out, for each object $c\in\mathcal{C}$, an arrow $\eta_c:c\to G(F(c))$. This arrow is an object in the comma category $(c\downarrow G)$. And, amazingly enough, it’s an initial object in that category. Given any other object $d\in\mathcal{D}$ and arrow $g:c\to G(d)$ we need to find an arrow $f:F(c)\to d$ in $\mathcal{D}$ so that $G(f)\circ\eta_c=g$. Since $F(g):F(c)\to F(G(d))$, the obvious guess is $f=\epsilon_d\circ F(g)$. Then we can calculate:
$G(\epsilon_d\circ F(g))\circ\eta_c=G(\epsilon_d)\circ G(F(g))\circ\eta_c=G(\epsilon_d)\circ\eta_{G(d)}\circ g=g$
where the second equality uses the naturality of $\eta$ and the third uses the “quasi-inverse” condition we discussed yesterday.
So, an adjunction $F\dashv G$ means that for each and every object $c\in\mathcal{C}$ the component $\eta_c$ of the unit gives a universal arrow from $c$ to $G$. Dually, for every object $d\in\mathcal{D}$ the component $\epsilon_d$ of the counit gives a couniversal arrow from $F$ to $d$.
On the other hand, let’s say we start out with a functor $G:\mathcal{D}\to\mathcal{C}$ and for each $c\in\mathcal{C}$ an object $F(c)\in\mathcal{D}$ and an arrow $\eta_c:c\to G(F(c))$ that is universal from $c$ to $G$. Then given an arrow $f:c\to c'$ we can build an arrow $\eta_{c'}\circ f:c\to G(F(c'))$. By the universality of $\eta_c$ there is then a unique arrow $F(f):F(c)\to F(c')$ so that $G(F(f))\circ\eta_c=\eta_{c'}\circ f$. It’s straightforward now to show that $c\mapsto F(c)$ and $f\mapsto F(f)$ are the object and morphism functions of a functor $F:\mathcal{C}\to\mathcal{D}$, and that $\eta:1_{\mathcal{C}}\to G\circ F$ is a natural transformation.
Now, say we have functors $F:\mathcal{C}\to\mathcal{D}$ and $G:\mathcal{D}\to\mathcal{C}$ and a natural transformation $\eta:1_{\mathcal{C}}\to G\circ F$ so that each $\eta_c:c\to G(F(c))$ is universal from $c$ to $G$. Given an arrow $g:c\to G(d)$, there is (by universality of $\eta_c$) a unique arrow $f:F(c)\to d$ so that $g=G(f)\circ\eta_c$. This sets up a bijection $\mathrm{Hom}_{\mathcal{D}}(F(c),d)\cong\mathrm{Hom}_{\mathcal{C}}(c,G(d))$ defined by $f\mapsto G(f)\circ\eta_c$. This construction is natural in $c$ because $\eta$ is, and it’s natural in $d$ because $G$ is a functor. And so this data is enough to define an adjunction $F\dashv G$.
Dually, we can start with a functor $F:\mathcal{C}\to\mathcal{D}$ and for each $d\in\mathcal{D}$ an object $G(d)\in\mathcal{C}$ and an arrow $\epsilon_d:F(G(d))\to d$ universal from $F$ to $d$. Then we can build $G$ up into a functor and $\epsilon$ up into a natural transformation $\epsilon:F\circ G\to 1_{\mathcal{D}}$ with each component a couniversal arrow. And this is enough to define an adjunction $F\dashv G$.
And, of course, we know that giving a universal arrow from $c$ to $G$ is equivalent to giving a representation of the functor $\mathrm{Hom}_{\mathcal{C}}(c,G(-)):\mathcal{D}\to\mathbf{Set}$, and dually.
So we have quite a long list of ways to specify an adjunction $F\dashv G$:
- Functors $F:\mathcal{C}\to\mathcal{D}$ and $G:\mathcal{D}\to\mathcal{C}$ and a natural isomorphism $\Phi:\mathrm{Hom}_{\mathcal{D}}(F(-),-)\to\mathrm{Hom}_{\mathcal{C}}(-,G(-))$
- Functors $F$ and $G$ and natural transformations $\eta:1_{\mathcal{C}}\to G\circ F$ and $\epsilon:F\circ G\to 1_{\mathcal{D}}$ satisfying $(G\epsilon)\circ(\eta G)=1_G$ and $(\epsilon F)\circ(F\eta)=1_F$
- A functor $G:\mathcal{D}\to\mathcal{C}$ and for each $c\in\mathcal{C}$ an object $F(c)\in\mathcal{D}$ and a universal arrow $\eta_c:c\to G(F(c))$
- A functor $F:\mathcal{C}\to\mathcal{D}$ and for each $d\in\mathcal{D}$ an object $G(d)\in\mathcal{C}$ and a couniversal arrow $\epsilon_d:F(G(d))\to d$
- A functor $G:\mathcal{D}\to\mathcal{C}$ and for each $c\in\mathcal{C}$ a representation of the functor $\mathrm{Hom}_{\mathcal{C}}(c,G(-))$
- A functor $F:\mathcal{C}\to\mathcal{D}$ and for each $d\in\mathcal{D}$ a representation of the functor $\mathrm{Hom}_{\mathcal{D}}(F(-),d)$
- Functors $F$ and $G$ and a natural transformation $\eta:1_{\mathcal{C}}\to G\circ F$ so that each component $\eta_c$ is universal from $c$ to $G$
- Functors $F$ and $G$ and a natural transformation $\epsilon:F\circ G\to 1_{\mathcal{D}}$ so that each component $\epsilon_d$ is couniversal from $F$ to $d$
Last time we took an adjunction and came up with two natural transformations, weakened versions of the natural isomorphisms defining an equivalence. Today we’ll see how to go back the other way.
So let’s say we have an adjunction $F\dashv G$ given by natural isomorphism $\Phi:\mathrm{Hom}_{\mathcal{D}}(F(-),-)\to\mathrm{Hom}_{\mathcal{C}}(-,G(-))$. Remember that we defined the unit and counit by $\eta_c=\Phi(1_{F(c)})$ and $\epsilon_d=\Phi^{-1}(1_{G(d)})$. We can take either one of these and reverse-engineer it. For instance, given an arrow $f:F(c)\to d$ in $\mathcal{D}$ we can calculate
$\Phi(f)=\Phi(f\circ 1_{F(c)})=G(f)\circ\Phi(1_{F(c)})=G(f)\circ\eta_c$
so once we know the unit of the adjunction we can calculate $\Phi$ from it. Notice how we use the naturality of $\Phi$ in the second equality.
Dually, we can determine $\Phi^{-1}$ in terms of the counit. Given $g:c\to G(d)$ in $\mathcal{C}$, we calculate:
$\Phi^{-1}(g)=\Phi^{-1}(1_{G(d)}\circ g)=\Phi^{-1}(1_{G(d)})\circ F(g)=\epsilon_d\circ F(g)$
so we can also determine the natural isomorphism of hom-sets in terms of the counit.
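For reference, in the standard notation (unit $\eta_c=\Phi(1_{F(c)})$, counit $\epsilon_d=\Phi^{-1}(1_{G(d)})$), the two directions of the hom-set isomorphism can be displayed side by side:

```latex
\Phi(f) = G(f)\circ\eta_c \qquad \text{for } f\colon F(c)\to d,
\\
\Phi^{-1}(g) = \epsilon_d\circ F(g) \qquad \text{for } g\colon c\to G(d).
```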
Of course, since we can determine the same isomorphism (technically the isomorphism and its inverse) from either the unit or the counit, they must be related. So what do these equations really tell us?
For this we have to go back to the way we compose natural transformations. The obvious way is the “vertical” composition, where we have natural transformations $\sigma:F\to G$ and $\tau:G\to H$ between three functors from $\mathcal{C}$ to $\mathcal{D}$. We put them together to get $\tau\circ\sigma:F\to H$.
Less obviously, we can consider functors $F$ and $F'$ from $\mathcal{C}$ to $\mathcal{D}$, functors $G$ and $G'$ from $\mathcal{D}$ to $\mathcal{E}$, and natural transformations $\sigma:F\to F'$ and $\tau:G\to G'$. We can put these together to get $\tau*\sigma:G\circ F\to G'\circ F'$, defined by $(\tau*\sigma)_c=\tau_{F'(c)}\circ G(\sigma_c)$ or $(\tau*\sigma)_c=G'(\sigma_c)\circ\tau_{F(c)}$ (exercise: show that these two composites are equal).
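In symbols, for $\sigma\colon F\to F'$ (functors $\mathcal{C}\to\mathcal{D}$) and $\tau\colon G\to G'$ (functors $\mathcal{D}\to\mathcal{E}$), the horizontal composite has components

```latex
(\tau * \sigma)_c
  \;=\; \tau_{F'(c)}\circ G(\sigma_c)
  \;=\; G'(\sigma_c)\circ\tau_{F(c)}
  \;\colon\; G(F(c)) \longrightarrow G'(F'(c)),
```

and the equality of the two expressions is exactly the naturality square for $\tau$ applied to the arrow $\sigma_c$.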
Now what we need here is this “horizontal” composite. Let’s go back to the adjunction $F\dashv G$ and take the natural transformations $\eta:1_{\mathcal{C}}\to G\circ F$ and $1_G:G\to G$. The components of their horizontal composite $\eta*1_G:G\to G\circ F\circ G$ are then given by $\eta_{G(d)}$. Similarly, if we take the natural transformations $1_G$ and $\epsilon:F\circ G\to 1_{\mathcal{D}}$, their horizontal composite $1_G*\epsilon:G\circ F\circ G\to G$ has components given by $G(\epsilon_d)$. Now the “vertical” composite of these two has components $G(\epsilon_d)\circ\eta_{G(d)}=\Phi(\epsilon_d)$. And the above formula for the adjunction isomorphism in terms of the unit tells us that this is $\Phi(\Phi^{-1}(1_{G(d)}))=1_{G(d)}$.
To put it at a bit of a higher level, if we start with the functor $G$, use the unit to turn it into the functor $G\circ F\circ G$, then use the counit to move back to $G$, the composite natural transformation $(G\epsilon)\circ(\eta G)$ is just the identity transformation on $G$. Similarly, we can show that the composite $(\epsilon F)\circ(F\eta)$ taking $F$ to $F\circ G\circ F$ and back to $F$ is the identity transformation on $F$.
Inherent in this is also the converse statement. If we have natural transformations $\eta:1_{\mathcal{C}}\to G\circ F$ and $\epsilon:F\circ G\to 1_{\mathcal{D}}$ satisfying these two identities, then we can use the above formulae to define a natural isomorphism $\Phi$ in terms of $\eta$ and its inverse in terms of $\epsilon$. Thus an adjunction is determined by a unit and a counit satisfying these “quasi-inverse” relations.
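Written out, the two “quasi-inverse” relations (often called the triangle, or zig-zag, identities) are:

```latex
(G\epsilon)\circ(\eta G) = 1_G,
\qquad
(\epsilon F)\circ(F\eta) = 1_F,
\\
\text{componentwise:}\quad
G(\epsilon_d)\circ\eta_{G(d)} = 1_{G(d)},
\qquad
\epsilon_{F(c)}\circ F(\eta_c) = 1_{F(c)}.
```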
If you’re up to it, try to see where we’ve seen these quasi-inverse relations before in a completely different context. I’ll be coming back to this later.
Let’s say we have an adjunction $F\dashv G$. That is, functors $F:\mathcal{C}\to\mathcal{D}$ and $G:\mathcal{D}\to\mathcal{C}$ and a natural isomorphism $\Phi:\mathrm{Hom}_{\mathcal{D}}(F(-),-)\to\mathrm{Hom}_{\mathcal{C}}(-,G(-))$.
Last time I drew an analogy between equivalences and adjunctions. In the case of an equivalence, we have natural isomorphisms $1_{\mathcal{C}}\to G\circ F$ and $F\circ G\to 1_{\mathcal{D}}$. This presentation seems oddly asymmetric, and now we’ll see why by moving these structures to the case of an adjunction.
So let’s set $d=F(c)$ like we did to show that an equivalence is an adjunction. The natural isomorphism is now $\mathrm{Hom}_{\mathcal{D}}(F(c),F(c))\cong\mathrm{Hom}_{\mathcal{C}}(c,G(F(c)))$. Now usually this doesn’t give us much, but there’s one of these hom-sets that we know has a morphism in it: if $d=F(c)$ then $1_{F(c)}\in\mathrm{Hom}_{\mathcal{D}}(F(c),F(c))$. Then $\Phi(1_{F(c)})$ is an arrow in $\mathcal{C}$ from $c$ to $G(F(c))$.
We’ll call this arrow $\eta_c=\Phi(1_{F(c)})$. Doing this for every object $c\in\mathcal{C}$ gives us all the components of a natural transformation $\eta:1_{\mathcal{C}}\to G\circ F$. For this, we need to show the naturality condition $G(F(f))\circ\eta_c=\eta_{c'}\circ f$ for each arrow $f:c\to c'$. This is a straightforward calculation:
$G(F(f))\circ\eta_c=G(F(f))\circ\Phi(1_{F(c)})=\Phi(F(f)\circ 1_{F(c)})=\Phi(1_{F(c')}\circ F(f))=\Phi(1_{F(c')})\circ f=\eta_{c'}\circ f$
using the definition of $\eta$ and the naturality of $\Phi$ in each variable.
This natural transformation $\eta:1_{\mathcal{C}}\to G\circ F$ is called the “unit” of the adjunction $F\dashv G$. Dually we can set $c=G(d)$ and extract an arrow $\epsilon_d=\Phi^{-1}(1_{G(d)}):F(G(d))\to d$ for each object $d\in\mathcal{D}$ and assemble them into a natural transformation $\epsilon:F\circ G\to 1_{\mathcal{D}}$ called the “counit”. If both of these natural transformations are natural isomorphisms, then we have an equivalence.
For a particular example, let’s look at this in the case of the free-monoid functor $F:\mathbf{Set}\to\mathbf{Mon}$ as the left adjoint to the underlying-set functor $U:\mathbf{Mon}\to\mathbf{Set}$. The unit will give an arrow $\eta_S:S\to U(F(S))$, which here is just the inclusion of the generators (elements of $S$) as elements of the underlying set of the free monoid. The counit, on the other hand, will give an arrow $\epsilon_M:F(U(M))\to M$. That is, we take all elements of the monoid $M$ and use them as generators of a new free monoid: we write out “words” where each “letter” is a whole element of $M$. Then to take such a word and send it to an element of $M$, we just take all the letters and multiply them together as elements of $M$. Since we gave a description of $\Phi$ last time for this case, it’s instructive to sit down and work through the definitions of $\eta$ and $\epsilon$ to show that they do indeed give these arrows.
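To make this concrete, here is a minimal Python sketch of the unit and counit for the free-monoid adjunction, modeling the free monoid on a set as lists (“words”) over it, and a monoid $M$ as an operation together with an identity element. The names are illustrative, not from any library.

```python
def unit(x):
    """The unit: include a generator as a one-letter word in the free monoid."""
    return [x]

def counit(op, e, word):
    """The counit: multiply out a word whose letters are elements of a monoid M,
    where M is given by its operation `op` and identity element `e`."""
    result = e
    for letter in word:
        result = op(result, letter)
    return result
```

For the additive monoid of integers, `counit(lambda a, b: a + b, 0, [1, 2, 3])` multiplies the word $1\cdot 2\cdot 3$ out to the element $6$.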
Today I return to the discussion of universals, limits, representability, and related topics. The last piece of this puzzle is the notion of an adjunction. I’ll give a definition and examples today and work out properties later.
An adjunction between categories $\mathcal{C}$ and $\mathcal{D}$ consists of a pair of functors $F:\mathcal{C}\to\mathcal{D}$ and $G:\mathcal{D}\to\mathcal{C}$ and a natural isomorphism $\Phi:\mathrm{Hom}_{\mathcal{D}}(F(-),-)\to\mathrm{Hom}_{\mathcal{C}}(-,G(-))$. Notice that the functors on either side of $\Phi$ go from $\mathcal{C}^{\mathrm{op}}\times\mathcal{D}$ to $\mathbf{Set}$, so each component $\Phi_{c,d}:\mathrm{Hom}_{\mathcal{D}}(F(c),d)\to\mathrm{Hom}_{\mathcal{C}}(c,G(d))$ is a bijection of sets. We say that $F$ is “left adjoint” to $G$, and conversely that $G$ is “right adjoint” to $F$, and we write $F\dashv G$.
Now, we have been seeing these things all along our trip so far, but without mentioning them as such. For instance, we have all the “free” constructions:
- the free monoid on a set
- the free group on a set
- the free group on a monoid
- the semigroup ring
- the free ring on an abelian group
- the free module on a set
- the free algebra on a module
and maybe more that I’ve mentioned, but don’t recall.
These all have a very similar form in their definitions. For instance, the free monoid $F(S)$ on a set $S$ is characterized by the following universal property: every function $f:S\to U(M)$ from $S$ into the underlying set of a monoid $M$ extends uniquely to a monoid homomorphism $\bar{f}:F(S)\to M$. If we write the underlying set of $M$ as $U(M)$, we easily see that $U:\mathbf{Mon}\to\mathbf{Set}$ is a functor. The condition then is that every element of the hom-set $\mathrm{Hom}_{\mathbf{Set}}(S,U(M))$ corresponds to exactly one element of the hom-set $\mathrm{Hom}_{\mathbf{Mon}}(F(S),M)$, and every monoid homomorphism $F(S)\to M$ restricts to a function on $S$. That is, for every set $S$ and monoid $M$ we have an isomorphism of sets $\mathrm{Hom}_{\mathbf{Mon}}(F(S),M)\cong\mathrm{Hom}_{\mathbf{Set}}(S,U(M))$.
Now, given a function $f:S\to T$ from a set $S$ to a set $T$ we can consider $T$ to be a subset of the free monoid on itself, giving a function $S\to U(F(T))$. This extends to a unique monoid homomorphism $F(f):F(S)\to F(T)$. This construction preserves identities and compositions, making $F$ into a functor from $\mathbf{Set}$ to $\mathbf{Mon}$.
If we have a function $f:S'\to S$ and a monoid homomorphism $g:M\to M'$ then we can build functions $\mathrm{Hom}_{\mathbf{Mon}}(F(S),M)\to\mathrm{Hom}_{\mathbf{Mon}}(F(S'),M')$ and $\mathrm{Hom}_{\mathbf{Set}}(S,U(M))\to\mathrm{Hom}_{\mathbf{Set}}(S',U(M'))$. The isomorphisms $\mathrm{Hom}_{\mathbf{Mon}}(F(S),M)\cong\mathrm{Hom}_{\mathbf{Set}}(S,U(M))$ commute with these arrows, so they form the components of a natural isomorphism between the two functors. This proves that the free monoid functor is a left adjoint to the forgetful functor: $F\dashv U$.
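The adjunction bijection itself can be sketched in the same style: extend a function on generators to a homomorphism out of the free monoid (modeled as lists), and restrict back. Again, the names here are illustrative, not from any library.

```python
def extend(op, e, f):
    """Send f: S -> U(M) to the unique monoid homomorphism F(S) -> M,
    where M is given by its operation `op` and identity `e`."""
    def hom(word):
        result = e
        for letter in word:
            result = op(result, f(letter))
        return result
    return hom

def restrict(hom):
    """Send a homomorphism F(S) -> M back to its values on generators
    (one-letter words) -- the inverse direction of the bijection."""
    return lambda x: hom([x])
```

For example, extending `str` into the monoid of strings under concatenation sends the word `[1, 2, 3]` to `"123"`, and restricting that homomorphism recovers `str` on generators.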
All the other examples listed above go exactly the same way, giving left adjoints to all the forgetful functors.
As a slightly different example, we have a forgetful functor $U:\mathbf{Ab}\to\mathbf{Grp}$ that takes an abelian group and “forgets” that it’s abelian, leaving just a group. Conversely, we can take any group $G$ and take the quotient $G/[G,G]$ by its commutator subgroup to get an abelian group. This satisfies the property that for any group homomorphism $f:G\to U(A)$ from $G$ to an abelian group $A$ (considered as just a group) there is a unique homomorphism of abelian groups $\bar{f}:G/[G,G]\to A$. Thus it turns out that “abelianization” of a group is left adjoint to the forgetful functor from abelian groups to groups.
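As with the free constructions, this universal property packages up as a natural isomorphism of hom-sets (writing $U\colon\mathbf{Ab}\to\mathbf{Grp}$ for the forgetful functor):

```latex
\mathrm{Hom}_{\mathbf{Ab}}\bigl(G/[G,G],\,A\bigr)
\;\cong\;
\mathrm{Hom}_{\mathbf{Grp}}\bigl(G,\,U(A)\bigr).
```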
There are more explicit examples we’ve seen, but I’ll leave them to illustrate some particular properties of adjoints. Take note, though, that not all adjunctions involve forgetful functors like these examples have.
An adjunction between two categories can be seen as a weaker version of an equivalence. An equivalence given by functors $F:\mathcal{C}\to\mathcal{D}$ and $G:\mathcal{D}\to\mathcal{C}$ tells us that both $F$ and $G$ are fully faithful, so $\mathrm{Hom}_{\mathcal{D}}(F(c),d)\cong\mathrm{Hom}_{\mathcal{C}}(G(F(c)),G(d))$. Now let’s put these together to find that $\mathrm{Hom}_{\mathcal{D}}(F(c),d)\cong\mathrm{Hom}_{\mathcal{C}}(G(F(c)),G(d))\cong\mathrm{Hom}_{\mathcal{C}}(c,G(d))$, where the last isomorphism uses the natural isomorphism $G\circ F\cong 1_{\mathcal{C}}$. So every equivalence is an adjunction.
I’m exhausted from spending all morning and much of the afternoon purchasing a new (to me) car. As a result, I’ll just forward you to the excellent notes that Miguel Carrión Alvarez took in John Baez’ seminar on quantum gravity back in fall and winter of 2000-1.
Specifically, pay attention to the diagrammatics. He’s talking mostly about finite-dimensional vector spaces over the field of complex numbers, but most everything applies to a general (braided) monoidal category (with duals). Also, he draws his diagrams from top to bottom, while (as I keep reminding you) I write mine from bottom to top to make it easier to read off the algebraic notation.
We’ve already seen some of the basic pieces as braid, Temperley-Lieb, and tangle diagrams, but here each arc in the diagram carries a label from the objects of a category, and usually an arrow. We can move to a dual object by reversing the arrow or changing the label.
Morphisms can be put in boxes, with the incoming object at the bottom and the outgoing one at the top. The naturality for the dual morphisms basically says we can slide a morphism up over a cup or down under a cap to get its dual. Also, often a morphism will have a number of incoming or outgoing strands, which means that the incoming object is the tensor product of the objects on the incoming strands.
A braiding is written as a crossing (lower-left over upper-right), and the inverse of the braiding is written as the other kind of crossing. Naturality means that we can pull a morphism along a strand through a crossing.
There’s a lot more to the notes than just the diagrammatics, though. If you’re up to it, I highly recommend giving it all a look. If not, just look for the pictures and read the sections around them for the explanations. I’ll be back on Monday with more exposition.
As one last example along these lines, let’s throw all these structures in together. We start with a monoidal category, and we want it to have both a braiding and duals. Naturally, they’ll have to play well together.
Now I’ll want to go back and tweak one of the things I said about duals. I insisted that the duality functor satisfy $(V^*)^*=V$, but we can actually weaken it slightly. What I defined before we can call a “right dual”, since it provides a pairing $\epsilon_V:V^*\otimes V\to\mathbf{1}$. Instead of insisting that $(V^*)^*=V$, we can put in “left duals” by hand. This will be another contravariant functor $V\mapsto{}^*V$, with arrows $\tilde{\eta}_V:\mathbf{1}\to{}^*V\otimes V$ and $\tilde{\epsilon}_V:V\otimes{}^*V\to\mathbf{1}$ satisfying the “mirror images” of the identities for $\eta_V$ and $\epsilon_V$.
What does this all have to do with a braiding? Well we can use a braiding to turn a left dual into a right dual and vice versa. Let’s say we’ve got (right) dual maps $\eta_V:\mathbf{1}\to V\otimes V^*$ and $\epsilon_V:V^*\otimes V\to\mathbf{1}$, along with braiding maps $\beta_{V,W}:V\otimes W\to W\otimes V$. Now we can define left dual maps (taking ${}^*V=V^*$) by $\tilde{\eta}_V=\beta_{V,V^*}\circ\eta_V$ and $\tilde{\epsilon}_V=\epsilon_V\circ\beta_{V,V^*}$. This will automatically satisfy the left dual axioms. You can either try to show this now using the algebra or hold off a bit until we have a better tool to attack such identities.
Okay, so a braided monoidal category with duals has a braiding and has right duals. Since we define left duals in terms of these structures they’ll automatically play well together. From these we can further build a morphism $\theta_V:V\to V$ for each object called the “balancing”. It’s built by composing the braiding with the duality maps; with the conventions above one can take $\theta_V=(1_V\otimes\tilde{\epsilon}_V)\circ(\beta_{V,V}\otimes 1_{V^*})\circ(1_V\otimes\eta_V)$. We call an object “unframed” if $\theta_V=1_V$.
I know that there’s a lot of stuff here and it’s hard to remember it all. We need a better way to think about these things. Just like we had for braided categories and categories with duals, we have a diagrammatic rendering of the free braided monoidal category with duals on one object.
First I’ll deal with the free braided monoidal category with duals on one self-dual, unframed object. That is, we start with one object, say it’s its own (left and right) dual, and require that its balancing is the identity. Then we build all the other objects as tensor powers of this one. Then we throw in the braiding morphisms and the duality morphisms and insist they satisfy all the relevant relations. And what we get is the category of tangles. Here’s an example:
As usual we read this from bottom to top. Notice that this means we can read off the algebraic notation just by going top to bottom, left to right, writing down a braiding or duality morphism for each crossing, cup, and cap we meet. The unframed condition tells us that we can untwist that loop to the right and still have the same tangle.
What if we didn’t ask that our generator be unframed? Then we get the category of framed tangles, which differs only in that we can’t untwist loops like the one above. It’s like working with knot diagrams where all we can use are Reidemeister moves 2 and 3.
What if our generator is unframed, but isn’t self-dual? Then we get the category of oriented tangles. All we do now is label each strand of our tangle diagram with an orientation, like we did in the case of oriented Temperley-Lieb diagrams.
What if our generator is neither unframed nor self-dual? Then we get the category of framed, oriented tangles.
Each of these leads to a diagrammatic way of looking at the respective sorts of categories. For example, go back and take the definition of left duals in terms of right duals and write it out as a diagram of framed, oriented tangles. Then write down the condition it must satisfy as an equation between framed oriented tangles. Finally, verify that it satisfies the equation by finding a sequence of Reidemeister moves 2 and 3 from one side to the other (and invoke the definition of right duals at some point).
The category of tangles is very simply described in combinatorial and algebraic terms, but you might not yet have noticed how general it is. To illustrate this, I give with no further comment an example of a tangle:
Okay, after last week’s shake-ups I’m ready to get back into the swing of things. I mentioned yesterday something called the “Temperley-Lieb Category”, and it just so happens we’re right on schedule to explain it properly.
We’ve seen the category of braids and how the braided coherence theorem makes it the “free braided monoidal category on one object”. That is, it has exactly the structure needed for a braided monoidal category — no more, no less — and if we pick any object of another such category we get a unique braided monoidal functor from the category of braids to that category.
So of course we want the same sort of thing for monoidal categories with duals. We’ll even draw the same sorts of pictures. A point on a horizontal line will be our generating object $V$, but we also need a dual object $V^*$. So specifically we’ll think of the object $V$ as “going up through the point” and the dual $V^*$ as “going down through the point”. Then we can draw cups and caps to connect an upward line and a downward line and interpret it as a duality map. Notice, though, that we can’t make any curves cross each other because we have no braiding! Here’s an example of such a Temperley-Lieb diagram:
Again, we read this from bottom to top, and from left to right. On the bottom line we have a downward line followed by an upward line, which means we start at the object $V^*\otimes V$. Then we pass through a cap, which corresponds to the transformation $\epsilon_V:V^*\otimes V\to\mathbf{1}$. From there, each cup we pass through is an $\eta$ and each cap is an $\epsilon$, tensored with identities on the strands that pass straight through, building up and tearing down tensor powers of $V$ and $V^*$ until we reach the object at the top of the diagram.
We could simplify this a bit by cancelling two cup/cap pairs using the equations we imposed on the natural transformations $\eta$ and $\epsilon$. In fact, this is probably a much easier way to remember what those equations mean. The equations tell us in algebraic terms that we can cancel off a neighboring cup and cap, while the topology of the diagram says that we can straighten out a zig-zag.
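For reference, with cup $\eta_V\colon\mathbf{1}\to V\otimes V^*$ and cap $\epsilon_V\colon V^*\otimes V\to\mathbf{1}$ (one common convention; others differ by which tensor factor the dual sits in), the zig-zag equations read:

```latex
(1_V\otimes\epsilon_V)\circ(\eta_V\otimes 1_V) = 1_V,
\qquad
(\epsilon_V\otimes 1_{V^*})\circ(1_{V^*}\otimes\eta_V) = 1_{V^*}.
```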
Incidentally, one feature that’s missing from this diagram is that it’s entirely possible to have an arc (pointing either way) start at the bottom of the diagram and leave at the top.
Now if we have any category with duals $\mathcal{C}$ and an object $V\in\mathcal{C}$ we can build a unique functor from the category of oriented Temperley-Lieb diagrams to $\mathcal{C}$ sending the upwards-oriented line to the object $V$. It sends the above diagram (for example) to the composite of duality morphisms we read off it.
Another useful category is the free monoidal category with duals on a single self-dual object. This is the Temperley-Lieb category, which looks just the same as the oriented version with one crucial difference: since the object is its own dual, we can’t tell the difference between the two different directions a line could go. Up and down are the same thing. In the algebra this might seem a little odd, but in the diagram all it means is we get to drop the little arrows that tell us which way to go.
And now if we have any category with duals and any self-dual object $V$ we have a unique functor from the Temperley-Lieb category to it sending the strand to $V$. This is how Temperley-Lieb diagrams are turned into (categorified) representations in Khovanov homology.
As part of today’s maintenance, I’m going through some weblogs I’d been wanting to catch up on and possibly add to the sidebar. Two for now: The Everything Seminar, which does for Cornell what Secret Blogging Seminar does for Berkeley, and Zero Divides, documenting the life of an aspiring mathematician from all the way back in undergrad life.
TES is very new, and has plunged in with series about graph theory and some homological algebra. On the other hand, ZD has been around for a while. Unfortunately, the author (“Zero”? “Divides”? “ZiDane”?) seems to have configured comments very restrictively, so you’ll just have to send your encouragement through the æther. Or maybe she’ll see this pingback and realize that nobody can comment but those she’s specifically authorized to do so.
So I’m back from a week in Faro, Portugal, talking about various things surrounding the ideas of “knot homology”. So what is it? Well, this will be a bit of a loose treatment of the subject, and may not be completely on the mark. I like David Corfield’s idea that a mathematician is a sort of storyteller, and I’m not about to let mere history get in the way of a good história. Besides, I’ll get to most of the details in my main line sooner or later.
First I should mention the Bracket polynomial and the Jones polynomial. Jones was studying a certain kind of algebra when he realized that the defining relations for these algebras were very much like those of the braid groups. In fact, he was quickly able to use this similarity to assign a Laurent polynomial — one which allows negative powers of the variable — to every knot diagram that didn’t change when two diagrams differed by a Reidemeister move. That is, it was a new invariant of knots.
The Jones polynomial came out of nowhere, from the perspective of the day’s knot theorists. And it set the whole field on its ear. From my perspective looking back, there’s a huge schism in knot theory between those who primarily study the geometry and the “classical topology” of the situation and those who primarily study the algebra, combinatorics, and the rising field of “quantum topology”. To be sure there are bridges between the two, some of which I’ll mention later. But the upshot was that the Jones polynomial showed a whole new way of looking at knots and invariants.
Immediately in its aftermath a huge number of interpretations and generalizations poured forth. One of the most influential was Louis Kauffman’s “state-sum” model: the Bracket. This is an invariant of regular isotopy instead of ambient isotopy, which basically means we throw out Reidemeister I moves. Meanwhile, I glossed over above that the Jones polynomial actually applies to oriented links, where there’s a little arrow saying “go this way around the loop”. This is a subtle distinction between the Bracket and the Jones polynomial that many authors steamroll over, but I find it important for my own reasons.
Anyhow, the Bracket also assigns a Laurent polynomial to every diagram in a way that’s invariant under an appropriate collection of moves. It does this by taking each crossing and “splitting” it in two ways — turning an incoming strand to the left or the right rather than connecting straight across. For a link diagram with $n$ crossings there are now $2^n$ “states” of the diagram. Now we assign each state a “weight” and just add up the weights for all the different states. Thus: “state-sum”. If we choose the rule for weighting states correctly we can make the resulting polynomial into an invariant.
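To illustrate the state-sum idea, here is a rough Python sketch of the Bracket, assuming a planar-diagram-style encoding where each crossing is a 4-tuple of arc labels, an “A” smoothing joins the first pair of arcs and the second pair, and a “B” smoothing joins the other pairing (smoothing conventions vary across sources). Laurent polynomials in the variable $A$ are stored as exponent-to-coefficient dicts. This is illustrative code, not a library.

```python
from itertools import product

def count_loops(n_arcs, joins):
    """Count closed loops: connected components of arc labels under the joins
    (a small union-find)."""
    parent = list(range(n_arcs))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in joins:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n_arcs)})

def mul_delta(term):
    """Multiply a Laurent polynomial {exp: coeff} by the loop value
    delta = -A^2 - A^(-2)."""
    out = {}
    for e, c in term.items():
        out[e + 2] = out.get(e + 2, 0) - c
        out[e - 2] = out.get(e - 2, 0) - c
    return out

def bracket(n_arcs, crossings):
    """State sum: over all 2^n smoothings, add up A^(#A - #B) * delta^(loops - 1)."""
    poly = {}
    for choice in product("AB", repeat=len(crossings)):
        joins, exponent = [], 0
        for s, (a, b, c, d) in zip(choice, crossings):
            if s == "A":
                joins += [(a, b), (c, d)]
                exponent += 1
            else:
                joins += [(a, d), (b, c)]
                exponent -= 1
        term = {exponent: 1}
        for _ in range(count_loops(n_arcs, joins) - 1):
            term = mul_delta(term)
        for e, c in term.items():
            poly[e] = poly.get(e, 0) + c
    return {e: c for e, c in poly.items() if c != 0}
```

With this convention a single kink on the unknot (two arcs, one crossing `(0, 0, 1, 1)`) evaluates to `{3: -1}`, i.e. $-A^3$ times the bracket of the unknot, which is exactly the failure of Reidemeister 1 invariance mentioned above.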
So now we flash forward from the mid-’80s to the late ’90s. Mikhail Khovanov, as a student of Igor Frenkel at Yale, becomes interested in the nascent field of categorification. Particularly, he was interested in categorifying the Lie algebra $\mathfrak{sl}_2$. That is, he needed to find a category and functors from that category to itself that satisfied certain relations analogous to the defining relations of the Lie algebra structure. He did this using some techniques from a field called “homological algebra”. I’ll eventually talk about it, but for now ask Michi.
But as it happens, this Lie algebra has a very nice category of representations. It’s a monoidal category with duals. In fact, every object is its own dual, and we can (morally, at least) build them all from a single fundamental representation. That means that it’s deeply related to the category of so-called “unoriented Temperley-Lieb” diagrams, which is (roughly) to categories with duals as the category of braids is to braided monoidal categories.
A Temperley-Lieb diagram is just a bunch of loops and arcs on the plane. The arcs connect marked points at the top and bottom of the diagram (like braid strands) while the loops just sorta float there, and none of the strands cross each other at all. So if there are no arcs, there’s just a bunch of separate loops. And we care about this because the states of a link in the definition of the Bracket are just bunches of separate loops too!
So we can take each state and read it in terms of this homological categorification of . And we can read a combination of states — a state sum — in such homological terms as well. So the defining relations of the bracket become “chain homotopies” — natural isomorphisms — in the homological context of the categorification. Thus we have a homological categorification of the Bracket model of the Jones polynomial.
And again, it just came out of nowhere and has immediately revolutionized the field. Homology theories are hot right now. This high-level approach has been broken down by Khovanov and Rozansky into a combinatorial formulation, which knot theory groups like those at George Washington University and the University of Iowa have latched onto. The field of “Heegaard Floer homology” has been nudged closer and closer to the combinatorial Khovanov framework from its origins in analytic problems. Other knot invariants are lining up to be categorified along the same lines. And all the while the incredibly rich structure of Khovanov homology itself is being spruced up and neatened, leading to a series of clear examples to act as guideposts for those probing higher categorical structures in general.
And that’s what we just spent the last week talking about in Faro.