The Unapologetic Mathematician

Mathematics for the interested outsider

Representations of Hopf Algebras I

We’ve seen that the category of representations of a bialgebra is monoidal. What do we get for Hopf algebras? What does an antipode buy us? Duals! At least when we restrict to finite-dimensional representations.

Again, we base things on the underlying category of vector spaces. Given a representation \rho:H\rightarrow\hom_\mathbb{F}(V,V), we want to find a representation \rho^*:H\rightarrow\hom_\mathbb{F}(V^*,V^*). And it should commute with the natural transformations which make up the dual structure.

Easy enough! We just take the dual of each map to find \rho(h)^*:V^*\rightarrow V^*. But this can’t work as it stands: duality reverses the order of composition, since (f\circ g)^*=g^*\circ f^*, so h\mapsto\rho(h)^* is an antihomomorphism rather than a homomorphism. We need an antiautomorphism S to reverse the multiplication on H first. Then we can define \rho^*(h)=\rho(S(h))^*.
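To see the problem concretely, here’s a minimal numerical sketch (in Python with NumPy), with two arbitrary matrices standing in for \rho(g) and \rho(h):

```python
import numpy as np

# Taking duals (transposes) reverses composition, so h |-> rho(h)^* alone
# cannot be a representation.  Two arbitrary matrices stand in for rho(g), rho(h):
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 1.0]])

assert np.allclose((A @ B).T, B.T @ A.T)       # duality reverses the order
assert not np.allclose((A @ B).T, A.T @ B.T)   # the naive order fails here
```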

The antiautomorphism we’ll use will be the antipode. Now to make these representations actual duals, we’ll need natural transformations \eta_\rho:\mathbf{1}\rightarrow\rho^*\otimes\rho and \epsilon_\rho:\rho\otimes\rho^*\rightarrow\mathbf{1}. This natural transformation \epsilon is not to be confused with the counit of the Hopf algebra. Given a representation \rho on the finite-dimensional vector space V, we’ll just use the \eta_V and \epsilon_V that come from the duality on the category of finite-dimensional vector spaces.

Thus we find that \epsilon_\rho is the pairing v\otimes\lambda\mapsto\lambda(v). Does this commute with the actions of H? On the one side, we calculate

\begin{aligned}\left[\left[\rho\otimes\rho^*\right](h)\right](v\otimes\lambda)=\left[\rho\left(h_{(1)}\right)\otimes\rho^*\left(h_{(2)}\right)\right](v\otimes\lambda)\\=\left[\rho\left(h_{(1)}\right)\otimes\rho\left(S\left(h_{(2)}\right)\right)^*\right](v\otimes\lambda)\\=\left[\rho\left(h_{(1)}\right)\right](v)\otimes\left[\rho\left(S\left(h_{(2)}\right)\right)^*\right](\lambda)\end{aligned}

Then we apply the evaluation to find

\begin{aligned}\left[\left[\rho\left(S\left(h_{(2)}\right)\right)^*\right](\lambda)\right]\left(\left[\rho\left(h_{(1)}\right)\right](v)\right)=\lambda\left(\left[\rho\left(S\left(h_{(2)}\right)\right)\right]\left(\left[\rho\left(h_{(1)}\right)\right](v)\right)\right)\\=\lambda\left(\left[\rho\left(h_{(1)}S\left(h_{(2)}\right)\right)\right](v)\right)=\lambda\left(\left[\rho\left(\mu\left(h_{(1)}\otimes S\left(h_{(2)}\right)\right)\right)\right](v)\right)\\=\lambda\left(\left[\rho\left(\iota\left(\epsilon(h)\right)\right)\right](v)\right)=\epsilon(h)\lambda(v)\end{aligned}

This is the same as the result we’d get by applying the “unit” action after evaluating. Notice how we used the definition of the dual map, the fact that \rho is a representation, and the defining property of the antipode in obtaining this result.
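For concreteness, here’s a sketch of this check for the group algebra \mathbb{F}[\mathbb{Z}/3], where \Delta(e_g)=e_g\otimes e_g, \epsilon(e_g)=1, and the antipode is S(e_g)=e_{g^{-1}}. The regular representation below is just a convenient hypothetical example:

```python
import numpy as np

n = 3

def rho(g):
    # regular representation of Z/3: rho(g) sends e_j to e_{j+g}
    P = np.zeros((n, n))
    for j in range(n):
        P[(j + g) % n, j] = 1.0
    return P

def rho_dual(g):
    # dual representation: rho*(e_g) = rho(S(e_g))^* = rho(-g) transposed
    return rho((-g) % n).T

rng = np.random.default_rng(0)
v, lam = rng.standard_normal(n), rng.standard_normal(n)

for g in range(n):
    # e_g acts on v (x) lam as rho(g)v (x) rho*(g)lam, since Delta(e_g) = e_g (x) e_g
    evaluated = (rho_dual(g) @ lam) @ (rho(g) @ v)
    assert np.isclose(evaluated, lam @ v)   # equals epsilon(e_g) * lam(v) = lam(v)
```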

This much works whether or not V is a finite-dimensional vector space. The other direction, though, needs more work, especially since I waved my hands at it when I used \mathbf{FinVect} as the motivating example of a category with duals. Tomorrow I’ll define this map.

November 12, 2008 Posted by | Algebra, Category theory, Representation Theory | 3 Comments

Representations of Bialgebras

What’s so great about bialgebras? Their categories of representations are monoidal!

Let’s say we have two algebra representations \rho:A\rightarrow\hom_\mathbb{F}(V,V) and \sigma:A\rightarrow\hom_\mathbb{F}(W,W). These are morphisms in the category of \mathbb{F}-algebras, and so of course we can take their tensor product \rho\otimes\sigma. But this is not a representation of the same algebra. It’s a representation of the tensor square of the algebra:

\rho\otimes\sigma:A\otimes A\rightarrow\hom_\mathbb{F}(V,V)\otimes\hom_\mathbb{F}(W,W)\cong\hom_\mathbb{F}(V\otimes W,V\otimes W)

Ah, but if we have a way to send A to A\otimes A (an algebra homomorphism, that is), then we can compose it with this tensor product to get a representation of A. And that’s exactly what the comultiplication \Delta does for us. We abuse notation slightly and write:

\rho\otimes\sigma:A\rightarrow\hom_\mathbb{F}(V\otimes W,V\otimes W)

where the homomorphism of this representation is the comultiplication \Delta followed by the tensor product of the two homomorphisms, followed by the equivalence of \hom algebras.

Notice here that the underlying vector space of the tensor product of two representations \rho\otimes\sigma is the tensor product of their underlying vector spaces V\otimes W. That is, if we think (as many approaches to representation theory do) of the vector space as fundamental and the homomorphism as extra structure, then this is saying we can put the structure of a representation on the tensor product of the vector spaces.
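Here’s a sketch of the construction for a group algebra, where \Delta(e_g)=e_g\otimes e_g makes the tensor product representation act by the Kronecker product. The two representations of \mathbb{Z}/6 below are hypothetical choices:

```python
import numpy as np

n = 6

def rho(g):
    # regular representation on F^6: e_j |-> e_{j+g}
    P = np.zeros((n, n))
    for j in range(n):
        P[(j + g) % n, j] = 1.0
    return P

def sigma(g):
    # two-dimensional rotation representation: rotation by 2*pi*g/6
    t = 2 * np.pi * g / n
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def tensor_rep(g):
    # (rho (x) sigma)(e_g) = rho(g) (x) sigma(g), acting on a 12-dimensional space
    return np.kron(rho(g), sigma(g))

# The composite really is a representation: group multiplication
# goes to composition of the Kronecker products.
for g in range(n):
    for h in range(n):
        assert np.allclose(tensor_rep((g + h) % n), tensor_rep(g) @ tensor_rep(h))
```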

Which leads us to the next consideration. For the tensor product to be a monoidal structure we need an associator. And the underlying linear map on vector spaces must clearly be the old associator for \mathbf{Vect}(\mathbb{F}). We just need to verify that it commutes with the action of A.

So let’s consider three representations \rho:A\rightarrow\hom_\mathbb{F}(U,U), \sigma:A\rightarrow\hom_\mathbb{F}(V,V), and \tau:A\rightarrow\hom_\mathbb{F}(W,W). Given an algebra element a and vectors u, v, and w, we have the action

\begin{aligned}\left[\left[(\rho\otimes\sigma)\otimes\tau\right](a)\right]((u\otimes v)\otimes w)=\\\left(\left[\rho\left(\left(a_{(1)}\right)_{(1)}\right)\right](u)\otimes\left[\sigma\left(\left(a_{(1)}\right)_{(2)}\right)\right](v)\right)\otimes\left[\tau\left(a_{(2)}\right)\right](w)\end{aligned}

On the other hand, if we associate the other way we have the action

\begin{aligned}\left[\left[\rho\otimes(\sigma\otimes\tau)\right](a)\right](u\otimes(v\otimes w))=\\\left[\rho\left(a_{(1)}\right)\right](u)\otimes\left(\left[\sigma\left(\left(a_{(2)}\right)_{(1)}\right)\right](v)\otimes\left[\tau\left(\left(a_{(2)}\right)_{(2)}\right)\right](w)\right)\end{aligned}

where we have used the Sweedler notation to write out the comultiplications of a. But now we can use the coassociativity of the comultiplication — along with the fact that, as algebra homomorphisms, the representations are linear maps — to show that the associator on \mathbf{Vect}(\mathbb{F}) intertwines these actions, and thus acts as an associator for the category of representations of A as well.

We also need a unit object, and similar considerations to those above tell us it should be based on the vector space unit object. That is, we need a homomorphism A\rightarrow\hom_\mathbb{F}(\mathbb{F},\mathbb{F}). But linear maps from the base field to itself (considered as a one-dimensional vector space) are just multiplications by field elements! That is, the \hom algebra is just the field \mathbb{F} itself, and we need a homomorphism A\rightarrow\mathbb{F}. This is precisely what the counit \epsilon provides! I’ll leave it to you to verify that the left and right unit maps from vector spaces intertwine the relevant representations.
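As a quick sketch of the unit object in the group-algebra case (the second representation below is a hypothetical example):

```python
import numpy as np

# For a group algebra the counit epsilon(e_g) = 1 makes every e_g act on the
# one-dimensional space F as the identity, and tensoring with this trivial
# representation changes nothing.
unit_rep = lambda g: np.array([[1.0]])       # action through epsilon

rho = lambda g: np.array([[1.0, float(g)],   # a representation of the group (Z, +)
                          [0.0, 1.0]])

for g in range(-3, 4):
    # left unit law: F (x) V is identified with V, and the actions agree
    assert np.allclose(np.kron(unit_rep(g), rho(g)), rho(g))
```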

November 11, 2008 Posted by | Algebra, Category theory, Representation Theory | 5 Comments

Bialgebras

In yesterday’s post I used the group algebra \mathbb{F}[G] of a group G as an example of a coalgebra. In fact, more is true.

A bialgebra is a vector space A equipped with both the structure of an algebra and the structure of a coalgebra, such that these two structures are “compatible” in a certain sense. The traditional definitions usually consist in laying out the algebra maps and relations, then the coalgebra maps and relations. Then they state that the algebra structure preserves the coalgebra structure, and that the coalgebra structure preserves the algebra structure, noting that really you only need to require one of these last two conditions because they turn out to be equivalent.

In fact, our perspective allows this equivalence to come to the fore. The algebra structure makes the bialgebra a monoid object in the category of vector spaces over \mathbb{F}. Then a compatible coalgebra structure makes it a comonoid object in the category of algebras over \mathbb{F}. Or in the other order, we have a monoid object in the category of comonoid objects in the category of vector spaces over \mathbb{F}. And these describe essentially the same things because internalizations commute!

Okay, let’s be explicit about what we mean by “compatibility”. This just means that — on the one side — the coalgebra maps are not just linear maps between the underlying vector spaces, but actually are algebra homomorphisms. On the other side, it means that the algebra maps are actually coalgebra homomorphisms.

Multiplication and comultiplication being compatible actually mean the same thing. Take two algebra elements and multiply them, then comultiply the result. Alternatively, comultiply each of them, and then multiply corresponding factors of the result. We should get the same answer whether we multiply or comultiply first. That is: \Delta\circ\mu=(\mu\otimes\mu)\circ(1_A\otimes\tau_{A,A}\otimes1_A)\circ(\Delta\otimes\Delta), where \tau is the twist map, exchanging two factors.

Let’s check this condition for the group algebra \mathbb{F}[G]:

\begin{aligned}\left[\mu\otimes\mu\right]\left(\left[1_A\otimes\tau_{A,A}\otimes1_A\right]\left(\left[\Delta\otimes\Delta\right](e_g\otimes e_h)\right)\right)=\\\left[\mu\otimes\mu\right]\left(\left[1_A\otimes\tau_{A,A}\otimes1_A\right](e_g\otimes e_g\otimes e_h\otimes e_h)\right)=\\\left[\mu\otimes\mu\right](e_g\otimes e_h\otimes e_g\otimes e_h)=e_{gh}\otimes e_{gh}=\\\Delta(e_{gh})=\Delta\left(\mu(e_g\otimes e_h)\right)\end{aligned}

Similarly, if we multiply two algebra elements and then take the counit, it should be the same as the product (in \mathbb{F}) of the counits of the elements. Dually, comultiplying the algebra unit should give two copies of the algebra unit, \Delta(1)=1\otimes1, and the counit of the algebra unit should be the unit in \mathbb{F}. It’s straightforward to verify that these hold for \mathbb{F}[G].
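Here’s a numerical sketch of the first compatibility condition for \mathbb{F}[\mathbb{Z}/5], storing elements as coefficient vectors:

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
x, y = rng.standard_normal(n), rng.standard_normal(n)   # elements of F[Z/5]

def mu(a, b):
    # multiplication in F[Z/n] is convolution of coefficient vectors
    return np.array([sum(a[g] * b[(k - g) % n] for g in range(n))
                     for k in range(n)])

def delta(a):
    # Delta(e_g) = e_g (x) e_g, extended linearly: a diagonal matrix of coefficients
    return np.diag(a)

lhs = delta(mu(x, y))                        # Delta o mu

# (mu (x) mu) o (1 (x) tau (x) 1) o (Delta (x) Delta); the einsum below
# already incorporates the twist tau of the middle two tensor factors
T = np.einsum('ac,bd->abcd', delta(x), delta(y))
rhs = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                rhs[(a + b) % n, (c + d) % n] += T[a, b, c, d]

assert np.allclose(lhs, rhs)
```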

November 6, 2008 Posted by | Algebra, Category theory | 5 Comments

Coalgebras

Okay, back to business. We’re about to need a little more algebraic structure floating around. This is something that’s always present, but many approaches don’t explicitly mention it until much later. Since I’m taking a categorical view of things, it’s easier to show what’s really going on right away.

Remember that an \mathbb{F}-algebra is a monoid object in the category of vector spaces over \mathbb{F}. Dually, an \mathbb{F}-coalgebra is a comonoid object in the category of vector spaces over \mathbb{F}. That’s all well and good, but what’s a comonoid object? We’ve mentioned them before, but let’s be more explicit this time around.

Remember that a monoid object was a functor from a certain category we cooked up to mirror the axioms of a monoid. We gave the category objects M^{\otimes n}, one for each natural number, corresponding to lists of monoid elements. We have a map \mu:M\otimes M\rightarrow M corresponding to multiplication, and a map \iota:\mathbf{1}\rightarrow M picking out the unit in the monoid.

So a comonoid object will be a functor from the dual of this category! That is, we’ve still got all the same objects, but now we have a “comultiplication” arrow \Delta:C\rightarrow C\otimes C, and a “counit” arrow \epsilon:C\rightarrow\mathbf{1}.

Now, the model category describing monoid objects isn’t just objects and arrows. We also have the relations that make a monoid a monoid: the associative law \mu\circ(\mu\otimes1_M)=\mu\circ(1_M\otimes\mu), and the left and right unit laws \mu\circ(1_M\otimes\iota)=1_M=\mu\circ(\iota\otimes1_M).

Dually, we must have dual relations for comonoid objects. We have a coassociative law (\Delta\otimes1_C)\circ\Delta=(1_C\otimes\Delta)\circ\Delta, and left and right counit laws (1_C\otimes\epsilon)\circ\Delta=1_C=(\epsilon\otimes1_C)\circ\Delta.

We could write these down in terms of commuting diagrams, but it’s even more instructive to look at “string diagrams” like we did before. This makes the sense of what’s going on all the clearer.

So a coalgebra is a comonoid object in the category of vector spaces over \mathbb{F}. That is, it’s an \mathbb{F} vector space C, equipped with a linear comultiplication \Delta and a linear counit \epsilon, which satisfy the coassociative and counit laws. I’ll admit that this seems an extremely quirky structure to discuss, so an example is in order. The one we care most about right now is the group algebra. Yes, it turns out to also be a coalgebra!

To really wrap our heads around it, let’s start with a finite group G. Then we get a finite-dimensional vector space \mathbb{F}[G], with a basis e_g indexed by elements of G. Let’s forget, for the moment, that we have a multiplication and a unit. Instead, we define the comultiplication by \Delta(e_g)=e_g\otimes e_g for each basis element. We also define the counit by \epsilon(e_g)=1 for each element g\in G. Both of these maps extend by linearity.

Now, let’s check the coassociative property. It suffices to check it on basis elements, because the extensions by linearity have to agree. In this case we have

\begin{aligned}\left[\Delta\otimes1_{\mathbb{F}[G]}\right]\left(\Delta(e_g)\right)=\left[\Delta\otimes1_{\mathbb{F}[G]}\right](e_g\otimes e_g)=e_g\otimes e_g\otimes e_g\\=\left[1_{\mathbb{F}[G]}\otimes\Delta\right](e_g\otimes e_g)=\left[1_{\mathbb{F}[G]}\otimes\Delta\right]\left(\Delta(e_g)\right)\end{aligned}

Similarly, we can check the counit laws:

\begin{aligned}\left[1_{\mathbb{F}[G]}\otimes\epsilon\right]\left(\Delta(e_g)\right)=\left[1_{\mathbb{F}[G]}\otimes\epsilon\right](e_g\otimes e_g)=e_g\\=\left[\epsilon\otimes1_{\mathbb{F}[G]}\right](e_g\otimes e_g)=\left[\epsilon\otimes1_{\mathbb{F}[G]}\right]\left(\Delta(e_g)\right)\end{aligned}

Thus these maps do indeed describe the structure of a coalgebra.
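For good measure, here’s a numerical sketch of both checks for \mathbb{F}[\mathbb{Z}/4], storing elements as coefficient vectors:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
a = rng.standard_normal(n)            # a generic element of F[Z/4]

delta = lambda v: np.diag(v)          # Delta(e_g) = e_g (x) e_g, extended linearly

def delta_first(M):
    # apply Delta to the first tensor factor of an element of F[G] (x) F[G]
    T = np.zeros((n, n, n))
    for g in range(n):
        for h in range(n):
            T[g, g, h] = M[g, h]
    return T

def delta_second(M):
    # apply Delta to the second tensor factor
    T = np.zeros((n, n, n))
    for g in range(n):
        for h in range(n):
            T[g, h, h] = M[g, h]
    return T

# coassociativity: (Delta (x) 1) o Delta = (1 (x) Delta) o Delta
assert np.allclose(delta_first(delta(a)), delta_second(delta(a)))

# counit laws: contracting either factor of Delta(a) with the all-ones
# vector applies epsilon (since epsilon(e_g) = 1) and recovers a
assert np.allclose(delta(a) @ np.ones(n), a)
assert np.allclose(np.ones(n) @ delta(a), a)
```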

November 5, 2008 Posted by | Algebra, Category theory | 3 Comments

The Category of Representations

Now let’s narrow back in to representations of algebras, and the special case of representations of groups, but with an eye to the categorical interpretation. So, representations are functors. And this immediately leads us to the category of such functors. The objects, recall, are functors, while the morphisms are natural transformations. Now let’s consider what, exactly, a natural transformation consists of in this case.

Let’s say we have representations \rho:A\rightarrow\hom_\mathbb{F}(V,V) and \sigma:A\rightarrow\hom_\mathbb{F}(W,W). That is, we have functors \rho and \sigma with \rho(*)=V, \sigma(*)=W — where * is the single object of A, when it’s considered as a category — and the given actions on morphisms. We want to consider a natural transformation \phi:\rho\rightarrow\sigma.

Such a natural transformation consists of a list of morphisms indexed by the objects of the category A. But A has only one object: *. Thus we only have one morphism, \phi_*, which we will just call \phi.

Now we must impose the naturality condition. For each arrow a:*\rightarrow * in A we ask that the diagram

\displaystyle\begin{matrix}V&\xrightarrow{\phi}&W\\\downarrow^{\rho(a)}&&\downarrow^{\sigma(a)}\\V&\xrightarrow{\phi}&W\end{matrix}

commute. That is, we want \phi\circ\rho(a)=\sigma(a)\circ\phi for every algebra element a. We call such a transformation an “intertwiner” of the representations. These intertwiners are the morphisms in the category \mathbf{Rep}(A) of representations of A. If we want to be more particular about the base field, we might also write \mathbf{Rep}_\mathbb{F}(A).

Here’s another way of putting it. Think of \phi as a “translation” from V to W. If \phi is an isomorphism of vector spaces, for instance, it could be a change of basis. We want to take a transformation from the algebra A and apply it, and we also want to translate. We could first apply the transformation in V, using the representation \rho, and then translate to W. Or we could first translate from V to W and then apply the transformation, now using the representation \sigma. Our condition is that either order gives the same result, no matter which element of A we’re considering.
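Here’s a minimal sketch of an intertwiner, using two hypothetical representations of the group (\mathbb{Z},+) related by an explicit change of basis:

```python
import numpy as np

rho = lambda k: np.array([[1.0, float(k)],
                          [0.0, 1.0]])
phi = np.array([[2.0, 1.0],
                [0.0, 1.0]])                          # invertible "translation" V -> W
sigma = lambda k: phi @ rho(k) @ np.linalg.inv(phi)   # the conjugate representation

for k in range(-3, 4):
    # naturality square commutes: phi o rho(k) = sigma(k) o phi
    assert np.allclose(phi @ rho(k), sigma(k) @ phi)
```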

October 28, 2008 Posted by | Category theory, Group theory, Representation Theory, Ring theory | 8 Comments

Category Representations

We’ve seen how group representations are special kinds of algebra representations. But even more general than that is the representation of a category.

A group is a special monoid, within which each element is invertible. And a monoid is just a category with a single object. Similarly, an \mathbb{F}-algebra is just like a monoid but enriched over the category of vector spaces over \mathbb{F}. That is, it’s a one-object category with an \mathbb{F}-bilinear composition. It makes sense to regard both of these structures as categories of sorts. A representation will then be a functor from one of these categories.

The clear target category is \mathbf{Vect}_\mathbb{F}. So what’s a functor \rho from, say, a group G (considered as a category) to \mathbf{Vect}_\mathbb{F}? First the single object of the category G picks out some object V\in\mathbf{Vect}_\mathbb{F}. That is, V is a vector space over \mathbb{F}. Then for each arrow g in G — each group element — we have an arrow \rho(g)\in\hom_\mathbb{F}(V,V). Since g has to be invertible, this \rho(g) must be invertible — an element of \mathrm{GL}(V).

What about an algebra? Now our source category A and our target category \mathbf{Vect}_\mathbb{F} are both enriched over \mathbf{Vect}_\mathbb{F}. It only makes sense, then, for us to consider \mathbb{F}-linear functors. Such a functor F again picks out a single vector space V for the single object of A (considered as a category). Every arrow a in A gets sent to an arrow F(a)\in\hom_\mathbb{F}(V,V). This mapping is linear over the field \mathbb{F}.

So what do category representations get us? Well, one thing is this: consider a combinatorial graph — a collection of “vertices” with some directed “edges” joining them. A path in the graph is a sequence of directed edges joined tip-to-tail, and the collection of all paths in the graph constitutes the “path category” of the graph (exercise: identify the identity paths). A representation of this path category is what mathematicians call a “quiver representation”, and they’re big business.
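As a sketch, here’s a representation of the path category of the hypothetical graph 1 \rightarrow 2 \rightarrow 3, with matrices chosen arbitrarily:

```python
import numpy as np

# Quiver representation of the graph  1 --e--> 2 --f--> 3:
# a vector space for each vertex, a linear map for each edge,
# and composition of paths becomes matrix multiplication.
dims = {1: 2, 2: 3, 3: 2}                # dimensions at the three vertices

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])               # edge e: F^2 -> F^3
F_map = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])      # edge f: F^3 -> F^2

path_fe = F_map @ E                      # the composite path f after e
assert np.allclose(path_fe @ np.eye(dims[1]), path_fe)   # identity paths act trivially
```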

More interesting to me is this: the category \mathcal{T}ang of tangles (or \mathcal{OT}ang of oriented tangles, \mathcal{F}r\mathcal{T}ang of framed tangles, or \mathcal{F}r\mathcal{OT}ang of framed, oriented tangles). This is a monoidal category with duals, as is \mathbf{Vect}_\mathbb{F}, and so it only makes sense to ask that our functors respect those structures as well. We don’t ask that it send the braiding to the symmetry on \mathbf{Vect}_\mathbb{F}, since that would trivialize the structure.

So what is a representation of the category \mathcal{T}ang? It is my contention that this is nothing but a knot invariant, viewed in a more natural habitat. A little more generally, knot invariants are the restrictions to knots (and links) of functors defined on the category of tangles, which can often (always?) be decategorified — or otherwise rendered down — into representations of \mathcal{T}ang. This is my work: to translate existing knot theoretical ideas into this algebraic language, where I believe they find a better home.

October 27, 2008 Posted by | Algebra, Category theory, Linear Algebra, Representation Theory | 7 Comments

The Sum of Subspaces

We know what the direct sum of two vector spaces is. That we define abstractly and without reference to the internal structure of each space. It’s sort of like the disjoint union of sets, and in fact the basis for a direct sum is the disjoint union of bases for the summands.

Let’s use universal properties to prove this! We consider the direct sum V\oplus W, and we have a basis A for V and a basis B for W. But remember that the whole point of a basis is that vector spaces are free modules.

That is, there is a forgetful functor from \mathbf{Vec}(\mathbb{F}) to \mathbf{Set}, sending a vector space to its underlying set. This functor has a left adjoint which assigns to any set S the vector space \mathbb{F}\left[S\right] of formal linear combinations of elements of S. This is the free vector space on the basis S, and when we choose the basis A for a vector space V we are actually choosing an isomorphism V\cong\mathbb{F}\left[A\right].

Okay. So we’re really considering the direct sum \mathbb{F}\left[A\right]\oplus\mathbb{F}\left[B\right], and we’re asserting that it is isomorphic to \mathbb{F}\left[A\uplus B\right]. But we just said that constructing a free vector space is a functor, and this functor has a right adjoint. And we know that any functor that has a right adjoint preserves colimits! The disjoint union of sets is a coproduct, and the direct sum of vector spaces is a biproduct, which means it’s also a coproduct. Thus we have our isomorphism. Neat!

But not all unions of sets are disjoint. Sometimes the sets share elements, and the easiest way for this to happen is for them to both be subsets of some larger set. Then the union of the two subsets has to take this overlap into account. And since subspaces of a larger vector space may intersect nontrivially, their sum as subspaces has to take this into account.

First, here’s a definition in terms of the vectors themselves: given two subspaces V and W of a larger vector space U, the sum V+W will be the subspace consisting of all vectors that can be written in the form v+w for v\in V and w\in W. Notice that there’s no uniqueness requirement here, and that’s because if V and W overlap in anything other than the trivial subspace \left\{0\right\} we can add a vector in that overlap to v and subtract it from w, getting a different decomposition. This is precisely the situation a direct sum avoids.
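Computationally, we can represent a subspace by a matrix whose columns span it; then the sum is spanned by all the columns together. A sketch with hypothetical subspaces of \mathbb{F}^3:

```python
import numpy as np

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])               # span{e1, e2}
W = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])               # span{e2, e3}

rank = np.linalg.matrix_rank
sum_VW = np.hstack([V, W])               # columns spanning V + W

assert rank(V) == 2 and rank(W) == 2
assert rank(sum_VW) == 3                 # V + W is all of F^3
# dim(V meet W) = dim V + dim W - dim(V + W) = 2 + 2 - 3 = 1 (the line through e2),
# which is exactly the non-uniqueness in writing a vector as v + w.
```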

Alternatively, let’s consider the collection of all subspaces of U. This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power set of a set, except that only those subsets of U which are subspaces get included.

Now it turns out that, like the power set, this poset is actually a lattice. The meet is the intersection of subspaces, but the join isn’t their union. Indeed, the union of subspaces usually isn’t a subspace at all! What do we use instead? The sum, of course! It’s easiest to verify this with the algebraic definition of a lattice.

The lattice does have a top element (the whole space U) and a bottom element (the trivial subspace \left\{0\right\}). It’s even modular! Indeed, let X, Y, and Z be subspaces with X\subseteq Z. Then on the one hand we consider X+(Y\cap Z), which is the collection of all vectors u=x+y, where x\in X, y\in Y, and y\in Z. On the other hand we consider (X+Y)\cap Z, which is the collection of all vectors u=x+y, where x\in X, y\in Y, and u\in Z. I’ll leave it to you to show how these two conditions are equivalent.

Unfortunately, the lattice isn’t distributive. I could work this out directly, but it’s easier to just notice that complements aren’t unique. Just consider three subspaces of \mathbb{F}^2: X has all vectors of the form \begin{pmatrix}x\\{0}\end{pmatrix}, Y has all of the form \begin{pmatrix}y\\y\end{pmatrix}, and Z has all of the form \begin{pmatrix}0\\z\end{pmatrix}. Then X+Y=\mathbb{F}^2=X+Z, and X\cap Y=\left\{0\right\}=X\cap Z, but Y\neq Z.
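Checking that example numerically:

```python
import numpy as np

# Three distinct lines in F^2, any two of which meet trivially
# and sum to the whole plane: X has two different complements.
rank = np.linalg.matrix_rank
X = np.array([[1.0], [0.0]])
Y = np.array([[1.0], [1.0]])
Z = np.array([[0.0], [1.0]])

assert rank(np.hstack([X, Y])) == 2 == rank(np.hstack([X, Z]))   # X + Y = F^2 = X + Z
assert rank(np.hstack([X, Y])) == rank(X) + rank(Y)              # X meet Y = {0}
assert rank(np.hstack([X, Z])) == rank(X) + rank(Z)              # X meet Z = {0}
assert not np.allclose(Y, Z)                                     # and yet Y != Z
```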

This is all well and good, but it’s starting to encroach on Todd’s turf, so I’ll back off a bit. The important bit here is that the sum behaves like a least-upper-bound.

In categorical terms, this means that it’s a coproduct in the lattice of subspaces (considered as a category). Don’t get confused here! Direct sums are coproducts in the category \mathbf{Vec}(\mathbb{F}), while sums are coproducts in the category (lattice) of subspaces of a given vector space. These are completely different categories, so don’t go confusing coproducts in one with those in the other.

In this case, all we mean by saying this is a categorical coproduct is that we have a description of the sum of two subspaces which doesn’t refer to the elements of the subspaces at all. The sum V+W is the smallest subspace of U which contains both V and W. It is the “smallest” in the sense that any other subspace containing both V and W must contain V+W. This description from the outside of the subspaces will be useful when we don’t want to get our hands dirty with actual vectors.

July 21, 2008 Posted by | Algebra, Category theory, Linear Algebra | 6 Comments

More on the C-G Eversion

Some people had trouble grabbing the whole 50MB file that I posted, so Scott Carter broke it into pieces. He also included these comments:

The red, blue, and purple curves on the large (distorted) spherical objects at the bottom of each page of the eversion are the preimages of the folds (color coded, of course) and the double decker sets. Since the sphere is immersed at each time, it may have double and triple points. Each arc of double points lifts to a pair of arcs on the ambient sphere, and each triple point lifts to three points on the ambient sphere. These lifts are the “decker sets.”

They are obtained via Gauss-Morse codes. Pick a base point and orientation on each curve in a movie. These are chosen consistently from one still to the next. Label the double points and the optima and read the labels as they are encountered upon a single journey around the curve. The labels, too, are chosen consistently from one still to the next. Write these down for each curve in a movie, and connect the letters in the words as the curves change according to the basic changes that occur in each of the movie scenes.

These curves then are instructions on how to immerse the ambient sphere to create the illustrations.

Sarah’s thesis computes that the fold set is an annulus, the double point set is the connected sum of three projective planes, and the double decker set is the connected orientation double cover: a genus 2 surface.

So here are the pieces:

  1. Immersed spheres as movies (2.2 MB)
  2. The basic movie moves (3.4 MB)
  3. The eversion from the red side to the quadruple point (19 MB)
  4. Half of the eversion from the quadruple point halfway to the blue side (24 MB)
  5. The other half of the eversion from the quadruple point halfway to the blue side (17 MB)

There’s a glitch in part 4, so I’ll post that as soon as I can.

July 10, 2008 Posted by | Category theory, Knot theory, Topology | 1 Comment

The Carter-Gelsinger Eversion

I’ve mentioned Outside In before. That video shows a way of turning a sphere inside out. It’s simpler than the first explicit eversions to be discovered, but the simplicity is connected to a high degree of symmetry. This leads to very congested parts of the movie, where it’s very difficult to see what’s going on. Further, many quadruple points — where four sections of the surface pass through the same point — occur simultaneously, and even higher degree points occur. We need a simpler version.

What would constitute “simple” for us, then? We want as few multiple points as possible, and as few at a time as possible. In fact, it would be really nice if we could write it down algebraically, in some sense. But what sense?

Go back to the diagrammatics of braided monoidal categories with duals. There we could draw knots and links to represent morphisms from the monoidal identity object to itself. And topologically deformed versions of the same knot encoded the same morphism. This is the basic idea of the category \mathcal{T}ang of tangles.

But if we shift our perspective a bit, we consider the 2-category of tangles. Instead of saying that deformations are “the same” tangle, we consider explicit 2-isomorphisms between tangles. We’ve got basic 2-isomorphisms for each of the Reidemeister moves, and a couple to create or cancel caps and cups in pairs (duality) and to pull crossings past caps or cups (naturality). Just like we can write out any link diagram in terms of a small finite collection of basic tangles, we can write out any link diagram isotopy in terms of a small finite collection of basic moves.

What does a link diagram isotopy describe? Links (in our picture) are described by collections of points moving around in the plane. As we stack up pictures of these planes the points trace out a link. So now we’ve got links moving around in space. As we stack up pictures of these spaces, the links trace out linked surfaces in four-dimensional space. And we can describe any such surface in terms of a small collection of basic 2-morphisms in the braided monoidal 2-category of 2-tangles. These are analogous to the basic cups, caps, and crossings for tangles.

Of course the natural next step is to consider how to deform 2-tangles into each other. And we again have a small collection of basic 3-morphisms that can be used to describe any morphisms of 2-tangles. These are analogous to the Reidemeister moves. Any deformation of a surface (which is written in terms of the basic 2-morphisms) can be written out in terms of these basic 3-morphisms.

We can simplify our picture a bit. Instead of knotting surfaces in four-dimensional space, let’s just let them intersect each other in three-dimensional space. To do this, we need to use a symmetric monoidal 3-category with duals, since there’s no distinction between two types of crossings.

And now we come back to eversions. We write the sphere as a 2-dimensional cup followed by a 2-dimensional cap. Since we have duals, we can consider one side to be “painted red” and one side “painted blue”. One way of writing the sphere has the outside painted red; the other has the outside painted blue. An eversion in our language will be an explicit list of 3-morphisms that runs from one of these spheres to the other.

Scott Carter and Sarah Gelsinger have now created just such an explicit list of directions to evert a sphere. And, what’s more, they’ve rendered pictures of it! Here, for the first time in public, is a 50MB PDF file showing the Carter-Gelsinger eversion.

First they illustrate the basic pieces of a diagram of knotted surfaces (pp. 1-4). Then they illustrate the basic 2-morphisms that build up surfaces (pp. 5-6), and write out a torus as an example (p. 7). Then come a few more basic 2-morphisms that involve self-intersections (pp. 8-9) and a more complicated immersed sphere (pp. 10-11). Each of these is written out also as a “movie” of self-intersecting loops in the plane. Next come the “movie moves” — the 3-morphisms connecting the 2-morphism “movies” (pp. 12-17). These are the basic pieces that let us move from one immersed surface to another.

Finally, the eversion itself, consisting of the next 79 pages. Each one consists of an immersed sphere, rendered in a number of different ways. On the left is a movie of immersed plane curves. On the top are three views of the sphere as a whole — a “solid” view on the right, a sketch of the double-point curves in the middle, and a “see-through” view on the left. The largest picture on each page is a more schematic view I don’t want to say too much about.

The important thing to see here is that between each two frames of this movie is exactly one movie move. Everything here is rendered into pictures, but we could write out the movie on each page as a sequence of 2-morphisms from the top of the page to the bottom. Then moving from one page to the next we trace out a sequence of 3-morphisms, writing out the eversion explicitly in terms of the basic 3-morphisms. As an added bonus, there’s only ever one quadruple point — where we pass from Red 26 to Blue 53 — and no higher degree points.

I’d like to thank Scott for not only finishing off this rendering he’s been promising for ages, but for allowing me to host its premiere weblog appearance. I, for one, am looking forward to the book, although I’m not sure this one will be better than the movie.

[UPDATE] Some people have been having trouble with the whole 50MB PDF (and more people might as the Carnival comes to see this page). Scott Carter broke the file up into five pieces, and I’ve put them up here in a new post. There’s a glitch in part 4, but I’ll have that one up as soon as I can.

July 6, 2008 Posted by | Category theory, Knot theory, Topology | 7 Comments

The Splitting Lemma

Evidently I never did this one when I was talking about abelian categories. Looks like I have to go back and patch this now.

We start with a short exact sequence:

\mathbf{0}\rightarrow A\xrightarrow{f}B\xrightarrow{g}C\rightarrow\mathbf{0}

A large class of examples of such sequences are provided by the split-exact sequences:

\mathbf{0}\rightarrow A\rightarrow A\oplus C\rightarrow C\rightarrow\mathbf{0}

where these arrows are those from the definition of the biproduct. But in this case we’ve also got other arrows: A\oplus C\rightarrow A and C\rightarrow A\oplus C that satisfy certain relations.

The lemma says that we can go the other direction too. If we have one arrow h:B\rightarrow A so that h\circ f=1_A then everything else falls into place, and B\cong A\oplus C. Similarly, a single arrow h:C\rightarrow B so that g\circ h=1_C will “split” the sequence. We’ll just prove the first one, since the second goes more or less the same way.

Just like with diagram chases, we’re going to talk about “elements” of objects as if the objects are abelian groups. Of course, we don’t really mean “elements”, but the exact same semantic switch works here.

So let’s consider an element b\in B and write it as (b-f(h(b)))+f(h(b)). Clearly f(h(b)) lands in \mathrm{Im}(f). We can also check

h(b-f(h(b)))=h(b)-h(f(h(b)))=h(b)-h(b)=0

so b-f(h(b))\in\mathrm{Ker}(h). That is, any element of B can be written as the sum of an element of \mathrm{Im}(f) and an element of \mathrm{Ker}(h). But these two intersect trivially. That is, if b=f(a) and h(b)=0 then 0=h(f(a))=a, and so b=0. This shows that B\cong\mathrm{Ker}(h)\oplus\mathrm{Im}(f). Thus we can write every b uniquely as b=f(a)+k.

Now consider an element c\in C. By exactness at C, there must be some b\in B so that c=g(b)=g(f(a)+k)=g(f(a))+g(k)=g(k), where g(f(a))=0 because g\circ f=0 by exactness at B. That is, we have a unique k\in\mathrm{Ker}(h) with g(k)=c; it is unique because two such would differ by an element of \mathrm{Ker}(g)\cap\mathrm{Ker}(h)=\mathrm{Im}(f)\cap\mathrm{Ker}(h)=\left\{0\right\}. This shows that C\cong\mathrm{Ker}(h). It’s straightforward to show that also A\cong\mathrm{Im}(f). Thus we have split the sequence: B\cong A\oplus C.
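Here’s a linear-algebra sketch of the whole argument, with hypothetical matrices:

```python
import numpy as np

# The sequence 0 -> F --f--> F^2 --g--> F -> 0 with a retraction h, h o f = 1.
f = np.array([[1.0], [0.0]])       # f: F -> F^2
g = np.array([[0.0, 1.0]])         # g: F^2 -> F, with g o f = 0 (exactness)
h = np.array([[1.0, 2.0]])         # a retraction: h o f = 1

assert np.allclose(h @ f, [[1.0]]) and np.allclose(g @ f, [[0.0]])

rng = np.random.default_rng(3)
b = rng.standard_normal((2, 1))
img_part = f @ (h @ b)             # the piece f(h(b)) in Im(f)
ker_part = b - img_part            # the piece b - f(h(b)) in Ker(h)

assert np.allclose(h @ ker_part, 0)          # it really lies in Ker(h)
assert np.allclose(img_part + ker_part, b)   # and the two pieces recover b
assert np.allclose(g @ b, g @ ker_part)      # g factors through Ker(h), matching C
```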

June 25, 2008 Posted by | Category theory | 6 Comments
