# The Unapologetic Mathematician

## Mathematics for the interested outsider

Today I’d like to show that the space $\hom_G(V,W)$ of homomorphisms between two $G$-modules is “additive”. That is, it satisfies the isomorphisms:

$\displaystyle\begin{aligned}\hom_G(V_1\oplus V_2,W)&\cong\hom_G(V_1,W)\oplus\hom_G(V_2,W)\\\hom_G(V,W_1\oplus W_2)&\cong\hom_G(V,W_1)\oplus\hom_G(V,W_2)\end{aligned}$

We should be careful here: the direct sums inside the $\hom$ are direct sums of $G$-modules, while those outside are direct sums of vector spaces.

The second of these is actually the easier one. If $f:V\to W_1\oplus W_2$ is a $G$-morphism, then we can write it as $f=(f_1,f_2)$, where $f_1:V\to W_1$ and $f_2:V\to W_2$. Indeed, just take the projection $\pi_i:W_1\oplus W_2\to W_i$ and compose it with $f$ to get $f_i=\pi_i\circ f$. These projections are also $G$-morphisms, since $W_1$ and $W_2$ are $G$-submodules. Since every $f$ can be uniquely decomposed, we get a linear map $\hom_G(V,W_1\oplus W_2)\to\hom_G(V,W_1)\oplus\hom_G(V,W_2)$.

Then the general rules of direct sums tell us we can inject $W_1$ and $W_2$ back into $W_1\oplus W_2$, and write

$\displaystyle f=I_{W_1\oplus W_2}\circ f=(\iota_1\circ\pi_1+\iota_2\circ\pi_2)\circ f=\iota_1\circ f_1+\iota_2\circ f_2$

Thus given any $G$-morphisms $f_1:V\to W_1$ and $f_2:V\to W_2$ we can reconstruct an $f:V\to W_1\oplus W_2$. This gives us a map in the other direction — $\hom_G(V,W_1)\oplus\hom_G(V,W_2)\to\hom_G(V,W_1\oplus W_2)$ — which is clearly the inverse of the first one, and thus establishes our isomorphism.
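If it helps to see the bookkeeping concretely, here is a small numerical sketch (the dimensions and the `numpy` setup are mine, and the $G$-action is ignored since it plays no role in the matrix arithmetic): a linear map into a direct sum is a tall block matrix, and the projections and injections split it apart and reassemble it.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_V, dim_W1, dim_W2 = 3, 2, 4

# a linear map f : V -> W1 (+) W2 is just a tall block matrix
f = rng.standard_normal((dim_W1 + dim_W2, dim_V))

# projections pi_i and injections iota_i, as block matrices
pi1 = np.hstack([np.eye(dim_W1), np.zeros((dim_W1, dim_W2))])
pi2 = np.hstack([np.zeros((dim_W2, dim_W1)), np.eye(dim_W2)])
iota1, iota2 = pi1.T, pi2.T

# split f into its components, then reassemble: iota1 f1 + iota2 f2 = f
f1, f2 = pi1 @ f, pi2 @ f
f_rebuilt = iota1 @ f1 + iota2 @ f2
assert np.allclose(f_rebuilt, f)
```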

Now that we’ve established the second isomorphism, the first becomes clearer. Given a $G$-morphism $h:V_1\oplus V_2\to W$ we need to find morphisms $h_i:V_i\to W$. Before we composed with projections, so this time let’s compose with injections! Indeed, $\iota_i:V_i\to V_1\oplus V_2$ composes with $h$ to give $h_i=h\circ\iota_i:V_i\to W$. On the other hand, given morphisms $h_i:V_i\to W$, we can use the projections $\pi_i:V_1\oplus V_2\to V_i$ and compose them with the $h_i$ to get two morphisms $h_i\circ\pi_i:V_1\oplus V_2\to W$. Adding them together gives a single morphism, and if the $h_i$ came from an $h$, then this reconstructs the original. Indeed:

$\displaystyle h_1\circ\pi_1+h_2\circ\pi_2=h\circ\iota_1\circ\pi_1+h\circ\iota_2\circ\pi_2=h\circ(\iota_1\circ\pi_1+\iota_2\circ\pi_2)=h\circ I_{V_1\oplus V_2}=h$

And so the first isomorphism holds as well.

We should note that these are not just isomorphisms, but “natural” isomorphisms. That the construction $\hom_G(\underline{\hphantom{X}},\underline{\hphantom{X}})$ is a functor is clear, and it’s straightforward to verify that these isomorphisms are natural for those who are interested in the category-theoretic details.

October 11, 2010

## Schur’s Lemma

Now that we know that images and kernels of $G$-morphisms between $G$-modules are $G$-modules as well, we can bring in a very general result.

Remember that we call a $G$-module irreducible or “simple” if it has no nontrivial submodules. In general, an object in any category is simple if it has no nontrivial subobjects. If a morphism in a category has a kernel and an image — as we’ve seen all $G$-morphisms do — then these are subobjects of the source and target objects.

So now we have everything we need to state and prove Schur’s lemma. Working in a category where every morphism has both a kernel and an image, if $f:V\to W$ is a morphism between two simple objects, then either $f$ is an isomorphism or it’s the zero morphism from $V$ to $W$. Indeed, since $V$ is simple it has no nontrivial subobjects. The kernel of $f$ is a subobject of $V$, so it must either be $V$ itself, or the zero object. Similarly, the image of $f$ must either be $W$ itself or the zero object. If either $\mathrm{Ker}(f)=V$ or $\mathrm{Im}(f)=\mathbf{0}$ then $f$ is the zero morphism. On the other hand, if $\mathrm{Ker}(f)=\mathbf{0}$ and $\mathrm{Im}(f)=W$ we have an isomorphism.

To see how this works in the case of $G$-modules, every time I say “object” in the preceding paragraph replace it by “$G$-module”. Morphisms are $G$-morphisms, the zero morphism is the linear map sending every vector to $0$, and the zero object is the trivial vector space $\mathbf{0}$. If it feels more comfortable, walk through the preceding proof making the required substitutions to see how it works for $G$-modules.

In terms of matrix representations, let’s say $X$ and $Y$ are two irreducible matrix representations of $G$, and let $T$ be any matrix so that $TX(g)=Y(g)T$ for all $g\in G$. Then Schur’s lemma tells us that either $T$ is invertible — it’s the matrix of an isomorphism — or it’s the zero matrix.
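To watch Schur's lemma in action numerically, here is a sketch using the two-dimensional standard representation and the sign representation of $S_3$ (the choice of group, and the averaging trick for producing intertwiners, are mine and not part of the argument above): averaging over the group always yields a matrix $T$ with $TX(g)=Y(g)T$, and Schur's lemma predicts it is scalar or zero.

```python
import numpy as np

# The standard 2-dimensional representation of S3: a 120-degree rotation
# and a reflection generate the six matrices.
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
X = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]
sign = [1, 1, 1, -1, -1, -1]  # the sign representation

rng = np.random.default_rng(1)

# Averaging any matrix over the group yields an intertwiner T X(g) = Y(g) T.
# Standard rep vs itself: Schur forces the result to be a scalar matrix.
A = rng.standard_normal((2, 2))
T_self = sum(x @ A @ np.linalg.inv(x) for x in X) / 6
assert np.allclose(T_self, T_self[0, 0] * np.eye(2))

# Sign rep vs standard rep: non-isomorphic irreducibles, so T must vanish.
B = rng.standard_normal((1, 2))
T_cross = sum(sg * (B @ np.linalg.inv(x)) for sg, x in zip(sign, X)) / 6
assert np.allclose(T_cross, 0)
```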

September 30, 2010

## Stone Spaces

The Stone space functor we’ve been working with sends Boolean algebras to topological spaces. Specifically, it sends them to compact Hausdorff spaces. There’s another functor floating around, of course, though it might not be the one you expect.

The clue is in our extended result. Given a topological space $X$ we define $S(X)$ to be the Boolean algebra of all clopen subsets. This functor is contravariant — given a continuous map $f:X\to Y$, we get a homomorphism of Boolean algebras $S(f)$ sending the clopen set $Z\subseteq Y$ to its preimage $f^{-1}(Z)\subseteq X$. It’s straightforward to see that this preimage is clopen. Perhaps confusingly, this is known as the “Stone functor”, not to be confused with the Stone space functor $S(\mathcal{B})$.

So what happens when we put these two functors together? If we start with a Boolean algebra $\mathcal{B}$ and build its Stone space $S(\mathcal{B})$, then the Stone functor applied to this space gives us a Boolean algebra $S(S(\mathcal{B}))$. This is, by construction, isomorphic to $\mathcal{B}$ itself. Thus the category $\mathbf{Bool}$ is contravariantly equivalent to some subcategory $\mathbf{Stone}$ of $\mathbf{CHaus}$. But which compact Hausdorff spaces arise as the Stone spaces of Boolean algebras?

Look at the other composite; starting with a topological space $X$, we find the Boolean algebra $S(X)$ of its clopen subsets, and then the Stone space $S(S(X))$ of this Boolean algebra. We also get a function $X\to S(S(X))$. For each point $x\in X$ we define the Boolean algebra homomorphism $\lambda_x:S(X)\to\mathcal{B}_0$ that sends a clopen set $C\subseteq X$ to $1$ if and only if $x\in C$. We can see that this is a continuous map by checking that the preimage of any basic set is open. Indeed, a basic set of $S(S(X))$ is $s(C)$ for some clopen set $C\subseteq X$. That is, $\{\lambda\in S(S(X))\vert\lambda(C)=1\}$. Which functions of the form $\lambda_x$ are in $s(C)$? Exactly those for which $x\in C$. Since $C$ is clopen, this preimage is open.

Two points $x_1$ and $x_2$ are sent to the same function $\lambda_{x_1}=\lambda_{x_2}$ if and only if every clopen set containing $x_1$ also contains $x_2$, and vice versa. That is, $x_1$ and $x_2$ must be in the same connected component. Indeed, if they were in different connected components, then there would be some clopen set containing one but not the other. Conversely, if there is a clopen set that contains one but not the other, they can’t be in the same connected component. Thus this map $X\to S(S(X))$ collapses all the connected components of $X$ into points of $S(S(X))$.

If this map $X\to S(S(X))$ is a homeomorphism, then no two points of $X$ are in the same connected component. Thus each singleton $\{x\}\subseteq X$ is a connected component, and we call the space “totally disconnected”. Clearly, such a space is in the image of the Stone space functor. On the other hand, if $X=S(\mathcal{B})$, then $S(S(X))=S(S(S(\mathcal{B})))\cong S(\mathcal{B})=X$, and so this is both a necessary and a sufficient condition. Thus the “Stone spaces” form the full subcategory of $\mathbf{CHaus}$ consisting of the totally disconnected compact Hausdorff spaces. Stone’s representation theorem shows us that this category is equivalent to the dual of the category of Boolean algebras.
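For a finite Boolean algebra all of this collapses to something we can check by brute force: the Stone space of the power set of a three-element set should have exactly three points, one ultrafilter (two-valued homomorphism) per element. A small sketch, with an encoding as Python dictionaries that is entirely my own:

```python
from itertools import combinations, product

points = (0, 1, 2)
subsets = [frozenset(c) for r in range(4) for c in combinations(points, r)]
full = frozenset(points)

def is_hom(lam):
    # check that lam : subsets -> {0,1} preserves 0, 1, meet, join, complement
    if lam[frozenset()] != 0 or lam[full] != 1:
        return False
    for a in subsets:
        if lam[full - a] != 1 - lam[a]:
            return False
        for b in subsets:
            if lam[a & b] != lam[a] & lam[b] or lam[a | b] != lam[a] | lam[b]:
                return False
    return True

# enumerate all 2^8 candidate functions and keep the homomorphisms
homs = [lam for values in product((0, 1), repeat=len(subsets))
        if is_hom(lam := dict(zip(subsets, values)))]
assert len(homs) == 3  # exactly one point of the Stone space per atom
```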

As a side note: I’d intended to cover the Stone-Čech compactification, but none of the references I have at hand actually cover the details. There’s a certain level below which everyone seems to simply assert certain facts and take them as given, and I can’t seem to reconstruct them myself.

August 23, 2010

## Coproduct Root Systems

We should also note that the category of root systems has binary (and thus finite) coproducts. Both the coproduct and the candidate product we’ll consider below start the same way: given root systems $\Phi$ and $\Phi'$ in inner-product spaces $V$ and $V'$, we take the direct sum $V\oplus V'$ of the vector spaces, which makes vectors from each vector space orthogonal to vectors from the other one.

The coproduct root system $\Phi\amalg\Phi'$ consists of the vectors of the form $(\alpha,0)$ for $\alpha\in\Phi$ and $(0,\alpha')$ for $\alpha'\in\Phi'$. Indeed, this collection is finite, spans $V\oplus V'$, and does not contain $(0,0)$. The only multiples of any given vector in $\Phi\amalg\Phi'$ are that vector and its negative. The reflection $\sigma_{(\alpha,0)}$ sends vectors coming from $\Phi$ to each other, and leaves vectors coming from $\Phi'$ fixed, and similarly for the reflection $\sigma_{(0,\alpha')}$. Finally,

$\displaystyle\begin{aligned}(\beta,0)\rtimes(\alpha,0)=\beta\rtimes\alpha&\in\mathbb{Z}\\(0,\beta')\rtimes(0,\alpha')=\beta'\rtimes\alpha'&\in\mathbb{Z}\\(\beta,0)\rtimes(0,\alpha')=(0,\beta')\rtimes(\alpha,0)=0&\in\mathbb{Z}\end{aligned}$

All this goes to show that $\Phi\amalg\Phi'$ actually is a root system. As a set, it’s the disjoint union of the two sets of roots.

As a coproduct, we do have the inclusion morphisms $\iota_1:\Phi\rightarrow\Phi\amalg\Phi'$ and $\iota_2:\Phi'\rightarrow\Phi\amalg\Phi'$, which are inherited from the direct sum of $V$ and $V'$. This satisfies the universal condition of a coproduct, since the direct sum does. Indeed, if $\Psi\subseteq U$ is another root system, and if $f:V\rightarrow U$ and $f':V'\rightarrow U$ are linear transformations sending $\Phi$ and $\Phi'$ into $\Psi$, respectively, then $(a,b)\mapsto f(a)+f'(b)$ sends $\Phi\amalg\Phi'$ into $\Psi$, and is the unique such transformation compatible with the inclusions.

Interestingly, the Weyl group of the coproduct is the product $\mathcal{W}\times\mathcal{W}'$ of the Weyl groups. Indeed, every generator $\sigma_\alpha$ of $\mathcal{W}$ gives a generator $\sigma_{(\alpha,0)}$, and every generator $\sigma_{\alpha'}$ of $\mathcal{W}'$ gives a generator $\sigma_{(0,\alpha')}$. And the two families of generators commute with each other, because each one only acts on the one summand.
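Here is a quick computational check in the smallest case (the matrices and the closure loop are my own setup, not the post's): for $A_1\amalg A_1$ the two simple reflections act on separate coordinates, commute, and generate a group of order $2\times2=4$.

```python
import numpy as np

s1 = np.diag([-1.0, 1.0])  # reflection through the root (1, 0)
s2 = np.diag([1.0, -1.0])  # reflection through the root (0, 1)
assert np.allclose(s1 @ s2, s2 @ s1)  # generators from different summands commute

# close up under multiplication to find the whole Weyl group
elems = [np.eye(2)]
frontier = [np.eye(2)]
while frontier:
    g = frontier.pop()
    for h in (s1, s2):
        gh = g @ h
        if not any(np.allclose(gh, e) for e in elems):
            elems.append(gh)
            frontier.append(gh)

assert len(elems) == 4  # |Z/2 x Z/2|
```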

On the other hand, there are no product root systems in general! There is only one natural candidate for $\Phi\times\Phi'$ that would be compatible with the projections $\pi_1:V\oplus V'\rightarrow V$ and $\pi_2:V\oplus V'\rightarrow V'$. It’s made up of the points $(\alpha,\alpha')$ for $\alpha\in\Phi$ and $\alpha'\in\Phi'$. But now we must consider how the projections interact with reflections, and it isn’t very pretty.

The projections should act as intertwinors. Specifically, we should have

$\displaystyle\pi_1(\sigma_{(\alpha,\alpha')}(\beta,\beta'))=\sigma_{\pi_1(\alpha,\alpha')}(\pi_1(\beta,\beta'))=\sigma_\alpha(\beta)$

and similarly for the other projection. In other words

$\displaystyle\sigma_{(\alpha,\alpha')}(\beta,\beta')=(\sigma_\alpha(\beta),\sigma_{\alpha'}(\beta'))$

But this isn’t a reflection! Indeed, each reflection has determinant $-1$, and this is the composition of two reflections (one for each component) so it has determinant $1$. Thus it cannot be a reflection, and everything comes crashing down.
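The determinant computation is easy to verify with explicit matrices (these particular reflections are just examples I picked):

```python
import numpy as np

sigma_alpha = np.array([[-1.0, 0.0], [0.0, 1.0]])   # a reflection, det -1
sigma_alpha_p = np.array([[0.0, 1.0], [1.0, 0.0]])  # another reflection, det -1

# the candidate "reflection" for the product acts block-diagonally,
# reflecting in both summands at once
block = np.block([[sigma_alpha, np.zeros((2, 2))],
                  [np.zeros((2, 2)), sigma_alpha_p]])

assert np.isclose(np.linalg.det(sigma_alpha), -1.0)
assert np.isclose(np.linalg.det(sigma_alpha_p), -1.0)
assert np.isclose(np.linalg.det(block), 1.0)  # a rotation, not a reflection
```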

That all said, the Weyl group of the coproduct root system is the product of the two Weyl groups, and many people are mostly concerned with the Weyl group of symmetries anyway. And besides, the direct sum is just as much a product as it is a coproduct. And so people will often write $\Phi\times\Phi'$ even though it’s really not a product. I won’t write it like that here, but be warned that that notation is out there, lurking.

January 25, 2010

## Functoriality of Tensor Algebras

The three constructions we’ve just shown — the tensor, symmetric tensor, and exterior algebras — were all asserted to be the “free” constructions. This makes them functors from the category of vector spaces over $\mathbb{F}$ to appropriate categories of $\mathbb{F}$-algebras, and that means that they behave very nicely as we transform vector spaces, and we can even describe exactly how nicely with explicit algebra homomorphisms. I’ll work through this for the exterior algebra, since that’s the one I’m most interested in, but the others are very similar.

Okay, we want the exterior algebra $\Lambda(V)$ to be the “free” graded-commutative algebra on the vector space $V$. That’s a tip-off that we’re thinking $\Lambda$ should be the left adjoint of the “forgetful” functor $U$ which sends a graded-commutative algebra to its underlying vector space. We’ll define this adjunction by finding a collection of universal arrows, which (along with the forgetful functor $U$) is one of the many ways we listed to specify an adjunction.

So let’s run down the checklist. We’ve got the forgetful functor $U$ which we’re going to make the right-adjoint. Now for each vector space $V$ we need a graded-commutative algebra — clearly the one we’ll pick is $\Lambda(V)$ — and a universal arrow $\eta_V:V\rightarrow U(\Lambda(V))$. The underlying vector space of the exterior algebra is the direct sum of all the spaces of antisymmetric tensors on $V$.

$\displaystyle U(\Lambda(V))=\bigoplus\limits_{n=0}^\infty A^n(V)$

Yesterday we wrote this without the $U$, since we often just omit forgetful functors, but today we want to remember that we’re using it. But we know that $A^1(V)=V$, so the obvious map $\eta_V$ to use is the one that sends a vector $v$ to itself, now considered as an antisymmetric tensor with a single tensorand.

But is this a universal arrow? That is, if $A$ is another graded-commutative algebra, and $\phi:V\rightarrow U(A)$ is another linear map, then is there a unique homomorphism of graded-commutative algebras $\bar{\phi}:\Lambda(V)\rightarrow A$ so that $\phi=U(\bar{\phi})$? Well, $\phi$ tells us where in $A$ we have to send any antisymmetric tensor with one tensorand. Any other element $\upsilon$ in $\Lambda(V)$ is the sum of a bunch of terms, each of which is the wedge of a bunch of elements of $V$. So in order for $\bar{\phi}$ to be a homomorphism of graded-commutative algebras, it has to act by simply changing each element of $V$ in our expression for $\upsilon$ into the corresponding element of $A$, and then wedging and summing these together as before. Just write out the exterior algebra element all the way down in terms of vectors, and transform each vector in the expression. This will give us the only possible such homomorphism $\bar{\phi}$. And this establishes that $\Lambda(V)$ is the object-function of a functor which is left-adjoint to $U$.

So how does $\Lambda$ work on morphisms? It’s right in the proof above! If we have a linear map $f:V\rightarrow W$, we need to find some homomorphism $\Lambda(f):\Lambda(V)\rightarrow\Lambda(W)$. But we can compose $f$ with the linear map $\eta_W$, which gives us $\eta_W\circ f:V\rightarrow U(\Lambda(W))$. The universality property we just proved shows that we have a unique homomorphism $\Lambda(f)=\overline{\eta_W\circ f}:\Lambda(V)\rightarrow\Lambda(W)$. And, specifically, it is defined on an element $\upsilon\in\Lambda(V)$ by writing down $\upsilon$ in terms of vectors in $V$ and applying $f$ to each vector in the expression to get a sum of wedges of elements of $W$, which will be an element of the algebra $\Lambda(W)$.
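In low dimensions this action is easy to watch explicitly. When $\dim V=2$ the space $\Lambda^2(V)$ is one-dimensional, and $\Lambda^2(f)$ is just multiplication by $\det f$. A sketch (the encoding of wedges as antisymmetrized Kronecker products is mine):

```python
import numpy as np

def wedge(u, v):
    # u ^ v as the antisymmetric tensor u (x) v - v (x) u
    return np.kron(u, v) - np.kron(v, u)

rng = np.random.default_rng(2)
f = rng.standard_normal((2, 2))
e1, e2 = np.eye(2)

# Lambda(f) applies f to each tensorand; on Lambda^2 of a 2-dimensional
# space this is multiplication by det f.
lhs = wedge(f @ e1, f @ e2)
rhs = np.linalg.det(f) * wedge(e1, e2)
assert np.allclose(lhs, rhs)
```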

Of course, as stated above, we get similar constructions for the symmetric algebra $S(V)$ and the tensor algebra $T(V)$.

Since, given a linear map $f$ the induced homomorphisms $\Lambda(f)$, $S(f)$, and $T(f)$ preserve the respective gradings, they can be broken into one linear map for each degree. And if $f$ is invertible, so must be its image under each functor. These give exactly the tensor, symmetric, and antisymmetric representations of the group $\mathrm{GL}(V)$, if we consider how these functors act on invertible morphisms $f:V\rightarrow V$. Functoriality is certainly a useful property.

October 28, 2009

## Galois Connections

I want to mention a topic I thought I’d hit back when we talked about adjoint functors. We know that every poset is a category, with the elements as objects and a single arrow from $a$ to $b$ if $a\leq b$. Functors between such categories are monotone functions, preserving the order. Contravariant functors are so-called “antitone” functions, which reverse the order, but the same abstract nonsense as usual tells us this is just a monotone function to the “opposite” poset with the order reversed.

So let’s consider an adjoint pair $F\dashv G$ of such functors. This means there is a natural isomorphism between $\hom(F(a),b)$ and $\hom(a,G(b))$. But each of these hom-sets is either empty (if $a\not\leq b$) or a singleton (if $a\leq b$). So the adjunction between $F$ and $G$ means that $F(a)\leq b$ if and only if $a\leq G(b)$. The analogous condition for an antitone adjoint pair is that $b\leq F(a)$ if and only if $a\leq G(b)$.

There are some immediate consequences to having a Galois connection, which are connected to properties of adjoints. First off, we know that $a\leq G(F(a))$ and $F(G(b))\leq b$. This essentially expresses the unit and counit of the adjunction. For the antitone version, let’s show the analogous statement more directly: we know that $F(a)\leq F(a)$, so the adjoint condition says that $a\leq G(F(a))$. Similarly, $b\leq F(G(b))$. This second condition is backwards because we’re reversing the order on one of the posets.

Using the unit and the counit of an adjunction, we found a certain quasi-inverse relation between some natural transformations on functors. For our purposes, we observe that since $a\leq G(F(a))$ we have the special case $G(b)\leq G(F(G(b)))$. But $F(G(b))\leq b$, and $G$ preserves the order. Thus $G(F(G(b)))\leq G(b)$. So $G(b)=G(F(G(b)))$. Similarly, we find that $F(G(F(a)))=F(a)$, which holds for both monotone and antitone Galois connections.

Chasing special cases further, we find that $G(F(G(F(a))))=G(F(a))$, and that $F(G(F(G(b))))=F(G(b))$ for either kind of Galois connection. That is, $F\circ G$ and $G\circ F$ are idempotent functions. In general categories, the composition of two adjoint functors gives a monad, and this idempotence is just the analogue in our particular categories. In particular, these functions behave like closure operators, but for the fact that general posets don’t have joins or bottom elements to preserve in the third and fourth Kuratowski axioms.

And so elements left fixed by $G\circ F$ (or $F\circ G$) are called “closed” elements of the poset. The images of $F$ and $G$ consist of such closed elements.
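A concrete antitone Galois connection comes from any relation between two sets: send a subset to the set of elements related to everything in it. Here is a brute-force check of the adjunction condition and the idempotence, on a small divisibility-style relation of my own choosing:

```python
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {2, 4, 6, 8}
R = {(x, y) for x in X for y in Y if y % x == 0}

def F(A):
    # the "polar" of A: everything in Y related to all of A
    return frozenset(y for y in Y if all((x, y) in R for x in A))

def G(B):
    return frozenset(x for x in X if all((x, y) in R for y in B))

def powerset(S):
    S = list(S)
    return map(frozenset,
               chain.from_iterable(combinations(S, r) for r in range(len(S) + 1)))

for A in powerset(X):
    for B in powerset(Y):
        # the antitone adjunction: B <= F(A) iff A <= G(B)
        assert (B <= F(A)) == (A <= G(B))
    # the quasi-inverse relation: F(G(F(A))) = F(A), so G.F is idempotent
    assert F(G(F(A))) == F(A)
    assert G(F(G(F(A)))) == G(F(A))
```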

May 18, 2009

## The Category of Representations of a Group

Sorry for missing yesterday. I had this written up but completely forgot to post it while getting prepared for next week’s trip back to a city. Speaking of which, I’ll be heading off for the week, and I’ll just give things here a rest until the beginning of December. Except for the Samples, and maybe an I Made It or so…

Okay, let’s say we have a group $G$. This gives us a cocommutative Hopf algebra. Thus the category of representations of $G$ is monoidal — symmetric, even — and has duals. Let’s consider these structures a bit more closely.

We start with two representations $\rho:G\rightarrow\mathrm{GL}(V)$ and $\sigma:G\rightarrow\mathrm{GL}(W)$. We use the comultiplication on $\mathbb{F}[G]$ to give us an action on the tensor product $V\otimes W$. Specifically, we find

$\displaystyle\begin{aligned}\left[\left[\rho\otimes\sigma\right](g)\right](v\otimes w)&=\left[\rho(g)\otimes\sigma(g)\right](v\otimes w)\\&=\left[\rho(g)\right](v)\otimes\left[\sigma(g)\right](w)\end{aligned}$

That is, we make two copies of the group element $g$, use $\rho$ to act on the first tensorand, and use $\sigma$ to act on the second tensorand. If $\rho$ and $\sigma$ came from actions of $G$ on sets, then this is just what you’d expect from linearizing the product of the $G$-actions.
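In matrix terms the tensor product action is the Kronecker product, and the fact that it is again a representation is the familiar identity $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$. A quick sketch with two rotation representations of $\mathbb{Z}/4$ (my choice of example):

```python
import numpy as np

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta), np.cos(theta)]])

# two representations of Z/4: the generator acts by 90 and by 180 degrees
rho = lambda k: rot(np.pi / 2 * k)
sigma = lambda k: rot(np.pi * k)

# [rho (x) sigma](g) = rho(g) (x) sigma(g) is again a homomorphism:
for g in range(4):
    for h in range(4):
        lhs = np.kron(rho(g + h), sigma(g + h))
        rhs = np.kron(rho(g), sigma(g)) @ np.kron(rho(h), sigma(h))
        assert np.allclose(lhs, rhs)
```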

Symmetry is straightforward. We just use the twist on the underlying vector spaces, and it’s automatically an intertwiner of the actions, so it defines a morphism between the representations.

Duals, though, take a bit of work. Remember that the antipode of $\mathbb{F}[G]$ sends group elements to their inverses. So if we start with a representation $\rho:G\rightarrow\mathrm{GL}(V)$ we calculate its dual representation on $V^*$:

$\displaystyle\begin{aligned}\left[\rho^*(g)\right](\lambda)&=\left[\rho(g^{-1})^*\right](\lambda)\\&=\lambda\circ\rho(g^{-1})\end{aligned}$

Composing linear maps from the right reverses the order of multiplication from that in the group, but taking the inverse of $g$ reverses it again, and so we have a proper action again.
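In matrix terms the dual representation is $\rho^*(g)=\rho(g^{-1})^T$, and the double reversal just described is exactly why this is again a homomorphism. A check on the standard representation of $S_3$ (my choice of example):

```python
import numpy as np

co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])      # 120-degree rotation
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection
G = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]  # all of S3

def dual(g):
    # the dual representation: rho*(g) = rho(g^{-1})^T
    return np.linalg.inv(g).T

# transposing reverses composition, inverting g reverses it back:
for g in G:
    for h in G:
        assert np.allclose(dual(g @ h), dual(g) @ dual(h))
```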

November 21, 2008

## Cocommutativity

One thing I don’t think I’ve mentioned is that the category of vector spaces over a field $\mathbb{F}$ is symmetric. Indeed, given vector spaces $V$ and $W$ we can define the “twist” map $\tau_{V,W}:V\otimes W\rightarrow W\otimes V$ by setting $\tau_{V,W}(v\otimes w)=w\otimes v$ and extending linearly.

Now we know that an algebra $A$ is commutative if we can swap the inputs to the multiplication and get the same answer. That is, if $m(a\otimes b)=m(b\otimes a)=m\left(\tau_{A,A}(a\otimes b)\right)$. Or, more succinctly: $m=m\circ\tau_{A,A}$.

Reflecting this concept, we say that a coalgebra $C$ is cocommutative if we can swap the outputs from the comultiplication. That is, if $\tau_{C,C}\circ\Delta=\Delta$. Similarly, bialgebras and Hopf algebras can be cocommutative.

The group algebra $\mathbb{F}[G]$ of a group $G$ is a cocommutative Hopf algebra. Indeed, since $\Delta(e_g)=e_g\otimes e_g$, we can twist this either way and get the same answer.

So what does cocommutativity buy us? It turns out that the category of representations of a cocommutative bialgebra $B$ is not only monoidal, but it’s also symmetric! Indeed, given representations $\rho:B\rightarrow\hom_\mathbb{F}(V,V)$ and $\sigma:B\rightarrow\hom_\mathbb{F}(W,W)$, we have the tensor product representations $\rho\otimes\sigma$ on $V\otimes W$, and $\sigma\otimes\rho$ on $W\otimes V$. To twist them we define the natural transformation $\tau_{\rho,\sigma}$ to be the twist of the vector spaces: $\tau_{V,W}$.

We just need to verify that this actually intertwines the two representations. If we act first and then twist we find

$\displaystyle\begin{aligned}\tau_{V,W}\left(\left[\left[\rho\otimes\sigma\right](a)\right](v\otimes w)\right)&=\tau_{V,W}\left(\left[\rho\left(a_{(1)}\right)\otimes\sigma\left(a_{(2)}\right)\right](v\otimes w)\right)\\&=\tau_{V,W}\left(\left[\rho\left(a_{(1)}\right)\right](v)\otimes\left[\sigma\left(a_{(2)}\right)\right](w)\right)\\&=\left[\sigma\left(a_{(2)}\right)\right](w)\otimes\left[\rho\left(a_{(1)}\right)\right](v)\end{aligned}$

On the other hand, if we twist first and then act we find

$\displaystyle\begin{aligned}\left[\left[\sigma\otimes\rho\right](a)\right]\left(\tau_{V,W}(v\otimes w)\right)&=\left[\sigma\left(a_{(1)}\right)\otimes\rho\left(a_{(2)}\right)\right]\left(w\otimes v\right)\\&=\left[\sigma\left(a_{(1)}\right)\right](w)\otimes\left[\rho\left(a_{(2)}\right)\right](v)\end{aligned}$

It seems there’s a problem: in general these two results don’t agree. Ah! But we haven’t used cocommutativity yet! Now we write

$\displaystyle a_{(1)}\otimes a_{(2)}=\Delta(a)=\tau_{B,B}\left(\Delta(a)\right)=\tau_{B,B}\left(a_{(1)}\otimes a_{(2)}\right)=a_{(2)}\otimes a_{(1)}$

Again, remember that this doesn’t mean that the two tensorands are always equal, but only that the results after (implicitly) summing up are equal. Anyhow, that’s enough for us. It shows that the twist on the underlying vector spaces actually does intertwine the two representations, as we wanted. Thus the category of representations is symmetric.
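For the group algebra, where $\Delta(g)=g\otimes g$, the whole verification boils down to a matrix identity: the twist is a permutation ("commutation") matrix $K$, and $K(A\otimes B)=(B\otimes A)K$. A sketch (the matrix encoding is mine):

```python
import numpy as np

def twist(m, n):
    # the commutation matrix K with K (v (x) w) = w (x) v
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[j * m + i, i * n + j] = 1.0
    return K

rng = np.random.default_rng(3)
m, n = 2, 3
A = rng.standard_normal((m, m))  # stand-in for rho(g)
B = rng.standard_normal((n, n))  # stand-in for sigma(g)
K = twist(m, n)

v, w = rng.standard_normal(m), rng.standard_normal(n)
assert np.allclose(K @ np.kron(v, w), np.kron(w, v))     # K really twists
assert np.allclose(K @ np.kron(A, B), np.kron(B, A) @ K)  # K intertwines
```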

November 19, 2008

## The Category of Representations of a Hopf Algebra

It took us two posts, but we showed that the category of representations of a Hopf algebra $H$ has duals. This is on top of our earlier result that the category of representations of any bialgebra $B$ is monoidal. Let’s look at this a little more conceptually.

Earlier, we said that a bialgebra is a comonoid object in the category of algebras over $\mathbb{F}$. But let’s consider this category itself. We also said that an algebra is a category enriched over $\mathbb{F}$, but with only one object. So we should really be thinking about the category of algebras as a full sub-2-category of the 2-category of categories enriched over $\mathbb{F}$.

So what’s a comonoid object in this 2-category? When we defined comonoid objects we used a model category $\mathrm{Th}(\mathbf{CoMon})$. Now let’s augment it to a 2-category in the easiest way possible: just add an identity 2-morphism to every morphism!

But the 2-category language gives us a bit more flexibility. Instead of demanding that the morphism $\Delta:C\rightarrow C\otimes C$ satisfy the associative law on the nose, we can add a “coassociator” 2-morphism $\gamma:(\Delta\otimes1)\circ\Delta\rightarrow(1\otimes\Delta)\circ\Delta$ to our model 2-category. Similarly, we dispense with the left and right counit laws and add left and right counit 2-morphisms. Then we insist that these 2-morphisms satisfy pentagon and triangle identities dual to those we defined when we talked about monoidal categories.

What we’ve built up here is a model 2-category for weak comonoid objects in a 2-category. Then any weak comonoid object is given by a 2-functor from this 2-category to the appropriate target 2-category. Similarly we can define a weak monoid object as a 2-functor from the opposite model 2-category to an appropriate target 2-category.

So, getting a little closer to Earth, we have in hand a comonoid object in the 2-category of categories enriched over $\mathbb{F}$ — our algebra $B$. But remember that a 2-category is just a category enriched over categories. That is, between $B$ (considered as a category) and $\mathbf{Vect}(\mathbb{F})$ we have a hom-category $\hom(B,\mathbf{Vect}(\mathbb{F}))$. The entry in the first slot $B$ is described by a 2-functor from the model category of weak comonoid objects to the 2-category of categories enriched over $\mathbb{F}$. This hom-functor is contravariant in the first slot (like all hom-functors), and so the result is described by a 2-functor from the opposite of our model 2-category. That is, it’s a weak monoid object in the 2-category of all categories. And this is just a monoidal category!

This is yet another example of the way that hom objects inherit structure from their second variable, and inherit opposite structure from their first variable. I’ll leave it to you to verify that a monoidal category with duals is similarly a weak group object in the 2-category of categories, and that this is why a Hopf algebra — a (weak) cogroup object in the 2-category of categories enriched over $\mathbb{F}$ — has dual representations.

November 18, 2008

## Representations of Hopf Algebras II

Now that we have a coevaluation for vector spaces, let’s make sure that it intertwines the actions of a Hopf algebra. Then we can finish showing that the category of representations of a Hopf algebra has duals.

Take a representation $\rho:H\rightarrow\hom_\mathbb{F}(V,V)$, and pick a basis $\left\{e_i\right\}$ of $V$ and the dual basis $\left\{\epsilon^i\right\}$ of $V^*$. We define the map $\eta_\rho:\mathbf{1}\rightarrow V^*\otimes V$ by $\eta_\rho(1)=\epsilon^i\otimes e_i$. Now $\left[\rho(a)\right](1)=\epsilon(a)$, so if we use the action of $H$ on $\mathbf{1}$ before transferring to $V^*\otimes V$, we get $\epsilon(a)\epsilon^i\otimes e_i$. Be careful not to confuse the counit $\epsilon$ with the basis elements $\epsilon^i$.

On the other hand, if we transfer first, we must calculate

$\displaystyle\begin{aligned}\left[\left[\rho^*\otimes\rho\right](a)\right](\epsilon^i\otimes e_i)&=\left[\rho^*\left(a_{(1)}\right)\otimes\rho\left(a_{(2)}\right)\right](\epsilon^i\otimes e_i)\\&=\left[\rho\left(S\left(a_{(1)}\right)\right)^*\otimes\rho\left(a_{(2)}\right)\right](\epsilon^i\otimes e_i)\\&=\left[\rho\left(S\left(a_{(1)}\right)\right)^*\right](\epsilon^i)\otimes\left[\rho\left(a_{(2)}\right)\right](e_i)\end{aligned}$

Now let’s use the fact that we’ve got this basis sitting around to expand out both $\rho\left(S\left(a_{(1)}\right)\right)$ and $\rho\left(a_{(2)}\right)$ as matrices. We’ll just write matrix indices on the right in our notation. Then we continue the calculation above:

$\displaystyle\begin{aligned}\left[\rho\left(S\left(a_{(1)}\right)\right)^*\right](\epsilon^i)\otimes\left[\rho\left(a_{(2)}\right)\right](e_i)&=\epsilon^j\rho\left(S\left(a_{(1)}\right)\right)_j^i\otimes\rho\left(a_{(2)}\right)_i^ke_k\\&=\epsilon^j\otimes\rho\left(S\left(a_{(1)}\right)\right)_j^i\rho\left(a_{(2)}\right)_i^ke_k\\&=\epsilon^j\otimes\left[\rho\left(\mu\left(\left[S\otimes1_H\right]\left(\Delta(a)\right)\right)\right)\right](e_j)\\&=\epsilon^j\otimes\left[\rho\left(\iota\left(\epsilon(a)\right)\right)\right](e_j)\\&=\epsilon^j\otimes\epsilon(a)e_j\end{aligned}$

And so the coevaluation map does indeed intertwine the two actions of $H$. Together with the evaluation map, it provides the duality on the category of representations of a Hopf algebra $H$ that we were looking for.
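If we identify $V^*\otimes V$ with the space of matrices, the coevaluation element $\epsilon^i\otimes e_i$ becomes the identity matrix, and for a group element (where the counit sends every $g$ to $1$) the intertwining condition just says that conjugation fixes the identity. A sketch for the standard representation of $S_3$ (my choice of example):

```python
import numpy as np

co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
G = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]  # the standard rep of S3

# Under V* (x) V ~ End(V), the coevaluation element eps^i (x) e_i is the
# identity matrix, and [rho* (x) rho](g) acts by M |-> rho(g) M rho(g)^{-1}.
coev = np.eye(2)
for g in G:
    assert np.allclose(g @ coev @ np.linalg.inv(g), coev)
```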

November 14, 2008