The Unapologetic Mathematician

Mathematics for the interested outsider

The Category of Representations of a Group

Sorry for missing yesterday. I had this written up but completely forgot to post it while getting prepared for next week’s trip back to a city. Speaking of which, I’ll be heading off for the week, and I’ll just give things here a rest until the beginning of December. Except for the Samples, and maybe an I Made It or so…

Okay, let’s say we have a group G. This gives us a cocommutative Hopf algebra. Thus the category of representations of G is monoidal — symmetric, even — and has duals. Let’s consider these structures a bit more closely.

We start with two representations \rho:G\rightarrow\mathrm{GL}(V) and \sigma:G\rightarrow\mathrm{GL}(W). We use the comultiplication on \mathbb{F}[G] to give us an action on the tensor product V\otimes W. Specifically, we find

\begin{aligned}\left[\left[\rho\otimes\sigma\right](g)\right](v\otimes w)=\left[\rho(g)\otimes\sigma(g)\right](v\otimes w)\\=\left[\rho(g)\right](v)\otimes\left[\sigma(g)\right](w)\end{aligned}

That is, we make two copies of the group element g, use \rho to act on the first tensorand, and use \sigma to act on the second tensorand. If \rho and \sigma came from actions of G on sets, then this is just what you’d expect from linearizing the product of the G-actions.
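
For a concrete sketch (the group \mathbb{Z}/2, the particular matrices, and the name tensor_rep below are all invented for illustration, not taken from the post), we can realize the tensor product of two representations as a Kronecker product of matrices and check that acting with the same group element in both tensorands again gives a homomorphism:

```python
import numpy as np

# Two (hypothetical) representations of the cyclic group Z/2 = {0, 1}:
# rho sends the generator to a swap on V = F^2, and sigma sends it to
# the sign representation on W = F^1.
rho   = {0: np.eye(2), 1: np.array([[0., 1.], [1., 0.]])}
sigma = {0: np.eye(1), 1: np.array([[-1.]])}

def tensor_rep(g):
    # [rho (x) sigma](g) = rho(g) (x) sigma(g), as a Kronecker product
    return np.kron(rho[g], sigma[g])

# g |-> rho(g) (x) sigma(g) is again a representation: it sends products
# in the group (here, sums mod 2) to products of matrices.
for g in (0, 1):
    for h in (0, 1):
        assert np.allclose(tensor_rep((g + h) % 2),
                           tensor_rep(g) @ tensor_rep(h))
```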

Symmetry is straightforward. We just use the twist on the underlying vector spaces, and it’s automatically an intertwiner of the actions, so it defines a morphism between the representations.

Duals, though, take a bit of work. Remember that the antipode of \mathbb{F}[G] sends group elements to their inverses. So if we start with a representation \rho:G\rightarrow\mathrm{GL}(V) we calculate its dual representation on V^*:

\rho^*(g)=\rho\left(S(e_g)\right)^*=\rho\left(g^{-1}\right)^*

Composing linear maps from the right reverses the order of multiplication from that in the group, but taking the inverse of g reverses it again, and so we have a proper action again.
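
Here's a numerical sketch of that order-reversal bookkeeping (the rotation representation of \mathbb{Z}/3 and the function names are my own choices for illustration). Setting \rho^*(g)=\rho(g^{-1})^* makes the dual a genuine homomorphism, and it preserves the pairing between covectors and vectors:

```python
import numpy as np

def rho(k):
    # a representation of Z/3 on R^2 by rotations through 120-degree steps
    t = 2 * np.pi * (k % 3) / 3
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def rho_dual(k):
    # dual representation: the transpose reverses composition, but taking
    # the inverse group element reverses it back, so this is a homomorphism
    return rho((-k) % 3).T

for g in range(3):
    for h in range(3):
        assert np.allclose(rho_dual((g + h) % 3),
                           rho_dual(g) @ rho_dual(h))

# The dual action is built so that the pairing is invariant:
# [rho*(g)](lam) applied to [rho(g)](v) gives back lam(v).
lam, v = np.array([1.0, 2.0]), np.array([0.5, -1.0])
for g in range(3):
    assert np.isclose((rho_dual(g) @ lam) @ (rho(g) @ v), lam @ v)
```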

November 21, 2008 Posted by | Algebra, Category theory, Group theory, Representation Theory | 1 Comment

Cocommutativity


One thing I don’t think I’ve mentioned is that the category of vector spaces over a field \mathbb{F} is symmetric. Indeed, given vector spaces V and W we can define the “twist” map \tau_{V,W}:V\otimes W\rightarrow W\otimes V by setting \tau_{V,W}(v\otimes w)=w\otimes v and extending linearly.

Now we know that an algebra A is commutative if we can swap the inputs to the multiplication and get the same answer. That is, if m(a,b)=m(b,a)=m\left(\tau_{A,A}(a,b)\right). Or, more succinctly: m=m\circ\tau_{A,A}.

Reflecting this concept, we say that a coalgebra C is cocommutative if we can swap the outputs from the comultiplication. That is, if \tau_{C,C}\circ\Delta=\Delta. Similarly, bialgebras and Hopf algebras can be cocommutative.

The group algebra \mathbb{F}[G] of a group G is a cocommutative Hopf algebra. Indeed, since \Delta(e_g)=e_g\otimes e_g, we can twist this either way and get the same answer.

So what does cocommutativity buy us? It turns out that the category of representations of a cocommutative bialgebra B is not only monoidal, but it’s also symmetric! Indeed, given representations \rho:B\rightarrow\hom_\mathbb{F}(V,V) and \sigma:B\rightarrow\hom_\mathbb{F}(W,W), we have the tensor product representations \rho\otimes\sigma on V\otimes W, and \sigma\otimes\rho on W\otimes V. To twist them we define the natural transformation \tau_{\rho,\sigma} to be the twist of the vector spaces: \tau_{V,W}.

We just need to verify that this actually intertwines the two representations. If we act first and then twist we find

\begin{aligned}\tau_{V,W}\left(\left[\left[\rho\otimes\sigma\right](a)\right](v\otimes w)\right)=\tau_{V,W}\left(\left[\rho\left(a_{(1)}\right)\otimes\sigma\left(a_{(2)}\right)\right](v\otimes w)\right)\\=\tau_{V,W}\left(\left[\rho\left(a_{(1)}\right)\right](v)\otimes\left[\sigma\left(a_{(2)}\right)\right](w)\right)\\=\left[\sigma\left(a_{(2)}\right)\right](w)\otimes\left[\rho\left(a_{(1)}\right)\right](v)\end{aligned}

On the other hand, if we twist first and then act we find

\begin{aligned}\left[\left[\sigma\otimes\rho\right](a)\right]\left(\tau_{V,W}(v\otimes w)\right)=\left[\sigma\left(a_{(1)}\right)\otimes\rho\left(a_{(2)}\right)\right]\left(w\otimes v\right)\\=\left[\sigma\left(a_{(1)}\right)\right](w)\otimes\left[\rho\left(a_{(2)}\right)\right](v)\end{aligned}

It seems there’s a problem. In general this doesn’t work. Ah! but we haven’t used cocommutativity yet! Now we write

a_{(1)}\otimes a_{(2)}=\Delta(a)=\tau_{B,B}\left(\Delta(a)\right)=\tau_{B,B}\left(a_{(1)}\otimes a_{(2)}\right)=a_{(2)}\otimes a_{(1)}

Again, remember that this doesn’t mean that the two tensorands are always equal, but only that the results after (implicitly) summing up are equal. Anyhow, that’s enough for us. It shows that the twist on the underlying vector spaces actually does intertwine the two representations, as we wanted. Thus the category of representations is symmetric.
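
As a sketch (with invented matrices; \mathbb{F}[\mathbb{Z}/2] is cocommutative since \Delta(e_g)=e_g\otimes e_g), we can realize the twist as a permutation matrix on Kronecker products and check that it intertwines the two tensor product representations:

```python
import numpy as np

def twist(m, n):
    # tau_{V,W}: V (x) W -> W (x) V as a permutation matrix, where
    # v (x) w is stored as np.kron(v, w) (index i*n + j holds v_i w_j)
    T = np.zeros((n * m, m * n))
    for i in range(m):
        for j in range(n):
            T[j * m + i, i * n + j] = 1.0
    return T

# two (hypothetical) representations of Z/2 on F^2
rho   = {0: np.eye(2), 1: np.array([[0., 1.], [1., 0.]])}
sigma = {0: np.eye(2), 1: np.diag([1., -1.])}

T = twist(2, 2)
for g in (0, 1):
    # acting then twisting equals twisting then acting:
    # tau o [rho (x) sigma](g) = [sigma (x) rho](g) o tau
    assert np.allclose(T @ np.kron(rho[g], sigma[g]),
                       np.kron(sigma[g], rho[g]) @ T)
```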

November 19, 2008 Posted by | Algebra, Category theory, Representation Theory | 2 Comments

The Category of Representations of a Hopf Algebra

It took us two posts, but we showed that the category of representations of a Hopf algebra H has duals. This is on top of our earlier result that the category of representations of any bialgebra B is monoidal. Let’s look at this a little more conceptually.

Earlier, we said that a bialgebra is a comonoid object in the category of algebras over \mathbb{F}. But let’s consider this category itself. We also said that an algebra is a category enriched over \mathbb{F}, but with only one object. So we should really be thinking about the category of algebras as a full sub-2-category of the 2-category of categories enriched over \mathbb{F}.

So what’s a comonoid object in this 2-category? When we defined comonoid objects we used a model category \mathrm{Th}(\mathbf{CoMon}). Now let’s augment it to a 2-category in the easiest way possible: just add an identity 2-morphism to every morphism!

But the 2-category language gives us a bit more flexibility. Instead of demanding that the morphism \Delta:C\rightarrow C\otimes C satisfy the associative law on the nose, we can add a “coassociator” 2-morphism \gamma:(\Delta\otimes1)\circ\Delta\rightarrow(1\otimes\Delta)\circ\Delta to our model 2-category. Similarly, we dispense with the left and right counit laws and add left and right counit 2-morphisms. Then we insist that these 2-morphisms satisfy pentagon and triangle identities dual to those we defined when we talked about monoidal categories.

What we’ve built up here is a model 2-category for weak comonoid objects in a 2-category. Then any weak comonoid object is given by a 2-functor from this 2-category to the appropriate target 2-category. Similarly we can define a weak monoid object as a 2-functor from the opposite model 2-category to an appropriate target 2-category.

So, getting a little closer to Earth, we have in hand a comonoid object in the 2-category of categories enriched over \mathbb{F} — our bialgebra B. But remember that a 2-category is just a category enriched over categories. That is, between B (considered as a category) and \mathbf{Vect}(\mathbb{F}) we have a hom-category \hom(B,\mathbf{Vect}(\mathbb{F})). The entry in the first slot, B, is described by a 2-functor from the model 2-category of weak comonoid objects to the 2-category of categories enriched over \mathbb{F}. The hom is contravariant in its first slot (like all hom-functors), and so the hom-category is described by a 2-functor from the opposite of our model 2-category. That is, it’s a weak monoid object in the 2-category of all categories. And this is just a monoidal category!

This is yet another example of the way that hom objects inherit structure from their second variable, and inherit opposite structure from their first variable. I’ll leave it to you to verify that a monoidal category with duals is similarly a weak group object in the 2-category of categories, and that this is why a Hopf algebra, being a (weak) cogroup object in the 2-category of categories enriched over \mathbb{F}, has dual representations.

November 18, 2008 Posted by | Algebra, Category theory, Representation Theory | Leave a comment

Representations of Hopf Algebras II

Now that we have a coevaluation for vector spaces, let’s make sure that it intertwines the actions of a Hopf algebra. Then we can finish showing that the category of representations of a Hopf algebra has duals.

Take a representation \rho:H\rightarrow\hom_\mathbb{F}(V,V), and pick a basis \left\{e_i\right\} of V and the dual basis \left\{\epsilon^i\right\} of V^*. We define the map \eta_\rho:\mathbf{1}\rightarrow V^*\otimes V by \eta_\rho(1)=\epsilon^i\otimes e_i. Now the action of a on the unit object sends 1 to \epsilon(a), so if we use the action of H on \mathbf{1} before transferring to V^*\otimes V, we get \epsilon(a)\epsilon^i\otimes e_i. Be careful not to confuse the counit \epsilon with the basis elements \epsilon^i.

On the other hand, if we transfer first, we must calculate

\begin{aligned}\left[\left[\rho^*\otimes\rho\right](a)\right](\epsilon^i\otimes e_i)=\left[\rho^*\left(a_{(1)}\right)\otimes\rho\left(a_{(2)}\right)\right](\epsilon^i\otimes e_i)\\=\left[\rho\left(S\left(a_{(1)}\right)\right)^*\otimes\rho\left(a_{(2)}\right)\right](\epsilon^i\otimes e_i)\\=\left[\rho\left(S\left(a_{(1)}\right)\right)^*\right](\epsilon^i)\otimes\left[\rho\left(a_{(2)}\right)\right](e_i)\end{aligned}

Now let’s use the fact that we’ve got this basis sitting around to expand out both \rho\left(S\left(a_{(1)}\right)\right) and \rho\left(a_{(2)}\right) as matrices. We’ll just put matrix indices on the right for our notation. Then we continue the calculation above:

\begin{aligned}\left[\rho\left(S\left(a_{(1)}\right)\right)^*\right](\epsilon^i)\otimes\left[\rho\left(a_{(2)}\right)\right](e_i)=\rho\left(S\left(a_{(1)}\right)\right)_j^i\epsilon^j\otimes\rho\left(a_{(2)}\right)_i^ke_k\\=\rho\left(S\left(a_{(1)}\right)\right)_j^i\rho\left(a_{(2)}\right)_i^k\left(\epsilon^j\otimes e_k\right)=\rho\left(S\left(a_{(1)}\right)a_{(2)}\right)_j^k\left(\epsilon^j\otimes e_k\right)\\=\rho\left(\iota\left(\epsilon(a)\right)\right)_j^k\left(\epsilon^j\otimes e_k\right)=\epsilon(a)\delta_j^k\left(\epsilon^j\otimes e_k\right)=\epsilon(a)\epsilon^k\otimes e_k\end{aligned}

And so the coevaluation map does indeed intertwine the two actions of H. Together with the evaluation map, it provides the duality on the category of representations of a Hopf algebra H that we were looking for.

November 14, 2008 Posted by | Algebra, Category theory, Representation Theory | 3 Comments

The Coevaluation on Vector Spaces

Okay, I noticed that I never really gave the definition of the coevaluation when I introduced categories with duals, because you need some linear algebra. Well, now we have some linear algebra, so let’s do it.

Let V be a finite-dimensional vector space with dual space V^*. Then if we have a basis \left\{e_i\right\} of V we immediately get a dual basis \left\{\epsilon^i\right\} for V^* (yet another \epsilon to keep straight), defined by \epsilon^j(e_i)=\delta_i^j. We now define a map \eta:\mathbf{1}\rightarrow V^*\otimes V by setting \eta(1)=\epsilon^i\otimes e_i. That is, we take the tensor product of each dual basis element with its corresponding basis element, and add them all up (summation convention).

But this seems to depend on which basis \left\{e_i\right\} we started with. What if we used a different basis \left\{f_i\right\} and dual basis \left\{\phi^i\right\}? We know that there is a change of basis matrix f_i=t_i^je_j, so let’s see how this works on the dual basis.

The dual basis is defined by the fact that \phi^j(f_i)=\delta_i^j. So we use this new expression for f_i to write \phi^j(t_i^ke_k)=t_i^k\phi^j(e_k)=\delta_i^j. That is, \phi^j(e_k) must be the inverse matrix to t_i^k, which we’ll write as \left(t^{-1}\right)_k^j. But now we can check

\left[\left(t^{-1}\right)_k^j\epsilon^k\right](e_i)=\left(t^{-1}\right)_k^j\epsilon^k(e_i)=\left(t^{-1}\right)_k^j\delta_i^k=\left(t^{-1}\right)_i^j=\phi^j(e_i)

And so we find that \phi^i=\left(t^{-1}\right)_j^i\epsilon^j when we change bases.

Now we can use the same definition for \eta above with our new basis. We set \eta(1)=\phi^i\otimes f_i, and then substitute our expressions in terms of the old bases:

\eta(1)=\left(t^{-1}\right)_j^i\epsilon^j\otimes t_i^ke_k=\left(t^{-1}\right)_j^it_i^k\left(\epsilon^j\otimes e_k\right)=\delta_j^k\left(\epsilon^j\otimes e_k\right)=\epsilon^k\otimes e_k

which is what we got before. That is, this map actually doesn’t depend on the basis we chose!

Okay, now does this coevaluation — along with the evaluation map from before — actually satisfy the conditions for a duality? First, let’s start with a vector written out in terms of a basis: v=v^ie_i. Now we use the coevaluation to send it to v^ie_i\otimes\epsilon^j\otimes e_j. Next we evaluate on the first two tensorands to find v^i\delta_i^je_j=v^je_j. So we do indeed have the identity here. Verifying the other condition is almost the same, starting from an arbitrary covector \lambda=\lambda_i\epsilon^i.
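
This basis-independence is easy to check numerically (a sketch; the random change of basis and the variable names below are my own):

```python
import numpy as np

n = 3
# Coevaluation in the standard basis: eta(1) = sum_i eps^i (x) e_i,
# storing lambda (x) v as np.kron(lambda, v).
eta_std = sum(np.kron(np.eye(n)[i], np.eye(n)[i]) for i in range(n))

# A random change of basis f_i = t_i^j e_j; the dual basis transforms
# by the inverse matrix, phi^i = (t^{-1})_j^i eps^j.
rng = np.random.default_rng(0)
t = rng.normal(size=(n, n)) + n * np.eye(n)   # comfortably invertible
t_inv = np.linalg.inv(t)
f   = [t[i] for i in range(n)]         # f_i as coordinate vectors (rows of t)
phi = [t_inv[:, i] for i in range(n)]  # phi^i as covectors (columns of t^{-1})

# sanity check the duality: phi^j(f_i) = delta_i^j
assert np.allclose([[phi[j] @ f[i] for j in range(n)] for i in range(n)],
                   np.eye(n))

# the same formula in the new basis gives the same tensor
eta_new = sum(np.kron(phi[i], f[i]) for i in range(n))
assert np.allclose(eta_std, eta_new)
```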

So now we know that the category \mathbf{FinVect}(\mathbb{F}) has duals. Tomorrow we can promote this to a duality on the category of finite-dimensional representations of a Hopf algebra.

November 13, 2008 Posted by | Algebra, Linear Algebra | 2 Comments

Representations of Hopf Algebras I

We’ve seen that the category of representations of a bialgebra is monoidal. What do we get for Hopf algebras? What does an antipode buy us? Duals! At least when we restrict to finite-dimensional representations.

Again, we base things on the underlying category of vector spaces. Given a representation \rho:H\rightarrow\hom_\mathbb{F}(V,V), we want to find a representation \rho^*:H\rightarrow\hom_\mathbb{F}(V^*,V^*). And it should commute with the natural transformations which make up the dual structure.

Easy enough! We just take the dual of each map to find \rho(h)^*:V^*\rightarrow V^*. But no, this can’t work. Duality reverses the order of composition. We need an antiautomorphism S to reverse the multiplication on H. Then we can define \rho^*(h)=\rho(S(h))^*.

The antiautomorphism we’ll use will be the antipode. Now to make these representations actual duals, we’ll need natural transformations \eta_\rho:\mathbf{1}\rightarrow\rho^*\otimes\rho and \epsilon_\rho:\rho\otimes\rho^*\rightarrow\mathbf{1}. This natural transformation \epsilon is not to be confused with the counit of the Hopf algebra. Given a representation \rho on the finite-dimensional vector space V, we’ll just use the \eta_V and \epsilon_V that come from the duality on the category of finite-dimensional vector spaces.

Thus we find that \epsilon_\rho is the pairing v\otimes\lambda\mapsto\lambda(v). Does this commute with the actions of H? On the one side, we calculate

\left[\left[\rho\otimes\rho^*\right](h)\right](v\otimes\lambda)=\left[\rho\left(h_{(1)}\right)\otimes\rho^*\left(h_{(2)}\right)\right](v\otimes\lambda)=\left[\rho\left(h_{(1)}\right)\right](v)\otimes\left[\rho\left(S\left(h_{(2)}\right)\right)^*\right](\lambda)

Then we apply the evaluation to find

\begin{aligned}\left[\left[\rho\left(S\left(h_{(2)}\right)\right)^*\right](\lambda)\right]\left(\left[\rho\left(h_{(1)}\right)\right](v)\right)=\lambda\left(\left[\rho\left(S\left(h_{(2)}\right)\right)\right]\left(\left[\rho\left(h_{(1)}\right)\right](v)\right)\right)\\=\lambda\left(\left[\rho\left(h_{(1)}S\left(h_{(2)}\right)\right)\right](v)\right)=\lambda\left(\left[\rho\left(\mu\left(h_{(1)}\otimes S\left(h_{(2)}\right)\right)\right)\right](v)\right)\\=\lambda\left(\left[\rho\left(\iota\left(\epsilon(h)\right)\right)\right](v)\right)=\epsilon(h)\lambda(v)\end{aligned}

Which is the same as the result we’d get by applying the “unit” action after evaluating. Notice how we used the definition of the dual map, the fact that \rho is a representation, and the antipodal property in obtaining this result.

This much works whether or not V is a finite-dimensional vector space. The other direction, though, needs more work, especially since I waved my hands at it when I used \mathbf{FinVect} as the motivating example of a category with duals. Tomorrow I’ll define this map.

November 12, 2008 Posted by | Algebra, Category theory, Representation Theory | 3 Comments

Representations of Bialgebras

What’s so great about bialgebras? Their categories of representations are monoidal!

Let’s say we have two algebra representations \rho:A\rightarrow\hom_\mathbb{F}(V,V) and \sigma:A\rightarrow\hom_\mathbb{F}(W,W). These are morphisms in the category of \mathbb{F}-algebras, and so of course we can take their tensor product \rho\otimes\sigma. But this is not a representation of the same algebra. It’s a representation of the tensor square of the algebra:

\rho\otimes\sigma:A\otimes A\rightarrow\hom_\mathbb{F}(V,V)\otimes\hom_\mathbb{F}(W,W)\cong\hom_\mathbb{F}(V\otimes W,V\otimes W)

Ah, but if we have a way to send A to A\otimes A (an algebra homomorphism, that is), then we can compose it with this tensor product to get a representation of A. And that’s exactly what the comultiplication \Delta does for us. We abuse notation slightly and write:

\rho\otimes\sigma:A\rightarrow\hom_\mathbb{F}(V\otimes W,V\otimes W)

where the homomorphism of this representation is the comultiplication \Delta followed by the tensor product of the two homomorphisms, followed by the equivalence of \hom algebras.

Notice here that the underlying vector space of the tensor product of two representations \rho\otimes\sigma is the tensor product of their underlying vector spaces V\otimes W. That is, if we think (as many approaches to representation theory do) of the vector space as fundamental and the homomorphism as extra structure, then this is saying we can put the structure of a representation on the tensor product of the vector spaces.

Which leads us to the next consideration. For the tensor product to be a monoidal structure we need an associator. And the underlying linear map on vector spaces must clearly be the old associator for \mathbf{Vect}(\mathbb{F}). We just need to verify that it commutes with the action of A.

So let’s consider three representations \rho:A\rightarrow\hom_\mathbb{F}(U,U), \sigma:A\rightarrow\hom_\mathbb{F}(V,V), and \tau:A\rightarrow\hom_\mathbb{F}(W,W). Given an algebra element a and vectors u, v, and w, we have the action

\begin{aligned}\left[\left[(\rho\otimes\sigma)\otimes\tau\right](a)\right]((u\otimes v)\otimes w)=\\\left(\left[\rho\left(\left(a_{(1)}\right)_{(1)}\right)\right](u)\otimes\left[\sigma\left(\left(a_{(1)}\right)_{(2)}\right)\right](v)\right)\otimes\left[\tau\left(a_{(2)}\right)\right](w)\end{aligned}

On the other hand, if we associate the other way we have the action

\begin{aligned}\left[\left[\rho\otimes(\sigma\otimes\tau)\right](a)\right](u\otimes(v\otimes w))=\\\left[\rho\left(a_{(1)}\right)\right](u)\otimes\left(\left[\sigma\left(\left(a_{(2)}\right)_{(1)}\right)\right](v)\otimes\left[\tau\left(\left(a_{(2)}\right)_{(2)}\right)\right](w)\right)\end{aligned}

Where we have used the Sweedler notation to write out the comultiplications of a. But now we can use the coassociativity of the comultiplication — along with the fact that, as algebra homomorphisms, the representations are linear maps — to show that the associator on \mathbf{Vect}(\mathbb{F}) intertwines these actions, and thus acts as an associator for the category of representations of A as well.

We also need a unit object, and similar considerations to those above tell us it should be based on the vector space unit object. That is, we need a homomorphism A\rightarrow\hom_\mathbb{F}(\mathbb{F},\mathbb{F}). But linear maps from the base field to itself (considered as a one-dimensional vector space) are just multiplications by field elements! That is, the \hom algebra is just the field \mathbb{F} itself, and we need a homomorphism A\rightarrow\mathbb{F}. This is precisely what the counit \epsilon provides! I’ll leave it to you to verify that the left and right unit maps from vector spaces intertwine the relevant representations.

November 11, 2008 Posted by | Algebra, Category theory, Representation Theory | 5 Comments

Sweedler notation

As we work with coalgebras, we’ll need a nice way to write out the comultiplication of an element. In the group algebra we’ve been using as an example, we just have \Delta(e_g)=e_g\otimes e_g, but not all elements are so cleanly sent to two copies of themselves. And comultiplications in other coalgebras aren’t even defined so nicely on any basis. So we introduce the so-called “Sweedler notation”. If you didn’t like the summation convention, you’re going to hate this.

Okay, first of all, we know that the comultiplication of an element c\in C is an element of the tensor square C\otimes C. Thus it can be written as a finite sum

\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}a_i\otimes b_i

Now, this uses two whole new letters, a and b, which might be really awkward to come up with in practice. Instead, let’s call them c_{(1)} and c_{(2)}, to denote the first and second factors of the comultiplication. We’ll also move the indices to superscripts, just to get them out of the way.

\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}c_{(1)}^i\otimes c_{(2)}^i

The whole index-summing thing is a bit awkward, especially because the number of summands is different for each coalgebra element c. Let’s just say we’re adding up all the terms we need to for a given c:

\displaystyle\Delta(c)=\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}

Then if we’re really pressed for space we can just write \Delta(c)=c_{(1)}\otimes c_{(2)}. Since we don’t use a subscript in parentheses for anything else, we remember that this is implicitly a summation.

Let’s check out the counit laws (1_C\otimes\epsilon)\circ\Delta=1_C=(\epsilon\otimes1_C)\circ\Delta in this notation. Now they read c_{(1)}\epsilon\left(c_{(2)}\right)=c=\epsilon\left(c_{(1)}\right)c_{(2)}. Or, more expansively:

\displaystyle\sum\limits_{(c)}c_{(1)}\epsilon\left(c_{(2)}\right)=c=\sum\limits_{(c)}\epsilon\left(c_{(1)}\right)c_{(2)}

Similarly, the coassociativity condition now reads

\displaystyle\sum\limits_{(c)}\left(\sum\limits_{\left(c_{(1)}\right)}\left(c_{(1)}\right)_{(1)}\otimes\left(c_{(1)}\right)_{(2)}\right)\otimes c_{(2)}=\sum\limits_{(c)}c_{(1)}\otimes\left(\sum\limits_{\left(c_{(2)}\right)}\left(c_{(2)}\right)_{(1)}\otimes\left(c_{(2)}\right)_{(2)}\right)

In the Sweedler notation we’ll write both of these equal sums as

\displaystyle\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}\otimes c_{(3)}

Or more simply as c_{(1)}\otimes c_{(2)}\otimes c_{(3)}.

As a bit more practice, let’s write out the condition that a linear map f:C\rightarrow D between coalgebras is a coalgebra morphism. The answer is that f must satisfy

f\left(c_{(1)}\right)\otimes f\left(c_{(2)}\right)=f(c)_{(1)}\otimes f(c)_{(2)}

Notice that there are implied summations here. We are not asserting that all the summands are equal, and definitely not that f\left(c_{(1)}\right)=f(c)_{(1)} (for instance). Sweedler notation hides a lot more than the summation convention ever did, but it’s still possible to expand it back out to a proper summation-heavy format when we need to.
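
In code, a Sweedler sum is just an explicit list of terms. Here's a sketch using the “trigonometric coalgebra” spanned by c and s, a standard small example which is not a linearized set (the example and all the names below are my own, not from these posts). Only the sums over the Sweedler terms satisfy the counit laws, just as the notation implies:

```python
# Sweedler sums as explicit lists of (coefficient, c1, c2) terms, for the
# trigonometric coalgebra spanned by {c, s}:
#   Delta(c) = c(x)c - s(x)s,   Delta(s) = s(x)c + c(x)s
#   eps(c) = 1,                 eps(s) = 0
delta = {
    'c': [(1.0, 'c', 'c'), (-1.0, 's', 's')],
    's': [(1.0, 's', 'c'), (1.0, 'c', 's')],
}
eps = {'c': 1.0, 's': 0.0}

def counit_left(x):
    """Sum over (x) of eps(x_(1)) x_(2), as a dict of coefficients."""
    out = {'c': 0.0, 's': 0.0}
    for coeff, x1, x2 in delta[x]:
        out[x2] += coeff * eps[x1]
    return out

def counit_right(x):
    """Sum over (x) of x_(1) eps(x_(2))."""
    out = {'c': 0.0, 's': 0.0}
    for coeff, x1, x2 in delta[x]:
        out[x1] += coeff * eps[x2]
    return out

# Both counit laws recover the original basis element.
for x in ('c', 's'):
    expected = {'c': 1.0 if x == 'c' else 0.0, 's': 1.0 if x == 's' else 0.0}
    assert counit_left(x) == expected and counit_right(x) == expected
```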

November 10, 2008 Posted by | Algebra | 7 Comments

Hopf Algebras

One more piece of structure we need. We take a bialgebra H, and we add an “antipode”, which behaves sort of like an inverse operation. Then what we have is a Hopf algebra.

An antipode will be a linear map S:H\rightarrow H on the underlying vector space. Here’s what we mean by saying that an antipode “behaves like an inverse”. In formulas, we write that:

\mu\circ(S\otimes1_H)\circ\Delta=\iota\circ\epsilon=\mu\circ(1_H\otimes S)\circ\Delta

On either side, first we comultiply an algebra element to split it into two parts. Then we use S on one or the other part before multiplying them back together. In the center, this is the same as first taking the counit to get a field element, and then multiplying that by the unit of the algebra.

By now it shouldn’t be a surprise that the group algebra \mathbb{F}[G] is also a Hopf algebra. Specifically, we set S(e_g)=e_{g^{-1}}. Then we can check the “left inverse” law:

\begin{aligned}\mu\left(\left[S\otimes1_H\right]\left(\Delta(e_g)\right)\right)=\mu\left(\left[S\otimes1_H\right](e_g\otimes e_g)\right)=\\\mu(e_{g^{-1}}\otimes e_g)=e_{g^{-1}g}=e_1=\iota(1)=\iota\left(\epsilon(e_g)\right)\end{aligned}
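
This computation is easy to mechanize for a small group (a sketch; \mathbb{Z}/3 and all the variable names are my choices). We check \mu\circ(S\otimes1_H)\circ\Delta=\iota\circ\epsilon on a general element of \mathbb{F}[\mathbb{Z}/3]:

```python
import numpy as np

n = 3  # work in the group algebra F[Z/3], with basis e_0, e_1, e_2

def mul(g, h):
    # multiplication on basis elements: e_g e_h = e_{g+h}
    return (g + h) % n

def antipode(g):
    # S(e_g) = e_{g^{-1}}
    return (-g) % n

# Take a general element x = sum_g x_g e_g.  Since Delta(e_g) = e_g (x) e_g,
# the composite mu o (S (x) 1) o Delta sends e_g to e_{g^{-1} g} = e_0.
rng = np.random.default_rng(1)
x = rng.normal(size=n)

out = np.zeros(n)
for g in range(n):
    out[mul(antipode(g), g)] += x[g]

# iota(eps(x)) = eps(x) e_0, where the counit has eps(e_g) = 1 for every g
expected = np.zeros(n)
expected[0] = x.sum()
assert np.allclose(out, expected)
```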

One thing that we should point out: this is not a group object in the category of vector spaces over \mathbb{F}. A group object needs the diagonal we get from the finite products on the target category. But in the category of vector spaces we pointedly do not use the categorical product as our monoidal structure. There is no “diagonal” for the tensor product.

Instead, we move to the category of coalgebras over \mathbb{F}. Now each coalgebra C comes with its own comultiplication \Delta:C\rightarrow C\otimes C, which stands in for the diagonal. In the case of \mathbb{F}[G] we’ve been considering, this comultiplication is clearly related to the diagonal on the underlying set of the group G. In fact, it’s not going too far to say that “linearizing” a set naturally brings along a coalgebra structure on top of the vector space structure we usually consider. But many coalgebras, bialgebras, and Hopf algebras are not such linearized sets.

In the category of coalgebras over \mathbb{F}, a Hopf algebra is a group object, so long as we use the comultiplications and counits that come with the coalgebras instead of the ones that come from the categorical product structure. Dually, we can characterize a Hopf algebra as a cogroup object in the category of algebras over \mathbb{F}, subject to a similar caveat. It is this cogroup structure that will be important moving forwards.

November 7, 2008 Posted by | Algebra | 9 Comments

Bialgebras


In yesterday’s post I used the group algebra \mathbb{F}[G] of a group G as an example of a coalgebra. In fact, more is true.

A bialgebra is a vector space A equipped with both the structure of an algebra and the structure of a coalgebra, where these two structures are “compatible” in a certain sense. The traditional definitions usually consist of laying out the algebra maps and relations, then the coalgebra maps and relations. Then they state that the algebra structure preserves the coalgebra structure, and that the coalgebra structure preserves the algebra structure, and they note that really you only need to require one of these last two conditions because they turn out to be equivalent.

In fact, our perspective allows this equivalence to come to the fore. The algebra structure makes the bialgebra a monoid object in the category of vector spaces over \mathbb{F}. Then a compatible coalgebra structure makes it a comonoid object in the category of algebras over \mathbb{F}. Or in the other order, we have a monoid object in the category of comonoid objects in the category of vector spaces over \mathbb{F}. And these describe essentially the same things because internalizations commute!

Okay, let’s be explicit about what we mean by “compatibility”. This just means that — on the one side — the coalgebra maps are not just linear maps between the underlying vector spaces, but actually are algebra homomorphisms. On the other side, it means that the algebra maps are actually coalgebra homomorphisms.

Multiplication and comultiplication being compatible actually mean the same thing. Take two algebra elements and multiply them, then comultiply the result. Alternatively, comultiply each of them, and then multiply corresponding factors of the result. We should get the same answer whether we multiply or comultiply first. That is: \Delta\circ\mu=(\mu\otimes\mu)\circ(1_A\otimes\tau_{A,A}\otimes1_A)\circ(\Delta\otimes\Delta), where \tau is the twist map, exchanging two factors.

Let’s check this condition for the group algebra \mathbb{F}[G]:

\begin{aligned}\left[\mu\otimes\mu\right]\left(\left[1_A\otimes\tau_{A,A}\otimes1_A\right]\left(\left[\Delta\otimes\Delta\right](e_g\otimes e_h)\right)\right)=\\\left[\mu\otimes\mu\right]\left(\left[1_A\otimes\tau_{A,A}\otimes1_A\right](e_g\otimes e_g\otimes e_h\otimes e_h)\right)=\\\left[\mu\otimes\mu\right](e_g\otimes e_h\otimes e_g\otimes e_h)=e_{gh}\otimes e_{gh}=\\\Delta(e_{gh})=\Delta\left(\mu(e_g\otimes e_h)\right)\end{aligned}

Similarly, if we multiply two algebra elements and then take the counit, it should be the same as the product (in \mathbb{F}) of the counits of the elements. Dually, the product of two copies of the algebra unit should be the algebra unit again, and the counit of the algebra unit should be the unit in \mathbb{F}. It’s straightforward to verify that these hold for \mathbb{F}[G].
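
The compatibility condition can also be checked mechanically on basis elements, following the displayed calculation (a sketch; the helper names are mine, and \mathbb{Z}/2 stands in for a general group):

```python
n = 2  # the group algebra F[Z/2]: mu(e_g, e_h) = e_{g+h mod 2}

def mul(g, h):
    return (g + h) % n

def delta(g):
    # Delta(e_g) = e_g (x) e_g, as a list of Sweedler-style terms
    return [(g, g)]

def comultiply_product(g, h):
    # Delta o mu: multiply first, then comultiply
    return delta(mul(g, h))

def multiply_coproducts(g, h):
    # (mu (x) mu) o (1 (x) tau (x) 1) o (Delta (x) Delta):
    # comultiply each factor, twist the middle tensorands, multiply slotwise
    out = []
    for g1, g2 in delta(g):
        for h1, h2 in delta(h):
            out.append((mul(g1, h1), mul(g2, h2)))
    return out

# both orders agree on every pair of basis elements
for g in range(n):
    for h in range(n):
        assert comultiply_product(g, h) == multiply_coproducts(g, h)
```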

November 6, 2008 Posted by | Algebra, Category theory | 5 Comments