The Unapologetic Mathematician

Mathematics for the interested outsider

Sweedler notation

As we work with coalgebras, we’ll need a nice way to write out the comultiplication of an element. In the group algebra we’ve been using as an example, we just have \Delta(e_g)=e_g\otimes e_g, but not all elements are so cleanly sent to two copies of themselves. And comultiplications in other coalgebras aren’t even defined so nicely on any basis. So we introduce the so-called “Sweedler notation”. If you didn’t like the summation convention, you’re going to hate this.
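For instance, many coalgebras contain an element X with

\displaystyle\Delta(X)=X\otimes1+1\otimes X

which is already a sum of two terms, and neither term is X\otimes X.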

Okay, first of all, we know that the comultiplication of an element c\in C is an element of the tensor square C\otimes C. Thus it can be written as a finite sum

\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}a_i\otimes b_i

Now, this uses two whole new letters, a and b, which might be really awkward to come up with in practice. Instead, let’s call them c_{(1)} and c_{(2)}, to denote the first and second factors of the comultiplication. We’ll also move the indices to superscripts, just to get them out of the way.

\displaystyle\Delta(c)=\sum\limits_{i=1}^{n(c)}c_{(1)}^i\otimes c_{(2)}^i

The whole index-summing thing is a bit awkward, especially because the number of summands is different for each coalgebra element c. Let’s just say we’re adding up all the terms we need to for a given c:

\displaystyle\Delta(c)=\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}

Then if we’re really pressed for space we can just write \Delta(c)=c_{(1)}\otimes c_{(2)}. Since we don’t use a subscript in parentheses for anything else, we remember that this is implicitly a summation.
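As a sanity check, go back to the group algebra: for c=e_g the comultiplication already consists of a single term, so the implicit sum has just one summand and

\displaystyle\sum\limits_{(e_g)}(e_g)_{(1)}\otimes(e_g)_{(2)}=e_g\otimes e_g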

Let’s check out the counit laws (1_M\otimes\epsilon)\circ\Delta=1_M=(\epsilon\otimes1_M)\circ\Delta in this notation. Now they read c_{(1)}\epsilon(c_{(2)})=c=\epsilon(c_{(1)})c_{(2)}. Or, more expansively:

\displaystyle\sum\limits_{(c)}c_{(1)}\epsilon\left(c_{(2)}\right)=c=\sum\limits_{(c)}\epsilon\left(c_{(1)}\right)c_{(2)}
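In the group algebra, for instance, the counit sends each basis element e_g to 1, so for c=e_g both sides just collapse back to

\displaystyle e_g\epsilon(e_g)=e_g=\epsilon(e_g)e_g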

Similarly, the coassociativity condition now reads

\displaystyle\sum\limits_{(c)}\left(\sum\limits_{\left(c_{(1)}\right)}\left(c_{(1)}\right)_{(1)}\otimes\left(c_{(1)}\right)_{(2)}\right)\otimes c_{(2)}=\sum\limits_{(c)}c_{(1)}\otimes\left(\sum\limits_{\left(c_{(2)}\right)}\left(c_{(2)}\right)_{(1)}\otimes\left(c_{(2)}\right)_{(2)}\right)

In the Sweedler notation we’ll write both of these equal sums as

\displaystyle\sum\limits_{(c)}c_{(1)}\otimes c_{(2)}\otimes c_{(3)}

Or more simply as c_{(1)}\otimes c_{(2)}\otimes c_{(3)}.
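In the group algebra example this is as simple as it could be: comultiplying e_g twice, in either order, gives the single term

\displaystyle e_g\otimes e_g\otimes e_g

so that (e_g)_{(1)}=(e_g)_{(2)}=(e_g)_{(3)}=e_g.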

As a bit more practice, let’s write out the condition that a linear map f:C\rightarrow D between coalgebras is a coalgebra morphism. The answer is that f must satisfy

f\left(c_{(1)}\right)\otimes f\left(c_{(2)}\right)=f(c)_{(1)}\otimes f(c)_{(2)}

Notice that there are implied summations here. We are not asserting that all the summands are equal, and definitely not that f\left(c_{(1)}\right)=f(c)_{(1)} (for instance). Sweedler notation hides a lot more than the summation convention ever did, but it’s still possible to expand it back out to a proper summation-heavy format when we need to.
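To make that last point concrete, here’s a quick throwaway sketch in Python — nothing canonical about the choices; the cyclic group \mathbb{Z}/3, the coefficients, and the function names are all just illustrative — showing that “expanding it back out” really is just bookkeeping. We store \Delta(c) as an explicit dictionary of tensorands and verify the left counit law by literally summing c_{(1)}\epsilon(c_{(2)}) over all the summands.

from collections import defaultdict
from fractions import Fraction

# Group algebra of Z/3: an element is a dict {g: coefficient} over the basis e_0, e_1, e_2.

def delta(c):
    # Comultiplication, stored as an explicit finite sum of tensorands:
    # a dict {(g, h): coefficient} standing for sum of coeff * e_g (x) e_h.
    # Here Delta(e_g) = e_g (x) e_g, extended linearly.
    out = defaultdict(Fraction)
    for g, coeff in c.items():
        out[(g, g)] += coeff
    return dict(out)

def epsilon(c):
    # Counit: epsilon(e_g) = 1, extended linearly.
    return sum(c.values(), Fraction(0))

def left_counit(c):
    # The left counit law written out as a real sum over all Sweedler
    # summands: sum over (c) of c_(1) * epsilon(c_(2)).
    out = defaultdict(Fraction)
    for (g, h), coeff in delta(c).items():
        out[g] += coeff * epsilon({h: Fraction(1)})
    return {g: a for g, a in out.items() if a != 0}

c = {0: Fraction(2), 1: Fraction(-5)}   # the element 2e_0 - 5e_1
assert left_counit(c) == c              # ...and we get c back, as promised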

November 10, 2008 - Posted by | Algebra

7 Comments

  1. And unsurprisingly, now that I’ve seen the Sweedler notation, I find it about as horrible and unreadable as ever I found the summation convention.

    Give me my sum signs. By all means fudge the decorations as long as there is a context, but having at least ONE sigma hanging around and a bunch of indexing variables vaguely indicated makes it all more readable and not less.

    And I had this rant a while back too, and I know we disagree on this.

    Comment by Mikael Vejdemo Johansson | November 11, 2008

  2. Actually, I rather dislike Sweedler notation myself. But for writing out formulas it’s sort of difficult to escape. The thing is, I’ve found that if you’re writing out a lot of things in Sweedler notation, you’re thinking too explicitly. Similarly, if you’re writing out a lot of matrix indices, you’re missing the point.

    As for actual use of Sweedler notation, I need it for writing the two equivalent monoidal triple products of three representations. Beyond that, I hope to not need it again.

    Comment by John Armstrong | November 11, 2008

  3. […] remember that this doesn’t mean that the two tensorands are always equal, but only that the results after (implicitly) summing up […]

    Pingback by Cocommutativity « The Unapologetic Mathematician | November 19, 2008

  4. I personally love the summation convention, and loved it from the first time I encountered it. The problem I had with Sweedler notation is that nobody seems to explain it in detail. However I think I get it now – and I think I’ll learn to like it.

    Comment by Blake | December 4, 2008

  5. So am I correct to understand that, when we write c_{(1)}) \otimes c_{(2)}) and c_{(1)}) \otimes c_{(2)}) \otimes c_{(3)}, the c_{(2)} in the former is different from the c_{(2)} in the latter? That is, there’s an implicit additional index here that you (hopefully) get by counting up the size of the tensor product you’re working in?

    Comment by Daniel | March 26, 2011

  6. (modulo the stray parens that I managed to put in there somehow…)

    Comment by Daniel | March 26, 2011

  7. That’s correct, Daniel, and that’s one of the most confusing things about it. In fact, in c_{(1)} and c_{(2)} the 1 and 2 aren’t really index values at all, and c_{(1)} and c_{(2)} aren’t particular values of some indexed quantity.

    For example, there’s a certain common situation where we write \Delta(X)=X\otimes1+1\otimes X. In this case, c_{(1)} can be X and c_{(2)} can be 1, or vice versa, depending on which of the two summands we’re talking about.

    Comment by John Armstrong | March 26, 2011
