## Sweedler notation

As we work with coalgebras, we’ll need a nice way to write out the comultiplication of an element. In the group algebra we’ve been using as an example, we just have $\Delta(g) = g \otimes g$, but not all elements are so cleanly sent to two copies of themselves. And comultiplications in other coalgebras aren’t even defined so nicely on any basis. So we introduce the so-called “Sweedler notation”. If you didn’t like the summation convention, you’re going to hate this.

Okay, first of all, we know that the comultiplication of an element $c$ of a coalgebra $C$ is an element of the tensor square $C \otimes C$. Thus it can be written as a finite sum

$$\Delta(c) = \sum_{i=1}^n a_i \otimes b_i$$

Now, this uses two whole new letters, $a$ and $b$, which might be really awkward to come up with in practice. Instead, let’s call them $c_{(1)}$ and $c_{(2)}$, to denote the first and second factors of the comultiplication. We’ll also move the indices to superscripts, just to get them out of the way:

$$\Delta(c) = \sum_{i=1}^n c_{(1)}^i \otimes c_{(2)}^i$$

The whole index-summing thing is a bit awkward, especially because the number of summands $n$ is different for each coalgebra element $c$. Let’s just say we’re adding up all the terms we need to for a given $c$:

$$\Delta(c) = \sum_{(c)} c_{(1)} \otimes c_{(2)}$$

Then if we’re really pressed for space we can just write $\Delta(c) = c_{(1)} \otimes c_{(2)}$. Since we don’t use a subscript in parentheses for anything else, we remember that this is implicitly a summation.
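To make the notation concrete, here’s a toy model (a hypothetical representation, not any library’s API): a Sweedler sum $\sum_{(c)} c_{(1)} \otimes c_{(2)}$ is stored as a list of tensor summands, one pair per term of the sum.

```python
# A Sweedler sum sum_(c) c_(1) (x) c_(2) modelled as a list of
# (c1, c2) tensor summands. (Hypothetical toy representation.)

def delta_group_algebra(g):
    """In the group algebra every basis element g is grouplike:
    Delta(g) = g (x) g, a Sweedler sum with a single summand."""
    return [(g, g)]

print(delta_group_algebra("g"))  # [('g', 'g')]
```

A general coalgebra element would produce a longer list, with a different number of summands for each element, which is exactly the bookkeeping the $\sum_{(c)}$ sign hides.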

Let’s check out the counit laws in this notation. Now they read $\epsilon(c_{(1)})c_{(2)} = c = c_{(1)}\epsilon(c_{(2)})$. Or, more expansively:

$$\sum_{(c)} \epsilon(c_{(1)})c_{(2)} = c = \sum_{(c)} c_{(1)}\epsilon(c_{(2)})$$
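We can sanity-check the left counit law on the group-algebra example, where $\epsilon(g) = 1$ and $\Delta(g) = g \otimes g$. In this sketch (a toy, not a library API) linear combinations are modelled as dicts mapping basis elements to coefficients.

```python
# Checking the left counit law sum_(c) eps(c_(1)) c_(2) = c on a
# basis element of the group algebra. (Toy sketch.)

def delta(g):
    return [(g, g)]   # Delta(g) = g (x) g

def eps(g):
    return 1          # counit of the group algebra: eps(g) = 1

def apply_left_counit(g):
    """Compute sum_(c) eps(c_(1)) c_(2) as a linear combination,
    represented as a dict {basis element: coefficient}."""
    result = {}
    for c1, c2 in delta(g):
        result[c2] = result.get(c2, 0) + eps(c1)
    return result

# The law says this equals g itself, i.e. coefficient 1 on g:
print(apply_left_counit("g"))  # {'g': 1}
```

The right counit law is checked the same way, applying $\epsilon$ to the second tensorand instead of the first.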

Similarly, the coassociativity condition now reads

$$\sum_{(c)} \left(\sum_{(c_{(1)})} \left(c_{(1)}\right)_{(1)} \otimes \left(c_{(1)}\right)_{(2)}\right) \otimes c_{(2)} = \sum_{(c)} c_{(1)} \otimes \left(\sum_{(c_{(2)})} \left(c_{(2)}\right)_{(1)} \otimes \left(c_{(2)}\right)_{(2)}\right)$$

In the Sweedler notation we’ll write both of these equal sums as

$$\sum_{(c)} c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$$

Or more simply as $c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$.
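Continuing the toy model from above (still a hypothetical sketch), coassociativity says that comultiplying the first tensorand of each summand gives the same triple sum as comultiplying the second, which is what licenses the single symbol $c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$.

```python
# Checking (Delta (x) id) o Delta = (id (x) Delta) o Delta on a
# grouplike element, with Sweedler sums as lists of summands.

def delta(g):
    return [(g, g)]   # Delta(g) = g (x) g

def delta_then_left(g):
    """sum (c_(1))_(1) (x) (c_(1))_(2) (x) c_(2)"""
    return [(d1, d2, c2) for c1, c2 in delta(g) for d1, d2 in delta(c1)]

def delta_then_right(g):
    """sum c_(1) (x) (c_(2))_(1) (x) (c_(2))_(2)"""
    return [(c1, d1, d2) for c1, c2 in delta(g) for d1, d2 in delta(c2)]

# Both give the same triple Sweedler sum c_(1) (x) c_(2) (x) c_(3):
print(delta_then_left("g") == delta_then_right("g"))  # True
```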

As a bit more practice, let’s write out the condition that a linear map $f: C \to D$ between coalgebras is a coalgebra morphism. The answer is that $f$ must satisfy

$$f(c)_{(1)} \otimes f(c)_{(2)} = f\left(c_{(1)}\right) \otimes f\left(c_{(2)}\right)$$

Notice that there are implied summations here. We are *not* asserting that all the summands are equal, and definitely not that $f(c)_{(1)} = f\left(c_{(1)}\right)$ (for instance). Sweedler notation hides a lot more than the summation convention ever did, but it’s still possible to expand it back out to a proper summation-heavy format when we need to.

And unsurprisingly, now that I’ve seen the Sweedler notation, I find it about as horrible and unreadable as ever I found the summation convention.

Give me my sum signs. By all means fudge the decorations as long as there is a context, but having at least ONE sigma hanging around and a bunch of indexing variables vaguely indicated makes it all more readable and not less.

And I had this rant a while back too, and I know we disagree on this.

Comment by Mikael Vejdemo Johansson | November 11, 2008 |

Actually, I rather dislike Sweedler notation myself. But for writing out formulas it’s sort of difficult to escape. The thing is, I’ve found that if you’re writing out a lot of things in Sweedler notation, you’re thinking too explicitly. Similarly, if you’re writing out a lot of matrix indices, you’re missing the point.

As for actual use of Sweedler notation, I need it for writing the two equivalent monoidal triple products of three representations. Beyond that, I hope to not need it again.

Comment by John Armstrong | November 11, 2008 |

[...] remember that this doesn’t mean that the two tensorands are always equal, but only that the results after (implicitly) summing up [...]

Pingback by Cocommutativity « The Unapologetic Mathematician | November 19, 2008 |

I personally love the summation convention, and loved it from the first time I encountered it. The problem I had with Sweedler notation is that nobody seems to explain it in detail. However I think I get it now – and I think I’ll learn to like it.

Comment by Blake | December 4, 2008 |

So am I correct to understand that, when we write $c_{(1)} \otimes c_{(2)}$ and $c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$, the $c_{(1)}$ in the former is different from the $c_{(1)}$ in the latter? That is, there’s an implicit additional index here that you (hopefully) get by counting up the size of the tensor product you’re working in?

Comment by Daniel | March 26, 2011 |

(modulo the stray parens that I managed to put in there somehow…)

Comment by Daniel | March 26, 2011 |

That’s correct, Daniel, and that’s one of the most confusing things about it. In fact, in $c_{(1)} \otimes c_{(2)}$ and $c_{(1)} \otimes c_{(2)} \otimes c_{(3)}$ the $(1)$ and $(2)$ aren’t really index values at all, and $c_{(1)}$ and $c_{(2)}$ aren’t particular values of some indexed quantity.

For example, there’s a certain common situation where we write $\Delta(c) = a \otimes b + b \otimes a$. In this case, $c_{(1)}$ can be $a$ and $c_{(2)}$ can be $b$, or vice versa, depending on which of the two summands we’re talking about.

Comment by John Armstrong | March 26, 2011 |