## Intertwiner Spaces

Another grading day, another straightforward post. It should come as no surprise that the collection of intertwining maps between any two representations forms a vector space.

Let’s fix representations $\rho: A \to \mathrm{End}(V)$ and $\sigma: A \to \mathrm{End}(W)$. We already know that $\hom(V, W)$ is a vector space. We also know that an intertwiner $f: \rho \to \sigma$ can be identified with a linear map $f: V \to W$. What I’m asserting is that $\hom(\rho, \sigma)$ is actually a *subspace* of $\hom(V, W)$ under this identification.

Indeed, all we really need to check is that this subset is closed under addition and under scalar multiplication. For the latter, let’s say that $f$ is an intertwiner. That is, $\sigma(a)\circ f = f\circ\rho(a)$ for every $a$. Then given a constant $c$ we consider the linear map $cf$ and calculate

$$\sigma(a)\circ(cf) = c\left(\sigma(a)\circ f\right) = c\left(f\circ\rho(a)\right) = (cf)\circ\rho(a)$$

And so $cf$ is an intertwiner as well.

Now if $f$ and $g$ are both intertwiners, satisfying conditions like the one above, we consider their sum $f+g$ and calculate

$$\sigma(a)\circ(f+g) = \sigma(a)\circ f + \sigma(a)\circ g = f\circ\rho(a) + g\circ\rho(a) = (f+g)\circ\rho(a)$$

Which shows that $f+g$ is again an intertwiner.
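As a quick numerical sketch of these two closure properties (the group, action, and maps here are hypothetical examples, not from the post): let $\mathbb{Z}/2$ act on a two-dimensional space by swapping coordinates, on both source and target, and check that scalar multiples and sums of intertwiners still intertwine.

```python
# Sketch: intertwiners are closed under scalar multiples and sums.
# Hypothetical example: Z/2 acts on a 2-dimensional space by swapping coordinates.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

swap = [[0, 1], [1, 0]]   # rho(g) = sigma(g): swap the two coordinates
f = [[1, 0], [0, 1]]      # the identity map intertwines
g = [[1, 1], [1, 1]]      # so does the "sum of coordinates in each slot" map

def intertwines(h):
    """Check sigma(g) . h == h . rho(g) for the one nontrivial group element."""
    return matmul(swap, h) == matmul(h, swap)

assert intertwines(f) and intertwines(g)
assert intertwines(scale(3, f))   # closed under scalar multiplication
assert intertwines(add(f, g))     # closed under addition
```

Only the one nontrivial group element needs checking here, since the identity element intertwines with everything automatically.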

Since composition of intertwiners is the same as composing their linear maps, it’s also bilinear. It immediately follows that the category of representations of $A$ is enriched over $\mathbf{Vect}$.

## Math and Philosophy

An aspiring philosopher of mathematics got hold of my post on categorification, and it’s leading to (what I find to be) an interesting discussion of how Hume’s Principle fits into the story, and just what the Peano axioms (not to mention “the natural numbers”) mean. Follow along (or jump in!) at *if-then knots* — an excellent title, in my opinion.

## Direct Sums of Representations

We know that we can take direct sums of vector spaces. Can we take representations $\rho$ (on the vector space $V$) and $\sigma$ (on the vector space $W$) and use them to put a representation on $V \oplus W$? Of course we can, or I wouldn’t be making this post!

This is even easier than tensor products were, and we don’t even need $A$ to be a bialgebra. An element of $V \oplus W$ is just a pair $(v, w)$ with $v \in V$ and $w \in W$. We simply follow our noses to define

$$\left[\left(\rho\oplus\sigma\right)(a)\right](v, w) = \left(\rho(a)v, \sigma(a)w\right)$$

The important thing to notice here is that the direct summands $V$ and $W$ do not interact with each other in the direct sum $V \oplus W$. This is very different from tensor products, where the tensorands $V$ and $W$ are very closely related in the tensor product $V \otimes W$. If you’ve seen a bit of pop quantum mechanics, this is *exactly* the reason quantum systems exhibit entanglement while classical systems don’t.
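The definition above amounts to a block-diagonal action: each summand is acted on independently. Here is a minimal sketch (the matrices for $\rho(a)$ and $\sigma(a)$ are hypothetical examples):

```python
# Sketch of the direct sum action: (rho (+) sigma)(a) acts block-diagonally,
# so the summands never interact.  The matrices below are hypothetical examples.

def matvec(A, v):
    """Apply a matrix (nested lists) to a vector (tuple)."""
    return tuple(sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A)))

rho_a = [[2, 1], [0, 1]]   # rho(a) acting on V = C^2
sigma_a = [[3]]            # sigma(a) acting on W = C^1

def direct_sum_action(v, w):
    """[(rho (+) sigma)(a)](v, w) = (rho(a)v, sigma(a)w)."""
    return (matvec(rho_a, v), matvec(sigma_a, w))

print(direct_sum_action((1, 2), (5,)))   # → ((4, 2), (15,))
```

Notice that the first component of the output depends only on $v$ and the second only on $w$, which is exactly the non-interaction described above.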

Okay, so we have a direct sum of representations. Is it a biproduct? Luckily, we don’t have to bother with universal conditions here, because a biproduct can be defined purely in terms of the inclusion morphisms $\iota_i$ and the projection morphisms $\pi_i$. And we automatically have the candidates for the proper morphisms sitting around: the inclusion and projection morphisms on the underlying vector spaces! All we need to do is check that they intertwine representations, and we’re done. And we really only need to check that the first inclusion $\iota_1$ and projection $\pi_1$ work, because all the others are pretty much the same.

So, we’ve got $\iota_1: V \to V \oplus W$ defined by $\iota_1(v) = (v, 0)$. Following this with the action on $V \oplus W$ we get

$$\left[\left(\rho\oplus\sigma\right)(a)\right](v, 0) = \left(\rho(a)v, 0\right)$$

But this is the same as if we applied $\iota_1$ to $\rho(a)v$. Thus, $\iota_1$ is an intertwiner.

On the other hand, we have $\pi_1: V \oplus W \to V$, defined by $\pi_1(v, w) = v$. Acting now by $\rho(a)$ we get $\rho(a)v$, while if we acted by $\left(\rho\oplus\sigma\right)(a)$ beforehand we’d get

$$\pi_1\left(\rho(a)v, \sigma(a)w\right) = \rho(a)v$$

Just as we want.

The upshot is that taking the direct sum of two representations in this manner *is* a biproduct on the category of representations.
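The two intertwining checks above, plus one of the biproduct identities, can be verified numerically in a few lines (the matrices are the same kind of hypothetical examples as before):

```python
# Check that the inclusion and projection intertwine the block-diagonal
# direct-sum action, and that pi1 . iota1 is the identity on V.
# The matrices for rho(a) and sigma(a) are hypothetical examples.

def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A)))

rho_a = [[2, 1], [0, 1]]   # action on V = C^2
sigma_a = [[3]]            # action on W = C^1

def ds(v, w):              # [(rho (+) sigma)(a)](v, w)
    return (matvec(rho_a, v), matvec(sigma_a, w))

def iota1(v):              # V -> V (+) W,  v |-> (v, 0)
    return (v, (0,))

def pi1(p):                # V (+) W -> V,  (v, w) |-> v
    return p[0]

for v in [(1, 0), (0, 1), (2, -3)]:
    assert ds(*iota1(v)) == iota1(matvec(rho_a, v))   # iota1 intertwines
    assert pi1(iota1(v)) == v                         # pi1 . iota1 = identity
for p in [((1, 2), (5,)), ((0, 1), (-1,))]:
    assert pi1(ds(*p)) == matvec(rho_a, p[0])         # pi1 intertwines
```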

## The Zero Representation

Okay, this is going to sound pretty silly and trivial, but I’ve been grading today. There is one representation we always have for any group, called the zero representation $\mathbf{0}$.

Pretty obviously this is built on the unique zero-dimensional vector space $\mathbf{0}$. It shouldn’t be hard to convince yourself that $\mathrm{GL}(\mathbf{0})$ is the trivial group, and so any group $G$ has a unique homomorphism to this group. Thus there is a unique representation of $G$ on the vector space $\mathbf{0}$.

We should immediately ask: is this representation a zero object? Suppose we have a representation $\rho: G \to \mathrm{GL}(V)$. Then there is a unique arrow $V \to \mathbf{0}$ sending every vector to $0$. Similarly, there is a unique arrow $\mathbf{0} \to V$ sending the only vector $0$ to the zero vector $0 \in V$. It’s straightforward to show that these linear maps are intertwiners, and thus that the zero representation is indeed a zero object for the category of representations of $G$.

This is all well and good for groups, but what about representing an algebra $A$? This can only make sense if we allow rings without unit, which I only really mentioned back when I first defined a ring. This is because there’s only one endomorphism of the zero-dimensional vector space at all! The endomorphism algebra $\mathrm{End}(\mathbf{0})$ will consist of just the element $0$, and a representation of $A$ has to be an algebra homomorphism to this *non-unital* algebra. Given this allowance, we do have the zero representation, and it’s a zero object just as for groups. It’s sort of convenient, so we’ll tacitly allow this one non-unital algebra to float around just so we can have our zero representation, even if we allow no other algebras without units.

## Subrepresentations and Quotient Representations

Today we consider subobjects and quotient objects in the category of representations of an algebra $A$. Since the objects are representations we call these “subrepresentations” and “quotient representations”.

As in any category, a subobject $\sigma$ (on the vector space $W$) of a representation $\rho$ (on the vector space $V$) is a monomorphism $f: \sigma \to \rho$. This natural transformation is specified by a single linear map $f: W \to V$. It’s straightforward to show that if $f$ is to be left-cancellable as an intertwiner, it must be left-cancellable as a linear map. That is, it must be an injective linear transformation from $W$ to $V$.

Thus we can identify $W$ with its image subspace $f(W) \subseteq V$. Even better, the naturality condition means that we can identify the action of $\sigma$ on $W$ with the restriction of $\rho$ to this subspace. The result is that we can define a subrepresentation of $\rho$ as a subspace $W \subseteq V$ so that $\rho(a)$ actually sends $W$ into itself for every $a \in A$. That is, it’s a subspace which is fixed by the action $\rho$.

If $W$ is a subrepresentation of $V$, then we can put the structure of a representation on the quotient space $V/W$. Indeed, note that any vector in the quotient space is the coset $v + W$ of a vector $v \in V$. We define the quotient action using the action on $V$: $\rho(a)(v + W) = \rho(a)v + W$. But what if $v + w$ (with $w \in W$) is another representative of the same coset? Then we calculate:

$$\rho(a)(v + w) + W = \rho(a)v + \rho(a)w + W = \rho(a)v + W$$

because $\rho(a)$ sends the subspace $W$ back to itself.
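A small numerical sketch of this well-definedness check (the action and the invariant subspace are hypothetical examples): an upper-triangular matrix fixes $W = \mathrm{span}\{(1,0)\}$, and two representatives of the same coset land in the same coset after acting.

```python
# Sketch: W = span{(1, 0)} is invariant under the hypothetical action below,
# and the induced action on V/W is independent of the chosen representative.

def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

rho_a = [[1, 2], [0, 3]]   # upper triangular, so it sends W back into W

def same_coset(u, v):
    """u + W == v + W  iff  u - v lies in W = span{(1, 0)}."""
    return u[1] == v[1]

v = (4, 5)
w = (7, 0)                               # an element of W
v_prime = (v[0] + w[0], v[1] + w[1])     # another representative of v + W

assert same_coset(v, v_prime)
# both representatives land in the same coset after acting:
assert same_coset(matvec(rho_a, v), matvec(rho_a, v_prime))
```

The second coordinate here is exactly the coordinate on $V/W$, and the induced quotient action is just multiplication by the bottom-right entry.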

## Some Symmetric Group Representations

Tuesday, we talked about tensor powers of the standard representation of $\mathrm{GL}(V)$. Today we’ll look at some representations of the symmetric group $S_n$ and see how they’re related.

We start by looking at the $n$th tensor power $V^{\otimes n}$. Now since the category of vector spaces is symmetric, we get a representation of the symmetric group $S_n$ on this space. Indeed, we just use the element of the symmetric group to permute the tensorands. That is, given $\pi \in S_n$ and a pure tensor $v_1 \otimes \cdots \otimes v_n$ we define the representation $\tau$ by

$$\tau(\pi)\left(v_1 \otimes \cdots \otimes v_n\right) = v_{\pi^{-1}(1)} \otimes \cdots \otimes v_{\pi^{-1}(n)}$$

Indeed, it’s straightforward to check that $\tau(\pi_1)\tau(\pi_2) = \tau(\pi_1\pi_2)$, given the convention we’ve picked for symmetric group composition.

This representation of $S_n$ on $V^{\otimes n}$ is almost trivial, so why do we care? Well, it turns out that every single one of the transformations $\tau(\pi)$ in the representation commutes with the action of $\mathrm{GL}(V)$ on $V^{\otimes n}$! Indeed, because of the way we defined the group to act on the tensor powers by doing the exact same thing to each tensorand, we can shuffle around the tensorands and get the same result. If you’re still not convinced, write out the square that needs to commute and verify both compositions.
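Writing out that commuting square in coordinates is easy to check by machine. Here’s a sketch for $n = 2$ and $\dim V = 2$ (the matrix $g$ is a hypothetical example): a tensor is stored by its components $t_{ij}$, the transposition in $S_2$ swaps the two indices, and $g$ acts as $g \otimes g$.

```python
# Numerical sketch: the permutation action on V (x) V commutes with the
# diagonal action g (x) g of GL(V).  The matrix g is a hypothetical example.

from itertools import product

dim = 2
g = [[1, 2], [3, 5]]   # an invertible matrix acting on V

def g_tensor_g(t):
    """Act by g on each tensorand: apply g (x) g to the components t[(i, j)]."""
    out = {(i, j): 0 for i, j in product(range(dim), repeat=2)}
    for i, j in product(range(dim), repeat=2):
        for k, l in product(range(dim), repeat=2):
            out[(i, j)] += g[i][k] * g[j][l] * t[(k, l)]
    return out

def tau_swap(t):
    """The transposition in S_2 permutes the two tensorands."""
    return {(j, i): c for (i, j), c in t.items()}

# a generic element of V (x) V, including an "entangled" part
t = {(0, 0): 1, (0, 1): 2, (1, 0): 0, (1, 1): -1}
assert g_tensor_g(tau_swap(t)) == tau_swap(g_tensor_g(t))   # the square commutes
```

For larger $n$ the same check works permutation by permutation, with $g^{\otimes n}$ acting index-by-index.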

In fact, there’s a really beautiful theorem (that I’m not about to prove here (yet)) that says the situation is even nicer. Let’s consider specifically $V = \mathbb{C}^m$, a finite-dimensional vector space over the complex numbers. Then the representations of (the group algebras of) $S_n$ and $\mathrm{GL}(V)$ determine subalgebras (call them $A$ and $B$, respectively) of the endomorphism algebra $\mathrm{End}(V^{\otimes n})$. And each one is the “centralizer” of the other. That is, $A$ is the subalgebra consisting of all elements of $\mathrm{End}(V^{\otimes n})$ which commute with every element of $B$, and vice versa. This situation is called “Schur-Weyl duality”, and it turns out to be fantastically useful in studying representations of both the symmetric groups and the general linear groups.

## Some Representations of the General Linear Group

Sorry for the delays, but it’s the last week of class and everyone came back from the break in a panic.

Okay, let’s look at some examples of group representations. Specifically, let’s take a vector space $V$ and consider its general linear group $\mathrm{GL}(V)$.

This group comes equipped with a representation already, on the vector space $V$ itself! Just use the identity homomorphism $\mathrm{GL}(V) \to \mathrm{GL}(V)$. We often call this the “standard” or “defining” representation. In fact, it’s easy to forget that it’s a representation at all. But it is.

As with any other group, we have dual representations. That is, we immediately get an action of on . And we’ve seen it already! When we talked about the coevaluation on vector spaces we worked out how a change of basis affects linear functionals. What we found is that if is our action on , then the action on is by the transpose — the dual — of . And this is exactly the dual representation.

Also, as with any other group, we have tensor representations: actions on the tensor power $V^{\otimes n}$ for any number $n$ of factors of $V$. How does this work? Well, every vector in $V^{\otimes n}$ is a linear combination of vectors of the form $v_1 \otimes \cdots \otimes v_n$, where each $v_i \in V$. And we know how to act on these: just act on each tensorand separately. That is,

$$g\left(v_1 \otimes \cdots \otimes v_n\right) = \left(gv_1\right) \otimes \cdots \otimes \left(gv_n\right)$$

Then we just extend this action by linearity to all of $V^{\otimes n}$.
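It’s worth checking that acting tensorand-by-tensorand really gives a homomorphism, i.e. that $(gh)^{\otimes n} = g^{\otimes n} h^{\otimes n}$. A sketch for $n = 2$ and $\dim V = 2$ (the matrices $g$ and $h$ are hypothetical examples), using the Kronecker-product matrix of $g \otimes g$:

```python
# Check that acting on each tensorand defines a representation:
# (gh) (x) (gh) agrees with (g (x) g) composed with (h (x) h).
# The matrices g and h are hypothetical examples.

from itertools import product

dim = 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

def kron(A, B):
    """The matrix of A (x) B on V (x) V, in the basis e_i (x) e_j."""
    return [[A[i][k] * B[j][l] for k, l in product(range(dim), repeat=2)]
            for i, j in product(range(dim), repeat=2)]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

g = [[1, 2], [3, 5]]
h = [[2, 0], [1, 1]]

assert kron(matmul(g, h), matmul(g, h)) == matmul4(kron(g, g), kron(h, h))
```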