Projecting Onto Invariants
Given a $G$-module $V$, we can find the $G$-submodule $V^G$ of $G$-invariant vectors. It’s not just a submodule, but it’s a direct summand. Thus not only does it come with an inclusion mapping $V^G\to V$, but there must be a projection $V\to V^G$. That is, there’s a linear map that takes a vector and returns a $G$-invariant vector, and further if the vector is already $G$-invariant it is left alone.
Well, we know that it exists, but it turns out that we can describe it rather explicitly. The projection from vectors to $G$-invariant vectors is exactly the “averaging” procedure we ran into (with a slight variation) when proving Maschke’s theorem. We’ll describe it in general, and then come back to see how it applies in that case.
Given a vector $v\in V$, we define

$$\bar{v}=\frac{1}{|G|}\sum_{g\in G}gv$$

This is clearly a linear operation. I say that $\bar{v}$ is invariant under the action of $G$. Indeed, given $h\in G$ we calculate

$$h\bar{v}=\frac{1}{|G|}\sum_{g\in G}h(gv)=\frac{1}{|G|}\sum_{g\in G}(hg)v=\frac{1}{|G|}\sum_{g\in G}gv=\bar{v}$$

since as $g$ ranges over $G$, so does $hg$, albeit in a different order. Further, if $v$ is already $G$-invariant, then we find

$$\bar{v}=\frac{1}{|G|}\sum_{g\in G}gv=\frac{1}{|G|}\sum_{g\in G}v=v$$

so this is indeed the projection we’re looking for.
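As a concrete illustration (my own sketch, not from the original post), here is a small numerical check of this averaging projection for the defining representation of $S_3$ on $\mathbb{C}^3$, where a permutation acts by permuting coordinates. The choice of group and representation, and the numpy-based setup, are assumptions made just for the example.

```python
# A minimal numerical sketch: the averaging projector for the defining
# representation of S_3 on C^3, where a permutation acts by permuting coordinates.
import itertools
import numpy as np

n = 3
group = list(itertools.permutations(range(n)))  # all of S_3

def perm_matrix(p):
    """Matrix of the permutation p acting on C^n by sending e_j to e_{p[j]}."""
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1.0
    return M

# The projection v -> (1/|G|) sum_g g.v, as a single matrix.
P = sum(perm_matrix(p) for p in group) / len(group)

print(np.allclose(P @ P, P))                       # True: it is a projection
print(np.allclose(P @ perm_matrix(group[1]), P))   # True: the image is G-invariant
print(P @ np.array([1.0, 2.0, 3.0]))               # [2. 2. 2.]: the "average" of the coordinates
```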
Now, how does this apply to Maschke’s theorem? Well, given a $G$-module $V$, the collection of sesquilinear forms on the underlying space $V$ forms a vector space itself. Indeed, such forms correspond to Hermitian matrices, which form a vector space. Anyway, rather than write the usual angle-brackets, we will write one of these forms as a bilinear function $B:V\times V\to\mathbb{C}$.

Now I say that the space of forms carries an action from the right by $G$. Indeed, we can define

$$[Bg](v,w)=B(gv,gw)$$

It’s straightforward to verify that this is a right action by $G$. So, how do we “average” the form to get a $G$-invariant form? We define

$$\overline{B}(v,w)=\frac{1}{|G|}\sum_{g\in G}B(gv,gw)$$

which — other than the factor of $\frac{1}{|G|}$ — is exactly how we came up with a $G$-invariant form in the proof of Maschke’s theorem!
Group Invariants
Again, my apologies. What with yesterday’s cooking, I forgot to post this yesterday. I’ll have another in the evening.
Let $V$ be a representation of a finite group $G$, with finite dimension. We can decompose $V$ into blocks — one for each irreducible representation of $G$:

$$V\cong\bigoplus_im_iV^{(i)}$$

We’re particularly concerned with one of these blocks, which we can construct for any group $G$. Every group has a trivial representation $V^{\mathrm{triv}}$, and so we can always come up with the space of “invariants” of $V$: the block $V^G$ made up of all the copies of the trivial representation in this decomposition. We call these invariants, because these are the vectors $v\in V$ so that $gv=v$ for all $g\in G$. Technically, this is a $G$-module — actually a $G$-submodule of $V$ — but the action of $G$ is trivial, so it feels slightly pointless to consider it as a module at all.
On the other hand, any “plain” vector space can be considered as if it were carrying the trivial action of $G$. Indeed, if it has dimension $d$, then we can say it’s the direct sum of $d$ copies of the trivial representation. Since the trivial character takes the constant value $1$, the character of this representation takes the constant value $d$. And so it really does make sense to consider it as the “number” $d$, just like we’ve been doing.
We’ve actually already seen this sort of subspace before. Given two left $G$-modules $V$ and $W$, we can set up the space of linear maps $\hom(V,W)$ between the underlying vector spaces. In this setup, the two group actions are extraneous, and so we find that they give residual actions on the space of linear maps. That is, we have two actions by $G$ on $\hom(V,W)$, one from the left and one from the right.
Now just like we found with inner tensor products, we can combine these two actions of $G$ into one. Now we have one left action of $G$ on linear maps by conjugation: $f\mapsto gf$, defined by

$$[gf](v)=g\left(f\left(g^{-1}v\right)\right)$$

Just in case, we check that

$$[(gh)f](v)=(gh)\left(f\left((gh)^{-1}v\right)\right)=g\left(h\left(f\left(h^{-1}\left(g^{-1}v\right)\right)\right)\right)=g\left([hf]\left(g^{-1}v\right)\right)=[g[hf]](v)$$

so this is, indeed, a representation. And what are the invariants of this representation? They’re exactly those linear maps $f:V\to W$ such that

$$g\left(f\left(g^{-1}v\right)\right)=f(v)$$

for all $g\in G$ and $v\in V$. Equivalently, the condition is that

$$f(gv)=g\left(f(v)\right)$$

and so $f$ must be an intertwinor. And so we conclude that

$$\hom(V,W)^G=\hom_G(V,W)$$
That is: the space of linear maps from $V$ to $W$ that are invariant under the conjugation action of $G$ is exactly the space of $G$-morphisms between the two $G$-modules.
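To make this concrete, here is a small numerical sketch of my own (not from the post): averaging an arbitrary linear map over the conjugation action lands in the invariant subspace, i.e. it produces an intertwinor. The example assumes $V=W=\mathbb{C}^3$ with the permutation action of $S_3$ and uses numpy.

```python
# Averaging a linear map over the conjugation action of S_3 lands in
# hom_G(V, W), i.e. produces an intertwinor.  Here V = W = C^3 with the
# permutation action.
import itertools
import numpy as np

n = 3
group = [np.eye(n)[:, list(p)] for p in itertools.permutations(range(n))]  # permutation matrices

rng = np.random.default_rng(0)
f = rng.standard_normal((n, n))            # an arbitrary linear map

# Conjugation action: (g.f)(v) = g f(g^{-1} v), i.e. the matrix g f g^{-1};
# for permutation matrices g^{-1} = g^T.
f_bar = sum(g @ f @ g.T for g in group) / len(group)

# The average commutes with every group element: it is an intertwinor.
print(all(np.allclose(g @ f_bar, f_bar @ g) for g in group))   # True
```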
Subspaces from Irreducible Representations
Because of Maschke’s theorem we know that every representation of a finite group can be decomposed into chunks that correspond to irreducible representations:

$$V\cong m_1V^{(1)}\oplus\dots\oplus m_kV^{(k)}$$

where the $V^{(i)}$ are pairwise-inequivalent irreps. But our consequences of orthonormality prove that there can be only finitely many such inequivalent irreps. So we may as well say that $k$ is the number of them and let a multiplicity $m_i$ be zero if $V^{(i)}$ doesn’t show up in $V$ at all.
Now there’s one part of this setup that’s a little less than satisfying. For now, let’s say that $V$ is an irrep itself, and let $m$ be a natural number for its multiplicity. We’ve been considering the representation $mV$ made up of the direct sum of $m$ copies of $V$. But this leaves some impression that these copies of $V$ actually exist in some sense inside the sum. In fact, though inequivalent irreps stay distinct, equivalent ones lose their separate identities in the sum. Indeed, we’ve seen that

$$\dim\left(\hom_G(V,mV)\right)=m$$

That is, we can find a copy of $V$ lying “across” all $m$ copies in the sum in all sorts of different ways. The identified copies are like the basis vectors in an $m$-dimensional vector space — they hardly account for all the vectors in the space.
We need a more satisfactory way of describing this space. And it turns out that we have one:

$$mV\cong\mathbb{C}^m\otimes V$$

Here, the tensor product is over the base field $\mathbb{C}$, so the “extra action” by $G$ on $V$ makes this into a $G$-module as well.

This actually makes sense, because as we pass from representations to their characters, we also pass from “plain” vector spaces to their dimensions, and from tensor products to regular products. Thus at the level of characters this says that adding $m$ copies of an irreducible character together gives the same result as multiplying it by $m$, which is obviously true. So since the two sides have the same characters, they contain the same number of copies of the same irreps, and so they are isomorphic as asserted.
Actually, any vector space of dimension $m$ will do in the place of $\mathbb{C}^m$ here. And we have one immediately at hand: $\hom_G(V,mV)$ itself. That is, if $V$ is an irreducible representation then we have an isomorphism:

$$mV\cong\hom_G(V,mV)\otimes V$$

As an example, if $W$ is any representation and $V$ is any irrep, then we find that the copies of $V$ inside $W$ make up a subspace

$$\hom_G(V,W)\otimes V$$

We can reassemble these subspaces to find

$$W\cong\bigoplus_i\hom_G\left(V^{(i)},W\right)\otimes V^{(i)}$$

Notice that this extends our analogy between $\hom$ spaces and inner products. Indeed, if we have an orthonormal basis $\left\{e_i\right\}$ of a vector space of dimension $n$, we can decompose any vector as

$$v=\sum_{i=1}^n\langle e_i,v\rangle e_i$$
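At the level of characters this decomposition is easy to check numerically. The following sketch (my own, not the post’s) takes the defining representation $W$ of $S_3$, computes the multiplicity of each irrep as the character inner product $\langle\chi^{(i)},\chi_W\rangle=\dim\hom_G(V^{(i)},W)$, and confirms that these multiplicities together with the dimensions $\dim V^{(i)}$ reassemble $\dim W$. The hard-coded character values are the standard ones for $S_3$.

```python
# Multiplicities of the S_3 irreps inside the defining representation W,
# computed as character inner products, and a check that they reassemble dim W.
from itertools import permutations

def cycle_type(p):
    """Cycle type of a permutation given in one-line notation."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

group = list(permutations(range(3)))          # S_3
# Irreducible characters of S_3 by cycle type: trivial, signum, standard (2-dim).
chi = {
    "triv": {(1, 1, 1): 1, (2, 1): 1, (3,): 1},
    "sgn":  {(1, 1, 1): 1, (2, 1): -1, (3,): 1},
    "std":  {(1, 1, 1): 2, (2, 1): 0, (3,): -1},
}
# Character of the defining (permutation) representation: number of fixed points.
chi_W = {g: sum(1 for i in range(3) if g[i] == i) for g in group}

def inner(c1, c2):
    return sum(c1[g] * c2[g] for g in group) / len(group)

mult = {name: inner({g: c[cycle_type(g)] for g in group}, chi_W) for name, c in chi.items()}
print(mult)                                   # {'triv': 1.0, 'sgn': 0.0, 'std': 1.0}
dims = {"triv": 1, "sgn": 1, "std": 2}
print(sum(mult[k] * dims[k] for k in chi))    # 3.0 = dim W
```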
Tensor Products over Group Algebras
So far, we’ve been taking all our tensor products over the complex numbers, since everything in sight has been a vector space over that field. But remember that a representation of $G$ is a module over the group algebra $\mathbb{C}[G]$, and we can take tensor products over this algebra as well.

More specifically, if $V$ is a right $G$-module, and $W$ is a left $G$-module, then we have a plain vector space $V\otimes_GW$. We build it just like the regular tensor product $V\otimes W$, but we add new relations of the form

$$(vg)\otimes w=v\otimes(gw)$$

That is, in the tensor product over $\mathbb{C}[G]$, we can pull actions by $G$ from one side to the other.
If $V$ or $W$ have extra group actions, they pass to the tensor product. For instance, if $V$ is a left $H$-module as well as a right $G$-module, then we can define

$$h(v\otimes w)=(hv)\otimes w$$

Similarly, if $W$ has an additional right action by $H$, then so does $V\otimes_GW$, and the same goes for extra actions on $V$. Similar to the way that hom spaces over $G$ “eat up” an action of $G$ on each argument, the tensor product $\otimes_G$ “eats up” a right action by $G$ on its left argument and a left action by $G$ on its right argument.
We can try to work out the dimension of this space. Let’s say that we have decompositions

$$V\cong\bigoplus_iV_i\qquad W\cong\bigoplus_jW_j$$

into irreducible representations (possibly with repetitions). As usual for tensor products, the operation is additive, just like we saw for $\hom$ spaces. That is

$$V\otimes_GW\cong\bigoplus_{i,j}V_i\otimes_GW_j$$

So we really just need to understand the dimension of one of these summands. Let’s say $V$ is irreducible with dimension $d_V$ and $W$ is irreducible with dimension $d_W$.
Now, we can pick any vector $v\in V$ and hit it with every group element $g\in G$. These vectors must span $V$; they span some subspace, which (since $V$ is irreducible) is either trivial or all of $V$. But it can’t be trivial, since it contains $v$ itself, and so it must be $V$. That is, given any vector $v'\in V$ we can find some element of the group algebra $a\in\mathbb{C}[G]$ so that $v'=va$. But then for any $w\in W$ we have

$$v'\otimes w=(va)\otimes w=v\otimes(aw)$$

That is, every tensor can be written with $v$ as the first tensorand. Does this mean that $\dim\left(V\otimes_GW\right)=d_W$? Not quite, since this expression might not be unique. For every element of the group algebra that sends $v$ back to itself, we have a different expression.
So how many of these are there? Well, we have a linear map that sends $a\in\mathbb{C}[G]$ to $va\in V$. We know that this is onto, so the dimension of the image is $d_V$. The dimension of the source is $|G|$, and so the rank-nullity theorem tells us that the dimension of the kernel — the dimension of the space that sends $v$ back to itself — is $|G|-d_V$.

So we should be able to subtract this off from the dimension of the tensor product, due to redundancies. Assuming that this works as expected, we get $d_W-\left(|G|-d_V\right)=d_V+d_W-|G|$, which at least is symmetric between $V$ and $W$ as expected. But it still feels sort of like we’re getting away with something here. We’ll come back to find a more satisfying proof soon.
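Independently of any counting heuristic, the dimension of such a tensor product can be computed by brute force in small examples: take $V\otimes W$ and mod out by the span of all the relations $(vg)\otimes w-v\otimes(gw)$. The sketch below is my own (not from the post); it assumes $G=S_3$ with both $V$ and $W$ equal to the two-dimensional irrep $V^\perp$, where $V$ is made into a right module via $v\cdot g=g^{-1}v$, and it uses numpy.

```python
# Brute-force computation of dim(V ⊗_{C[G]} W) for G = S_3: the dimension of
# V ⊗ W minus the rank of the span of all relations (v.g) ⊗ w - v ⊗ (g.w).
import itertools
import numpy as np

def perm_matrix(p, n=3):
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1.0
    return M

# The 2-dim irrep of S_3: restrict the permutation action to the plane
# x + y + z = 0, using the basis e1 - e2, e2 - e3.
basis = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])   # columns span the plane
coords = np.linalg.pinv(basis)                             # coordinates in that basis

def std_rep(p):
    return coords @ perm_matrix(p) @ basis                 # 2x2 matrix of p

group = [std_rep(p) for p in itertools.permutations(range(3))]
dV = dW = 2

relations = []
for g in group:
    right_act = np.linalg.inv(g)        # right action on V: v.g = g^{-1} v
    for i in range(dV):
        for j in range(dW):
            v, w = np.eye(dV)[i], np.eye(dW)[j]
            rel = np.kron(right_act @ v, w) - np.kron(v, g @ w)
            relations.append(rel)

rank = np.linalg.matrix_rank(np.array(relations))
print(dV * dW - rank)    # 1: this particular pairing gives a one-dimensional space
```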
The Character Table of S4
Let’s use our inner tensor products to fill in the character table of $S_4$. We start by listing out the conjugacy classes along with their sizes:

$$\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\#&1&6&3&8&6\end{array}$$
Now we have the same three representations as in the character table of $S_3$: the trivial, the signum, and the complement of the trivial representation in the defining representation. Let’s write what we have.

$$\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^{\mathrm{triv}}&1&1&1&1&1\\\chi^{\mathrm{sgn}}&1&-1&1&1&-1\\\chi^\perp&3&1&-1&0&-1\end{array}$$
Just to check, we calculate

$$\left\langle\chi^\perp,\chi^\perp\right\rangle=\frac{1}{24}\left(1\cdot3^2+6\cdot1^2+3\cdot(-1)^2+8\cdot0^2+6\cdot(-1)^2\right)=\frac{24}{24}=1$$

so again, $\chi^\perp$ is irreducible.
But now we can calculate the inner tensor product of $\chi^{\mathrm{sgn}}$ and $\chi^\perp$. This gives us a new line in the character table:

$$\begin{array}{c|ccccc}\chi^{\mathrm{sgn}}\chi^\perp&3&-1&-1&0&1\end{array}$$

which we can easily check to be irreducible.
Next, we can form the tensor product $\chi^\perp\chi^\perp$, which has values

$$\begin{array}{c|ccccc}\chi^\perp\chi^\perp&9&1&1&0&1\end{array}$$

Now, this isn’t irreducible, but we can calculate inner products with the existing irreducible characters and decompose it as

$$\chi^\perp\chi^\perp=\chi^{\mathrm{triv}}+\chi^\perp+\chi^{\mathrm{sgn}}\chi^\perp+\chi$$

where $\chi$ is what’s left after subtracting the other three characters. This gives us one more line in the character table:

$$\begin{array}{c|ccccc}\chi&2&0&2&-1&0\end{array}$$

and we check that

$$\langle\chi,\chi\rangle=\frac{1}{24}\left(1\cdot2^2+6\cdot0^2+3\cdot2^2+8\cdot(-1)^2+6\cdot0^2\right)=\frac{24}{24}=1$$

so $\chi$ is irreducible as well.
Now, we haven’t actually exhibited these representations explicitly, but there is no obstacle to carrying out the usual calculations. Matrix representations for $\chi^{\mathrm{triv}}$ and $\chi^{\mathrm{sgn}}$ are obvious. A matrix representation for $\chi^\perp$ comes just as in the case of $S_3$, by finding a basis for the defining representation that separates out the copy of the trivial representation inside it. Finally, we can calculate the Kronecker product of these matrices with themselves to get a representation corresponding to $\chi^\perp\chi^\perp$, and then find a basis that allows us to split off copies of $\chi^{\mathrm{triv}}$, $\chi^\perp$, and $\chi^{\mathrm{sgn}}\chi^\perp$.
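For what it’s worth, the whole construction can be checked by a short computation. The following sketch (my own, not part of the original post) builds these five class functions on $S_4$ directly from the defining permutation representation and inner tensor products, and confirms that each has self-inner-product $1$.

```python
# Characters of S_4 built from the defining representation and inner tensor
# products, with irreducibility verified via <chi, chi> = 1.
from itertools import permutations

group = list(permutations(range(4)))                      # all of S_4
order = len(group)                                        # 24

def inner(c1, c2):
    """Inner product of two (real-valued) class functions given on every element."""
    return sum(c1[g] * c2[g] for g in group) / order

triv = {g: 1 for g in group}
sgn  = {g: (-1) ** sum(1 for i in range(4) for j in range(i) if g[j] > g[i]) for g in group}
fix  = {g: sum(1 for i in range(4) if g[i] == i) for g in group}
perp = {g: fix[g] - 1 for g in group}                     # defining minus trivial

sgn_perp  = {g: sgn[g] * perp[g] for g in group}          # inner tensor product
perp_perp = {g: perp[g] ** 2 for g in group}

# chi is what's left of perp*perp after subtracting the other three characters.
chi = {g: perp_perp[g] - inner(perp_perp, triv) * triv[g]
           - inner(perp_perp, perp) * perp[g]
           - inner(perp_perp, sgn_perp) * sgn_perp[g] for g in group}

for name, c in [("triv", triv), ("sgn", sgn), ("perp", perp),
                ("sgn*perp", sgn_perp), ("chi", chi)]:
    print(name, inner(c, c))      # each prints 1.0: all five are irreducible
```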
Inner Tensor Products
Let’s say we have two left $G$-modules — $V$ and $W$ — and form their outer tensor product $V\otimes W$. Note that this carries two distinct actions of the same group $G$, and these two actions commute with each other. That is, $V\otimes W$ carries a representation of the product group $G\times G$. This representation is a homomorphism $G\times G\to\mathrm{GL}(V\otimes W)$.
It turns out that we actually have another homomorphism $\Delta:G\to G\times G$, given by $\Delta(g)=(g,g)$. Indeed, we check that

$$\Delta(g)\Delta(h)=(g,g)(h,h)=(gh,gh)=\Delta(gh)$$

If we compose this with the representing homomorphism, we get a homomorphism $G\to\mathrm{GL}(V\otimes W)$. That is, $V\otimes W$ actually carries a representation of $G$ itself!
We’ve actually seen this before, a long while back. The group algebra $\mathbb{C}[G]$ is an example of a bialgebra, with the map $\Delta$ serving as a “comultiplication”. If this seems complicated, don’t worry about it. The upshot is that this “inner” tensor product $V\otimes W$, considered as a representation of $G$, behaves as a monoidal product in the category of $G$-modules. Sometimes we will write $V\hat{\otimes}W$ (and similar notations) when we need to distinguish the inner tensor product from the outer one, but often we can just let context handle it.
Luckily, characters are just as well-behaved for inner tensor products. Indeed, we can check that

$$\chi_{V\hat{\otimes}W}(g)=\mathrm{Tr}\left(\rho_V(g)\otimes\rho_W(g)\right)=\mathrm{Tr}\left(\rho_V(g)\right)\mathrm{Tr}\left(\rho_W(g)\right)=\chi_V(g)\chi_W(g)$$

However, unlike for outer tensor products, the inner tensor product of two irreducible representations is not, in general, itself irreducible. Indeed, we can look at the character table for $S_3$ and consider the inner tensor product of two copies of $V^\perp$.
What we just proved above tells us that the character of $V^\perp\hat{\otimes}V^\perp$ is $\chi^\perp\chi^\perp$. This takes the values

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\perp\chi^\perp&4&0&1\end{array}$$

which does not occur as a line in the character table, and thus cannot be an irreducible character. Indeed, calculating inner products, we find

$$\begin{aligned}\left\langle\chi^\perp\chi^\perp,\chi^{\mathrm{triv}}\right\rangle&=\frac{1}{6}\left(1\cdot4\cdot1+3\cdot0\cdot1+2\cdot1\cdot1\right)=1\\\left\langle\chi^\perp\chi^\perp,\chi^{\mathrm{sgn}}\right\rangle&=\frac{1}{6}\left(1\cdot4\cdot1+3\cdot0\cdot(-1)+2\cdot1\cdot1\right)=1\\\left\langle\chi^\perp\chi^\perp,\chi^\perp\right\rangle&=\frac{1}{6}\left(1\cdot4\cdot2+3\cdot0\cdot0+2\cdot1\cdot(-1)\right)=1\end{aligned}$$

And so we find that

$$\chi^\perp\chi^\perp=\chi^{\mathrm{triv}}+\chi^{\mathrm{sgn}}+\chi^\perp$$

which means that

$$V^\perp\hat{\otimes}V^\perp\cong V^{\mathrm{triv}}\oplus V^{\mathrm{sgn}}\oplus V^\perp$$
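As a quick sanity check of my own (not from the post), one can verify numerically that sending $g$ to the Kronecker product $\rho(g)\otimes\rho(g)$ really does define a representation, and that its trace is the square of the original character. Here $\rho$ is assumed to be the defining permutation representation of $S_3$, and the code uses numpy.

```python
# The inner tensor product is a representation: g -> kron(rho(g), rho(g)) is
# multiplicative, and its trace is the square of the original character.
import itertools
import numpy as np

def rho(p, n=3):
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1.0
    return M

group = list(itertools.permutations(range(3)))

def inner_tensor(p):
    return np.kron(rho(p), rho(p))      # the diagonal action g.(v ⊗ w) = gv ⊗ gw

def compose(p, q):                      # (p∘q)(j) = p[q[j]], matching rho(p) @ rho(q)
    return tuple(p[q[j]] for j in range(3))

ok_hom = all(np.allclose(inner_tensor(compose(p, q)), inner_tensor(p) @ inner_tensor(q))
             for p in group for q in group)
ok_char = all(np.isclose(np.trace(inner_tensor(p)), np.trace(rho(p)) ** 2) for p in group)
print(ok_hom, ok_char)                  # True True
```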
Outer Tensor Products
Let’s say we have two finite groups $G$ and $H$, and we have (left) representations of each one: ${}_GV$ and ${}_HW$. It turns out that the tensor product $V\otimes W$ naturally carries a representation of the product group $G\times H$. Equivalently, it carries a representation of each of $G$ and $H$, and these representations commute with each other. In our module notation, we write ${}_{GH}(V\otimes W)$.
The action is simple enough. Any vector in $V\otimes W$ can be written (not always uniquely) as a sum of vectors of the form $v\otimes w$. We let $G$ act on the first “tensorand”, let $H$ act on the second, and extend by linearity. That is:

$$g(v\otimes w)=(gv)\otimes w\qquad h(v\otimes w)=v\otimes(hw)$$

and the action of either $G$ or $H$ on the sum of two tensors is the sum of their actions on each of the tensors.
Now, might the way we write the sum make a difference? No, because all the relations look like

$$\begin{aligned}(v_1+v_2)\otimes w&=v_1\otimes w+v_2\otimes w\\v\otimes(w_1+w_2)&=v\otimes w_1+v\otimes w_2\\(cv)\otimes w&=c(v\otimes w)=v\otimes(cw)\end{aligned}$$

where $c$ in the last equation is a complex constant. Now, we can check that the actions of $G$ and $H$ give equivalent results on either side of each equation. For instance, acting by $g\in G$ in the first equation we see

$$g\left((v_1+v_2)\otimes w\right)=\left(g(v_1+v_2)\right)\otimes w=(gv_1+gv_2)\otimes w=(gv_1)\otimes w+(gv_2)\otimes w=g(v_1\otimes w)+g(v_2\otimes w)$$

just as we want. All the other relations are easy enough to check.
But do the actions of $G$ and $H$ commute with each other? Indeed, we calculate

$$g\left(h(v\otimes w)\right)=g\left(v\otimes(hw)\right)=(gv)\otimes(hw)=h\left((gv)\otimes w\right)=h\left(g(v\otimes w)\right)$$

So we really do have a representation of the product group.
We have similar “outer” tensor products for other combinations of left and right representations:

$${}_GV\otimes W_H={}_G(V\otimes W)_H\qquad V_G\otimes{}_HW={}_H(V\otimes W)_G\qquad V_G\otimes W_H=(V\otimes W)_{GH}$$
Now, let’s try to compute the character of this representation. If we write the representing homomorphisms $\rho:G\to\mathrm{GL}(V)$ and $\sigma:H\to\mathrm{GL}(W)$, then we get a representing homomorphism $\rho\otimes\sigma:G\times H\to\mathrm{GL}(V\otimes W)$. And this is given by

$$\left[\rho\otimes\sigma\right](g,h)=\rho(g)\otimes\sigma(h)$$

Indeed, this is exactly the endomorphism of $V\otimes W$ that applies $\rho(g)$ to $V$ and applies $\sigma(h)$ to $W$, just as we want. And we know that when expressed in matrix form, the tensor product of linear maps becomes the Kronecker product of matrices. We write the character of $V$ as $\chi_V$, that of $W$ as $\chi_W$, and that of their tensor product as $\chi_{V\otimes W}$, and calculate:

$$\chi_{V\otimes W}(g,h)=\mathrm{Tr}\left(\rho(g)\otimes\sigma(h)\right)=\mathrm{Tr}\left(\rho(g)\right)\mathrm{Tr}\left(\sigma(h)\right)=\chi_V(g)\chi_W(h)$$

That is, the character of the tensor product representation is the product of the characters of the two representations.
Finally, if both $V$ and $W$ are irreducible, then the tensor product $V\otimes W$ is as well. We calculate:

$$\begin{aligned}\left\langle\chi_{V\otimes W},\chi_{V\otimes W}\right\rangle&=\frac{1}{|G\times H|}\sum_{(g,h)\in G\times H}\left\lvert\chi_{V\otimes W}(g,h)\right\rvert^2\\&=\frac{1}{|G||H|}\sum_{g\in G}\sum_{h\in H}\left\lvert\chi_V(g)\right\rvert^2\left\lvert\chi_W(h)\right\rvert^2\\&=\left(\frac{1}{|G|}\sum_{g\in G}\left\lvert\chi_V(g)\right\rvert^2\right)\left(\frac{1}{|H|}\sum_{h\in H}\left\lvert\chi_W(h)\right\rvert^2\right)\end{aligned}$$

In particular, we find that $\left\langle\chi_{V\otimes W},\chi_{V\otimes W}\right\rangle=\left\langle\chi_V,\chi_V\right\rangle\left\langle\chi_W,\chi_W\right\rangle$. If $\chi_V$ and $\chi_W$ are both irreducible characters, then our character properties tell us that both of the inner products on the right are $1$, and we conclude that the inner product on the left is as well, which means that $V\otimes W$ is irreducible.
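A numerical spot-check of my own (not from the post): assuming $G=S_3$ acting on $\mathbb{C}^3$ by permutations and $H=\mathbb{Z}/4$ acting on $\mathbb{C}$ by powers of $i$, the outer tensor product character is $\chi_V(g)\chi_W(h)$, and its self-inner-product over $G\times H$ factors exactly as above.

```python
# The self-inner-product of an outer tensor product character factors as the
# product of the two self-inner-products.
import itertools
import numpy as np

G = list(itertools.permutations(range(3)))       # S_3
H = list(range(4))                               # Z/4

chi_V = {g: sum(1 for i in range(3) if g[i] == i) for g in G}   # permutation character
chi_W = {h: 1j ** h for h in H}                                  # a 1-dim rep of Z/4

def inner(chi, elems):
    return sum(abs(chi[x]) ** 2 for x in elems) / len(elems)

chi_VW = {(g, h): chi_V[g] * chi_W[h] for g in G for h in H}
lhs = sum(abs(chi_VW[x]) ** 2 for x in chi_VW) / (len(G) * len(H))
print(np.isclose(lhs, inner(chi_V, G) * inner(chi_W, H)))        # True
```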
Left and Right Morphism Spaces
One more explicit parallel between left and right representations: we have morphisms between right $G$-modules just like we had between left $G$-modules. I won’t really go into the details — they’re pretty straightforward — but it’s helpful to see how our notation works.
In the case of left modules, we had a vector space $\hom({}_GV,{}_GW)$. Now in the case of right modules we also have a vector space $\hom(V_G,W_G)$. We use the same notation $\hom_G(V,W)$ in both cases, and rely on context to tell us whether we’re talking about right or left module morphisms. In a sense, applying $\hom_G$ “eats up” an action of $G$ on each module, and on the same side.
We can see this even more clearly when we add another action to one of the modules. Let’s say that $V$ carries a left action of $G$ — we write ${}_GV$ — and $W$ carries commuting left actions of both $G$ and another group $H$ — we write ${}_{GH}W$. I say that there is a “residual” left action of $H$ on the space of left $G$-module morphisms. That is, the space $\hom({}_GV,{}_{GH}W)$ “eats up” an action of $G$ on each module, and it leaves the left action of $H$ behind.
So, how could $H$ act on the space of morphisms? Well, let $f:V\to W$ be an intertwinor of the $G$ actions, and let any $h\in H$ act on $f$ by sending it to $hf$, defined by $[hf](v)=h\left(f(v)\right)$. That is, $hf$ first sends a vector $v\in V$ to a vector $f(v)\in W$, and then $h$ acts on the left to give a new vector $h\left(f(v)\right)\in W$. We must check that this is an intertwinor, and not just a linear map from $V$ to $W$. For any $g\in G$, we calculate

$$[hf](gv)=h\left(f(gv)\right)=h\left(g\left(f(v)\right)\right)=g\left(h\left(f(v)\right)\right)=g\left([hf](v)\right)$$

using first the fact that $f$ is an intertwinor, and then the fact that the action of $G$ commutes with that of $H$ to pull $g$ all the way out to the left.
Similarly, if the extra action on $W$ is a right action of $H$, we have a “residual” right action on the space $\hom({}_GV,{}_GW_H)$. And the same goes for right $G$-modules: we have a residual left action of $H$ on $\hom(V_G,{}_HW_G)$, and a residual right action on $\hom(V_G,W_{GH})$.
It’s a little more complicated when we have extra commuting actions on $V$. The complication is connected to the fact that the hom functor is contravariant in its first argument, which if you don’t know much about category theory you don’t need to care about. The important thing is that if $V$ has an extra left action of $H$, then the space $\hom({}_{GH}V,{}_GW)$ will have a residual right action of $H$.
In this case, given a map $f:V\to W$ intertwining the $G$ actions, we define $fh$ by $[fh](v)=f(hv)$. We should verify that this is, indeed, a right action:

$$\left[[fh_1]h_2\right](v)=[fh_1](h_2v)=f\left(h_1(h_2v)\right)=f\left((h_1h_2)v\right)=\left[f(h_1h_2)\right](v)$$

using the fact that $H$ acts on the left on $V$. Again, we must verify that $fh$ is actually another intertwinor:

$$[fh](gv)=f\left(h(gv)\right)=f\left(g(hv)\right)=g\left(f(hv)\right)=g\left([fh](v)\right)$$

using the fact that $f$ is an intertwinor and the actions of $G$ and $H$ on $V$ commute.
Similarly, we find a residual right action on the space $\hom({}_HV_G,W_G)$, and residual left actions on the spaces $\hom({}_GV_H,{}_GW)$ and $\hom(V_{GH},W_G)$.
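Here is a small numerical sketch of my own (not from the post) of the basic residual action described above: if $f:V\to W$ intertwines two $S_3$ actions and $W$ also carries a commuting $\mathbb{Z}/2$ action $\sigma$, then $v\mapsto\sigma\left(f(v)\right)$ is again an intertwinor. The specific modules and the numpy setup are chosen just for illustration.

```python
# The residual action on hom_G(V, W): if f intertwines the S_3 actions and W
# carries a commuting Z/2 action sigma, then (h.f)(v) = sigma(f(v)) is again
# an intertwinor.
import itertools
import numpy as np

def perm_matrix(p, n):
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1.0
    return M

G = list(itertools.permutations(range(3)))
rho_V = lambda g: perm_matrix(g, 3)                          # S_3 on V = C^3
rho_W = lambda g: np.kron(perm_matrix(g, 3), np.eye(2))      # S_3 on W = C^3 ⊗ C^2
sigma = np.kron(np.eye(3), np.array([[0., 1.], [1., 0.]]))   # nontrivial element of Z/2 on W

f = np.kron(np.eye(3), np.array([[1.], [0.]]))               # v -> v ⊗ e_1, an intertwinor (6x3)
hf = sigma @ f                                               # the residual action of h on f

print(all(np.allclose(f @ rho_V(g), rho_W(g) @ f) for g in G),      # f intertwines
      all(np.allclose(hf @ rho_V(g), rho_W(g) @ hf) for g in G))    # so does h.f
```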
Right Representations
In our discussions of representations so far we’ve been predominantly concerned with actions on the left. That is, we have a map $G\times V\to V$, linear in $V$, that satisfies the relation $(gh)v=g(hv)$. That is, the action of the product of two group elements is the composition of their actions.
But sometimes we’re interested in actions on the right. This is almost exactly the same, but with a map $V\times G\to V$, again linear in $V$, and this time the relation reads $v(gh)=(vg)h$. Again, the action of the product of two group elements is the composition of their actions, but now in the opposite order! Before we first acted by $h$ and then by $g$, but now we act first by $g$ and then by $h$. And so instead of a homomorphism $G\to\mathrm{GL}(V)$, we have an anti-homomorphism — a map from one group to another that reverses the order of multiplication.
We can extend the notation from last time. If the space $V$ carries a right representation of a group $G$, then we hang a tag on the right: $V_G$. If we have an action by another group $H$ on the right that commutes with the action of $G$, we write $V_{GH}$. And if $H$ instead acts on the left, we write ${}_HV_G$. Again, this can be read as a pair of commuting actions, or as a left action of $H$ on the right $G$-module $V_G$, or as a right action of $G$ on the left $H$-module ${}_HV$.
Pretty much everything we’ve discussed moves over to right representations without much trouble. On the occasions we’ll really need them I’ll clarify if there’s anything tricky, but I don’t want to waste a lot of time redoing everything. One exception that I will mention right away is the right regular representation, which (predictably enough) corresponds to the left regular representation. In fact, when I introduced that representation I even mentioned the right action in passing. At the time, I said that we can turn the natural antihomomorphism into a homomorphism by right-multiplying by the inverse of the group element. But if we’re willing to think of a right action on its own terms, we no longer need that trick.
So the group algebra $\mathbb{C}[G]$ — here considered just as a vector space — carries the left regular representation. The left action of a group element $g$ on a basis vector $\mathbf{h}$ is the basis vector $\mathbf{gh}$. It also carries the right regular representation. The right action of a group element $g$ on a basis vector $\mathbf{h}$ is the basis vector $\mathbf{hg}$. And it turns out that these two actions commute! Indeed, we can check

$$g(\mathbf{h}k)=g\mathbf{hk}=\mathbf{ghk}=\mathbf{gh}k=(g\mathbf{h})k$$

This might seem a little confusing at first, but remember that when $g$ shows up plain on the left it means the group element $g$ acting on the vector to its right. When it shows up in a boldface expression, that expression describes a basis vector in $\mathbb{C}[G]$. Overall, this tells us that we can start with the basis vector $\mathbf{h}$ and act first on the left by $g$ and then on the right by $k$, or we can act first on the right by $k$ and then on the left by $g$. Either way, we end up with the basis vector $\mathbf{ghk}$, which means that these two actions commute. Using our tags, we can thus write ${}_G\mathbb{C}[G]_G$.
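A tiny verification of my own (not from the post), assuming $G=S_3$ written in one-line notation: acting on a basis vector on the left by one group element and on the right by another gives the same result in either order, exactly because group multiplication is associative.

```python
# The left and right regular actions on the group algebra C[S_3] commute:
# g(kh) = (gk)h on basis vectors, which is just associativity.
import itertools

G = list(itertools.permutations(range(3)))      # S_3 in one-line notation

def mul(p, q):
    """Product pq: first apply q, then p."""
    return tuple(p[q[j]] for j in range(3))

# The basis vector indexed by k is represented by k itself; the left action of g
# sends it to gk, the right action of h sends it to kh.
left  = lambda g, k: mul(g, k)
right = lambda h, k: mul(k, h)

print(all(left(g, right(h, k)) == right(h, left(g, k))
          for g in G for h in G for k in G))    # True: the two actions commute
```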
Representing Product Groups
An important construction for groups is their direct product. Given two groups $G$ and $H$ we take the cartesian product of their underlying sets $G\times H$ and put a group structure on it by multiplying component-by-component. That is, the product of $(g_1,h_1)$ and $(g_2,h_2)$ is $(g_1g_2,h_1h_2)$. Representations of product groups aren’t really any different from those of any other group, but we have certain ways of viewing them that will come in handy.
The thing to notice is that we have copies of both $G$ and $H$ inside $G\times H$. Indeed, we can embed $G$ into $G\times H$ by sending $g$ to $(g,e)$, which clearly preserves the multiplication. Similarly, the map $h\mapsto(e,h)$ embeds $H$ into $G\times H$. The essential thing here is that the transformations coming from $G$ and those coming from $H$ commute with each other. Indeed, we calculate

$$(g,e)(e,h)=(g,h)=(e,h)(g,e)$$

Also, every transformation in $G\times H$ is the product of one from $G$ and one from $H$.
The upshot is that a representation of $G\times H$ on a space $V$ provides us with a representation of $G$, and also one of $H$, on the space $V$. Further, every transformation in the representation of $G$ must commute with every transformation in the representation of $H$. Conversely, if we have a representation of each factor group on the same space $V$, then we have a representation of the product group, but only if all the transformations in each representation commute with all the transformations in the other.
So what can we do with this? Well, it turns out that it’s pretty common to have two separate group actions on the same module, and to have these two actions commute with each other like this. Whenever this happens we can think of it as a representation of the product group, or as two commuting representations.
In fact, there’s another way of looking at it: remember that a representation of a group $G$ on a space $V$ can be regarded as a module for the group algebra $\mathbb{C}[G]$. If we then add a commuting representation of a group $H$, we can actually regard it as a representation on the $\mathbb{C}[G]$-module instead of just the underlying vector space. That is, instead of just having a homomorphism $H\to\mathrm{GL}(V)$ that sends each element of $H$ to a linear endomorphism of $V$, we actually get a homomorphism that sends each element of $H$ to a $\mathbb{C}[G]$-module endomorphism of $V$.
Indeed, let’s write our action of $G$ with the group homomorphism $\rho:G\to\mathrm{GL}(V)$ and our action of $H$ with the group homomorphism $\sigma:H\to\mathrm{GL}(V)$. Now, I’m asserting that each $\sigma(h)$ is an intertwinor for the action of $G$. This means that for each $g\in G$, it satisfies the equation $\sigma(h)\rho(g)=\rho(g)\sigma(h)$. But this is exactly what it means for the two representations to commute!
Some notation will be helpful here. If the vector space $V$ carries a representation of a group $G$, we can hang a little tag on it to remind ourselves of this, writing ${}_GV$. That is, ${}_GV$ is a $G$-module, and not just a vector space. If we now add a new representation of a group $H$ that commutes with the original representation, we just add a new tag: ${}_{GH}V$. Of course, the order of the tags doesn’t really matter, so we could just as well write ${}_{HG}V$. Either way, this means that we have a representation of $G\times H$ on $V$.
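To ground this in an example of my own (not from the post): take $G=S_3$ permuting the three “slots” of $\mathbb{C}^3\otimes\mathbb{C}^2$ and $H=\mathbb{Z}/2$ swapping the two columns. Every $\sigma(h)$ commutes with every $\rho(g)$, so each $\sigma(h)$ is an intertwinor for the $S_3$ action, and together the two actions assemble into a representation of $S_3\times\mathbb{Z}/2$. The numpy-based setup is just for illustration.

```python
# Two commuting representations on the same space: S_3 permutes the first
# tensorand of C^3 ⊗ C^2 and Z/2 swaps the second.  Each sigma(h) commutes with
# every rho(g), so it is an intertwinor, and together they represent S_3 x Z/2.
import itertools
import numpy as np

def perm_matrix(p, n):
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1.0
    return M

G = list(itertools.permutations(range(3)))
H = [(0, 1), (1, 0)]                      # Z/2 as permutations of two points

rho   = lambda g: np.kron(perm_matrix(g, 3), np.eye(2))   # acts on the first tensorand
sigma = lambda h: np.kron(np.eye(3), perm_matrix(h, 2))   # acts on the second

print(all(np.allclose(rho(g) @ sigma(h), sigma(h) @ rho(g))
          for g in G for h in H))         # True: each sigma(h) is an intertwinor
```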