## Characters of Induced Representations

We know how to restrict and induce representations. Now we want to see what this looks like on the level of characters.

For restricted representations, this is easy. Let $X$ be a matrix representation of a group $G$, and let $H\subseteq G$ be a subgroup. Then $X\!\downarrow^G_H(h)=X(h)$ for any $h\in H$. We just consider an element of $H$ as an element of $G$ and construct the matrix as usual. Therefore we can see that

$$\chi\!\downarrow^G_H(h)=\chi(h)$$

That is, we get the restricted character by restricting the original character.

As for the induced character, we use the matrix of the induced representation that we calculated last time. If $X$ is a matrix representation of a group $H$, which is a subgroup of $G$, then we pick a transversal $\{t_1,\dots,t_n\}$ of $H$ in $G$. Using our formula for the induced matrix, we find

$$\chi\!\uparrow_H^G(g)=\sum_{i=1}^n\chi\left(t_i^{-1}gt_i\right)$$

where we define $\chi(g)=0$ if $g\notin H$. Now, since $\chi$ is a class function on $H$, conjugation by any element of $H$ leaves it the same. That is,

$$\chi\left(t_i^{-1}gt_i\right)=\chi\left(h^{-1}t_i^{-1}gt_ih\right)$$

for all $g\in G$ and $h\in H$. So let’s do exactly this for each element of $H$, add all the results together, and then divide by the number of elements of $H$. That is, we write the above function out in $|H|$ different ways, add them all together, and divide by $|H|$ to get exactly what we started with:

$$\chi\!\uparrow_H^G(g)=\frac{1}{|H|}\sum_{i=1}^n\sum_{h\in H}\chi\left(h^{-1}t_i^{-1}gt_ih\right)$$

But now as $t_i$ varies over the transversal, and as $h$ varies over $H$, their product $t_ih$ varies exactly once over $G$. That is, every $x\in G$ can be written in exactly one way in the form $x=t_ih$ for some transversal element $t_i$ and subgroup element $h$. Thus we find:

$$\chi\!\uparrow_H^G(g)=\frac{1}{|H|}\sum_{x\in G}\chi\left(x^{-1}gx\right)$$

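As a quick check on this final formula, we can evaluate it numerically. The sketch below (Python; the choice of $G=S_3$, $H=A_3$, and the particular nontrivial character on $H$ are mine, for illustration) computes $\chi\!\uparrow_H^G$ by summing over conjugates and recovers the familiar character of the two-dimensional irreducible representation of $S_3$.

```python
from cmath import exp, pi
from itertools import permutations

# Elements of S_3 as permutation tuples; composition is (p∘q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))

# H = A_3 = {e, r, r^2}, with the nontrivial character chi(r^k) = omega^k.
e, r, r2 = (0, 1, 2), (1, 2, 0), (2, 0, 1)
omega = exp(2j * pi / 3)
chi = {e: 1, r: omega, r2: omega ** 2}

def induced_chi(g):
    # chi↑(g) = (1/|H|) Σ_{x∈G} chi(x⁻¹gx), with chi extended by zero off H
    return sum(chi.get(compose(compose(inverse(x), g), x), 0)
               for x in G) / len(chi)

# Expect the 2-dimensional irreducible character of S_3:
# 2 at the identity, -1 on the 3-cycles, 0 on the transpositions.
for g in G:
    print(g, round(induced_chi(g).real, 6))
```
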
## Induced Matrix Representations

Sorry I missed posting this back in the morning…

We want to work out the matrices of induced representations. Explicitly, if $V$ is a left $H$-module of degree $d$, where $H$ is a subgroup of $G$, then $V\!\uparrow_H^G$ is a left $G$-module. If we pick a basis of $V$, we get a matrix representation $X$. We want to describe a matrix representation corresponding to $V\!\uparrow_H^G$. In the process, we’ll see that we were *way* off with our first stabs at the dimensions of tensor products over the group algebra.

The key point is to realize that $\mathbb{C}[G]$ is a free right module over $\mathbb{C}[H]$. That is, we can find some collection of vectors in $\mathbb{C}[G]$ so that any other one can be written as a linear combination of these with coefficients (on the right) in $\mathbb{C}[H]$. Indeed, we can break $G$ up into the left cosets of $H$. Picking one representative $t_i$ of each coset — we call the collection $\{t_1,\dots,t_n\}$ a “transversal” for $H$ — we have essentially chopped up $\mathbb{C}[G]$ into $n$ chunks, each of which looks exactly like $\mathbb{C}[H]$.

To see this, notice that the coset $t_iH$ is a subset of $G$. Thus it describes a subspace of $\mathbb{C}[G]$ — that spanned by the elements of the coset, considered as basis vectors in the group algebra. The right action of $H$ on $\mathbb{C}[G]$ shuffles the basis vectors in this coset around amongst each other, and so this subspace is invariant. It should be clear that it is isomorphic to $\mathbb{C}[H]$, considered as a right $\mathbb{C}[H]$-module.

Okay, so when we consider the tensor product $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V$, we can pull any action by $\mathbb{C}[H]$ across to the right and onto $V$. What remains on the left? A vector space spanned by the transversal elements $t_i$, which essentially index the left cosets of $H$ in $G$. We have one copy of $V$ for each of these cosets, and so the dimension of the induced module is

$$\dim\left(V\!\uparrow_H^G\right)=n\dim(V)=\frac{|G|}{|H|}\dim(V)$$

How should we think about this equation, heuristically? The tensor product multiplies the dimensions of vector spaces, which gives $|G|\dim(V)$. Then the action of $H$ on the tensor product divides by a factor of $|H|$ — at least in principle. In practice, this only works because in our example the action by $H$ is free. That is, no element in the bare tensor product $\mathbb{C}[G]\otimes V$ is left fixed by any non-identity element of $H$.

So how does this give us a matrix representation of $G$? Well, $G$ acts on $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V$ by shuffling around the subspaces that correspond to the cosets of $H$. In fact, this is exactly the coset representation of $G$ corresponding to $H$! If we write $gt_i=t_jh$ for some $h\in H$, then this uses up the transversal element $t_j$. The $h$ is left to “pass through” and act on $V$.

To write this all out explicitly, we get the following block matrix:

$$X\!\uparrow_H^G(g)=\begin{pmatrix}X(t_1^{-1}gt_1)&X(t_1^{-1}gt_2)&\cdots&X(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&X(t_2^{-1}gt_2)&\cdots&X(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\X(t_n^{-1}gt_1)&X(t_n^{-1}gt_2)&\cdots&X(t_n^{-1}gt_n)\end{pmatrix}$$

where $n$ is the number of cosets, and we simply define $X(g)$ to be a zero block if $g$ does not actually fall into $H$.
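
We can check this block formula in a small concrete case. The sketch below is a hypothetical example of my own choosing: $G=S_3$, $H=A_3$ carrying a one-dimensional representation $X$, and the transversal $\{e,(1\,2)\}$. It assembles the block matrix and verifies that the result really is a homomorphism on all of $S_3$.

```python
import numpy as np
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))

# H = A_3 with a 1×1 matrix representation X (degree d = 1).
omega = np.exp(2j * np.pi / 3)
X = {(0, 1, 2): np.array([[1 + 0j]]),
     (1, 2, 0): np.array([[omega]]),
     (2, 0, 1): np.array([[omega ** 2]])}
d = 1
T = [(0, 1, 2), (1, 0, 2)]        # transversal: e and the transposition (1 2)

def induced(g):
    # (i, j) block is X(t_i⁻¹ g t_j), or a zero block if that misses H
    blocks = [[X.get(compose(compose(inverse(ti), g), tj), np.zeros((d, d)))
               for tj in T] for ti in T]
    return np.block(blocks)

# g ↦ induced(g) is multiplicative, so it is a matrix representation of S_3.
for g1 in G:
    for g2 in G:
        assert np.allclose(induced(compose(g1, g2)), induced(g1) @ induced(g2))
print("X↑ is a representation of S_3 of degree", 2 * d)
```
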

## Restricting and Inducing Representations

Two of the most interesting constructions involving group representations are restriction and induction. For our discussion of both of them, we let $H\subseteq G$ be a subgroup; it doesn’t have to be normal.

Now, given a representation $\rho:G\to\mathrm{GL}(V)$, it’s easy to “restrict” it to just apply to elements of $H$. In other words, we can compose the representing homomorphism with the inclusion $\iota:H\hookrightarrow G$: $\rho\circ\iota:H\to\mathrm{GL}(V)$. We write this restricted representation as $\rho\!\downarrow^G_H$; if we are focused on the representing space $V$, we can write $V\!\downarrow^G_H$; if we pick a basis for $V$ to get a matrix representation $X$ we can write $X\!\downarrow^G_H$. Sometimes, if the original group is clear from the context we omit it. For instance, we may write $V\!\downarrow_H$.

It should be clear that restriction is transitive. That is, if $K\subseteq H\subseteq G$ is a chain of subgroups, then the inclusion mapping $K\hookrightarrow G$ is exactly the composition of the inclusion arrows $K\hookrightarrow H$ and $H\hookrightarrow G$. And so we conclude that

$$\left(V\!\downarrow^G_H\right)\!\downarrow^H_K=V\!\downarrow^G_K$$

So whether we restrict from $G$ directly to $K$, or we first restrict from $G$ to $H$ and from there to $K$, we get the same representation in the end.

Induction is a somewhat more mysterious process. If $V$ is a left $H$-module, we want to use it to construct a left $G$-module, which we will write $V\!\uparrow_H^G$, or simply $V\!\uparrow^G$ if the first group is clear from the context. To get this representation, we will take the tensor product over $\mathbb{C}[H]$ with the group algebra of $G$.

To be more explicit, remember that the group algebra $\mathbb{C}[G]$ carries an action of $G$ on both the left and the right. We leave the left action alone, but we restrict the right action down to $H$. So we have a right $H$-module $\mathbb{C}[G]$, and we take the tensor product over $\mathbb{C}[H]$ with $V$. We get the space $V\!\uparrow_H^G=\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V$; in the process the tensor product over $\mathbb{C}[H]$ “eats up” the right action of $H$ on the $\mathbb{C}[G]$ and the left action of $H$ on $V$. The extra left action of $G$ on $\mathbb{C}[G]$ leaves a residual left action on the tensor product, and this is the left action we seek.

Again, induction is transitive. If $K\subseteq H\subseteq G$ is a chain of subgroups, and if $V$ is a left $K$-module, then

$$\left(V\!\uparrow_K^H\right)\!\uparrow_H^G\cong V\!\uparrow_K^G$$

The key step here is that $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C}[H]\cong\mathbb{C}[G]$. But if we have any simple tensor $x\otimes h$ with $x\in\mathbb{C}[G]$ and $h\in\mathbb{C}[H]$, we can use the relation that lets us pull elements of $\mathbb{C}[H]$ across the tensor product. We get $x\otimes h=xh\otimes1$. That is, we can specify any such tensor by an element of $\mathbb{C}[G]$ alone.
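
We can test this transitivity numerically on characters, using the formula $\chi\!\uparrow(g)=\frac{1}{|H|}\sum_{x\in G}\chi(x^{-1}gx)$ for induced characters. The chain of subgroups below, the trivial group inside $A_3$ inside $S_3$, and the starting trivial character are illustrative choices.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))                  # S_3
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]             # A_3
K = [(0, 1, 2)]                                   # trivial subgroup

def induce(chi, sub, grp):
    # chi↑(g) = (1/|sub|) Σ_{x∈grp} chi(x⁻¹gx), chi extended by zero
    def up(g):
        return sum(chi.get(compose(compose(inverse(x), g), x), 0)
                   for x in grp) / len(sub)
    return {g: up(g) for g in grp}

triv = {(0, 1, 2): 1}                       # the trivial character of K
direct = induce(triv, K, G)                 # induce from K straight up to G
staged = induce(induce(triv, K, H), H, G)   # go through H on the way

assert direct == staged                     # transitivity, on characters
print(direct[(0, 1, 2)])                    # 6.0: the degree [G:K]·1
```
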

## The Character Table as Change of Basis

Now that we’ve seen that the character table is square, we know that irreducible characters form an orthonormal basis of the space of class functions. And we also know another orthonormal basis of this space, indexed by the conjugacy classes $K_j\subseteq G$:

$$f_j=\sqrt{\frac{|G|}{|K_j|}}\,\delta_{K_j}$$

where $\delta_{K_j}$ is the function taking the value $1$ on the class $K_j$ and $0$ elsewhere.

A line in the character table corresponds to an irreducible character $\chi^{(i)}$, and its entries $\chi^{(i)}(K_j)$ tell us how to write it in terms of the basis $\{f_j\}$:

$$\chi^{(i)}=\sum_j\sqrt{\frac{|K_j|}{|G|}}\,\chi^{(i)}(K_j)f_j$$

That is, it’s a change of basis matrix from one to the other. In fact, we can modify it slightly to exploit the orthonormality as well.

When dealing with lines in the character table, we found that we can write our inner product as

$$\langle\chi,\psi\rangle=\sum_j\frac{|K_j|}{|G|}\chi(K_j)\overline{\psi(K_j)}$$

So let’s modify the table to replace the entry $\chi^{(i)}(K_j)$ with $\sqrt{\frac{|K_j|}{|G|}}\,\chi^{(i)}(K_j)$. Then we have

$$\sum_j\left(\sqrt{\frac{|K_j|}{|G|}}\,\chi^{(i)}(K_j)\right)\overline{\left(\sqrt{\frac{|K_j|}{|G|}}\,\chi^{(m)}(K_j)\right)}=\left\langle\chi^{(i)},\chi^{(m)}\right\rangle=\delta_{i,m}$$

where we make use of our orthonormality relations. That is, if we use the regular dot product on the rows of the modified character table (considered as tuples of complex numbers) we find that they’re orthonormal. But this means that the modified table is a unitary matrix, and thus its columns are orthonormal as well. We conclude that

$$\sum_i\left(\sqrt{\frac{|K_j|}{|G|}}\,\chi^{(i)}(K_j)\right)\overline{\left(\sqrt{\frac{|K_l|}{|G|}}\,\chi^{(i)}(K_l)\right)}=\delta_{j,l}$$

where now the sum is over a set indexing the irreducible characters. We rewrite these relations as

$$\sum_i\chi^{(i)}(K_j)\overline{\chi^{(i)}(K_l)}=\frac{|G|}{|K_j|}\delta_{j,l}$$

We can use these relations to help fill out character tables. For instance, let’s consider the character table of $S_3$, starting from the first two rows:

$$\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline\chi^{(1)}&1&1&1\\\chi^{(2)}&1&-1&1\\\chi^{(3)}&?&?&?\end{array}$$

Here $K_1$ is the class of the identity, $K_2$ the class of the three transpositions, and $K_3$ the class of the two $3$-cycles.

where we know that the third row must exist for the character table to be square. Now our new orthogonality relations tell us on the first column that

$$1\cdot1+1\cdot1+\left|\chi^{(3)}(K_1)\right|^2=\frac{|G|}{|K_1|}=\frac{6}{1}=6$$

Since $\chi^{(3)}(K_1)=\chi^{(3)}(e)$, it is a dimension, and must be positive. That is, $\chi^{(3)}(K_1)=2$. On the second column we see that

$$1\cdot1+(-1)\cdot\overline{(-1)}+\left|\chi^{(3)}(K_2)\right|^2=\frac{|G|}{|K_2|}=\frac{6}{3}=2$$

and so we must have $\chi^{(3)}(K_2)=0$. Finally on the third column we see that

$$1\cdot1+1\cdot1+\left|\chi^{(3)}(K_3)\right|^2=\frac{|G|}{|K_3|}=\frac{6}{2}=3$$

so $\chi^{(3)}(K_3)=\pm1$.

To tell the difference, we can use the new orthogonality relations on the first and third or second and third columns, or the old ones on the first and third or second and third rows. Any of them will tell us that $\chi^{(3)}(K_3)=-1$, and we’ve completed the character table without worrying about constructing any representations at all.
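
We can double-check the completed table numerically: scaling column $j$ by $\sqrt{|K_j|/|G|}$ should give a matrix whose rows and columns are both orthonormal. A short sketch:

```python
import numpy as np

# Completed character table of S_3; the columns are the classes of
# e, the transpositions, and the 3-cycles, with sizes 1, 3, 2.
table = np.array([[1,  1,  1],
                  [1, -1,  1],
                  [2,  0, -1]], dtype=float)
sizes = np.array([1, 3, 2])
order = sizes.sum()                       # |G| = 6

# Modified table: scale column j by sqrt(|K_j| / |G|).
U = table * np.sqrt(sizes / order)

# A unitary (here real orthogonal) matrix: rows AND columns orthonormal.
assert np.allclose(U @ U.T, np.eye(3))
assert np.allclose(U.T @ U, np.eye(3))
print("modified character table of S_3 is orthogonal")
```
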

We should take note here that the conjugacy classes index one orthonormal basis of the space of class functions, and the irreducible representations index another. Since all bases of any given vector space have the same cardinality, the set of conjugacy classes and the set of irreducible representations have the same number of elements. However, there is no reason to believe that there is any particular correspondence between the elements of the two sets. And in general there isn’t any, but we will see that in the case of symmetric groups there is a way of making just such a correspondence.

## The Character Table is Square

We’ve defined the character table of a group $G$, and we’ve seen that it must be finite. Specifically, it cannot have any more rows — cannot have any more irreducible representations — than there are conjugacy classes in $G$. Now we can show that there are always *exactly* as many irreducible representations as there are conjugacy classes in $G$.

We recall that for any representation $V$ the dimension of the center of the endomorphism algebra, $\dim Z(\mathrm{End}_G(V))$, is equal to the number of distinct irreducible representations that show up in $V$. In particular, since we know that every irreducible representation shows up in the left regular representation, the number of irreducible representations is $\dim Z(\mathrm{End}_G(\mathbb{C}[G]))$. Thus to calculate this number, we must understand the structure of the endomorphism algebra and its center.

But we just saw that $\mathrm{End}_G(\mathbb{C}[G])$ is anti-isomorphic to $\mathbb{C}[G]$ as algebras, and this anti-isomorphism induces an anti-isomorphism on their centers. In particular, their centers have the same dimension. That is:

$$\dim Z\left(\mathrm{End}_G(\mathbb{C}[G])\right)=\dim Z\left(\mathbb{C}[G]\right)$$

So what does a central element of the group algebra look like? Let $z\in Z(\mathbb{C}[G])$ be such a central element and write it out as

$$z=\sum_{g\in G}c_g\,g$$

Now since $z$ is central, it must commute with every other element of the group algebra. In particular, for every $h\in G$ we have $hz=zh$, or $z=h^{-1}zh$. That is:

$$\sum_{g\in G}c_g\,g=\sum_{g\in G}c_g\,h^{-1}gh=\sum_{g\in G}c_{hgh^{-1}}\,g$$

Since $z$ is invariant, the coefficients $c_g$ and $c_{hgh^{-1}}$ must be the same. But as $h$ runs over $G$, $hgh^{-1}$ runs over the conjugacy class of $g$, so the coefficients must be the same for all elements in the conjugacy class. That is, we have exactly as many free parameters when building $z$ as there are conjugacy classes in $G$ — one for each of them.

So we’ve established that the center of the group algebra has dimension equal to the number of conjugacy classes in $G$. We also know that this is the same as the dimension of the center of the endomorphism algebra of the left regular representation. Finally, we know that this is the same as the number of distinct irreducible representations that show up in the decomposition of the left regular representation. And so we conclude that any finite group must have exactly as many irreducible representations as it has conjugacy classes. Since the conjugacy classes index the columns of the character table of $G$, and the irreducible characters index the rows, we conclude that the character table is always square.
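
The dimension count at the heart of this argument is easy to verify by machine for a small group. The sketch below computes the conjugacy classes of $S_3$ directly and checks that every class sum is fixed by conjugation, so the class sums span the center; the count of $3$ matches the three irreducible characters of $S_3$.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))

# Conjugacy classes, computed directly from the definition.
classes = {frozenset(compose(compose(inverse(h), g), h) for h in G) for g in G}
print(len(classes))       # 3 classes: e, the transpositions, the 3-cycles

# Each class sum z = Σ_{g∈K} g satisfies h z h⁻¹ = z, since conjugation
# by h merely permutes the class; so the class sums all lie in the center.
for K in classes:
    for h in G:
        assert frozenset(compose(compose(h, g), inverse(h)) for g in K) == K
```
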

As a quick corollary, we find that the irreducible characters span a subspace of the space of class functions with dimension equal to the number of conjugacy classes in $G$. Since this is the dimension of the whole space of class functions, the irreducible characters must form an orthonormal basis of this space.

## The Endomorphism Algebra of the Left Regular Representation

Since the left regular representation is such an interesting one — in particular since it contains all the irreducible representations — we want to understand its endomorphisms. That is, we want to understand the structure of $\mathrm{End}_G(\mathbb{C}[G])$. I say that, amazingly enough, it is *anti*-isomorphic to the group algebra $\mathbb{C}[G]$ itself!

So let’s try to come up with an anti-isomorphism $\mathbb{C}[G]\to\mathrm{End}_G(\mathbb{C}[G])$. Given any element $a\in\mathbb{C}[G]$, we define the map $\rho_a$ to be right-multiplication by $a$. That is:

$$\rho_a(v)=va$$

for every $v\in\mathbb{C}[G]$. This is a $G$-endomorphism, since $G$ acts by multiplication on the left, and left-multiplication commutes with right-multiplication.

To see that it’s an anti-homomorphism, we must check that it’s linear and that it reverses the order of multiplication. Linearity is straightforward; as for reversing multiplication, we calculate:

$$\rho_{ab}(v)=v(ab)=(va)b=\rho_b\left(\rho_a(v)\right)=\left[\rho_b\circ\rho_a\right](v)$$

Next we check that $a\mapsto\rho_a$ is injective by calculating its kernel. If $\rho_a=0$ then

$$0=\rho_a(1)=1a=a$$

so this is only possible if $a=0$.

Finally we must check surjectivity. Say $\phi\in\mathrm{End}_G(\mathbb{C}[G])$, and define $a=\phi(1)$. I say that $\phi=\rho_a$, since

$$\phi(g)=\phi(g\cdot1)=g\phi(1)=ga=\rho_a(g)$$

Since the two $G$-endomorphisms are equal on the standard basis of $\mathbb{C}[G]$, they are equal. Thus, every $G$-endomorphism of the left regular representation is of the form $\rho_a$ for some $a\in\mathbb{C}[G]$.
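
We can watch this anti-isomorphism in action with a little code. Group algebra elements are dictionaries from group elements to coefficients (the particular elements $a$ and $b$ below are arbitrary choices for illustration); we check that $\rho_{ab}=\rho_b\circ\rho_a$ and that each $\rho_a$ commutes with the left action.

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))

# Elements of C[S_3] as dicts {group element: coefficient}.
def mul(u, v):
    out = {}
    for g, cg in u.items():
        for h, ch in v.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return out

def rho(a):
    return lambda v: mul(v, a)        # right multiplication by a

a = {(1, 2, 0): 2, (1, 0, 2): -1}     # arbitrary test elements
b = {(0, 2, 1): 3, (0, 1, 2): 1}

for g in G:
    v = {g: 1}                        # standard basis vector
    # anti-homomorphism: rho_{ab} = rho_b ∘ rho_a
    assert rho(mul(a, b))(v) == rho(b)(rho(a)(v))
    # G-endomorphism: rho_a commutes with left multiplication by any h
    for h in G:
        assert mul({h: 1}, rho(a)(v)) == rho(a)(mul({h: 1}, v))
print("rho is an anti-homomorphism into End_G(C[G])")
```
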

## Decomposing the Left Regular Representation

Let’s take the left regular representation of a finite group $G$ on its group algebra $\mathbb{C}[G]$ and decompose it into irreducible representations.

Our first step is to compute the character $\chi$ of $\mathbb{C}[G]$ as a left $G$-module. The nice thing here is that it’s a permutation representation, and that means we have a shortcut to calculating its character: $\chi(g)$ is the number of fixed points of the action of $g$ on the standard basis of $\mathbb{C}[G]$. That is, it counts the number of $h\in G$ with $gh=h$. But this can only happen if $g$ is the group identity, and in that case every element is a fixed point. Thus we conclude

$$\chi(g)=\begin{cases}|G|&\text{if }g=e\\0&\text{otherwise}\end{cases}$$

Now let $V$ be any irreducible representation of $G$, with character $\chi_V$. We know that the multiplicity of $V$ in $\mathbb{C}[G]$ is given by the inner product $\langle\chi_V,\chi\rangle$. This, we can calculate:

$$\langle\chi_V,\chi\rangle=\frac{1}{|G|}\sum_{g\in G}\chi_V(g)\overline{\chi(g)}=\frac{1}{|G|}\chi_V(e)|G|=\dim V$$

where in the last line we use the fact that evaluating the character of any representation at the identity element gives the degree of that representation.

So, what does this tell us? Every irreducible representation shows up in $\mathbb{C}[G]$ with a multiplicity equal to its degree. In particular, it must show up at least once. That is, the left regular representation contains all the irreducible representations.

Thus if $V^{(1)},\dots,V^{(k)}$ are the irreducible representations of $G$, we have a decomposition:

$$\mathbb{C}[G]\cong\bigoplus_{i=1}^k\left(V^{(i)}\right)^{\oplus\dim V^{(i)}}$$

Taking dimensions on either side, we find

$$|G|=\sum_{i=1}^k\left(\dim V^{(i)}\right)^2$$

We can check this in the case of $S_3$, since we have its complete character table: the degrees are $1$, $1$, and $2$, and indeed

$$1^2+1^2+2^2=6=|S_3|$$
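
The multiplicity computation itself is easy to replicate for $S_3$. In the sketch below, characters are stored by conjugacy class, with classes distinguished by fixed-point counts; all the characters involved are real, so the inner product needs no conjugation.

```python
from itertools import permutations

G = list(permutations(range(3)))          # S_3

def fixed_points(p):
    # 3, 1, 0 fixed points pick out the classes of e, transpositions, 3-cycles
    return sum(1 for i in range(3) if p[i] == i)

# The three irreducible characters of S_3, keyed by fixed-point count.
irr = [{3: 1, 1: 1, 0: 1},                # trivial
       {3: 1, 1: -1, 0: 1},               # sign
       {3: 2, 1: 0, 0: -1}]               # two-dimensional

chi_reg = {3: len(G), 1: 0, 0: 0}         # character of the regular rep

def inner(chi, psi):                      # characters here are real
    return sum(chi[fixed_points(g)] * psi[fixed_points(g)] for g in G) / len(G)

mults = [inner(chi, chi_reg) for chi in irr]
print(mults)                              # [1.0, 1.0, 2.0]: the degrees
assert sum(m * m for m in mults) == len(G)    # 1² + 1² + 2² = 6 = |S_3|
```
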

## The Dimension of the Space of Tensors Over the Group Algebra

Now we can return to the space of tensor products over the group algebra and take a more solid pass at calculating its dimension. Key to this approach will be the isomorphism $V\otimes_GW\cong(V\otimes W)^G$.

First off, we want to calculate the character of $V\otimes W$. If $V$ — as a left $G$-module — has character $\chi$ and $W$ has character $\psi$, then we know that the inner tensor product has character

$$\chi_{V\otimes W}(g)=\chi(g)\psi(g)$$

Next, we recall that the submodule of invariants can be written as

$$(V\otimes W)^G\cong\hom_G\left(\mathbf{1},V\otimes W\right)$$

Now, we know that $V\otimes_GW\cong(V\otimes W)^G$, and thus the dimension of our space of tensors is the dimension of the space of invariants. We’ve seen that this is the multiplicity of the trivial representation in $V\otimes W$, which we’ve also seen is the inner product $\langle\chi\psi,1\rangle$. We calculate:

$$\dim\left(V\otimes_GW\right)=\langle\chi\psi,1\rangle=\frac{1}{|G|}\sum_{g\in G}\chi(g)\psi(g)$$

This may not be as straightforward and generic a result as the last one, but it’s at least easily calculated for any given pair of modules $V$ and $W$.
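
For instance, with the characters of $S_3$ in hand we can evaluate this dimension formula for a few pairs of modules (the pairings below are illustrative choices, and all the characters involved are real):

```python
from itertools import permutations

G = list(permutations(range(3)))       # S_3

def fixed_points(p):
    return sum(1 for i in range(3) if p[i] == i)

# Characters of S_3 keyed by fixed-point count (e: 3, transposition: 1, 3-cycle: 0).
triv = {3: 1, 1: 1, 0: 1}
sign = {3: 1, 1: -1, 0: 1}
two = {3: 2, 1: 0, 0: -1}

def dim_tensor_over_G(chi, psi):
    # dim(V ⊗_G W) = (1/|G|) Σ_g chi(g) psi(g)
    return sum(chi[fixed_points(g)] * psi[fixed_points(g)] for g in G) / len(G)

print(dim_tensor_over_G(two, two))     # 1.0
print(dim_tensor_over_G(two, sign))    # 0.0
print(dim_tensor_over_G(triv, triv))   # 1.0
```
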

## Tensors Over the Group Algebra are Invariants

It turns out that we can view the space of tensors over a group algebra as a subspace of invariants of the space of all tensors. That is, if $V$ is a right $G$-module and $W$ is a left $G$-module, then $V\otimes_GW$ is a subspace of $V\otimes W$.

To see this, first we’ll want to turn $V$ into a left $G$-module by defining

$$g\cdot v=vg^{-1}$$

We can check that this is a left action:

$$g_1\cdot(g_2\cdot v)=(g_2\cdot v)g_1^{-1}=\left(vg_2^{-1}\right)g_1^{-1}=v\left(g_2^{-1}g_1^{-1}\right)=v(g_1g_2)^{-1}=(g_1g_2)\cdot v$$

The trick is that moving from a right to a left action reverses the order of composition, and changing from a group element to its inverse reverses the order again.

So now that we have two left actions by $G$, we can take the outer tensor product, which carries an action by $G\times G$. Then we pass to the inner tensor product, acting on each tensorand by the same group element. To be more explicit:

$$g\cdot(v\otimes w)=\left(vg^{-1}\right)\otimes(gw)$$

Now, I say that being invariant under this action of $G$ is equivalent to the new relation $vg\otimes w=v\otimes gw$ that holds for tensors over a group algebra. Indeed, if $vg\otimes w$ is invariant, then

$$vg\otimes w=g\cdot(vg\otimes w)=\left(vgg^{-1}\right)\otimes(gw)=v\otimes gw$$

Similarly, if we apply this action to a tensor product over the group algebra we find

$$g\cdot(v\otimes w)=\left(vg^{-1}\right)\otimes(gw)=v\otimes\left(g^{-1}gw\right)=v\otimes w$$

so this action is trivial.

Now, we’ve been playing it sort of fast and loose here. We originally got the space $V\otimes_GW$ by adding new relations to the space $V\otimes W$, and normally adding new relations to an algebraic object gives a quotient object. But when it comes to vector spaces and modules over finite groups, we’ve seen that quotient objects and subobjects are the same thing.

We can get a more explicit description to verify this equivalence by projecting onto the invariants. Given a tensor $v\otimes w\in V\otimes_GW$, we consider it instead as a tensor in $V\otimes W$. Now, this is far from unique, since many equivalent tensors over the group algebra correspond to different tensors in $V\otimes W$. But next we project to the invariant

$$\frac{1}{|G|}\sum_{g\in G}\left(vg^{-1}\right)\otimes(gw)$$

Now I say that any two equivalent tensors in $V\otimes W$ are sent to the same invariant tensor. We check the images of $vh\otimes w$ and $v\otimes hw$: substituting $g=g'h$, the image of the first is

$$\frac{1}{|G|}\sum_{g\in G}\left(vhg^{-1}\right)\otimes(gw)=\frac{1}{|G|}\sum_{g'\in G}\left(vg'^{-1}\right)\otimes\left(g'hw\right)$$

which is exactly the image of the second.

To invert this process, we just consider an invariant tensor as a tensor in $V\otimes_GW$. The “fast and loose” proof above will suffice to show that this is a well defined map $(V\otimes W)^G\to V\otimes_GW$. To see it’s an inverse, take the forward image and apply the relation we get from moving it back to $V\otimes_GW$:

$$\frac{1}{|G|}\sum_{g\in G}\left(vg^{-1}\right)\otimes(gw)=\frac{1}{|G|}\sum_{g\in G}\left(vg^{-1}g\right)\otimes w=\frac{1}{|G|}\sum_{g\in G}v\otimes w=v\otimes w$$

And so we’ve established the isomorphism $V\otimes_GW\cong(V\otimes W)^G$, as desired.

## Projecting Onto Invariants

Given a $G$-module $V$, we can find the $G$-submodule $V^G$ of $G$-invariant vectors. It’s not just a submodule, but it’s a direct summand. Thus not only does it come with an inclusion mapping $V^G\hookrightarrow V$, but there must be a projection $\pi:V\to V^G$. That is, there’s a linear map that takes a vector and returns a $G$-invariant vector, and further if the vector is already $G$-invariant it is left alone.

Well, we know that it exists, but it turns out that we can describe it rather explicitly. The projection from vectors to -invariant vectors is exactly the “averaging” procedure we ran into (with a slight variation) when proving Maschke’s theorem. We’ll describe it in general, and then come back to see how it applies in that case.

Given a vector $v\in V$, we define

$$\pi(v)=\frac{1}{|G|}\sum_{g\in G}gv$$

This is clearly a linear operation. I say that $\pi(v)$ is invariant under the action of $G$. Indeed, given $h\in G$ we calculate

$$h\pi(v)=\frac{1}{|G|}\sum_{g\in G}hgv=\frac{1}{|G|}\sum_{g\in G}gv=\pi(v)$$

since as $g$ ranges over $G$, so does $hg$, albeit in a different order. Further, if $v$ is already $G$-invariant, then we find

$$\pi(v)=\frac{1}{|G|}\sum_{g\in G}gv=\frac{1}{|G|}\sum_{g\in G}v=v$$

so this is indeed the projection we’re looking for.
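
Here is the averaging projection at work on a small example: $S_3$ permuting the coordinates of $\mathbb{C}^3$ (my choice of module, for illustration).

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))        # S_3 permuting coordinates of C³

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, v in enumerate(p):
        M[v, i] = 1                     # sends e_i to e_{p(i)}
    return M

# π = (1/|G|) Σ_g g as a matrix
P = sum(perm_matrix(g) for g in G) / len(G)

assert np.allclose(P @ P, P)            # idempotent, so a projection
for g in G:
    assert np.allclose(perm_matrix(g) @ P, P)   # its image is G-invariant

v = np.array([1.0, 2.0, 6.0])
print(P @ v)                            # [3. 3. 3.]: the invariant average
```
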

Now, how does this apply to Maschke’s theorem? Well, given a $G$-module $V$, the collection of sesquilinear forms on the underlying space forms a vector space itself. Indeed, such forms correspond to matrices, which form a vector space. Anyway, rather than write the usual angle-brackets, we will write one of these forms as a bilinear function $B(v,w)$.

Now I say that the space of forms carries an action from the *right* by $G$. Indeed, we can define

$$[Bg](v,w)=B(gv,gw)$$

It’s straightforward to verify that this is a right action by $G$. So, how do we “average” the form $B$ to get a $G$-invariant form? We define

$$\overline{B}(v,w)=\frac{1}{|G|}\sum_{g\in G}B(gv,gw)$$

which — other than the factor of $\frac{1}{|G|}$ — is exactly how we came up with a $G$-invariant form in the proof of Maschke’s theorem!
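
To close the loop, we can average a concrete form. Below, a randomly chosen positive-definite Hermitian form on $\mathbb{C}^3$ (the matrix $A$ is illustrative data) is averaged over the permutation action of $S_3$, and the result is checked to be $G$-invariant:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, v in enumerate(p):
        M[v, i] = 1
    return M

# An arbitrary positive-definite Hermitian form B(v, w) = v* A w.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = X.conj().T @ X + np.eye(3)

# B̄(v, w) = (1/|G|) Σ_g B(gv, gw), i.e. Ā = (1/|G|) Σ_g g* A g.
A_bar = sum(perm_matrix(g).conj().T @ A @ perm_matrix(g) for g in G) / len(G)

# The averaged form is G-invariant: B̄(hv, hw) = B̄(v, w) for every h.
for h in G:
    H = perm_matrix(h)
    assert np.allclose(H.conj().T @ A_bar @ H, A_bar)
print("averaged form is G-invariant")
```
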