It turns out that our efforts last time were somewhat unnecessary, although they were instructive. Actually, we already had a matrix representation in our hands that would have done the trick.
The secret is to look at the block upper-triangular form from when we defined reducibility:

$$X(g)=\begin{pmatrix}\alpha(g)&\beta(g)\\0&\gamma(g)\end{pmatrix}$$

We worked this out for the product of two group elements, finding

$$X(gh)=\begin{pmatrix}\alpha(g)\alpha(h)&\alpha(g)\beta(h)+\beta(g)\gamma(h)\\0&\gamma(g)\gamma(h)\end{pmatrix}$$
We focused before on the upper-left corner to see that $\alpha$ was a subrepresentation, but we see that $\gamma(gh)=\gamma(g)\gamma(h)$ as well. The thing is, if the overall representation acts on $V$ and $\alpha$ acts on the submodule $W$, then $\gamma$ acts on the quotient space $V/W$. That is, it's generally not a submodule. However, it so happens that over a finite group we have $V\cong W\oplus V/W$ as modules. That is, if we have a matrix representation that looks like the one above, then we can find a different basis that makes it look like

$$\begin{pmatrix}\alpha(g)&0\\0&\gamma(g)\end{pmatrix}$$
This is more than Maschke's theorem tells us — not only do we have a decomposition, but we have one that uses the exact same matrix representation in the lower right as the original one. Proving this will reprove Maschke's theorem, and in a way that works over any field whose characteristic doesn't divide the order of the group!
So, let's look for a change-of-basis matrix that's partitioned the same way:

$$P=\begin{pmatrix}I&U\\0&I\end{pmatrix}\qquad P^{-1}=\begin{pmatrix}I&-U\\0&I\end{pmatrix}$$

Multiplying this out, we find

$$PX(g)P^{-1}=\begin{pmatrix}\alpha(g)&\beta(g)+U\gamma(g)-\alpha(g)U\\0&\gamma(g)\end{pmatrix}$$
which gives us the equation $\beta(g)+U\gamma(g)-\alpha(g)U=0$, which we rearrange to give

$$U=\alpha(g)^{-1}\beta(g)+\alpha(g)^{-1}U\gamma(g)$$

as well as

$$U=\alpha(g)U\gamma(g)^{-1}-\beta(g)\gamma(g)^{-1}$$

That is, acting on the left of $U$ by $\alpha(g)^{-1}$ and on the right by $\gamma(g)$ doesn't leave $U$ unchanged, but instead adds the offset $\alpha(g)^{-1}\beta(g)$; likewise, acting on the left by $\alpha(g)$ and on the right by $\gamma(g)^{-1}$ adds the offset $-\beta(g)\gamma(g)^{-1}$. We're not looking for an invariant of these actions, but something close. Incidentally, why do these two offsets cancel? Well, if we put the two actions together we find

$$\alpha(g)\left(\alpha(g)^{-1}\beta(g)+\alpha(g)^{-1}U\gamma(g)\right)\gamma(g)^{-1}-\beta(g)\gamma(g)^{-1}=U+\beta(g)\gamma(g)^{-1}-\beta(g)\gamma(g)^{-1}$$

so the combined offset must clearly be zero, as desired.
Anyway, I say that things will work out if we choose

$$U=\frac{1}{|G|}\sum_{g\in G}\alpha(g)^{-1}\beta(g)$$

Indeed, we calculate

$$\begin{aligned}\alpha(h)^{-1}\beta(h)+\alpha(h)^{-1}U\gamma(h)&=\alpha(h)^{-1}\beta(h)+\frac{1}{|G|}\sum_{g\in G}\alpha(gh)^{-1}\beta(g)\gamma(h)\\&=\alpha(h)^{-1}\beta(h)+\frac{1}{|G|}\sum_{g\in G}\alpha(gh)^{-1}\left(\beta(gh)-\alpha(g)\beta(h)\right)\\&=\alpha(h)^{-1}\beta(h)+U-\alpha(h)^{-1}\beta(h)\\&=U\end{aligned}$$

just as we wanted.
Notice that, just like our original proof of Maschke's theorem, this depends on a sum that is only finite if $G$ is a finite group, and on the fact that we can divide by $|G|$.
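To see the averaging trick in action, here is a small numerical sketch of my own (not from the original post). It takes the defining representation of $S_3$ in the basis $\{e_1+e_2+e_3,e_2,e_3\}$ — where every matrix is block upper-triangular with the trivial subrepresentation $\alpha(g)=1$ in the corner — computes $U$ by averaging, and checks that conjugating by $P$ kills the upper-right block. All variable names are my own.

```python
from fractions import Fraction
from itertools import permutations

# Matrices as lists of rows of Fractions.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def perm_matrix(p):
    # Permutation matrix acting on C^3: column j is e_{p(j)}.
    n = len(p)
    return [[Fraction(1) if p[j] == i else Fraction(0) for j in range(n)]
            for i in range(n)]

# Change of basis to {e1+e2+e3, e2, e3}; columns of B are the new basis vectors.
B     = [[Fraction(x) for x in row] for row in [[1, 0, 0], [1, 1, 0], [1, 0, 1]]]
B_inv = [[Fraction(x) for x in row] for row in [[1, 0, 0], [-1, 1, 0], [-1, 0, 1]]]

group = list(permutations(range(3)))
X = {g: mat_mul(B_inv, mat_mul(perm_matrix(g), B)) for g in group}

# In this basis every X(g) is block upper-triangular: entries below the
# 1x1 upper-left block (the trivial subrepresentation, alpha(g) = 1) vanish.
assert all(X[g][1][0] == 0 and X[g][2][0] == 0 for g in group)

# U = (1/|G|) sum_g alpha(g)^{-1} beta(g); here alpha(g)^{-1} = 1.
U = [sum(X[g][0][j] for g in group) / len(group) for j in (1, 2)]

# Conjugating by P = [[1, U], [0, I]] should produce block-diagonal matrices.
P     = [[Fraction(1), U[0], U[1]],
         [Fraction(0), Fraction(1), Fraction(0)],
         [Fraction(0), Fraction(0), Fraction(1)]]
P_inv = [[Fraction(1), -U[0], -U[1]],
         [Fraction(0), Fraction(1), Fraction(0)],
         [Fraction(0), Fraction(0), Fraction(1)]]
Y = {g: mat_mul(P, mat_mul(X[g], P_inv)) for g in group}
```

Using exact `Fraction` arithmetic avoids floating-point fuzz, so the off-diagonal blocks of the conjugated matrices come out exactly zero.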
Last time we wrote down the complete character table of $S_3$:

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi&2&0&-1\end{array}$$

which is all well and good, except we haven't actually seen a representation with the last line as its character!
So where did we get the last line? We had the equation $\chi^\mathrm{def}=\chi^\mathrm{triv}+\chi$, which involves the characters of the defining representation and the trivial representation. This equation should correspond to an isomorphism $V^\mathrm{def}\cong V^\mathrm{triv}\oplus V$.
We know that there's a copy of the trivial representation as a submodule of the defining representation. If we use the standard basis $\{e_1,e_2,e_3\}$ of $\mathbb{C}^3$, this submodule is the line spanned by the vector $e_1+e_2+e_3$. We even worked out the defining representation in terms of the basis $\{e_1+e_2+e_3,e_2,e_3\}$ to show that it's reducible.
But what we want is a complementary subspace which is also $S_3$-invariant. And we can find such a complement if we have an invariant inner product on our space. And, luckily enough, permutation representations admit a very nice invariant inner product! Indeed, just take the inner product that arises by declaring the standard basis to be orthonormal; it's easy to see that this is invariant under the action of $S_3$.
So we need to take our basis and change the second and third members to make them orthogonal to the first one. Then they will span the orthogonal complement, which we know to be $S_3$-invariant. The easiest way to do this is to use the basis $\{e_1+e_2+e_3,\,e_2-e_1,\,e_3-e_1\}$. Then we can calculate the action of each permutation in terms of this basis. For example:

$$(1\,2)\colon\quad e_2-e_1\mapsto e_1-e_2=-(e_2-e_1)\qquad e_3-e_1\mapsto e_3-e_2=(e_3-e_1)-(e_2-e_1)$$
and write out all the representing matrices in terms of this basis:

$$X(e)=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\qquad X\left((1\,2)\right)=\begin{pmatrix}1&0&0\\0&-1&-1\\0&0&1\end{pmatrix}\qquad X\left((1\,3)\right)=\begin{pmatrix}1&0&0\\0&1&0\\0&-1&-1\end{pmatrix}$$

$$X\left((2\,3)\right)=\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\qquad X\left((1\,2\,3)\right)=\begin{pmatrix}1&0&0\\0&-1&-1\\0&1&0\end{pmatrix}\qquad X\left((1\,3\,2)\right)=\begin{pmatrix}1&0&0\\0&0&1\\0&-1&-1\end{pmatrix}$$

These all have the required form:

$$\begin{pmatrix}1&0&0\\0&*&*\\0&*&*\end{pmatrix}$$

where the $1$ in the upper-left is the trivial representation and the $2\times2$ block in the lower right is exactly the other representation we've been looking for! Indeed, we can check the values of the character:

$$\chi(e)=2\qquad\chi\left((1\,2)\right)=0\qquad\chi\left((1\,2\,3)\right)=-1$$
exactly as the character table predicted.
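The whole change-of-basis computation above can be checked mechanically. Here is a sketch of my own (names and layout are my assumptions, not from the post): it conjugates the permutation matrices of $S_3$ into the complementary basis $\{e_1+e_2+e_3,e_2-e_1,e_3-e_1\}$ and reads off the trace of the lower-right $2\times2$ block on each conjugacy class.

```python
from fractions import Fraction
from itertools import permutations

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def perm_matrix(p):
    n = len(p)
    return [[Fraction(1) if p[j] == i else Fraction(0) for j in range(n)]
            for i in range(n)]

# Columns of B: e1+e2+e3, e2-e1, e3-e1 (in zero-indexed coordinates).
B = [[Fraction(x) for x in row] for row in [[1, -1, -1], [1, 1, 0], [1, 0, 1]]]
B_inv = [[Fraction(n, 3) for n in row]
         for row in [[1, 1, 1], [-1, 2, -1], [-1, -1, 2]]]

group = list(permutations(range(3)))
X = {g: mat_mul(B_inv, mat_mul(perm_matrix(g), B)) for g in group}

def cycle_type(p):
    # conjugacy classes in a symmetric group are labeled by cycle type
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                L += 1
            lengths.append(L)
    return tuple(sorted(lengths))

# Character of the 2x2 lower-right block, keyed by cycle type.
chi = {}
for g in group:
    chi[cycle_type(g)] = X[g][1][1] + X[g][2][2]
```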
When we first defined the character table of a group, we closed by starting to write down the character table of $S_3$:

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\end{array}$$

We've already verified that the two characters we know of are orthonormal, and we know that there can be at most one more, which would make the character table look like:

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi&?&?&?\end{array}$$
Do we have any other representations of $S_3$ to work with? Well, there's the defining representation. This has a character we can specify by the three values

$$\chi^\mathrm{def}(e)=3\qquad\chi^\mathrm{def}\left((1\,2)\right)=1\qquad\chi^\mathrm{def}\left((1\,2\,3)\right)=0$$

We calculate the multiplicities of the two characters we know by taking inner products:

$$\begin{aligned}\left\langle\chi^\mathrm{triv},\chi^\mathrm{def}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot3+3\cdot1\cdot1+2\cdot1\cdot0\right)=1\\\left\langle\chi^\mathrm{sgn},\chi^\mathrm{def}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot3+3\cdot(-1)\cdot1+2\cdot1\cdot0\right)=0\end{aligned}$$
That is, the defining representation contains one copy of the trivial representation and no copies of the signum representation. In fact, we already knew about the copy of the trivial representation, but it's nice to see it confirmed again. Subtracting it off, we're left with a residual character $\chi=\chi^\mathrm{def}-\chi^\mathrm{triv}$:

$$\chi(e)=2\qquad\chi\left((1\,2)\right)=0\qquad\chi\left((1\,2\,3)\right)=-1$$

Now this character might itself decompose, or it might be irreducible. We can check by calculating its inner product with itself:

$$\langle\chi,\chi\rangle=\frac{1}{6}\left(1\cdot2\cdot2+3\cdot0\cdot0+2\cdot(-1)\cdot(-1)\right)=\frac{6}{6}=1$$
which confirms that $\chi$ is irreducible. Thus we can write down the character table of $S_3$ as

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi&2&0&-1\end{array}$$
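The arithmetic above is easy to script. Here is a minimal sketch (my own, with my own labels for the classes) that computes the two multiplicities, subtracts off the trivial copy, and checks that the residual character has unit norm:

```python
from fractions import Fraction

# Conjugacy classes of S_3 with their sizes.
sizes = {'e': 1, '(1 2)': 3, '(1 2 3)': 2}
order = sum(sizes.values())   # |S_3| = 6

triv     = {'e': 1, '(1 2)': 1,  '(1 2 3)': 1}
sgn      = {'e': 1, '(1 2)': -1, '(1 2 3)': 1}
defining = {'e': 3, '(1 2)': 1,  '(1 2 3)': 0}

def inner(chi, psi):
    # class-weighted inner product; these characters are real-valued,
    # so no complex conjugation is needed
    return sum(Fraction(sizes[K]) * chi[K] * psi[K] for K in sizes) / order

m_triv = inner(triv, defining)   # multiplicity of the trivial character
m_sgn  = inner(sgn, defining)    # multiplicity of the signum character

# Subtract off the trivial copy to get the residual character.
residual = {K: defining[K] - m_triv * triv[K] for K in sizes}
```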
So, why is this just part 1? Well, we've calculated another character, but we still haven't actually shown that there's any irrep that gives rise to this character. We have a pretty good idea what it should be, but next time we'll actually show that it exists, and that it really does have the character $\chi$.
We have some immediate consequences of the orthonormality relations.
First of all, since irreducible characters are orthonormal, any collection of them forms an orthonormal basis of the subspace it spans. Of course, whatever subspace this is, it has to fit within the space of class functions, and so it can't have any more basis elements than the dimension of this larger space. That is, there can be at most as many irreducible characters as there are conjugacy classes in $G$. And so we know that the character table must have only finitely many rows. For instance, since $S_3$ has three conjugacy classes, it can have at most three irreducible characters. We know two already, so there's only room for one more, if there are any more at all.
For something a little more concrete, let $V^{(1)},\dots,V^{(k)}$ be a collection of irreps with corresponding characters $\chi^{(1)},\dots,\chi^{(k)}$. Then the representation

$$V=\bigoplus_{i=1}^km_iV^{(i)}$$

has character

$$\chi=\sum_{i=1}^km_i\chi^{(i)}$$
That is, direct sums of representations correspond to sums of characters. This is just the tip of a far-reaching correspondence between the high-level properties of the category of representations and the low-level properties of the algebra of characters.
Anyway, proving this relation is pretty straightforward. If $X_1$ and $X_2$ are two matrix representations, then their direct sum is

$$\left(X_1\oplus X_2\right)(g)=\begin{pmatrix}X_1(g)&0\\0&X_2(g)\end{pmatrix}$$
It should be clear that the trace of the direct sum on the left is the sum of the traces on the right. This is all we need, since we can just split off one irreducible component after another to turn the direct sum on one side into a sum on the other.
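The block-diagonal picture can be sketched in a few lines of code (a toy illustration of my own, not tied to any particular representation):

```python
def direct_sum(A, B):
    # Block-diagonal matrix with A in the upper-left and B in the lower-right.
    n, m = len(A), len(B)
    out = [[0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            out[i][j] = A[i][j]
    for i in range(m):
        for j in range(m):
            out[n + i][n + j] = B[i][j]
    return out

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

# Two arbitrary square matrices standing in for X_1(g) and X_2(g).
A = [[1, 2], [3, 4]]
B = [[5, 6, 0], [7, 8, 0], [0, 0, 9]]
```

Since the off-diagonal blocks contribute nothing to the diagonal, the trace of the direct sum is exactly the sum of the traces.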
Next we have a way of reading off the coefficients. Let $V$ be the same representation from above, with the same character $\chi$. I say that the multiplicity $m_i=\left\langle\chi^{(i)},\chi\right\rangle$. Indeed, we can easily calculate

$$\left\langle\chi^{(i)},\chi\right\rangle=\left\langle\chi^{(i)},\sum_{j=1}^km_j\chi^{(j)}\right\rangle=\sum_{j=1}^km_j\left\langle\chi^{(i)},\chi^{(j)}\right\rangle=\sum_{j=1}^km_j\delta_{ij}=m_i$$
Notice that this is very similar to the result we showed at the end of calculating the dimensions of spaces of morphisms. This is not a coincidence.
More generally, if $V$ is the representation from above and $W$ is another representation that decomposes as

$$W=\bigoplus_{j=1}^kn_jV^{(j)}$$

then the character of $W$ is

$$\psi=\sum_{j=1}^kn_j\chi^{(j)}$$

and we calculate the inner product

$$\langle\chi,\psi\rangle=\sum_{i=1}^k\sum_{j=1}^km_in_j\left\langle\chi^{(i)},\chi^{(j)}\right\rangle=\sum_{i=1}^km_in_i$$

In particular, we see that

$$\langle\chi,\chi\rangle=\sum_{i=1}^km_i^2$$
We see that in all these cases $\langle\chi,\psi\rangle=\dim\hom_G(V,W)$. Just like sums of characters correspond to direct sums of representations, inner products of characters correspond to $\hom$-spaces between representations. We just have to pass from plain vector spaces to their dimensions when we pass from representations to their characters. Of course, this isn't much of a stretch, since we saw that the character of a representation includes information about the dimension: $\chi(e)=\dim V$.
This goes even further: what happens when we swap the arguments to an inner product? We get the complex conjugate: $\langle\psi,\chi\rangle=\overline{\langle\chi,\psi\rangle}$. What happens when we swap the arguments to the $\hom$ functor? We get the dual space: $\hom_G(W,V)\cong\hom_G(V,W)^*$. Complex conjugation corresponds to passing to the dual space.
Finally, the character $\chi$ is irreducible if and only if $\langle\chi,\chi\rangle=1$. Indeed, if $V$ is itself irreducible then our decomposition only involves one nonzero coefficient, which is a $1$. The formula we just computed gives

$$\langle\chi,\chi\rangle=\sum_{i=1}^km_i^2=1$$

Conversely, if this formula holds then we have to write $1$ as the sum of squares $\sum_im_i^2$. The only possibility is for all but one of the $m_i$ to be $0$, and the remaining one to be $1$, in which case $V\cong V^{(i)}$, and $V$ is thus irreducible.
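We can sanity-check the formula $\langle\chi,\chi\rangle=\sum_im_i^2$ on the characters of $S_3$. This is a sketch of my own; the "combo" character is a made-up example, two copies of the trivial character plus three of the signum:

```python
from fractions import Fraction

# Class sizes and irreducible characters of S_3.
sizes = {'e': 1, '(1 2)': 3, '(1 2 3)': 2}
order = 6
triv = {'e': 1, '(1 2)': 1,  '(1 2 3)': 1}
sgn  = {'e': 1, '(1 2)': -1, '(1 2 3)': 1}
chi2 = {'e': 2, '(1 2)': 0,  '(1 2 3)': -1}   # the 2-dimensional irreducible character

def inner(chi, psi):
    # real-valued characters, so no conjugation needed
    return sum(Fraction(sizes[K]) * chi[K] * psi[K] for K in sizes) / order

# The defining character decomposes as triv + chi2: norm 1^2 + 1^2 = 2.
defining = {K: triv[K] + chi2[K] for K in sizes}

# A hypothetical reducible character: 2 copies of triv plus 3 copies of sgn.
combo = {K: 2 * triv[K] + 3 * sgn[K] for K in sizes}
```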
Today we prove the assertion that we made last time: that irreducible characters are orthonormal. That is, if $V$ and $W$ are irreducible $G$-modules with characters $\chi$ and $\psi$, respectively, then their inner product $\langle\chi,\psi\rangle$ is $1$ if $V$ and $W$ are equivalent and $0$ otherwise. Strap in, 'cause it's a bit of a long one.
Let's pick a basis of each of $V$ and $W$ to get matrix representations $X$ and $Y$ of degrees $m$ and $n$, respectively. Further, let $M$ be any $m\times n$ matrix with entries $M_{kl}$. Now we can construct the matrix

$$T=\frac{1}{|G|}\sum_{g\in G}X(g^{-1})MY(g)$$

Now I claim that $T$ intertwines the matrix representations $X$ and $Y$. Indeed, for any $h\in G$ we calculate

$$X(h)T=\frac{1}{|G|}\sum_{g\in G}X(hg^{-1})MY(g)=\frac{1}{|G|}\sum_{g'\in G}X(g'^{-1})MY(g'h)=TY(h)$$
At this point, Schur's lemma kicks in to tell us that if $V\not\cong W$ then $T$ is the zero matrix, while if $V\cong W$ then $T$ is a scalar times the identity matrix.
First we consider the case where $V\not\cong W$ (equivalently, $X\not\cong Y$). Since $T$ is the zero matrix, each entry must be zero. In particular, we get the equations

$$0=T_{ij}=\sum_{k=1}^m\sum_{l=1}^n\left(\frac{1}{|G|}\sum_{g\in G}X_{ik}(g^{-1})Y_{lj}(g)\right)M_{kl}$$

But the left side isn't just any expression, it's a linear function of the entries $M_{kl}$. Since this equation must hold no matter what the $M_{kl}$ are, the coefficients of the function must all be zero! That is, we have the equations

$$\frac{1}{|G|}\sum_{g\in G}X_{ik}(g^{-1})Y_{lj}(g)=0$$
But now we can recognize the left-hand side as our alternate expression for the inner product of characters of $G$, applied to the matrix-element functions $X_{ik}$ and $Y_{lj}$. If the functions $X_{ik}$ and $Y_{lj}$ were characters, this would be an inner product, but in general we'll write

$$\langle f_1,f_2\rangle=\frac{1}{|G|}\sum_{g\in G}f_1(g^{-1})f_2(g)$$
Okay, but we actually do have some characters floating around: $\chi$ and $\psi$. And we can write them out in terms of these matrix elements as

$$\chi(g)=\sum_{i=1}^mX_{ii}(g)\qquad\psi(g)=\sum_{j=1}^nY_{jj}(g)$$

And now we can use the fact that for characters our two bilinear forms are the same to calculate

$$\langle\chi,\psi\rangle=\frac{1}{|G|}\sum_{g\in G}\chi(g^{-1})\psi(g)=\sum_{i=1}^m\sum_{j=1}^n\frac{1}{|G|}\sum_{g\in G}X_{ii}(g^{-1})Y_{jj}(g)=0$$
So there: if $V$ and $W$ are inequivalent irreps, then their characters are orthogonal!
Now if $V\cong W$ we can pick bases so that the matrix representations are both $X$. Schur's lemma tells us that there is some scalar $c$ so that $T=cI_m$. Our argument above goes through just the same as before to show that

$$\frac{1}{|G|}\sum_{g\in G}X_{ik}(g^{-1})X_{lj}(g)=0$$

so long as $i\neq j$. To handle the case where $i=j$, we consider our equation

$$cI_m=T=\frac{1}{|G|}\sum_{g\in G}X(g^{-1})MX(g)$$
We take the trace of both sides:

$$cm=\mathrm{tr}\left(cI_m\right)=\frac{1}{|G|}\sum_{g\in G}\mathrm{tr}\left(X(g^{-1})MX(g)\right)=\frac{1}{|G|}\sum_{g\in G}\mathrm{tr}(M)=\mathrm{tr}(M)$$

and thus we conclude that $c=\frac{1}{m}\mathrm{tr}(M)$. And so we can write

$$\sum_{k=1}^m\sum_{l=1}^m\left(\frac{1}{|G|}\sum_{g\in G}X_{ik}(g^{-1})X_{lj}(g)\right)M_{kl}=T_{ij}=\frac{1}{m}\delta_{ij}\mathrm{tr}(M)=\sum_{k=1}^m\sum_{l=1}^m\frac{1}{m}\delta_{ij}\delta_{kl}M_{kl}$$

Equating coefficients on both sides we find

$$\frac{1}{|G|}\sum_{g\in G}X_{ik}(g^{-1})X_{lj}(g)=\frac{1}{m}\delta_{ij}\delta_{kl}$$
And finally we can calculate

$$\langle\chi,\chi\rangle=\frac{1}{|G|}\sum_{g\in G}\chi(g^{-1})\chi(g)=\sum_{i=1}^m\sum_{k=1}^m\frac{1}{|G|}\sum_{g\in G}X_{ii}(g^{-1})X_{kk}(g)=\sum_{i=1}^m\sum_{k=1}^m\frac{1}{m}\delta_{ik}=\sum_{i=1}^m\frac{1}{m}=1$$
exactly as we asserted.
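We can verify the orthonormality relations numerically for $S_3$, using the conjugation-free form of the pairing from above. This is a sketch of my own; it uses the fact (from the earlier posts) that the 2-dimensional irreducible character equals the fixed-point count minus one:

```python
from fractions import Fraction
from itertools import permutations

group = list(permutations(range(3)))

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def sign(p):
    # parity via inversion count
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def fix(p):
    return sum(1 for i, pi in enumerate(p) if i == pi)

# The three irreducible characters of S_3, as functions on group elements.
chars = {'triv': lambda p: 1,
         'sgn': sign,
         'chi': lambda p: fix(p) - 1}   # character of the 2-dim irrep

def inner(f1, f2):
    # the conjugation-free form: (1/|G|) sum_g f1(g^{-1}) f2(g)
    return sum(Fraction(f1(inverse(g)) * f2(g)) for g in group) / len(group)
```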
Incidentally, this establishes what we suspected when setting up the character table: if $V$ and $W$ are inequivalent irreps then their characters $\chi$ and $\psi$ must be unequal. Indeed, since they're inequivalent we must have $\langle\chi,\psi\rangle=0$. But if the characters were the same we would have to have $\langle\chi,\psi\rangle=\langle\chi,\chi\rangle=1$. So since inequivalent irreps have unequal characters we can replace all the irreps labeling rows in the character table by their corresponding irreducible characters.
When we compute the inner product of two characters, we can group the sum by conjugacy classes:

$$\langle\chi,\psi\rangle=\frac{1}{|G|}\sum_{g\in G}\overline{\chi(g)}\psi(g)=\frac{1}{|G|}\sum_K|K|\overline{\chi(K)}\psi(K)$$

where our sum runs over all conjugacy classes $K$, and where $\chi(K)$ is the common value $\chi(g)$ for all $g$ in the conjugacy class $K$ (and similarly for $\psi(K)$). The idea is that every $g$ in a given conjugacy class gives the same summand. Instead of adding it up over and over again, we just multiply by the number of elements in the class.
As an example, consider again the start of the character table of $S_3$:

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\end{array}$$

Here we index the rows by irreducible characters, and the columns by representatives of the conjugacy classes, which have sizes $1$, $3$, and $2$, respectively. We can calculate inner products of rows by multiplying corresponding entries, but we don't just sum up these products; we multiply each one by the size of the conjugacy class, and at the end we divide the whole thing by the size of the whole group:

$$\begin{aligned}\left\langle\chi^\mathrm{triv},\chi^\mathrm{triv}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot1+3\cdot1\cdot1+2\cdot1\cdot1\right)=1\\\left\langle\chi^\mathrm{sgn},\chi^\mathrm{sgn}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot1+3\cdot(-1)\cdot(-1)+2\cdot1\cdot1\right)=1\\\left\langle\chi^\mathrm{triv},\chi^\mathrm{sgn}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot1+3\cdot1\cdot(-1)+2\cdot1\cdot1\right)=0\end{aligned}$$
We find that when we take the inner product of each character with itself we get $1$, while taking the inner product of the two different characters gives $0$. This is no coincidence; for any finite group, irreducible characters are orthonormal. That is, distinct irreducible characters have inner product $0$, while any irreducible character has inner product $1$ with itself. This is what we will prove next time.
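The class-weighted computation above is short enough to script directly (a minimal sketch of my own, with my own labels for the classes):

```python
from fractions import Fraction

# conjugacy classes of S_3 and their sizes
sizes = {'e': 1, '(1 2)': 3, '(1 2 3)': 2}
order = 6

triv = {'e': 1, '(1 2)': 1,  '(1 2 3)': 1}
sgn  = {'e': 1, '(1 2)': -1, '(1 2 3)': 1}

def inner(chi, psi):
    # weight each product of entries by the class size, then divide by |G|
    return sum(Fraction(sizes[K]) * chi[K] * psi[K] for K in sizes) / order
```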
Given a finite group $G$, Maschke's theorem tells us that every $G$-module is completely reducible. That is, we can write any such module $V$ as the direct sum of irreducible representations:

$$V\cong\bigoplus_{i=1}^km_iV^{(i)}$$
Thus the irreducible representations are the most important ones to understand. And so we’re particularly interested in their characters, which we call “irreducible characters”.
Of course an irreducible character — like all characters — is a class function. We can describe it by giving its values on each conjugacy class. And so we lay out the "character table". This is an array whose rows are indexed by inequivalent irreducible representations $V^{(i)}$, and whose columns are indexed by conjugacy classes $K$. The row indexed by $V^{(i)}$ describes the corresponding irreducible character $\chi^{(i)}$. If $g$ is a representative of the conjugacy class $K$, then the entry in the column indexed by $K$ is $\chi^{(i)}(g)$. That is, the character table looks like

$$\begin{array}{c|ccc}&K_1&K_2&\cdots\\\hline V^{(1)}&\chi^{(1)}(g_1)&\chi^{(1)}(g_2)&\cdots\\V^{(2)}&\chi^{(2)}(g_1)&\chi^{(2)}(g_2)&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}$$

where $g_j$ is a representative of the class $K_j$.
By convention, the first row corresponds to the trivial representation, and the first column corresponds to the conjugacy class of the identity element. We know that the trivial representation sends every group element to the $1\times1$ identity matrix, whose trace is $1$. We also know that every character's value on the identity element is the degree of the corresponding representation. We can slightly refine our first picture to sketch the character table like so:

$$\begin{array}{c|ccc}&\{e\}&K_2&\cdots\\\hline V^\mathrm{triv}&1&1&\cdots\\V^{(2)}&\deg V^{(2)}&\chi^{(2)}(g_2)&\cdots\\\vdots&\vdots&\vdots&\ddots\end{array}$$
We have no reason to believe (yet) that the table is finite. Since is a finite group there can be only finitely many conjugacy classes, and thus only finitely many columns, but as far as we can tell there may be infinitely many inequivalent irreps, and thus infinitely many rows. Further, we have no reason to believe that the rows are all distinct. Indeed, we know that equivalent representations have equal characters — they’re related through conjugation by an invertible intertwinor — but we don’t know for sure that inequivalent representations must have distinct characters.
As an example, we can start writing down the character table of $S_3$. We know that conjugacy classes in symmetric groups correspond to cycle types, and so we can write down all three conjugacy classes easily:

$$\{e\}\qquad\{(1\,2),(1\,3),(2\,3)\}\qquad\{(1\,2\,3),(1\,3\,2)\}$$

We know of two irreps offhand — the trivial representation and the signum representation — and so we'll start with those and leave the table incomplete below that:

$$\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\end{array}$$
Let's take $V$ to be a permutation representation coming from a group action on a finite set, which we'll call $S$. It's straightforward to calculate the character of this representation.

Indeed, the standard basis $\{e_s\}_{s\in S}$ that comes from the elements of $S$ gives us a nice matrix representation:

$$\rho(g)e_s=e_{g(s)}$$

On the left is the matrix of the action on $\mathbb{C}S$, while on the right it's the group action on the set $S$. Hopefully this won't be too confusing. The matrix entry in row $t$ and column $s$ is $1$ if $g$ sends $s$ to $t$, and it's $0$ otherwise:

$$\rho_{ts}(g)=\begin{cases}1&\text{if }g(s)=t\\0&\text{otherwise}\end{cases}$$
So what's the character $\chi$? It's the trace of the matrix $\rho(g)$, which is the sum of all the diagonal elements:

$$\chi(g)=\mathrm{tr}\left(\rho(g)\right)=\sum_{s\in S}\rho_{ss}(g)$$

This sum counts up $1$ for each point $s$ that $g$ sends back to itself, and $0$ otherwise. That is, it counts the number of fixed points of the permutation $g$:

$$\chi(g)=\left|\{s\in S\mid g(s)=s\}\right|$$
As a special case, we can consider the defining representation of the symmetric group $S_n$. The character $\chi^\mathrm{def}$ counts the number of fixed points of any given permutation. For instance, in the case of $S_3$ we calculate:

$$\begin{aligned}\chi^\mathrm{def}(e)&=3\\\chi^\mathrm{def}\left((1\,2)\right)=\chi^\mathrm{def}\left((1\,3)\right)=\chi^\mathrm{def}\left((2\,3)\right)&=1\\\chi^\mathrm{def}\left((1\,2\,3)\right)=\chi^\mathrm{def}\left((1\,3\,2)\right)&=0\end{aligned}$$

In particular, the character takes the value $3$ on the identity element $e$, and the degree of the representation is $3$ as well. This is no coincidence; $\chi(e)$ will always be the degree of the representation in question, since any matrix representation of degree $d$ must send $e$ to the $d\times d$ identity matrix, whose trace is $d$. This holds both for permutation representations and for any other representation.
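The fixed-point description of the character is easy to check by machine. Here is a short sketch of my own, run over all of $S_4$: it compares the trace of each permutation matrix against a direct count of fixed points.

```python
from itertools import permutations

def perm_matrix(p):
    # Column s holds e_{p(s)}: entry (t, s) is 1 exactly when p sends s to t.
    n = len(p)
    return [[1 if p[s] == t else 0 for s in range(n)] for t in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def fixed_points(p):
    return sum(1 for s, t in enumerate(p) if s == t)

# The character of a permutation representation counts fixed points.
s4 = list(permutations(range(4)))
```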
Let's take a $G$-module $V$, with character $\chi$. Before, we've used Maschke's theorem to tell us that all $G$-modules are completely reducible, but remember what it really tells us: that there is some $G$-invariant inner product on $V$ (we'll have to keep the two inner products straight by which vector space they apply to). With respect to the inner product on $V$, every transformation $\rho(g)$ with $g\in G$ is unitary, and if we pick an orthonormal basis to get a matrix representation $X$, each of the matrices $X(g)$ will be unitary. That is:

$$X(g)^*=X(g)^{-1}=X(g^{-1})$$
So what does this mean for the character $\chi$? We can calculate

$$\overline{\chi(g)}=\overline{\mathrm{tr}\left(X(g)\right)}=\mathrm{tr}\left(X(g)^*\right)=\mathrm{tr}\left(X(g^{-1})\right)=\chi(g^{-1})$$

And so we can rewrite our inner product

$$\langle\chi,\psi\rangle=\frac{1}{|G|}\sum_{g\in G}\overline{\chi(g)}\psi(g)=\frac{1}{|G|}\sum_{g\in G}\chi(g^{-1})\psi(g)$$
The nice thing about this formula is that it doesn't depend on complex conjugation, and so it would be useful over any base field (if we were using other base fields).
The catch is that for class functions in general we have no reason to believe that this is an inner product. Indeed, if $h$ is some element that isn't conjugate to its inverse, then we can define a class function $f$ that takes the value $1$ on the class $K$ of $h$, the value $-1$ on the class $K^{-1}$ of $h^{-1}$, and $0$ elsewhere. Our new formula gives

$$\frac{1}{|G|}\sum_{g\in G}f(g^{-1})f(g)=\frac{1}{|G|}\left(|K|\cdot(-1)\cdot1+|K^{-1}|\cdot1\cdot(-1)\right)=-\frac{2|K|}{|G|}<0$$

so this bilinear form isn't positive-definite.
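For concreteness, here is a tiny sketch of my own using the smallest example I know of: the cyclic group $\mathbb{Z}_3$ is abelian, so every conjugacy class is a singleton, and the element $1$ is not conjugate to its inverse $2$. A class function supported on those two classes makes the bilinear form go negative.

```python
from fractions import Fraction

# Z_3 = {0, 1, 2} under addition mod 3; the identity is 0,
# and the inverse of g is (-g) mod 3.
group = [0, 1, 2]

# class function: 1 on the class of 1, -1 on the class of its inverse, 0 at e
f = {0: 0, 1: 1, 2: -1}

def form(f1, f2):
    # the conjugation-free bilinear form (1/|G|) sum_g f1(g^{-1}) f2(g)
    return sum(Fraction(f1[(-g) % 3] * f2[g]) for g in group) / len(group)
```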
Our first observation about characters takes our work from last time and spins it in a new direction.
Let's say $g$ and $h$ are conjugate elements of the group $G$. That is, there is some $x\in G$ so that $h=xgx^{-1}$. I say that for any $G$-module $(V,\rho)$ with character $\chi$, the character takes the same value on both $g$ and $h$. Indeed, we find that

$$\chi(h)=\mathrm{tr}\left(\rho(xgx^{-1})\right)=\mathrm{tr}\left(\rho(x)\rho(g)\rho(x)^{-1}\right)=\mathrm{tr}\left(\rho(g)\right)=\chi(g)$$

since the trace is invariant under conjugation.
We see that $\chi$ is not so much a function on the group $G$ as it is a function on the set of conjugacy classes of $G$, since it takes the same value for any two elements in the same conjugacy class. We call such a complex-valued function on a group a "class function". Clearly class functions form a vector space, and this vector space comes with a very nice basis: given a conjugacy class $K$ we define $f_K$ to be the function that takes the value $1$ for every element of $K$ and the value $0$ otherwise. Any class function is a linear combination of these $f_K$, and so we conclude that the dimension of the space of class functions on $G$ is equal to the number of conjugacy classes in $G$.
The basis isn't orthonormal, but it is orthogonal. Indeed, we can compute:

$$\langle f_K,f_{K'}\rangle=\frac{1}{|G|}\sum_{g\in G}\overline{f_K(g)}f_{K'}(g)=\frac{1}{|G|}\sum_{g\in K}f_{K'}(g)=\delta_{K,K'}\frac{|K|}{|G|}$$
Incidentally, $\frac{|K|}{|G|}$ is the reciprocal of the size of the centralizer of any $k\in K$, since $|K|\cdot|Z_G(k)|=|G|$. Thus if we pick a representative $k_K$ in each class $K$ we can write down the orthonormal basis $\left\{\sqrt{|Z_G(k_K)|}\,f_K\right\}$.
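As a quick check of these norms — a sketch of my own, not from the post — we can compute $\langle f_K,f_K\rangle$ for each conjugacy class of $S_3$ and confirm that the values $\frac{1}{6},\frac{1}{2},\frac{1}{3}$ are the reciprocals of the centralizer sizes $6$, $2$, $3$:

```python
from fractions import Fraction
from itertools import permutations

group = list(permutations(range(3)))

def cycle_type(p):
    # conjugacy classes of a symmetric group are labeled by cycle types
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                L += 1
            lengths.append(L)
    return tuple(sorted(lengths))

classes = sorted(set(cycle_type(p) for p in group))

def indicator(K):
    # the class function f_K: 1 on the class K, 0 elsewhere
    return lambda p: 1 if cycle_type(p) == K else 0

def inner(f1, f2):
    # these functions are real-valued, so no conjugation is needed
    return sum(Fraction(f1(g) * f2(g)) for g in group) / len(group)

norms = {K: inner(indicator(K), indicator(K)) for K in classes}
```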