Our first observation about characters takes our work from last time and spins it in a new direction.
Let’s say $g$ and $h$ are conjugate elements of the group $G$. That is, there is some $k\in G$ so that $h=kgk^{-1}$. I say that for any $G$-module $(V,\rho)$ with character $\chi$, the character takes the same value on both $g$ and $h$. Indeed, we find that

$$\chi(h)=\mathrm{tr}\left(\rho(kgk^{-1})\right)=\mathrm{tr}\left(\rho(k)\rho(g)\rho(k)^{-1}\right)=\mathrm{tr}\left(\rho(g)\right)=\chi(g)$$
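Since all we used is the multiplicative property of the representation, this is easy to check by machine. Here’s a quick numerical sketch in Python with numpy — the permutation representation of $S_3$ is my own choice of example, not anything essential:

```python
import numpy as np
from itertools import permutations

# The defining (permutation) representation of S_3: each permutation p of
# {0,1,2} is sent to the 3x3 matrix permuting the standard basis vectors.
def rho(p):
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

group = list(permutations(range(3)))

# chi(g) = tr(rho(g)); check chi(k g k^{-1}) = chi(g) for every g and k.
for g in group:
    for k in group:
        conj = rho(k) @ rho(g) @ np.linalg.inv(rho(k))
        assert np.isclose(np.trace(conj), np.trace(rho(g)))
print("characters agree on conjugate elements")
```

Note that the trace of a permutation matrix counts fixed points, so the character here is constant on each cycle type, exactly as it should be.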
We see that $\chi$ is not so much a function on the group $G$ as it is a function on the set of conjugacy classes of $G$, since it takes the same value for any two elements in the same conjugacy class. We call such a complex-valued function on a group a “class function”. Clearly they form a vector space, and this vector space comes with a very nice basis: given a conjugacy class $K$ we define $f_K$ to be the function that takes the value $1$ for every element of $K$ and the value $0$ otherwise. Any class function is a linear combination of these $f_K$, and so we conclude that the dimension of the space of class functions on $G$ is equal to the number of conjugacy classes in $G$.
This basis isn’t orthonormal, but it is orthogonal. Indeed, we can compute:

$$\langle f_K,f_L\rangle=\frac{1}{|G|}\sum_{g\in G}\overline{f_K(g)}f_L(g)=\delta_{K,L}\frac{|K|}{|G|}$$
Incidentally, when $K=L$ this is the reciprocal of the size of the centralizer of any $g\in K$. Thus if we pick a representative $g_K$ in each $K$ we can write down the orthonormal basis $\left\{\sqrt{|Z_G(g_K)|}\,f_K\right\}$.
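We can check all of this bookkeeping on $S_3$, whose conjugacy classes are the identity, the three transpositions, and the two $3$-cycles. Here’s a pure-Python sketch (the helper names are mine):

```python
from itertools import permutations

group = list(permutations(range(3)))

def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

# Conjugacy class of g: { k g k^{-1} : k in G }
def conj_class(g):
    return frozenset(compose(compose(k, g), inverse(k)) for k in group)

classes = sorted({conj_class(g) for g in group}, key=len)

# f_K is 1 on K and 0 elsewhere; <f_K, f_L> = (1/|G|) sum_g f_K(g) f_L(g)
def inner(K, L):
    return sum((g in K) * (g in L) for g in group) / len(group)

# Distinct classes are orthogonal, and <f_K, f_K> = |K| / |G|
for K in classes:
    for L in classes:
        expected = len(K) / len(group) if K == L else 0.0
        assert abs(inner(K, L) - expected) < 1e-12

# |G| / |K| is the size of the centralizer of any element of K
centralizer_sizes = [len(group) // len(K) for K in classes]
print(centralizer_sizes)   # → [6, 3, 2]
```

The classes here have sizes $1$, $2$, and $3$ (identity, $3$-cycles, transpositions), so the centralizer sizes are $6$, $3$, and $2$.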
Now we introduce a very useful tool in the study of group representations: the “character” of a representation. And it’s almost effortless to define: the character $\chi$ of a matrix representation $X$ of a group $G$ is a complex-valued function on $G$ defined by

$$\chi(g)=\mathrm{tr}\left(X(g)\right)$$
That is, the character is “the trace of the representation”. But why this is interesting is almost completely opaque at this point. I’m still not entirely sure why this formula has so many fabulous properties.
First of all, we need to recall something about the trace: it satisfies the “cyclic property”. That is, given an $m\times n$ matrix $A$ and an $n\times m$ matrix $B$, we have

$$\mathrm{tr}(AB)=\mathrm{tr}(BA)$$
Indeed, if we write out the matrices in components we find

$$(AB)_{ij}=\sum_{k=1}^na_{ik}b_{kj}\qquad(BA)_{kl}=\sum_{i=1}^mb_{ki}a_{il}$$
Then since the trace is the sum of the diagonal elements we calculate

$$\mathrm{tr}(AB)=\sum_{i=1}^m\sum_{k=1}^na_{ik}b_{ki}\qquad\mathrm{tr}(BA)=\sum_{k=1}^n\sum_{i=1}^mb_{ki}a_{ik}$$
but these are exactly the same!
We have to be careful, though, that we don’t take this to mean that we can arbitrarily reorder matrices inside the trace. If $A$, $B$, and $C$ are all $n\times n$ matrices, we can conclude that

$$\mathrm{tr}(ABC)=\mathrm{tr}(BCA)=\mathrm{tr}(CAB)$$

$$\mathrm{tr}(ACB)=\mathrm{tr}(CBA)=\mathrm{tr}(BAC)$$
but we cannot conclude in general that any of the traces on the upper line are equal to any of the traces on the lower line. We can “cycle” matrices around inside the trace, but not rearrange them arbitrarily.
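Both of these facts are easy to test numerically. Here’s a quick numpy sketch (the random matrices are my own choice; for generic matrices the “bad” rearrangement fails):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))    # 2x5
B = rng.normal(size=(5, 2))    # 5x2

# The cyclic property holds even for non-square factors: tr(AB) = tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# For three square matrices, cycling is fine...
X, Y, Z = rng.normal(size=(3, 4, 4))
assert np.isclose(np.trace(X @ Y @ Z), np.trace(Y @ Z @ X))
assert np.isclose(np.trace(X @ Y @ Z), np.trace(Z @ X @ Y))

# ...but an arbitrary transposition of factors generally is not:
print(np.isclose(np.trace(X @ Y @ Z), np.trace(X @ Z @ Y)))  # almost surely False
```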
So, what good is this? Well, if $S$ is an invertible matrix and $A$ is any matrix, then we find that $\mathrm{tr}\left(SAS^{-1}\right)=\mathrm{tr}\left(AS^{-1}S\right)=\mathrm{tr}(A)$. If $S$ is a change of basis matrix, then this tells us that the trace only depends on the linear transformation $A$ represents, and not on the particular matrix. In particular, if $X$ and $Y$ are two equivalent matrix representations then there is some intertwining matrix $S$ so that $Y(g)=SX(g)S^{-1}$ for all $g\in G$. The characters of $X$ and $Y$ are therefore equal.
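Again, this takes one line to confirm with numpy (a random matrix $S$ is invertible with probability $1$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S = rng.normal(size=(4, 4))   # generic, hence invertible

# tr(S A S^{-1}) = tr(A S^{-1} S) = tr(A): the trace is basis-independent
assert np.isclose(np.trace(S @ A @ np.linalg.inv(S)), np.trace(A))
print("trace is similarity-invariant")
```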
If $(V,\rho)$ is a $G$-module, then picking any basis for $V$ gives a matrix $X(g)$ representing each linear transformation $\rho(g)$. The previous paragraph shows that which particular matrix representation we pick doesn’t matter, since they all give us the same character $\chi$. And so we can define the character of a $G$-module to be the character of any corresponding matrix representation.
Again, sorry for the delay but I was eager to polish something up for my real job this morning.
There’s something interesting to notice in our formulæ for the dimensions of spaces of intertwinors: they’re symmetric between the two representations involved. Indeed, let’s take two $G$-modules:

$$V\cong\bigoplus_im_iV_i\qquad W\cong\bigoplus_in_iV_i$$

where the $V_i$ are pairwise-inequivalent irreducible $G$-modules with degrees $d_i$. We calculate the dimensions of the $\hom$-spaces going each way:

$$\dim\hom_G(V,W)=\sum_im_in_i\qquad\dim\hom_G(W,V)=\sum_in_im_i$$
but these are equal! So does this mean these spaces are isomorphic?
Well, yes. Any two vector spaces having the same dimension are isomorphic, but they’re not “naturally” isomorphic. Roughly, there’s no universal method of giving an explicit isomorphism, and so it’s regarded as sort of coincidental. But there’s something else around that’s not coincidental.
It turns out that these spaces are naturally isomorphic to each other’s dual spaces. That is, for any $G$-modules $V$ and $W$ we have an isomorphism

$$\hom_G(V,W)\cong\hom_G(W,V)^*$$
Luckily, we already know that their dimensions are equal, so the rank-nullity theorem tells us all we need is to find an injective linear map from one to the other.
So, let’s take an intertwinor $f\in\hom_G(V,W)$ and use it to build a linear functional on $\hom_G(W,V)$. For any intertwinor $g\in\hom_G(W,V)$ we define

$$\langle f,g\rangle=\mathrm{tr}(f\circ g)$$

where $\mathrm{tr}$ is the trace of an endomorphism — given a matrix, it’s the sum of the diagonal entries, and here $f\circ g$ is an endomorphism of $W$. Since the composition of linear maps is linear in each variable, and the trace is a linear function, this is a linear functional as desired. It should also be clear that the construction $f\mapsto\langle f,\cdot\rangle$ is itself a linear map.
Now, we must show that this map is injective. That is, for no nonzero $f$ do we find $\langle f,\cdot\rangle=0$. This will follow if we can find, for every nonzero $f$, at least one $g$ so that $\langle f,g\rangle\neq0$. To do so, we pick a basis for each irreducible representation that shows up in either $V$ or $W$ so we can replace $f$ and $g$ with matrix representations. Now we can write

$$f=\bigoplus_i\left(F_i\otimes I_{d_i}\right)$$
where $F_i$ is an $n_i\times m_i$ complex matrix. To construct our $g$, we simply take the conjugate transpose of each of these matrices:

$$g=\bigoplus_i\left(F_i^*\otimes I_{d_i}\right)$$
where now $F_i^*$ is an $m_i\times n_i$ complex matrix, as desired. We multiply the two matrices:

$$f\circ g=\bigoplus_i\left(F_iF_i^*\otimes I_{d_i}\right)$$
and find that each $F_iF_i^*$ is a square matrix. Thus the trace of this composition is the sum of their traces, each weighted by the degree $d_i$ for the copies of the identity.
We’ve already seen that the composition of a linear transformation and its adjoint is self-adjoint and positive-definite. In terms of complex matrices, this tells us that the product of a matrix and its conjugate transpose is conjugate-symmetric and positive-definite. This means that it’s diagonalizable with all nonnegative real eigenvalues down the diagonal. And thus its trace is a nonnegative real number, and it can only be zero if the original matrix was zero.
The upshot, if you didn’t follow that, is that if $f\neq0$ we have a $g$ so that $\langle f,g\rangle=\mathrm{tr}(f\circ g)\neq0$. And thus the map is injective, as we asserted. Proving naturality is similar to proving it for the additivity of $\hom$-spaces, and you can work it out if you’re interested.
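If that went by quickly, here’s a numerical illustration of the key positivity fact with numpy (the matrix $F$ plays the role of one of the blocks above):

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))

P = F @ F.conj().T               # a matrix times its conjugate transpose
eigs = np.linalg.eigvalsh(P)     # P is Hermitian, so eigvalsh applies

assert np.all(eigs >= -1e-12)               # nonnegative real eigenvalues
assert np.isclose(np.trace(P).imag, 0.0)    # so the trace is real...
assert np.trace(P).real > 0                 # ...and positive, since F != 0

# In fact tr(F F*) is the squared Frobenius norm, so it vanishes only at F = 0
assert np.isclose(np.trace(P).real, np.sum(np.abs(F) ** 2))
print("trace of F F* is positive for nonzero F")
```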
Now that we know that $\hom$ spaces are additive, we’re all set to make a high-level approach to generalizing last week’s efforts. We’re not just going to deal with endomorphism algebras, but with all the $\hom$-spaces.
Given $G$-modules $V$ and $W$, Maschke’s theorem tells us that we can decompose our representations as

$$V\cong\bigoplus_im_iV_i\qquad W\cong\bigoplus_in_iV_i$$
where the $V_i$ are pairwise-inequivalent irreducible $G$-modules with degrees $d_i$. I’m including all the irreps that show up in either decomposition, so some of the coefficients $m_i$ or $n_i$ may well be zero. This is not a problem, since it just means direct-summing on a trivial module.
So let’s use additivity! We find

$$\hom_G(V,W)\cong\hom_G\left(\bigoplus_im_iV_i,\bigoplus_jn_jV_j\right)\cong\bigoplus_{i,j}\hom_G\left(m_iV_i,n_jV_j\right)$$
Now to calculate these summands, we can pick a basis for each irreducible $V_i$ and use the same sorts of methods we did to calculate commutant algebras. We find that if $i\neq j$ — so $V_i\not\cong V_j$ — then there are no nonzero $G$-morphisms at all, even if we include multiplicities. On the other hand, if $i=j$ we find that an intertwinor between $m_iV_i$ and $n_iV_i$ has the form $F_i\otimes I_{d_i}$, where $F_i$ is an $n_i\times m_i$ complex matrix. That is, as a vector space it’s isomorphic to the space of $n_i\times m_i$ matrices. Putting the summands together we conclude

$$\hom_G(V,W)\cong\bigoplus_i\mathrm{Mat}_{n_i\times m_i}(\mathbb{C})$$

and its dimension is

$$\dim\hom_G(V,W)=\sum_im_in_i$$
Notice that any $V_i$ for which $m_i=0$ or $n_i=0$ doesn’t count for anything.
As a special case, we consider the endomorphism algebra $\mathrm{End}_G(V)=\hom_G(V,V)$. This time we assume that none of the $m_i$ are zero. We find:

$$\mathrm{End}_G(V)\cong\bigoplus_i\mathrm{Mat}_{m_i}(\mathbb{C})$$
Just like before, we can calculate the center, which goes summand-by-summand. Each summand is (isomorphic to) a complete matrix algebra, so we know that its center is isomorphic to $\mathbb{C}$. Thus we find that the center of $\mathrm{End}_G(V)$ is the direct sum of one copy of $\mathbb{C}$ for each irrep showing up in $V$, and so has dimension equal to the number of distinct irreps in $V$.
As one last corollary, let $V=V_j$ be irreducible and let $W\cong\bigoplus_in_iV_i$ be any representation. Then we calculate the dimension of the $\hom$-space:

$$\dim\hom_G(V_j,W)=\sum_i\delta_{ij}n_i=n_j$$
That is, the dimension of the space of intertwinors is exactly the multiplicity $n_j$ of $V_j$ in the representation $W$.
Today I’d like to show that the space of homomorphisms between two $G$-modules is “additive”. That is, it satisfies the isomorphisms:

$$\hom_G(V_1\oplus V_2,W)\cong\hom_G(V_1,W)\oplus\hom_G(V_2,W)$$

$$\hom_G(V,W_1\oplus W_2)\cong\hom_G(V,W_1)\oplus\hom_G(V,W_2)$$
We should be careful here: the direct sums inside the $\hom$ are direct sums of $G$-modules, while those outside are direct sums of vector spaces.
The second of these is actually the easier. If $f:V\to W_1\oplus W_2$ is a $G$-morphism, then we can write it as $f=(f_1,f_2)$, where $f_1:V\to W_1$ and $f_2:V\to W_2$. Indeed, just take the projection $\pi_i:W_1\oplus W_2\to W_i$ and compose it with $f$ to get $f_i=\pi_i\circ f$. These projections are also $G$-morphisms, since $W_1$ and $W_2$ are $G$-submodules. Since every $f$ can be uniquely decomposed, we get a linear map $\hom_G(V,W_1\oplus W_2)\to\hom_G(V,W_1)\oplus\hom_G(V,W_2)$.
Then the general rules of direct sums tell us we can inject $W_1$ and $W_2$ back into $W_1\oplus W_2$ with $\iota_1$ and $\iota_2$, and write

$$f=\iota_1\circ f_1+\iota_2\circ f_2$$
Thus given any $G$-morphisms $f_1:V\to W_1$ and $f_2:V\to W_2$ we can reconstruct an $f:V\to W_1\oplus W_2$. This gives us a map in the other direction — $(f_1,f_2)\mapsto\iota_1\circ f_1+\iota_2\circ f_2$ — which is clearly the inverse of the first one, and thus establishes our isomorphism.
Now that we’ve established the second isomorphism, the first becomes clearer. Given a $G$-morphism $f:V_1\oplus V_2\to W$ we need to find morphisms $f_1:V_1\to W$ and $f_2:V_2\to W$. Before we composed with projections, so this time let’s compose with injections! Indeed, $f$ composes with $\iota_i:V_i\to V_1\oplus V_2$ to give $f_i=f\circ\iota_i$. On the other hand, given morphisms $f_1$ and $f_2$, we can use the projections $\pi_i:V_1\oplus V_2\to V_i$ and compose them with the $f_i$ to get two morphisms $f_i\circ\pi_i:V_1\oplus V_2\to W$. Adding them together gives a single morphism, and if the $f_i$ came from an $f$, then this reconstructs the original. Indeed:

$$f_1\circ\pi_1+f_2\circ\pi_2=f\circ\iota_1\circ\pi_1+f\circ\iota_2\circ\pi_2=f\circ\left(\iota_1\circ\pi_1+\iota_2\circ\pi_2\right)=f$$
And so the first isomorphism holds as well.
We should note that these are not just isomorphisms, but “natural” isomorphisms. That the construction $\hom_G(-,-)$ is a functor is clear, and it’s straightforward to verify that these isomorphisms are natural for those who are interested in the category-theoretic details.
We want to calculate the centers of commutant algebras. We will have use of the two easily-established equations:

$$\left(A_1\oplus A_2\right)\left(B_1\oplus B_2\right)=A_1B_1\oplus A_2B_2$$

$$\left(A_1\otimes A_2\right)\left(B_1\otimes B_2\right)=A_1B_1\otimes A_2B_2$$
where $A_1$, $A_2$, $B_1$, and $B_2$ are composable linear maps. In particular, this holds where $A_1$ and $B_1$ are matrices representing linear endomorphisms of $V_1$, and $A_2$ and $B_2$ are matrices representing linear endomorphisms of $V_2$.
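Both equations are easy to verify numerically. Here’s a sketch with numpy, where `direct_sum` is a little helper of my own for building block-diagonal matrices:

```python
import numpy as np

def direct_sum(A, B):
    # Block-diagonal matrix with A and B down the diagonal
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

rng = np.random.default_rng(3)
A1, B1 = rng.normal(size=(2, 3, 3))
A2, B2 = rng.normal(size=(2, 2, 2))

# (A1 ⊕ A2)(B1 ⊕ B2) = (A1 B1) ⊕ (A2 B2)
assert np.allclose(direct_sum(A1, A2) @ direct_sum(B1, B2),
                   direct_sum(A1 @ B1, A2 @ B2))

# (A1 ⊗ A2)(B1 ⊗ B2) = (A1 B1) ⊗ (A2 B2)
assert np.allclose(np.kron(A1, A2) @ np.kron(B1, B2),
                   np.kron(A1 @ B1, A2 @ B2))
print("direct-sum and Kronecker multiplication rules check out")
```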
Now let $X$ be a matrix representation and consider a central matrix $C$ in its commutant algebra. That is, for all matrices $D$ in the commutant algebra, we have

$$CD=DC$$
Let’s further assume that we can write

$$X\cong\bigoplus_{i=1}^km_iX^{(i)}$$
where each $X^{(i)}$ is an irreducible representation of degree $d_i$. Then we know that we can write

$$C=\bigoplus_{i=1}^k\left(C_i\otimes I_{d_i}\right)\qquad D=\bigoplus_{i=1}^k\left(D_i\otimes I_{d_i}\right)$$

where $C_i$ and $D_i$ are $m_i\times m_i$ complex matrices, with $D_i$ arbitrary.
Thus we calculate:

$$CD=\bigoplus_{i=1}^k\left(C_iD_i\otimes I_{d_i}\right)\qquad DC=\bigoplus_{i=1}^k\left(D_iC_i\otimes I_{d_i}\right)$$
This is only possible if for each $i$ we have $C_iD_i=D_iC_i$ for all $D_i$. But this means that $C_i$ is in the center of $\mathrm{Mat}_{m_i}(\mathbb{C})$, which implies that $C_i=c_iI_{m_i}$. Therefore a central element can be written

$$C=\bigoplus_{i=1}^k\left(c_iI_{m_i}\otimes I_{d_i}\right)=\bigoplus_{i=1}^kc_iI_{m_id_i}$$
As a concrete example, let’s say that $X\cong m_1X^{(1)}\oplus m_2X^{(2)}$, where $X^{(1)}$ and $X^{(2)}$ are inequivalent irreps with degrees $d_1$ and $d_2$. Then the matrices in the commutant algebra look like:

$$D=\left(D_1\otimes I_{d_1}\right)\oplus\left(D_2\otimes I_{d_2}\right)$$

with $D_1$ an arbitrary $m_1\times m_1$ matrix and $D_2$ an arbitrary $m_2\times m_2$ matrix,
and the dimension of the commutant algebra is evidently $m_1^2+m_2^2$.
The central matrices in the commutant algebra, on the other hand, look like:

$$C=c_1I_{m_1d_1}\oplus c_2I_{m_2d_2}$$
And the dimension is $2$.
And in my hurry to get a post up yesterday afternoon after forgetting to in the morning, I put up the wrong one. Here’s what should have gone up yesterday, and yesterday’s should have been now.
Now we can describe the most general commutant algebras. Maschke’s theorem tells us that any matrix representation $X$ can be decomposed as the direct sum of irreducible representations. If we collect together all the irreps that are equivalent to each other, we can write

$$X\cong\bigoplus_{i=1}^km_iX^{(i)}$$
where the $X^{(i)}$ are pairwise-inequivalent irreducible matrix representations with degrees $d_i$, respectively. We calculate the degree:

$$\deg X=\sum_{i=1}^km_id_i$$
Now, can a matrix in the commutant algebra send a vector from the subspace corresponding to $m_iX^{(i)}$ to the subspace corresponding to $m_jX^{(j)}$ with $j\neq i$? No, and for basically the same reason we saw in the case of $X^{(1)}\oplus X^{(2)}$. Since it’s an intertwinor, it would have to send the whole $G$-orbit of the vector — spanning a submodule on which $G$ acts by $X^{(i)}$ — into the target subspace, but we know that the target subspace itself has no submodules on which $G$ acts by $X^{(i)}$.
And so any such matrix must be the direct sum of one matrix in each commutant algebra $\mathrm{Com}\left(m_iX^{(i)}\right)$. But we know that these matrices are of the form $D_i\otimes I_{d_i}$. And so we can write

$$D=\bigoplus_{i=1}^k\left(D_i\otimes I_{d_i}\right)$$
which has dimension

$$\sum_{i=1}^km_i^2$$
Sorry I forgot to get this posted this morning.
Given an algebra $A$, it’s interesting to consider the “center” of $A$. This is the collection of algebra elements that commute with all the others. That is,

$$Z(A)=\left\{z\in A\mid za=az\text{ for all }a\in A\right\}$$
It’s straightforward to see that sums, scalar multiples, and products of central elements — elements of $Z(A)$ — are themselves central. That is, $Z(A)$ is an algebra, and it’s a commutative one to boot. This gives us a construction that starts with an associative algebra and ends with a commutative algebra, and yet it turns out that it is not a functor! I don’t really want to get into that right now, though, but I wanted to mention it in passing, since it’s one of the few examples of a natural algebraic construction that isn’t functorial.
What I do want to get into right now is calculating the center of the matrix algebra $\mathrm{Mat}_n(\mathbb{C})$. The answer is reminiscent of Schur’s lemma:

$$Z\left(\mathrm{Mat}_n(\mathbb{C})\right)=\left\{cI_n\mid c\in\mathbb{C}\right\}$$
Suppose that $C$ is a central matrix. Then in particular it commutes with the matrix $E_{ii}$, which has a $1$ at the $i$th place along the diagonal and $0$s everywhere else. That is, $CE_{ii}=E_{ii}C$. But $CE_{ii}$ zeroes out everything except the $i$th column of $C$, while $E_{ii}C$ zeroes out everything except the $i$th row. For these two to be equal, the $i$th column must be all zeroes except for the one spot along the diagonal, and similarly for the $i$th row. And so $C$ must be diagonal.
For $i\neq j$, $C$ must also commute with the transposition matrix $P_{ij}$ — the matrix with ones in the $j$th column of the $i$th row and the $i$th column of the $j$th row, and down the rest of the diagonal. That is, $CP_{ij}=P_{ij}C$. Multiplying on the right by $P_{ij}$ swaps the $i$th and $j$th columns of $C$, while multiplying on the left swaps the $i$th and $j$th rows. Thus we can tell that not only is $C$ diagonal, but all the diagonal entries must be the same. And so $C=cI_n$ for some complex $c$.
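We can even let the computer re-run this argument: take all the $E_{ii}$ and all the transposition matrices, linearize the commutation condition, and count the dimension of the solution space. Here’s a numpy sketch; the linearization $\mathrm{vec}(CM-MC)=\left(M^\top\otimes I-I\otimes M\right)\mathrm{vec}(C)$ is the standard one, and only its rank matters:

```python
import numpy as np
from itertools import combinations

n = 4

def commutant_dimension(mats):
    # Solve C M = M C for all M in mats as a linear system in the n^2
    # entries of C, and return the dimension of the solution space.
    I = np.eye(n)
    system = np.vstack([np.kron(M.T, I) - np.kron(I, M) for M in mats])
    return n * n - np.linalg.matrix_rank(system)

mats = []
for i in range(n):                       # the diagonal units E_ii
    E = np.zeros((n, n)); E[i, i] = 1.0
    mats.append(E)
for i, j in combinations(range(n), 2):   # the transposition matrices P_ij
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]                # swap rows i and j of the identity
    mats.append(P)

print(commutant_dimension(mats))   # → 1: only scalar multiples of the identity
```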
Before we get into it, let’s discuss a bit of notation. Given a representation $X$ we write $mX$ for the direct sum of $m$ copies of $X$. We say that $m$ is the “multiplicity” of $X$.
Now, let’s let $X$ be an irrep of degree $d$, and let $Y=2X$. Our analysis proceeds exactly as yesterday — with $X^{(1)}=X^{(2)}=X$ — up until we write down our four equations. Now they read:

$$X(g)D_{11}=D_{11}X(g)\qquad X(g)D_{12}=D_{12}X(g)$$

$$X(g)D_{21}=D_{21}X(g)\qquad X(g)D_{22}=D_{22}X(g)$$
This time, Schur’s lemma tells us that each $D_{ij}$ is an intertwinor between $X$ and itself. And so we conclude that each of the blocks is a constant times the identity: $D_{ij}=c_{ij}I_d$. That is:

$$D=\begin{pmatrix}c_{11}I_d&c_{12}I_d\\c_{21}I_d&c_{22}I_d\end{pmatrix}$$
We can recognize this as a Kronecker product of two matrices:

$$D=\begin{pmatrix}c_{11}&c_{12}\\c_{21}&c_{22}\end{pmatrix}\otimes I_d$$
which is the matrix version of the tensor product of two linear maps. If you don’t know much about the tensor product, don’t worry; we’ll refresh more as we go. You can also review tensor products in the context of vector spaces and linear transformations here. What we want to think of here is that the matrix $\left(c_{ij}\right)$ shuffles around the two copies of the irrep $X$, and the identity matrix $I_d$ stands for the trivial transformation on an irreducible representation.
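If the Kronecker product is new to you, this identification is easy to check numerically with numpy’s `kron`:

```python
import numpy as np

d = 3
rng = np.random.default_rng(5)
c = rng.normal(size=(2, 2))        # the 2x2 matrix of constants c_ij

# The block matrix with blocks c_ij * I_d is exactly c ⊗ I_d
blocks = np.block([[c[0, 0] * np.eye(d), c[0, 1] * np.eye(d)],
                   [c[1, 0] * np.eye(d), c[1, 1] * np.eye(d)]])
assert np.allclose(blocks, np.kron(c, np.eye(d)))

# Multiplication goes block-by-block: (A ⊗ I)(B ⊗ I) = AB ⊗ I
c2 = rng.normal(size=(2, 2))
assert np.allclose(np.kron(c, np.eye(d)) @ np.kron(c2, np.eye(d)),
                   np.kron(c @ c2, np.eye(d)))
print("blocks of scalars times the identity form a Kronecker product")
```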
Since any values are possible for the $c_{ij}$, the first matrix can take any value in the algebra of $2\times2$ complex matrices. We say that

$$\mathrm{Com}(2X)\cong\mathrm{Mat}_2(\mathbb{C})$$
In more generality, if $Y=mX$, where $X$ is an irrep of degree $d$, then we find

$$\mathrm{Com}(mX)\cong\mathrm{Mat}_m(\mathbb{C})$$
The degree of the representation is $md$ — we get $d$ for each of the $m$ copies of $X$ — and the dimension of the commutant algebra is the dimension of the matrix algebra $\mathrm{Mat}_m(\mathbb{C})$, which is $m^2$.
Next, let $X^{(1)}$ and $X^{(2)}$ be two inequivalent matrix irreps, with degrees $d_1$ and $d_2$, respectively, and consider the representation $Y=X^{(1)}\oplus X^{(2)}$. As a matrix, this looks like:

$$Y(g)=\begin{pmatrix}X^{(1)}(g)&0\\0&X^{(2)}(g)\end{pmatrix}$$
where we’ve broken the rows and columns into blocks of size $d_1$ and $d_2$. Now let’s determine the algebra of matrices commuting with each such matrix $Y(g)$. Let’s break such a matrix $D$ down into blocks like $Y(g)$:

$$D=\begin{pmatrix}D_{11}&D_{12}\\D_{21}&D_{22}\end{pmatrix}$$
The nice thing about this is that when the block sizes are the same, and when we break rows and columns into the same blocks, the rules for multiplication are the same as for regular matrices:

$$Y(g)D=\begin{pmatrix}X^{(1)}(g)D_{11}&X^{(1)}(g)D_{12}\\X^{(2)}(g)D_{21}&X^{(2)}(g)D_{22}\end{pmatrix}\qquad DY(g)=\begin{pmatrix}D_{11}X^{(1)}(g)&D_{12}X^{(2)}(g)\\D_{21}X^{(1)}(g)&D_{22}X^{(2)}(g)\end{pmatrix}$$
If these are to be equal, we have four equations to satisfy:

$$X^{(1)}(g)D_{11}=D_{11}X^{(1)}(g)\qquad X^{(1)}(g)D_{12}=D_{12}X^{(2)}(g)$$

$$X^{(2)}(g)D_{21}=D_{21}X^{(1)}(g)\qquad X^{(2)}(g)D_{22}=D_{22}X^{(2)}(g)$$
And we can apply Schur’s lemma to all of them. In the middle two equations, we see that both $D_{12}$ and $D_{21}$ must either be invertible or zero. But if either one is invertible, then it gives an equivalence between the matrix irreps $X^{(1)}$ and $X^{(2)}$. But since we assumed that these are inequivalent, we conclude that $D_{12}$ and $D_{21}$ are both the appropriate zero matrices. And then the first and last equations are handled just like single irreps were last time. Thus we must have

$$D=\begin{pmatrix}c_1I_{d_1}&0\\0&c_2I_{d_2}\end{pmatrix}$$
And so $\mathrm{Com}\left(X^{(1)}\oplus X^{(2)}\right)\cong\mathbb{C}\oplus\mathbb{C}$, where the multiplication is handled component by component. Similarly, the direct sum of $k$ pairwise-inequivalent irreps has commutant algebra $\mathbb{C}^{\oplus k}$, with multiplication handled componentwise. The degree of the representation is the sum of the degrees of the irreps, and the dimension of the commutant is $k$.
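As a sanity check on the whole story: the permutation representation of $S_3$ decomposes as the trivial representation plus the standard $2$-dimensional one — two inequivalent irreps, each with multiplicity $1$ — so its commutant should have dimension $1^2+1^2=2$. A numpy sketch:

```python
import numpy as np
from itertools import permutations

# The permutation representation of S_3 on C^3
def rho(p):
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

mats = [rho(p) for p in permutations(range(3))]

# dim { D : D M = M D for all M } via the rank of the linearized system
I = np.eye(3)
system = np.vstack([np.kron(M.T, I) - np.kron(I, M) for M in mats])
print(9 - np.linalg.matrix_rank(system))   # → 2
```

And indeed the commutant here is spanned by the identity matrix and the all-ones matrix, matching $\mathbb{C}\oplus\mathbb{C}$.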