The Unapologetic Mathematician

Mathematics for the interested outsider

Class Functions

Our first observation about characters takes our work from last time and spins it in a new direction.

Let’s say g and h are conjugate elements of the group G. That is, there is some k\in G so that h=kgk^{-1}. I say that for any G-module V with character \chi, the character takes the same value on both g and h. Indeed, picking any matrix representation X corresponding to V, we find that

\displaystyle\begin{aligned}\chi(h)&=\mathrm{Tr}\left(X\left(kgk^{-1}\right)\right)\\&=\mathrm{Tr}\left(X(k)X(g)X(k)^{-1}\right)\\&=\mathrm{Tr}\left(X(g)\right)\\&=\chi(g)\end{aligned}

We see that \chi is not so much a function on the group G as it is a function on the set of conjugacy classes K\subseteq G, since it takes the same value for any two elements in the same conjugacy class. We call such a complex-valued function on a group a “class function”. Clearly they form a vector space, and this vector space comes with a very nice basis: given a conjugacy class K we define f_K:G\to\mathbb{C} to be the function that takes the value 1 for every element of K and the value 0 otherwise. Any class function is a linear combination of these f_K, and so we conclude that the dimension of the space of class functions on G is equal to the number of conjugacy classes in G.

The space of class functions also has a nice inner product. Of course, we could just declare the basis \{f_K\} to be orthonormal, but that’s not quite what we’re going to do. Instead, we’ll define

\displaystyle\langle\chi,\psi\rangle=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi(g)}\psi(g)

The basis \{f_K\} isn’t orthonormal under this inner product, but it is orthogonal. Indeed, we can compute:

\displaystyle\begin{aligned}\langle f_K,f_K\rangle&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{f_K(g)}f_K(g)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in K}\overline{f_K(k)}f_K(k)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in K}1\\&=\frac{\lvert K\rvert}{\lvert G\rvert}\end{aligned}

Incidentally, since K is exactly the conjugation orbit of any of its elements k, the orbit-stabilizer theorem tells us that \lvert K\rvert=\lvert G\rvert/\lvert Z_k\rvert, where Z_k is the centralizer of k. That is, \langle f_K,f_K\rangle is the reciprocal of the size of the centralizer of any k\in K. Thus if we pick a k in each K we can write down the orthonormal basis \{\sqrt{\lvert Z_k\rvert}f_K\}.
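
As a sanity check (my own illustration, not part of the original argument), we can verify this computation for S_3 in a few lines of Python, representing group elements as permutation tuples:

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))  # the symmetric group S_3

def compose(p, q):  # (p∘q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(n))

def inverse(p):
    inv = [0] * n
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

g = (1, 0, 2)  # a transposition
K = {compose(compose(k, g), inverse(k)) for k in G}   # conjugacy class of g
Z = [k for k in G if compose(k, g) == compose(g, k)]  # centralizer of g

# <f_K, f_K> = (1/|G|) * sum over G of |f_K|^2 = |K|/|G| = 1/|Z_g|
f_K = {h: 1.0 if h in K else 0.0 for h in G}
inner = sum(f_K[h] * f_K[h] for h in G) / len(G)
print(inner, len(K) / len(G), 1 / len(Z))  # 0.5 0.5 0.5
```

All three quantities agree: the class of transpositions has three elements in a group of six, and the centralizer of a transposition has two.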

October 15, 2010 Posted by | Algebra, Group theory, Representation Theory | 7 Comments

The Character of a Representation

Now we introduce a very useful tool in the study of group representations: the “character” of a representation. And it’s almost effortless to define: the character \chi of a matrix representation X of a group G is a complex-valued function on G defined by

\displaystyle\chi(g)=\mathrm{Tr}\left(X(g)\right)

That is, the character is “the trace of the representation”. But why this is interesting is almost completely opaque at this point. I’m still not entirely sure why this formula has so many fabulous properties.

First of all, we need to recall something about the trace: it satisfies the “cyclic property”. That is, given an m\times n matrix A and an n\times m matrix B, we have

\displaystyle\mathrm{Tr}(AB)=\mathrm{Tr}(BA)

Indeed, if we write out the matrices in components we find

\displaystyle\begin{aligned}\left(AB\right)_{ij}&=\sum\limits_{k=1}^na_{ik}b_{kj}\\\left(BA\right)_{ij}&=\sum\limits_{k=1}^mb_{ik}a_{kj}\end{aligned}

Then since the trace is the sum of the diagonal elements we calculate

\displaystyle\begin{aligned}\mathrm{Tr}(AB)&=\sum\limits_{i=1}^m\sum\limits_{k=1}^na_{ik}b_{ki}\\\mathrm{Tr}(BA)&=\sum\limits_{k=1}^n\sum\limits_{i=1}^mb_{ki}a_{ik}\end{aligned}

but these are exactly the same!

We have to be careful, though, that we don’t take this to mean that we can arbitrarily reorder matrices inside the trace. If A, B, and C are all n\times n matrices, we can conclude that

\displaystyle\begin{aligned}\mathrm{Tr}(ABC)&=\mathrm{Tr}(BCA)=\mathrm{Tr}(CAB)\\\mathrm{Tr}(CBA)&=\mathrm{Tr}(BAC)=\mathrm{Tr}(ACB)\end{aligned}

but we cannot conclude in general that any of the traces on the upper line are equal to any of the traces on the lower line. We can “cycle” matrices around inside the trace, but not rearrange them arbitrarily.

So, what good is this? Well, if A is an invertible n\times n matrix and X is any matrix, then we find that \mathrm{Tr}(AXA^{-1})=\mathrm{Tr}(XA^{-1}A)=\mathrm{Tr}(X). If A is a change of basis matrix, then this tells us that the trace only depends on the linear transformation X represents, and not on the particular matrix. In particular, if X and Y are two equivalent matrix representations then there is some intertwining matrix A so that AX(g)=Y(g)A for all g\in G. The characters of X and Y are therefore equal.
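
Both the cyclic property and this conjugation invariance are easy to check numerically. Here’s a quick NumPy sketch of my own (random matrices, which are almost surely invertible, stand in for A and X):

```python
import numpy as np

rng = np.random.default_rng(42)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Cyclic permutations inside the trace agree:
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))

# Conjugating by an (almost surely) invertible matrix preserves the trace:
X = rng.standard_normal((3, 3))
assert np.isclose(np.trace(A @ X @ np.linalg.inv(A)), np.trace(X))
```
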

If V is a G-module, then picking any basis for V gives a matrix X(g) representing each linear transformation \rho(g). The previous paragraph shows that which particular matrix representation we pick doesn’t matter, since they all give us the same character \chi(g). And so we can define the character of a G-module to be the character of any corresponding matrix representation.
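
To make this concrete, here’s a small example of my own: the permutation representation of S_3, whose character counts fixed points and is visibly constant on conjugacy classes:

```python
import numpy as np
from itertools import permutations

n = 3

def X(p):
    # Permutation matrix: X(p)[i, j] = 1 exactly when p(j) = i,
    # so that X(p) X(q) = X(p∘q)
    M = np.zeros((n, n))
    for j, i in enumerate(p):
        M[i, j] = 1
    return M

chi = {p: np.trace(X(p)) for p in permutations(range(n))}
# The character counts fixed points: 3 on the identity, 1 on the
# three transpositions, 0 on the two 3-cycles
print(sorted(float(v) for v in chi.values()))  # [0.0, 0.0, 1.0, 1.0, 1.0, 3.0]
```
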

October 14, 2010 Posted by | Algebra, Group theory, Representation Theory | 4 Comments

Hom Space Duals

Again, sorry for the delay but I was eager to polish something up for my real job this morning.

There’s something interesting to notice in our formulæ for the dimensions of spaces of intertwinors: they’re symmetric between the two representations involved. Indeed, let’s take two G-modules:

\displaystyle\begin{aligned}V&\cong m_1V^{(1)}\oplus\dots\oplus m_kV^{(k)}\\W&\cong n_1V^{(1)}\oplus\dots\oplus n_kV^{(k)}\end{aligned}

where the V^{(i)} are pairwise-inequivalent irreducible G-modules with degrees d_i. We calculate the dimensions of the \hom-spaces going each way:

\displaystyle\begin{aligned}\dim\hom_G(V,W)&=\sum\limits_{i=1}^km_in_i\\\dim\hom_G(W,V)&=\sum\limits_{i=1}^kn_im_i\end{aligned}

but these are equal! So does this mean these spaces are isomorphic?

Well, yes. Any two vector spaces having the same dimension are isomorphic, but they’re not “naturally” isomorphic. Roughly, there’s no universal method of giving an explicit isomorphism, and so it’s regarded as sort of coincidental. But there’s something else around that’s not coincidental.

It turns out that these spaces are naturally isomorphic to each other’s dual spaces. That is, for any G-modules V and W we have an isomorphism

\displaystyle\hom_G(W,V)\cong\hom_G(V,W)^*

Luckily, we already know that their dimensions are equal, so the rank-nullity theorem tells us that all we need to do is find an injective linear map from one to the other.

So, let’s take an intertwinor h:W\to V and use it to build a linear functional \lambda_h on \hom_G(V,W). For any intertwinor f:V\to W we define

\displaystyle\lambda_h(f)=\mathrm{Tr}_V(h\circ f)

where \mathrm{Tr} is the trace of an endomorphism: given a matrix, it’s the sum of the diagonal entries. Since the composition of linear maps is linear in each variable, and the trace is a linear function, this is a linear functional as desired. It should also be clear that the construction h\mapsto\lambda_h is itself a linear map.

Now, we must show that this map is injective. That is, for no nonzero h do we find \lambda_h=0. This will follow if we can find for every nonzero h:W\to V at least one f:V\to W so that \mathrm{Tr}_V(h\circ f)\neq0. To do so, we pick a basis for each irreducible representation that shows up in either V or W so we can replace V and W with matrix representations. Now we can write

\displaystyle h=\bigoplus\limits_{i=1}^kM_i\boxtimes I_{d_i}

where M_i\in\mathrm{Mat}_{m_i,n_i}(\mathbb{C}) is an m_i\times n_i complex matrix, since h goes from W to V. To construct our f, we simply take the conjugate transpose of each of these matrices:

\displaystyle f=\bigoplus\limits_{i=1}^kM_i^\dagger\boxtimes I_{d_i}

where now M_i^\dagger\in\mathrm{Mat}_{n_i,m_i}(\mathbb{C}) is an n_i\times m_i complex matrix, as desired. We multiply the two matrices:

\displaystyle hf=\bigoplus\limits_{i=1}^k(M_iM_i^\dagger)\boxtimes I_{d_i}

and find that each M_iM_i^\dagger\in\mathrm{Mat}_{m_i,m_i}(\mathbb{C}) is an m_i\times m_i square matrix. Thus the trace of this composition is the sum of their traces.

We’ve already seen that the composition of a linear transformation and its adjoint is self-adjoint and positive-semidefinite. In terms of complex matrices, this tells us that the product of a matrix and its conjugate transpose is conjugate-symmetric and positive-semidefinite. This means that it’s diagonalizable with all nonnegative real eigenvalues down the diagonal. And thus its trace is a nonnegative real number, and it can only be zero if the original matrix was zero.
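
We can see this numerically. In this NumPy sketch of my own, the trace of MM^\dagger comes out as the sum of the squared moduli of the entries of M, which makes it clear why it vanishes only when M does:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
P = M @ M.conj().T  # self-adjoint and positive-semidefinite

eigs = np.linalg.eigvalsh(P)
assert np.all(eigs >= -1e-12)  # nonnegative real eigenvalues

# The trace is the sum of squared moduli of M's entries,
# so it vanishes only when M itself is zero
assert np.isclose(np.trace(P).real, np.sum(np.abs(M) ** 2))
```
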

The upshot, if you didn’t follow that, is that if h\neq0 we have an f so that \lambda_h(f)=\mathrm{Tr}(h\circ f)\neq0. And thus the map h\mapsto\lambda_h is injective, as we asserted. Proving naturality is about as easy as it was for the additivity of \hom-spaces, and you can work it out if you’re interested.

October 13, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments

Dimensions of Hom Spaces

Now that we know that hom spaces are additive, we’re all set to make a high-level approach to generalizing last week’s efforts. We’re not just going to deal with endomorphism algebras, but with all the \hom-spaces.

Given G-modules V and W, Maschke’s theorem tells us that we can decompose our representations as

\displaystyle\begin{aligned}V&\cong m_1V^{(1)}\oplus\dots\oplus m_kV^{(k)}\\W&\cong n_1V^{(1)}\oplus\dots\oplus n_kV^{(k)}\end{aligned}

where the V^{(i)} are pairwise-inequivalent irreducible G-modules with degrees d_i. I’m including all the irreps that show up in either decomposition, so some of the coefficients m_i or n_i may well be zero. This is not a problem, since it just means direct-summing with a zero module.

So let’s use additivity! We find

\displaystyle\hom_G(V,W)\cong\bigoplus\limits_{i=1}^k\bigoplus\limits_{j=1}^k\hom_G\left(m_iV^{(i)},n_jV^{(j)}\right)

Now to calculate these summands, we can pick a basis for V^{(i)} and V^{(j)} and use the same sorts of methods we did to calculate commutant algebras. We find that if i\neq j — so that V^{(i)}\not\cong V^{(j)} — then there are no nonzero G-morphisms at all, even if we include multiplicities. On the other hand, if i=j we find that an intertwinor between m_iV^{(i)} and n_iV^{(i)} has the form M_{m_i,n_i}\boxtimes I_{d_i}, where M_{m_i,n_i} is an m_i\times n_i complex matrix. That is, as a vector space it’s isomorphic to the space of m_i\times n_i matrices.

We conclude

\displaystyle\hom_G(V,W)\cong\bigoplus\limits_{i=1}^k\mathrm{Mat}_{m_i,n_i}(\mathbb{C})

and its dimension is

\displaystyle\dim\hom_G(V,W)=\sum\limits_{i=1}^km_in_i

Notice that any i for which m_i=0 or n_i=0 doesn’t count for anything.

As a special case, we consider the endomorphism algebra \mathrm{End}_G(V)=\hom_G(V,V). This time we assume that none of the m_i are zero. We find:

\displaystyle\mathrm{End}_G(V)\cong\bigoplus\limits_{i=1}^k\mathrm{Mat}_{m_i}(\mathbb{C})

with dimension

\displaystyle\dim\mathrm{End}_G(V)=\sum\limits_{i=1}^km_i^2

Just like before, we can calculate the center, which goes summand-by-summand. Each summand is (isomorphic to) a complete matrix algebra, so we know that its center is isomorphic to \mathbb{C}. Thus we find that the center of \mathrm{End}_G(V) is the direct sum of k copies of \mathbb{C}, and so has dimension k.

As one last corollary, let V=V^{(1)} be irreducible and let W be any representation. Then we calculate the dimension of the \hom-space:

\displaystyle\dim\hom_G\left(V^{(1)},W\right)=n_1

That is, the dimension of the space of intertwinors is exactly the multiplicity of V^{(1)} in the representation W.

October 12, 2010 Posted by | Algebra, Group theory, Representation Theory | 5 Comments

Hom-Space Additivity

Today I’d like to show that the space \hom_G(V,W) of homomorphisms between two G-modules is “additive”. That is, it satisfies the isomorphisms:

\displaystyle\begin{aligned}\hom_G(V_1\oplus V_2,W)&\cong\hom_G(V_1,W)\oplus\hom_G(V_2,W)\\\hom_G(V,W_1\oplus W_2)&\cong\hom_G(V,W_1)\oplus\hom_G(V,W_2)\end{aligned}

We should be careful here: the direct sums inside the \hom are direct sums of G-modules, while those outside are direct sums of vector spaces.

The second of these is actually the easier. If f:V\to W_1\oplus W_2 is a G-morphism, then we can write it as f=(f_1,f_2), where f_1:V\to W_1 and f_2:V\to W_2. Indeed, just take the projection \pi_i:W_1\oplus W_2\to W_i and compose it with f to get f_i=\pi_i\circ f. These projections are also G-morphisms, since W_1 and W_2 are G-submodules. Since every f can be uniquely decomposed, we get a linear map \hom_G(V,W_1\oplus W_2)\to\hom_G(V,W_1)\oplus\hom_G(V,W_2).

Then the general rules of direct sums tell us we can inject W_1 and W_2 back into W_1\oplus W_2, and write

\displaystyle f=I_{W_1\oplus W_2}\circ f=(\iota_1\circ\pi_1+\iota_2\circ\pi_2)\circ f=\iota_1\circ f_1+\iota_2\circ f_2

Thus given any G-morphisms f_1:V\to W_1 and f_2:V\to W_2 we can reconstruct an f:V\to W_1\oplus W_2. This gives us a map in the other direction — \hom_G(V,W_1)\oplus\hom_G(V,W_2)\to\hom_G(V,W_1\oplus W_2) — which is clearly the inverse of the first one, and thus establishes our isomorphism.
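
The linear algebra underneath this is just splitting a matrix into row blocks and stacking them back together. Here’s a small NumPy illustration of my own (ignoring the G-action, which only adds the intertwining condition on each piece):

```python
import numpy as np

rng = np.random.default_rng(3)
# A linear map f: C^3 -> C^2 ⊕ C^2, written as a 4x3 matrix
f = rng.standard_normal((4, 3))

# Composing with the projections π_i picks out the row blocks
f1, f2 = f[:2], f[2:]

# ι_1∘f_1 + ι_2∘f_2: pad each block back to four rows and add
recon = np.vstack([f1, np.zeros((2, 3))]) + np.vstack([np.zeros((2, 3)), f2])
assert np.allclose(recon, f)
```
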

Now that we’ve established the second isomorphism, the first becomes clearer. Given a G-morphism h:V_1\oplus V_2\to W we need to find morphisms h_i:V_i\to W. Before we composed with projections, so this time let’s compose with injections! Indeed, \iota_i:V_i\to V_1\oplus V_2 composes with h to give h_i=h\circ\iota_i:V_i\to W. On the other hand, given morphisms h_i:V_i\to W, we can use the projections \pi_i:V_1\oplus V_2\to V_i and compose them with the h_i to get two morphisms h_i\circ\pi_i:V_1\oplus V_2\to W. Adding them together gives a single morphism, and if the h_i came from an h, then this reconstructs the original. Indeed:

\displaystyle h_1\circ\pi_1+h_2\circ\pi_2=h\circ\iota_1\circ\pi_1+h\circ\iota_2\circ\pi_2=h\circ(\iota_1\circ\pi_1+\iota_2\circ\pi_2)=h\circ I_{V_1\oplus V_2}=h

And so the first isomorphism holds as well.

We should note that these are not just isomorphisms, but “natural” isomorphisms. That the construction \hom_G(\underline{\hphantom{X}},\underline{\hphantom{X}}) is a functor is clear, and it’s straightforward to verify that these isomorphisms are natural for those who are interested in the category-theoretic details.

October 11, 2010 Posted by | Algebra, Category theory, Group theory, Representation Theory | 3 Comments

Centers of Commutant Algebras

We want to calculate the centers of commutant algebras. We will have use of the two easily-established equations:

\displaystyle\begin{aligned}(B_1\oplus B_2)\circ(A_1\oplus A_2)&=(B_1\circ A_1)\oplus(B_2\circ A_2)\\(B_1\otimes B_2)\circ(A_1\otimes A_2)&=(B_1\circ A_1)\otimes(B_2\circ A_2)\end{aligned}

where A_1:U_1\to V_1, A_2:U_2\to V_2, B_1:V_1\to W_1, and B_2:V_2\to W_2 are linear functions. In particular, this holds where A_1 and A_2 are m\times m matrices representing linear endomorphisms of \mathbb{C}^m, and B_1 and B_2 are n\times n matrices representing linear endomorphisms of \mathbb{C}^n.
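
The second of these is the “mixed product” rule for Kronecker products, and it’s easy to test numerically. Here’s a quick NumPy check of my own, with arbitrary rectangular matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A1, A2 = rng.standard_normal((3, 2)), rng.standard_normal((4, 2))
B1, B2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 4))

# (B1 ⊗ B2)(A1 ⊗ A2) = (B1 A1) ⊗ (B2 A2)
lhs = np.kron(B1, B2) @ np.kron(A1, A2)
rhs = np.kron(B1 @ A1, B2 @ A2)
assert np.allclose(lhs, rhs)
```
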

Now let X be a matrix representation and consider a central matrix C\in Z_{\mathrm{Com}_G(X)}. That is, for all T\in\mathrm{Com}_G(X), we have

\displaystyle CT = TC

Let’s further assume that we can write

\displaystyle X=m_1X^{(1)}\oplus\dots\oplus m_kX^{(k)}

where each X^{(i)} is an irreducible representation of degree d_i. Then we know that we can write

\displaystyle\begin{aligned}T&=\bigoplus\limits_{i=1}^k\left(T_{m_i}\boxtimes I_{d_i}\right)\\C&=\bigoplus\limits_{i=1}^k\left(C_{m_i}\boxtimes I_{d_i}\right)\end{aligned}

Thus we calculate:

\displaystyle\begin{aligned}CT&=\left(\bigoplus\limits_{i=1}^k(C_{m_i}\boxtimes I_{d_i})\right)\left(\bigoplus\limits_{i=1}^k(T_{m_i}\boxtimes I_{d_i})\right)\\&=\bigoplus\limits_{i=1}^k\left((C_{m_i}\boxtimes I_{d_i})(T_{m_i}\boxtimes I_{d_i})\right)\\&=\bigoplus\limits_{i=1}^k(C_{m_i}T_{m_i}\boxtimes I_{d_i})\\TC&=\bigoplus\limits_{i=1}^k(T_{m_i}C_{m_i}\boxtimes I_{d_i})\end{aligned}

This is only possible if for each i we have C_{m_i}T_{m_i}=T_{m_i}C_{m_i} for all T_{m_i}\in\mathrm{Mat}_{m_i}. But this means that C_{m_i} is in the center of \mathrm{Mat}_{m_i}, which implies that C_{m_i}=c_iI_{m_i}. Therefore a central element can be written

\displaystyle C=\bigoplus\limits_{i=1}^k\left(c_iI_{m_i}\boxtimes I_{d_i}\right)=\bigoplus\limits_{i=1}^kc_iI_{m_id_i}

As a concrete example, let’s say that X=2X^{(1)}\oplus X^{(2)}, where \deg\left(X^{(1)}\right)=3 and \deg\left(X^{(2)}\right)=4. Then the matrices in the commutant algebra look like:

T=\left(\begin{array}{ccc|ccc|cccc}a&0&0&b&0&0&0&0&0&0\\0&a&0&0&b&0&0&0&0&0\\0&0&a&0&0&b&0&0&0&0\\\hline c&0&0&d&0&0&0&0&0&0\\0&c&0&0&d&0&0&0&0&0\\0&0&c&0&0&d&0&0&0&0\\\hline 0&0&0&0&0&0&x&0&0&0\\0&0&0&0&0&0&0&x&0&0\\0&0&0&0&0&0&0&0&x&0\\0&0&0&0&0&0&0&0&0&x\end{array}\right)

and the dimension of the commutant algebra is evidently m_1^2+m_2^2=2^2+1^2=5.

The central matrices in the commutant algebra, on the other hand, look like:

\displaystyle C=c_1I_6\oplus c_2I_4=\begin{pmatrix}c_1I_6&0\\{0}&c_2I_4\end{pmatrix}

And the dimension is k=2.
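
Here’s a quick NumPy check of my own for this example: a generic commutant element of the displayed shape commutes with a central one, since the central element is a scalar on each block:

```python
import numpy as np

rng = np.random.default_rng(5)
m1, d1, m2, d2 = 2, 3, 1, 4  # X = 2X^(1) ⊕ X^(2) with degrees 3 and 4

def blockdiag(A, B):
    Z1 = np.zeros((A.shape[0], B.shape[1]))
    Z2 = np.zeros((B.shape[0], A.shape[1]))
    return np.block([[A, Z1], [Z2, B]])

# A generic commutant element: (M1 ⊠ I_3) ⊕ (M2 ⊠ I_4)
T = blockdiag(np.kron(rng.standard_normal((m1, m1)), np.eye(d1)),
              np.kron(rng.standard_normal((m2, m2)), np.eye(d2)))

# A central element: c_1 I_6 ⊕ c_2 I_4
C = blockdiag(1.7 * np.eye(m1 * d1), -0.3 * np.eye(m2 * d2))

assert np.allclose(C @ T, T @ C)
```
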

October 8, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Commutant Algebras in General

And in my hurry to get a post up yesterday afternoon after forgetting to in the morning, I put up the wrong one. Here’s what should have gone up yesterday, and yesterday’s should have been now.

Now we can describe the most general commutant algebras. Maschke’s theorem tells us that any matrix representation X can be decomposed as the direct sum of irreducible representations. If we collect together all the irreps that are equivalent to each other, we can write

\displaystyle X\cong m_1X^{(1)}\oplus m_2X^{(2)}\oplus\dots\oplus m_kX^{(k)}

where the X^{(i)} are pairwise-inequivalent irreducible matrix representations with degrees d_i, respectively. We calculate the degree:

\displaystyle\deg X=\sum\limits_{i=1}^k\deg\left(m_iX^{(i)}\right)=\sum\limits_{i=1}^km_id_i

Now, can a matrix in the commutant algebra send a vector from the subspace isomorphic to m_iX^{(i)} to the subspace isomorphic to m_jX^{(j)}? No, and for basically the same reason we saw in the case of X^{(i)}\oplus X^{(j)}. Since it’s an intertwinor, it would have to send the whole \mathbb{C}[G]-orbit of the vector — a submodule isomorphic to X^{(i)} — into the target subspace m_jX^{(j)}, but we know that that submodule itself has no submodules isomorphic to X^{(i)}.

And so any such matrix must be the direct sum of one matrix in each commutant algebra \mathrm{Com}_G\left(m_iX^{(i)}\right). But we know that these matrices are of the form M_{m_i}\boxtimes I_{d_i}. And so we can write

\displaystyle\mathrm{Com}_G(X)=\left\{\bigoplus\limits_{i=1}^k(M_{m_i}\boxtimes I_{d_i})\bigg\vert M_{m_i}\in\mathrm{Mat}_{m_i}(\mathbb{C})\right\}

which has dimension

\displaystyle\dim\mathrm{Com}_G(X)=\sum\limits_{i=1}^km_i^2

October 7, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

The Center of an Algebra

Sorry I forgot to get this posted this morning.

Given an algebra A, it’s interesting to consider the “center” Z_A of A. This is the collection of algebra elements that commute with all the others. That is,

\displaystyle Z_A=\{a\in A\vert\forall b\in A, ab=ba\}

It’s straightforward to see that sums, scalar multiples, and products of central elements — elements of Z_A — are themselves central. That is, Z_A is an algebra, and it’s a commutative one to boot. This gives us a construction that starts with an associative algebra and ends with a commutative algebra, and yet it turns out that it is not a functor! I don’t really want to get into that right now, but I wanted to mention it in passing, since it’s one of the few examples of a natural algebraic construction that isn’t functorial.

What I do want to get into right now, is calculating the center of the matrix algebra \mathrm{Mat}_d(\mathbb{C}). The answer is reminiscent of Schur’s lemma:

Z_{\mathrm{Mat}_d(\mathbb{C})}=\{c I_d\vert c\in\mathbb{C}\}

Suppose that C is a central d\times d matrix. Then in particular it commutes with the matrix E_{i,i}, which has a 1 at the ith place along the diagonal and 0s everywhere else. That is, CE_{i,i}=E_{i,i}C. But CE_{i,i} zeroes out everything except the ith column of C, while E_{i,i}C zeroes out everything except the ith row. For these two to be equal, the ith column must be all zeroes except for the one spot along the diagonal, and similarly for the ith row. And so C must be diagonal.

For i\neq j, C must also commute with E_{i,j}+E_{j,i} — the matrix with ones in the jth column of the ith row and the ith column of the jth row. That is, C(E_{i,j}+E_{j,i})=(E_{i,j}+E_{j,i})C. Multiplying on the right by E_{i,j}+E_{j,i} swaps the ith and jth columns of C, while multiplying on the left swaps the ith and jth rows. Thus we can tell that not only is C diagonal, but all the diagonal entries must be the same. And so C=c I_d for some complex c.
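
We can confirm this computation numerically. In this NumPy sketch of my own, each commutation condition becomes a linear constraint on the entries of C, and counting solutions shows the center is one-dimensional, exactly the scalar matrices:

```python
import numpy as np

d = 4
# The matrices used in the argument: E_{i,i} and E_{i,j} + E_{j,i}
gens = []
for i in range(d):
    E = np.zeros((d, d)); E[i, i] = 1
    gens.append(E)
for i in range(d):
    for j in range(i + 1, d):
        E = np.zeros((d, d)); E[i, j] = 1; E[j, i] = 1
        gens.append(E)

# CB = BC becomes (kron(B.T, I) - kron(I, B)) vec(C) = 0
I = np.eye(d)
A = np.vstack([np.kron(B.T, I) - np.kron(I, B) for B in gens])
center_dim = d * d - np.linalg.matrix_rank(A)
print(center_dim)  # 1: only the scalar matrices survive
```
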

October 6, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments

More Commutant Algebras

We continue yesterday’s discussion of commutant algebras. But today, let’s consider the direct sum of a bunch of copies of the same irrep.

Before we get into it, let’s discuss a bit of notation. Given a representation X we write mX for the direct sum of m copies of X. We say that m is the “multiplicity” of X.

Now, let’s let X^{(1)} be an irrep of degree d, and let X=2X^{(1)}. Our analysis proceeds exactly as yesterday — with X^{(2)}=X^{(1)} — up until we write down our four equations. Now they read:

\displaystyle\begin{aligned}T_{1,1}X^{(1)}(g)&=X^{(1)}(g)T_{1,1}\\T_{1,2}X^{(1)}(g)&=X^{(1)}(g)T_{1,2}\\T_{2,1}X^{(1)}(g)&=X^{(1)}(g)T_{2,1}\\T_{2,2}X^{(1)}(g)&=X^{(1)}(g)T_{2,2}\end{aligned}

This time, Schur’s lemma tells us that each T_{i,j} is an intertwinor between X^{(1)} and itself. And so we conclude that each of the blocks is a constant times the identity: T_{i,j}=c_{i,j}I_d. That is:

\displaystyle T=\begin{pmatrix}c_{1,1}I_d&c_{1,2}I_d\\c_{2,1}I_d&c_{2,2}I_d\end{pmatrix}

We can recognize this as a Kronecker product of two matrices:

\displaystyle T=\begin{pmatrix}c_{1,1}&c_{1,2}\\c_{2,1}&c_{2,2}\end{pmatrix}\boxtimes I_d

which is the matrix version of the tensor product of two linear maps. If you don’t know much about the tensor product, don’t worry; we’ll refresh more as we go. You can also review tensor products in the context of vector spaces and linear transformations. What we want to think of here is that the matrix \left(c_{i,j}\right) shuffles around the two copies of the irrep X^{(1)}, and the identity matrix I_d stands for the trivial transformation on an irreducible representation.

Since any values are possible for the c_{i,j}, the first matrix can take any value in the algebra \mathrm{Mat}_2(\mathbb{C}) of 2\times2 complex matrices. We say that

\displaystyle\mathrm{Com}_G(X)=\{M_2\boxtimes I_d\vert M_2\in\mathrm{Mat}_2(\mathbb{C})\}

In more generality, if X=mX^{(1)}, where X^{(1)} is an irrep of degree d, then we find

\displaystyle\mathrm{Com}_G(X)=\{M_m\boxtimes I_d\vert M_m\in\mathrm{Mat}_m(\mathbb{C})\}

The degree of the representation X is md — we get d for each of the m copies of X^{(1)} — and the dimension of the commutant algebra is the dimension of the matrix algebra \mathrm{Mat}_m(\mathbb{C}), which is m^2.
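
In block form the representation mX^{(1)} is I_m\boxtimes X^{(1)}(g), and the claim is that anything of the form M\boxtimes I_d commutes with it. Here’s a quick NumPy check of my own; a random matrix stands in for X^{(1)}(g), since the computation only uses the Kronecker product structure, not irreducibility:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 2, 3
Xg = rng.standard_normal((d, d))  # stand-in for an irrep matrix X^(1)(g)
M = rng.standard_normal((m, m))   # arbitrary element of Mat_m(C)

X_big = np.kron(np.eye(m), Xg)    # the representation mX^(1) in block form
T = np.kron(M, np.eye(d))         # a claimed commutant element M ⊠ I_d

# Both products equal M ⊠ X^(1)(g), so they agree
assert np.allclose(T @ X_big, X_big @ T)
```
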

October 5, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Some Commutant Algebras

We want to calculate commutant algebras of matrix representations. We already know that if X is an irrep, then \mathrm{Com}_G(X)=\mathbb{C}, and we’ll move on from there.

Next, let X^{(1)} and X^{(2)} be two inequivalent matrix irreps, with degrees d_1 and d_2, respectively, and consider the representation X=X^{(1)}\oplus X^{(2)}. As a matrix, this looks like:

\displaystyle X(g)=\begin{pmatrix}X^{(1)}(g)&0\\{0}&X^{(2)}(g)\end{pmatrix}

Where we’ve broken the d_1+d_2 rows and columns into blocks of size d_1 and d_2. Now let’s determine the algebra of matrices T commuting with each such matrix X(g). Let’s break down T into blocks like X.

\displaystyle T=\begin{pmatrix}T_{1,1}&T_{1,2}\\T_{2,1}&T_{2,2}\end{pmatrix}

The nice thing about this is that when the block sizes are the same, and when we break rows and columns into the same blocks, the rules for multiplication are the same as for regular matrices:

\displaystyle\begin{aligned}TX(g)&=\begin{pmatrix}T_{1,1}X^{(1)}(g)&T_{1,2}X^{(2)}(g)\\T_{2,1}X^{(1)}(g)&T_{2,2}X^{(2)}(g)\end{pmatrix}\\X(g)T&=\begin{pmatrix}X^{(1)}(g)T_{1,1}&X^{(1)}(g)T_{1,2}\\X^{(2)}(g)T_{2,1}&X^{(2)}(g)T_{2,2}\end{pmatrix}\end{aligned}

If these are to be equal, we have four equations to satisfy:

\displaystyle\begin{aligned}T_{1,1}X^{(1)}(g)&=X^{(1)}(g)T_{1,1}\\T_{1,2}X^{(2)}(g)&=X^{(1)}(g)T_{1,2}\\T_{2,1}X^{(1)}(g)&=X^{(2)}(g)T_{2,1}\\T_{2,2}X^{(2)}(g)&=X^{(2)}(g)T_{2,2}\end{aligned}

And we can apply Schur’s lemma to all of them. In the middle two equations, we see that both T_{1,2} and T_{2,1} must either be invertible or zero. But if either one is invertible, then it gives an equivalence between the matrix irreps X^{(1)} and X^{(2)}. Since we assumed that these are inequivalent, we conclude that T_{1,2} and T_{2,1} are both the appropriate zero matrices. And then the first and last equations are handled just like single irreps were last time. Thus we must have

\displaystyle T=\begin{pmatrix}c_1I_{d_1}&0\\{0}&c_2I_{d_2}\end{pmatrix}

And so \mathrm{Com}_G(X)=\mathbb{C}\oplus\mathbb{C}, where the multiplication is handled component by component. Similarly, the direct sum of n pairwise-inequivalent irreps X=X^{(1)}\oplus\dots\oplus X^{(n)} has commutant algebra \mathrm{Com}_G(X)=\mathbb{C}^n, with multiplication handled componentwise. The degree of the representation X is the sum of the degrees of the irreps, and the dimension of the commutant is n.
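
As a check of my own on the simplest case, take G=\mathbb{Z}_3 with the direct sum of its two nontrivial one-dimensional irreps; computing the commutant as the nullspace of the intertwining conditions gives dimension 2, matching \mathbb{C}\oplus\mathbb{C}:

```python
import numpy as np

# Z_3: two inequivalent one-dimensional irreps send g to ω^g and ω^{2g};
# X is their direct sum
omega = np.exp(2j * np.pi / 3)
reps = [np.diag([omega ** g, omega ** (2 * g)]) for g in range(3)]

# T X(g) = X(g) T becomes (kron(X(g).T, I) - kron(I, X(g))) vec(T) = 0
I = np.eye(2)
A = np.vstack([np.kron(X.T, I) - np.kron(I, X) for X in reps])
dim_commutant = 4 - np.linalg.matrix_rank(A)
print(dim_commutant)  # 2, matching Com_G(X) = C ⊕ C
```
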

October 4, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments