The Unapologetic Mathematician

Mathematics for the interested outsider

Characters of Induced Representations

We know how to restrict and induce representations. Now we want to see what this looks like on the level of characters.

For restricted representations, this is easy. Let X be a matrix representation of a group G, and let H\subseteq G be a subgroup. Then X\!\!\downarrow^G_H(h)=X(h) for any h\in H. We just consider an element of H as an element in G and construct the matrix as usual. Therefore we can see that

\displaystyle\chi\!\!\downarrow^G_H(h)=\chi(h)
That is, we get the restricted character by restricting the original character.
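As a quick sanity check, we can compute a restricted character concretely. The following Python sketch (the names and setup are mine, not from the post) takes the permutation character of S_3 acting on three points — which just counts fixed points — and restricts it to the subgroup A_3 simply by evaluating the same function on the subgroup's elements.

```python
from itertools import permutations

# The defining permutation character of S_3: chi(g) = number of fixed points.
# Restriction to a subgroup is literally the same function on fewer elements.
def chi(g):
    return sum(1 for i, gi in enumerate(g) if gi == i)

def sign(p):
    n = len(p)
    return (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

G = list(permutations(range(3)))          # S_3 as permutation tuples
H = [p for p in G if sign(p) == 1]        # the subgroup A_3

restricted = {h: chi(h) for h in H}       # chi restricted to A_3
```

Restriction really is this trivial on the level of characters: no new computation happens, the domain just shrinks.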

As for the induced character, we use the matrix of the induced representation that we calculated last time. If X is a matrix representation of a group H, which is a subgroup H\subseteq G, then we pick a transversal \{t_1,\dots,t_n\} of H in G. Using our formula for the induced matrix, we find

\displaystyle\chi\!\!\uparrow_H^G(g)=\sum\limits_{i=1}^n\chi(t_i^{-1}gt_i)
where we define \chi(g)=0 if g\notin H. Now, since \chi is a class function on H, conjugation by any element h\in H leaves it the same. That is,

\displaystyle\chi(h^{-1}gh)=\chi(g)
for all g\in G and h\in H. So let’s do exactly this for each element of H, add all the results together, and then divide by the number of elements of H. That is, we write the above function out in \lvert H\rvert different ways, add them all together, and divide by \lvert H\rvert to get exactly what we started with:

\displaystyle\begin{aligned}\chi\!\!\uparrow_H^G(g)&=\frac{1}{\lvert H\rvert}\sum\limits_{h\in H}\sum\limits_{i=1}^n\chi(h^{-1}t_i^{-1}gt_ih)\\&=\frac{1}{\lvert H\rvert}\sum\limits_{h\in H}\sum\limits_{i=1}^n\chi\left((t_ih)^{-1}g(t_ih)\right)\end{aligned}

But now as t_i varies over the transversal, and as h varies over H, their product t_ih varies exactly once over G. That is, every x\in G can be written in exactly one way in the form t_ih for some transversal element t_i and subgroup element h. Thus we find:

\displaystyle\chi\!\!\uparrow_H^G(g)=\frac{1}{\lvert H\rvert}\sum\limits_{x\in G}\chi(x^{-1}gx)
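We can check this formula in a small example. Here is a Python sketch (all names are mine) that induces the trivial character of A_3\subseteq S_3 up to S_3 using the sum over all of G. The result should be the character of the two-dimensional coset representation, which decomposes as the trivial character plus the sign character.

```python
from itertools import permutations

def compose(p, q):                   # (p∘q)(i) = p(q(i)): apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):
    n = len(p)
    return (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

G = list(permutations(range(3)))
H = [p for p in G if sign(p) == 1]   # A_3 inside S_3

def chi(h):                          # trivial character of H, extended by zero to G
    return 1 if h in H else 0

def induced(g):                      # chi↑ via the sum over all of G, divided by |H|
    total = sum(chi(compose(compose(inverse(x), g), x)) for x in G)
    return total // len(H)           # the division is exact, as the derivation shows
```

The test below also confirms the decomposition: the induced character agrees with 1+\mathrm{sgn} on every element.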

November 29, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Induced Matrix Representations

Sorry I missed posting this back in the morning…

We want to work out the matrices of induced representations. Explicitly, if V is a left H-module of degree d, where H is a subgroup of G, then V\!\!\uparrow_H^G is a left G-module. If we pick a basis of V, we get a matrix representation X:H\to\mathrm{Mat}_d(\mathbb{C}). We want to describe a matrix representation corresponding to V\!\!\uparrow_H^G. In the process, we’ll see that we were way off with our first stabs at the dimensions of tensor products over H.

The key point is to realize that \mathbb{C}[G] is a free right module over \mathbb{C}[H]. That is, we can find some collection of vectors in \mathbb{C}[G] so that any other one can be written as a linear combination of these with coefficients (on the right) in \mathbb{C}[H]. Indeed, we can break G up into the \lvert G\rvert/\lvert H\rvert left cosets of H. Picking one representative t_i of each coset — we call this a “transversal” for H — we have essentially chopped \mathbb{C}[G] up into chunks, each of which looks exactly like \mathbb{C}[H].

To see this, notice that the coset t_iH is a subset of G. Thus it describes a subspace of \mathbb{C}[G] — that spanned by the elements of the coset, considered as basis vectors in the group algebra. The action of H on \mathbb{C}[G] shuffles the basis vectors in this coset around amongst each other, and so this subspace is invariant. It should be clear that it is isomorphic to \mathbb{C}[H], considered as a right H-module.

Okay, so when we consider the tensor product \mathbb{C}[G]\otimes_HV, we can pull any action by H across to the right and onto V. What remains on the left? A vector space spanned by the transversal elements \{t_i\}, which essentially index the left cosets of H in G. We have one copy of V for each of these cosets, and so the dimension of the induced module V\!\!\uparrow_H^G is d\lvert G\rvert/\lvert H\rvert.

How should we think about this equation, heuristically? The tensor product multiplies the dimensions of vector spaces, which gives d\lvert G\rvert. Then the action of H on the tensor product divides by a factor of \lvert H\rvert — at least in principle. In practice, this only works because in our example the action by H is free. That is, no element in the bare tensor product \mathbb{C}[G]\otimes V is left fixed by any non-identity element of H.

So how does this give us a matrix representation of G? Well, g acts on \mathbb{C}[G] by shuffling around the subspaces that correspond to the cosets of H. In fact, this is exactly the coset representation of G corresponding to H! If we write gt_j=t_ih for some i and some h\in H, then g sends the coset t_jH to the coset t_iH. This uses up the transversal element t_i, and the h is left to “pass through” and act on V.

To write this all out explicitly, we get the following block matrix:

\displaystyle X\!\!\uparrow_H^G(g)=\begin{pmatrix}X(t_i^{-1}gt_j)\end{pmatrix}=\left(\begin{array}{cccc}X(t_1^{-1}gt_1)&X(t_1^{-1}gt_2)&\cdots&X(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&X(t_2^{-1}gt_2)&\cdots&X(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\X(t_n^{-1}gt_1)&X(t_n^{-1}gt_2)&\cdots&X(t_n^{-1}gt_n)\end{array}\right)

where n is the number of cosets, and we simply define X(t_i^{-1}gt_j) to be a zero block if t_i^{-1}gt_j does not actually fall into H.
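To make the block formula concrete, here is a small Python sketch (the setup is my own) building X\!\!\uparrow for the trivial one-dimensional representation of A_3\subseteq S_3 with transversal \{e,(1\,2)\}. The blocks are 1\times1, so the induced matrices are 2\times2, and we can verify directly that the construction really is a homomorphism.

```python
from itertools import permutations

def compose(p, q):                   # apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]   # A_3
T = [(0, 1, 2), (1, 0, 2)]              # transversal: e and the transposition (1 2)

def X(h):                               # trivial 1x1 representation of H
    return [[1]]

def X_up(g):                            # block matrix (X(t_i^{-1} g t_j)), zero if outside H
    n = len(T)
    def block(i, j):
        x = compose(compose(inverse(T[i]), g), T[j])
        return X(x)[0][0] if x in H else 0
    return [[block(i, j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]
```

The trace of each induced matrix reproduces the induced-character values computed above: 2 on the identity, 0 on transpositions, 2 on 3-cycles.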

November 25, 2010 Posted by | Algebra, Group theory, Representation Theory | 5 Comments

Restricting and Inducing Representations

Two of the most interesting constructions involving group representations are restriction and induction. For our discussion of both of them, we let H\subseteq G be a subgroup; it doesn’t have to be normal.

Now, given a representation \rho:G\to\mathrm{End}(V), it’s easy to “restrict” it to just apply to elements of H. In other words, we can compose the representing homomorphism \rho with the inclusion \iota:H\to G: \rho\circ\iota:H\to\mathrm{End}(V). We write this restricted representation as \rho\!\!\downarrow^G_H; if we are focused on the representing space V, we can write V\!\!\downarrow^G_H; if we pick a basis for V to get a matrix representation X we can write X\!\!\downarrow^G_H. Sometimes, if the original group G is clear from the context we omit it. For instance, we may write V\!\!\downarrow_H.

It should be clear that restriction is transitive. That is, if K\subseteq H\subseteq G is a chain of subgroups, then the inclusion mapping \iota_{K,G}:K\hookrightarrow G is exactly the composition of the inclusion arrows \iota_{K,H}:K\hookrightarrow H and \iota_{H,G}:H\hookrightarrow G. And so we conclude that

\displaystyle\left(X\!\!\downarrow^G_H\right)\!\!\downarrow^H_K=X\!\!\downarrow^G_K

So whether we restrict from G directly to K, or we first restrict from G to H and from there to K, we get the same representation in the end.

Induction is a somewhat more mysterious process. If V is a left H-module, we want to use it to construct a left G-module, which we will write V\!\!\uparrow_H^G, or simply V\!\!\uparrow^G if the first group H is clear from the context. To get this representation, we will take the tensor product over H with the group algebra of G.

To be more explicit, remember that the group algebra \mathbb{C}[G] carries an action of G on both the left and the right. We leave the left action alone, but we restrict the right action down to H. So we have a G\times H-module {}_G\mathbb{C}[G]_H, and we take the tensor product over H with {}_HV. We get the space V\!\!\uparrow_H^G=\mathbb{C}[G]\otimes_HV; in the process the tensor product over H “eats up” the right action of H on the \mathbb{C}[G] and the left action of H on V. The extra left action of G on \mathbb{C}[G] leaves a residual left action on the tensor product, and this is the left action we seek.

Again, induction is transitive. If K\subseteq H\subseteq G is a chain of subgroups, and if V is a left K-module, then

\displaystyle\left(V\!\!\uparrow_K^H\right)\!\!\uparrow_H^G\cong V\!\!\uparrow_K^G
The key step here is that \mathbb{C}[G]\otimes_H\mathbb{C}[H]\cong\mathbb{C}[G]. Indeed, if we have any simple tensor g\otimes h\in\mathbb{C}[G]\otimes_H\mathbb{C}[H], we can use the relation that lets us pull elements of H across the tensor product to get gh\otimes1\in\mathbb{C}[G]\otimes_H\mathbb{C}[H]. That is, we can specify any tensor by an element of \mathbb{C}[G] alone.

November 23, 2010 Posted by | Algebra, Group theory, Representation Theory | 9 Comments

The Character Table as Change of Basis

Now that we’ve seen that the character table is square, we know that irreducible characters form an orthonormal basis of the space of class functions. And we also know another orthonormal basis of this space, indexed by the conjugacy classes K\subseteq G:

\displaystyle\left\{\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}f_K\right\}

A line in the character table corresponds to an irreducible character \chi^{(i)}, and its entries \chi_K^{(i)} tell us how to write it in terms of the basis \{f_K\}:

\displaystyle\chi^{(i)}=\sum\limits_K\chi_K^{(i)}f_K
That is, it’s a change of basis matrix from one to the other. In fact, we can modify it slightly to exploit the orthonormality as well.

When dealing with lines in the character table, we found that we can write our inner product as

\displaystyle\langle\chi,\psi\rangle=\sum\limits_K\frac{\lvert K\rvert}{\lvert G\rvert}\overline{\chi_K}\psi_K

So let’s modify the table to replace the entry \chi_K^{(i)} with \sqrt{\lvert K\rvert/\lvert G\rvert}\chi_K^{(i)}. Then we have

\displaystyle\sum\limits_K\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(j)}\right)=\langle\chi^{(i)},\chi^{(j)}\rangle=\delta_{i,j}

where we make use of our orthonormality relations. That is, if we use the regular dot product on the rows of the modified character table (considered as tuples of complex numbers) we find that they’re orthonormal. But this means that the modified table is a unitary matrix, and thus its columns are orthonormal as well. We conclude that

\displaystyle\sum\limits_i\overline{\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_K^{(i)}\right)}\left(\sqrt{\frac{\lvert K\rvert}{\lvert G\rvert}}\chi_L^{(i)}\right)=\delta_{K,L}

where now the sum is over a set indexing the irreducible characters. We rewrite these relations as

\displaystyle\sum\limits_i\overline{\chi_K^{(i)}}\chi_L^{(i)}=\frac{\lvert G\rvert}{\lvert K\rvert}\delta_{K,L}
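These column relations are easy to verify numerically. The sketch below (a minimal check of my own, not from the post) hard-codes the full character table of S_3 and confirms that distinct columns are orthogonal, while each column K has squared length \lvert G\rvert/\lvert K\rvert.

```python
# Character table of S_3: rows are irreducible characters, columns are the
# classes [identity, transpositions, 3-cycles] with sizes 1, 3, 2.
table = [
    [1,  1,  1],   # trivial
    [1, -1,  1],   # sign
    [2,  0, -1],   # standard 2-dimensional
]
sizes = [1, 3, 2]
order = sum(sizes)   # |S_3| = 6

def column_product(K, L):
    # sum_i conj(chi_K^(i)) chi_L^(i); all entries here are real
    return sum(row[K] * row[L] for row in table)
```

For K = L the products come out to 6, 2, and 3 — exactly \lvert G\rvert/\lvert K\rvert for class sizes 1, 3, and 2.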

We can use these relations to help fill out character tables. For instance, let’s consider the character table of S_3, starting from the first two rows:

\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^{(1)}&1&1&1\\\chi^{(2)}&1&-1&1\\\chi^{(3)}&a&b&c\end{array}
where we know that the third row must exist for the character table to be square. Now our new orthogonality relations tell us on the first column that

\displaystyle1^2+1^2+a^2=\frac{6}{1}=6

and so a^2=4.
Since a=\chi^{(3)}(e), it is a dimension, and must be positive. That is, a=2. On the second column we see that

\displaystyle1^2+(-1)^2+b^2=\frac{6}{3}=2
and so we must have b=0. Finally on the third column we see that

\displaystyle1^2+1^2+c^2=\frac{6}{2}=3
so c=\pm1.

To tell the difference, we can use the new orthogonality relations on the first and third or second and third columns, or the old ones on the first and third or second and third rows. Any of them will tell us that c=-1, and we’ve completed the character table without worrying about constructing any representations at all.

We should take note here that the conjugacy classes index one orthonormal basis of the space of class functions, and the irreducible representations index another. Since all bases of any given vector space have the same cardinality, the set of conjugacy classes and the set of irreducible representations have the same number of elements. However, there is no reason to believe that there is any particular correspondence between the elements of the two sets. And in general there isn’t any, but we will see that in the case of symmetric groups there is a way of making just such a correspondence.

November 22, 2010 Posted by | Algebra, Group theory, Representation Theory | Leave a comment

The Character Table is Square

We’ve defined the character table of a group, and we’ve seen that it must be finite. Specifically, it cannot have any more rows — G cannot have any more irreducible representations — than there are conjugacy classes in G. Now we can show that there are always exactly as many irreducible representations as there are conjugacy classes in G.

We recall that for any representation V the dimension of the center of the endomorphism algebra, \dim\left(Z_{\mathrm{End}_G(V)}\right), is equal to the number of distinct irreducible representations that show up in V. In particular, since we know that every irreducible representation shows up in the left regular representation \mathbb{C}[G], the number of irreducible representations is k=\dim\left(Z_{\mathrm{End}_G(\mathbb{C}[G])}\right). Thus to calculate this number k, we must understand the structure of the endomorphism algebra and its center.

But we just saw that \mathrm{End}_G(\mathbb{C}[G]) is anti-isomorphic to \mathbb{C}[G] as algebras, and this anti-isomorphism induces an anti-isomorphism on their centers. In particular, their centers have the same dimension. That is:

\displaystyle k=\dim\left(Z_{\mathrm{End}_G(\mathbb{C}[G])}\right)=\dim\left(Z_{\mathbb{C}[G]}\right)

So what does a central element of the group algebra look like? Let z be such a central element and write it out as

\displaystyle z=\sum\limits_{g\in G}c_gg

Now since z is central, it must commute with every other element of the group algebra. In particular, for every h\in G we have zh=hz, or z=hzh^{-1}. That is:

\displaystyle\sum\limits_{g\in G}c_gg=z=hzh^{-1}=\sum\limits_{g\in G}c_ghgh^{-1}

Since z is invariant, the coefficients c_g and c_{hgh^{-1}} must be the same. But as h runs over G, hgh^{-1} runs over the conjugacy class of g, so the coefficients must be the same for all elements in the conjugacy class. That is, we have exactly as many free parameters when building z as there are conjugacy classes in G — one for each of them.
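We can watch this argument happen in code. The following Python sketch (the setup and names are mine) computes the conjugacy classes of S_3, forms the corresponding “class sums” in the group algebra — one free parameter per class, set to 1 — and checks that each one commutes with every group element.

```python
from itertools import permutations

def compose(p, q):                   # apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))

def conjugacy_classes(G):
    classes, seen = [], set()
    for g in G:
        if g in seen:
            continue
        cls = frozenset(compose(compose(h, g), inverse(h)) for h in G)
        classes.append(cls)
        seen |= cls
    return classes

def alg_mul(a, b):                   # product in the group algebra (dicts g -> coeff)
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return out

classes = conjugacy_classes(G)
class_sums = [{g: 1 for g in cls} for cls in classes]
```

There are three classes, matching the three irreducible representations of S_3, and every class sum is central.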

So we’ve established that the center of the group algebra has dimension equal to the number of conjugacy classes in G. We also know that this is the same as the dimension of the center of the endomorphism algebra of the left regular representation. Finally, we know that this is the same as the number of distinct irreducible representations that show up in the decomposition of the left regular representation. And so we conclude that any finite group G must have exactly as many irreducible representations as it has conjugacy classes. Since the conjugacy classes index the columns of the character table of G, and the irreducible characters index the rows, we conclude that the character table is always square.

As a quick corollary, we find that the irreducible characters span a subspace of the space of class functions with dimension equal to the number of conjugacy classes in G. Since this is the dimension of the whole space of class functions, the irreducible characters must form an orthonormal basis of this space.

November 19, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments

The Endomorphism Algebra of the Left Regular Representation

Since the left regular representation is such an interesting one — in particular since it contains all the irreducible representations — we want to understand its endomorphisms. That is, we want to understand the structure of \mathrm{End}_G(\mathbb{C}[G]). I say that, amazingly enough, it is anti-isomorphic to the group algebra \mathbb{C}[G] itself!

So let’s try to come up with an anti-isomorphism \mathbb{C}[G]\to\mathrm{End}_G(\mathbb{C}[G]). Given any element v\in\mathbb{C}[G], we define the map \phi_v:\mathbb{C}[G]\to\mathbb{C}[G] to be right-multiplication by v. That is:

\displaystyle\phi_v(w)=wv
for every w\in\mathbb{C}[G]. This is a G-endomorphism, since G acts by multiplication on the left, and left-multiplication commutes with right-multiplication.

To see that it’s an anti-homomorphism, we must check that it’s linear and that it reverses the order of multiplication. Linearity is straightforward; as for reversing multiplication, we calculate:

\displaystyle\phi_{vw}(u)=u(vw)=(uv)w=\phi_w(uv)=\phi_w\left(\phi_v(u)\right)=\left[\phi_w\circ\phi_v\right](u)
Next we check that v\mapsto\phi_v is injective by calculating its kernel. If \phi_v=0 then

\displaystyle0=\phi_v(1)=1v=v
so this is only possible if v=0.

Finally we must check surjectivity. Say \theta\in\mathrm{End}_G(\mathbb{C}[G]), and define v=\theta(1). I say that \theta=\phi_v, since

\displaystyle\theta(g)=\theta(g1)=g\theta(1)=gv=\phi_v(g)
Since the two G-endomorphisms are equal on the standard basis of \mathbb{C}[G], they are equal. Thus, every G-endomorphism of the left regular representation is of the form \phi_v for some v\in\mathbb{C}[G].
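The reversal of multiplication is easy to test numerically. Here is a Python sketch (the algebra elements v, w, u are arbitrary examples of mine) representing elements of \mathbb{C}[S_3] as dictionaries from permutations to coefficients, with \phi_v given by right multiplication; the check \phi_{vw}=\phi_w\circ\phi_v is exactly associativity u(vw)=(uv)w.

```python
from itertools import permutations

def compose(p, q):                   # apply q first, then p
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))

def alg_mul(a, b):                   # product in the group algebra C[G]
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return out

def phi(v):                          # phi_v: right multiplication by v
    return lambda w: alg_mul(w, v)

# sample elements of the group algebra (my own arbitrary choices)
v = {(1, 0, 2): 2, (0, 1, 2): 1}
w = {(1, 2, 0): 1, (1, 0, 2): -1}
u = {(2, 1, 0): 3}
```

Evaluating \phi_v at the identity basis vector also recovers v itself, which is the injectivity argument in miniature.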

November 18, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Decomposing the Left Regular Representation

Let’s take the left regular representation of a finite group G on its group algebra \mathbb{C}[G] and decompose it into irreducible representations.

Our first step is to compute the character of \mathbb{C}[G] as a left G-module. The nice thing here is that it’s a permutation representation, and that means we have a shortcut to calculating its character: \chi(g) is the number of fixed points of the action of g on the standard basis of \mathbb{C}[G]. That is, it counts the number of h\in G with gh=h. But this can only happen if g is the group identity, and in that case every element is a fixed point. Thus we conclude

\displaystyle\begin{aligned}\chi(e)&=\lvert G\rvert\\\chi(g)&=0\qquad g\neq e\end{aligned}

Now let V be any irreducible representation of G, with character \chi_V. We know that the multiplicity of V in \mathbb{C}[G] is given by the inner product \langle\chi_V,\chi\rangle. This, we can calculate:

\displaystyle\begin{aligned}\langle \chi_V,\chi\rangle&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi_V(g)}\chi(g)\\&=\frac{1}{\lvert G\rvert}\overline{\chi_V(e)}\lvert G\rvert\\&=\dim(V)\end{aligned}

where in the last line we use the fact that evaluating the character of any representation at the identity element gives the degree of that representation.

So, what does this tell us? Every irreducible representation V shows up in \mathbb{C}[G] with a multiplicity equal to its degree. In particular, it must show up at least once. That is, the left regular representation contains all the irreducible representations.
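The multiplicity computation is easy to replicate for S_3. This sketch (the table values are the standard character table of S_3; the code and names are mine) takes the inner product of each irreducible character with the regular character and recovers the degrees 1, 1, and 2.

```python
# Class data for S_3: classes [identity, transpositions, 3-cycles]
sizes = [1, 3, 2]
order = 6
irreducibles = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}
chi_regular = [order, 0, 0]    # |G| at the identity, 0 elsewhere

def inner(a, b):               # <a, b> as a sum over classes (characters here are real)
    return sum(s * a[k] * b[k] for k, s in enumerate(sizes)) / order

multiplicities = {name: inner(chi, chi_regular)
                  for name, chi in irreducibles.items()}
```

The sum of the squared degrees — 1 + 1 + 4 — comes out to 6, the order of S_3, as the dimension count below demands.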

Thus if V^{(i)} are the k irreducible representations of G, we have a decomposition:

\displaystyle\mathbb{C}[G]\cong\bigoplus\limits_{i=1}^k\dim\left(V^{(i)}\right)V^{(i)}
Taking dimensions on either side, we find

\displaystyle\lvert G\rvert=\sum\limits_{i=1}^k\dim\left(V^{(i)}\right)\dim\left(V^{(i)}\right)=\sum\limits_{i=1}^k\dim\left(V^{(i)}\right)^2

We can check this in the case of S_3 and S_4, since we have complete character tables for both of them:

\displaystyle\begin{aligned}1^2+1^2+2^2&=6=\lvert S_3\rvert\\1^2+1^2+2^2+3^2+3^2&=24=\lvert S_4\rvert\end{aligned}
November 17, 2010 Posted by | Algebra, Group theory, Representation Theory | 5 Comments

The Dimension of the Space of Tensors Over the Group Algebra

Now we can return to the space of tensor products over the group algebra and take a more solid pass at calculating its dimension. Key to this approach will be the isomorphism V\otimes_GW\cong(V\otimes W)^G.

First off, we want to calculate the character of V\otimes W. If V — as a left G-module — has character \chi and W has character \psi, then we know that the inner tensor product has character

\displaystyle\left(\chi\otimes\psi\right)(g)=\chi(g)\psi(g)
Next, we recall that the submodule of invariants (V\otimes W)^G can be written as

\displaystyle(V\otimes W)^G\cong V^\mathrm{triv}\otimes\hom_G(V^\mathrm{triv},V\otimes W)

Now, we know that \dim(V^\mathrm{triv})=1, and thus the dimension of our space of invariants is the dimension of the \hom space. We’ve seen that this is the multiplicity of the trivial representation in V\otimes W, which we’ve also seen is the inner product \langle\chi^\mathrm{triv},\chi\otimes\psi\rangle. We calculate:

\displaystyle\begin{aligned}\langle\chi^\mathrm{triv},\chi\otimes\psi\rangle&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi^\mathrm{triv}(g)}\chi(g)\psi(g)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\chi(g)\psi(g)\end{aligned}

This may not be as straightforward and generic a result as the last one, but it’s at least easily calculated for any given pair of modules V and W.
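Indeed, here is what that calculation looks like for S_3 in Python (a sketch of mine, using the standard character values): for the two-dimensional standard representation V, the space V\otimes_GV turns out to be one-dimensional.

```python
# Characters of S_3-modules by class [identity, transpositions, 3-cycles]
sizes = [1, 3, 2]
order = 6
chi_standard = [2, 0, -1]      # the 2-dimensional standard representation

def tensor_dim(chi, psi):      # dim(V ⊗_G W) = (1/|G|) sum_g chi(g) psi(g)
    return sum(s * chi[k] * psi[k] for k, s in enumerate(sizes)) / order

d = tensor_dim(chi_standard, chi_standard)
```

Trying other pairs is instructive too: the trivial module tensored with itself gives dimension 1, while trivial tensored with sign gives dimension 0.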

November 16, 2010 Posted by | Algebra, Group theory, Representation Theory | Leave a comment

Tensors Over the Group Algebra are Invariants

It turns out that we can view the space of tensors over a group algebra as a subspace of invariants of the space of all tensors. That is, if V_G is a right G-module and {}_GW is a left G-module, then V\otimes_G W is a subspace of V\otimes W.

To see this, first we’ll want to turn V into a left G-module by defining

\displaystyle g\cdot v=vg^{-1}

We can check that this is a left action:

\displaystyle\begin{aligned}g\cdot(h\cdot v)&=g\cdot(vh^{-1})\\&=vh^{-1}g^{-1}\\&=v(gh)^{-1}\\&=(gh)\cdot v\end{aligned}

The trick is that moving from a right to a left action reverses the order of composition, and changing from a group element to its inverse reverses the order again.

So now that we have two left actions by G, we can take the outer tensor product, which carries an action by G\times G. Then we pass to the inner tensor product, acting on each tensorand by the same group element. To be more explicit:

g\cdot(v\otimes w)=(vg^{-1})\otimes(gw)

Now, I say that being invariant under this action of G is equivalent to the new relation that holds for tensors over a group algebra. Indeed, if (vg)\otimes w is invariant, then

\displaystyle(vg)\otimes w=(vgg^{-1})\otimes(gw)=v\otimes(gw)

Similarly, if we apply this action to a tensor product over the group algebra we find

\displaystyle g\cdot(v\otimes w)=(vg^{-1})\otimes(gw)=v\otimes(g^{-1}gw)=v\otimes w

so this action is trivial.

Now, we’ve been playing it sort of fast and loose here. We originally got the space V\otimes_GW by adding new relations to the space V\otimes W, and normally adding new relations to an algebraic object gives a quotient object. But when it comes to vector spaces and modules over finite groups, we’ve seen that quotient objects and subobjects are the same thing.

We can get a more explicit description to verify this equivalence by projecting onto the invariants. Given a tensor v\otimes w\in V\otimes_GW, we consider it instead as a tensor in V\otimes W. Now, this is far from unique, since many equivalent tensors over the group algebra correspond to different tensors in V\otimes W. But next we project to the invariant

\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(vg^{-1})\otimes(gw)

Now I say that any two equivalent tensors in V\otimes_GW are sent to the same invariant tensor in (V\otimes W)^G. We check the images of (vg)\otimes w and v\otimes(gw):

\displaystyle\begin{aligned}\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}((vg)h^{-1})\otimes(hw)&=\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}(v(gh^{-1}))\otimes((hg^{-1}g)w)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{k\in G}(vk^{-1})\otimes(k(gw))\end{aligned}

To invert this process, we just consider an invariant tensor v\otimes w as a tensor in V\otimes_GW. The “fast and loose” proof above will suffice to show that this is a well defined map (V\otimes W)^G\to V\otimes_GW. To see it’s an inverse, take the forward image and apply the relation we get from moving it back to V\otimes_GW:

\displaystyle\begin{aligned}\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(vg^{-1})\otimes(gw)&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\otimes(g^{-1}gw)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\otimes w\\&=v\otimes w\end{aligned}

And so we’ve established the isomorphism V\otimes_GW\cong(V\otimes W)^G, as desired.

November 15, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Projecting Onto Invariants

Given a G-module V, we can find the G-submodule V^G of G-invariant vectors. It’s not just a submodule, but it’s a direct summand. Thus not only does it come with an inclusion mapping V^G\to V, but there must be a projection V\to V^G. That is, there’s a linear map that takes a vector and returns a G-invariant vector, and further if the vector is already G-invariant it is left alone.

Well, we know that it exists, but it turns out that we can describe it rather explicitly. The projection from vectors to G-invariant vectors is exactly the “averaging” procedure we ran into (with a slight variation) when proving Maschke’s theorem. We’ll describe it in general, and then come back to see how it applies in that case.

Given a vector v\in V, we define

\displaystyle\bar{v}=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv

This is clearly a linear operation. I say that \bar{v} is invariant under the action of G. Indeed, given g'\in G we calculate

\displaystyle\begin{aligned}g'\bar{v}&=g'\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(g'g)v\\&=\bar{v}\end{aligned}

since as g ranges over G, so does g'g, albeit in a different order. Further, if v is already G-invariant, then we find

\displaystyle\begin{aligned}\bar{v}&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\\&=v\end{aligned}

so this is indeed the projection we’re looking for.
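Here is the averaging projection in action for the permutation representation of S_3 on \mathbb{C}^3 (a Python sketch with my own names): the average of any vector lands on the invariant “diagonal” line, and averaging an already-invariant vector changes nothing.

```python
from itertools import permutations

G = list(permutations(range(3)))     # S_3 permuting coordinates of C^3

def act(p, v):                       # move the entry in slot i to slot p[i]
    out = [0.0] * len(v)
    for i, vi in enumerate(v):
        out[p[i]] = vi
    return out

def average(v):                      # the projection v -> (1/|G|) sum_g gv
    acc = [0.0] * len(v)
    for p in G:
        w = act(p, v)
        acc = [a + b for a, b in zip(acc, w)]
    return [a / len(G) for a in acc]

v_bar = average([1.0, 2.0, 3.0])
```

Each coordinate of the average is the mean of the original coordinates, which is exactly the invariant line in this representation.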

Now, how does this apply to Maschke’s theorem? Well, given a G-module V, the collection of sesquilinear forms on the underlying space V forms a vector space itself. Indeed, such forms correspond to complex matrices, which form a vector space. Anyway, rather than write the usual angle-brackets, we will write one of these forms as a sesquilinear function B:V\times V\to\mathbb{C}.

Now I say that the space of forms carries an action from the right by G. Indeed, we can define

\displaystyle\left[B\cdot g\right](v,w)=B(gv,gw)
It’s straightforward to verify that this is a right action by G. So, how do we “average” the form to get a G-invariant form? We define

\displaystyle\bar{B}(v,w)=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}B(gv,gw)

which — other than the factor of \frac{1}{\lvert G\rvert} — is exactly how we came up with a G-invariant form in the proof of Maschke’s theorem!
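We can also average a form directly, as a small Python check (the form B below is an arbitrary non-invariant example of mine): starting from B(v,w)=v_0w_0, the averaged form is G-invariant even though B itself is not.

```python
from itertools import permutations

G = list(permutations(range(3)))     # S_3 permuting coordinates

def act(p, v):                       # move the entry in slot i to slot p[i]
    out = [0.0] * len(v)
    for i, vi in enumerate(v):
        out[p[i]] = vi
    return out

def B(v, w):                         # a non-invariant form, to be averaged
    return v[0] * w[0]

def B_bar(v, w):                     # the averaged, G-invariant form
    return sum(B(act(g, v), act(g, w)) for g in G) / len(G)
```

For this example the average works out to a multiple of the standard dot product — which is, of course, invariant under permutation matrices.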

November 13, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

