The Unapologetic Mathematician

Mathematics for the interested outsider

Projecting Onto Invariants

Given a G-module V, we can find the G-submodule V^G of G-invariant vectors. It’s not just a submodule; it’s a direct summand. Thus not only does it come with an inclusion mapping V^G\to V, but there must also be a projection V\to V^G. That is, there’s a linear map that takes a vector and returns a G-invariant vector; further, if the vector is already G-invariant, it is left alone.

Well, we know that it exists, but it turns out that we can describe it rather explicitly. The projection from vectors to G-invariant vectors is exactly the “averaging” procedure we ran into (with a slight variation) when proving Maschke’s theorem. We’ll describe it in general, and then come back to see how it applies in that case.

Given a vector v\in V, we define

\displaystyle\bar{v}=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv

This is clearly a linear operation. I say that \bar{v} is invariant under the action of G. Indeed, given g'\in G we calculate

\displaystyle\begin{aligned}g'\bar{v}&=g'\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}(g'g)v\\&=\bar{v}\end{aligned}

since as g ranges over G, so does g'g, albeit in a different order. Further, if v is already G-invariant, then we find

\displaystyle\begin{aligned}\bar{v}&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}gv\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}v\\&=v\end{aligned}

so this is indeed the projection we’re looking for.
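
A quick numerical sketch (an assumed example, not from the post): let S_3 act on \mathbb{C}^3 by permuting coordinates. The invariants are the constant vectors, and the averaging formula projects onto them.

```python
from itertools import permutations

import numpy as np

def perm_matrix(p):
    """Matrix sending the basis vector e_i to e_{p(i)}."""
    n = len(p)
    M = np.zeros((n, n))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

G = list(permutations(range(3)))  # the group S_3

def average(v):
    """The projection v -> (1/|G|) sum_g g.v onto the invariants."""
    return sum(perm_matrix(p) @ v for p in G) / len(G)

v = np.array([1.0, 2.0, 6.0])
v_bar = average(v)

# v_bar is G-invariant: every group element fixes it...
assert all(np.allclose(perm_matrix(p) @ v_bar, v_bar) for p in G)
# ...and averaging an already-invariant vector leaves it alone.
assert np.allclose(average(v_bar), v_bar)
```

Here `v_bar` comes out as the constant vector with each entry equal to the mean of the entries of `v`, which is exactly the projection onto the invariant line.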

Now, how does this apply to Maschke’s theorem? Well, given a G-module V, the collection of sesquilinear forms on the underlying space V forms a vector space itself. Indeed, such forms correspond to complex matrices, which form a vector space. Anyway, rather than write the usual angle-brackets, we will write one of these forms as a function B:V\times V\to\mathbb{C}.

Now I say that the space of forms carries an action from the right by G. Indeed, we can define

\displaystyle\left[B\cdot g\right](v,w)=B(gv,gw)

It’s straightforward to verify that this is a right action by G. So, how do we “average” the form to get a G-invariant form? We define

\displaystyle\bar{B}(v,w)=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}B(gv,gw)

which — other than the factor of \frac{1}{\lvert G\rvert} — is exactly how we came up with a G-invariant form in the proof of Maschke’s theorem!
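
The same averaging can be carried out on the matrix of a form. A sketch (an assumed setup, not from the post: working over the reals for simplicity, with S_3 permuting the coordinates of the underlying space):

```python
from itertools import permutations

import numpy as np

def perm_matrix(p):
    n = len(p)
    M = np.zeros((n, n))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

G = [perm_matrix(p) for p in permutations(range(3))]  # S_3 as matrices

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))  # matrix of an arbitrary form B(v, w) = v^T A w

# B-bar(v, w) = (1/|G|) sum_g B(gv, gw), i.e. A-bar = (1/|G|) sum_g g^T A g
A_bar = sum(g.T @ A @ g for g in G) / len(G)

# Invariance: B-bar(gv, gw) = B-bar(v, w) for every g
assert all(np.allclose(g.T @ A_bar @ g, A_bar) for g in G)
```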


November 13, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Group Invariants

Again, my apologies. What with yesterday’s cooking, I forgot to post this yesterday. I’ll have another in the evening.

Let V be a representation of a finite group G, with finite dimension d_V. We can decompose V into blocks — one for each irreducible representation of G:

\displaystyle V\cong\bigoplus\limits_{i=1}^kV^{(i)}\otimes\hom_G(V^{(i)},V)

We’re particularly concerned with one of these blocks, which we can construct for any group G. Every group has a trivial representation V^\mathrm{triv}, and so we can always come up with the space of “invariants” of G:

\displaystyle V^G=V^\mathrm{triv}\otimes\hom_G(V^\mathrm{triv},V)

We call these invariants, because these are the v\in V so that gv=v for all g\in G. Technically, this is a G-module — actually a G-submodule of V — but the action of G is trivial, so it feels slightly pointless to consider it as a module at all.

On the other hand, any “plain” vector space can be considered as if it were carrying the trivial action of G. Indeed, if W has dimension d_W, then we can say it’s the direct sum of d_W copies of the trivial representation. Since the trivial character takes the constant value 1, the character of this representation takes the constant value d_W. And so it really does make sense to consider it as the “number” d_W, just like we’ve been doing.

We’ve actually already seen this sort of subspace before. Given two left G-modules {}_GV and {}_GW, we can set up the space of linear maps \hom(V,W) between the underlying vector spaces. This construction doesn’t use up either group action, and so each one leaves a residual action on the space of linear maps. That is, we have two actions by G on \hom(V,W), one from the left and one from the right.

Now just like we found with inner tensor products, we can combine these two actions of G into one. Now we have one left action of G on linear maps by conjugation: (g,f)\mapsto g\cdot f, defined by

\displaystyle[g\cdot f](v)=gf(g^{-1}v)

Just in case, we check that

\displaystyle\begin{aligned}\left[g\cdot(h\cdot f)\right](v)&=g\left[h\cdot f\right](g^{-1}v)\\&=g(hf(h^{-1}(g^{-1}v)))\\&=(gh)f((h^{-1}g^{-1})v)\\&=(gh)f((gh)^{-1}v)\\&=\left[(gh)\cdot f\right](v)\end{aligned}

so this is, indeed, a representation. And what are the invariants of this representation? They’re exactly those linear maps f:V\to W such that

\displaystyle gf(g^{-1}v)=f(v)

for all v\in V and g\in G. Equivalently, the condition is that

\displaystyle gf(v)=f(gv)

and so f must be an intertwinor. We conclude that

\displaystyle\hom(V,W)^G=\hom_G(V,W)

That is: the space of linear maps from V to W that are invariant under the conjugation action of G is exactly the space of G-morphisms between the two G-modules.
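
This can be checked numerically: averaging an arbitrary linear map over the conjugation action lands in the invariants, i.e. produces an intertwinor. A sketch (an assumed example: both V and W are the permutation representation of S_3):

```python
from itertools import permutations

import numpy as np

def perm_matrix(p):
    n = len(p)
    M = np.zeros((n, n))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

G = [perm_matrix(p) for p in permutations(range(3))]

rng = np.random.default_rng(1)
f = rng.normal(size=(3, 3))  # an arbitrary linear map V -> W

# Project onto the invariants of the conjugation action:
# (1/|G|) sum_g g f g^{-1}; for permutation matrices g^{-1} = g^T.
f_bar = sum(g @ f @ g.T for g in G) / len(G)

# f_bar intertwines the two actions: g f_bar = f_bar g for all g
assert all(np.allclose(g @ f_bar, f_bar @ g) for g in G)
```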

November 12, 2010 Posted by | Algebra, Group theory, Representation Theory | 3 Comments

Subspaces from Irreducible Representations

Because of Maschke’s theorem we know that every representation of a finite group G can be decomposed into chunks that correspond to irreducible representations:

\displaystyle V\cong\bigoplus\limits_{i=1}^km_iV^{(i)}

where the V^{(i)} are pairwise-inequivalent irreps. But our consequences of orthonormality prove that there can be only finitely many such inequivalent irreps. So we may as well say that k is the number of them and let a multiplicity m_i be zero if V^{(i)} doesn’t show up in V at all.

Now there’s one part of this setup that’s a little less than satisfying. For now, let’s say that V is an irrep itself, and let m be a natural number for its multiplicity. We’ve been considering the representation

\displaystyle mV=\bigoplus\limits_{i=1}^mV

made up of the direct sum of m copies of V. But this leaves some impression that these copies of V actually exist in some sense inside the sum. In fact, though inequivalent irreps stay distinct, equivalent ones lose their separate identities in the sum. Indeed, we’ve seen that

\displaystyle\hom_G(V,mV)\cong\mathbb{C}^m

That is, we can find a copy of V lying “across” all m copies in the sum in all sorts of different ways. The identified copies are like the basis vectors in an m-dimensional vector space — they hardly account for all the vectors in the space.

We need a more satisfactory way of describing this space. And it turns out that we have one:

\displaystyle mV=\bigoplus\limits_{i=1}^mV\cong V\otimes\mathbb{C}^m

Here, the tensor product is over the base field \mathbb{C}, so the “extra action” by G on V makes this into a G-module as well.

This actually makes sense, because as we pass from representations to their characters, we also pass from “plain” vector spaces to their dimensions, and from tensor products to regular products. Thus at the level of characters this says that adding m copies of an irreducible character together gives the same result as multiplying it by m, which is obviously true. So since the two sides have the same characters, they contain the same number of copies of the same irreps, and so they are isomorphic as asserted.

Actually, any vector space of dimension m will do in the place of \mathbb{C}^m here. And we have one immediately at hand: \hom_G(V,mV) itself. That is, if V is an irreducible representation then we have an isomorphism:

\displaystyle mV\cong V\otimes\hom_G(V,mV)

As an example, if V is any representation and V^{(i)} is any irrep, then we find

\displaystyle m_iV^{(i)}\cong V^{(i)}\otimes\hom_G(V^{(i)},V)

We can reassemble these subspaces to find

\displaystyle V\cong\bigoplus\limits_{i=1}^km_iV^{(i)}\cong\bigoplus\limits_{i=1}^kV^{(i)}\otimes\hom_G(V^{(i)},V)

Notice that this extends our analogy between \hom spaces and inner products. Indeed, if we have an orthonormal basis \{e_i\} of a vector space of dimension k, we can decompose any vector as

\displaystyle v=\sum\limits_{i=1}^ke_i\langle e_i,v\rangle
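
A quick numerical sketch of this last decomposition (an assumed example, not from the post):

```python
import numpy as np

rng = np.random.default_rng(2)
# QR factorization of a random matrix gives an orthonormal basis of R^4
basis, _ = np.linalg.qr(rng.normal(size=(4, 4)))
e = [basis[:, i] for i in range(4)]

v = rng.normal(size=4)
# Expand v over the basis and reassemble it from the coefficients <e_i, v>
reconstructed = sum(ei * np.dot(ei, v) for ei in e)
assert np.allclose(reconstructed, v)
```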

November 10, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments

Tensor Products over Group Algebras

So far, we’ve been taking all our tensor products over the complex numbers, since everything in sight has been a vector space over that field. But remember that a representation of G is a module over the group algebra \mathbb{C}[G], and we can take tensor products over this algebra as well.

More specifically, if V_G is a right G-module, and {}_GW is a left G-module, then we have a plain vector space V\otimes_GW. We build it just like the regular tensor product V\otimes W, but we add new relations of the form

\displaystyle(vg)\otimes w=v\otimes(gw)

That is, in the tensor product over \mathbb{C}[G], we can pull actions by g from one side to the other.

If V or W have extra group actions, they pass to the tensor product. For instance, if {}_HV_G is a left H-module as well as a right G-module, then we can define

\displaystyle h(v\otimes w)=(hv)\otimes w

Similarly, if V_{GH} has an additional right action by H, then so does V\otimes_GW, and the same goes for extra actions on W. Similar to the way that hom spaces over G “eat up” an action of G on each argument, the tensor product \otimes_G “eats up” a right action by G on its left argument and a left action by G on its right argument.

We can try to work out the dimension of this space. Let’s say that we have decompositions

\displaystyle\begin{aligned}V_G&=V^{(1)}_G\oplus\dots\oplus V^{(m)}_G\\{}_GW&={}_GW^{(1)}\oplus\dots\oplus{}_GW^{(n)}\end{aligned}

into irreducible representations (possibly with repetitions). As usual for tensor products, the operation is additive, just like we saw for \hom spaces. That is

\displaystyle V\otimes_GW\cong\bigoplus\limits_{i=1}^m\bigoplus\limits_{j=1}^nV^{(i)}\otimes_GW^{(j)}

So we really just need to understand the dimension of one of these summands. Let’s say V is irreducible with dimension d_V and W is irreducible with dimension d_W.

Now, we can pick any vector v\in V and hit it with every group element g\in G. These vectors span some subspace of V (in fact a submodule), which (since V is irreducible) is either trivial or all of V. But it can’t be trivial, since it contains v itself, and so it must be all of V. That is, given any vector v'\in V we can find some element of the group algebra g'\in\mathbb{C}[G] so that v'=vg'. But then for any w\in W we have

\displaystyle v'\otimes w=(vg')\otimes w=v\otimes(g'w)

That is, every tensor can be written with v as the first tensorand. Does this mean that \dim(V\otimes_GW)=\dim(W)? Not quite, since this expression might not be unique. For every element of the group algebra that sends v back to itself, we have a different expression.

So how many of these are there? Well, we have a linear map \mathbb{C}[G]\to V that sends g\in\mathbb{C}[G] to vg\in V. We know that this is onto, so the dimension of the image is d_V. The dimension of the source is \dim(\mathbb{C}[G])=\lvert G\rvert, and so the rank-nullity theorem tells us that the dimension of the kernel — the space of algebra elements that send v to zero — is \lvert G\rvert-d_V.

So we should be able to subtract this off from the dimension of the tensor product, due to redundancies. Assuming that this works as expected, we get \dim(V\otimes_GW)=d_V+d_W-\lvert G\rvert, which at least is symmetric between V and W as expected. But it still feels sort of like we’re getting away with something here. We’ll come back to find a more satisfying proof soon.

November 9, 2010 Posted by | Algebra, Group theory, Representation Theory | 6 Comments

The Character Table of S4

Let’s use our inner tensor products to fill in the character table of S_4. We start by listing out the conjugacy classes along with their sizes:

\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\text{size}&1&6&3&8&6\end{array}

Now we have the same three representations as in the character table of S_3: the trivial, the signum, and the complement of the signum in the defining representation. Let’s write what we have.

\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\mathrm{triv}&1&1&1&1&1\\\chi^\mathrm{sgn}&1&-1&1&1&-1\\\chi^\perp&3&1&-1&0&-1\end{array}

Just to check, we calculate

\displaystyle\langle\chi^\perp,\chi^\perp\rangle=\frac{1}{24}\left(1\cdot3^2+6\cdot1^2+3\cdot(-1)^2+8\cdot0^2+6\cdot(-1)^2\right)=\frac{24}{24}=1

so again, \chi^\perp is irreducible.

But now we can calculate the inner tensor product of \mathrm{sgn} and \chi^\perp. This gives us a new line in the character table:

\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\mathrm{sgn}\otimes\chi^\perp&3&-1&-1&0&1\end{array}

which we can easily check to be irreducible.

Next, we can form the tensor product \chi^\perp\otimes\chi^\perp, which has values

\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\perp\otimes\chi^\perp&9&1&1&0&1\end{array}

Now, this isn’t irreducible, but we can calculate inner products with the existing irreducible characters and decompose it as

\displaystyle\chi^\perp\otimes\chi^\perp=\chi^\mathrm{triv}+\chi^\perp+\left(\chi^\mathrm{sgn}\otimes\chi^\perp\right)+\chi^{(5)}

where \chi^{(5)} is what’s left after subtracting the other three characters. This gives us one more line in the character table:

\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^{(5)}&2&0&2&-1&0\end{array}

and we check that

\displaystyle\langle\chi^{(5)},\chi^{(5)}\rangle=\frac{1}{24}\left(1\cdot2^2+6\cdot0^2+3\cdot2^2+8\cdot(-1)^2+6\cdot0^2\right)=\frac{24}{24}=1

so \chi^{(5)} is irreducible as well.

Now, we haven’t actually exhibited these representations explicitly, but there is no obstacle to carrying out the usual calculations. Matrix representations for V^\mathrm{triv} and V^\mathrm{sgn} are obvious. A matrix representation for V^\perp comes just as in the case of S_3 by finding a basis for the defining representation that separates out the copy of V^\mathrm{triv} inside it. Finally, we can calculate the Kronecker product of these matrices with themselves to get a representation corresponding to \chi^\perp\otimes\chi^\perp, and then find a basis that allows us to split off copies of V^\mathrm{triv}, V^\perp, and V^\mathrm{sgn}\otimes V^\perp.
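
Since everything here is concrete, the character arithmetic above is easy to verify by brute force over all 24 permutations. A sketch (the variable names are my own):

```python
from itertools import permutations

G = list(permutations(range(4)))  # S_4, |G| = 24

def sgn(p):
    """Sign of a permutation, via counting inversions."""
    n = len(p)
    return (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

def inner(chi, psi):
    # All the characters here are real, so no conjugation is needed
    return sum(chi(g) * psi(g) for g in G) / len(G)

triv    = lambda g: 1
fixed   = lambda g: sum(g[i] == i for i in range(4))  # defining character
perp    = lambda g: fixed(g) - 1                      # chi^perp
sgnperp = lambda g: sgn(g) * perp(g)                  # chi^sgn (x) chi^perp
sq      = lambda g: perp(g) ** 2                      # chi^perp (x) chi^perp

# chi^perp and chi^sgn (x) chi^perp are irreducible
assert inner(perp, perp) == 1
assert inner(sgnperp, sgnperp) == 1

# The square decomposes as triv + chi^perp + (sgn (x) chi^perp) + chi^(5)
assert inner(sq, triv) == 1 and inner(sq, sgn) == 0
assert inner(sq, perp) == 1 and inner(sq, sgnperp) == 1

chi5 = lambda g: sq(g) - triv(g) - perp(g) - sgnperp(g)
assert inner(chi5, chi5) == 1       # chi^(5) is irreducible too
assert chi5(tuple(range(4))) == 2   # ...of degree 2
```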

November 8, 2010 Posted by | Algebra, Group theory, Representation Theory | 4 Comments

Inner Tensor Products

Let’s say we have two left G-modules — {}_GV and {}_GW — and form their outer tensor product {}_{GG}V\otimes W. Note that this carries two distinct actions of the same group G, and these two actions commute with each other. That is, V\otimes W carries a representation of the product group G\times G. This representation is a homomorphism \rho\otimes\sigma:G\times G\to\mathrm{End}(V\otimes W).

It turns out that we actually have another homomorphism \Delta:G\to G\times G, given by \Delta(g)=(g,g). Indeed, we check that

\displaystyle\Delta(g_1)\Delta(g_2)=(g_1,g_1)(g_2,g_2)=(g_1g_2,g_1g_2)=\Delta(g_1g_2)

If we compose this with the representing homomorphism, we get a homomorphism (\rho\otimes\sigma)\circ\Delta:G\to\mathrm{End}(V\otimes W). That is, V\otimes W actually carries a representation of G itself!

We’ve actually seen this before, a long while back. The group algebra \mathbb{C}[G] is an example of a bialgebra, with the map \Delta serving as a “comultiplication”. If this seems complicated, don’t worry about it. The upshot is that this “inner” tensor product V\otimes W, considered as a representation of G, behaves as a monoidal product in the category of G-modules. Sometimes we will write V\hat{\otimes}W (and similar notations) when we need to distinguish the inner tensor product from the outer one, but often we can just let context handle it.

Luckily, characters are just as well-behaved for inner tensor products. Indeed, we can check that

\displaystyle\chi_{V\hat{\otimes}W}(g)=\mathrm{tr}\left(\rho(g)\otimes\sigma(g)\right)=\mathrm{tr}\left(\rho(g)\right)\mathrm{tr}\left(\sigma(g)\right)=\chi_V(g)\chi_W(g)

However, unlike for outer tensor products the inner tensor product of two irreducible representations is not, in general, itself irreducible. Indeed, we can look at the character table for S_3 and consider the inner tensor product of two copies of V^\perp.

What we just proved above tells us that the character of V^\perp\hat{\otimes}V^\perp is \left(\chi^\perp\right)^2. This takes the values

\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\left(\chi^\perp\right)^2&4&0&1\end{array}

which does not occur as a line in the character table, and thus cannot be an irreducible character. Indeed, calculating inner products, we find

\displaystyle\begin{aligned}\left\langle\left(\chi^\perp\right)^2,\chi^\mathrm{triv}\right\rangle&=\frac{1}{6}\left(1\cdot4+3\cdot0+2\cdot1\right)=1\\\left\langle\left(\chi^\perp\right)^2,\chi^\mathrm{sgn}\right\rangle&=\frac{1}{6}\left(1\cdot4+3\cdot0+2\cdot1\right)=1\\\left\langle\left(\chi^\perp\right)^2,\chi^\perp\right\rangle&=\frac{1}{6}\left(1\cdot4\cdot2+3\cdot0+2\cdot1\cdot(-1)\right)=1\end{aligned}

And so we find that

\displaystyle\left(\chi^\perp\right)^2=\chi^\mathrm{triv}+\chi^\mathrm{sgn}+\chi^\perp

which means that

\displaystyle V^\perp\hat{\otimes}V^\perp\cong V^\mathrm{triv}\oplus V^\mathrm{sgn}\oplus V^\perp
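
This decomposition is easy to confirm by brute force over the six elements of S_3. A sketch (names are my own):

```python
from itertools import permutations

G = list(permutations(range(3)))  # S_3

def sgn(p):
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

def inner(chi, psi):
    return sum(chi(g) * psi(g) for g in G) / len(G)  # all values here are real

triv = lambda g: 1
perp = lambda g: sum(g[i] == i for i in range(3)) - 1  # chi^perp: 2, 0, -1
sq   = lambda g: perp(g) ** 2                          # (chi^perp)^2: 4, 0, 1

# The square contains each irreducible character of S_3 exactly once...
assert inner(sq, triv) == 1
assert inner(sq, sgn) == 1
assert inner(sq, perp) == 1
# ...and nothing more, since its squared norm is 1 + 1 + 1
assert inner(sq, sq) == 3
```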

November 5, 2010 Posted by | Algebra, Group theory, Representation Theory | 4 Comments

Outer Tensor Products

Let’s say we have two finite groups G and H, and we have (left) representations of each one: {}_GV and {}_HW. It turns out that the tensor product V\otimes W naturally carries a representation of the product group G\times H. Equivalently, it carries a representation of each of G and H, and these representations commute with each other. In our module notation, we write {}_{GH}(V\otimes W).

The action is simple enough. Any vector in V\otimes W can be written (not always uniquely) as a sum of vectors of the form v\otimes w. We let G act on the first “tensorand”, let H act on the second, and extend by linearity. That is:

\displaystyle\begin{aligned}g(v\otimes w)&=(gv\otimes w)\\h(v\otimes w)&=v\otimes(hw)\end{aligned}

and the action of either g or h on the sum of two tensors is the sum of their actions on each of the tensors.

Now, might the way we write the sum make a difference? No, because all the relations look like

  • (v_1+v_2)\otimes w=v_1\otimes w+v_2\otimes w
  • v\otimes(w_1+w_2)=v\otimes w_1+v\otimes w_2
  • (cv)\otimes w=v\otimes(cw)

where in the last equation c is a complex constant. Now, we can check that the actions of g and h give equivalent results on either side of each equation. For instance, acting by g in the first equation we see

\displaystyle\begin{aligned}g((v_1+v_2)\otimes w)&=(g(v_1+v_2))\otimes w\\&=(gv_1+gv_2)\otimes w\\&=(gv_1)\otimes w+(gv_2)\otimes w\\&=g(v_1\otimes w)+g(v_2\otimes w)\end{aligned}

just as we want. All the other relations are easy enough to check.

But do the actions of G and H commute with each other? Indeed, we calculate

\displaystyle\begin{aligned}gh(v\otimes w)&=g(v\otimes(hw))\\&=(gv)\otimes(hw)\\&=h((gv)\otimes w)\\&=hg(v\otimes w)\end{aligned}

So we really do have a representation of the product group.

We have similar “outer” tensor products for other combinations of left and right representations:

\displaystyle\begin{aligned}({}_GV)\otimes(W_H)&={}_G(V\otimes W)_H\\(V_G)\otimes({}_HW)&={}_H(V\otimes W)_G\\(V_G)\otimes(W_H)&=(V\otimes W)_{GH}\end{aligned}

Now, let’s try to compute the character of this representation. If we write the representing homomorphisms \rho:G\to\mathrm{End}(V) and \sigma:H\to\mathrm{End}(W), then we get a representing homomorphism \rho\otimes\sigma:G\times H\to\mathrm{End}(V\otimes W). And this is given by

\displaystyle\left[\rho\otimes\sigma\right](g,h)=\rho(g)\otimes\sigma(h)

Indeed, this is exactly the endomorphism of V\otimes W that applies \rho(g) to V and applies \sigma(h) to W, just as we want. And we know that when expressed in matrix form, the tensor product of linear maps becomes the Kronecker product of matrices. We write the character of \rho as \chi, that of \sigma as \psi, and that of their tensor product as \chi\otimes\psi, and calculate:

\displaystyle\begin{aligned}\left[\chi\otimes\psi\right](g,h)&=\mathrm{tr}\left(\rho(g)\otimes\sigma(h)\right)\\&=\mathrm{tr}\left(\rho(g)\right)\mathrm{tr}\left(\sigma(h)\right)\\&=\chi(g)\psi(h)\end{aligned}

That is, the character of the tensor product representation is the product of the characters of the two representations.

Finally, if both \rho and \sigma are irreducible, then the tensor product \rho\otimes\sigma is as well. We calculate:

\displaystyle\begin{aligned}\langle\chi_1\otimes\psi_1,\chi_2\otimes\psi_2\rangle&=\frac{1}{\lvert G\times H\rvert}\sum\limits_{(g,h)\in G\times H}\overline{\chi_1\otimes\psi_1(g,h)}\chi_2\otimes\psi_2(g,h)\\&=\frac{1}{\lvert G\rvert}\frac{1}{\lvert H\rvert}\sum\limits_{g\in G}\sum\limits_{h\in H}\overline{\chi_1(g)}\overline{\psi_1(h)}\chi_2(g)\psi_2(h)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi_1(g)}\chi_2(g)\frac{1}{\lvert H\rvert}\sum\limits_{h\in H}\overline{\psi_1(h)}\psi_2(h)\\&=\langle\chi_1,\chi_2\rangle\langle\psi_1,\psi_2\rangle\end{aligned}

In particular, we find that \langle\chi\otimes\psi,\chi\otimes\psi\rangle=\langle\chi,\chi\rangle\langle\psi,\psi\rangle. If \chi and \psi are both irreducible characters, then our character properties tell us that both of the inner products on the right are 1, and we conclude that the inner product on the left is as well, which means that \chi\otimes\psi is irreducible.
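
A small numerical sketch of this multiplicativity (an assumed example: \chi^\perp on S_3 paired with a one-dimensional character of \mathbb{Z}/4):

```python
from itertools import permutations, product

G = list(permutations(range(3)))  # S_3
H = list(range(4))                # Z/4, written additively

chi = lambda g: sum(g[i] == i for i in range(3)) - 1  # irreducible chi^perp of S_3
psi = lambda h: 1j ** h                               # an irreducible character of Z/4

def sq_norm(f, S):
    """<f, f> = (1/|S|) sum_s |f(s)|^2."""
    return sum(abs(f(s)) ** 2 for s in S) / len(S)

tensor = lambda g, h: chi(g) * psi(h)  # the outer tensor product character

lhs = sum(abs(tensor(g, h)) ** 2 for g, h in product(G, H)) / (len(G) * len(H))
assert abs(lhs - sq_norm(chi, G) * sq_norm(psi, H)) < 1e-12
assert abs(lhs - 1) < 1e-12  # irreducible (x) irreducible is irreducible
```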

November 4, 2010 Posted by | Algebra, Group theory, Representation Theory | 3 Comments

Left and Right Morphism Spaces

One more explicit parallel between left and right representations: we have morphisms between right G-modules just like we had between left G-modules. I won’t really go into the details — they’re pretty straightforward — but it’s helpful to see how our notation works.

In the case of left modules, we had a vector space \hom_G({}_GV,{}_GW). Now in the case of right modules we also have a vector space \hom_G(V_G,W_G). We use the same notation \hom_G in both cases, and rely on context to tell us whether we’re talking about right or left module morphisms. In a sense, applying \hom_G “eats up” an action of G on each module, and on the same side.

We can see this even more clearly when we add another action to one of the modules. Let’s say that V carries a left action of G — we write {}_GV — and W carries commuting left actions of both G and another group H — we write {}_{GH}W. I say that there is a “residual” left action of H on the space of left G-module morphisms. That is, the space \hom_G({}_GV,{}_{GH}W) “eats up” an action of G on each module, and it leaves the left action of H behind.

So, how could H act on the space of morphisms? Well, let f:V\to W be an intertwinor of the G actions, and let any h\in H act on f by sending it to hf, defined by \left[hf\right](v)=hf(v). That is, hf:V\to W first sends a vector v to a vector f(v), and then h acts on the left to give a new vector hf(v). We must check that this is an intertwinor, and not just a linear map from V to W. For any g\in G, we calculate

\displaystyle\begin{aligned}\left[hf\right](gv)&=hf(gv)\\&=hgf(v)\\&=ghf(v)\\&=g\left[hf\right](v)\end{aligned}

using first the fact that f is an intertwinor, and then the fact that the action of H commutes with that of G to pull g all the way out to the left.

Similarly, if H is an extra action on the right of W, we have a “residual” right action on the space \hom_G({}_GV,{}_GW_H). And the same goes for right G-modules: we have a residual left action of H on \hom_G(V_G,{}_HW_G), and a residual right action on \hom_G(V_G,W_{GH}).

It’s a little more complicated when we have extra commuting actions on V. The complication is connected to the fact that the hom functor is contravariant in its first argument, which if you don’t know much about category theory you don’t need to care about. The important thing is that if V has an extra left action of H, then the space \hom_G({}_{GH}V,{}_GW) will have a residual right action of H.

In this case, given a map f intertwining the G actions, we define fh by \left[fh\right](v)=f(hv). We should verify that this is, indeed, a right action:

\displaystyle\begin{aligned}\left[(fh_1)h_2\right](v)&=\left[fh_1\right](h_2v)\\&=f(h_1(h_2v))\\&=f((h_1h_2)v)\\&=\left[f(h_1h_2)\right](v)\end{aligned}

using the fact that H acts on the left on V. Again, we must verify that fh is actually another intertwinor:

\displaystyle\begin{aligned}\left[fh\right](gv)&=f(h(gv))\\&=f(g(hv))\\&=gf(hv)\\&=g\left[fh\right](v)\end{aligned}

using the fact that f is an intertwinor and the actions of G and H on V commute.

Similarly, we find a residual right action on the space \hom_G({}_HV_G,W_G), and residual left actions on the spaces \hom_G({}_GV_H,{}_GW) and \hom_G(V_{GH},W_G).

November 3, 2010 Posted by | Algebra, Group theory, Representation Theory | 2 Comments

Right Representations

In our discussions of representations so far we’ve been predominantly concerned with actions on the left. That is, we have a map G\times V\to V, linear in V, that satisfies the relation g_1(g_2v)=(g_1g_2)v. That is, the action of the product of two group elements is the composition of their actions.

But sometimes we’re interested in actions on the right. This is almost exactly the same, but with a map V\times G\to V, again linear in V, and this time the relation reads (vg_1)g_2=v(g_1g_2). Again, the action of the product of two group elements is the composition of their actions, but now in the opposite order! Before we first acted by g_2 and then by g_1, but now we act first by g_1 and then by g_2. And so instead of a homomorphism G\to\mathrm{End}(V), we have an anti-homomorphism — a map from one group to another that reverses the order of multiplication.

We can extend the notation from last time. If the space V carries a right representation of a group G, then we hang a tag on the right: V_G. If we have an action by another group H on the right that commutes with the action of G, we write V_{GH}. And if H instead acts on the left, we write {}_HV_G. Again, this can be read as a pair of commuting actions, or as a left action of H on the right G-module V_G, or as a right action of G on the left G-module {}_HV.

Pretty much everything we’ve discussed moves over to right representations without much trouble. On the occasions we’ll really need them I’ll clarify if there’s anything tricky, but I don’t want to waste a lot of time redoing everything. One exception that I will mention right away is the right regular representation, which (predictably enough) corresponds to the left regular representation. In fact, when I introduced that representation I even mentioned the right action in passing. At the time, I said that we can turn the natural antihomomorphism into a homomorphism by right-multiplying by the inverse of the group element. But if we’re willing to think of a right action on its own terms, we no longer need that trick.

So the group algebra \mathbb{C}[G] — here considered just as a vector space — carries the left regular representation. The left action of a group element h on a basis vector \mathbf{g} is the basis vector \mathbf{hg}. It also carries the right regular representation. The right action of a group element k on a basis vector \mathbf{g} is the basis vector \mathbf{gk}. And it turns out that these two actions commute! Indeed, we can check

\displaystyle h(\mathbf{g}k)=h\mathbf{gk}=\mathbf{hgk}=(\mathbf{hg})k=(h\mathbf{g})k

This might seem a little confusing at first, but remember that when h shows up plain on the left it means the group element h acting on the vector to its right. When it shows up in a boldface expression, that expression describes a basis vector in \mathbb{C}[G]. Overall, this tells us that we can start with the basis vector \mathbf{g} and act first on the left by h and then on the right by k, or we can act first on the right by k and then on the left by h. Either way, we end up with the basis vector \mathbf{hgk}, which means that these two actions commute. Using our tags, we can thus write {}_G\mathbb{C}[G]_G.
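
The bookkeeping here is easy to check by machine. A sketch (an assumed encoding, not from the post) that realizes both regular actions of S_3 as maps on the basis indices of \mathbb{C}[G]:

```python
from itertools import permutations

G = list(permutations(range(3)))  # S_3

def compose(p, q):
    """The product pq in S_3: (pq)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

# Basis vectors e_g of C[G] are indexed by group elements, so each
# regular action is just a map on indices.
left  = lambda h, g: compose(h, g)   # h . e_g = e_{hg}
right = lambda k, g: compose(g, k)   # e_g . k = e_{gk}

# (h . e_g) . k = e_{hgk} = h . (e_g . k) for all h, g, k
assert all(right(k, left(h, g)) == left(h, right(k, g))
           for h in G for g in G for k in G)
```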

November 2, 2010 Posted by | Algebra, Group theory, Representation Theory | 1 Comment

Representing Product Groups

An important construction for groups is their direct product. Given two groups G and H we take the cartesian product of their underlying sets G\times H and put a group structure on it by multiplying component-by-component. That is, the product of (g_1,h_1) and (g_2,h_2) is (g_1g_2,h_1h_2). Representations of product groups aren’t really any different from those of any other group, but we have certain ways of viewing them that will come in handy.

The thing to notice is that we have copies of both G and H inside G\times H. Indeed, we can embed G into G\times H by sending g to (g,e_H), which clearly preserves the multiplication. Similarly, the map h\mapsto(e_G,h) embeds H into G\times H. The essential thing here is that the transformations coming from G and those coming from H commute with each other. Indeed, we calculate

\displaystyle(g,e_H)(e_G,h)=(ge_G,e_Hh)=(g,h)=(e_Gg,he_H)=(e_G,h)(g,e_H)

Also, every transformation in G\times H is the product of one from G and one from H.

The upshot is that a representation of G\times H on a space V provides us with a representation of G on the space V, and also one of H. Further, every transformation in the representation of G must commute with every transformation in the representation of H. Conversely, if we have a representation of each factor group on the same space V, then we have a representation of the product group, but only if all the transformations in each representation commute with all the transformations in the other.

So what can we do with this? Well, it turns out that it’s pretty common to have two separate group actions on the same module, and to have these two actions commute with each other like this. Whenever this happens we can think of it as a representation of the product group, or as two commuting representations.

In fact, there’s another way of looking at it: remember that a representation of a group G on a space V can be regarded as a module for the group algebra \mathbb{C}[G]. If we then add a commuting representation of a group H, we can actually regard it as a representation on the G-module instead of just the underlying vector space. That is, instead of just having a homomorphism H\to\mathrm{End}(V) that sends each element of H to a linear endomorphism of V, we actually get a homomorphism H\to\mathrm{End}_G(V) that sends each element of H to a G-module endomorphism of V.

Indeed, let’s write our action of G with the group homomorphism \rho and our action of H with the group homomorphism \sigma. Now, I’m asserting that each \sigma(h) is an intertwinor for the action of G. This means that for each g\in G, it satisfies the equation \rho(g)\sigma(h)=\sigma(h)\rho(g). But this is exactly what it means for the two representations to commute!

Some notation will be helpful here. If the vector space V carries a representation of a group G, we can hang a little tag on it to remind ourselves of this, writing {}_GV. That is, {}_GV is a G-module, and not just a vector space. If we now add a new representation of a group H that commutes with the original representation, we just add a new tag: {}_{HG}V. Of course, the order of the tags doesn’t really matter, so we could just as well write {}_{GH}V. Either way, this means that we have a representation of G\times H on V.
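
As a concrete sketch (an assumed example, not from the post): S_3 permuting the coordinates of \mathbb{C}^3 commutes with \mathbb{Z}/2 acting by an overall sign, so together they give a representation of the product group:

```python
from itertools import permutations

import numpy as np

def perm_matrix(p):
    """Matrix sending e_i to e_{p(i)}."""
    n = len(p)
    M = np.zeros((n, n))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

def compose(p, q):
    """The product in S_3: (pq)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

rho   = {p: perm_matrix(p) for p in permutations(range(3))}  # S_3 permutes coordinates
sigma = {0: np.eye(3), 1: -np.eye(3)}                        # Z/2 flips an overall sign

# Each sigma(h) is an intertwinor for the S_3 action, and vice versa
assert all(np.allclose(rho[g] @ sigma[h], sigma[h] @ rho[g])
           for g in rho for h in sigma)

# So (g, h) -> rho(g) sigma(h) represents the product group S_3 x Z/2:
prod_rep = lambda g, h: rho[g] @ sigma[h]
g1, g2 = (1, 0, 2), (0, 2, 1)
assert np.allclose(prod_rep(g1, 1) @ prod_rep(g2, 1),
                   prod_rep(compose(g1, g2), 0))  # since 1 + 1 = 0 in Z/2
```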

November 1, 2010 Posted by | Algebra, Group theory, Representation Theory | 3 Comments