The Unapologetic Mathematician

Mathematics for the interested outsider

Lifting and Descending Representations

Let’s recall that a group representation is, among other things, a group homomorphism. This has a few consequences.

First of all, we can consider the kernel of a matrix representation X. This is not the kernel we’ve talked about recently, which is the kernel of a G-morphism. This is the kernel of a group homomorphism. In this context, it’s the collection of group elements g\in G so that the image X(g) is the identity transformation. We call this subgroup N\subseteq G. If N is the trivial subgroup, we say that X is a “faithful” representation, since it doesn’t send any two group elements to the same matrix.

Now, basic group theory tells us that N is a normal subgroup, and so we can form the quotient group G/N. I say that the representation X “descends” to a representation of this quotient group. That is, we can define a representation Y by Y(gN)=X(g) for all cosets gN\in G/N. We have to check that this doesn’t depend on which representative g of the coset we choose, but any other one looks like g'=gn for n\in N. Then X(g')=X(g)X(n)=X(g), since X(n) is the identity matrix by the definition of N, and so Y is well-defined.

I say that Y is a faithful representation. That is, the only coset that Y sends to the identity matrix is the one containing the identity: N itself. And indeed, if Y(gN)=I, then X(g)=I, and so g\in N and gN=N in the first place.

Next, Y is irreducible if and only if X is. We’ll prove this by using the properties we proved a few days ago. In particular, we’ll calculate the inner product of the character of Y with itself. Writing \psi for the character of Y and \chi for that of X, we find that

\displaystyle\psi(gN)=\mathrm{Tr}\left(Y(gN)\right)=\mathrm{Tr}\left(X(g)\right)=\chi(g)

and we use this to calculate:

\displaystyle\begin{aligned}\langle\psi,\psi\rangle&=\frac{1}{\lvert G/N\rvert}\sum\limits_{gN\in G/N}\overline{\psi(gN)}\psi(gN)\\&=\frac{\lvert N\rvert}{\lvert G\rvert}\sum\limits_{gN\in G/N}\overline{\chi(g)}\chi(g)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{gN\in G/N}\sum\limits_{n\in N}\overline{\chi(gn)}\chi(gn)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi(g)}\chi(g)\\&=\langle\chi,\chi\rangle\end{aligned}

Essentially, the idea is that each group element in the kernel N contributes the same to the sum, and this is exactly compensated for by the difference between the sizes of G and G/N. Since the two inner products are the same, either both are 1 or neither one is, and so either both representations are irreducible or neither one is.
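The claim is easy to test concretely. Here is a quick check in Python (a sketch of my own, not from the original post; all names are mine), descending the sign representation of S_3, whose kernel is A_3, and comparing the two inner products:

```python
from itertools import permutations

def sign(p):
    # sign of a permutation via its inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))       # the six elements of S_3
N = [p for p in G if sign(p) == 1]     # kernel of the sign character: A_3
cosets = [frozenset(N), frozenset(p for p in G if sign(p) == -1)]

# the descent is well-defined: sign is constant on each coset
assert all(len({sign(p) for p in c}) == 1 for c in cosets)
Y = {c: sign(next(iter(c))) for c in cosets}

chi_sq = sum(sign(g) * sign(g) for g in G) / len(G)      # <chi,chi> over G
psi_sq = sum(Y[c] * Y[c] for c in cosets) / len(cosets)  # <psi,psi> over G/N
print(chi_sq, psi_sq)  # 1.0 1.0 -- both irreducible, as the argument predicts
```

The two inner products agree, just as the compensation between \lvert N\rvert and \lvert G\rvert predicts.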

We can also run this process in reverse: let G be a finite group, let N be any normal subgroup so we have the quotient group G/N, and let Y be any representation of G/N. We will use this to define a representation of the original group G by “lifting” the representation Y.

So, the obvious choice is to define X(g)=Y(gN). This time there’s no question that X is well-defined, since here we start with a group element and find its coset, rather than starting with a coset and picking a representative element. And indeed, this is easily verified to be a representation.

If Y is faithful, then the kernel of X is exactly N. On one hand, every n\in N has X(n)=Y(nN)=Y(N)=I. On the other, if X(g)=I, then Y(gN)=I; since Y is faithful we must have gN=N, and thus g\in N.

And finally, X is irreducible if and only if Y is. The proof runs exactly as it did before.

October 29, 2010 | Algebra, Group theory, Representation Theory

An Alternative Path

It turns out that our efforts last time were somewhat unnecessary, although they were instructive. Actually, we already had a matrix representation in our hands that would have done the trick.

The secret is to look at the block upper-triangular form from when we defined reducibility:

\displaystyle\left(\begin{array}{c|c}X(g)&Y(g)\\\hline0&Z(g)\end{array}\right)
We worked this out for the product of two group elements, finding

\displaystyle\left(\begin{array}{c|c}X(gh)&Y(gh)\\\hline0&Z(gh)\end{array}\right)=\left(\begin{array}{c|c}X(g)X(h)&X(g)Y(h)+Y(g)Z(h)\\\hline0&Z(g)Z(h)\end{array}\right)
We focused before on the upper-left corner to see that X was a subrepresentation, but we see that Z(gh)=Z(g)Z(h) as well. The thing is, if the overall representation acts on V and X acts on the submodule W, then Z acts on the quotient space V/W, which is not, in general, a submodule of V. However, it so happens that over a finite group we have V\cong W\oplus V/W as modules. That is, if we have a matrix representation that looks like the one above, then we can find a different basis that makes it look like

\displaystyle\left(\begin{array}{c|c}X(g)&0\\\hline0&Z(g)\end{array}\right)
This is more than Maschke’s theorem tells us — not only do we have a decomposition, but we have one that uses the exact same matrix representation in the lower right as the original one. Proving this will reprove Maschke’s theorem, and in a way that works over any field!

So, let’s look for a change-of-basis matrix that’s partitioned the same way:

\displaystyle P=\left(\begin{array}{c|c}I&D\\\hline0&I\end{array}\right)
Multiplying this out, we compare

\displaystyle P\left(\begin{array}{c|c}X(g)&Y(g)\\\hline0&Z(g)\end{array}\right)=\left(\begin{array}{c|c}X(g)&Y(g)+DZ(g)\\\hline0&Z(g)\end{array}\right)

with

\displaystyle\left(\begin{array}{c|c}X(g)&0\\\hline0&Z(g)\end{array}\right)P=\left(\begin{array}{c|c}X(g)&X(g)D\\\hline0&Z(g)\end{array}\right)

Equating the upper-right blocks gives us the equation Y(g)+DZ(g)=X(g)D, which we rearrange to give

\displaystyle X(g)DZ\left(g^{-1}\right)=D-X(g)Y\left(g^{-1}\right)

as well as

\displaystyle X(g)DZ\left(g^{-1}\right)=D+Y(g)Z\left(g^{-1}\right)

That is, acting on the left of D by X(g) and on the right by Z\left(g^{-1}\right) doesn’t leave D unchanged, but instead adds a certain offset. We’re not looking for an invariant of these actions, but something close. Incidentally, why are these two offsets the same? Well, if we put them together we find

\displaystyle X(g)Y\left(g^{-1}\right)+Y(g)Z\left(g^{-1}\right)=Y\left(gg^{-1}\right)=Y(e)

which must be zero, since the identity matrix has nothing in its upper-right block, as desired.

Anyway, I say that things will work out if we choose

\displaystyle D=\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}X\left(h^{-1}\right)Y(h)

Indeed, we calculate

\displaystyle\begin{aligned}X(g)DZ\left(g^{-1}\right)&=X(g)\left(\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}X\left(h^{-1}\right)Y(h)\right)Z\left(g^{-1}\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}X\left(gh^{-1}\right)Y(h)Z\left(g^{-1}\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}X\left(f^{-1}\right)Y(fg)Z\left(g^{-1}\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}X\left(f^{-1}\right)\left(X(f)Y(g)+Y(f)Z(g)\right)Z\left(g^{-1}\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}\left(Y(g)Z\left(g^{-1}\right)+X\left(f^{-1}\right)Y(f)\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}Y(g)Z\left(g^{-1}\right)+\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}X\left(f^{-1}\right)Y(f)\\&=Y(g)Z\left(g^{-1}\right)+D\end{aligned}

Just as we wanted.

Notice that just like our original proof of Maschke’s theorem, this depends on a sum that is only finite if G is a finite group.
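We can also watch the formula work numerically. Here is a sketch (my own, using numpy; not part of the original post) that puts the defining representation of S_3 into the block form above using the basis \{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\} — this concrete choice of representation and basis is an assumption of the sketch — computes D by averaging, and verifies the equation Y(g)+DZ(g)=X(g)D:

```python
# A numerical sketch (not from the post): put the defining representation
# of S_3 into block-upper-triangular form in the basis {1+2+3, 2, 3},
# then check that D = (1/|G|) sum_h X(h^{-1}) Y(h) solves Y(g)+DZ(g)=X(g)D.
import numpy as np
from itertools import permutations

def perm_matrix(p):
    # standard-basis matrix of a permutation: column j holds e_{p[j]}
    n = len(p)
    M = np.zeros((n, n))
    for j in range(n):
        M[p[j], j] = 1
    return M

def inv(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# columns of P are the new basis vectors 1+2+3, 2, 3
P = np.array([[1., 0., 0.], [1., 1., 0.], [1., 0., 1.]])
Pinv = np.linalg.inv(P)
G = list(permutations(range(3)))
reps = {g: Pinv @ perm_matrix(g) @ P for g in G}

X = {g: reps[g][:1, :1] for g in G}  # 1x1 upper-left block
Y = {g: reps[g][:1, 1:] for g in G}  # 1x2 upper-right block
Z = {g: reps[g][1:, 1:] for g in G}  # 2x2 lower-right block

D = sum(X[inv(h)] @ Y[h] for h in G) / len(G)
ok = all(np.allclose(Y[g] + D @ Z[g], X[g] @ D) for g in G)
print(ok)  # True
```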

October 28, 2010 | Algebra, Group theory, Representation Theory

One Complete Character Table (part 2)

Last time we wrote down the complete character table of S_3:

\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi^\perp&2&0&-1\end{array}
which is all well and good except we haven’t actually seen a representation with the last line as its character!

So where did we get the last line? We had the equation \chi^\mathrm{def}=\chi^\mathrm{triv}+\chi^\perp, which involves the characters of the defining representation V^\mathrm{def} and the trivial representation V^\mathrm{triv}. This equation should correspond to an isomorphism V^\mathrm{def}\cong V^\mathrm{triv}\oplus V^\perp.

We know that there’s a copy of the trivial representation as a submodule of the defining representation. If we use the standard basis \{\mathbf{1},\mathbf{2},\mathbf{3}\} of V^\mathrm{def}, this submodule is the line spanned by the vector \mathbf{1}+\mathbf{2}+\mathbf{3}. We even worked out the defining representation in terms of the basis \{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\} to show that it’s reducible.

But what we want is a complementary subspace which is also G-invariant. And we can find such a complement if we have a G-invariant inner product on our space. And, luckily enough, permutation representations admit a very nice invariant inner product! Indeed, just take the inner product that arises by declaring the standard basis to be orthonormal; it’s easy to see that this is invariant under the action of G.

So we need to take our basis \{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\} and change the second and third members to make them orthogonal to the first one. Then they will span the orthogonal complement, which we will show to be G-invariant. The easiest way to do this is to use \{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2}-\mathbf{1},\mathbf{3}-\mathbf{1}\}. Then we can calculate the action of each permutation in terms of this basis. For example, the transposition (1\,2) sends

\displaystyle\begin{aligned}\mathbf{1}+\mathbf{2}+\mathbf{3}&\mapsto\mathbf{2}+\mathbf{1}+\mathbf{3}=\mathbf{1}+\mathbf{2}+\mathbf{3}\\\mathbf{2}-\mathbf{1}&\mapsto\mathbf{1}-\mathbf{2}=-(\mathbf{2}-\mathbf{1})\\\mathbf{3}-\mathbf{1}&\mapsto\mathbf{3}-\mathbf{2}=(\mathbf{3}-\mathbf{1})-(\mathbf{2}-\mathbf{1})\end{aligned}
and write out all the representing matrices in terms of this basis:

\displaystyle\begin{aligned}e&\mapsto\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}&(1\,2)&\mapsto\begin{pmatrix}1&0&0\\0&-1&-1\\0&0&1\end{pmatrix}&(1\,3)&\mapsto\begin{pmatrix}1&0&0\\0&1&0\\0&-1&-1\end{pmatrix}\\(2\,3)&\mapsto\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}&(1\,2\,3)&\mapsto\begin{pmatrix}1&0&0\\0&-1&-1\\0&1&0\end{pmatrix}&(1\,3\,2)&\mapsto\begin{pmatrix}1&0&0\\0&0&1\\0&-1&-1\end{pmatrix}\end{aligned}
These all have the required form:

\displaystyle\left(\begin{array}{c|cc}1&0&0\\\hline0&\ast&\ast\\0&\ast&\ast\end{array}\right)
where the 1 in the upper-left is the trivial representation and the 2\times 2 block in the lower right is exactly the other representation V^\perp we’ve been looking for! Indeed, we can check the values of the character:

\displaystyle\chi^\perp(e)=2\qquad\chi^\perp\left((1\,2)\right)=0\qquad\chi^\perp\left((1\,2\,3)\right)=-1
exactly as the character table predicted.
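The whole computation is easy to replicate numerically. The following sketch (mine, in numpy, not from the post; the names are my own) conjugates the permutation matrices into the basis \{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2}-\mathbf{1},\mathbf{3}-\mathbf{1}\} and reads the character off the lower-right 2\times 2 blocks:

```python
# A sketch (not from the post): conjugate the permutation matrices of S_3
# into the basis {1+2+3, 2-1, 3-1} and read off the character of the
# 2x2 lower-right block on class representatives.
import numpy as np
from itertools import permutations

def perm_matrix(p):
    n = len(p)
    M = np.zeros((n, n))
    for j in range(n):
        M[p[j], j] = 1
    return M

# columns of P are 1+2+3, 2-1, 3-1 written in the standard basis
P = np.array([[1., -1., -1.], [1., 1., 0.], [1., 0., 1.]])
Pinv = np.linalg.inv(P)
G = list(permutations(range(3)))
blocks = {g: (Pinv @ perm_matrix(g) @ P)[1:, 1:] for g in G}

# 0-indexed representatives of the classes of e, (1 2), and (1 2 3)
reps_of_classes = [(0, 1, 2), (1, 0, 2), (1, 2, 0)]
chi_perp = [round(np.trace(blocks[g])) for g in reps_of_classes]
print(chi_perp)  # [2, 0, -1]
```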

October 27, 2010 | Algebra, Group theory, Representation Theory

One Complete Character Table (part 1)

When we first defined the character table of a group, we closed by starting to write down the character table of S_3:

\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline V^\mathrm{triv}&1&1&1\\V^\mathrm{sgn}&1&-1&1\\\vdots&\vdots&\vdots&\vdots\end{array}
Now let’s take the consequences we just worked out of the orthonormality relations and finish the job.

We’ve already verified that the two characters we know of are orthonormal, and we know that there can be at most one more, which would make the character table look like:

\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi^{(3)}&?&?&?\end{array}
Do we have any other representations of S_3 to work with? Well, there’s the defining representation. This has a character we can specify by the three values

\displaystyle\chi^\mathrm{def}(e)=3\qquad\chi^\mathrm{def}\left((1\,2)\right)=1\qquad\chi^\mathrm{def}\left((1\,2\,3)\right)=0
We calculate the multiplicities of the two characters we know by taking inner products:

\displaystyle\begin{aligned}\left\langle\chi^\mathrm{triv},\chi^\mathrm{def}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot3+3\cdot1\cdot1+2\cdot1\cdot0\right)=1\\\left\langle\chi^\mathrm{sgn},\chi^\mathrm{def}\right\rangle&=\frac{1}{6}\left(1\cdot1\cdot3+3\cdot(-1)\cdot1+2\cdot1\cdot0\right)=0\end{aligned}
That is, the defining representation contains one copy of the trivial representation and no copies of the signum representation. In fact, we already knew about the copy of the trivial representation, but it’s nice to see it confirmed again. Subtracting it off, we’re left with a residual character \chi^\perp=\chi^\mathrm{def}-\chi^\mathrm{triv}:

\displaystyle\chi^\perp(e)=2\qquad\chi^\perp\left((1\,2)\right)=0\qquad\chi^\perp\left((1\,2\,3)\right)=-1
Now this character might itself decompose, or it might be irreducible. We can check by calculating its inner product with itself:

\displaystyle\left\langle\chi^\perp,\chi^\perp\right\rangle=\frac{1}{6}\left(1\cdot2^2+3\cdot0^2+2\cdot(-1)^2\right)=\frac{6}{6}=1
which confirms that \chi^\perp is irreducible. Thus we can write down the character table of S_3 as

\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\\\chi^\perp&2&0&-1\end{array}
So, why is this just part 1? Well, we’ve calculated another character, but we still haven’t actually shown that there’s any irrep that gives rise to this character. We have a pretty good idea what it should be, but next time we’ll actually show that it exists, and it really does have the character \chi^\perp.
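All of the arithmetic in this post fits in a few lines of Python (a sketch of my own; the class sizes and character values are the ones tabulated above, and the variable names are mine):

```python
# The arithmetic of this post as a script (a sketch; class sizes and
# character values are the ones established above).
sizes = [1, 3, 2]         # sizes of the classes K_1, K_2, K_3
order = sum(sizes)        # |S_3| = 6
triv = [1, 1, 1]
sgn  = [1, -1, 1]
deff = [3, 1, 0]          # defining character: fixed-point counts

def ip(a, b):
    # class-weighted inner product; these characters are all real
    return sum(s * x * y for s, x, y in zip(sizes, a, b)) / order

m_triv = ip(triv, deff)   # multiplicity of the trivial character
m_sgn = ip(sgn, deff)     # multiplicity of the signum character
perp = [d - t for d, t in zip(deff, triv)]   # chi_def - chi_triv
print(m_triv, m_sgn, perp, ip(perp, perp))   # 1.0 0.0 [2, 0, -1] 1.0
```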

October 26, 2010 | Algebra, Group theory, Representation Theory

Consequences of Orthogonality

We have some immediate consequences of the orthonormality relations.

First of all, since irreducible characters are orthonormal any collection of them forms an orthonormal basis of the subspace it spans. Of course whatever subspace this is, it has to fit within the space of class functions, and so it can’t have any more basis elements than the dimension of this larger space. That is, there can be at most as many irreducible characters as there are conjugacy classes in G. And so we know that the character table must have only finitely many rows. For instance, since S_3 has three conjugacy classes, it can have at most three irreducible characters. We know two already, so there’s only room for one more, if there are any more at all.

For something a little more concrete, let V^{(i)} be a collection of irreps with corresponding characters \chi^{(i)}. Then the representation

\displaystyle V=\bigoplus\limits_{i=1}^km_iV^{(i)}

has character

\displaystyle\chi=\sum\limits_{i=1}^km_i\chi^{(i)}
That is, direct sums of representations correspond to sums of characters. This is just the tip of a far-reaching correspondence between the high-level properties of the category of representations and the low-level properties of the algebra of characters.

Anyway, proving this relation is pretty straightforward. If A and B are two matrix representations then their direct sum is

\displaystyle\left[A\oplus B\right](g)=\left(\begin{array}{c|c}A(g)&0\\\hline0&B(g)\end{array}\right)

It should be clear that the trace of the direct sum on the left is the sum of the traces on the right. This is all we need, since we can just split off one irreducible component after another to turn the direct sum on one side into a sum on the other.
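A tiny numerical illustration of that trace additivity (my own sketch, not from the post):

```python
# A tiny illustration (a sketch): the trace of a block-diagonal direct
# sum is the sum of the traces of the blocks.
import numpy as np

A = np.array([[0., -1.], [1., 0.]])
B = np.array([[2., 1.], [0., 3.]])
direct_sum = np.block([[A, np.zeros((2, 2))],
                       [np.zeros((2, 2)), B]])
print(np.trace(direct_sum), np.trace(A) + np.trace(B))  # 5.0 5.0
```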

Next we have a way of reading off the coefficients. Let V be the same representation from above, with the same character. I say that the multiplicity m_j=\langle\chi^{(j)},\chi\rangle. Indeed, we can easily calculate

\displaystyle\left\langle\chi^{(j)},\chi\right\rangle=\left\langle\chi^{(j)},\sum\limits_{i=1}^km_i\chi^{(i)}\right\rangle=\sum\limits_{i=1}^km_i\left\langle\chi^{(j)},\chi^{(i)}\right\rangle=m_j
Notice that this is very similar to the result we showed at the end of calculating the dimensions of spaces of morphisms. This is not a coincidence.

More generally, if V is the representation from above and W is another representation that decomposes as

\displaystyle W=\bigoplus\limits_{j=1}^kn_jV^{(j)}

then the character of W is

\displaystyle\psi=\sum\limits_{j=1}^kn_j\chi^{(j)}
and we calculate the inner product

\displaystyle\langle\chi,\psi\rangle=\sum\limits_{i,j=1}^km_in_j\left\langle\chi^{(i)},\chi^{(j)}\right\rangle=\sum\limits_{i=1}^km_in_i
In particular, we see that

\displaystyle\langle\chi,\chi\rangle=\sum\limits_{i=1}^km_i^2
We see that in all these cases \langle\chi,\psi\rangle=\dim\hom_G(V,W). Just like sums of characters correspond to direct sums of representations, inner products of characters correspond to \hom-spaces between representations. We just have to pass from plain vector spaces to their dimensions when we pass from representations to their characters. Of course, this isn’t much of a stretch, since we saw that the character \chi of a representation V includes information about the dimension: \chi(e)=\dim(V).

This goes even further: what happens when we swap the arguments to an inner product? We get the complex conjugate: \langle\psi,\chi\rangle=\overline{\langle\chi,\psi\rangle}. What happens when we swap the arguments to the \hom functor? We get the dual space: \displaystyle\hom_G(W,V)\cong\hom_G(V,W)^*. Complex conjugation corresponds to passing to the dual space.

Finally, the character \chi is irreducible if and only if \langle\chi,\chi\rangle=1. Indeed, if \chi is itself irreducible then our decomposition only involves one nonzero coefficient, which is a 1. The formula we just computed gives

\displaystyle\langle\chi,\chi\rangle=\sum\limits_{i=1}^km_i^2=1
Conversely, if this formula holds then we have to write 1 as a sum of squares. The only possibility is for all but one of the numbers m_i to be 0, and the remaining one to be 1, in which case \chi=\chi^{(i)}, and is thus irreducible.
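Here is the criterion checked on the characters of S_3 (a Python sketch of my own, using the character values worked out in these posts; the names are mine):

```python
# Checking <chi,chi> = sum of squared multiplicities on S_3 (a sketch
# using the irreducible characters of S_3).
sizes = [1, 3, 2]
order = 6
triv = [1, 1, 1]
sgn  = [1, -1, 1]
perp = [2, 0, -1]

def ip(a, b):
    return sum(s * x * y for s, x, y in zip(sizes, a, b)) / order

# chi = 2*triv + perp has multiplicities (2, 0, 1), so <chi,chi> = 4 + 0 + 1
chi = [2 * t + p for t, p in zip(triv, perp)]
print(ip(chi, chi))    # 5.0
print(ip(sgn, chi))    # 0.0: chi contains no copy of the signum character
print(ip(perp, perp))  # 1.0: perp is irreducible
```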

October 25, 2010 | Algebra, Group theory, Representation Theory

Irreducible Characters are Orthogonal

Today we prove the assertion that we made last time: that irreducible characters are orthogonal. That is, if V and W are G-modules with characters \chi and \psi, respectively, then their inner product is 1 if V and W are equivalent and 0 otherwise. Strap in, ’cause it’s a bit of a long one.

Let’s pick a basis of each of V and W to get matrix representations X and Y of degrees m and n, respectively. Further, let A be any m\times n matrix with entries a_i^j. Now we can construct the m\times n matrix

\displaystyle B=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(g)AY\left(g^{-1}\right)

Now I claim that B intertwines the matrix representations X and Y. Indeed, for any h\in G we calculate

\displaystyle\begin{aligned}X(h)B&=X(h)\left(\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(g)AY\left(g^{-1}\right)\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(h)X(g)AY\left(g^{-1}\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(hg)AY\left(g^{-1}h^{-1}h\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(hg)AY\left((hg)^{-1}h\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}X(f)AY\left(f^{-1}\right)Y(h)\\&=\left(\frac{1}{\lvert G\rvert}\sum\limits_{f\in G}X(f)AY\left(f^{-1}\right)\right)Y(h)\\&=BY(h)\end{aligned}

At this point, Schur’s lemma kicks in to tell us that if X\not\cong Y then B is the zero matrix, while if X\cong Y then B is a scalar times the identity matrix.

First we consider the case where X\not\cong Y (equivalently, V\not\cong W). Since B is the zero matrix, each entry must be zero. In particular, we get the equations

\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{k,l}\sum\limits_{g\in G}x_i^k(g)a_k^ly_l^j\left(g^{-1}\right)=0

But the left side isn’t just any expression, it’s a linear function of the a_k^l. Since this equation must hold no matter what the a_k^l are, the coefficients of the function must all be zero! That is, we have the equations

\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}x_i^k(g)y_l^j\left(g^{-1}\right)=0

But now we can recognize the left hand side as our alternate expression for the inner product of characters of G. If the functions x_i^k and y_l^j were characters, this would be an inner product, but in general we’ll write

\displaystyle\langle x_i^k,y_l^j\rangle'=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}x_i^k(g)y_l^j\left(g^{-1}\right)=0

Okay, but we actually do have some characters floating around: \chi and \psi. And we can write them out in terms of these matrix elements as

\displaystyle\chi(g)=\sum\limits_ix_i^i(g)\qquad\psi(g)=\sum\limits_jy_j^j(g)
And now we can use the fact that for characters our two bilinear forms are the same to calculate

\displaystyle\begin{aligned}\langle\chi,\psi\rangle&=\langle\chi,\psi\rangle'\\&=\left\langle\sum\limits_ix_i^i,\sum\limits_jy_j^j\right\rangle'\\&=\sum\limits_{i,j}\langle x_i^i,y_j^j\rangle'\\&=0\end{aligned}

So there: if V and W are inequivalent irreps, then their characters are orthogonal!

Now if V\cong W we can pick bases so that the matrix representations are both X. Schur’s lemma tells us that there is some c\in\mathbb{C} so that b_i^j=c\delta_i^j. Our argument above goes through just the same as before to show that

\displaystyle\langle x_i^k,x_l^j\rangle'=0

so long as i\neq j. To handle the case where i=j, we consider our equation

\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}X(g)AX\left(g^{-1}\right)=cI_d

We take the trace of both sides:

\displaystyle\begin{aligned}cd&=\mathrm{Tr}(cI_d)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\mathrm{Tr}\left(X(g)AX\left(g^{-1}\right)\right)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\mathrm{Tr}(A)\\&=\mathrm{Tr}(A)\end{aligned}

and thus we conclude that b_i^i=c=\frac{1}{d}\mathrm{Tr}(A). And so we can write

\displaystyle\frac{1}{\lvert G\rvert}\sum\limits_{k,l}\sum\limits_{g\in G}x_i^k(g)a_k^lx_l^i\left(g^{-1}\right)=\frac{1}{d}\left(\sum\limits_ja_j^j\right)

Equating coefficients on both sides we find

\displaystyle\langle x_i^k,x_l^i\rangle'=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}x_i^k(g)x_l^i\left(g^{-1}\right)=\frac{1}{d}\delta_l^k

And finally we can calculate

\displaystyle\begin{aligned}\langle\chi,\chi\rangle'&=\sum\limits_{i,j}\langle x_i^i,x_j^j\rangle'\\&=\sum\limits_i\langle x_i^i,x_i^i\rangle'\\&=\sum\limits_i\frac{1}{d}\\&=1\end{aligned}

exactly as we asserted.

Incidentally, this establishes what we suspected when setting up the character table: if V and W are inequivalent irreps then their characters \chi and \psi must be unequal. Indeed, since they’re inequivalent we must have \langle\chi,\psi\rangle=0. But if the characters were the same we would have to have \langle\chi,\psi\rangle=\langle\chi,\chi\rangle=1. So since inequivalent irreps have unequal characters we can replace all the irreps labeling rows in the character table by their corresponding irreducible characters.
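The averaging trick at the heart of the proof is easy to watch in action. Here's a numerical sketch (mine, with numpy, not the post's; the particular 2\times 2 irrep below is realized inside the defining representation of S_3, a concrete choice that is an assumption of this sketch): pairing an irrep with itself yields \frac{1}{d}\mathrm{Tr}(A) times the identity, while pairing it with the inequivalent signum representation yields zero.

```python
# A numerical sketch of the averaging argument.
# X below is a 2-dimensional irrep of S_3 (realized inside the defining
# representation; this concrete choice is an assumption of the sketch).
import numpy as np
from itertools import permutations

def perm_matrix(p):
    n = len(p)
    M = np.zeros((n, n))
    for j in range(n):
        M[p[j], j] = 1
    return M

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def inv(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# change of basis to {1+2+3, 2-1, 3-1}; the 2x2 lower-right block is an irrep
P = np.array([[1., -1., -1.], [1., 1., 0.], [1., 0., 1.]])
Pinv = np.linalg.inv(P)
G = list(permutations(range(3)))
X = {g: (Pinv @ perm_matrix(g) @ P)[1:, 1:] for g in G}

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))

# pairing X with itself: Schur's lemma forces B = (Tr A / d) I
B_same = sum(X[g] @ A @ X[inv(g)] for g in G) / len(G)
# pairing X with the inequivalent signum representation: B must vanish
B_diff = sum(X[g] @ A[:, :1] * sign(inv(g)) for g in G) / len(G)

print(np.allclose(B_same, (np.trace(A) / 2) * np.eye(2)))  # True
print(np.allclose(B_diff, 0))                              # True
```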

October 22, 2010 | Algebra, Group theory, Representation Theory

Inner Products in the Character Table

As we try to fill in the character table, it will help us to note another slight variation of our inner product formula:

\displaystyle\langle\chi,\psi\rangle=\frac{1}{\lvert G\rvert}\sum\limits_K\lvert K\rvert\overline{\chi_K}\psi_K

where our sum runs over all conjugacy classes K\subseteq G, and where \chi_K is the common value \chi_K=\chi(k) for all k in the conjugacy class K (and similarly for \psi_K). The idea is that every k in a given conjugacy class gives the same summand. Instead of adding it up over and over again, we just multiply by the number of elements in the class.

As an example, consider again the start of the character table of S_3:

\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\chi^\mathrm{sgn}&1&-1&1\end{array}
Here we index the rows by irreducible characters, and the columns by representatives of the conjugacy classes. We can calculate inner products of rows by multiplying corresponding entries, but we don’t just sum up these products; we multiply each one by the size of the conjugacy class, and at the end we divide the whole thing by the size of the whole group:

\displaystyle\begin{aligned}\left\langle\chi^\mathrm{triv},\chi^\mathrm{triv}\right\rangle&=\frac{1}{6}\left(1\cdot1+3\cdot1+2\cdot1\right)=1\\\left\langle\chi^\mathrm{sgn},\chi^\mathrm{sgn}\right\rangle&=\frac{1}{6}\left(1\cdot1+3\cdot1+2\cdot1\right)=1\\\left\langle\chi^\mathrm{triv},\chi^\mathrm{sgn}\right\rangle&=\frac{1}{6}\left(1\cdot1+3\cdot(-1)+2\cdot1\right)=0\end{aligned}
We find that when we take the inner product of each character with itself we get 1, while taking the inner product of the two different characters gives 0. This is no coincidence; for any finite group G irreducible characters are orthonormal. That is, different irreducible characters have inner product 0, while any irreducible character has inner product 1 with itself. This is what we will prove next time.
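In code, the class-weighted formula looks like this (a sketch of mine, not from the post; the characters of S_3 happen to be real, so the complex conjugation drops out):

```python
# The class-weighted inner product in code (a sketch; S_3 characters are
# real, so the complex conjugation is a no-op here).
sizes = [1, 3, 2]    # |K| for the classes of e, (1 2), (1 2 3)
triv = [1, 1, 1]
sgn  = [1, -1, 1]

def ip(a, b):
    return sum(s * x * y for s, x, y in zip(sizes, a, b)) / sum(sizes)

print(ip(triv, triv), ip(sgn, sgn), ip(triv, sgn))  # 1.0 1.0 0.0
```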

October 21, 2010 | Algebra, Group theory, Representation Theory

The Character Table of a Group

Given a group G, Maschke’s theorem tells us that every G-module is completely reducible. That is, we can write any such module V as the direct sum of irreducible representations:

\displaystyle V=\bigoplus\limits_{i=1}^km_iV^{(i)}

Thus the irreducible representations are the most important ones to understand. And so we’re particularly interested in their characters, which we call “irreducible characters”.

Of course an irreducible character — like all characters — is a class function. We can describe it by giving its values on each conjugacy class. And so we lay out the “character table”. This is an array whose rows are indexed by inequivalent irreducible representations, and whose columns are indexed by conjugacy classes K\subseteq G. The row indexed by V^{(i)} describes the corresponding irreducible character \chi^{(i)}. If k\in K is a representative of the conjugacy class, then the entry in the column indexed by K is \chi^{(i)}_K=\chi^{(i)}(k). That is, the character table looks like

\displaystyle\begin{array}{c|ccc}&\cdots&K&\cdots\\\hline\vdots&&\vdots&\\V^{(i)}&\cdots&\chi^{(i)}_K&\cdots\\\vdots&&\vdots&\end{array}
By convention, the first row corresponds to the trivial representation, and the first column corresponds to the conjugacy class \{e\} of the identity element. We know that the trivial representation sends every group element to the 1\times 1 identity matrix, whose trace is 1. We also know that every character’s value on the identity element is the degree of the corresponding representation. We can slightly refine our first picture to sketch the character table like so:

\displaystyle\begin{array}{c|cccc}&\{e\}&\cdots&K&\cdots\\\hline V^\mathrm{triv}&1&\cdots&1&\cdots\\\vdots&\vdots&&\vdots&\\V^{(i)}&\deg\left(V^{(i)}\right)&\cdots&\chi^{(i)}_K&\cdots\\\vdots&\vdots&&\vdots&\end{array}

We have no reason to believe (yet) that the table is finite. Since G is a finite group there can be only finitely many conjugacy classes, and thus only finitely many columns, but as far as we can tell there may be infinitely many inequivalent irreps, and thus infinitely many rows. Further, we have no reason to believe that the rows are all distinct. Indeed, we know that equivalent representations have equal characters — they’re related through conjugation by an invertible intertwinor — but we don’t know for sure that inequivalent representations must have distinct characters.

As an example, we can start writing down the character table of S_3. We know that conjugacy classes in symmetric groups correspond to cycle types, and so we can write down all three conjugacy classes easily:

\displaystyle K_1=\{e\}\qquad K_2=\{(1\,2),(1\,3),(2\,3)\}\qquad K_3=\{(1\,2\,3),(1\,3\,2)\}
We know of two irreps offhand — the trivial representation and the signum representation — and so we’ll start with those and leave the table incomplete below that:

\displaystyle\begin{array}{c|ccc}&K_1&K_2&K_3\\\hline V^\mathrm{triv}&1&1&1\\V^\mathrm{sgn}&1&-1&1\\\vdots&\vdots&\vdots&\vdots\end{array}

October 20, 2010 | Algebra, Group theory, Representation Theory

Characters of Permutation Representations

Let’s take (\mathbb{C}S,\rho) to be a permutation representation coming from a group action on a finite set S that we’ll also call \rho. It’s straightforward to calculate the character of this representation.

Indeed, the standard basis that comes from the elements of S gives us a nice matrix representation:

\displaystyle\rho(g)_s^t=\delta_{\rho(g)s,t}
On the left \rho(g) is the matrix of the action on \mathbb{C}S, while on the right it’s the group action on the set S. Hopefully this won’t be too confusing. The matrix entry in row s and column t is 1 if \rho(g) sends s to t, and it’s 0 otherwise.

So what’s the character \chi_\rho(g)? It’s the trace of the matrix \rho(g), which is the sum of all the diagonal elements:

\displaystyle\mathrm{Tr}\left(\rho(g)\right)=\sum\limits_{s\in S}\rho(g)_s^s=\sum\limits_{s\in S}\delta_{\rho(g)s,s}

This sum counts up 1 for each point s that \rho(g) sends back to itself, and 0 otherwise. That is, it counts the number of fixed points of the permutation \rho(g).

As a special case, we can consider the defining representation V^\mathrm{def} of the symmetric group S_n. The character \chi^\mathrm{def} counts the number of fixed points of any given permutation. For instance, in the case n=3 we calculate:

\displaystyle\chi^\mathrm{def}(e)=3\qquad\chi^\mathrm{def}\left((1\,2)\right)=1\qquad\chi^\mathrm{def}\left((1\,2\,3)\right)=0
In particular, the character takes the value 3 on the identity element, and the degree of the representation is 3 as well. This is no coincidence; \chi(e) will always be the degree of the representation in question, since any matrix representation of degree n must send e to the n\times n identity matrix, whose trace is n. This holds for permutation representations and for all other representations alike.
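Counting fixed points is a one-liner. Here's a sketch (mine, not from the post) for the defining representation of S_3, with permutations written 0-indexed as tuples:

```python
# Fixed-point counting for the defining representation of S_3 (a sketch;
# permutations are 0-indexed tuples, so (1, 0, 2) is the transposition (1 2)).
from itertools import permutations

def fixed_points(p):
    return sum(1 for i, pi in enumerate(p) if i == pi)

chi_def = {p: fixed_points(p) for p in permutations(range(3))}
print(chi_def[(0, 1, 2)], chi_def[(1, 0, 2)], chi_def[(1, 2, 0)])  # 3 1 0
```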

October 19, 2010 | Algebra, Group theory, Representation Theory, Representations of Symmetric Groups

The Inner Product of Characters

When we’re dealing with characters, there’s something we can do to rework our expression for the inner product on the space of class functions.

Let’s take a G-module (W,\sigma), with character \psi. Before, we’ve used Maschke’s theorem to tell us that all G-modules are completely reducible, but remember what it really tells us: that there is some G-invariant inner product on W (we’ll have to keep the two inner products straight by noting which vector space they apply to). With respect to the inner product on W, every transformation \sigma(g) with g\in G is unitary, and if we pick an orthonormal basis to get a matrix representation Y each of the matrices Y(g) will be unitary. That is:

\displaystyle Y\left(g^{-1}\right)=Y(g)^{-1}=\overline{Y(g)}^\top

So what does this mean for the character \psi? We can calculate

\displaystyle\psi\left(g^{-1}\right)=\mathrm{Tr}\left(Y\left(g^{-1}\right)\right)=\mathrm{Tr}\left(\overline{Y(g)}^\top\right)=\overline{\mathrm{Tr}\left(Y(g)\right)}=\overline{\psi(g)}
And so we can rewrite our inner product

\displaystyle\begin{aligned}\langle\chi,\psi\rangle&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\overline{\chi(g)}\psi(g)\\&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\chi\left(g^{-1}\right)\psi(g)\end{aligned}

The nice thing about this formula is that it doesn’t depend on complex conjugation, and so it’s useful for any base field (if we were using other base fields).

The catch is that for class functions in general we have no reason to believe that this is an inner product. Indeed, if g\in G is some element that isn’t conjugate to its inverse then we can define a class function f that takes the value 1 on the class K_g of g, -1 on the class K_{g^{-1}} of g^{-1}, and 0 elsewhere. Our new formula gives

\displaystyle\langle f,f\rangle=\frac{1}{\lvert G\rvert}\sum\limits_{h\in G}f\left(h^{-1}\right)f(h)=-\frac{\lvert K_g\rvert+\lvert K_{g^{-1}}\rvert}{\lvert G\rvert}

so this bilinear form isn’t positive-definite.
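Here is the smallest concrete failure (a sketch of my own): in \mathbb{Z}/3, which is abelian, the element 1 is not conjugate to its inverse 2, and the class function above gets a negative pairing with itself.

```python
# The smallest concrete failure (a sketch): in Z/3, written additively,
# the class function f with f(1)=1, f(2)=-1 pairs negatively with itself.
n = 3
f = {0: 0, 1: 1, 2: -1}

pairing = sum(f[(-g) % n] * f[g] for g in range(n)) / n
print(pairing)  # -2/3: negative, so the form is not positive-definite
```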

October 18, 2010 | Algebra, Group theory, Representation Theory