The Unapologetic Mathematician

Mathematics for the interested outsider

The Branching Rule, Part 3

“Part 3”? Didn’t we just finish proving the branching rule? Well, yes, but there’s another part we haven’t mentioned yet. Not only does the branching rule tell us how representations of S_n decompose when they’re restricted to S_{n-1}, it also tells us how representations of S_{n-1} decompose when they’re induced to S_n.

Now that we have the first statement of the branching rule down, proving the other one is fairly straightforward: it’s a consequence of Frobenius reciprocity. Indeed, the branching rule tells us that

\displaystyle\dim\left(\hom_{S_{n-1}}(S^\mu,S^\lambda\!\!\downarrow)\right)=\bigg\{\begin{array}{cl}1&\mu=\lambda^-\\{0}&\mathrm{otherwise}\end{array}

That is, there is one copy of S^\mu inside S^\lambda (considered as an S_{n-1}-module) if \mu comes from \lambda by removing an inner corner, and there are no copies otherwise.

So let’s try to calculate the multiplicity of S^\lambda in the induced module S^\mu\!\!\uparrow:

\displaystyle\begin{aligned}\hom_{S_n}(S^\lambda,S^\mu\!\!\uparrow)^*&\cong\hom_{S_n}(S^\mu\!\!\uparrow,S^\lambda)\\&\cong\hom_{S_{n-1}}(S^\mu,S^\lambda\!\!\downarrow)\end{aligned}

Taking dimensions, we find

\displaystyle\begin{aligned}\dim\left(\hom_{S_n}(S^\lambda,S^\mu\!\!\uparrow)\right)&=\dim\left(\hom_{S_{n-1}}(S^\mu,S^\lambda\!\!\downarrow)\right)\\&=\bigg\{\begin{array}{cl}1&\mu=\lambda^-\\{0}&\mathrm{otherwise}\end{array}\\&=\bigg\{\begin{array}{cl}1&\lambda=\mu^+\\{0}&\mathrm{otherwise}\end{array}\end{aligned}

since if \mu comes from \lambda by removing an inner corner, then \lambda comes from \mu by adding an outer corner.

We conclude that

\displaystyle S^\mu\!\!\uparrow_{S_{n-1}}^{S_n}\cong\bigoplus\limits_{\mu^+}S^{\mu^+}

which is the other half of the branching rule.
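As a numerical sanity check: inducing from S_{n-1} up to S_n multiplies dimensions by the index n, so this decomposition predicts that n\cdot f^\mu=\sum_{\mu^+}f^{\mu^+}. Here's a rough Python sketch checking that prediction, using the branching recursion for f^\lambda from a few posts back (all the function names are my own):

```python
from functools import lru_cache

def inner_corners(lam):
    # rows whose last cell is an inner corner: strictly longer than the next row
    return [i for i in range(len(lam)) if i == len(lam) - 1 or lam[i] > lam[i + 1]]

def outer_corners(lam):
    # rows (including a possible new bottom row) where a cell can be added
    return [i for i in range(len(lam) + 1)
            if i == 0 or lam[i - 1] > (lam[i] if i < len(lam) else 0)]

def remove_corner(lam, i):
    mu = list(lam); mu[i] -= 1
    return tuple(m for m in mu if m > 0)

def add_corner(lam, i):
    mu = list(lam) + [0]; mu[i] += 1
    return tuple(m for m in mu if m > 0)

@lru_cache(maxsize=None)
def f(lam):
    # number of standard tableaux of shape lam, via f^lam = sum of f^{lam^-}
    if sum(lam) <= 1:
        return 1
    return sum(f(remove_corner(lam, i)) for i in inner_corners(lam))

def check_induced_dimension(mu):
    # dim of the induced module is n * f^mu; branching says it's sum of f^{mu^+}
    n = sum(mu) + 1
    return n * f(mu) == sum(f(add_corner(mu, i)) for i in outer_corners(mu))
```

For instance, \mu=(2,1)\vdash3 induces up to a 4\cdot2=8-dimensional module, and indeed f^{(3,1)}+f^{(2,2)}+f^{(2,1,1)}=3+2+3=8.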

January 31, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 1 Comment

The Branching Rule, Part 2

We pick up our proof of the branching rule. We have a partition \lambda with inner corners in rows r_1,\dots,r_k. The partition we get by removing the inner corner in row r_i is \lambda^i. If the tableau t (or the tabloid \{t\}) has its n in row r_i, then t^i (or \{t^i\}) is the result of removing that n.

We’re looking for a chain of subspaces

\displaystyle0=V^{(0)}\subseteq V^{(1)}\subseteq\dots\subseteq V^{(k)}=S^\lambda

such that V^{(i)}/V^{(i-1)}=S^{\lambda^i} as S_{n-1}-modules. I say that we can define V^{(i)} to be the subspace of S^\lambda spanned by the standard polytabloids e_t where the n shows up in row r_i or above in t.

For each i, define the map \theta_i:M^\lambda\to M^{\lambda^i} by removing an n in row r_i. That is, if \{t\}\in M^\lambda has its n in row r_i, set \theta_i(\{t\})=\{t^i\}; otherwise set \theta_i(\{t\})=0. These are all homomorphisms of S_{n-1}-modules, since the action of S_{n-1} always leaves the n in the same row, and so it commutes with removing an n from row r_i.

Similarly, I say that \theta_i(e_t)=e_{t^i} if n is in row r_i of t, and we get \theta_i(e_t)=0 if it’s in row r_j with j<i. Indeed, if n shows up above row r_i, then since it’s the bottommost entry in its column, that column can have no entries at all in row r_i. Thus as we use C_t to shuffle the columns, all of the tabloids that show up in e_t=C_t^-\{t\} will be sent to zero by \theta_i. Similar considerations show that if n is in row r_i, then of all the tabloids that show up in e_t, only those leaving n in that row are not sent to zero by \theta_i. The permutations in C_t leaving n fixed are, of course, exactly those in C_{t^i}, and our assertion holds.

Now, since each standard polytabloid e_{t^i}\in S^{\lambda^i} comes from some polytabloid e_t\in M^{\lambda}, we see they’re all in the image of \theta_i. Further, these e_t all have their n in row r_i, so they’re all in V^{(i)}. That is, \theta_i(V^{(i)})=S^{\lambda^i}. On the other hand, if t has its n above row r_i, then \theta_i(e_t)=0, and so V^{(i-1)}\subseteq\mathrm{Ker}(\theta_i).

So now we’ve got a longer chain of subspaces:

\displaystyle0=V^{(0)}\subseteq V^{(1)}\cap\mathrm{Ker}(\theta_1)\subseteq V^{(1)}\subseteq\dots\subseteq V^{(k)}\cap\mathrm{Ker}(\theta_k)\subseteq V^{(k)}=S^\lambda

But we also know that

\displaystyle\dim\left(\frac{V^{(i)}}{V^{(i)}\cap\mathrm{Ker}(\theta_i)}\right)=\dim(\theta_i(V^{(i)}))=\dim(S^{\lambda^i})=f^{\lambda^i}

So the steps from V^{(i)}\cap\mathrm{Ker}(\theta_i) to V^{(i)} give us all the f^{\lambda^i} as we add up dimensions. Comparing to the formula we’re categorifying, we see that this accounts for all of f^\lambda=\dim(S^\lambda). And so there are no dimensions left for the steps from V^{(i-1)} to V^{(i)}\cap\mathrm{Ker}(\theta_i), and these containments must actually be equalities!

\displaystyle V^{(i-1)}=V^{(i)}\cap\mathrm{Ker}(\theta_i)

And thus

\displaystyle \frac{V^{(i)}}{V^{(i-1)}}=\frac{V^{(i)}}{V^{(i)}\cap\mathrm{Ker}(\theta_i)}\cong S^{\lambda^i}

as asserted. The branching rule then follows.

January 28, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 5 Comments

The Branching Rule, Part 1

We want to “categorify” the relation we came up with last time:

\displaystyle f^\lambda=\sum\limits_{\lambda^-}f^{\lambda^-}

That is, we want to replace these numbers with objects of a category, replace the sum with a direct sum, and replace the equation with a natural isomorphism.

It should be clear that an obvious choice for the objects is to replace f^\lambda with the Specht module S^\lambda, since we’ve seen that f^\lambda=\dim(S^\lambda). But what category are they in? On the left side, S^\lambda is an S_n-module, but on the right side all the S^{\lambda^-} are S_{n-1}-modules. Our solution is to restrict S^\lambda, suggesting the isomorphism

\displaystyle S^\lambda\downarrow^{S_n}_{S_{n-1}}\cong\bigoplus\limits_{\lambda^-}S^{\lambda^-}

This tells us what happens to any of the Specht modules as we restrict it to a smaller symmetric group. As a side note, it doesn’t really matter which S_{n-1}\subseteq S_n we use, since they’re all conjugate to each other inside S_n. So we’ll just use the one that permutes all the numbers but n.

Anyway, say the inner corners of \lambda occur in the rows r_1<\dots<r_k, and of course they must occur at the ends of these rows. For each one, we’ll write \lambda^i for the partition that comes from removing that inner corner. Similarly, if t is a standard tableau with n in the ith row, we write t^i for the (standard) tableau with n removed. And the same goes for the standard tabloids \{t\} and \{t^i\}.

Our method will be to find a tower of subspaces

\displaystyle0=V^{(0)}\subseteq V^{(1)}\subseteq\dots\subseteq V^{(k)}=S^\lambda

so that at each step we have V^{(i)}/V^{(i-1)}\cong S^{\lambda^i} as S_{n-1}-modules. Then we can see that

\displaystyle S^\lambda\downarrow^{S_n}_{S_{n-1}}=V^{(k)}\cong V^{(k-1)}\oplus\left(V^{(k)}/V^{(k-1)}\right)\cong V^{(k-1)}\oplus S^{\lambda^k}

And similarly we find V^{(k-1)}\cong V^{(k-2)}\oplus S^{\lambda^{k-1}}, and step by step we go until we find the proposed isomorphism. The construction itself will be presented next time.

January 27, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 2 Comments

Inner and Outer Corners

The next thing we need to take note of is the idea of an “inner corner” and an “outer corner” of a Ferrers diagram, and thus of a partition.

An inner corner of a Ferrers diagram is a cell that, if it’s removed, the rest of the diagram is still the Ferrers diagram of a partition. It must be the rightmost cell in its row, and the bottommost cell in its column. Similarly, an outer corner is one that, if it’s added to the diagram, the result is still the Ferrers diagram of a partition. This is a little more subtle: it must be just to the right of the end of a row, and just below the bottom of a column.

As an example, consider the partition (5,4,4,2), with Ferrers diagram

\displaystyle\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&&&\end{array}

We highlight the inner corners by shrinking them, and mark the outer corners with circles:

\displaystyle\begin{array}{cccccc}\bullet&\bullet&\bullet&\bullet&\cdot&\circ\\\bullet&\bullet&\bullet&\bullet&\circ&\\\bullet&\bullet&\bullet&\cdot&&\\\bullet&\cdot&\circ&&&\\\circ&&&&&\end{array}

That is, there are three ways we could remove a cell and still have the Ferrers diagram of a partition:

\displaystyle\begin{array}{cccc}\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&&\end{array}\qquad\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&&\\\bullet&\bullet&&&\end{array}\qquad\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&\bullet&\\\bullet&&&&\end{array}

And there are four ways that we could add a cell and still have the Ferrers diagram of a partition:

\displaystyle\begin{array}{cccccc}\bullet&\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&&\\\bullet&\bullet&\bullet&\bullet&&\\\bullet&\bullet&&&&\end{array}\qquad\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&&&\end{array}\qquad\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&&\end{array}\qquad\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&\bullet&\bullet&\\\bullet&\bullet&&&\\\bullet&&&&\end{array}

If the first partition is \lambda, we write a generic partition that comes from removing a single inner corner by \lambda^-. Similarly, we write a generic partition that comes from adding a single outer corner by \lambda^+. In our case, if \lambda=(5,4,4,2), then the three possible \lambda^- partitions are (4,4,4,2), (5,4,3,2), and (5,4,4,1), while the four possible \lambda^+ partitions are (6,4,4,2), (5,5,4,2), (5,4,4,3), and (5,4,4,2,1).
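The corner-hunting above is easy to mechanize. Here's a rough Python sketch (the function names are my own) that recovers exactly these lists from \lambda=(5,4,4,2), representing a partition as a weakly decreasing tuple:

```python
def inner_corner_rows(lam):
    # the last cell of row i is an inner corner iff row i is strictly
    # longer than row i+1 (or is the last row)
    return [i for i in range(len(lam)) if i == len(lam) - 1 or lam[i] > lam[i + 1]]

def outer_corner_rows(lam):
    # a cell can be added at the end of row i (or as a new bottom row)
    # iff row i-1 is strictly longer than row i
    return [i for i in range(len(lam) + 1)
            if i == 0 or lam[i - 1] > (lam[i] if i < len(lam) else 0)]

def remove_corner(lam, i):
    mu = list(lam); mu[i] -= 1
    return tuple(m for m in mu if m > 0)

def add_corner(lam, i):
    mu = list(lam) + [0]; mu[i] += 1
    return tuple(m for m in mu if m > 0)

lam = (5, 4, 4, 2)
minus = [remove_corner(lam, i) for i in inner_corner_rows(lam)]
plus = [add_corner(lam, i) for i in outer_corner_rows(lam)]
print(minus)  # [(4, 4, 4, 2), (5, 4, 3, 2), (5, 4, 4, 1)]
print(plus)   # [(6, 4, 4, 2), (5, 5, 4, 2), (5, 4, 4, 3), (5, 4, 4, 2, 1)]
```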

Now, as a quick use of this concept, think about how to fill a Ferrers diagram to make a standard Young tableau. It should be clear that since n is the largest entry in the tableau, it must be in the rightmost cell of its row and the bottommost cell of its column in order for the tableau to be standard. Thus n must occur in an inner corner. This means that we can describe any standard tableau by picking which inner corner contains n, removing that corner, and filling the rest with a standard tableau with n-1 entries. Thus, the number of standard \lambda-tableaux is the sum of the numbers of standard \lambda^--tableaux:

\displaystyle f^\lambda=\sum\limits_{\lambda^-}f^{\lambda^-}
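This recursion is easy to turn into code if you want to play with it. A small Python sketch (the function names are my own), with f^{(2,1)}=2 and f^{(3,2)}=5 as sanity checks:

```python
from functools import lru_cache

def inner_corners(lam):
    # rows whose last cell is an inner corner
    return [i for i in range(len(lam)) if i == len(lam) - 1 or lam[i] > lam[i + 1]]

def remove_corner(lam, i):
    mu = list(lam); mu[i] -= 1
    return tuple(m for m in mu if m > 0)

@lru_cache(maxsize=None)
def f(lam):
    # count standard Young tableaux of shape lam by summing over
    # the partitions lam^- obtained by removing an inner corner
    if sum(lam) <= 1:
        return 1
    return sum(f(remove_corner(lam, i)) for i in inner_corners(lam))

print(f((2, 1)))  # 2
print(f((3, 2)))  # 5
```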

January 26, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 2 Comments

Young’s Natural Representation

Now that we have a canonical basis of standard polytabloids for our Specht modules, we get a matrix representation of S_n for each \lambda\vdash n. We really only need to come up with matrices for the adjacent swaps (k\,k+1) for 1\leq k\leq n-1, since these generate the whole symmetric group.

When we calculate the action of the swap on a polytabloid e_t associated with a standard Young tableau t, there are three possibilities. Either k and k+1 are in the same column of t, they’re in the same row of t, or they’re not in the same row or column of t.

The first case is easy. If k and k+1 are in the same column of t, then (k\,k+1)\in C_t, and thus (k\,k+1)e_t=-e_t.

The third case isn’t much harder, although it’s subtler. I say that if k and k+1 share neither a row nor a column, then (k\,k+1)t is again standard. Indeed, swapping the two can’t introduce either a row or a column descent. The entries to the left of and above k+1 are all less than k+1, and none of them are k, so they’re all less than k as well. Similarly, all the entries to the right of and below k are greater than k, and none of them are k+1, so they’re all greater than k+1 as well.

Where things get complicated is when k and k+1 share a row. But then they have to be next to each other, and the swap introduces a row descent between them. We can then use our Garnir elements to write this polytabloid in terms of standard ones.

Let’s work this out explicitly for the Specht module S^{(2,1)}, which should give us our well-known two-dimensional representation of S_3. The basis consists of the polytabloids associated to these two tableaux:

\displaystyle\begin{array}{cc}1&2\\3&\end{array}\qquad\begin{array}{cc}1&3\\2&\end{array}

We need to come up with matrices for the two swaps (1\,2) and (2\,3). And the second one is easy: it just swaps these two tableaux! Thus we get the matrix

\displaystyle X^{(2,1)}\left((2\,3)\right)=\begin{pmatrix}0&1\\1&0\end{pmatrix}

The action of (1\,2) on the second standard tableau is similarly easy. Since 1 and 2 are in the same column, the swap acts by multiplying by -1. Thus we can write down a column of the matrix

\displaystyle X^{(2,1)}\left((1\,2)\right)=\begin{pmatrix}?&0\\?&-1\end{pmatrix}

As for the action on the first tableau, the swap induces a row descent. We use a Garnir element to straighten it out. With the same abuse of notation as last time, we write

\displaystyle\begin{array}{cc}2&1\\3&\end{array}=\begin{array}{cc}1&2\\3&\end{array}-\begin{array}{cc}1&3\\2&\end{array}

and so we can fill in the other column:

\displaystyle X^{(2,1)}\left((1\,2)\right)=\begin{pmatrix}1&0\\-1&-1\end{pmatrix}

From here we can write all the other matrices in the representation as products of these two.
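As a sanity check, these two matrices should satisfy the relations that the adjacent swaps do in S_3: each squares to the identity, and their product is a 3-cycle, so it cubes to the identity. A quick Python sketch (the matrix names are my own):

```python
def matmul(X, Y):
    # multiply two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
X12 = [[1, 0], [-1, -1]]  # matrix of (1 2) in the standard polytabloid basis
X23 = [[0, 1], [1, 0]]    # matrix of (2 3)

assert matmul(X12, X12) == I2             # (1 2)^2 = e
assert matmul(X23, X23) == I2             # (2 3)^2 = e
P = matmul(X12, X23)
assert matmul(P, matmul(P, P)) == I2      # ((1 2)(2 3))^3 = e
print("relations hold")
```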

January 26, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | Leave a comment

“Straightening” a Polytabloid

Let’s look at one example of “straightening” out a polytabloid to show it’s in the span of the standard polytabloids, using the Garnir elements.

We’ll start with one we’ve already partially worked out:

\displaystyle t=\begin{array}{ccc}1&2&3\\5&4&\\6&&\end{array}

Now, it’s slightly abusive to the notation, but we’ll just write a tableau t and know that we actually mean the polytabloid e_t in our linear combinations. Using this, we’ve seen that we can write

\displaystyle\begin{array}{ccc}1&2&3\\5&4&\\6&&\end{array}=\begin{array}{ccc}1&2&3\\4&5&\\6&&\end{array}-\begin{array}{ccc}1&4&3\\2&5&\\6&&\end{array}-\begin{array}{ccc}1&2&3\\4&6&\\5&&\end{array}+\begin{array}{ccc}1&4&3\\2&6&\\5&&\end{array}-\begin{array}{ccc}1&5&3\\2&6&\\4&&\end{array}

Now, by the way we’ve selected our Garnir elements, we know that none of these can have any column descents. And we also know that they can’t have a row descent in the same place e_t did. Indeed, the only three that have a row descent all have it between the second and third entries of the first row. So now let’s look at

\displaystyle u=\begin{array}{ccc}1&4&3\\2&5&\\6&&\end{array}

We can write down another table, just like before:

\displaystyle\begin{array}{cccc}A'&B'&\pi&\pi u\\\hline\\\{4,5\}&\{3\}&e&\begin{array}{ccc}1&4&3\\2&5&\\6&&\end{array}\\\{3,5\}&\{4\}&(3\,4)&\begin{array}{ccc}1&3&4\\2&5&\\6&&\end{array}\\\{3,4\}&\{5\}&(3\,5\,4)&\begin{array}{ccc}1&3&5\\2&4&\\6&&\end{array}\end{array}

which lets us write

\displaystyle\begin{array}{ccc}1&4&3\\2&5&\\6&&\end{array}=\begin{array}{ccc}1&3&4\\2&5&\\6&&\end{array}-\begin{array}{ccc}1&3&5\\2&4&\\6&&\end{array}

Similarly we can write

\displaystyle\begin{array}{ccc}1&4&3\\2&6&\\5&&\end{array}=\begin{array}{ccc}1&3&4\\2&6&\\5&&\end{array}-\begin{array}{ccc}1&3&6\\2&4&\\5&&\end{array}

and

\displaystyle\begin{array}{ccc}1&5&3\\2&6&\\4&&\end{array}=\begin{array}{ccc}1&3&5\\2&6&\\4&&\end{array}-\begin{array}{ccc}1&3&6\\2&5&\\4&&\end{array}

Putting these all together, we conclude that

\displaystyle\begin{aligned}\begin{array}{ccc}1&2&3\\5&4&\\6&&\end{array}=&\begin{array}{ccc}1&2&3\\4&5&\\6&&\end{array}-\begin{array}{ccc}1&3&4\\2&5&\\6&&\end{array}+\begin{array}{ccc}1&3&5\\2&4&\\6&&\end{array}-\begin{array}{ccc}1&2&3\\4&6&\\5&&\end{array}\\+&\begin{array}{ccc}1&3&4\\2&6&\\5&&\end{array}-\begin{array}{ccc}1&3&6\\2&4&\\5&&\end{array}-\begin{array}{ccc}1&3&5\\2&6&\\4&&\end{array}+\begin{array}{ccc}1&3&6\\2&5&\\4&&\end{array}\end{aligned}

All of these tableaux are standard, and so we see that our original — nonstandard — e_t is in the span of the standard polytabloids.

January 25, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | Leave a comment

Standard Polytabloids Span Specht Modules

We defined the Specht module S^\lambda as the subspace of the Young tabloid module M^\lambda spanned by polytabloids of shape \lambda. But these polytabloids are not independent. We’ve seen that standard polytabloids are independent, and it turns out that they also span. That is, they provide an explicit basis for the Specht module S^\lambda.

Anyway, first off we can take care of all the e_t where the entries of t don’t increase down each column. Indeed, if \sigma\in C_t is any column-stabilizing permutation, then

\displaystyle e_{\sigma t}=\sigma e_t=\mathrm{sgn}(\sigma)e_t

where we’ve used the sign lemma. So any two polytabloids coming from tableaux in the same column equivalence class are scalar multiples of each other. We’ve just cut our spanning set down from one element for each tableau to one for each column equivalence class of tableaux.

To deal with column equivalence classes, start with the tableau t_0 that we get by filling in the shape with the numbers 1 to n in order down the first column, then the second, and so on. This is the maximum element in the column dominance order, and it’s standard. Given any other tableau t, we assume that every tableau s with [s]\triangleright[t] is already in the span of the standard polytabloids. This is an inductive hypothesis, and the base case is taken care of by the maximum tableau t_0.

If t is itself standard, we’re done, since it’s obviously in the span of the standard polytabloids. If not, there must be a row descent — we’ve ruled out column descents already — and so we can pick our Garnir element to write e_t as the sum of a bunch of other polytabloids e_{\pi t}, where [\pi t]\triangleright[t] in the column dominance order. But by our inductive hypothesis, all these e_{\pi t} are in the span of the standard polytabloids, and thus e_t is as well.

As an immediate consequence, we conclude that \dim(S^\lambda)=f^\lambda, where f^\lambda is the number of standard tableaux of shape \lambda. Further, since we know from our decomposition of the left regular representation that each irreducible representation of S_n shows up in \mathbb{C}[S_n] with a multiplicity equal to its dimension, we can write

\displaystyle\mathbb{C}[S_n]=\bigoplus\limits_{\lambda\vdash n}f^\lambda S^\lambda

Taking dimensions on both sides we find

\displaystyle \sum\limits_{\lambda\vdash n}\left(f^\lambda\right)^2=\sum\limits_{\lambda\vdash n}f^\lambda\dim(S^\lambda)=\lvert S_n\rvert=n!
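We can check this identity numerically for small n. Here's a Python sketch (the helper names are my own) that enumerates partitions and counts standard tableaux with the branching recursion:

```python
import math
from functools import lru_cache

def partitions(n, largest=None):
    # all partitions of n as weakly decreasing tuples
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def inner_corners(lam):
    # rows whose last cell is an inner corner
    return [i for i in range(len(lam)) if i == len(lam) - 1 or lam[i] > lam[i + 1]]

def remove_corner(lam, i):
    mu = list(lam); mu[i] -= 1
    return tuple(m for m in mu if m > 0)

@lru_cache(maxsize=None)
def f(lam):
    # number of standard tableaux of shape lam
    if sum(lam) <= 1:
        return 1
    return sum(f(remove_corner(lam, i)) for i in inner_corners(lam))

for n in range(1, 8):
    assert sum(f(lam) ** 2 for lam in partitions(n)) == math.factorial(n)
```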

January 21, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 3 Comments

Properties of Garnir Elements from Tableaux 2

When we pick a tableau t with a certain row descent and use it to pick sets A and B, as we’ve done, the Garnir element lets us write e_t as a sum of polytabloids coming from a bunch of other tableaux. I say that the column tabloid [t] corresponding to the original tableau is dominated by the column tabloids of all these other tableaux, using the column dominance order.

Indeed, when considering column tabloids we can rearrange the entries within columns freely, so we may assume that they’re always increasing down the columns. If we have our row descent in row i, we can label the entries in the left column by as and those in the right column by bs. Our tabloid then looks — in these two columns, at least — something like

\displaystyle\begin{array}{ccc}a_1&\hphantom{X}&b_1\\&&\wedge\\a_2&&b_2\\&&\wedge\\\vdots&&\vdots\\&&\wedge\\a_i&>&b_i\\\wedge&&\\\vdots&&\vdots\\\wedge&&b_q\\a_p&&\end{array}

We see our sets A=\{a_i,\dots,a_p\} and B=\{b_1,\dots,b_i\}. The permutations in the transversal that we use to construct our Garnir element work by swapping some of the bs with some of the as. But since all the bs are smaller than all the as, while they occur in the column further to the right, the dominance lemma for column tabloids tells us that any such swap can only move the tabloid up in the dominance order.

It is in this sense that the Garnir element lets us replace a tabloid with a linear combination of other tabloids that are “more standard”. And it puts us within striking distance of our goal.

January 20, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 2 Comments

The Column Dominance Order

Okay, for the last couple posts I’ve talked about using Garnir elements to rewrite nonstandard polytabloids — those coming from tableaux containing “row descents” — in terms of “more standard” polytabloids. Finally, we’re going to define another partial order that will give some meaning to this language.

Given a tableau t, consider the column stabilizer C_t, and use it to build the “column tabloid” [t]=C_tt. This is defined just like our other tabloids, except by shuffling columns instead of rows.

For example, consider the tableau

\displaystyle t=\begin{array}{cc}1&2\\3&\end{array}

from which we get the column tabloid

\displaystyle\begin{array}{|c|c|}1&2\\3&\multicolumn{1}{c}{}\end{array}=\left\{\begin{array}{cc}1&2\\3&\end{array},\begin{array}{cc}3&2\\1&\end{array}\right\}

And now we can define the dominance order on column tabloids just like the dominance order for row tabloids. Of course, in doing so we have to alter our definition of the dominance order on Ferrers diagrams to take columns into account instead of rows.

But one thing at least will make our life simpler: it should be clear that we still have a dominance lemma for column dominance. To be explicit: if k< l, and k appears in a column to the right of l in the column tabloid [t], then (k\,l)[t] dominates [t] in the column dominance order.

January 20, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 2 Comments

Properties of Garnir Elements from Tableaux 1

Pick a Young tableau t, and sets A and B as we did last time. If there are more entries in A\uplus B than there are in the jth column of t — the one containing A — then g_{A,B}e_t=0. In particular, if we pick A and B by selecting a row descent, letting A be the left entry of the descent together with the entries below it, and letting B be the right entry together with the entries above it, then this situation will hold.

As a first step, I say that {S_{A\uplus B}}^-e_t=0. That is, if we allow all the permutations of entries in these two sets (along with signs) then everything cancels out. Indeed, let \sigma\in C_t be any column-stabilizing permutation. Our hypothesis on the number of entries in A\uplus B tells us that we must have some pair of a\in A and b\in B in the same row of \sigma t. Thus the swap (a\,b)\in S_{A\uplus B}. The sign lemma then tells us that {S_{A\uplus B}}^-\{\sigma t\}=0. Since this is true for every summand \{\sigma t\} of e_t=C_t^-\{t\}, it is true for e_t itself.

Now, our assertion is not that this is true for all of S_{A\uplus B}, but rather that it holds for our transversal \Pi. We use the decomposition

\displaystyle S_{A\uplus B}=\biguplus\limits_{\pi\in\Pi}\pi(S_A\times S_B)

This gives us a factorization

\displaystyle{S_{A\uplus B}}^-=\Pi^-(S_A\times S_B)^-=g_{A,B}(S_A\times S_B)^-

And so we conclude that g_{A,B}(S_A\times S_B)^-e_t=0.

But now we note that S_A\times S_B\subseteq C_t. So if \sigma\in S_A\times S_B we use the sign lemma to conclude

\displaystyle\mathrm{sgn}(\sigma)\sigma e_t=\mathrm{sgn}(\sigma)\sigma C_t^-\{t\}=C_t^-\{t\}=e_t

Thus (S_A\times S_B)^-e_t=\lvert S_A\times S_B\rvert e_t, and so

\displaystyle 0=g_{A,B}(S_A\times S_B)^-e_t=g_{A,B}\lvert S_A\times S_B\rvert e_t=\lvert S_A\times S_B\rvert g_{A,B}e_t

which can only happen if g_{A,B}e_t=0, as asserted.
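To make this concrete, here's a minimal Python sketch verifying g_{A,B}e_t=0 in the smallest interesting case: shape (2,1) with t having the row descent 2>1, so that A=\{2,3\} and B=\{1\}. The encoding of tabloids and all the names are my own.

```python
def act(perm, tabloid):
    # apply a permutation (a dict on {1,2,3}) to a row tabloid,
    # encoded as a tuple of frozensets, one per row
    return tuple(frozenset(perm[x] for x in row) for row in tabloid)

def apply_elt(elt, vec):
    # apply a group-algebra element, a list of (coefficient, permutation)
    # pairs, to a vector in M^lam, a dict from tabloids to coefficients
    out = {}
    for coeff, perm in elt:
        for tab, c in vec.items():
            key = act(perm, tab)
            out[key] = out.get(key, 0) + coeff * c
    return {k: v for k, v in out.items() if v != 0}

identity = {1: 1, 2: 2, 3: 3}
swap12 = {1: 2, 2: 1, 3: 3}
swap13 = {1: 3, 2: 2, 3: 1}
swap23 = {1: 1, 2: 3, 3: 2}

# t = [2 1 / 3], with the row descent 2 > 1 in its first row
tabloid = (frozenset({2, 1}), frozenset({3}))

# e_t = C_t^- {t}, where C_t permutes within the columns {2,3} and {1}
e_t = apply_elt([(1, identity), (-1, swap23)], {tabloid: 1})

# Garnir element for A = {2,3}, B = {1}: a signed transversal
# of S_A x S_B in the full S_3 on A u B
g = [(1, identity), (-1, swap12), (-1, swap13)]

print(apply_elt(g, e_t))  # {} : every tabloid cancels, so g_{A,B} e_t = 0
```

Of course this is the tiniest case, but the same encoding works for any shape, at the cost of enumerating the whole column stabilizer.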

This result will allow us to pick out a row descent in t and write down a linear combination of polytabloids that lets us rewrite e_t in terms of other polytabloids. And it will turn out that all the other polytabloids will be “more standard” than e_t.

January 18, 2011 Posted by | Algebra, Representation Theory, Representations of Symmetric Groups | 6 Comments