General Linear Groups are Lie Groups
One of the most important examples of a Lie group we’ve already seen: the general linear group $GL(V)$ of a finite dimensional vector space $V$. Of course for the vector space $\mathbb{R}^n$ this is the same as — or at least isomorphic to — the group $GL_n(\mathbb{R})$ of all invertible $n\times n$ real matrices, so that’s a Lie group we can really get our hands on. And if $V$ has dimension $n$, then $V\cong\mathbb{R}^n$, and thus $GL(V)\cong GL_n(\mathbb{R})$.
So, how do we know that it’s a Lie group? Well, obviously it’s a group, but what about the topology? The matrix group sits inside the algebra $M_n(\mathbb{R})$ of all $n\times n$ matrices, which is an $n^2$-dimensional vector space. Even better, it’s an open subset, which we can see by considering the (continuous) map $\det:M_n(\mathbb{R})\to\mathbb{R}$. Since $GL_n(\mathbb{R})$ is the preimage of $\mathbb{R}\setminus\{0\}$ — which is an open subset of $\mathbb{R}$ — $GL_n(\mathbb{R})$ is an open subset of $M_n(\mathbb{R})$.
So we can conclude that $GL_n(\mathbb{R})$ is an open submanifold of $M_n(\mathbb{R})$, which comes equipped with the standard differentiable structure on $\mathbb{R}^{n^2}$. Matrix multiplication is clearly smooth, since we can write each component of a product matrix $AB$ as a (quadratic) polynomial in the entries of $A$ and $B$. As for inversion, Cramer’s rule expresses each entry of the inverse matrix $A^{-1}$ as the quotient of a (degree $n-1$) polynomial in the entries of $A$ by the determinant of $A$. So long as $A$ is invertible the denominator is nonzero, and a quotient of smooth functions with nonvanishing denominator is smooth at $A$.
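To make Cramer’s rule concrete, here’s a minimal Python sketch (my own illustration, not from the original post; the function names are hypothetical) that computes each entry of $A^{-1}$ as a signed cofactor divided by $\det A$, using exact rational arithmetic:

```python
from fractions import Fraction

def minor(m, i, j):
    """The matrix m with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    """Cramer's rule: (A^{-1})_{ij} = (-1)^{i+j} det(minor(A, j, i)) / det(A).
    Each entry is a polynomial in the entries of A divided by det(A)."""
    n, d = len(m), det(m)   # d != 0 exactly when A is invertible
    return [[Fraction((-1) ** (i + j) * det(minor(m, j, i)), d)
             for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
print(inverse(A))
```

Since every entry of the output is a polynomial in the entries of $A$ divided by the single nonvanishing smooth function $\det$, the smoothness of inversion on $GL_n(\mathbb{R})$ is visible right in the formula.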
The Dominance Lemma
We will have use of the following technical result about the dominance order:
Let $t$ and $s$ be Young tableaux of shape $\lambda$ and $\mu$, respectively. If for each row, all the entries on that row of $s$ are in different columns of $t$, then $\lambda\trianglerighteq\mu$. Essentially, the idea is that since all the entries on a row in $s$ fit into different columns of $t$, the shape $\lambda$ must be wide enough to handle that row. Not only that, but it’s wide enough to handle all of the rows of that width at once.
More explicitly, we can rearrange the entries within the columns of $t$ so that all the entries in the first $i$ rows of $s$ fit into the first $i$ rows of $t$. This is actually an application of the pigeonhole principle: if we have a column in $t$ that contains $j>i$ elements from the first $i$ rows of $s$, then look at which row each one came from. Since $j>i$, we must have two entries in the column coming from the same row, which we assumed doesn’t happen. So each column of $t$ contains at most $i$ entries from the first $i$ rows of $s$, and we can slide them up to the top of the column.
Yes, this does change the tableau $t$, but our conclusion is about the shape of $t$, which remains the same.
So now we can figure $\lambda_1+\lambda_2+\dots+\lambda_i$ as the number of entries in the first $i$ rows of $t$. Since these contain all the entries from the first $i$ rows of $s$, it must be greater than or equal to that number. But that number is just as clearly $\mu_1+\mu_2+\dots+\mu_i$. Since this holds for all $i$, we conclude that $\lambda$ dominates $\mu$: $\lambda\trianglerighteq\mu$.
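As a quick sanity check on the conclusion, here’s a small Python sketch (my own illustration; `dominates` is a hypothetical name) that tests dominance by comparing partial sums:

```python
from itertools import zip_longest

def dominates(lam, mu):
    """True if lam dominates mu: lam_1 + ... + lam_i >= mu_1 + ... + mu_i for all i."""
    sum_lam = sum_mu = 0
    for l, m in zip_longest(lam, mu, fillvalue=0):
        sum_lam += l
        sum_mu += m
        if sum_lam < sum_mu:
            return False
    return True

print(dominates((3, 3), (2, 2, 1, 1)))  # True:  3>=2, 6>=4, 6>=5, 6>=6
print(dominates((4, 1, 1), (3, 3)))     # False: 4>=3, but 5<6
```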
Young Tableaux
We want to come up with some nice sets for our symmetric group to act on. Our first step in this direction is to define a “Young tableau”.
If $\lambda\vdash n$ is a partition of $n$, we define a Young tableau of shape $\lambda$ to be an array of numbers. We start with the Ferrers diagram of the partition $\lambda$, and we replace the dots with the numbers $1$ to $n$ in any order. Clearly, there are $n!$ Young tableaux of shape $\lambda$ if $\lambda\vdash n$.
For example, if $\lambda=(2,1)$, the Ferrers diagram is

$\begin{array}{cc}\bullet&\bullet\\\bullet&\end{array}$

We see that $n=3$, and so there are $3!=6$ Young tableaux of shape $(2,1)$. They are

$\begin{array}{cc}1&2\\3&\end{array}\qquad\begin{array}{cc}1&3\\2&\end{array}\qquad\begin{array}{cc}2&1\\3&\end{array}\qquad\begin{array}{cc}2&3\\1&\end{array}\qquad\begin{array}{cc}3&1\\2&\end{array}\qquad\begin{array}{cc}3&2\\1&\end{array}$

We write $t_{i,j}$ for the entry in the $(i,j)$ place. For example, the last tableau above has $t_{1,1}=3$, $t_{1,2}=2$, and $t_{2,1}=1$.
We also call a Young tableau of shape $\lambda$ a “$\lambda$-tableau”, and we write $\operatorname{sh}(t)=\lambda$. We can write a generic $\lambda$-tableau as $t^\lambda$.
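To see the count $n!$ in action, here’s a short Python sketch (mine; the function name is made up) that generates every $\lambda$-tableau by filling the shape with each permutation of $1,\dots,n$:

```python
from itertools import permutations

def tableaux(shape):
    """Yield every Young tableau of the given shape, one per permutation of 1..n."""
    n = sum(shape)
    for perm in permutations(range(1, n + 1)):
        rows, start = [], 0
        for length in shape:
            rows.append(perm[start:start + length])
            start += length
        yield tuple(rows)

for t in tableaux((2, 1)):
    print(t)   # six fillings, from ((1, 2), (3,)) to ((3, 2), (1,))
```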
Partitions and Ferrers Diagrams
We’ve discussed partitions before, but they’re about to become very significant. Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_k)$ be a sequence of positive integers with $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_k$. We write

$\lvert\lambda\rvert=\lambda_1+\lambda_2+\dots+\lambda_k$

If $\lvert\lambda\rvert=n$ we say $\lambda$ is a partition of $n$, and we write $\lambda\vdash n$. A partition, then, is a way of breaking a positive integer $n$ into a bunch of smaller positive integers, and sorting them in (the unique) weakly decreasing order.
We visualize partitions with Ferrers diagrams. The best way to explain this is with an example: if $\lambda=(3,3,2,1)$, the Ferrers diagram of $\lambda$ is

$\begin{array}{ccc}\bullet&\bullet&\bullet\\\bullet&\bullet&\bullet\\\bullet&\bullet&\\\bullet&&\end{array}$

The diagram consists of left-justified rows, one for each part in the partition $\lambda$, arranged from top to bottom in weakly decreasing order. We can also draw the Ferrers diagram as boxes:

$\begin{array}{|c|c|c|}\hline\;&\;&\;\\\hline\;&\;&\;\\\hline\;&\;\\\cline{1-2}\;\\\cline{1-1}\end{array}$
In both of these Ferrers diagrams, the $(2,3)$ position is the third spot on the second row; we find it by counting down two rows and across three boxes. We will have plenty of call to identify which positions in a Ferrers diagram are which in the future.
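For experimenting, here’s a tiny Python sketch (my own, not the post’s) that prints the dot form of a Ferrers diagram:

```python
def ferrers(partition):
    """Print the Ferrers diagram: one left-justified row of dots per part."""
    for part in partition:
        print(' '.join('*' * part))

ferrers((3, 3, 2, 1))
# * * *
# * * *
# * *
# *
```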
The Road Forward
We’ve been talking a lot about the general theory of finite group representations. But our goal is to talk about symmetric groups in particular. Now, we’ve seen that the character table of a finite group is square, meaning there are as many irreducible representations of a group $G$ as there are conjugacy classes in $G$. But we’ve also noted that there’s no reason to believe that these have any sort of natural correspondence.

But for the symmetric group $S_n$, there’s something we can say. We know that conjugacy classes in symmetric groups correspond to cycle types. Cycle types correspond to integer partitions of $n$. And from a partition we will build a representation.
For a first step, let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_k)$ be a partition, with $\lambda\vdash n$. We can use this to come up with a subgroup $S_\lambda\subseteq S_n$. Given a set $X$ we will write $S_X$ for the group of permutations of that set. For example, $S_{\{1,\dots,\lambda_1\}}$ permutes the first $\lambda_1$ positive integers, and $S_{\{\lambda_1+1,\dots,\lambda_1+\lambda_2\}}$ permutes the next $\lambda_2$ of them. We can put a bunch of these groups together to build

$S_\lambda=S_{\{1,\dots,\lambda_1\}}\times S_{\{\lambda_1+1,\dots,\lambda_1+\lambda_2\}}\times\dots\times S_{\{n-\lambda_k+1,\dots,n\}}$

Elements of $S_\lambda$ permute the same set as $S_n$, and so $S_\lambda\subseteq S_n$, but only in certain discrete chunks. Numbers in each block can be shuffled arbitrarily among each other, but the different blocks are never mixed. Really, all that matters is that the chunks have sizes $\lambda_1$ through $\lambda_k$, but choosing them like this is a nicely concrete way to do it.
So, now we can define the $S_n$-module $M^\lambda$ by inducing the trivial representation from the subgroup $S_\lambda$ to all of $S_n$:

$M^\lambda=1\!\uparrow_{S_\lambda}^{S_n}$

Now, the $M^\lambda$ are not all irreducible, but we will see how to identify a particular irreducible submodule $S^\lambda$ of each one, and the $S^\lambda$ will all be distinct. Since they correspond to partitions $\lambda\vdash n$, there are exactly as many of them as there are conjugacy classes in $S_n$, and so they must be all the irreducible $S_n$-modules, up to isomorphism.
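Since inducing multiplies dimension by the index of the subgroup, $M^\lambda$ has dimension $[S_n:S_\lambda]=n!/(\lambda_1!\cdots\lambda_k!)$, a multinomial coefficient. Here’s a quick Python sketch (mine, with a hypothetical name) computing it:

```python
from math import factorial

def dim_M(partition):
    """[S_n : S_lambda] = n! / (lambda_1! * ... * lambda_k!)."""
    n = sum(partition)
    index = factorial(n)
    for part in partition:
        index //= factorial(part)
    return index

print(dim_M((2, 1)))        # 3: one basis vector per coset of S_(2,1) in S_3
print(dim_M((3, 2)))        # 10
print(dim_M((1, 1, 1, 1)))  # 24 = 4!, inducing from the trivial subgroup
```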
Inducing the Trivial Representation
We really should see an example of inducing a representation. One example we’ll find extremely useful is when we start with the trivial representation.
So, let $G$ be a group and $H\subseteq G$ be a subgroup. Since this will be coming up a bunch, let’s just start writing $1$ for the trivial representation that sends each element of $H$ to the $1\times1$ matrix $\begin{pmatrix}1\end{pmatrix}$. We want to consider the induced representation $1\!\uparrow_H^G$.
Well, we have a matrix representation, so we look at the induced matrix representation. We have to pick a transversal $t_1,\dots,t_k$ for the subgroup $H$ in $G$. Then we have the induced matrix in block form:

$1\!\uparrow_H^G(g)=\left(1(t_i^{-1}gt_j)\right)_{i,j}$

In this case, each “block” is just a number, and it’s either $1$ or $0$, depending on whether $t_i^{-1}gt_j$ is in $H$ or not. But if $t_i^{-1}gt_j\in H$, then $g(t_jH)=(t_iH)$. That is, this is exactly the coset representation of $G$ corresponding to $H$. And so all of these coset representations arise as induced representations.
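As an illustration (entirely my own, not from the post), here’s a Python sketch that builds these $0/1$ matrices for $G=S_3$ and $H=\{e,(1\,2)\}$, storing a permutation as the tuple of its values on $0,1,2$:

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

H = [(0, 1, 2), (1, 0, 2)]             # {e, (1 2)}, 0-indexed
T = [(0, 1, 2), (2, 1, 0), (0, 2, 1)]  # transversal: e, (1 3), (2 3)

def induced_trivial(g):
    """The block matrix (1(t_i^{-1} g t_j)): entry 1 iff t_i^{-1} g t_j lies in H."""
    return [[1 if compose(inverse(ti), compose(g, tj)) in H else 0
             for tj in T] for ti in T]

for row in induced_trivial((1, 0, 2)):  # the transposition swapping the first two points
    print(row)  # [1, 0, 0], [0, 0, 1], [0, 1, 0]: it fixes eH and swaps the other cosets
```

Each matrix is a permutation matrix recording how $g$ shuffles the cosets $t_jH$, exactly as the argument above predicts.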
Dual Frobenius Reciprocity
Our proof of Frobenius reciprocity shows that induction is a left adjoint to restriction. In fact, we could use this to define induction in the first place: show that the restriction functor must have a left adjoint, and let that adjoint be induction. The downside is that we wouldn’t get an explicit construction for free like we have.

One interesting thing about this approach, though, is that we can also show that restriction must have a right adjoint, which we might call “coinduction”. But it turns out that induction and coinduction are naturally isomorphic! That is, we can show that

$\hom_G(U,W\!\uparrow_H^G)\cong\hom_H(U\!\downarrow_H^G,W)$

Indeed, we can use the duality on hom spaces and apply it to yesterday’s Frobenius adjunction:

$\hom_G(U,W\!\uparrow_H^G)\cong\hom_G(W\!\uparrow_H^G,U)^*\cong\hom_H(W,U\!\downarrow_H^G)^*\cong\hom_H(U\!\downarrow_H^G,W)$
Sometimes when two functors are both left and right adjoints of each other, we say that they are a “Frobenius pair”.
Now let’s take this relation and apply our “decategorifying” correspondence that passes from representations down to characters. If the representation $W$ has character $\chi$ and $U$ has character $\psi$, then hom-spaces become inner products, and (natural) isomorphisms become equalities. We find:

$\langle\psi,\chi\!\uparrow_H^G\rangle_G=\langle\psi\!\downarrow_H^G,\chi\rangle_H$

which is our “fake” Frobenius reciprocity relation.
(Real) Frobenius Reciprocity
Now we come to the real version of Frobenius reciprocity. It takes the form of an adjunction between the functors of induction and restriction:

$\hom_H(V,U\!\downarrow_H^G)\cong\hom_G(V\!\uparrow_H^G,U)$

where $U$ is a $G$-module and $V$ is an $H$-module.
This is one of those items that everybody (for suitable values of “everybody”) knows to be true, but that nobody seems to have written down. I’ve been beating my head against it for days and finally figured out a way to make it work. Looking back, I’m not entirely certain I’ve ever actually proven it before.
So let’s start on the left with a linear map $f:V\to U$ that intertwines the action of each subgroup element $h\in H$. We want to extend this to a linear map from $V\!\uparrow_H^G$ to $U$ that intertwines the actions of all the elements of $G$.
Okay, so remember that we’ve defined $V\!\uparrow_H^G=\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V$. But if we choose a transversal $t_1,\dots,t_k$ for $H$ — like we did when we set up the induced matrices — then we can break down $\mathbb{C}[G]$ as the direct sum of a bunch of copies of $\mathbb{C}[H]$:

$\mathbb{C}[G]=\bigoplus\limits_{i=1}^k t_i\mathbb{C}[H]$

So then when we take the tensor product we find

$V\!\uparrow_H^G=\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V=\left(\bigoplus\limits_{i=1}^k t_i\mathbb{C}[H]\right)\otimes_{\mathbb{C}[H]}V\cong\bigoplus\limits_{i=1}^k t_i\otimes V$

So we need to define a map from each of these summands to $U$. But a vector in $t_i\otimes V$ looks like $t_i\otimes v$ for some $v\in V$. And thus a $G$-intertwinor $\bar{f}$ extending $f$ must be defined by $\bar{f}(t_i\otimes v)=t_if(v)$.
So, is this really a $G$-intertwinor? After all, we’ve really only used the fact that it commutes with the actions of the transversal elements $t_i$. Any element of the induced representation can be written uniquely as

$\sum\limits_{i=1}^k t_i\otimes v_i$

for some collection of vectors $v_i\in V$. We need to check that $\bar{f}(g(t_i\otimes v_i))=g\bar{f}(t_i\otimes v_i)$.

Now, we know that left-multiplication by $g$ permutes the cosets of $H$. That is, $gt_i=t_{\sigma(i)}h_i$ for some index $\sigma(i)$ and some $h_i\in H$. Thus we calculate

$\bar{f}(g(t_i\otimes v_i))=\bar{f}(t_{\sigma(i)}h_i\otimes v_i)=\bar{f}(t_{\sigma(i)}\otimes h_iv_i)=t_{\sigma(i)}f(h_iv_i)$

and so, since $f$ commutes with $h_i\in H$ and $\bar{f}$ with each transversal element,

$t_{\sigma(i)}f(h_iv_i)=t_{\sigma(i)}h_if(v_i)=gt_if(v_i)=g\bar{f}(t_i\otimes v_i)$
Okay, so we’ve got a map that takes $H$-module morphisms in $\hom_H(V,U\!\downarrow_H^G)$ to $G$-module homomorphisms in $\hom_G(V\!\uparrow_H^G,U)$. But is it an isomorphism? Well, we can go from $\hom_G(V\!\uparrow_H^G,U)$ back to $\hom_H(V,U\!\downarrow_H^G)$ by just looking at what $\bar{f}$ does on the component

$t_1\otimes V\cong V$

where we take $t_1=e$. If we only consider the actions of elements $h\in H$, they send this component back into itself, and by definition they commute with $\bar{f}$. That is, the restriction of $\bar{f}$ to this component is an $H$-intertwinor, and in fact it’s the same as the $f$ we started with.
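To see the construction in action, here’s a small Python sketch (entirely my own illustration) for $G=S_3$, $H=\{e,(1\,2)\}$, with $V$ the trivial $H$-module and $U$ the $3$-dimensional permutation representation of $G$. Then $V\!\uparrow_H^G$ is the coset representation, $f$ is determined by an $H$-fixed vector $f(1)\in U$, and $\bar{f}$ is the matrix whose columns are $U(t_i)f(1)$:

```python
import numpy as np
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def perm_matrix(p):
    """U(p): the permutation matrix sending e_j to e_{p(j)}."""
    m = np.zeros((len(p), len(p)))
    for j in range(len(p)):
        m[p[j], j] = 1
    return m

H = [(0, 1, 2), (1, 0, 2)]             # {e, (1 2)}, 0-indexed
T = [(0, 1, 2), (2, 1, 0), (0, 2, 1)]  # transversal, with t_1 = e

def induced(g):
    """V-up for trivial V: the coset representation, as 0/1 matrices."""
    return np.array([[1.0 if compose(inverse(ti), compose(g, tj)) in H else 0.0
                      for tj in T] for ti in T])

f1 = np.array([1.0, 1.0, 0.0])  # f(1): fixed by H, so f is an H-intertwinor
fbar = np.column_stack([perm_matrix(t) @ f1 for t in T])  # columns t_i f(1)

for g in permutations(range(3)):
    assert np.allclose(perm_matrix(g) @ fbar, fbar @ induced(g))
print("fbar intertwines V-up with U for every g in S_3")
```

The assertion is exactly the statement that commuting with each $h\in H$ plus the formula $\bar{f}(t_i\otimes v)=t_if(v)$ already forces $\bar{f}$ to commute with all of $G$.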
Induction and Restriction are Additive Functors
Before we can prove the full version of Frobenius reciprocity, we need to see that induction and restriction are actually additive functors.
First of all, functoriality of restriction is easy. Any intertwinor $f:V\to W$ between $G$-modules is immediately an intertwinor between the restrictions $V\!\downarrow_H^G$ and $W\!\downarrow_H^G$. Indeed, all it has to do is commute with the action of each $h\in H\subseteq G$ on the exact same spaces.
Functoriality of induction is similarly easy. If we have an intertwinor $f:V\to W$ between $H$-modules, we need to come up with one between $V\!\uparrow_H^G$ and $W\!\uparrow_H^G$. But the tensor product is a functor in each variable, so it’s straightforward to come up with $1_{\mathbb{C}[G]}\otimes f$. The catch is that since we’re taking the tensor product over $\mathbb{C}[H]$ in the middle, we have to worry about this map being well-defined. The tensor $gh\otimes v$ is equivalent to $g\otimes hv$. The first gets sent to $gh\otimes f(v)$, while the second gets sent to $g\otimes f(hv)=g\otimes hf(v)$. But these are equivalent in $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}W$, so the map is well-defined.
Next: additivity of restriction. If $V$ and $W$ are $G$-modules, then so is $V\oplus W$. The restriction $(V\oplus W)\!\downarrow_H^G$ is just the restriction of this direct sum to $H$, which is clearly the direct sum of the restrictions $V\!\downarrow_H^G\oplus W\!\downarrow_H^G$.
Finally we must check that induction is additive. Here, the induced matrices will come in handy. If $X$ and $Y$ are matrix representations of $H$, then the direct sum is the matrix representation

$(X\oplus Y)(h)=\begin{pmatrix}X(h)&0\\0&Y(h)\end{pmatrix}$

And then the induced matrix looks like:

$(X\oplus Y)\!\uparrow_H^G(g)=\begin{pmatrix}X(t_1^{-1}gt_1)&0&\cdots&X(t_1^{-1}gt_k)&0\\0&Y(t_1^{-1}gt_1)&\cdots&0&Y(t_1^{-1}gt_k)\\\vdots&\vdots&\ddots&\vdots&\vdots\\X(t_k^{-1}gt_1)&0&\cdots&X(t_k^{-1}gt_k)&0\\0&Y(t_k^{-1}gt_1)&\cdots&0&Y(t_k^{-1}gt_k)\end{pmatrix}$

Now, it’s not hard to see that we can rearrange the basis to make the matrix look like this:

$\begin{pmatrix}X\!\uparrow_H^G(g)&0\\0&Y\!\uparrow_H^G(g)\end{pmatrix}$

There’s no complicated mixing up of basis elements amongst each other; just rearranging their order is enough. And this is just the direct sum $X\!\uparrow_H^G(g)\oplus Y\!\uparrow_H^G(g)$.
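Here’s a small numpy sketch (an illustration of mine, using random stand-ins for the blocks $X(t_i^{-1}gt_j)$ and $Y(t_i^{-1}gt_j)$) of that rearrangement: reordering the basis so that all the $X$-coordinates come before all the $Y$-coordinates turns the interleaved block matrix into the block-diagonal one:

```python
import numpy as np

k, dx, dy = 3, 2, 2                     # k cosets; X has dim dx, Y has dim dy
rng = np.random.default_rng(0)
X = rng.integers(0, 5, (k, k, dx, dx))  # stand-ins for the blocks X(t_i^{-1} g t_j)
Y = rng.integers(0, 5, (k, k, dy, dy))  # stand-ins for the blocks Y(t_i^{-1} g t_j)

# Induced matrix of X (+) Y: a k x k array of blocks diag(X_ij, Y_ij).
def blk(i, j):
    return np.block([[X[i, j], np.zeros((dx, dy))],
                     [np.zeros((dy, dx)), Y[i, j]]])
induced_sum = np.block([[blk(i, j) for j in range(k)] for i in range(k)])

# Reorder the basis: the X-coordinates of every coset first, then the Y's.
perm = [i * (dx + dy) + a for i in range(k) for a in range(dx)] \
     + [i * (dx + dy) + dx + b for i in range(k) for b in range(dy)]
rearranged = induced_sum[np.ix_(perm, perm)]

x_ind = np.block([[X[i, j] for j in range(k)] for i in range(k)])  # X-induced
y_ind = np.block([[Y[i, j] for j in range(k)] for i in range(k)])  # Y-induced
direct_sum = np.block([[x_ind, np.zeros((k * dx, k * dy))],
                       [np.zeros((k * dy, k * dx)), y_ind]])
assert np.array_equal(rearranged, direct_sum)
```

No entries mix; the permutation of basis vectors alone carries one form to the other.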
(Fake) Frobenius Reciprocity
Today, we can prove the Frobenius reciprocity formula, which relates induced characters to restricted ones.
Now, naïvely we might hope that induction and restriction would be inverse processes. But this is clearly impossible, since if we start with a $G$-module $V$ with dimension $d$, it restricts to an $H$-module $V\!\downarrow_H^G$ which also has dimension $d$. Then we can induce it to a $G$-module $V\!\downarrow_H^G\!\uparrow_H^G$ with dimension $[G:H]d$. This can’t be the original representation unless $H=G$, which is a pretty trivial example indeed.
So, instead we have the following “reciprocity” relation. If $\chi$ is a character of the group $G$ and $\psi$ is a character of the subgroup $H$, we find that

$\langle\psi\!\uparrow_H^G,\chi\rangle_G=\langle\psi,\chi\!\downarrow_H^G\rangle_H$

where the left inner product is that of class functions on $G$, while the right is that of class functions on $H$. We calculate the inner products using our formula for induced characters:

$\begin{aligned}\langle\psi\!\uparrow_H^G,\chi\rangle_G&=\frac{1}{\lvert G\rvert}\sum\limits_{g\in G}\psi\!\uparrow_H^G(g)\overline{\chi(g)}\\&=\frac{1}{\lvert G\rvert\lvert H\rvert}\sum\limits_{g\in G}\sum\limits_{x\in G}\psi(x^{-1}gx)\overline{\chi(g)}\\&=\frac{1}{\lvert G\rvert\lvert H\rvert}\sum\limits_{x\in G}\sum\limits_{k\in G}\psi(k)\overline{\chi(xkx^{-1})}\\&=\frac{1}{\lvert G\rvert\lvert H\rvert}\sum\limits_{x\in G}\sum\limits_{k\in G}\psi(k)\overline{\chi(k)}\\&=\frac{1}{\lvert H\rvert}\sum\limits_{h\in H}\psi(h)\overline{\chi(h)}\\&=\langle\psi,\chi\!\downarrow_H^G\rangle_H\end{aligned}$

where we have also used the fact that $\chi$ is a class function on $G$, and that $\psi$ is defined to be zero away from $H$.
As a special case, let $\chi$ and $\psi$ be irreducible characters of $G$ and $H$ respectively, so the inner products are multiplicities. For example, $\langle\psi\!\uparrow_H^G,\chi\rangle_G$ is the multiplicity of $\chi$ in the representation obtained by inducing $\psi$ to a representation of $G$. On the other hand, $\langle\psi,\chi\!\downarrow_H^G\rangle_H$ is the multiplicity of $\psi$ in the representation obtained by restricting $\chi$ down to $H$. The Frobenius reciprocity theorem asserts that these multiplicities are identical.
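Since everything here is concrete, we can check the relation numerically. Here’s a Python sketch (my own; the character values for $S_3$ and $S_2$ are the standard ones) that verifies $\langle\psi\!\uparrow_H^G,\chi\rangle_G=\langle\psi,\chi\!\downarrow_H^G\rangle_H$ for $H=\{e,(1\,2)\}$ inside $G=S_3$:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def sign(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))   # S_3
H = [(0, 1, 2), (1, 0, 2)]         # {e, (1 2)}, a copy of S_2

def fixed(p):
    return sum(1 for i, x in enumerate(p) if i == x)

chars_G = [lambda g: 1, sign, lambda g: fixed(g) - 1]  # trivial, sign, standard
chars_H = [lambda h: 1, sign]                          # trivial, sign

def induce(psi, g):
    """psi-up(g) = (1/|H|) * sum over x in G of psi(x^{-1} g x), with psi zero off H."""
    conj = (compose(inverse(x), compose(g, x)) for x in G)
    return sum(psi(v) for v in conj if v in H) / len(H)

def inner(f1, f2, group):
    """Inner product of class functions (real characters, so no conjugation needed)."""
    return sum(f1(g) * f2(g) for g in group) / len(group)

for psi in chars_H:
    for chi in chars_G:
        lhs = inner(lambda g: induce(psi, g), chi, G)  # <psi-up, chi>_G
        rhs = inner(psi, chi, H)                       # <psi, chi-down>_H
        assert abs(lhs - rhs) < 1e-9
print("Frobenius reciprocity holds for all six pairs")
```

Running it confirms, for instance, that the trivial character of $S_2$ induces up to a character containing the trivial character of $S_3$ once and the standard character once, matching the restrictions on the other side.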
Now, why did I call this post “fake” Frobenius reciprocity? Well, this formula gets a lot of press. But really it’s a pale shadow of the real Frobenius reciprocity theorem. This one is a simple equation that holds at the level of characters, while the real one is a natural isomorphism that holds at the level of representations themselves.