where $V$ is an $H$-module and $W$ is a $G$-module.
This is one of those items that everybody (for suitable values of “everybody”) knows to be true, but that nobody seems to have written down. I’ve been beating my head against it for days and finally figured out a way to make it work. Looking back, I’m not entirely certain I’ve ever actually proven it before.
So let’s start on the left with a linear map $f:V\to W$ that intertwines the action of each subgroup element $h\in H$. We want to extend this to a linear map from $V\uparrow_H^G$ to $W$ that intertwines the actions of all the elements of $G$.
Okay, so we’ve defined $V\uparrow_H^G=\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V$. But if we choose a transversal $\{t_i\}$ for $H$ in $G$ — like we did when we set up the induced matrices — then we can break down $\mathbb{C}[G]$ as the direct sum of a bunch of copies of $\mathbb{C}[H]$:

$\displaystyle\mathbb{C}[G]=\bigoplus\limits_i t_i\mathbb{C}[H]$
So then when we take the tensor product we find

$\displaystyle V\uparrow_H^G=\mathbb{C}[G]\otimes_{\mathbb{C}[H]}V\cong\bigoplus\limits_i\left(t_i\mathbb{C}[H]\right)\otimes_{\mathbb{C}[H]}V\cong\bigoplus\limits_i t_i\otimes V$

where each summand $t_i\otimes V$ is a copy of $V$.
So we need to define a map from each of these summands to $W$. But a vector in $t_i\otimes V$ looks like $t_i\otimes v$ for some $v\in V$. And thus a $G$-intertwinor $F$ extending $f$ — taking $t_1=e$, so that $e\otimes V$ is our original copy of $V$ — must be defined by

$F\left(t_i\otimes v\right)=F\left(t_i(e\otimes v)\right)=t_iF(e\otimes v)=t_if(v)$

since $F$ has to commute with the action of each $t_i$.
So, is this $F$ really a $G$-intertwinor? After all, we’ve really only used the fact that it commutes with the actions of the transversal elements $t_i$. Any element of the induced representation can be written uniquely as

$\displaystyle\sum\limits_i t_i\otimes v_i$
for some collection of vectors $v_i\in V$. We need to check that

$\displaystyle F\left(g\sum\limits_i t_i\otimes v_i\right)=gF\left(\sum\limits_i t_i\otimes v_i\right)$

for every $g\in G$; by linearity it’s enough to check this on each $t_i\otimes v_i$ separately.
Now, we know that left-multiplication by $g$ permutes the cosets of $H$. That is, $gt_i=t_{\sigma(i)}h_i$ for some permutation $\sigma$ and some $h_i\in H$. Thus we calculate

$F\left(gt_i\otimes v_i\right)=F\left(t_{\sigma(i)}h_i\otimes v_i\right)=F\left(t_{\sigma(i)}\otimes h_iv_i\right)=t_{\sigma(i)}f\left(h_iv_i\right)$
and so, since $f$ commutes with the action of $h_i$, and $F$ — by construction — commutes with each transversal element:

$t_{\sigma(i)}f\left(h_iv_i\right)=t_{\sigma(i)}h_if\left(v_i\right)=gt_if\left(v_i\right)=gF\left(t_i\otimes v_i\right)$
Okay, so we’ve got a map that takes $H$-module morphisms in $\hom_H\left(V,W\downarrow^G_H\right)$ to $G$-module homomorphisms in $\hom_G\left(V\uparrow_H^G,W\right)$. But is it an isomorphism? Well, we can go from $F$ back to $f$ by just looking at what $F$ does on the component $e\otimes V\cong V$.
If we only consider the actions of elements $h\in H$, they send this component back into itself, and by definition they commute with $F$. That is, the restriction of $F$ to this component is an $H$-intertwinor, and in fact it’s exactly the $f$ we started with.
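If you like, you can watch this correspondence work numerically. Here’s a quick sanity check — my own concrete example, not part of the argument above — using the standard character-theoretic dimension counts: for $G=S_3$ and $H=S_2$, with $V$ the sign representation of $H$ and $W$ the two-dimensional standard representation of $G$, the dimension of $\hom_G\left(V\uparrow_H^G,W\right)$ equals the dimension of $\hom_H\left(V,W\downarrow^G_H\right)$.

```python
from itertools import permutations
from fractions import Fraction

def compose(p, q):
    # (p∘q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))
H = [p for p in G if p[2] == 2]  # the copy of S_2 fixing the last point

# chi_V: the sign representation of H
chi_V = {h: sign(h) for h in H}

# chi_W: character of the standard 2-dim representation of S_3,
# chi(g) = (#fixed points) - 1
def chi_W(g):
    return sum(1 for i in range(3) if g[i] == i) - 1

# standard induced-character formula:
# chi_{V↑}(g) = (1/|H|) * sum of chi_V(x⁻¹gx) over x in G with x⁻¹gx in H
def chi_ind(g):
    total = 0
    for x in G:
        c = compose(compose(inverse(x), g), x)
        if c in chi_V:
            total += chi_V[c]
    return Fraction(total, len(H))

def inner(chi1, chi2, elems):
    # characters here are all real-valued, so no conjugation needed
    return Fraction(sum(chi1(g) * chi2(g) for g in elems), len(elems))

lhs = inner(chi_ind, chi_W, G)             # dim Hom_G(V↑, W)
rhs = inner(lambda h: chi_V[h], chi_W, H)  # dim Hom_H(V, W↓)
print(lhs, rhs)  # 1 1
```

Both inner products come out to $1$, as Frobenius reciprocity says they must — indeed, inducing the sign of $S_2$ up to $S_3$ gives the standard-plus-sign representation, which contains exactly one copy of $W$.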
First of all, functoriality of restriction is easy. Any intertwinor $f:V\to W$ between $G$-modules is immediately an intertwinor between the restrictions $V\downarrow^G_H$ and $W\downarrow^G_H$. Indeed, all it has to do is commute with the action of each $h\in H$ on the exact same spaces.
Functoriality of induction is similarly easy. If we have an intertwinor $f:V\to W$ between $H$-modules, we need to come up with one between $V\uparrow_H^G$ and $W\uparrow_H^G$. But the tensor product is a functor in each variable, so it’s straightforward to come up with $1_{\mathbb{C}[G]}\otimes f$. The catch is that since we’re taking the tensor product over $\mathbb{C}[H]$ in the middle, we have to worry about this map being well-defined. The tensor $gh\otimes v$ is equivalent to $g\otimes hv$. The first gets sent to $gh\otimes f(v)$, while the second gets sent to $g\otimes f(hv)=g\otimes hf(v)$, since $f$ is an $H$-intertwinor. But these are equivalent in $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}W$, so the map is well-defined.
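As a concrete (and entirely optional) illustration, here’s a small Python check — again my own example, with all the particular choices mine — that $1_{\mathbb{C}[G]}\otimes f$ really does intertwine the induced representations. Take $G=\mathbb{Z}/4$ written additively, $H=\{0,2\}$, $X$ the regular representation of $H$, $Y$ the sum of the trivial and sign representations, and $f$ a particular $H$-intertwinor between them. In the basis $\{t_i\otimes v_j\}$, the map $1\otimes f$ is just a block-diagonal matrix with a copy of $f$’s matrix in each diagonal block.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

H = {0, 2}
t = (0, 1)  # transversal for H in G = Z/4

# Two 2-dim representations of H: X is the regular representation of H,
# Y is trivial ⊕ sign.
X = {0: [[1, 0], [0, 1]], 2: [[0, 1], [1, 0]]}
Y = {0: [[1, 0], [0, 1]], 2: [[1, 0], [0, -1]]}

# A is the matrix of an H-intertwinor f: X → Y, i.e. A·X(h) = Y(h)·A for all h.
A = [[1, 1], [1, -1]]
for h in H:
    assert matmul(A, X[h]) == matmul(Y[h], A)

def induced(R, g):
    """Induced matrix: block (i, j) is R(-t_i + g + t_j) if that lies in H, else 0."""
    n, d = len(t), 2
    M = [[0] * (n * d) for _ in range(n * d)]
    for i in range(n):
        for j in range(n):
            x = (-t[i] + g + t[j]) % 4
            if x in H:
                for a in range(d):
                    for b in range(d):
                        M[i * d + a][j * d + b] = R[x][a][b]
    return M

# 1⊗f acts as A on each summand t_i ⊗ V: block-diagonal diag(A, A).
F = [[1, 1, 0, 0],
     [1, -1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, -1]]

ok = all(matmul(F, induced(X, g)) == matmul(induced(Y, g), F) for g in range(4))
print(ok)  # True
```

The check passes for every $g\in G$, exactly as the well-definedness argument above promises: the only thing that could go wrong is the $\mathbb{C}[H]$-balancing, and $f$ being an $H$-intertwinor takes care of it.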
Next: additivity of restriction. If $V$ and $W$ are $G$-modules, then so is $V\oplus W$. The restriction $\left(V\oplus W\right)\downarrow^G_H$ is just this direct sum considered as an $H$-module, which is clearly the direct sum of the restrictions: $V\downarrow^G_H\oplus W\downarrow^G_H$.
Finally we must check that induction is additive. Here, the induced matrices will come in handy. If $X$ and $Y$ are matrix representations of $H$, then the direct sum is the matrix representation

$\left(X\oplus Y\right)(h)=\begin{pmatrix}X(h)&0\\0&Y(h)\end{pmatrix}$
And then the induced matrix looks like:

$\left(\left(X\oplus Y\right)\uparrow_H^G\right)(g)=\begin{pmatrix}\left(X\oplus Y\right)\left(t_1^{-1}gt_1\right)&\cdots&\left(X\oplus Y\right)\left(t_1^{-1}gt_n\right)\\\vdots&\ddots&\vdots\\\left(X\oplus Y\right)\left(t_n^{-1}gt_1\right)&\cdots&\left(X\oplus Y\right)\left(t_n^{-1}gt_n\right)\end{pmatrix}$

where, as usual, $\left(X\oplus Y\right)(g)$ is interpreted as a zero block whenever $g\notin H$.
Now, it’s not hard to see that we can rearrange the basis — collecting all the basis vectors belonging to $X$ ahead of all those belonging to $Y$ — to make the matrix look like this:

$\begin{pmatrix}\left(X\uparrow_H^G\right)(g)&0\\0&\left(Y\uparrow_H^G\right)(g)\end{pmatrix}$
There’s no complicated mixing-up of basis elements amongst each other; just rearranging their order is enough. And this is just the direct sum $\left(X\uparrow_H^G\oplus Y\uparrow_H^G\right)(g)$.
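Here’s the rearrangement made completely explicit in a toy case of my own choosing: $G=\mathbb{Z}/4$ written additively, $H=\{0,2\}$, with $X$ the trivial and $Y$ the sign representation of $H$. Permuting the basis $(t_1\otimes x,\,t_1\otimes y,\,t_2\otimes x,\,t_2\otimes y)$ into $(t_1\otimes x,\,t_2\otimes x,\,t_1\otimes y,\,t_2\otimes y)$ turns $\left(X\oplus Y\right)\uparrow_H^G$ into $X\uparrow_H^G\oplus Y\uparrow_H^G$ on the nose.

```python
H = {0, 2}
t = (0, 1)  # transversal for H in G = Z/4

# Two 1-dim matrix representations of H: trivial and sign.
X = {0: [[1]], 2: [[1]]}
Y = {0: [[1]], 2: [[-1]]}
# Their direct sum as a 2-dim representation of H.
XY = {h: [[X[h][0][0], 0], [0, Y[h][0][0]]] for h in H}

def induced(R, d, g):
    """Induced matrix: block (i, j) is R(-t_i + g + t_j) if that lies in H, else 0."""
    n = len(t)
    M = [[0] * (n * d) for _ in range(n * d)]
    for i in range(n):
        for j in range(n):
            x = (-t[i] + g + t[j]) % 4
            if x in H:
                for a in range(d):
                    for b in range(d):
                        M[i * d + a][j * d + b] = R[x][a][b]
    return M

def block_diag(P, Q):
    n, m = len(P), len(Q)
    return [row + [0] * m for row in P] + [[0] * n + row for row in Q]

# Reorder (t1⊗x, t1⊗y, t2⊗x, t2⊗y) -> (t1⊗x, t2⊗x, t1⊗y, t2⊗y)
perm = [0, 2, 1, 3]

ok = all(
    [[induced(XY, 2, g)[perm[k]][perm[l]] for l in range(4)] for k in range(4)]
    == block_diag(induced(X, 1, g), induced(Y, 1, g))
    for g in range(4)
)
print(ok)  # True
```

The permutation alone does the job — no entry of the matrix changes, only where it sits — which is the whole content of the additivity claim.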