# The Unapologetic Mathematician

## Induction and Restriction are Additive Functors

Before we can prove the full version of Frobenius reciprocity, we need to see that induction and restriction are actually additive functors.

First of all, functoriality of restriction is easy. Any intertwinor $f:V\to W$ between $G$-modules is immediately an intertwinor between the restrictions $V\!\!\downarrow^G_H$ and $W\!\!\downarrow^G_H$. Indeed, all it has to do is commute with the action of each $h\in H\subseteq G$ on the exact same spaces.

Functoriality of induction is similarly easy. If we have an intertwinor $f:V\to W$ between $H$-modules, we need to come up with one between $\mathbb{C}[G]\otimes_HV$ and $\mathbb{C}[G]\otimes_HW$. But the tensor product is functorial in each variable, so the natural candidate is $1_{\mathbb{C}[G]}\otimes f$. The catch is that since we’re taking the tensor product over $H$ in the middle, we have to worry about this map being well-defined. The tensor $s\otimes v\in\mathbb{C}[G]\otimes V$ is equivalent to $sh^{-1}\otimes hv$. The first gets sent to $s\otimes f(v)$, while the second gets sent to $sh^{-1}\otimes f(hv)=sh^{-1}\otimes hf(v)$. But these are equivalent in $\mathbb{C}[G]\otimes_HW$, so the map is well-defined.
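
In one line, the check that $1_{\mathbb{C}[G]}\otimes f$ respects the equivalence is the chain

$\displaystyle(1_{\mathbb{C}[G]}\otimes f)(sh^{-1}\otimes hv)=sh^{-1}\otimes f(hv)=sh^{-1}\otimes hf(v)=s\otimes f(v)=(1_{\mathbb{C}[G]}\otimes f)(s\otimes v)$

where the middle step uses the fact that $f$ intertwines the action of $H$, and the next one moves $h$ back across the tensor product over $H$.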

Next: additivity of restriction. If $V$ and $W$ are $G$-modules, then so is $V\oplus W$, and restricting the action on this direct sum to $H$ clearly gives the direct sum of the restrictions: $(V\oplus W)\!\!\downarrow^G_H=V\!\!\downarrow^G_H\oplus W\!\!\downarrow^G_H$.
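
Writing $V(h)$ and $W(h)$ for the matrices of $h$ acting on $V$ and $W$ (a slight abuse of notation), this is just the observation that for each $h\in H$

$\displaystyle\left[(V\oplus W)\!\!\downarrow^G_H\right](h)=\left(\begin{array}{cc}V(h)&0\\{0}&W(h)\end{array}\right)=\left[V\!\!\downarrow^G_H\right](h)\oplus\left[W\!\!\downarrow^G_H\right](h)$

since $h$ already acted by exactly this block matrix as an element of $G$.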

Finally we must check that induction is additive. Here, the induced matrices will come in handy. If $X$ and $Y$ are matrix representations of $H$, then their direct sum is the matrix representation

$\displaystyle\left[X\oplus Y\right](h)=\left(\begin{array}{cc}X(h)&0\\{0}&Y(h)\end{array}\right)$

And then the induced matrix looks like:

$\displaystyle \left[X\oplus Y\right]\!\!\uparrow_H^G(g)=\left(\begin{array}{ccccccc}X(t_1^{-1}gt_1)&0&X(t_1^{-1}gt_2)&0&\cdots&X(t_1^{-1}gt_n)&0\\{0}&Y(t_1^{-1}gt_1)&0&Y(t_1^{-1}gt_2)&\cdots&0&Y(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&0&X(t_2^{-1}gt_2)&0&\cdots&X(t_2^{-1}gt_n)&0\\{0}&Y(t_2^{-1}gt_1)&0&Y(t_2^{-1}gt_2)&\cdots&0&Y(t_2^{-1}gt_n)\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\X(t_n^{-1}gt_1)&0&X(t_n^{-1}gt_2)&0&\cdots&X(t_n^{-1}gt_n)&0\\{0}&Y(t_n^{-1}gt_1)&0&Y(t_n^{-1}gt_2)&\cdots&0&Y(t_n^{-1}gt_n)\end{array}\right)$

Now, it’s not hard to see that we can rearrange the basis to make the matrix look like this:

$\displaystyle\left(\begin{array}{cc}\begin{array}{cccc}X(t_1^{-1}gt_1)&X(t_1^{-1}gt_2)&\cdots&X(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&X(t_2^{-1}gt_2)&\cdots&X(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\X(t_n^{-1}gt_1)&X(t_n^{-1}gt_2)&\cdots&X(t_n^{-1}gt_n)\end{array}&0\\{0}&\begin{array}{cccc}Y(t_1^{-1}gt_1)&Y(t_1^{-1}gt_2)&\cdots&Y(t_1^{-1}gt_n)\\Y(t_2^{-1}gt_1)&Y(t_2^{-1}gt_2)&\cdots&Y(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\Y(t_n^{-1}gt_1)&Y(t_n^{-1}gt_2)&\cdots&Y(t_n^{-1}gt_n)\end{array}\end{array}\right)$

There’s no complicated mixing up of basis elements amongst each other; just rearranging their order is enough. And this is just the direct sum $X\!\!\uparrow_H^G\oplus Y\!\!\uparrow_H^G$, so induction is additive as well.
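
To see this block rearrangement in action, here’s a small sanity check in Python/NumPy — a hypothetical example, not from the discussion above. Take $G=\mathbb{Z}_4$ generated by $g$, $H=\{1,g^2\}$, transversal $t_1=1$, $t_2=g$, with $X$ the trivial character of $H$ and $Y$ the character sending $g^2\mapsto-1$. Recall the convention that $X(t_i^{-1}gt_j)$ means the zero block whenever $t_i^{-1}gt_j\notin H$.

```python
import numpy as np

# G = Z_4 = <g>, elements encoded as exponents 0..3; H = {1, g^2} = {0, 2}.
def in_H(k):
    return k % 4 in (0, 2)

# Two 1-dimensional matrix representations of H:
# X is trivial, Y sends g^2 to -1.
X = {0: np.array([[1.0]]), 2: np.array([[1.0]])}
Y = {0: np.array([[1.0]]), 2: np.array([[-1.0]])}

def direct_sum(A, B):
    """Block-diagonal direct sum of two matrices."""
    Z = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    Z[:A.shape[0], :A.shape[1]] = A
    Z[A.shape[0]:, A.shape[1]:] = B
    return Z

T = [0, 1]  # transversal exponents: t_1 = g^0, t_2 = g^1

def induce(rep, dim, g):
    """Induced matrix: block (i,j) is rep(t_i^{-1} g t_j), or 0 if not in H."""
    n = len(T)
    M = np.zeros((n * dim, n * dim))
    for i, ti in enumerate(T):
        for j, tj in enumerate(T):
            k = (-ti + g + tj) % 4
            if in_H(k):
                M[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = rep[k]
    return M

# Direct sum of the two H-representations, then induce it up to G:
XY = {k: direct_sum(X[k], Y[k]) for k in (0, 2)}

g = 1                                                    # the generator of G
ind_sum = induce(XY, 2, g)                               # [X (+) Y]^(up)(g)
sum_ind = direct_sum(induce(X, 1, g), induce(Y, 1, g))   # X^(up)(g) (+) Y^(up)(g)

# Reordering the basis (x_1, y_1, x_2, y_2) -> (x_1, x_2, y_1, y_2)
# is a permutation matrix, and conjugating by it un-interleaves the blocks:
P = np.eye(4)[[0, 2, 1, 3]]
assert np.allclose(P @ ind_sum @ P.T, sum_ind)
```

The conjugating matrix $P$ only permutes basis vectors, matching the claim that no genuine mixing of basis elements is needed.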