# The Unapologetic Mathematician

## The Submodule of Invariants

If $V$ is a module of a Lie algebra $L$, there is one submodule that turns out to be rather interesting: the submodule $V^0$ of vectors $v\in V$ such that $x\cdot v=0$ for all $x\in L$. We call these vectors “invariants” of $L$.

As an illustration of how interesting these are, consider the modules we looked at last time. What are the invariant linear maps $\hom(V,W)^0$ from one module $V$ to another $W$? We consider the action of $x\in L$ on a linear map $f$:

$\displaystyle\left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v)=0$

Or, in other words:

$\displaystyle x\cdot f(v)=f(x\cdot v)$

That is, a linear map $f\in\hom(V,W)$ is invariant if and only if it intertwines the actions on $V$ and $W$. In other words, $\hom_\mathbb{F}(V,W)^0=\hom_L(V,W)$.

Next, consider the bilinear forms on $L$. Here we calculate

\displaystyle\begin{aligned}\left[y\cdot B\right](x,z)&=-B([y,x],z)-B(x,[y,z])\\&=B([x,y],z)-B(x,[y,z])=0\end{aligned}

That is, a bilinear form is invariant if and only if it is associative, in the sense that the Killing form is: $B([x,y],z)=B(x,[y,z])$
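This associativity is easy to test numerically. Here's a quick sketch in Python (NumPy), using $\mathfrak{sl}(2)$ with its standard basis $e$, $f$, $h$ as an illustrative choice, and computing the Killing form as $B(x,y)=\mathrm{tr}(\mathrm{ad}\,x\,\mathrm{ad}\,y)$:

```python
import numpy as np

# Basis of sl(2): e, f, h as 2x2 matrices (an illustrative choice).
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

def bracket(x, y):
    return x @ y - y @ x

def ad(x):
    # Matrix of ad x in the basis {e, f, h}: any traceless 2x2 matrix v
    # decomposes as v = a*e + b*f + c*h with a = v[0,1], b = v[1,0], c = v[0,0].
    cols = []
    for b in basis:
        v = bracket(x, b)
        cols.append([v[0, 1], v[1, 0], v[0, 0]])
    return np.array(cols).T

def killing(x, y):
    return np.trace(ad(x) @ ad(y))

# Check associativity B([x,y],z) = B(x,[y,z]) on random elements.
rng = np.random.default_rng(0)
for _ in range(5):
    x, y, z = [sum(c * b for c, b in zip(rng.normal(size=3), basis))
               for _ in range(3)]
    assert np.isclose(killing(bracket(x, y), z), killing(x, bracket(y, z)))
```

For $\mathfrak{sl}(2)$ this recovers the familiar values, e.g. $B(h,h)=8$ and $B(e,f)=4$.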

September 21, 2012

## More New Modules from Old

There are a few constructions we can make, starting with the ones from last time and applying them in certain special cases.

First off, if $V$ and $W$ are two finite-dimensional $L$-modules, then I say we can put an $L$-module structure on the space $\hom(V,W)$ of linear maps from $V$ to $W$. Indeed, we can identify $\hom(V,W)$ with $V^*\otimes W$: if $\{e_i\}$ is a basis for $V$ and $\{f_j\}$ is a basis for $W$, then we can set up the dual basis $\{\epsilon^i\}$ of $V^*$, such that $\epsilon^i(e_j)=\delta^i_j$. Then the elements $\{\epsilon^i\otimes f_j\}$ form a basis for $V^*\otimes W$, and each one can be identified with the linear map sending $e_i$ to $f_j$ and all the other basis elements of $V$ to $0$. Thus we have an inclusion $V^*\otimes W\to\hom(V,W)$, and a simple dimension-counting argument suffices to show that this is an isomorphism.

Now, since we have an action of $L$ on $V$ we get a dual action on $V^*$. And because we have actions on $V^*$ and $W$ we get one on $V^*\otimes W\cong\hom(V,W)$. What does this look like, explicitly? Well, we can write any such tensor as the sum of tensors of the form $\lambda\otimes w$ for some $\lambda\in V^*$ and $w\in W$. We calculate the action of $x\cdot(\lambda\otimes w)$ on a vector $v\in V$:

\displaystyle\begin{aligned}\left[x\cdot(\lambda\otimes w)\right](v)&=\left[(x\cdot\lambda)\otimes w\right](v)+\left[\lambda\otimes(x\cdot w)\right](v)\\&=\left[x\cdot\lambda\right](v)w+\lambda(v)(x\cdot w)\\&=-\lambda(x\cdot v)w+x\cdot(\lambda(v)w)\\&=-\left[\lambda\otimes w\right](x\cdot v)+x\cdot\left[\lambda\otimes w\right](v)\end{aligned}

In general we see that $\left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v)$. In particular, the space of linear endomorphisms on $V$ is $\hom(V,V)$, and so it gets an $L$-module structure like this.
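On the level of matrices, $x\cdot f$ is $\phi_W(x)F-F\phi_V(x)$, where $F$ is the matrix of $f$. As a sanity check (a sketch only, with arbitrary matrices standing in for the actions of $x$ and $y$ on $V$ and $W$), we can verify that this formula satisfies the defining module relation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary stand-ins for the actions of x and y on V (dim 3) and W (dim 2);
# the bracket identity below holds for any such matrices.
xV, yV = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
xW, yW = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

def act(aV, aW, F):
    # [x.f](v) = x.f(v) - f(x.v), written on the matrix F of f.
    return aW @ F - F @ aV

F = rng.normal(size=(2, 3))

# [x,y].F versus x.(y.F) - y.(x.F):
lhs = act(xV @ yV - yV @ xV, xW @ yW - yW @ xW, F)
rhs = act(xV, xW, act(yV, yW, F)) - act(yV, yW, act(xV, xW, F))
assert np.allclose(lhs, rhs)
```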

The other case of interest is the space of bilinear forms on a module $V$. A bilinear form on $V$ is, of course, a linear functional on $V\otimes V$. And thus this space can be identified with $(V\otimes V)^*$. How does $x\in L$ act on a bilinear form $B$? Well, we can calculate:

\displaystyle\begin{aligned}\left[x\cdot B\right](v_1,v_2)&=\left[x\cdot B\right](v_1\otimes v_2)\\&=-B\left(x\cdot(v_1\otimes v_2)\right)\\&=-B\left((x\cdot v_1)\otimes v_2\right)-B\left(v_1\otimes(x\cdot v_2)\right)\\&=-B(x\cdot v_1,v_2)-B(v_1,x\cdot v_2)\end{aligned}

In particular, we can consider the case of bilinear forms on $L$ itself, where $L$ acts on itself by $\mathrm{ad}$. Here we read

$\displaystyle\left[x\cdot B\right](v_1,v_2)=-B([x,v_1],v_2)-B(v_1,[x,v_2])$

September 21, 2012

## New Modules from Old

There are a few standard techniques we can use to generate new modules for a Lie algebra $L$ from old ones. We’ve seen direct sums already, but here are a few more.

One way is to start with a module $M$ and then consider its dual space $M^*$. I say that this can be made into an $L$-module by setting

$\displaystyle\left[x\cdot\lambda\right](m)=-\lambda(x\cdot m)$

for all $x\in L$, $\lambda\in M^*$, and $m\in M$. Bilinearity should be clear, so we just check the defining property of a module. That is, we take two Lie algebra elements $x,y\in L$ and check

\displaystyle\begin{aligned}\left[[x,y]\cdot f\right](m)&=-f([x,y]\cdot m)\\&=-f(x\cdot(y\cdot m)-y\cdot(x\cdot m))\\&=-f(x\cdot(y\cdot m))+f(y\cdot(x\cdot m))\\&=\left[x\cdot f\right](y\cdot m)-\left[y\cdot f\right](x\cdot m)\\&=-\left[y\cdot(x\cdot f)\right](m)+\left[x\cdot(y\cdot f)\right](m)\\&=\left[x\cdot(y\cdot f)-y\cdot(x\cdot f)\right](m)\end{aligned}

so $[x,y]\cdot f=x\cdot(y\cdot f)-y\cdot(x\cdot f)$ for all $f\in M^*$, as desired.
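In coordinates, if $x$ acts on $M$ by the matrix $A$, then the dual action on $M^*$ is by $-A^T$. A quick numerical sketch (arbitrary matrices standing in for the actions of $x$ and $y$) confirms the module relation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-ins for the actions of x and y on a module M.
x, y = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def dual(a):
    # [x.lam](m) = -lam(x.m)  <=>  the dual action is -a^T on coordinates.
    return -a.T

# The dual of the bracket is the bracket of the duals.
lhs = dual(x @ y - y @ x)
rhs = dual(x) @ dual(y) - dual(y) @ dual(x)
assert np.allclose(lhs, rhs)
```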

Another way is to start with modules $M$ and $N$ and form their tensor product $M\otimes N$. Now we define a module structure on this space by

$\displaystyle x\cdot m\otimes n=(x\cdot m)\otimes n + m\otimes(x\cdot n)$

We check the defining property again. Calculate:

\displaystyle\begin{aligned}{}[x,y]\cdot m\otimes n&=([x,y]\cdot m)\otimes n+m\otimes([x,y]\cdot n)\\&=(x\cdot(y\cdot m)-y\cdot(x\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n)-y\cdot(x\cdot n))\\&=(x\cdot(y\cdot m))\otimes n-(y\cdot(x\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n))-m\otimes(y\cdot(x\cdot n))\end{aligned}

while

\displaystyle\begin{aligned}x\cdot(y\cdot m\otimes n)-y\cdot(x\cdot m\otimes n)=&x\cdot((y\cdot m)\otimes n+m\otimes(y\cdot n))-y\cdot((x\cdot m)\otimes n+m\otimes(x\cdot n))\\=&x\cdot((y\cdot m)\otimes n)+x\cdot(m\otimes(y\cdot n))-y\cdot((x\cdot m)\otimes n)-y\cdot(m\otimes(x\cdot n))\\=&(x\cdot(y\cdot m))\otimes n+(y\cdot m)\otimes(x\cdot n)+(x\cdot m)\otimes(y\cdot n)+m\otimes(x\cdot(y\cdot n))\\&-(y\cdot(x\cdot m))\otimes n-(x\cdot m)\otimes(y\cdot n)-(y\cdot m)\otimes(x\cdot n)-m\otimes(y\cdot(x\cdot n))\\=&(x\cdot(y\cdot m))\otimes n+m\otimes(x\cdot(y\cdot n))-(y\cdot(x\cdot m))\otimes n-m\otimes(y\cdot(x\cdot n))\end{aligned}
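In coordinates this action is the Kronecker sum $A\otimes I+I\otimes B$, and the calculation above can be checked numerically. A sketch (arbitrary matrices standing in for the actions on $M$ and $N$):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2
xM, yM = rng.normal(size=(m, m)), rng.normal(size=(m, m))
xN, yN = rng.normal(size=(n, n)), rng.normal(size=(n, n))

def tensor_action(aM, aN):
    # x.(m (x) n) = (x.m) (x) n + m (x) (x.n): the Kronecker sum.
    return np.kron(aM, np.eye(n)) + np.kron(np.eye(m), aN)

# [x,y] acting on M (x) N versus the bracket of the actions of x and y:
lhs = tensor_action(xM @ yM - yM @ xM, xN @ yN - yN @ xN)
X, Y = tensor_action(xM, xN), tensor_action(yM, yN)
assert np.allclose(lhs, X @ Y - Y @ X)
```

The cross terms $(y\cdot m)\otimes(x\cdot n)$ and $(x\cdot m)\otimes(y\cdot n)$ cancel, exactly as in the hand calculation.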

These are useful, and they’re only just the beginning.

September 17, 2012

## Reducible Modules

As might be surmised from irreducible modules, a reducible module $M$ for a Lie algebra $L$ is one that contains a nontrivial proper submodule — one other than $0$ or $M$ itself.

Now obviously if $N\subseteq M$ is a submodule we can form the quotient $M/N$. This is the basic setup of a short exact sequence:

$\displaystyle0\to N\to M\to M/N\to 0$

The question is, does this sequence split? That is, can we write $M$ as the direct sum of $N$ and some other submodule isomorphic to $M/N$?

First of all, let’s be clear that direct sums of modules do make sense. Indeed, if $A$ and $B$ are $L$-modules then we can form an action on $A\oplus B$ by defining it on each summand separately

$\displaystyle\left[\phi_{A\oplus B}(x)\right](a,b)=\left(\left[\phi_A(x)\right](a),\left[\phi_B(x)\right](b)\right)$

Clearly the usual subspace inclusions and projections between $A$, $B$, and $A\oplus B$ intertwine these actions, so they’re the required module morphisms. Further, it’s clear that $(A\oplus B)/A\cong B$.
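In coordinates, the direct-sum action is just block-diagonal, and the module relation follows because the bracket of block-diagonal matrices is the block-diagonal of the brackets. A small sketch (arbitrary matrices standing in for the actions on $A$ and $B$):

```python
import numpy as np

rng = np.random.default_rng(4)
xA, yA = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
xB, yB = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

def direct_sum(a, b):
    # Block-diagonal action on A (+) B.
    out = np.zeros((a.shape[0] + b.shape[0],) * 2)
    out[:a.shape[0], :a.shape[0]] = a
    out[a.shape[0]:, a.shape[0]:] = b
    return out

lhs = direct_sum(xA @ yA - yA @ xA, xB @ yB - yB @ xB)
X, Y = direct_sum(xA, xB), direct_sum(yA, yB)
assert np.allclose(lhs, X @ Y - Y @ X)
```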

So, do all short exact sequences of representations split? No. Indeed, let $\mathfrak{t}(n,\mathbb{F})$ be the algebra of $n\times n$ upper-triangular matrices, along with the obvious $n$-dimensional representation. If we let $e_i$ be the basic column vector with a $1$ in the $i$th row and $0$ elsewhere, then the one-dimensional space spanned by $e_1$ forms a one-dimensional submodule. Indeed, all upper-triangular matrices will send this column vector back to a multiple of itself. But this submodule has no complement: in the $2\times2$ case, say, a complementary submodule would be a line spanned by some vector $v$ with a nonzero second entry, and the matrix with a single $1$ in the upper-right corner sends $v$ to $e_1$, which is not a multiple of $v$. So the sequence fails to split.
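The $2\times2$ case is small enough to check by hand or by machine. A sketch in Python: the line spanned by $e_1$ is invariant, while any line spanned by a vector with nonzero second entry is moved off itself by the strictly upper-triangular element.

```python
import numpy as np

# The 2x2 upper-triangular matrices acting on F^2; e1 spans a submodule.
e1 = np.array([1., 0.])
E12 = np.array([[0., 1.], [0., 0.]])   # strictly upper-triangular element

# Every upper-triangular matrix sends e1 to a multiple of itself:
for a in (0., 1., 2.):
    for b in (0., -1.):
        for c in (0., 3.):
            T = np.array([[a, b], [0., c]])
            w = T @ e1
            assert np.allclose(w, w[0] * e1)

# But no line complements span(e1): a complement would be spanned by some
# v with nonzero second entry, and E12 sends v to e1, which leaves span(v).
for a in (0., 1., -2.):
    v = np.array([a, 1.])
    assert np.allclose(E12 @ v, e1)
    # e1 = t*v would force t = 0 from the second entry, hence e1 = 0: false.
```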

On the other hand, it may be the case for a module $M$ that any nontrivial proper submodule $N$ has a complementary submodule $N'\subseteq M$ with $M=N\oplus N'$. In this case, $N$ is either irreducible or it’s not; if not, then any proper nontrivial submodule of $N$ will also be a proper nontrivial submodule of $M$, and we can continue taking smaller submodules until we get to an irreducible one, so we may as well assume that $N$ is irreducible. Now the same sort of argument works for $N'$, showing that if it’s not irreducible it can be decomposed into the direct sum of some irreducible submodule and some complement, which is another nontrivial proper submodule of $M$. At each step, the complement gets smaller and smaller, until we have decomposed $M$ entirely into a direct sum of irreducible submodules.

If $M$ is decomposable into a direct sum of irreducible submodules, we say that $M$ is “completely reducible”, or “decomposable”, as we did when we were working with groups. Any module where any nontrivial proper submodule has a complement is thus completely reducible; conversely, complete reducibility implies that any nontrivial proper submodule has a complement. Indeed, such a submodule must consist of some, but not all, of the summands of $M$, and the complement will consist of the rest.

September 16, 2012

## Irreducible Modules

Sorry for the delay; it’s getting crowded around here again.

Anyway, an irreducible module for a Lie algebra $L$ is a pretty straightforward concept: it’s a module $M$ such that its only submodules are $0$ and $M$. As usual, Schur’s lemma tells us that any morphism between two irreducible modules is either $0$ or an isomorphism. And, as we’ve seen in other examples involving linear transformations, all automorphisms of an irreducible module are scalars times the identity transformation. This, of course, doesn’t depend on any choice of basis.

A one-dimensional module will always be irreducible, if it exists. And a unique — up to isomorphism, of course — one-dimensional module will always exist for simple Lie algebras. Indeed, if $L$ is simple then we know that $[L,L]=L$. Any one-dimensional representation $\phi:L\to\mathfrak{gl}(1,\mathbb{F})$ must have its image in $[\mathfrak{gl}(1,\mathbb{F}),\mathfrak{gl}(1,\mathbb{F})]=\mathfrak{sl}(1,\mathbb{F})$. But the only traceless $1\times1$ matrix is the zero matrix. Setting $\phi(x)=0$ for all $x\in L$ does indeed give a valid representation of $L$.

September 15, 2012

## Lie Algebra Modules

It should be little surprise that we’re interested in concrete actions of Lie algebras on vector spaces, like we were for groups. Given a Lie algebra $L$ we define an $L$-module to be a vector space $V$ equipped with a bilinear function $L\times V\to V$ — often written $(x,v)\mapsto x\cdot v$ — satisfying the relation

$\displaystyle [x,y]\cdot v=x\cdot(y\cdot v)-y\cdot(x\cdot v)$

Of course, this is the same thing as a representation $\phi:L\to\mathfrak{gl}(V)$. Indeed, given a representation $\phi$ we can define $x\cdot v=[\phi(x)](v)$; given an action we can define a representation $\phi(x)\in\mathfrak{gl}(V)$ by $[\phi(x)](v)=x\cdot v$. The above relation is exactly the statement that the bracket in $L$ corresponds to the bracket in $\mathfrak{gl}(V)$.
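Concretely, for a matrix Lie algebra acting by matrix-vector multiplication, the defining relation is just the commutator identity. A minimal sketch with the defining representation of $\mathfrak{sl}(2)$:

```python
import numpy as np

# Defining representation of sl(2): e, f, h acting on F^2 by multiplication.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def bracket(x, y):
    return x @ y - y @ x

v = np.array([2., -3.])
# [x,y].v = x.(y.v) - y.(x.v) for each pair of basis elements:
for x, y in [(e, f), (h, e), (h, f)]:
    assert np.allclose(bracket(x, y) @ v, x @ (y @ v) - y @ (x @ v))
```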

Of course, the modules of a Lie algebra form a category. A homomorphism of $L$-modules is a linear map $\phi:V\to W$ satisfying

$\displaystyle\phi(x\cdot v)=x\cdot\phi(v)$

We automatically get the concept of a submodule — a subspace sent back into itself by each $x\in L$ — and a quotient module. In the latter case, we can see that if $W\subseteq V$ is any submodule then we can define $x\cdot(v+W)=(x\cdot v)+W$. This is well-defined, since if $v+w$ is any other representative of $v+W$ then $x\cdot(v+w)=x\cdot v+x\cdot w$, and $x\cdot w\in W$, so $x\cdot v$ and $x\cdot(v+w)$ both represent the same element of $v+W$.

Thus, every submodule can be seen as the kernel of some homomorphism: the projection $V\to V/W$. It should be clear that every homomorphism has a kernel, and a cokernel can be defined simply as the quotient of the range by the image. All we need to see that the category of $L$-modules is abelian is to show that every epimorphism is actually a quotient, but we know this is already true for the underlying vector spaces. Since the (vector space) kernel of an $L$-module map is an $L$-submodule, this is also true for $L$-modules.

September 12, 2012

## More Kostka Numbers

First let’s mention a few more general results about Kostka numbers.

Among all the partitions of $n$, it should be clear that $(n)\trianglerighteq\mu$ for every $\mu$. Thus the Kostka number $K_{(n)\mu}$ is not automatically zero. In fact, I say that it’s always $1$. Indeed, the shape is a single row with $n$ entries, and the content $\mu$ gives us a list of numbers, possibly with some repeats. There’s exactly one way to arrange this list into weakly increasing order along the single row, giving $K_{(n)\mu}=1$.

On the other extreme, $\lambda\trianglerighteq(1^n)$ for every $\lambda$, so $K_{\lambda(1^n)}$ might be nonzero. The shape is given by $\lambda$, and the content $(1^n)$ gives one entry of each value from $1$ to $n$. There are no possible entries to repeat, and so any semistandard tableau with content $(1^n)$ is actually standard. Thus $K_{\lambda(1^n)}=f^\lambda$ — the number of standard tableaux of shape $\lambda$.

This means that we can decompose the module $M^{(1^n)}$:

$\displaystyle M^{(1^n)}=\bigoplus\limits_{\lambda}f^\lambda S^\lambda$

But $f^\lambda=\dim(S^\lambda)$, which means each irreducible $S_n$-module shows up here with a multiplicity equal to its dimension. That is, $M^{(1^n)}$ is always the left regular representation.
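Since $\dim M^{(1^n)}=n!$, this decomposition also recovers the identity $\sum_\lambda(f^\lambda)^2=n!$. For small $n$ we can verify this by brute force; the sketch below counts standard tableaux directly rather than using the hook length formula.

```python
from itertools import permutations
from math import factorial

def partitions(n, max_part=None):
    # All partitions of n as weakly decreasing tuples.
    max_part = n if max_part is None else max_part
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(n, max_part), 0, -1)
            for rest in partitions(n - first, first)]

def f_lambda(shape):
    # Count standard tableaux of this shape by brute force: fill 1..n in
    # every order and keep fillings increasing along rows and down columns.
    n = sum(shape)
    count = 0
    for perm in permutations(range(1, n + 1)):
        rows, pos = [], 0
        for r in shape:
            rows.append(perm[pos:pos + r])
            pos += r
        if all(row[i] < row[i + 1] for row in rows for i in range(len(row) - 1)) \
           and all(rows[j][c] < rows[j + 1][c]
                   for j in range(len(rows) - 1) for c in range(len(rows[j + 1]))):
            count += 1
    return count

# Multiplicities in M^{(1^n)} square-sum to its dimension n!:
n = 4
assert sum(f_lambda(s) ** 2 for s in partitions(n)) == factorial(n)
```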

Okay, now let’s look at a full example for a single choice of $\mu$. Specifically, let $\mu=(2,2,1)$. That is, we’re looking for semistandard tableaux of various shapes, all with two entries of value $1$, two of value $2$, and one of value $3$. There are five shapes $\lambda$ with $\lambda\trianglerighteq\mu$. For each one, we will look for all the ways of filling it with the required content.

$\displaystyle\begin{array}{cccc}\lambda=(2,2,1)&\begin{array}{cc}\bullet&\bullet\\\bullet&\bullet\\\bullet&\end{array}&\begin{array}{cc}1&1\\2&2\\3&\end{array}&\\\hline\lambda=(3,1,1)&\begin{array}{ccc}\bullet&\bullet&\bullet\\\bullet&&\\\bullet&&\end{array}&\begin{array}{ccc}1&1&2\\2&&\\3&&\end{array}&\\\hline\lambda=(3,2)&\begin{array}{ccc}\bullet&\bullet&\bullet\\\bullet&\bullet&\end{array}&\begin{array}{ccc}1&1&2\\2&3&\end{array}&\begin{array}{ccc}1&1&3\\2&2&\end{array}\\\hline\lambda=(4,1)&\begin{array}{cccc}\bullet&\bullet&\bullet&\bullet\\\bullet&&&\end{array}&\begin{array}{cccc}1&1&2&2\\3&&&\end{array}&\begin{array}{cccc}1&1&2&3\\2&&&\end{array}\\\hline\lambda=(5)&\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\end{array}&\begin{array}{ccccc}1&1&2&2&3\end{array}&\end{array}$

Counting the semistandard tableaux on each row, we find the Kostka numbers. Thus we get the decomposition

$\displaystyle M^{(2,2,1)}=S^{(2,2,1)}\oplus S^{(3,1,1)}\oplus2S^{(3,2)}\oplus2S^{(4,1)}\oplus S^{(5)}$
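The counts in the table can be double-checked mechanically. This brute-force sketch enumerates fillings of each shape with content $(2,2,1)$ and keeps the semistandard ones (rows weakly increasing, columns strictly increasing):

```python
from itertools import permutations

def kostka(shape, content):
    # Brute-force count of semistandard tableaux of the given shape with
    # content[i] copies of the entry i+1 (fine for these small examples).
    entries = [i + 1 for i, c in enumerate(content) for _ in range(c)]
    count = 0
    for perm in set(permutations(entries)):
        rows, pos = [], 0
        for r in shape:
            rows.append(perm[pos:pos + r])
            pos += r
        if any(row[i] > row[i + 1]                      # rows weakly increase
               for row in rows for i in range(len(row) - 1)):
            continue
        if any(rows[j][c] >= rows[j + 1][c]             # columns strictly increase
               for j in range(len(rows) - 1) for c in range(len(rows[j + 1]))):
            continue
        count += 1
    return count

mu = (2, 2, 1)
counts = {shape: kostka(shape, mu)
          for shape in [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1), (5,)]}
print(counts)   # the Kostka numbers K_{lambda,mu} for each shape
```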

February 18, 2011

## Kostka Numbers

Now we’ve finished our proof that the intertwinors $\bar{\theta}_T$ coming from semistandard tableaux span the space of all intertwinors from the Specht module $S^\lambda$ to the Young tabloid module $M^\mu$. We also know that they’re linearly independent, and so they form a basis of the space of intertwinors — one for each semistandard generalized tableau.

Since the Specht modules are irreducible, we know that the dimension of this space is the multiplicity of $S^\lambda$ in $M^\mu$. And the dimension, of course, is the number of basis elements, which is the number of semistandard generalized tableaux of shape $\lambda$ and content $\mu$. This number we call the “Kostka number” $K_{\lambda\mu}$. We’ve seen that there is a decomposition

$\displaystyle M^\mu=\bigoplus\limits_{\lambda\trianglerighteq\mu}m_{\lambda\mu}S^\lambda$

Now we know that the Kostka numbers give these multiplicities, so we can write

$\displaystyle M^\mu=\bigoplus\limits_{\lambda\trianglerighteq\mu}K_{\lambda\mu}S^\lambda$

We saw before that when $\lambda=\mu$, the multiplicity is one. In terms of the Kostka numbers, this tells us that $K_{\mu\mu}=1$. Is this true? Well, the only way to fit $\mu_1$ entries with value $1$, $\mu_2$ with value $2$, and so on into a semistandard tableau of shape $\mu$ is to put all the $i$ entries on the $i$th row.

In fact, we can extend the direct sum by removing the restriction on $\lambda$:

$\displaystyle M^\mu=\bigoplus\limits_\lambda K_{\lambda\mu}S^\lambda$

This is because when $\lambda\triangleleft\mu$ we have $K_{\lambda\mu}=0$. Indeed, we must eventually have $\lambda_1+\dots+\lambda_i<\mu_1+\dots+\mu_i$, and so we can't fit all the entries with values $1$ through $i$ on the first $i$ rows of $\lambda$. We must at the very least have a repeated entry in some column, if not a descent. There are thus no semistandard generalized tableaux with shape $\lambda$ and content $\mu$ in this case.

February 17, 2011

## Intertwinors from Semistandard Tableaux Span, part 3

Now we are ready to finish our proof that the intertwinors $\bar{\theta}_T:S^\lambda\to M^\mu$ coming from semistandard generalized tableaux $T$ span the space of all intertwinors between these modules.

As usual, pick any intertwinor $\theta:S^\lambda\to M^\mu$ and write

$\displaystyle\theta(e_t)=\sum\limits_Tc_TT$

Now define the set $L_\theta$ to consist of those semistandard generalized tableaux $S$ so that $[S]\trianglelefteq[T]$ for some $T$ appearing in this sum with a nonzero coefficient. This is called the “lower order ideal” generated by the $T$ in the sum. We will prove our assertion by induction on the size of this order ideal.

If $L_\theta$ is empty, then $\theta$ must be the zero map. Indeed, our lemmas showed that if $\theta$ is not the zero map, then at least one semistandard $T$ shows up in the above sum, and this $T$ would itself belong to $L_\theta$. And of course the zero map is contained in any span.

Now, if $L_\theta$ is not empty, then there is at least some semistandard $T$ with $c_T\neq0$ in the sum. Our lemmas even show that we can pick one so that $[T]$ is maximal among all the tableaux in the sum. Let’s do that and define a new intertwinor:

$\displaystyle \theta' = \theta - c_T\bar{\theta}_T$

I say that $L_{\theta'}$ is $L_\theta$ with $T$ removed.

Every $S$ appearing in $\bar{\theta}_T(e_t)$ has $[S]\trianglelefteq[T]$, since if $T$ is semistandard then $[T]$ is the largest column equivalence class in $\theta_T(\{t\})$. Thus $L_{\theta'}$ must be a subset of $L_\theta$ since we can’t be introducing any new nonzero coefficients.

Our lemmas show that if $[S]=[T]$, then $S$ must appear with the same coefficient in both $\theta(e_t)$ and $c_T\bar{\theta}_T(e_t)$. That is, these terms must be cancelled off by the subtraction. Since $T$ is maximal there’s nothing above it that might keep it inside the ideal, and so $T\notin L_{\theta'}$.

So by induction we conclude that $\theta'$ is contained within the span of the $\bar{\theta}_T$ generated by semistandard tableaux, and thus $\theta$ must be as well.

February 14, 2011

## Intertwinors from Semistandard Tableaux Span, part 2

We continue our proof that the intertwinors $\bar{\theta}_T:S^\lambda\to M^\mu$ that come from semistandard tableaux span the space of all such intertwinors. This time I assert that if $\theta\in\hom(S^\lambda,M^\mu)$ is not the zero map, then there is some semistandard $T$ with $c_T\neq0$.

Obviously there are some nonzero coefficients; if $\theta(e_t)=0$, then

$\displaystyle\theta(e_{\pi t})=\theta(\pi e_t)=\pi\theta(e_t)=0$

which would make $\theta$ the zero map. So among the nonzero $c_T$, there are some with $[T]$ maximal in the column dominance order. I say that we can find a semistandard $T$ among them.

By the results yesterday we know that the entries in the columns of these $T$ are all distinct, so in the column tabloids we can arrange them to be strictly increasing down the columns. What we must show is that we can find one with the rows weakly increasing.

Well, let’s pick a maximal $T$ and suppose that it does have a row descent, which would keep it from being semistandard. Just like the last time we saw row descents, we get a chain of distinct elements running up the two columns:

$\displaystyle\begin{array}{ccc}a_1&\hphantom{X}&b_1\\&&\wedge\\a_2&&b_2\\&&\wedge\\\vdots&&\vdots\\&&\wedge\\a_i&>&b_i\\\wedge&&\\\vdots&&\vdots\\\wedge&&b_q\\a_p&&\end{array}$

We choose the sets $A$ and $B$ and the Garnir element $g_{A,B}$ just like before. We find

$\displaystyle g_{A,B}\left(\sum\limits_Tc_TT\right)=g_{A,B}\left(\theta(e_t)\right)=\theta\left(g_{A,B}(e_t)\right)=\theta(0)=0$

The generalized tableau $T$ must appear in $g_{A,B}(T)$ with unit coefficient, so to cancel it off there must be some other generalized tableau $T'\neq T$ with $T'=\pi T$ for some $\pi$ that shows up in $g_{A,B}$. But since this $\pi$ just interchanges some $a$ and $b$ entries, we can see that $[T']\triangleright[T]$, which contradicts the maximality of our choice of $T$.

Thus there can be no row descents in $T$, and $T$ is in fact semistandard.

February 12, 2011