# The Unapologetic Mathematician

## The Determinant

Let’s look at the dimensions of antisymmetric tensor spaces. We worked out that if $V$ has dimension $d$, then the space of antisymmetric tensors with $n$ tensorands has dimension

$\displaystyle\dim\left(A^n(V)\right)=\binom{d}{n}=\frac{d!}{n!(d-n)!}$

One thing should leap out about this: if $n$ is greater than $d$, then the factorial formula breaks down, and the binomial coefficient is zero. This is connected with the fact that at that point we can’t pick any $n$-tuples without repetition from $d$ basis vectors.

So what happens right before everything breaks down? If $n=d$, then we find

$\displaystyle\dim\left(A^d(V)\right)=\binom{d}{d}=\frac{d!}{d!(d-d)!}=\frac{d!}{d!}=1$

There’s only one independent antisymmetric tensor of this type, and so we have a one-dimensional vector space. But remember that this isn’t just a vector space. The tensor power $V^{\otimes d}$ is both a representation of $\mathrm{GL}(V)$ and a representation of $S_d$, which actions commute with each other. Our antisymmetric tensors are the image of a certain action from the symmetric group, which is an intertwiner of the $\mathrm{GL}(V)$ action. Thus we have a one-dimensional representation of $\mathrm{GL}(V)$, which we call the determinant representation.
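We can check this scaling numerically. Here's a quick sketch (in Python with NumPy, not from the original post) for $d=2$: antisymmetrize $e_1\otimes e_2$, act by $M\otimes M$, and compare with $\det(M)$ times the original tensor.

```python
import numpy as np

# For d = 2, the antisymmetrized tensor (e_1 (x) e_2 - e_2 (x) e_1)/2 spans
# A^2(V), and any M in GL(V) acts on it by multiplication by det(M).
d = 2
e = np.eye(d)
t = 0.5 * (np.kron(e[0], e[1]) - np.kron(e[1], e[0]))  # antisymmetrized e_1 (x) e_2

M = np.array([[2.0, 1.0], [1.0, 3.0]])  # an arbitrary invertible matrix
action = np.kron(M, M)                  # how M acts on V (x) V

assert np.allclose(action @ t, np.linalg.det(M) * t)
```

The same check works for any invertible `M`, which is exactly the basis-independence claim below: the scalar that comes out is $\det(M)$ no matter what.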

I want to pause here and point out something that’s extremely important. We’ve mentioned a basis for $V$ in the process of calculating the dimension of this space, but the space itself was defined without reference to such a basis. Similarly, the representation of any element of $\mathrm{GL}(V)$ is defined completely without reference to any basis of $V$. It needs only the abstract vector space itself to be defined. Calculating the determinant of a linear transformation, though, is a different story. We’ll use a basis to calculate it, but as we’ve just said the particular choice of a basis won’t matter in the slightest to the answer we get. We’d get the same answer no matter what basis we chose.

December 31, 2008

## Dimensions of Symmetric and Antisymmetric Tensor Spaces

We’ve laid out the spaces of symmetric and antisymmetric tensors. We even showed that if $V$ has dimension $d$ and a basis $\{e_i\}$ we can set up bases for $S^n(V)$ and $A^n(V)$. Now let’s count how many vectors are in these bases and determine the dimensions of these spaces.

The easy one will be the antisymmetric case. Every basic antisymmetric tensor is given by antisymmetrizing an $n$-tuple $e_{i_1}\otimes...\otimes e_{i_n}$ of basis vectors of $V$. We may as well start out with this collection in order by their indices: $i_1\leq...\leq i_n$. But we also know that we can’t have any repeated vectors or else the whole thing collapses. So the basis for $A^n(V)$ consists of subsets of the basis for $V$. There are $d$ basis vectors overall, and we must pick $n$ of them. But we know how to count these. This is a number of combinations:

$\displaystyle\dim\left(A^n(V)\right)=\binom{d}{n}=\frac{d!}{n!(d-n)!}$
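The count is easy to verify directly: enumerate the strictly increasing index tuples and compare with the binomial coefficient. (A quick Python check, not part of the original post.)

```python
from itertools import combinations
from math import comb

# Basic antisymmetric tensors correspond to n-element subsets of the d basis
# vectors, i.e. strictly increasing index tuples.
d, n = 5, 3
basis = list(combinations(range(d), n))  # one basic antisymmetric tensor each
assert len(basis) == comb(d, n) == 10
```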

Now what about symmetric tensors? We can’t do quite the same thing, since now we can allow repetitions in our lists. Instead, what we’ll do is this: instead of just a list of basis vectors of $V$, consider writing the indices out in a line and drawing dividers between different indices. For example, consider the basic tensor of $\left(\mathbb{F}^5\right)^{\otimes 4}$: $e_1\otimes e_3\otimes e_3\otimes e_4$. First, it becomes the list of indices

$\displaystyle1,3,3,4$

Now we divide $1$ from $2$, $2$ from $3$, $3$ from $4$, and $4$ from $5$.

$\displaystyle1,|,|,3,3,|,4,|$

Since there are five choices of an index, there will always be four dividers. And we’ll always have four indices since we’re considering the fourth tensor power. That is, a basic symmetric tensor corresponds to a choice of which of these eight slots to put the four dividers in. More generally if $V$ has dimension $d$ then a basic tensor in $S^n(V)$ has $n$ indices separated by $d-1$ dividers. Then the dimension is again given by a number of combinations:

$\displaystyle\dim\left(S^n(V)\right)=\binom{n+d-1}{d-1}=\frac{(n+d-1)!}{n!(d-1)!}$
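The worked example above can be checked the same way: weakly increasing index lists of length $4$ from $5$ indices should match choosing which $4$ of the $8$ slots hold dividers. (A Python check, not from the original post.)

```python
from itertools import combinations_with_replacement
from math import comb

# Basic symmetric tensors in S^4(F^5) correspond to weakly increasing index
# lists, which the stars-and-bars argument counts as C(n + d - 1, d - 1).
d, n = 5, 4
basis = list(combinations_with_replacement(range(d), n))
assert len(basis) == comb(n + d - 1, d - 1) == 70
```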

December 30, 2008

## Permutations and Combinations

Okay, here’s a classic bit of combinatorics that I’m almost surprised I never mentioned before: how to count permutations and combinations.

Back when I defined permutation groups, I gave a quick argument about how to count their orders. That is, how many elements are in the group $S_n$. If we’re permuting the set of numbers $1$ through $n$, first we have to choose where to send $1$. There are $n$ possible choices of how to do this. Then we have to choose where to send $2$, which can be anywhere but the place we sent $1$. Thus there are $n-1$ choices of how to do this. And so on we go, making $n-2$ choices of where to send $3$, $n-3$ choices of where to send $4$. And so in total we have $n\cdot(n-1)\cdot(n-2)\cdots3\cdot2\cdot1=n!$ permutations.

Notice that what we’ve really done there is picked out an ordering on the set $\{1,...,n\}$. More specifically, we’ve picked an ordering of the whole set. But what if we want to pick out only an ordered subset? That is, what if we want to pick out an ordered list of $k$ numbers from the set with no repetitions? Well, since there’s only one order type for each finite cardinal number, let’s just pick one as the model for our ordered list. Specifically, we’ll use the numbers from $1$ to $k$, in their usual order.

Now we’ll take an injective function from $\{1,...,k\}$ to $\{1,...,n\}$. The image subset will be our list, and we carry over the order from $\{1,...,k\}$ onto it. But now we can reuse the same argument as above! First we have $n$ choices where to send $1$, and then $n-1$ choices where to send $2$, and so on until we have $n-(k-1)$ choices of where to send $k$. So there are $n\cdot(n-1)\cdots(n-k+2)\cdot(n-k+1)$ such ordered subsets of length $k$. This gets called the number of permutations with $k$ elements, and written as $P(n,k)$, or ${}_nP_k$ or $P^n_k$. Notice that when $k=n$ this reduces to the factorial as above.

There’s an easy way to express this number in notation we already know. First we can multiply all the numbers from $1$ to $n$, and then we can divide out the ones from $1$ to $n-k$. That is, $P(n,k)=\frac{n!}{(n-k)!}$. What does this mean? Well, the $n!$ up top means that we’re ordering all the elements of our set. But then since we only care about the first $k$ we don’t care about the order on the last $n-k$ elements. What’s hiding here is that the group $S_{n-k}$ is secretly acting on the set of all permutations of our set by rearranging the last $n-k$ elements of the permutation. What’s more, it acts freely — with no fixed points — so every orbit has the same size: $(n-k)!$. But since we only care about the first $k$ places in the permutation, we’re really interested in the number of orbits. That is, the total number of permutations divided by the size of each orbit: $\frac{n!}{(n-k)!}$. And this is the formula we came up with before.
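Both counts are easy to verify by brute force, enumerating the ordered selections directly. (A quick Python check, not part of the original post.)

```python
from itertools import permutations
from math import factorial

# P(n, k): injections {1,...,k} -> {1,...,n}, counted directly and via n!/(n-k)!.
n, k = 7, 3
ordered = list(permutations(range(n), k))  # all ordered k-element selections
assert len(ordered) == factorial(n) // factorial(n - k) == 210
```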

This is a general principle in combinatorics. It’s often possible to see the set you’re trying to count as a larger set modulo the free action of some group. Then the cardinality of the set you’re interested in is the cardinality of the larger set divided by the order of the group.

To see this in action, what if we don’t care about the order of our subset? That is, we just want to pick out $k$ elements with no repetitions and no care about what order they come in. Well, first we can pick out an ordered set of $k$ elements. Then we can use the group $S_k$ to rearrange them. Any rearrangement is just as good as any other, and the group $S_k$ acts freely on the set of ordered subsets. That is, the number of unordered subsets is the number of ordered subsets — $\frac{n!}{(n-k)!}$ — divided by the order of the group $S_k$, which is $k!$. This number of unordered subsets we call the number of “combinations” of an $n$ element set with $k$ elements. This is often written as $C(n,k)$, or ${}_nC_k$, or $C^n_k$, or $\binom{n}{k}$, and they all are given by the formula $\frac{n!}{k!(n-k)!}$.
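The orbit-counting principle can be seen directly in code: count the ordered selections, divide by $k!$, and compare with the unordered count. (An illustrative Python check, not from the original post.)

```python
from itertools import permutations, combinations
from math import factorial

# Each k-element subset is one orbit of the free S_k action on ordered
# k-tuples, so dividing the ordered count by k! recovers C(n, k).
n, k = 7, 3
ordered = len(list(permutations(range(n), k)))
unordered = len(list(combinations(range(n), k)))
assert unordered == ordered // factorial(k) == 35
```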

December 29, 2008

## Antisymmetric Tensors

Let’s continue yesterday’s project by considering antisymmetric tensors today. Remember that we’re starting with a tensor representation of $\mathrm{GL}(V)$ on the tensor power $V^{\otimes n}$, which also carries an action of the symmetric group $S_n$ by permuting the tensorands. The antisymmetrizer (for today) is an element of the group algebra $\mathbb{F}[S_n]$, and thus defines an intertwiner from $V^{\otimes n}$ to itself. Its image is thus a subrepresentation $A^n(V)$ of $\mathrm{GL}(V)$ acting on $V^{\otimes n}$.

Now let’s again say $V$ has finite dimension $d$ and pick a basis $\{e_i\}$ of $V$. Then we again have a basis of $V^{\otimes n}$ given by $n$-tuples of basis elements of $V$, and the permutation $\pi$ again acts by

$\displaystyle\pi\left(e_{i_1}\otimes...\otimes e_{i_n}\right)=e_{i_{\pi(1)}}\otimes...\otimes e_{i_{\pi(n)}}$

So let’s see what the antisymmetrizer looks like.

$\displaystyle\begin{aligned}A\left(e_{i_1}\otimes...\otimes e_{i_n}\right)&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\pi\left(e_{i_1}\otimes...\otimes e_{i_n}\right)\\&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)e_{i_{\pi(1)}}\otimes...\otimes e_{i_{\pi(n)}}\end{aligned}$

It’s just like the symmetrizer, except we have a sign in each term given by the signum representation of the symmetric group.

Immediately this tells us something very interesting: if the same basis element of $V$ shows up twice in the basic tensor, then the antisymmetrization of that tensor is automatically zero! This is because there’s some swap $\tau$ exchanging the two slots where we find the two copies. We can divide the permutations in $S_n$ into two collections: those which can be written as a series of swaps starting with $\tau$ and those which can’t. And throwing $\tau$ on at the beginning of the list just exchanges these two groups for each other in a bijection. If you don’t see that immediately, consider how it’s really the exact same thing as how we solved the airplane seat problem almost two years ago, and Susan’s related Thanksgiving seating problem. Then each permutation in one collection is paired with one in the other collection, and the two in a pair differ by only one swap. This means that one of them will get sent to $+1$ in the signum representation, and one will get sent to $-1$. Since the permuted tensors they give are the same, the two terms will exactly cancel each other out, and we’ll just end up adding together a bunch of zeroes. Neat!

Just like for the symmetric tensors, many different basic tensors will antisymmetrize to the same basic antisymmetric tensor. Again, we use the unique permutation which puts the tensorands in order, but now we use the fact above: if any basis vector of $V$ is repeated, then the antisymmetrization is automatically zero. Thus we throw out all these vectors and only consider those $n$-tuples $e_{i_1}\otimes...\otimes e_{i_n}$ with $i_1<...<i_n$. This makes it really easy to count the dimension of $A^n(\mathbb{F}^d)$, because this is just the number of subsets of cardinality $n$ of a set of cardinality $d$.

As an example, let’s antisymmetrize $e_2\otimes e_1\otimes e_4$. We write out a sum of all the permutations, with signs as appropriate:

$\displaystyle\begin{aligned}\frac{1}{3!}(-e_1\otimes e_2\otimes e_4&+e_1\otimes e_4\otimes e_2+e_2\otimes e_1\otimes e_4\\&-e_2\otimes e_4\otimes e_1-e_4\otimes e_1\otimes e_2+e_4\otimes e_2\otimes e_1)\end{aligned}$

We have no repeated terms here because if we did they’d pair off with opposite signs and cancel out. Now you might notice that we didn’t start with the tensorands in order. If we put them in order, we have to use an odd permutation to get $e_1\otimes e_2\otimes e_4$. Then we’ll get the same result, but with the opposite signs:

$\displaystyle\begin{aligned}\frac{1}{3!}(e_1\otimes e_2\otimes e_4&-e_1\otimes e_4\otimes e_2-e_2\otimes e_1\otimes e_4\\&+e_2\otimes e_4\otimes e_1+e_4\otimes e_1\otimes e_2-e_4\otimes e_2\otimes e_1)\end{aligned}$

It’s a different result, yes, but it serves just as well as a basic antisymmetric tensor.
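The whole example can be mechanized. Here's a sketch in Python (my own illustration, not from the original post): represent a tensor as a dictionary from index tuples to coefficients, antisymmetrize by summing signed permutations, and watch both the sign flip and the vanishing-on-repeats phenomenon appear.

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def sign(perm):
    """Signum of a permutation in one-line notation, via counting inversions."""
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def antisymmetrize(indices):
    """Antisymmetrize the basic tensor e_{i_1} (x) ... (x) e_{i_n}; returns a
    dict mapping index tuples to coefficients, dropping zero terms."""
    n = len(indices)
    out = {}
    for p in permutations(range(n)):
        key = tuple(indices[p[i]] for i in range(n))
        out[key] = out.get(key, Fraction(0)) + Fraction(sign(p), factorial(n))
    return {k: v for k, v in out.items() if v != 0}

a = antisymmetrize((2, 1, 4))
b = antisymmetrize((1, 2, 4))
# Reordering the input by one swap flips every sign in the result:
assert a == {k: -v for k, v in b.items()}
# A repeated basis vector kills the antisymmetrization outright:
assert antisymmetrize((1, 1, 2)) == {}
```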

December 23, 2008

## Symmetric Tensors

Wow, people are loving my zero-knowledge test. It got 1,743 views yesterday, thanks to someone redditing it. Anywho…

Today and tomorrow I want to take last Friday’s symmetrizer and antisymmetrizer and apply them to the tensor representations of $\mathrm{GL}(V)$, which we know also carry symmetric group representations. Specifically, the $n$th tensor power $V^{\otimes n}$ carries a representation of $S_n$ by permuting the tensorands, and this representation commutes with the representation of $\mathrm{GL}(V)$. Then since the symmetrizer and antisymmetrizer are elements of the group algebra $\mathbb{F}[S_n]$, they define intertwiners from $V^{\otimes n}$ to itself. Their images are not just subspaces on which the symmetric group acts nicely, but subrepresentations of symmetric and antisymmetric tensors — $S^n(V)$ and $A^n(V)$, respectively.

Now it’s important (even if it’s not quite clear why) to emphasize that we’ve defined these representations without ever talking about a basis for $V$. However, let’s try to get a better handle on what such a thing looks like by assuming $V$ has finite dimension $d$ and picking a basis $\{e_i\}$. Then we have bases for tensor powers: a basis element of the $n$th tensor power is given by an $n$-tuple of basis elements for $V$. We’ll write a general one like $e_{i_1}\otimes e_{i_2}\otimes...\otimes e_{i_n}$.

How does a permutation act on such a basis element? Well, basis elements are pure tensors, so the permutation $\pi$ simply permutes these basis tensorands. That is:

$\displaystyle\pi\left(e_{i_1}\otimes...\otimes e_{i_n}\right)=e_{i_{\pi(1)}}\otimes...\otimes e_{i_{\pi(n)}}$

So the space of symmetric tensors $S^n(V)$ is the image of $V^{\otimes n}$ under the action of the symmetrizer. And so it’s going to be spanned by the images of a basis for $V^{\otimes n}$, which we can calculate now. The symmetrizer is an average of all the permutations in the symmetric group, so we find

$\displaystyle\begin{aligned}S\left(e_{i_1}\otimes...\otimes e_{i_n}\right)&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\pi\left(e_{i_1}\otimes...\otimes e_{i_n}\right)\\&=\frac{1}{n!}\sum\limits_{\pi\in S_n}e_{i_{\pi(1)}}\otimes...\otimes e_{i_{\pi(n)}}\end{aligned}$

Now we notice something here: if two basic tensors are related by a permutation of their tensorands, then the symmetrizer will send them to the same symmetric tensor! This means that we can choose a preimage for each basic symmetric tensor. Just use whatever permutation we need to put the $n$-tuple of tensorands into order. That is, always select $i_1\leq i_2\leq...\leq i_n$. Given any basic tensor, there is a unique permutation of its tensorands which is in this order.

As an explicit example, let’s consider what happens when we symmetrize the tensor $e_1\otimes e_2\otimes e_1$. First of all, we toss it into the proper order, since this won’t change the symmetrization: $e_1\otimes e_1\otimes e_2$. Now we write out a sum of all the permutations of the three tensorands, with the normalizing factor out front:

$\displaystyle\begin{aligned}\frac{1}{3!}(e_1\otimes e_1\otimes e_2&+e_1\otimes e_2\otimes e_1+e_1\otimes e_1\otimes e_2\\&+e_1\otimes e_2\otimes e_1+e_2\otimes e_1\otimes e_1+e_2\otimes e_1\otimes e_1)\end{aligned}$

Some of these terms are repeated, since we have two copies of $e_1$ in this tensor. So we collect these together and cancel off some of the normalizing factor to find

$\displaystyle\frac{1}{3}e_1\otimes e_1\otimes e_2+\frac{1}{3}e_1\otimes e_2\otimes e_1+\frac{1}{3}e_2\otimes e_1\otimes e_1$

Now no matter how we rearrange the tensorands we’ll get back this same tensor again.
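This worked example is also easy to check by machine. Here's a small Python sketch (my own, not from the original post) that symmetrizes index tuples and reproduces the three terms with coefficient $\frac{1}{3}$ each.

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def symmetrize(indices):
    """Apply the symmetrizer to the basic tensor e_{i_1} (x) ... (x) e_{i_n};
    the result is a dict from index tuples to coefficients."""
    n = len(indices)
    out = {}
    for p in permutations(range(n)):
        key = tuple(indices[p[i]] for i in range(n))
        out[key] = out.get(key, Fraction(0)) + Fraction(1, factorial(n))
    return out

# Symmetrizing e_1 (x) e_2 (x) e_1: three distinct terms, coefficient 1/3 each,
# and the answer doesn't depend on the order of the input tensorands.
s = symmetrize((1, 2, 1))
assert s == {(1, 1, 2): Fraction(1, 3),
             (1, 2, 1): Fraction(1, 3),
             (2, 1, 1): Fraction(1, 3)}
assert s == symmetrize((1, 1, 2))
```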

Tomorrow we’ll apply the same sort of approach to the antisymmetrizer.

December 22, 2008

## The Symmetrizer and Antisymmetrizer

Today we’ll introduce two elements of the group algebra $\mathbb{F}[S_n]$ of the symmetric group $S_n$ which have some interesting properties. Each one, given a representation of the symmetric group, will give us a subrepresentation of that representation.

The symmetrizer gets its name from the way that it takes an arbitrary vector and turns it into one that the symmetric group will act trivially on. Specifically, we use the element

$\displaystyle S=\frac{1}{n!}\sum\limits_{\pi\in S_n}\pi$

That is, we take all $n!$ permutations in the symmetric group and — in a sense — average them out. If we compose this with any permutation $\hat{\pi}$ we find

$\displaystyle\begin{aligned}\hat{\pi}S&=\hat{\pi}\frac{1}{n!}\sum\limits_{\pi\in S_n}\pi\\&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\hat{\pi}\pi\end{aligned}$

But as $\pi$ runs over all the permutations in the group, $\hat{\pi}\pi$ does as well. So this is just the symmetrizer $S$ back again. The upshot is that if we have a representation $\rho:S_n\rightarrow\mathrm{GL}(V)$ we find that

$\displaystyle\begin{aligned}\left[\rho(\hat{\pi})\right]\left(\left[\rho(S)\right](v)\right)&=\left[\rho(\hat{\pi}S)\right](v)\\&=\left[\rho(S)\right](v)\end{aligned}$

Thus every vector in the image of $\rho(S)$ is left unchanged by the action of any permutation. That is, $\mathrm{Im}\left(\rho(S)\right)\subseteq V$ is a subrepresentation on which $S_n$ acts trivially.
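The identity $\hat{\pi}S=S$ can be checked concretely in the group algebra. A sketch in Python (my own illustration, not from the original post): represent a group-algebra element as a dict from permutations to coefficients, and verify that left-multiplying by any permutation just shuffles the terms.

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

n = 3

def compose(p, q):
    """(p q)(i) = p(q(i)), one-line notation on {0, ..., n-1}."""
    return tuple(p[q[i]] for i in range(n))

# The symmetrizer as a group-algebra element: equal weight on every permutation.
S = {p: Fraction(1, factorial(n)) for p in permutations(range(n))}

# Left-multiplying by any permutation permutes the terms, leaving S unchanged.
for pi in permutations(range(n)):
    shifted = {compose(pi, p): c for p, c in S.items()}
    assert shifted == S
```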

The antisymmetrizer, on the other hand, will give us vectors on which the symmetric group acts by the signum representation. We use the group algebra element

$\displaystyle A=\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\pi$

Now if we compose this with any permutation $\hat{\pi}$ we find

$\displaystyle\begin{aligned}\hat{\pi}A&=\hat{\pi}\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\pi\\&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\hat{\pi}\pi\\&=\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\hat{\pi})\mathrm{sgn}(\hat{\pi}\pi)\hat{\pi}\pi\\&=\mathrm{sgn}(\hat{\pi})\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\hat{\pi}\pi)\hat{\pi}\pi\\&=\mathrm{sgn}(\hat{\pi})A\end{aligned}$

Now given a representation $\rho:S_n\rightarrow\mathrm{GL}(V)$ we find that

$\displaystyle\begin{aligned}\left[\rho(\hat{\pi})\right]\left(\left[\rho(A)\right](v)\right)&=\left[\rho(\hat{\pi}A)\right](v)\\&=\left[\rho(\mathrm{sgn}(\hat{\pi})A)\right](v)\\&=\mathrm{sgn}(\hat{\pi})\left[\rho(A)\right](v)\end{aligned}$

Thus every vector in the image of $\rho(A)$ is multiplied by the signum of any permutation. That is, $\mathrm{Im}\left(\rho(A)\right)\subseteq V$ is a subrepresentation on which $S_n$ acts by the signum representation.

Now, one thing to be careful about: I haven’t said that the subrepresentations are nontrivial. That is, when we (anti)symmetrize a representation, the subrepresentation we get may be zero — maybe no vectors in the representation transform trivially or by the signum representation. In fact, let’s check what happens when we multiply the symmetrizer and antisymmetrizer:

$\displaystyle\begin{aligned}SA&=\left(\frac{1}{n!}\sum\limits_{\pi\in S_n}\pi\right)\left(\frac{1}{n!}\sum\limits_{\hat{\pi}\in S_n}\mathrm{sgn}(\hat{\pi})\hat{\pi}\right)\\&=\frac{1}{n!}\frac{1}{n!}\sum\limits_{\substack{\pi\in S_n\\\hat{\pi}\in S_n}}\mathrm{sgn}(\hat{\pi})\pi\hat{\pi}\\&=\frac{1}{n!}\frac{1}{n!}\sum\limits_{\substack{\pi\in S_n\\\hat{\pi}\in S_n}}\mathrm{sgn}(\pi)\mathrm{sgn}(\pi\hat{\pi})\pi\hat{\pi}\\&=\frac{1}{n!}\frac{1}{n!}\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\left(\sum\limits_{\hat{\pi}\in S_n}\mathrm{sgn}(\pi\hat{\pi})\pi\hat{\pi}\right)\\&=\frac{1}{n!}\left(\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)\right)A\\&=0\end{aligned}$

where we’ve used the fact that $\pi\hat{\pi}$ runs over all of $S_n$ as $\hat{\pi}$ does, and the remaining sum of signs comes to zero because we’re just adding up $\frac{n!}{2}$ terms where $\mathrm{sgn}(\pi)=1$ and $\frac{n!}{2}$ where $\mathrm{sgn}(\pi)=-1$ (assuming $n\geq2$), so everything cancels out. That is, the symmetric part of an antisymmetrized representation is trivial. Similarly, the antisymmetric part of a symmetrized representation is trivial.
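The vanishing of $SA$ (and of $AS$) can be confirmed by brute-force multiplication in the group algebra. A small Python sketch (my own illustration, not from the original post):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

n = 3
perms = list(permutations(range(n)))

def sign(p):
    """Signum via counting inversions."""
    inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return -1 if inv % 2 else 1

def compose(p, q):
    return tuple(p[q[i]] for i in range(n))

def multiply(x, y):
    """Multiply two group-algebra elements (dicts perm -> coefficient)."""
    out = {p: Fraction(0) for p in perms}
    for p, a in x.items():
        for q, b in y.items():
            out[compose(p, q)] += a * b
    return out

S = {p: Fraction(1, factorial(n)) for p in perms}
A = {p: Fraction(sign(p), factorial(n)) for p in perms}

# Both products vanish identically for n >= 2.
assert all(c == 0 for c in multiply(S, A).values())
assert all(c == 0 for c in multiply(A, S).values())
```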

December 19, 2008

## The Signum Representation

I’ve just noticed that there’s a very important representation I haven’t mentioned. Well, actually, I mentioned it in passing while talking about Rubik’s group, but not very explicitly.

Way back when I defined permutation groups I talked about a permutation being even or odd. Remember that we showed that any permutation can be written out as a composite of transpositions which swap two objects. In general this can be done in more than one way, but if it takes an even number of swaps to write a permutation in one way, then it will take an even number of swaps in any other way, and similarly for permutations requiring an odd number of swaps. In this way we separate out permutations into the “even” and “odd” collections.

The composite of two even permutations or two odd permutations is even, while the composite of an even and an odd permutation is odd. This is just like the multiplication table of the group $\mathbb{Z}_2$, with “even” for the group’s identity $e$ and “odd” for the other group element $o$. That is, we have a homomorphism $\mathrm{sgn}:S_n\rightarrow\mathbb{Z}_2$ for every permutation group $S_n$.

Now to make this into a representation we’re going to use a one-dimensional representation of $\mathbb{Z}_2$. We have to send the group identity $e$ to the field element $1$, but we have a choice to make for the image of $o$. We need to send it to some field element $x$, and this element must satisfy $x^2=1$ for this to be a representation. We could choose $x=1$, but this just sends everything to the identity, which is the trivial group representation. There may be other choices around, but the only one we know always exists is $x=-1$ (note we’re tacitly assuming that $1\neq-1$).

So we define the one-dimensional signum representation of $S_n$ by sending all the even permutations to the $1\times1$ matrix whose single entry is $1$, and sending all the odd permutations to the $1\times1$ matrix whose single entry is $-1$. Often we’ll just ignore the “matrix” fact in here, and just say that the signum of an even permutation is $1$ and the signum of an odd permutation is $-1$. But secretly we’re always taking this and multiplying it by something else, so we’re always using it as a linear transformation anyway.
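In practice the signum is easy to compute from one-line notation by counting inversions: each transposition of adjacent entries changes the inversion count by one, so the parity of the inversion count matches the parity of the permutation. A small Python sketch (my own, not from the original post):

```python
def sgn(perm):
    """Signum of a permutation in one-line notation on {0, ..., n-1}:
    count inversions; an even count means an even permutation."""
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

assert sgn((0, 1, 2)) == 1   # the identity is even
assert sgn((1, 0, 2)) == -1  # a single swap is odd
assert sgn((1, 2, 0)) == 1   # a 3-cycle is a product of two swaps
```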

December 18, 2008

## Do Short Exact Sequences of Representations Split?

We’ve seen that the category of representations is abelian, so we have all we need to talk about exact sequences. And we know that some of the most important exact sequences are short exact sequences. We also saw that every short exact sequence of vector spaces splits. So does the same hold for representations? It turns out that no, they don’t always, and I’ll give an example to show what can happen.

Consider the group $\mathbb{Z}$ of integers and the two dimensional representation $\rho:\mathbb{Z}\rightarrow\mathrm{GL}\left(\mathbb{F}^2\right)$ defined by:

$\displaystyle\rho(n)=\begin{pmatrix}1&n\\ 0&1\end{pmatrix}$

Verify for yourself that this actually does define a representation of the group of integers.
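The verification amounts to multiplying two upper-triangular matrices and watching the upper-right entries add. Here's a quick numerical confirmation in Python with NumPy (my own sketch, not part of the original post):

```python
import numpy as np

# rho(n) = [[1, n], [0, 1]] is a homomorphism: rho(m) rho(n) = rho(m + n),
# since the product's upper-right entry is 1*n + m*1 = m + n.
def rho(n):
    return np.array([[1, n], [0, 1]])

for m in range(-3, 4):
    for n in range(-3, 4):
        assert np.array_equal(rho(m) @ rho(n), rho(m + n))
```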

Now it’s straightforward to see that all these linear transformations send every vector of the form $\begin{pmatrix}x\\ 0\end{pmatrix}$ to itself. This defines a one-dimensional subspace fixed by the representation — a subrepresentation $\tau:\mathbb{Z}\rightarrow\mathrm{GL}\left(\mathbb{F}^1\right)$ defined by:

$\displaystyle\tau(n)=\begin{pmatrix}1\end{pmatrix}$

Then there must be a quotient representation $\sigma=\rho/\tau$, and we can arrange them into a short exact sequence: $\mathbf{0}\rightarrow\tau\rightarrow\rho\rightarrow\sigma\rightarrow\mathbf{0}$. The question, then, is whether this is isomorphic to the split exact sequence $\mathbf{0}\rightarrow\tau\rightarrow\tau\oplus\sigma\rightarrow\sigma\rightarrow\mathbf{0}$. That is, can we find an isomorphism $\rho\cong\tau\oplus\sigma$ compatible with the inclusion map from $\tau$ and the projection map onto $\sigma$?

First off, let’s write the direct sum representation a little more explicitly. The direct sum $\tau\oplus\sigma$ acts on pairs of field elements by $\tau$ on the first and $\sigma$ on the second, with no interaction between them. That is, we can write the representation as

$\displaystyle\left[\tau\oplus\sigma\right](n)=\begin{pmatrix}\tau(n)&0\\ 0&\sigma(n)\end{pmatrix}=\begin{pmatrix}1&0\\ 0&\sigma(n)\end{pmatrix}$

And we’re looking for some isomorphism so that for every $n\in\mathbb{Z}$ we get from the matrix $\rho(n)$ to the matrix $\left[\tau\oplus\sigma\right](n)$ by conjugation. Explicitly, we’ll need a matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$. But we also need to make sure that $\tau$ as a subrepresentation of $\rho$ is sent to $\tau$ as a subrepresentation of $\tau\oplus\sigma$. That is we must satisfy

$\displaystyle\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}1\\ 0\end{pmatrix}$

Thus $a=1$ and $c=0$ right off the bat! Now the intertwining condition (equivalent to the conjugation) is that

$\displaystyle\begin{pmatrix}1&b\\ 0&d\end{pmatrix}\begin{pmatrix}1&n\\ 0&1\end{pmatrix}=\begin{pmatrix}1&0\\ 0&\sigma(n)\end{pmatrix}\begin{pmatrix}1&b\\ 0&d\end{pmatrix}$

$\displaystyle\begin{pmatrix}1&n+b\\ 0&d\end{pmatrix}=\begin{pmatrix}1&b\\ 0&\sigma(n)d\end{pmatrix}$

But this says that $n+b=b$ for all $n\in\mathbb{Z}$, and this is clearly impossible!

So here’s an example where a short exact sequence of representations can not be split. At some point later we’ll see that in many cases we’re interested in they do split, but for now it’s good to see that they don’t always work out so nicely.

December 17, 2008

## The Category of Representations is Abelian

We’ve been considering the category of representations of an algebra $A$, and we’re just about done showing that $\mathbf{Rep}_\mathbb{F}(A)$ is abelian.

First of all, the intertwiners between any two representations form a vector space, which is really an abelian group plus stuff. Since the composition of intertwiners is bilinear, this makes $\mathbf{Rep}_\mathbb{F}(A)$ into an $\mathbf{Ab}$-category. Secondly, we can take direct sums of representations, which is a categorical biproduct. Thirdly, every intertwiner has a kernel and a cokernel.

The only thing we’re missing is that every monomorphism and every epimorphism be normal. That is, every monomorphism should actually be the kernel of some intertwiner, and every epimorphism should actually be the cokernel of some intertwiner. So, given representations $\rho:A\rightarrow\mathrm{End}(V)$ and $\sigma:A\rightarrow\mathrm{End}(W)$, let’s consider a monomorphic intertwiner $f:\rho\rightarrow\sigma$.

As for linear maps, it’s straightforward to show that $f$ is monomorphic if and only if its kernel is trivial. Specifically, we can consider the inclusion $\iota:\mathrm{Ker}(f)\rightarrow V$ and the zero map $0:\mathrm{Ker}(f)\rightarrow V$. It’s easy to see that $f\circ\iota=f\circ0$, and so the left-cancellation property shows that $\iota=0$, which is only possible if $\mathrm{Ker}(f)=\mathbf{0}$. So a monomorphism has a trivial kernel. Thus the underlying linear map $f$ is an isomorphism of $V$ onto the image $\mathrm{Im}(f)\subseteq W$. Then this subrepresentation is exactly the kernel of the quotient map $W\rightarrow W/\mathrm{Im}(f)$. And so the monomorphism $f$ is the kernel of some map. The proof that any epimorphism is normal is dual.

And so we have established that the category of representations of the algebra $A$ is abelian. This allows us to bring in all the machinery of homological algebra, if we should so choose. In particular, we can talk about exact sequences, which can be useful from time to time.

December 15, 2008

## Kernels and Images of Intertwiners

The next obvious things to consider are the kernel and the image of an intertwining map. So let’s say we’ve got a representation $\rho:A\rightarrow\mathrm{End}(V)$, a representation $\sigma:A\rightarrow\mathrm{End}(W)$, and an intertwiner $f:\rho\rightarrow\sigma$ defined by the linear map $f:V\rightarrow W$ which satisfies $\left[\sigma(a)\right]\left(f(v)\right)=f\left(\left[\rho(a)\right](v)\right)$ for all $v\in V$ and $a\in A$.

Now the linear map $f$ immediately gives us two subspaces: the kernel $\mathrm{Ker}(f)\subseteq V$ and the image $\mathrm{Im}(f)\subseteq W$. And it turns out that each of these is actually a subrepresentation. Showing this isn’t difficult. A subrepresentation is just a subspace that gets sent to itself under the action on the whole space, so we just have to check that $\rho(a)$ always sends vectors in $\mathrm{Ker}(f)$ back to this subspace, and that $\sigma(a)$ always sends vectors in $\mathrm{Im}(f)$ back into this subspace.

First off, $v\in V$ is in the kernel of $f$ if $f(v)=0$. Then we calculate

$\displaystyle\begin{aligned}f\left(\left[\rho(a)\right](v)\right)&=\left[\sigma(a)\right]\left(f(v)\right)\\&=\left[\sigma(a)\right](0)=0\end{aligned}$

which shows that $\left[\rho(a)\right](v)$ is also in the kernel of $f$.

On the other hand, if $w\in W$ is in the image of $f$, then there is some $v\in V$ so that $w=f(v)$. We calculate

$\displaystyle\begin{aligned}\left[\sigma(a)\right](w)&=\left[\sigma(a)\right]\left(f(v)\right)\\&=f\left(\left[\rho(a)\right](v)\right)\end{aligned}$

And so $\left[\sigma(a)\right](w)$ is also in the image of $f$.

So we’ve seen that the image and kernel of an intertwining map have well-defined actions of $A$, and so we have subrepresentations. Immediately we can conclude that the coimage $\mathrm{Coim}(f)=V/\mathrm{Ker}(f)$ and the cokernel $\mathrm{Cok}(f)=W/\mathrm{Im}(f)$ are quotient representations.

December 12, 2008