The Unapologetic Mathematician

Mathematics for the interested outsider

Calculating the Determinant

Today we’ll actually calculate the determinant representation of an element of \mathrm{GL}(V) for a finite-dimensional vector space V. That is, what transformation does an element of the general linear group induce on the one-dimensional space of antisymmetric tensors of maximal degree?

First off, what will the determinant of a linear transformation be? It will be an invertible linear transformation from a one-dimensional vector space to itself. That is, if we have a basis (any single nonzero vector) for the one-dimensional space, the determinant can be described by an invertible 1\times1 matrix — a single nonzero field element. The action is just to multiply every vector by this field element. So all we have to do is find a vector in our space and see what the representation does to it.

But finding a vector is easy. We just pick a basis of V (any basis will do) and antisymmetrize a d-tuple of basis elements. The obvious tensor to antisymmetrize is e_1\otimes...\otimes e_d. Then we have the antisymmetric tensor

\displaystyle A\left(\bigotimes\limits_{k=1}^d e_k\right)

Now, we could calculate this right away, but let’s not do that. What we’re really interested in is how a group element T\in\mathrm{GL}(V) acts on this tensor

\displaystyle T^{\otimes d}\left(A\left(\bigotimes\limits_{k=1}^d e_k\right)\right)

But remember that the actions of the symmetric and general linear groups commute

\displaystyle\begin{aligned}T^{\otimes d}\left(A\left(\bigotimes\limits_{k=1}^de_k\right)\right)&=A\left(T^{\otimes d}\left(\bigotimes\limits_{k=1}^de_k\right)\right)\\&=A\left(\bigotimes\limits_{k=1}^dT(e_k)\right)\\&=A\left(\bigotimes\limits_{k=1}^d\left(\sum\limits_{j=1}^dt_k^je_j\right)\right)\end{aligned}

where, for the moment, we're not using the summation convention.

The next step is the tricky part. What we have is a product of sums, which we want to turn into a sum of products. We walk through the factors, picking one summand from each as we go along. That is, for every k\in\{1,...,d\} we pick some \pi(k)\in\{1,...,d\} to get the term

\displaystyle\bigotimes\limits_{k=1}^dt_k^{\pi(k)}e_{\pi(k)}

And we can factor out all these constants to make our term look like

\displaystyle\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}

We want to sum up over all possible such terms. This is really just a big application of the distributive property — the linearity of tensor multiplication. At the end we have

\displaystyle A\left(\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)
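To see this distributive expansion concretely, here is a short Python sketch (the names `expansion_terms` and `t` are mine, not from the post) that enumerates all d^d functions \pi and the scalar coefficient each term contributes:

```python
from itertools import product
from math import prod

def expansion_terms(t):
    """Enumerate every function pi: {0,...,d-1} -> {0,...,d-1}
    together with the coefficient prod_k t[k][pi(k)] of its term."""
    d = len(t)
    for pi in product(range(d), repeat=d):
        coeff = prod(t[k][pi[k]] for k in range(d))
        yield pi, coeff

# A 2x2 example: the product of two sums expands into 2^2 = 4 terms.
t = [[2, 3],
     [5, 7]]
terms = list(expansion_terms(t))
print(len(terms))             # 4 terms, one per function pi
print([c for _, c in terms])  # [10, 14, 15, 21]
```

Each pair yielded here corresponds to one summand in the display above, before the antisymmetrizer is applied.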

But since antisymmetrization is linear we get

\displaystyle\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)

And here’s where the properties of the antisymmetrizer come in. First off, if \pi(j)=\pi(k) for any two distinct indices j and k, then the antisymmetrization of the term will vanish. Thus our sum really runs only over the bijections, that is, over permutations \pi\in S_d

\displaystyle\sum\limits_{\pi\in S_d}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)
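As a quick sanity check on this restriction, a few illustrative lines of Python count how many of the d^d functions survive as permutations:

```python
from itertools import product
from math import factorial

d = 4
# All d^d functions pi: {0,...,d-1} -> {0,...,d-1} from the raw expansion.
functions = list(product(range(d), repeat=d))
# Only the injective ones (the permutations) survive antisymmetrization.
injective = [pi for pi in functions if len(set(pi)) == d]

print(len(functions))  # 256 = 4^4 terms in the raw expansion
print(len(injective))  # 24 = 4! terms that survive
assert len(injective) == factorial(d)
```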

But now each term involves antisymmetrizing the same collection of basis vectors, just in different orders. So for each one we can rearrange the tensorands, at the possible cost of picking up a negative sign

\displaystyle\sum\limits_{\pi\in S_d}\left(\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\mathrm{sgn}(\pi)A\left(\bigotimes\limits_{k=1}^de_k\right)\right)
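The sign picked up here is just the parity of the permutation, which can be computed by counting inversions. A minimal sketch (the function name `sgn` is my own choice):

```python
def sgn(pi):
    """Sign of a permutation given as a tuple of 0-based values:
    +1 if sorting it takes an even number of transpositions, -1 if odd."""
    inversions = sum(
        1
        for i in range(len(pi))
        for j in range(i + 1, len(pi))
        if pi[i] > pi[j]
    )
    return -1 if inversions % 2 else 1

print(sgn((0, 1, 2)))  # 1: the identity needs no swaps
print(sgn((1, 0, 2)))  # -1: a single transposition
print(sgn((1, 2, 0)))  # 1: a 3-cycle is two transpositions
```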

And now the antisymmetrizer part has nothing to do with the summation over \pi. We factor it out to find

\displaystyle\left(\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\right)A\left(\bigotimes\limits_{k=1}^de_k\right)

So in the end we’ve multiplied our antisymmetric tensor by the factor

\displaystyle\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^dt_k^{\pi(k)}

which is our determinant. For each permutation \pi we take our matrix and walk down the rows: at the kth row we pick out the entry in the \pi(k)th column. We multiply these entries together, weight the product by \mathrm{sgn}(\pi), and sum over all \pi\in S_d.
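Putting it all together, here is a minimal Python sketch of this sum-over-permutations formula (the helper names `sgn` and `det` are mine). For a triangular matrix only the identity permutation contributes a nonzero product, so the result should be the product of the diagonal entries:

```python
from itertools import permutations
from math import prod

def sgn(pi):
    """Sign of a permutation, from its inversion count."""
    inv = sum(1 for i in range(len(pi))
                for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if inv % 2 else 1

def det(t):
    """Sum over all permutations pi of sgn(pi) * prod_k t[k][pi(k)]:
    walk down the rows, picking the pi(k)th column in row k."""
    d = len(t)
    return sum(sgn(pi) * prod(t[k][pi[k]] for k in range(d))
               for pi in permutations(range(d)))

# Upper-triangular: only the identity permutation avoids the zeros
# below the diagonal, so det = 1 * 5 * 9 = 45.
print(det([[1, 2, 3],
           [0, 5, 6],
           [0, 0, 9]]))  # 45
```

This is of course hopelessly inefficient for large matrices (it sums d! terms), but it is a direct transcription of the formula we just derived.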

January 2, 2009 Posted by | Algebra, Linear Algebra, Representation Theory | 10 Comments

