
Calculating the Determinant

Today we’ll actually calculate the determinant representation of an element of \mathrm{GL}(V) for a finite-dimensional vector space V. That is, what transformation does an element of the general linear group induce on the one-dimensional space of antisymmetric tensors of maximal degree?

First off, what will the determinant of a linear transformation be? It will be an invertible linear transformation from a one-dimensional vector space to itself. That is, if we have a basis (any single nonzero vector) for the one-dimensional space, the determinant can be described by an invertible 1\times1 matrix — a single nonzero field element. The action is just to multiply every vector by this field element. So all we have to do is find a vector in our space and see what the representation does to it.
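To spell this out: if w is any nonzero vector spanning the one-dimensional space, then the induced transformation sends

\displaystyle w\mapsto\lambda w

for some nonzero field element \lambda, and by linearity it sends cw to \lambda(cw) for every scalar c. That single number \lambda is what we need to compute.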

But finding a vector is easy. We just pick a basis of V (any basis will do) and antisymmetrize a d-tuple of basis elements, where d is the dimension of V. The obvious choice is the tensor e_1\otimes...\otimes e_d. Then we have an antisymmetric tensor

\displaystyle A\left(\bigotimes\limits_{k=1}^d e_k\right)

Now, we could calculate this right away, but let’s not do that. What we’re really interested in is how a group element T\in\mathrm{GL}(V) acts on this tensor

\displaystyle T^{\otimes d}\left(A\left(\bigotimes\limits_{k=1}^d e_k\right)\right)

But remember that the actions of the symmetric and general linear groups commute

\displaystyle\begin{aligned}A\left(T^{\otimes d}\left(\bigotimes\limits_{k=1}^de_k\right)\right)&=A\left(\bigotimes\limits_{k=1}^dT(e_k)\right)\\&=A\left(\bigotimes\limits_{k=1}^d\left(\sum\limits_{j=1}^dt_k^je_j\right)\right)\end{aligned}

where the t_k^j are the matrix entries of T with respect to our chosen basis, and we're not using the summation convention for the moment.
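For instance, when d=2 this last expression reads

\displaystyle A\left(T(e_1)\otimes T(e_2)\right)=A\left(\left(t_1^1e_1+t_1^2e_2\right)\otimes\left(t_2^1e_1+t_2^2e_2\right)\right)

and we'll keep following this d=2 example through the rest of the calculation.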

The next step is the tricky part. What we have is a product of sums, which we want to turn into a sum of products. We walk through the factors, picking one summand from each as we go along. That is, for every k\in\{1,...,d\} we pick some \pi(k)\in\{1,...,d\} to get the term

\displaystyle\bigotimes\limits_{k=1}^dt_k^{\pi(k)}e_{\pi(k)}

And we can factor out all these constants to make our term look like

\displaystyle\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}

We want to sum up over all possible such terms. This is really just a big application of the distributive property — the linearity of tensor multiplication. At the end we have

\displaystyle A\left(\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)
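If you want to see this expansion happen mechanically, here is a minimal Python sketch (not from the original post) that checks the underlying distributive identity \prod_k\sum_jt_k^j=\sum_\pi\prod_kt_k^{\pi(k)} on a small random matrix. It treats every e_j as the scalar 1, so only the coefficients are being tracked, and the variable names are just for this illustration.

```python
import itertools

import numpy as np

# Random entries standing in for t_k^j: rows indexed by k, columns by j
# (zero-based here).  Replacing each basis vector e_j by the scalar 1 lets
# us check just the bookkeeping of the distributive expansion.
rng = np.random.default_rng(0)
d = 3
t = rng.random((d, d))

# The product of sums: one factor for each k, each factor summing over j.
product_of_sums = np.prod([t[k, :].sum() for k in range(d)])

# The sum of products: one term for every function pi from {0,...,d-1} to
# itself, each term multiplying together the chosen entries t[k, pi(k)].
sum_of_products = sum(
    np.prod([t[k, pi[k]] for k in range(d)])
    for pi in itertools.product(range(d), repeat=d)
)

print(np.isclose(product_of_sums, sum_of_products))  # prints True
```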

But since antisymmetrization is linear we get

\displaystyle\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)

And here's where the properties of the antisymmetrizer come in. First off, if \pi(j)=\pi(k) for any two distinct indices j and k, then the antisymmetrization of the term vanishes. The only terms that survive are those where \pi never repeats a value, and a function from the finite set \{1,...,d\} to itself that never repeats a value is exactly a permutation. Thus our sum is really only over the permutations \pi\in S_d

\displaystyle\sum\limits_{\pi\in S_d}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)
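In the d=2 example there are four functions \pi to begin with: both indices can map to 1, both can map to 2, or \pi can be one of the two bijections. The first two choices produce the terms t_1^1t_2^1A(e_1\otimes e_1) and t_1^2t_2^2A(e_2\otimes e_2), which vanish under antisymmetrization, leaving

\displaystyle t_1^1t_2^2A\left(e_1\otimes e_2\right)+t_1^2t_2^1A\left(e_2\otimes e_1\right)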

But now each term involves antisymmetrizing the same collection of basis vectors, just in different orders. So for each one we can rearrange the tensorands back into the order e_1\otimes...\otimes e_d, at the possible cost of picking up a sign: the sign \mathrm{sgn}(\pi) of the permutation we used.

\displaystyle\sum\limits_{\pi\in S_d}\left(\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\mathrm{sgn}(\pi)A\left(\bigotimes\limits_{k=1}^de_k\right)\right)
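Back in the d=2 example, the identity permutation contributes a plus sign and the transposition contributes a minus sign, so A(e_2\otimes e_1)=-A(e_1\otimes e_2) and the two surviving terms combine into

\displaystyle\left(t_1^1t_2^2-t_1^2t_2^1\right)A\left(e_1\otimes e_2\right)

which is the familiar 2\times2 determinant formula times our basic antisymmetric tensor.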

And now the antisymmetrizer part has nothing to do with the summation over \pi. We factor it out to find

\displaystyle\left(\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\right)A\left(\bigotimes\limits_{k=1}^de_k\right)

So in the end we’ve multiplied our antisymmetric tensor by the factor

\displaystyle\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^dt_k^{\pi(k)}

which is our determinant. For each permutation \pi we take our matrix and walk down the rows: at the kth row we pick out the entry in the \pi(k)th column. We multiply these d entries together, multiply the result by \mathrm{sgn}(\pi), and sum these signed products over all \pi\in S_d.
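This recipe translates directly into code. Here is a minimal Python sketch (not part of the original post) that enumerates the permutations with itertools, computes each sign by counting inversions, and checks the result against numpy's built-in determinant; the helper names sign and determinant are just for this illustration. Since the sum has d! terms, this is a demonstration of the formula rather than a practical algorithm.

```python
import itertools

import numpy as np

def sign(perm):
    # Sign of a permutation (given as a tuple of 0,...,d-1), computed by
    # counting inversions: an even count gives +1, an odd count gives -1.
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def determinant(t):
    # Sum over all permutations pi of sgn(pi) times the product of the
    # entries t[k, pi(k)], one entry chosen from each row.
    d = t.shape[0]
    return sum(
        sign(pi) * np.prod([t[k, pi[k]] for k in range(d)])
        for pi in itertools.permutations(range(d))
    )

# Sanity check against numpy's determinant on a small random matrix.
rng = np.random.default_rng(0)
t = rng.random((4, 4))
print(np.isclose(determinant(t), np.linalg.det(t)))  # prints True
```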


January 2, 2009 | Posted in Algebra, Linear Algebra, Representation Theory


