The Unapologetic Mathematician

Mathematics for the interested outsider

Inner Products on Exterior Algebras and Determinants

I want to continue yesterday’s post with some more explicit calculations, which will hopefully give a bit more of a feel for how these inner products behave.

First up, let’s consider wedges of degree k. That is, we pick k vectors \left\{v_i\right\}_{i=1}^k and wedge them all together (in order) to get v_1\wedge\dots\wedge v_k. What is its inner product with another of the same form? We calculate

\displaystyle\begin{aligned}\langle v_1\wedge\dots\wedge v_k,w_1\wedge\dots\wedge w_k\rangle&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle v_{\pi(1)}\otimes\dots\otimes v_{\pi(k)},w_{\hat{\pi}(1)}\otimes\dots\otimes w_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle v_{\pi(1)},w_{\hat{\pi}(1)}\rangle\dots\langle v_{\pi(k)},w_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi^{-1}\hat{\pi})\langle v_1,w_{\pi^{-1}(\hat{\pi}(1))}\rangle\dots\langle v_{k},w_{\pi^{-1}(\hat{\pi}(k))}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle v_1,w_{\sigma(1)}\rangle\dots\langle v_k,w_{\sigma(k)}\rangle\\&=\frac{1}{k!}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle v_1,w_{\sigma(1)}\rangle\dots\langle v_k,w_{\sigma(k)}\rangle\end{aligned}

where in the third line we’ve rearranged the factors at the right and used the fact that \mathrm{sgn}(\pi)=\mathrm{sgn}(\pi^{-1}), and in the fourth line we’ve relabelled \sigma=\pi^{-1}\hat{\pi}. This looks a lot like the calculation of a determinant. In fact, it is \frac{1}{k!} times the determinant of the matrix with entries \langle v_i,w_j\rangle.

\displaystyle \langle v_1\wedge\dots\wedge v_k,w_1\wedge\dots\wedge w_k\rangle=\frac{1}{k!}\det\left(\langle v_i,w_j\rangle\right)

If we use the “renormalized” inner product on \Lambda(V) from the end of yesterday’s post, then we get an extra factor of k!, which cancels off the \frac{1}{k!} and gives us exactly the determinant.
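If you’d like to see this happen numerically, here is a quick sketch, just a sanity check rather than part of the argument, written in Python with NumPy. It computes the double sum over S_k directly and compares the result with \frac{1}{k!} times the determinant of the matrix of inner products \langle v_i,w_j\rangle; multiplying the first value by k! would give the “renormalized” inner product, which is the determinant on the nose.

```python
# Sanity check (not from the post): compare the antisymmetrized double sum
# over S_k with (1/k!) times the determinant of the matrix <v_i, w_j>.
from itertools import permutations
from math import factorial

import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple of the indices 0, ..., k-1."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def wedge_inner_product(vs, ws):
    """<v_1 ^ ... ^ v_k, w_1 ^ ... ^ w_k> computed as the double sum over S_k."""
    k = len(vs)
    total = 0.0
    for pi in permutations(range(k)):
        for pi_hat in permutations(range(k)):
            term = sign(pi) * sign(pi_hat)
            for i in range(k):
                term *= np.dot(vs[pi[i]], ws[pi_hat[i]])
            total += term
    return total / factorial(k) ** 2

rng = np.random.default_rng(0)
k, n = 3, 5
vs = rng.standard_normal((k, n))   # k random vectors in R^n
ws = rng.standard_normal((k, n))

gram = np.array([[np.dot(v, w) for w in ws] for v in vs])
print(wedge_inner_product(vs, ws))         # the double sum over permutations
print(np.linalg.det(gram) / factorial(k))  # (1/k!) det(<v_i, w_j>)
```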

We can use the inner product to read off the components of exterior algebra elements with respect to an orthonormal basis \left\{e_i\right\} of V. If \mu is an element of degree k, we write

\displaystyle\mu^{i_1\dots i_k}=k!\langle e_{i_1}\wedge\dots\wedge e_{i_k},\mu\rangle

As an explicit example, we may take V to have dimension 3 and consider an element of degree 2 in \Lambda(V)

\displaystyle\mu=\mu^{12}e_1\wedge e_2+\mu^{13}e_1\wedge e_3+\mu^{23}e_2\wedge e_3

What we’re writing in the superscript of \mu is called a “multi-index”, and sometimes we just write it as I, which in the summation convention runs over all increasing collections of k indices. Correspondingly, we can just write e_I=e_{i_1}\wedge\dots\wedge e_{i_k} for the multi-index I=(i_1,\dots,i_k).

Alternatively, we could expand the wedges out in terms of tensors:

\displaystyle\begin{aligned}\mu&=\mu^{12}e_1\wedge e_2+\mu^{13}e_1\wedge e_3+\mu^{23}e_2\wedge e_3\\&=\frac{1}{2}\left(\mu^{12}e_1\otimes e_2-\mu^{12}e_2\otimes e_1+\mu^{13}e_1\otimes e_3-\mu^{13}e_3\otimes e_1+\mu^{23}e_2\otimes e_3-\mu^{23}e_3\otimes e_2\right)\\&=\frac{1}{2}\left(\mu^{12}e_1\otimes e_2+\mu^{21}e_2\otimes e_1+\mu^{13}e_1\otimes e_3+\mu^{31}e_3\otimes e_1+\mu^{23}e_2\otimes e_3+\mu^{32}e_3\otimes e_2\right)\\&=\frac{1}{2!}\mu^{ij}e_i\otimes e_j\end{aligned}

where we just think of the superscript as a collection of k separate indices, all of which run from 1 to the dimension of V, with the understanding that \mu^{ij}=-\mu^{ji}, and similarly for higher degrees: swapping two indices switches the sign of the component, while the \frac{1}{k!} out front compensates for the fact that each increasing multi-index now shows up k! times in the sum. All this index juggling gets distracting and confusing, but it’s sometimes necessary for explicit computations, and the physicists love it.
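As a concrete check on this bookkeeping, here is another small sketch (again Python with NumPy, and again not part of the post, just an illustration of the conventions above): represent a degree-2 element of \Lambda(V), with V three-dimensional, as the antisymmetric array of its tensor components, and read the components back off using \mu^{ij}=2!\langle e_i\wedge e_j,\mu\rangle.

```python
# Illustration (not from the post): component bookkeeping for a degree-2
# element of Lambda(V) with dim V = 3, using the averaging convention
# e_i ^ e_j = (e_i (x) e_j - e_j (x) e_i) / 2 and the tensor inner product.
import numpy as np

def wedge(i, j, n=3):
    """e_i ^ e_j as an antisymmetric n-by-n array of tensor components."""
    t = np.zeros((n, n))
    t[i, j] = 0.5
    t[j, i] = -0.5
    return t

def inner(a, b):
    """Inner product of two tensors: sum of products of matching components."""
    return float(np.sum(a * b))

# mu = 5 e_1^e_2 + 7 e_1^e_3 + 2 e_2^e_3, written with 0-based indices.
mu = 5 * wedge(0, 1) + 7 * wedge(0, 2) + 2 * wedge(1, 2)

print(2 * inner(wedge(0, 1), mu))  # 5.0,  i.e. mu^{12} = 2! <e_1 ^ e_2, mu>
print(2 * inner(wedge(1, 0), mu))  # -5.0, i.e. mu^{21} = -mu^{12}
print(np.allclose(mu, -mu.T))      # True: swapping indices flips the sign
```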

Anyway, we can use this to get back to our original definition of the determinant of a linear transformation T. Pick an orthonormal basis \left\{e_i\right\}_{i=1}^n for V and wedge them all together to get an element e_1\wedge\dots\wedge e_n of top degree in \Lambda(V). Since the space of top degree is one-dimensional, any linear transformation on it just consists of multiplying by a scalar. So we can let T act on this one element we’ve cooked up, and then read off the coefficient using the inner product.

The linear transformation T sends e_i to the vector T(e_i)=t_i^je_j. By functoriality, it sends e_1\wedge\dots\wedge e_n to T(e_1)\wedge\dots\wedge T(e_n). And now we want to calculate the coefficient.

\displaystyle\begin{aligned}n!\langle e_1\wedge\dots\wedge e_n,T(e_1)\wedge\dots\wedge T(e_n)\rangle&=\frac{n!}{n!}\det\left(\langle e_j,T(e_i)\rangle\right)\\&=\det\left(\langle e_j,t_i^ke_k\rangle\right)\\&=\det\left(t_i^k\langle e_j,e_k\rangle\right)\\&=\det\left(t_i^k\delta_k^j\right)\\&=\det\left(t_i^j\right)\end{aligned}

The determinant of T is exactly the factor by which T acting on the top degree subspace in \Lambda(V) expands any given element.
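To close the loop, here is one last sketch (Python with NumPy again, purely illustrative and not part of the argument): compute the coefficient n!\langle e_1\wedge\dots\wedge e_n,T(e_1)\wedge\dots\wedge T(e_n)\rangle using the single-sum form of the inner product from the top of the post, and compare it with the determinant NumPy computes directly.

```python
# Illustration (not from the post): the factor by which T scales the top
# wedge e_1 ^ ... ^ e_n agrees with the determinant of its matrix.
from itertools import permutations
from math import factorial

import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple of the indices 0, ..., n-1."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))    # a random linear transformation on R^n

es = np.eye(n)                     # the orthonormal basis e_1, ..., e_n
Tes = [T @ e for e in es]          # their images T(e_1), ..., T(e_n)

# n! <e_1^...^e_n, T(e_1)^...^T(e_n)> via the single-sum form
# (1/n!) sum_sigma sgn(sigma) <e_1, w_sigma(1)> ... <e_n, w_sigma(n)>;
# the n! cancels the 1/n!, leaving the bare sum below.
coeff = sum(
    sign(sigma) * np.prod([np.dot(es[i], Tes[sigma[i]]) for i in range(n)])
    for sigma in permutations(range(n))
)
print(coeff)
print(np.linalg.det(T))  # should agree with the coefficient above
```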
