The Unapologetic Mathematician

Mathematics for the interested outsider

Inner Products on Differential Forms

Now that we’ve defined inner products on 1-forms, we can define them on k-forms for all other k. In fact, our construction will not depend on the inner products coming from a metric at all.

Indeed, we’ve basically seen this already when we were just dealing with vector spaces and introduced inner products on tensor spaces. Pretty much everything goes just as it did then, so going back and reviewing those constructions will pay dividends now.

Anyway, the upshot: we know that we can write any k-form as a sum of k-fold wedges, so by bilinearity of the inner product we just need to figure out how to calculate the inner product of two such k-fold wedges. This works out as follows:

\displaystyle\begin{aligned}\langle \alpha_1\wedge\dots\wedge \alpha_k,\beta_1\wedge\dots\wedge \beta_k\rangle&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle \alpha_{\pi(1)}\otimes\dots\otimes \alpha_{\pi(k)},\beta_{\hat{\pi}(1)}\otimes\dots\otimes \beta_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle \alpha_{\pi(1)},\beta_{\hat{\pi}(1)}\rangle\dots\langle \alpha_{\pi(k)},\beta_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\hat{\pi}\pi^{-1})\langle \alpha_1,\beta_{\hat{\pi}(\pi^{-1}(1))}\rangle\dots\langle \alpha_{k},\beta_{\hat{\pi}(\pi^{-1}(k))}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle \alpha_1,\beta_{\sigma(1)}\rangle\dots\langle \alpha_k,\beta_{\sigma(k)}\rangle\\&=\frac{1}{k!}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle \alpha_1,\beta_{\sigma(1)}\rangle\dots\langle \alpha_k,\beta_{\sigma(k)}\rangle\\&=\frac{1}{k!}\det\left(\langle\alpha_i,\beta_j\rangle\right)\end{aligned}

In the third line we reindex each product by j=\pi(i), noting that \mathrm{sgn}(\pi\hat{\pi})=\mathrm{sgn}(\hat{\pi}\pi^{-1}); in the fourth, for each fixed \pi the substitution \sigma=\hat{\pi}\pi^{-1} runs over all of S_k, so the sum over \pi just contributes a factor of k!.
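As a quick sanity check on this collapse, here is a minimal numerical sketch in Python (my illustration, not from the original argument), modeling each 1-form at a point as a vector in \mathbb{R}^n, with the ordinary dot product standing in for the metric-induced inner product:

```python
from itertools import permutations
from math import factorial

import numpy as np

def sign(p):
    # Sign of a permutation of (0, ..., k-1), computed by counting inversions.
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

rng = np.random.default_rng(0)
k, n = 3, 5
alpha = rng.standard_normal((k, n))  # k random 1-forms at a point
beta = rng.standard_normal((k, n))
G = alpha @ beta.T  # G[i, j] = <alpha_i, beta_j>

# First line of the derivation: the double sum over pairs of permutations.
double_sum = sum(
    sign(pi) * sign(ph) * np.prod([G[pi[i], ph[i]] for i in range(k)])
    for pi in permutations(range(k))
    for ph in permutations(range(k))
) / factorial(k) ** 2

# Last line of the derivation: (1/k!) det(<alpha_i, beta_j>).
det_form = np.linalg.det(G) / factorial(k)

print(np.isclose(double_sum, det_form))  # True
```

Setting k = n and alpha = beta = np.eye(n), an orthonormal family, makes G the identity matrix and reproduces the \frac{1}{n!} computed next.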

Now let’s say we have an orthonormal basis of 1-forms \{\eta^i\} — a collection of 1-forms such that \langle\eta^i,\eta^j\rangle is the constant function with value 1 if i=j and 0 otherwise. Taking them in order gives us an n-fold wedge \eta^1\wedge\dots\wedge\eta^n. We can calculate its inner product with itself as follows:

\displaystyle\begin{aligned}\langle\eta^1\wedge\dots\wedge\eta^n,\eta^1\wedge\dots\wedge\eta^n\rangle&=\frac{1}{n!}\det\left(\langle\eta^i,\eta^j\rangle\right)\\&=\frac{1}{n!}\det\left(\delta^{ij}\right)=\frac{1}{n!}\end{aligned}

We’ve seen this before when talking about the volume of a parallelepiped, but it still feels like this wedge of an orthonormal basis should have unit length. For this reason, many authors rescale the inner products on k-forms to compensate. That is, they define the inner product on \Omega^k(U) to be the determinant above, rather than \frac{1}{k!} times the determinant as we wrote. We’ll stick with the \frac{1}{k!} version we derived, but remember that not everyone does it this way.

October 4, 2011 | Differential Geometry, Geometry

Inner Products on 1-Forms

Our next step after using a metric to define an inner product on the module \mathfrak{X}(U) of vector fields over the ring \mathcal{O}(U) of smooth functions is to flip it around to the module \Omega^1(U) of 1-forms. The nice thing is that the hard part is already done. All we really need to do is define an inner product on the cotangent space \mathcal{T}^*_p(M); then the extension to 1-forms is exactly like extending from inner products on each tangent space to an inner product on vector fields.

And really this construction is just a special case of a more general one. Let’s say that \langle\underbar{\hphantom{X}},\underbar{\hphantom{X}}\rangle is an inner product on a vector space V. As we mentioned when discussing adjoint transformations, this gives us an isomorphism from V to its dual space V^*: explicitly, it sends a vector v to the linear functional \langle v,\underbar{\hphantom{X}}\rangle. That is, when we have a metric floating around we have a canonical way of identifying tangent vectors in \mathcal{T}_pM with cotangent vectors in \mathcal{T}^*_pM.

Everything is perfectly well-defined at this point, but let’s consider this a bit more explicitly. Say that \{e_i\} is a basis of \mathcal{T}_pM. We automatically have a dual basis \{\eta^i\} defined by \eta^i(e_j)=\delta^i_j, even before defining the metric. So if the inner product g_p defines a mapping \mathcal{T}_pM\to\mathcal{T}^*_pM, what does it look like with respect to these bases? It takes the vector e_i and sends it to a linear functional whose value at e_j is g_p(e_i,e_j). Since we get a number at each point p, we will also write this as a function g_{ij}(p). That is, we can break the image of e_i out as the linear combination

\displaystyle e_i\mapsto\sum\limits_{j=1}^ng_p(e_i,e_j)\eta^j=\sum\limits_{j=1}^ng_{ij}(p)\eta^j

What about a vector with components v^i? We easily calculate

\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{i=1}^nv^i\sum\limits_{j=1}^ng_{ij}(p)\eta^j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\end{aligned}
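Concretely, lowering an index this way is just a vector-matrix multiplication. Here is a minimal sketch in Python; the particular matrix standing in for g_{ij}(p) at a single point is made up purely for illustration:

```python
import numpy as np

# A made-up symmetric, positive-definite matrix standing in for g_ij(p)
# at one point p, relative to some basis {e_i}.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([1.0, -2.0])  # components v^i of a tangent vector

# Components of the image covector: lambda_j = sum_i v^i g_ij.
lam = v @ g
print(lam)  # [ 0. -5.]
```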

So g_{ij} is the matrix of this transformation. The fact that both indices are on the bottom tells us that we are moving from vectors to covectors.

The same sort of reasoning can be applied to the inner product on the dual space. If we again write it as g_p, we get another matrix:

\displaystyle g^{ij}(p)=g_p(\eta^i,\eta^j)

which tells us how to send a basis covector \eta^i to a vector:

\displaystyle \eta^i\mapsto\sum\limits_{j=1}^ng^{ij}(p)e_j

and thus we can calculate the image of any covector with components \lambda_i:

\displaystyle\begin{aligned}\sum\limits_{i=1}^n\lambda_i\eta^i&\mapsto\sum\limits_{i=1}^n\lambda_i\sum\limits_{j=1}^ng^{ij}(p)e_j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^n\lambda_ig^{ij}(p)\right)e_j\end{aligned}

But these two maps are supposed to be inverses of each other! Thus we can send a vector v=v^ie_i to a covector and back:

\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\\&\mapsto\sum\limits_{k=1}^n\left(\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)g^{jk}(p)\right)e_k\\&=\sum\limits_{k=1}^n\left(\sum\limits_{i=1}^nv^i\left(\sum\limits_{j=1}^ng_{ij}(p)g^{jk}(p)\right)\right)e_k\end{aligned}

If this is to be the original vector back, the coefficient of e_k must be v^k, which means the inner sum — the matrix product of g_{ij} and g^{jk} — must be the Kronecker delta. That is, g^{jk} must be the right matrix inverse of g_{ij}.

Similarly, if we start with a covector we will find that g^{ij} must be the left matrix inverse of g_{jk}. Since it’s a left and a right inverse, it must be the inverse; in particular, g_{ij} must be invertible, which is equivalent to the assumption that g_p is nondegenerate! It also means that we can always find the matrix of the inner product on the dual space in terms of the dual basis, assuming we have the matrix of the inner product on the original space.
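Numerically, then, raising indices just uses the matrix inverse. A small sketch continuing the made-up example above:

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)  # plays the role of g^{jk}

v = np.array([1.0, -2.0])

lam = v @ g           # lower an index: lambda_j = v^i g_ij
v_back = lam @ g_inv  # raise it again: v^k = lambda_j g^{jk}

print(np.allclose(v_back, v))  # True: we recover the original vector
```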

And to return to differential geometry, let’s say we have a coordinate patch (U,x). We get a basis of coordinate vector fields, which let us define the matrix-valued function

\displaystyle g_{ij}(p)=g_p\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)

This much we calculate from the metric we are given by assumption. But then we can invert the matrix at each point to get another one:

\displaystyle g^{ij}(p)=g_p\left(dx^i,dx^j\right)

and this is how we define the inner product on covectors. Of course, the situation is entirely symmetric, and if we’d started with a symmetric tensor field of type (2,0) that defined an inner product at each point, we could flip it over to get a metric.
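For a concrete instance (my example, not part of the construction above), take polar coordinates on the punctured plane, where the Euclidean metric has the familiar coordinate matrix. A sketch using sympy:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# The Euclidean metric in polar coordinates (x^1, x^2) = (r, theta):
# g_ij(p) = g_p(d/dx^i, d/dx^j).
g = sp.Matrix([[1, 0],
               [0, r**2]])

# Inverting the matrix at each point gives the inner product on covectors:
# g^{ij}(p) = g_p(dx^i, dx^j) in the dual basis {dr, dtheta}.
g_inv = g.inv()
print(g_inv)  # Matrix([[1, 0], [0, r**(-2)]])
```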

October 1, 2011 | Differential Geometry, Geometry
