The Unapologetic Mathematician

Mathematics for the interested outsider

Inner Products on 1-Forms

Our next step, after using a metric to define an inner product on the module \mathfrak{X}(U) of vector fields over the ring \mathcal{O}(U) of smooth functions, is to flip it around to the module \Omega^1(U) of 1-forms. The nice thing is that the hard part is already done. All we really need to do is define an inner product on each cotangent space \mathcal{T}^*_pM; then the extension to 1-forms is exactly like extending from inner products on each tangent space to an inner product on vector fields.

And really this construction is just a special case of a more general one. Let’s say that \langle\underbar{\hphantom{X}},\underbar{\hphantom{X}}\rangle is an inner product on a vector space V. As we mentioned when discussing adjoint transformations, this gives us an isomorphism from V to its dual space V^*. That is, when we have a metric floating around we have a canonical way of identifying tangent vectors in \mathcal{T}_pM with cotangent vectors in \mathcal{T}^*_pM.
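
As a toy illustration of this isomorphism, here is a minimal sketch in Python, where the standard dot product on \mathbb{R}^2 stands in for a general inner product and the particular vectors are made up: a vector v goes to the linear functional \langle v,\underbar{\hphantom{X}}\rangle.

```python
import numpy as np

def flat(v, inner):
    """Send the vector v to the linear functional w -> <v, w>."""
    return lambda w: inner(v, w)

# The standard dot product on R^2 stands in for a general inner product.
inner = lambda v, w: float(np.dot(v, w))

v = np.array([3.0, 4.0])
v_flat = flat(v, inner)
print(v_flat(np.array([1.0, 0.0])))  # 3.0: the functional evaluated on e_1
```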

Everything is perfectly well-defined at this point, but let’s consider this a bit more explicitly. Say that \{e_i\} is a basis of \mathcal{T}_pM. We automatically have a dual basis \{\eta^i\} defined by \eta^i(e_j)=\delta^i_j, even before defining the metric. So if the inner product g_p defines a mapping \mathcal{T}_pM\to\mathcal{T}^*_pM, what does it look like with respect to these bases? It takes the vector e_i and sends it to a linear functional whose value at e_j is g_p(e_i,e_j). Since we get a number at each point p, we will also write this as a function g_{ij}(p). That is, we can write the image of e_i as the linear combination

\displaystyle e_i\mapsto\sum\limits_{j=1}^ng_p(e_i,e_j)\eta^j=\sum\limits_{j=1}^ng_{ij}(p)\eta^j

What about a vector with components v^i? We easily calculate

\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{i=1}^nv^i\sum\limits_{j=1}^ng_{ij}(p)\eta^j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\end{aligned}
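
To make this step concrete, here is a minimal numerical sketch in Python with NumPy; the particular matrix and vector are made up for illustration. Lowering the index of a vector with components v^i against a symmetric matrix g_{ij} is exactly the computation above:

```python
import numpy as np

# A made-up symmetric, nondegenerate matrix standing in for g_{ij}(p),
# and the components v^i of a vector in the basis {e_i}.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, -2.0])

# The covector components are v_j = sum_i v^i g_{ij}; since g is
# symmetric this is just the matrix-vector product.
v_lower = v @ g
print(v_lower)  # [ 0. -5.]
```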

So g_{ij} is the matrix of this transformation. The fact that both indices are on the bottom tells us that we are moving from vectors to covectors.

The same sort of reasoning can be applied to the inner product on the dual space. If we again write it as g_p, then we get another matrix:

\displaystyle g^{ij}(p)=g_p(\eta^i,\eta^j)

which tells us how to send a basis covector \eta^i to a vector:

\displaystyle \eta^i\mapsto\sum\limits_{j=1}^ng^{ij}(p)e_j

and thus we can calculate the image of any covector with components \lambda_i:

\displaystyle\begin{aligned}\sum\limits_{i=1}^n\lambda_i\eta^i&\mapsto\sum\limits_{i=1}^n\lambda_i\sum\limits_{j=1}^ng^{ij}(p)e_j\\&=\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^n\lambda_ig^{ij}(p)\right)e_j\end{aligned}
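
Again as a sketch in the same vein (NumPy, made-up numbers), here is raising the index of a covector with components \lambda_i against a matrix g^{ij}. The matrix is taken to be the inverse of the one used above since, as the next paragraphs show, that is what it must be:

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g_inv = np.linalg.inv(g)       # standing in for g^{ij}(p)

lam = np.array([0.0, -5.0])    # the covector components lambda_i from before

# The vector components are v^j = sum_i lambda_i g^{ij}.
v_raised = lam @ g_inv
print(v_raised)  # [ 1. -2.]: recovers the vector we lowered earlier
```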

But these two mappings are supposed to be inverses of each other! Thus we can send a vector v=v^ie_i to a covector and back:

\displaystyle\begin{aligned}\sum\limits_{i=1}^nv^ie_i&\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\\&\mapsto\sum\limits_{k=1}^n\left(\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)g^{jk}(p)\right)e_k\\&=\sum\limits_{k=1}^n\left(\sum\limits_{i=1}^nv^i\left(\sum\limits_{j=1}^ng_{ij}(p)g^{jk}(p)\right)\right)e_k\end{aligned}

If this is to give back the original vector, the coefficient of e_k must be v^k, which means the inner sum (the matrix product of g_{ij} and g^{jk}) must be the Kronecker delta \delta_i^k. That is, g^{jk} must be the right matrix inverse of g_{ij}.

Similarly, if we start with a covector we will find that g^{ij} must be the left matrix inverse of g_{jk}. Since it’s a left and a right inverse, it must be the inverse; in particular, g_{ij} must be invertible, which is equivalent to the assumption that g_p is nondegenerate! It also means that we can always find the matrix of the inner product on the dual space in terms of the dual basis, assuming we have the matrix of the inner product on the original space.
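
As a quick numerical check of this whole round trip, continuing the same made-up example as above: the matrix product of g_{ij} and g^{jk} is the identity, and lowering then raising an index returns the original vector.

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # g_{ij}: symmetric and nondegenerate
g_inv = np.linalg.inv(g)     # g^{jk}: its matrix inverse

# The matrix product is the Kronecker delta...
assert np.allclose(g @ g_inv, np.eye(2))

# ...so lowering an index and then raising it is the identity.
v = np.array([1.0, -2.0])
assert np.allclose((v @ g) @ g_inv, v)
print("round trip OK")
```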

And to return to differential geometry, let’s say we have a coordinate patch (U,x). This gives us a basis of coordinate vector fields, which lets us define the matrix-valued function

\displaystyle g_{ij}(p)=g_p\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)

This much we calculate from the metric we are given by assumption. But then we can invert the matrix at each point to get another one:

\displaystyle g^{ij}(p)=g_p\left(dx^i,dx^j\right)

and this is how we define the inner product on covectors. Of course, the situation is entirely symmetric: if we’d started with a symmetric tensor field of type (2,0) that defined an inner product on each cotangent space, we could flip it over to get a metric.
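
For a concrete coordinate example (not from the discussion above, just an illustration): in polar coordinates (r,\theta) on the punctured plane, the Euclidean metric has matrix \mathrm{diag}(1,r^2), so inverting pointwise gives \mathrm{diag}(1,1/r^2) as the matrix of the inner product on 1-forms.

```python
import numpy as np

def g_lower(r):
    """g_{ij} for the Euclidean metric in polar coordinates (r, theta)."""
    return np.array([[1.0, 0.0],
                     [0.0, r**2]])

def g_upper(r):
    """g^{ij}: the pointwise matrix inverse, defining <dx^i, dx^j>."""
    return np.linalg.inv(g_lower(r))

r = 2.0
print(g_upper(r))
# [[1.   0.  ]
#  [0.   0.25]]
# That is, <dr,dr> = 1, <dr,dtheta> = 0, and <dtheta,dtheta> = 1/r^2.
```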

October 1, 2011 - Posted by | Differential Geometry, Geometry
