The Unapologetic Mathematician

Mathematics for the interested outsider

Conservative Vector Fields

For a moment, let’s return to the case of Riemannian manifolds; the vector field analogue of an exact 1-form \omega=df is called a “conservative” vector field X=\nabla f, which is the gradient of some function f.

Now, “conservative” is not meant in any political sense. To the contrary: integration is easy with conservative vector fields. Indeed, if we have a curve c that starts at a point p and ends at a point q then the fundamental theorem of line integrals makes it easy to calculate:

\displaystyle\int\limits_c\nabla f\cdot ds=f(q)-f(p)

I didn’t go into this before, but the really interesting thing here is that this means that line integrals of conservative vector fields are independent of the path we integrate along. As a special case, the integral around any closed curve — where q=p — is automatically zero. The application of such line integrals to calculating the change of energy of a point moving through a field of force explains the term “conservative”; the line integral gives the change of energy, and whenever we return to our starting point energy is unchanged — “conserved” — by a conservative force field.
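To see this concretely, here is a quick check with Python’s SymPy (not part of the original discussion; the potential function and the curves are made up purely for illustration): line integrals of a gradient field only see the endpoints, while a non-gradient field can pick up a nonzero integral around a closed loop.

```python
# SymPy sketch: path-independence for a gradient field in the plane, and a
# nonzero closed-loop integral for a field that is not a gradient.
import sympy as sp

t, x, y = sp.symbols('t x y')

f = x**2 * y                                  # a potential function
grad_f = (sp.diff(f, x), sp.diff(f, y))       # its gradient field

def line_integral(F, curve):
    """Integrate F . c'(t) dt for t in [0, 1] along curve = (x(t), y(t))."""
    cx, cy = curve
    Fx = F[0].subs({x: cx, y: cy})
    Fy = F[1].subs({x: cx, y: cy})
    return sp.integrate(Fx*sp.diff(cx, t) + Fy*sp.diff(cy, t), (t, 0, 1))

# Two different paths from (0, 0) to (1, 1): both integrals equal f(1,1) - f(0,0) = 1
print(line_integral(grad_f, (t, t)), line_integral(grad_f, (t, t**3)))

# Around a closed loop (the unit circle) the gradient field integrates to zero,
# while the non-gradient field (-y, x) picks up 2*pi.
loop = (sp.cos(2*sp.pi*t), sp.sin(2*sp.pi*t))
print(line_integral(grad_f, loop))            # 0
print(line_integral((-y, x), loop))           # 2*pi
```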

This suggests that it might actually be more appropriate to say that a vector field is conservative if it satisfies this condition on closed loops; I say that this is actually the same thing as our original definition. That is, a vector field is conservative — the gradient of some function — if and only if its line integral around any closed curve is automatically zero.

As a first step back the other way, it’s easy to see that this condition implies path-independence: if c_1 and c_2 go between the same two points — if c_1(0)=c_2(0) and c_1(1)=c_2(1) — then

\displaystyle\int\limits_{c_1}X\cdot ds=\int\limits_{c_2}X\cdot ds

Indeed, the formal sum c_1-c_2 is a closed 1-chain, since \partial(c_1-c_2)=\partial c_1-\partial c_2=0, and so

\displaystyle\int\limits_{c_1}X\cdot ds-\int\limits_{c_2}X\cdot ds=\int\limits_{c_1-c_2}X\cdot ds=0

Of course, this also gives rise to a parallel — and equivalent — assertion about 1-forms: if the integral of \omega around any closed 1-chain is always zero, then \omega=df for some function f. Since we can state this even in the general, non-Riemannian case, we will prove this one instead.

December 15, 2011 | Differential Geometry, Geometry

The Classical Stokes Theorem

At last we come to the version of Stokes’ theorem that people learn under that name in calculus courses. Ironically, unlike the fundamental theorem and divergence theorem special cases, this one only works in dimension n=3: the exterior derivative takes a 1-form, which we integrate along a 1-dimensional curve, to a 2-form, and a 2-form is an n-1-form that we can integrate across a hypersurface exactly when n=3.

So, let’s say that S is some two-dimensional oriented surface inside a three-dimensional manifold M, and let c=\partial S be its boundary. On the other side, let \alpha be a 1-form corresponding to a vector field F. We can easily define the line integral

\displaystyle\int\limits_c\alpha

and Stokes’ theorem tells us that this is equal to

\displaystyle\int\limits_{\partial S}\alpha=\int\limits_Sd\alpha

Now if we define \beta=*d\alpha as another 1-form then we know it corresponds to the curl \nabla\times F. But on the other hand we know that in dimension 3 we have *^2=1, and so we find *\beta=**d\alpha=d\alpha as well. Thus we have

\displaystyle\int\limits_c\alpha=\int\limits_S*\beta

which means that the line integral of F around the (oriented) boundary c of S is the same as the surface integral of the curl \nabla\times F through S itself. And this is exactly the old Stokes theorem from multivariable calculus.
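As a sanity check (an example of my own choosing, not from the post), SymPy can verify the classical statement for the field F=(-y,x,0) over the unit disk in the plane z=0, whose curl is (0,0,2):

```python
# SymPy sketch: circulation of F = (-y, x, 0) around the unit circle equals the
# flux of curl F = (0, 0, 2) through the unit disk in the plane z = 0.
import sympy as sp

t, r, theta = sp.symbols('t r theta')

# Line integral around the boundary c(t) = (cos t, sin t, 0)
cx, cy = sp.cos(t), sp.sin(t)
Fx, Fy = -cy, cx                                   # F evaluated along the curve
circulation = sp.integrate(Fx*sp.diff(cx, t) + Fy*sp.diff(cy, t), (t, 0, 2*sp.pi))

# Surface integral of (curl F) . k = 2 over the unit disk, in polar coordinates
flux = sp.integrate(2*r, (r, 0, 1), (theta, 0, 2*sp.pi))

print(circulation, flux)                           # both are 2*pi
```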

November 23, 2011 | Differential Geometry, Geometry

The Divergence Theorem

I’ve been really busy with other projects — and work — of late, but I think we can get started again. We left off right after defining hypersurface integrals, which puts us in position to prove the divergence theorem.

So, let S be a hypersurface with dimension n-1 in an n-manifold M, and let F be a vector field on some region containing S, so we can define the hypersurface integral

\displaystyle\int\limits_SF\cdot dS

And if F corresponds to a 1-form \alpha, we can write this as

\displaystyle\int\limits_S\langle\alpha,*\omega_S\rangle\omega_S

where \omega_S is the oriented volume form of S and *\omega_S is a 1-form that “points perpendicular to” S in M. We take the given inner product and integrate it as a function against the volume form of S itself.

A little juggling lets us rewrite:

\displaystyle\int\limits_S*\alpha

where we take our 1-form and flip it around to the “perpendicular” n-1 form *\alpha. Integrating this over S involves projecting against \omega_S, which is basically what the above formula connotes.

Now, let’s say that the surface S is the boundary of some n-dimensional submanifold E of M, and that it’s outward-oriented. That is, we can write S=\partial E. Then our hypersurface integral looks like

\displaystyle\int\limits_{\partial E}*\alpha

Next we’ll jump over to the other end and take the divergence \nabla\cdot F and integrate it over the region E. In terms of the 1-form \alpha (writing \omega for the volume form on M), this looks like

\displaystyle\int\limits_E(*d*\alpha)\omega=\int\limits_E(d*\alpha)

But Stokes’ theorem tells us that

\displaystyle\int\limits_{\partial E}*\alpha=\int\limits_E(d*\alpha)

which tells us in our vector field notation that

\displaystyle\int\limits_SF\cdot dS=\int\limits_E\nabla\cdot F\,dV

This is the divergence — or Gauss’, or Gauss–Ostrogradsky — theorem, and it’s yet another special case of Stokes’ theorem.
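Here is a concrete check, again with an example chosen just for illustration, computed in SymPy: take F=(x,y,z) on the closed unit ball, so that \nabla\cdot F=3 and F restricted to the unit sphere is the outward unit normal.

```python
# SymPy sketch: the divergence theorem for F = (x, y, z) on the unit ball.
# div F = 3, so the volume integral is 3 * (4/3) pi = 4 pi; on the unit sphere
# F . n = 1, so the flux is just the surface area 4 pi.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi')

volume_integral = sp.integrate(3 * r**2 * sp.sin(theta),
                               (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

flux = sp.integrate(sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

print(volume_integral, flux)   # both are 4*pi
```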

November 22, 2011 | Differential Geometry, Geometry

(Hyper-)Surface Integrals

The flip side of the line integral is the surface integral.

Given an n-manifold M we let S be an oriented n-1-dimensional “hypersurface”, a term we use because in the usual case of M=\mathbb{R}^3 a hypersurface has two dimensions — it’s a regular surface. The orientation on a hypersurface is given by n-1 tangent vectors at each point, spanning the image of the derivative of the local parameterization map, which is a singular cube.

Now we want another way of viewing this orientation. Given a metric on M we can use the inverse of the Hodge star from M on the orientation n-1-form of S, which gives us a covector \nu(p)\in\mathcal{T}^*_pM defined at each point p of S. Roughly speaking, the orientation form of S defines an n-1-dimensional subspace of \mathcal{T}^*_pM: the covectors “tangent to S”. The covector \nu(p) is “perpendicular to S”, and since S has only one fewer dimension than M there are only two choices for the direction of such a covector. The choice of one or the other defines a distinction between the two sides of S.

If we flip around our covectors into vectors — again using the metric — we get something like a vector field defined on S. I say “something like” because it’s only defined on S, which is not an open subspace of M, and strictly speaking a vector field must be defined on an open subspace. It’s not hard to see that we could “thicken up” S into an open subspace and extend our vector field smoothly there, so I’m not really going to make a big deal of it, but I want to be careful with my language; it’s also why I didn’t say we get a 1-form from the Hodge star. Anyway, we will call this vector-valued function dS, for reasons that will become apparent shortly.

Now what if we have another vector-valued function F defined on S — for example, a vector field defined on an open subspace containing S? We can define the “surface integral of F through S”, which measures how much the vector field flows through the surface in the direction our orientation picks out as “positive”. And we measure this amount at any given point p by taking the covector at p provided by the Hodge star and evaluating it at the vector F(p). This gives us a value that we can integrate over the surface. This evaluation can be flipped around into our vector field notation and language, allowing us to write down the integral as

\displaystyle\int\limits_SF\cdot dS

because the “dot product” \left[F\cdot dS\right](p) is exactly what it means to evaluate the covector dual to dS(p) at the vector F(p). This should look familiar from multivariable calculus, but I’ve been saying that a lot lately, haven’t I?
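To make the recipe concrete, here is a small SymPy computation (the surface and the field are invented purely for illustration) that builds dS from a parameterization, as the cross product of the two partial derivatives, and then integrates F\cdot dS:

```python
# SymPy sketch: flux of F = (0, 0, z) through the cap z = 1 - x^2 - y^2, z >= 0,
# oriented by the upward-pointing normal.
import sympy as sp

u, v = sp.symbols('u v')                               # u: radius, v: angle

S = sp.Matrix([u*sp.cos(v), u*sp.sin(v), 1 - u**2])    # parameterization of the cap
normal = S.diff(u).cross(S.diff(v))                    # (2u^2 cos v, 2u^2 sin v, u)

F = sp.Matrix([0, 0, S[2]])                            # the field (0, 0, z) on the surface
flux = sp.integrate(F.dot(normal), (u, 0, 1), (v, 0, 2*sp.pi))
print(flux)                                            # pi/2
```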

We can also go the other direction and make things look more abstract. We could write our vector field as a 1-form \alpha, which lets us write our surface integral as

\displaystyle\int\limits_S\langle\alpha,\nu\rangle\omega_M=\int\limits_S\alpha\wedge*\nu

where \omega_M is the volume form on M. But then we know that *\nu=\omega_S, the volume form on S. That is, we can define a hypersurface integral of any 1-form \alpha across any hypersurface S equipped with a volume form \omega_S by

\displaystyle\int\limits_S\alpha\wedge\omega_S

whether or not we have a metric on M.

October 27, 2011 | Differential Geometry, Geometry

The Fundamental Theorem of Line Integrals

At last we can state one of our special cases of Stokes’ theorem. We don’t need to prove it, since we already did in far more generality, but it should help put our feet on some familiar ground.

So, we take an oriented curve c in a manifold M. The image of c is an oriented submanifold of M, where the “orientation” means picking one of the two possible tangent directions at each point along the image of the curve. As a high-level view, we can characterize the orientation as the direction that we traverse the curve, from one “starting” endpoint to the other “ending” endpoint.

Given any 1-form \alpha on the image of c — in particular, given an \alpha defined on M — we can define the line integral of \alpha over c. We already have a way of evaluating line integrals: pull the 1-form back to the parameter interval of c and integrate there as usual. But now we want to use Stokes’ theorem to come up with another way. Let’s write down what it will look like:

\displaystyle\int\limits_cd\omega=\int\limits_{\partial c}\omega

where \omega is some 0-form. That is: a function. This tells us that we can only make this work for “exact” 1-forms \alpha, which can be written in the form \alpha=df for some function f.

But if this is the case, then life is beautiful. The (oriented) boundary of c is easy: it consists of two 0-faces corresponding to the two endpoints. The starting point gets a negative orientation while the ending point gets a positive orientation. And so we write

\displaystyle\int\limits_cdf=\int\limits_{\partial c}f=\sum\limits_{j=0,1}(-1)^{1+j}f(c_{1,j})=f(c(1))-f(c(0))

That is, we just evaluate f at the two endpoints and subtract the value at the start from the value at the end!

What does this look like when we have a metric and we can rewrite the 1-form \alpha as a vector field F? In this case, \alpha is exact if and only if F is “conservative”, which just means that F=\nabla f for some function f. Then we can write

\displaystyle\int\limits_c\nabla f\cdot ds=f(c(1))-f(c(0))

which should look very familiar from multivariable calculus.
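If you want to watch this happen symbolically, here is a short SymPy check (the potential and the curve are arbitrary choices made for illustration) that pulling df back to [0,1] and integrating really does give f(c(1))-f(c(0)):

```python
# SymPy sketch: the fundamental theorem of line integrals for an exact 1-form df.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

f = x*y + sp.sin(z)                                    # any potential function
c = {x: t**2, y: sp.exp(t) - 1, z: sp.pi*t/2}          # a curve, parameterized on [0, 1]

# The pullback of df along c is d(f o c)/dt; integrate it over the interval.
lhs = sp.integrate(sp.diff(f.subs(c), t), (t, 0, 1))
rhs = f.subs(c).subs(t, 1) - f.subs(c).subs(t, 0)      # f(c(1)) - f(c(0))

print(sp.simplify(lhs - rhs))                          # 0
```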

We call this the fundamental theorem of line integrals by analogy with the fundamental theorem of calculus. Indeed, if we set this up in the manifold \mathbb{R}, we get exactly the second part of the fundamental theorem of calculus back again.

October 24, 2011 | Differential Geometry, Geometry

Line Integrals

We now define some particular kinds of integrals as special cases of our theory of integrals over manifolds. And the first such special case is that of a line integral.

Consider an oriented curve c in the manifold M. We know that this is a singular 1-cube, and so we can pair it off with a 1-form \alpha. Specifically, we pull back \alpha to c^*\alpha on the interval [0,1] and integrate.

More explicitly, the pullback c^*\alpha is evaluated as

\displaystyle\left[c^*\alpha\left(\frac{d}{dt}\right)\right](t_0)=\left[\alpha_{c(t_0)}\right]\left(c_{*t_0}\frac{d}{dt}\bigg\vert_{t_0}\right)

That is, for a t_0\in[0,1], we take the value \alpha_{c(t_0)}\in\mathcal{T}^*_{c(t_0)}M of the 1-form \alpha at the point c(t_0)\in M and the tangent vector c'(t_0)\in\mathcal{T}_{c(t_0)}M and pair them off. This gives us a real-valued function which we can integrate over the interval.
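Concretely, with a 1-form and a curve invented just for the sake of an example, the pullback recipe looks like this in SymPy:

```python
# SymPy sketch: the line integral of alpha = -y dx + x dy along the upper half of
# the unit circle, computed by pairing alpha at c(t) with the tangent vector c'(t).
import sympy as sp

t, x, y = sp.symbols('t x y')

P, Q = -y, x                                   # alpha = P dx + Q dy
cx, cy = sp.cos(sp.pi*t), sp.sin(sp.pi*t)      # c(t), for t in [0, 1]

pairing = (P.subs({x: cx, y: cy})*sp.diff(cx, t)
           + Q.subs({x: cx, y: cy})*sp.diff(cy, t))
print(sp.integrate(pairing, (t, 0, 1)))        # pi
```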

So, why do we care about this particularly? In the presence of a metric, we have an equivalence between 1-forms \alpha and vector fields F. And specifically we know that the pairing \alpha_{c(t)}\left(c'(t)\right) is equal to the inner product \langle F(c(t)),c'(t)\rangle — this is how the equivalence is defined, after all. And thus the line integral looks like

\displaystyle\int\limits_c\alpha=\int\limits_{[0,1]}\langle F(c(t)),c'(t)\rangle\,dt

Often the inner product is written with a dot — usually called the “dot product” of vectors — in which case this takes the form

\displaystyle\int\limits_{[0,1]}F(c(t))\cdot c'(t)\,dt

We also often write ds=c'(t)\,dt as a “vector differential-valued function”, in which case we can write

\displaystyle\int\limits_{[0,1]}F\cdot ds

Of course, we often parameterize a curve by a more general interval I than [0,1], in which case we write

\displaystyle\int\limits_IF\cdot ds

This expression may look familiar from multivariable calculus, where we first defined line integrals. We can now see how this definition is a special case of a much more general construction.

October 21, 2011 | Differential Geometry, Geometry

The Codifferential

From our calculation of the square of the Hodge star we can tell that the star operation is invertible. Indeed, since *^2=(-1)^{k(n-k)}\lvert g_{ij}\rvert — applying the star twice to a k-form in an n-manifold with metric g is the same as multiplying it by (-1)^{k(n-k)} and the determinant of the matrix of g — we conclude that *^{-1}=(-1)^{k(n-k)}\lvert g^{ij}\rvert*.

With this inverse in hand, we will define the “codifferential”

\displaystyle\delta=(-1)^k*^{-1}d*

The first star sends a k-form to an n-k-form; the exterior derivative sends it to an n-k+1-form; and the inverse star sends it to a k-1-form. Thus the codifferential goes in the opposite direction from the differential — the exterior derivative.

Unfortunately, it’s not quite as algebraically nice. In particular, it’s not a derivation of the algebra. Indeed, we can consider fdx and gdy in \mathbb{R}^3 and calculate

\displaystyle\begin{aligned}\delta(fdx)&=-*d*(fdx)=-*d(fdy\wedge dz)=-*\frac{\partial f}{\partial x}dx\wedge dy\wedge dz=-\frac{\partial f}{\partial x}\\\delta(gdy)&=-*d*(gdy)=-*d(gdz\wedge dx)=-*\frac{\partial g}{\partial y}dy\wedge dz\wedge dx=-\frac{\partial g}{\partial y}\end{aligned}

while

\displaystyle\begin{aligned}\delta(fgdx\wedge dy)&=*d*(fgdx\wedge dy)\\&=*d(fgdz)\\&=*\left(\left(\frac{\partial f}{\partial x}g+f\frac{\partial g}{\partial x}\right)dx\wedge dz+\left(\frac{\partial f}{\partial y}g+f\frac{\partial g}{\partial y}\right)dy\wedge dz\right)\\&=\left(\frac{\partial f}{\partial y}g+f\frac{\partial g}{\partial y}\right)dx-\left(\frac{\partial f}{\partial x}g+f\frac{\partial g}{\partial x}\right)dy\end{aligned}

but there is no version of the Leibniz rule that can account for the g\frac{\partial f}{\partial y} and f\frac{\partial g}{\partial x} cross terms in this latter expansion. Oh well.
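For anyone who wants to check the arithmetic, here is a SymPy sketch; the bookkeeping is my own, not machinery from the post. It stores forms on \mathbb{R}^3 as component triples, so that the Hodge star amounts to re-reading a triple in the complementary basis, and it reproduces both computations above.

```python
# SymPy sketch: the codifferential on R^3, with 1-forms written in the
# (dx, dy, dz) basis and 2-forms in the (dy^dz, dz^dx, dx^dy) basis.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y)
g = sp.Function('g')(x, y)

def d_1form(P, Q, R):
    """d(P dx + Q dy + R dz), returned in the (dy^dz, dz^dx, dx^dy) basis."""
    return (sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y))

def d_2form(A, B, C):
    """d(A dy^dz + B dz^dx + C dx^dy), a multiple of dx^dy^dz."""
    return sp.diff(A, x) + sp.diff(B, y) + sp.diff(C, z)

# delta on 1-forms is -*d*: the star of f dx has components (f, 0, 0) in the
# 2-form basis, d gives a top form, and the final star leaves a function.
print(-d_2form(f, 0, 0))                 # -f_x, as computed above

# delta on 2-forms is +*d*: the star of f*g dx^dy has components (0, 0, f*g) in
# the 1-form basis, and the final star re-reads d of that as a 1-form.
print([sp.expand(c) for c in d_1form(0, 0, f*g)])
# the cross terms g*f_y and f*g_x appear, so delta cannot satisfy a Leibniz rule
```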

On the other hand, the codifferential \delta is (sort of) the adjoint to the differential. Adjointness would mean that if \eta is a k-form and \zeta is a k+1-form, then

\displaystyle\langle d\eta,\zeta\rangle=\langle\eta,\delta\zeta\rangle

where these inner products are those induced on differential forms from the metric. This doesn’t quite hold, but we can show that it does hold “up to homology”. We can calculate their difference times the canonical volume form

\displaystyle\begin{aligned}\left(\langle d\eta,\zeta\rangle-\langle\eta,\delta\zeta\rangle\right)\omega&=d\eta\wedge*\zeta-\eta\wedge*\delta\zeta\\&=d\eta\wedge*\zeta-(-1)^{k+1}\eta\wedge**^{-1}d*\zeta\\&=d\eta\wedge*\zeta+(-1)^k\eta\wedge d*\zeta\\&=d\left(\eta\wedge*\zeta\right)\end{aligned}

which is an exact n-form; the sign (-1)^{k+1} in the second line comes from applying the definition of \delta to the k+1-form \zeta, and the last line is just the Leibniz rule for the exterior derivative. It’s not quite as nice as equality, but if we pass to de Rham cohomology it’s just as good.

October 21, 2011 | Differential Geometry, Geometry

The Hodge Star, Squared

It’s interesting to look at what happens when we apply the Hodge star twice. We just used the fact that in our special case of \mathbb{R}^3 we always get back exactly what we started with. That is, in this case, *^2=1 — the identity operation.

It’s easiest to work this out in coordinates. If \eta=dx^{i_1}\wedge\dots\wedge dx^{i_k} is some k-fold wedge then *\eta is \pm\sqrt{\lvert g_{ij}\rvert} times the wedge of all the indices that don’t show up in \eta. But then **\eta is \pm\sqrt{\lvert g_{ij}\rvert} times the wedge of all the indices that don’t show up in *\eta, which is exactly all the indices that were in \eta in the first place! That is, **\eta=\pm\lvert g_{ij}\rvert\eta, where the sign is still a bit of a mystery.

But some examples will quickly shed some light on this. We can even extend to the pseudo-Riemannian case and pick a coordinate system so that \langle dx^i,dx^j\rangle=\epsilon_i\delta^{ij}, where \epsilon_i=\pm1. That is, any two dx^i are orthogonal, and each dx^i either has “squared-length” 1 or -1. The determinant \lvert g_{ij}\rvert is simply the product of all these signs. In the Riemannian case this is simply 1.

Now, let’s start with an easy example: let \eta be the wedge of the first k indices:

\displaystyle\eta=dx^1\wedge\dots\wedge dx^k

Then *\eta is (basically) the wedge of the other indices:

\displaystyle*\eta=\sqrt{\lvert g_{ij}\rvert}dx^{k+1}\wedge\dots\wedge dx^n

The sign is positive since \eta\wedge*\eta already has the right order. But now we flip this around the other way:

\displaystyle**\eta=\pm\lvert g_{ij}\rvert dx^1\wedge\dots\wedge dx^k

but this should obey the same rule as ever:

\displaystyle\begin{aligned}*\eta\wedge**\eta&=\pm\lvert g_{ij}\rvert\sqrt{\lvert g_{ij}\rvert}dx^{k+1}\wedge\dots\wedge dx^n\wedge dx^1\wedge\dots\wedge dx^k\\&=\pm\lvert g_{ij}\rvert(-1)^{k(n-k)}\sqrt{\lvert g_{ij}\rvert}dx^1\wedge\dots\wedge dx^n\\&=\pm\lvert g_{ij}\rvert(-1)^{k(n-k)}\omega\end{aligned}

where we pick up a factor of -1 each time we pull one of the last k 1-forms leftwards past the first n-k. We conclude that the actual sign must be \lvert g_{ij}\rvert(-1)^{k(n-k)} so that this result is exactly \omega. Similar juggling for other selections of \eta will give the same result.

In our special Riemannian case with n=3, no matter what k is we find the sign is always positive, as we expected. The same holds true in any odd dimension for Riemannian manifolds. In even dimensions, when k is odd then so is n-k, and so *^2=-1 — the negative of the identity transformation. And the whole situation gets more complicated in the pseudo-Riemannian version depending on the number of -1s in the diagonalized metric tensor.
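If it helps to see the pattern, the Riemannian sign factor is easy to tabulate with a couple of lines of plain Python; this only covers the (-1)^{k(n-k)} part of the story, not the metric determinant.

```python
# Signs of the double Hodge star (-1)^{k(n-k)} on k-forms, for small n:
# odd n gives +1 for every k; even n gives -1 exactly when k is odd.
for n in range(1, 7):
    print(n, [(-1)**(k*(n - k)) for k in range(n + 1)])
```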

October 18, 2011 | Differential Geometry, Geometry

The Divergence Operator

One fact that I didn’t mention when discussing the curl operator is that the curl of a gradient is zero: \nabla\times(\nabla f)=0. In our terms, this is a simple consequence of the nilpotence of the exterior derivative. Indeed, when we work in terms of 1-forms instead of vector fields, the composition of the two operators is *d(df), and d^2 is always zero.

So why do we bring this up now? Because one of the important things to remember from multivariable calculus is that the divergence of a curl is also automatically zero, and this will help us figure out what a divergence is in terms of differential forms. See, if we take our vector field and consider it as a 1-form, the exterior derivative is already known to be (essentially) the curl. So what else can we do?

We use the Hodge star again to flip the 1-form back to a 2-form, so we can apply the exterior derivative to that. We can check that this will be automatically zero if we start with an image of the curl operator; our earlier calculations show that *^2 is always the identity mapping — at least on \mathbb{R}^3 with this metric — so if we first apply the curl *d and then the steps we’ve just suggested, the result is like applying the operator d**d=dd=0.

There’s just one catch: as we’ve written it this gives us a 3-form, not a function like the divergence operator should! No matter; we can break out the Hodge star once more to flip it back to a 0-form — a function — just like we want. That is, the divergence operator on 1-forms is *d*.

Let’s calculate this in our canonical basis. If we start with a 1-form \alpha=Pdx+Qdy+Rdz then we first hit it with the Hodge star:

\displaystyle*\alpha=Pdy\wedge dz+Qdz\wedge dx+Rdx\wedge dy

Next comes the exterior derivative:

\displaystyle d*\alpha=\left(\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}+\frac{\partial R}{\partial z}\right)dx\wedge dy\wedge dz

and then the Hodge star again:

\displaystyle*d*\alpha=\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}+\frac{\partial R}{\partial z}

which is exactly the definition (in coordinates) of the usual divergence \nabla\cdot F of a vector field F on \mathbb{R}^3.
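As a cross-check, we can lean on sympy.vector’s built-in operators; this is an independent verification in vector-field language rather than the forms construction above. It confirms the coordinate formula and the “divergence of a curl is zero” fact:

```python
# SymPy sketch: the coordinate formula matches the built-in divergence, and the
# divergence of a curl vanishes, mirroring d(d(alpha)) = 0.
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

C = CoordSys3D('C')

# A generic field with undefined component functions
P = sp.Function('P')(C.x, C.y, C.z)
Q = sp.Function('Q')(C.x, C.y, C.z)
R = sp.Function('R')(C.x, C.y, C.z)
F = P*C.i + Q*C.j + R*C.k
print(divergence(F) - (sp.diff(P, C.x) + sp.diff(Q, C.y) + sp.diff(R, C.z)))   # 0

# The divergence of a curl, checked on an explicit field
G = C.x**2*C.y*C.i + sp.sin(C.x*C.z)*C.j + sp.exp(C.y)*C.k
print(divergence(curl(G)))                                                      # 0
```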

October 13, 2011 | Differential Geometry, Geometry

The Curl Operator

Let’s continue our example, considering the special case of \mathbb{R}^3 as an oriented, Riemannian manifold, with the coordinate 1-forms \{dx, dy, dz\} forming an oriented, orthonormal basis at each point.

We’ve already seen the gradient vector \nabla f, which has the same components as the differential df:

\displaystyle\begin{aligned}df&=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\frac{\partial f}{\partial z}dz\\\nabla f&=\begin{pmatrix}\displaystyle\frac{\partial f}{\partial x}\\\displaystyle\frac{\partial f}{\partial y}\\\displaystyle\frac{\partial f}{\partial z}\end{pmatrix}\end{aligned}

This is because we use the metric to convert from vector fields to 1-forms, and with respect to our usual bases the matrix of the metric is the Kronecker delta.

We will proceed to define analogues of the other classical differential operators you may remember from multivariable calculus. We will actually be defining operators on differential forms, but we will use this same trick to identify vector fields and 1-forms. We will thus not usually distinguish our operators from the classical ones, but in practice we will use the classical notations when acting on vector fields and our new notations when acting on 1-forms.

Anyway, the next operator we come to is the curl of a vector field: F\mapsto\nabla\times F. Of course we’ll really start with a 1-form instead of a vector field, and we already know a differential operator to use on forms. Given a 1-form \alpha we can send it to d\alpha.

The only hangup is that this is a 2-form, while we want the curl of a vector field to be another vector field. But we do have a Hodge star, which we can use to flip a 2-form back into a 1-form, which is “really” a vector field again. That is, the curl operator corresponds to the differential operator *d that takes 1-forms back to 1-forms.

Let’s calculate this in our canonical basis, to see that it really does look like the familiar curl. We start with a 1-form \alpha=Pdx+Qdy+Rdz. The first step is to hit it with the exterior derivative, which gives

\displaystyle\begin{aligned}d\alpha=&dP\wedge dx+dQ\wedge dy + dR\wedge dz\\=&\left(\frac{\partial P}{\partial x}dx+\frac{\partial P}{\partial y}dy+\frac{\partial P}{\partial z}dz\right)\wedge dx\\&+\left(\frac{\partial Q}{\partial x}dx+\frac{\partial Q}{\partial y}dy+\frac{\partial Q}{\partial z}dz\right)\wedge dy\\&+\left(\frac{\partial R}{\partial x}dx+\frac{\partial R}{\partial y}dy+\frac{\partial R}{\partial z}dz\right)\wedge dz\\=&\frac{\partial P}{\partial y}dy\wedge dx+\frac{\partial P}{\partial z}dz\wedge dx\\&+\frac{\partial Q}{\partial x}dx\wedge dy+\frac{\partial Q}{\partial z}dz\wedge dy\\&+\frac{\partial R}{\partial x}dx\wedge dz+\frac{\partial R}{\partial y}dy\wedge dz\\=&\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\right)dy\wedge dz\\&+\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)dz\wedge dx\\&+\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dx\wedge dy\end{aligned}

Next we hit this with the Hodge star. We’ve already calculated how the Hodge star affects the canonical basis of 2-forms, so this is just a simple lookup to find:

\displaystyle*d\alpha=\left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\right)dx+\left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)dy+\left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)dz

which are indeed the usual components of the curl. That is, if \alpha is the 1-form corresponding to the vector field F, then *d\alpha is the 1-form corresponding to the vector field \nabla\times F.
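And as one more cross-check, sympy.vector’s built-in curl agrees with the components we just computed; again, this is a verification in vector-field language, not the forms machinery itself.

```python
# SymPy sketch: the components of *d(alpha) match the built-in curl.
import sympy as sp
from sympy.vector import CoordSys3D, curl

C = CoordSys3D('C')
P = sp.Function('P')(C.x, C.y, C.z)
Q = sp.Function('Q')(C.x, C.y, C.z)
R = sp.Function('R')(C.x, C.y, C.z)

star_d_alpha = ((sp.diff(R, C.y) - sp.diff(Q, C.z))*C.i
                + (sp.diff(P, C.z) - sp.diff(R, C.x))*C.j
                + (sp.diff(Q, C.x) - sp.diff(P, C.y))*C.k)

print(curl(P*C.i + Q*C.j + R*C.k) - star_d_alpha)   # 0, the zero vector
```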

October 12, 2011 | Differential Geometry, Geometry