If we orient a manifold $M$ by picking an everywhere-nonzero top form $\omega$, then it induces an orientation on each coordinate patch $(U,x)$. Since each patch also comes with its own orientation form $dx^1\wedge\cdots\wedge dx^n$, we can ask whether the two are compatible or not.
And it’s easy to answer; just calculate

$$\omega\left(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\right)$$

and if the answer is positive then the two are compatible, while if it’s negative then they’re incompatible. But no matter; just swap two of the coordinates and we have a new coordinate map on $U$ whose own orientation is compatible with $\omega$.
This shows that we can find an atlas on $M$ all of whose patches have compatible orientations. Given any atlas at all for $M$, either use a coordinate patch as-is or swap two of its coordinates, depending on whether its native orientation agrees with $\omega$ or not. In fact, if we’re already using a differentiable structure — containing all possible coordinate patches which are (smoothly) compatible with each other — then we just have to throw out all the patches which are (orientably) incompatible with $\omega$.
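As a concrete, hypothetical illustration of this sign check, we can compute a transition Jacobian with sympy: on the upper half-plane, pass from Cartesian coordinates $(x,y)$ to polar coordinates, and note that swapping the two target coordinates flips the sign of the determinant. (The choice of charts here is just a sample, not anything from the text above.)

```python
import sympy as sp

# Work on the upper half-plane, where both sample charts are defined
x, y = sp.symbols('x y', positive=True)

# Transition from Cartesian (x, y) to polar (r, theta)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Jacobian matrix of the transition function
J = sp.Matrix([[sp.diff(r, x), sp.diff(r, y)],
               [sp.diff(theta, x), sp.diff(theta, y)]])
det = sp.simplify(J.det())  # equals 1/sqrt(x**2 + y**2): everywhere positive

# Swapping the two coordinates in the target chart swaps two rows of the
# Jacobian, so the determinant changes sign everywhere
det_swapped = sp.simplify(sp.Matrix([J.row(1), J.row(0)]).det())
```

Since `det` is everywhere positive the two charts’ native orientations agree; the swapped chart’s determinant is everywhere negative, which is exactly the situation the coordinate swap is meant to fix.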
The converse, as it happens, is also true: if we can find an atlas for $M$ such that for any two patches $(U,x)$ and $(V,y)$ the Jacobian of the transition function is everywhere positive on the intersection $U\cap V$, then we can find an everywhere-nonzero top form to orient the whole manifold.
Basically, what we want is to patch together enough of the patches’ native orientations to cover the whole manifold. And as usual for this sort of thing, we pick a partition of unity subordinate to our atlas. That is, we have a countable, locally finite collection of functions $\{\phi_k\}$ so that $\phi_k$ is supported within the patch $(U_k,x_k)$. Then we define the $n$-form $\omega_k$ on $U_k$ by

$$\omega_k = \phi_k\,dx_k^1\wedge\cdots\wedge dx_k^n$$

and by $\omega_k = 0$ outside of $U_k$. Adding up all the $\omega_k$ gives us our top form $\omega$; the sum makes sense because it’s locally finite, and at each point we don’t have to worry about things canceling off because each orientation form is a positive multiple of each other one wherever they’re both nonzero.
Any coordinate patch $(U,x)$ in a manifold is orientable. Indeed, the image $x(U)\subseteq\mathbb{R}^n$ is orientable — we can just use $du^1\wedge\cdots\wedge du^n$ to orient it — and given a choice of top form $\omega$ on $x(U)$ we can pull it back along $x$ to give an orientation $x^*\omega$ of $U$ itself.
But what happens when we bring two patches $(U,x)$ and $(V,y)$ together? They may each have orientations given by top forms $\omega_U$ and $\omega_V$. We must ask whether they are “compatible” on their overlap. And compatibility means each one picks out the same end of the one-dimensional space $\Lambda^n_p(M)$ at each point $p$. But this just means that — when restricted to the intersection $U\cap V$ — we have $\omega_U = f\,\omega_V$ for some everywhere-positive smooth function $f$.
Another way to look at the same thing is to let $\omega_U$ be the pullback $x^*\left(du^1\wedge\cdots\wedge du^n\right)$, let $\omega_V$ be the pullback $y^*\left(du^1\wedge\cdots\wedge du^n\right)$, and write $\omega_U = f\,\omega_V$ on the overlap. Then we must ask what this function $f$ is. It must exist even if the orientations are incompatible, since $\omega_V$ is never zero, but what is it?
A little thought gives us our answer: $f$ is the Jacobian determinant of the coordinate transformation from one patch to the other. Indeed, we use the Jacobian to change bases on the cotangent bundle, and transforming between these top forms amounts to taking the determinant of the transformation between the $1$-forms $dx^i$ and the $1$-forms $dy^j$.
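To see the determinant at work in a familiar case (a sample of my own choosing, not one from the text), take the polar-coordinate transformation $(r,t)\mapsto(x,y) = (r\cos t, r\sin t)$ and expand $dx\wedge dy$ in terms of $dr\wedge dt$; the coefficient that appears is exactly the Jacobian determinant. A sympy sketch:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
x = r * sp.cos(t)
y = r * sp.sin(t)

# Jacobian determinant of the transformation (r, t) -> (x, y)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, t)],
               [sp.diff(y, r), sp.diff(y, t)]])
detJ = sp.simplify(J.det())  # r

# Expand dx = x_r dr + x_t dt and dy = y_r dr + y_t dt, and collect the
# coefficient of dr ^ dt in dx ^ dy (using antisymmetry dt ^ dr = -dr ^ dt)
coeff = sp.simplify(sp.diff(x, r) * sp.diff(y, t)
                    - sp.diff(x, t) * sp.diff(y, r))

# coeff equals detJ, i.e. dx ^ dy = r dr ^ dt
```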
So what does this mean? It tells us that if the Jacobian of the coordinate transformation relating two coordinate patches is everywhere positive, then the coordinates have compatible orientations. On the other hand, if the coordinate transformation’s Jacobian is everywhere negative, then the coordinates can also be given compatible orientations. Why? Because even though the native orientations differ, we can just use $\omega_U$ and $-\omega_V$, which do give the same orientation everywhere.
The problem comes up when the Jacobian is sometimes positive and sometimes negative. Now, it can never be zero, but if the intersection $U\cap V$ has more than one connected component the Jacobian may be positive on one component and negative on another. Then if you pick orientations which coincide on one part of the overlap, they must necessarily disagree on the other part, and no coherent orientation can be chosen for the whole manifold.
I won’t go into this example in full detail yet, but this is essentially what happens with the famous Möbius strip: glue two strips of paper together at one end and we can coherently orient their union. But if we give a half-twist to the other ends before gluing them, we cannot coherently orient the result. The Jacobian is positive on the one overlap and negative on the other.
We recall that if $V$ is an $n$-dimensional vector space then the space $\Lambda^n(V)$ of “top forms” on $V$ is one-dimensional. And since we’re working over $\mathbb{R}$, this means that the zero form divides this space into two halves. Choosing one — by choosing a nonzero $n$-form on $V$ or, equivalently, by choosing a basis for $V$ — makes $V$ into an “oriented” vector space.

We can do the same thing, of course, for the tangent spaces of an $n$-dimensional manifold $M$: for each point $p$ the stalk $\Lambda^n_p(M)$ is isomorphic to $\mathbb{R}$, and so we can pick a nonzero value at each point. But of course this isn’t really what we want; we want to be able to choose this orientation “smoothly”. That is, we want a top form $\omega$ such that $\omega_p \neq 0$ for all $p\in M$.
If we can find such a form, we say that it “orients” $M$, and $M$ — along with the choice of orientation — is “orientable”. This is not always possible; there are “non-orientable” manifolds for which there are no non-vanishing top forms.
It turns out that $M$ is orientable if and only if the bundle $\Lambda^n(M)$ is isomorphic to the product $M\times\mathbb{R}$. That is, if we can find a map $\Lambda^n(M)\to M\times\mathbb{R}$ that plays nicely with the projections down to $M$, and so that the restriction to each stalk $\Lambda^n_p(M)$ is a linear isomorphism of one-dimensional real vector spaces.
In the one direction, if we have an orientation given by a top form $\omega$, then at each point $p$ we have a nonzero $\omega_p\in\Lambda^n_p(M)$. Any other point in $\Lambda^n_p(M)$ is some multiple $c\,\omega_p$, so we just define the $\mathbb{R}$ component of our transformation to be this multiple $c$, while the $M$ component is the projection of the point from $\Lambda^n_p(M)$ down to $p$. The smoothness of $\omega$ guarantees that this map will be smooth.
On the flip side, if we have such a map $\Phi:\Lambda^n(M)\to M\times\mathbb{R}$, then it’s invertible, giving a bundle map $\Phi^{-1}:M\times\mathbb{R}\to\Lambda^n(M)$. We can take the section of the product bundle sending each $p$ to $(p,1)$ and feed it through this inverse map: $\omega_p = \Phi^{-1}(p,1)$.
First we write down the definition:

$$\int_{c_r}\omega = \int_{[0,1]}c_r^*\omega$$

This actually isn’t that hard; there’s only the one basis vector $\frac{\partial}{\partial t}$ to consider, and we find

$$c_{r*}\left(\frac{\partial}{\partial t}\right) = -2\pi r\sin(2\pi t)\frac{\partial}{\partial x} + 2\pi r\cos(2\pi t)\frac{\partial}{\partial y}$$

We also have to calculate the composition

$$\omega_{c_r(t)} = \frac{x\,dy - y\,dx}{x^2+y^2}\bigg|_{\left(r\cos(2\pi t),\,r\sin(2\pi t)\right)} = \frac{\cos(2\pi t)\,dy - \sin(2\pi t)\,dx}{r}$$

This lets us calculate

$$\int_{c_r}\omega = \int_0^1\omega_{c_r(t)}\left(c_{r*}\frac{\partial}{\partial t}\right)\,dt = \int_0^1 2\pi\left[\cos^2(2\pi t) + \sin^2(2\pi t)\right]\,dt = 2\pi$$
So, what conclusions can we draw from this? Well, Stokes’ theorem now tells us that the $1$-form $\omega$ cannot be the differential of any $0$-form — any function — on the punctured plane. Why? Well, if we had $\omega = dF$, then we would find

$$\int_{c_r}\omega = \int_{c_r}dF = \int_{\partial c_r}F = 0$$

which we now know not to be the case. Similarly, $c_r$ cannot be the boundary of any $2$-chain, for if $c_r = \partial c$ then

$$\int_{c_r}\omega = \int_{\partial c}\omega = \int_c d\omega = 0$$
It turns out that there’s a deep connection between the two halves of this example. Further, in a sense every failure of a closed $k$-form to be the differential of a $(k-1)$-form and every failure of a closed $k$-chain to be the boundary of a $(k+1)$-chain comes in a pair like this one.
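The integral $\int_{c_r}\omega$ of the form $\omega = \frac{x\,dy - y\,dx}{x^2+y^2}$ around the circle can also be checked with sympy: pull $\omega$ back along the parameterization and integrate, and the answer is $2\pi$ no matter what the radius $r$ is.

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)

# the circle c_r(t) = (r cos(2*pi*t), r sin(2*pi*t)) in the punctured plane
x = r * sp.cos(2 * sp.pi * t)
y = r * sp.sin(2 * sp.pi * t)

# pull back omega = (x dy - y dx)/(x^2 + y^2) along c_r:
# substitute the parameterization and replace dx, dy by x'(t) dt, y'(t) dt
pullback = (x * sp.diff(y, t) - y * sp.diff(x, t)) / (x**2 + y**2)

integral = sp.integrate(sp.simplify(pullback), (t, 0, 1))
print(integral)  # 2*pi, independent of r
```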
We follow yesterday’s example of an interesting differential form with a (much simpler) example of some $1$-chains. Specifically, we’ll talk about circles!

More specifically, we consider the circle of radius $r$ around the origin in the “punctured” plane. I used this term yesterday, but I should define it now: a “punctured” space is a topological space with a point removed. There are also “twice-punctured” or “$n$-times punctured” spaces, and as long as the space is a nice connected manifold it doesn’t really matter much which point is removed. But since we’re talking about the plane it comes with an identified point — the origin — and so it makes sense to “puncture” the plane there.
Now the circle of radius $r$ will be a singular $1$-cube $c_r:[0,1]\to\mathbb{R}^2\setminus\{0\}$. That is, it’s a curve in the plane that never touches the origin. Specifically, we’ll parameterize it by:

$$c_r(t) = \left(r\cos(2\pi t), r\sin(2\pi t)\right)$$

so as $t$ ranges from $0$ to $1$ we traverse the whole circle. There are two $0$-dimensional “faces”, which we get by setting $t=0$ and $t=1$:

$$c_r(0) = (r,0) = c_r(1)$$

When we calculate the boundary of $c_r$, these get different signs:

$$\partial c_r = c_r(1) - c_r(0) = (r,0) - (r,0)$$
We must be very careful here; these are not vectors and the addition is not vector addition. These are merely points in the plane — $0$-cubes — and the addition is purely formal. Still, the same point shows up once with a positive and once with a negative sign, so it cancels out to give zero. Thus the boundary of $c_r$ is zero.
On the other hand, we will see that this circle cannot be the boundary of any $2$-chain. The obvious thing it might be the boundary of is the disk of radius $r$, but this cannot work because there is a hole at the origin, and the disk cannot avoid crossing that hole. However this does not constitute a proof; maybe there is some weird chain that manages to have the circle as its boundary without crossing the origin. But the proof will have to wait.
After all this talk about integration I think we need an example. This is going to take a while to do in gory detail, but I think it’s very illustrative.
First, let’s start with a function. Let $\theta(x,y)$ be a function defined on the real plane with the negative $x$-axis and origin cut out. We define it as the angle that the vector from the origin to $(x,y)$ makes with the positive $x$-axis, just like in polar coordinates. Anything on the positive $x$-axis gets the value $0$; anything in the upper half-plane gets a positive value of $\theta$, approaching $\pi$ as we get near the negative $x$-axis from above; anything in the lower half-plane gets a negative value of $\theta$, approaching $-\pi$ as we get near the negative $x$-axis from below. The function cannot be defined smoothly across the negative $x$-axis, nor can it be defined consistently at the origin.
What we do know is that $\tan(\theta(x,y)) = \frac{y}{x}$. We will now take the differential of both sides of this equation. On the left, we take the partial derivative with respect to $x$:

$$\frac{\partial}{\partial x}\tan(\theta(x,y)) = \sec^2(\theta(x,y))\frac{\partial\theta}{\partial x}$$

and a similar formula holds true for the partial derivative with respect to $y$. On the right, we calculate:

$$d\left(\frac{y}{x}\right) = -\frac{y}{x^2}\,dx + \frac{1}{x}\,dy$$

Putting these all together we get

$$\sec^2(\theta)\left(\frac{\partial\theta}{\partial x}\,dx + \frac{\partial\theta}{\partial y}\,dy\right) = -\frac{y}{x^2}\,dx + \frac{1}{x}\,dy$$

Since $dx$ and $dy$ are independent we get two equations:

$$\sec^2(\theta)\frac{\partial\theta}{\partial x} = -\frac{y}{x^2}\qquad\sec^2(\theta)\frac{\partial\theta}{\partial y} = \frac{1}{x}$$

which — recalling that $\sec^2(\theta) = 1 + \tan^2(\theta) = \frac{x^2+y^2}{x^2}$ — tell us:

$$\frac{\partial\theta}{\partial x} = \frac{-y}{x^2+y^2}\qquad\frac{\partial\theta}{\partial y} = \frac{x}{x^2+y^2}$$

and so we have the differential:

$$d\theta = \frac{x\,dy - y\,dx}{x^2+y^2}$$
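We can double-check these partial derivatives with sympy, differentiating $\arctan(y/x)$, which agrees with $\theta$ up to an additive constant wherever $x\neq 0$ and so has the same partial derivatives:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# atan(y/x) agrees with theta up to an additive constant where x != 0,
# so it has the same partial derivatives
theta = sp.atan(y / x)

dtheta_dx = sp.simplify(sp.diff(theta, x))
dtheta_dy = sp.simplify(sp.diff(theta, y))

print(dtheta_dx)  # -y/(x**2 + y**2)
print(dtheta_dy)  # x/(x**2 + y**2)
```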
Now we still can’t make sense of these formulas at the origin, but there’s no problem along the negative $x$-axis. In fact, if we’d chosen a different curve to cut along when we’d defined the function, we’d get the same formula for the differential. This suggests that we define the $1$-form

$$\omega = \frac{x\,dy - y\,dx}{x^2+y^2}$$

on all of $\mathbb{R}^2\setminus\{0\}$. Some authors will even still call this “$d\theta$”, even though it cannot be the differential of any single smooth function defined on the whole punctured plane. We will soon see that this is the case.
Even so, the differential $d\omega$ is going to be identically zero. Away from the “branch curve” along which we cut in our setup — the negative real axis here — this should be obvious, because there we have $d\omega = d(d\theta) = 0$, since the square of the exterior derivative is automatically zero.
It would be hard to imagine it being nonzero along the branch curve, but just as an exercise let’s calculate. For the $\frac{-y}{x^2+y^2}\,dx$ term only the partial derivative with respect to $y$ matters — the one with respect to $x$ will give a $dx\wedge dx$ term, which vanishes — and similarly for the $\frac{x}{x^2+y^2}\,dy$ term. So we calculate:

$$d\omega = \frac{\partial}{\partial y}\left[\frac{-y}{x^2+y^2}\right]dy\wedge dx + \frac{\partial}{\partial x}\left[\frac{x}{x^2+y^2}\right]dx\wedge dy = \left(\frac{x^2-y^2}{(x^2+y^2)^2} + \frac{y^2-x^2}{(x^2+y^2)^2}\right)dx\wedge dy = 0$$
Just as asserted.
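The same cancellation, checked mechanically: writing $\omega = P\,dx + Q\,dy$, the coefficient of $dx\wedge dy$ in $d\omega$ is $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$, and it vanishes identically away from the origin.

```python
import sympy as sp

x, y = sp.symbols('x y')

# components of omega = P dx + Q dy
P = -y / (x**2 + y**2)
Q = x / (x**2 + y**2)

# d(omega) = (dQ/dx - dP/dy) dx ^ dy
d_omega_coeff = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(d_omega_coeff)  # 0
```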
Of course, we only have to handle the case of a single singular cube, since we defined the boundary operator to be additive; if $c = \sum_i a_ic_i$ is a general chain — a formal sum of singular cubes — then $\partial c$ is the same formal sum of the boundaries of these cubes. Since integration is also defined to be additive in the chain over which we integrate, everything works out:

$$\begin{aligned}\int_{\partial c}\omega &= \int_{\sum_i a_i\partial c_i}\omega\\&= \sum_i a_i\int_{\partial c_i}\omega\\&= \sum_i a_i\int_{c_i}d\omega\\&= \int_{\sum_i a_ic_i}d\omega = \int_c d\omega\end{aligned}$$

where we have used the special case of singular cubes to pass from the second line to the third.
So let’s tackle this special case:

$$\int_{\partial c}\omega = \int_c d\omega$$

for a single singular $k$-cube $c$.
Basically, it all works out for the same reason parameterization invariance and the change of variables formula do. Passing from the boundary of the singular cube back to the boundary of the standard one transforms the integral one way; passing from the standard cube itself back to the singular cube undoes this transformation. And so Stokes’ theorem is proved.
We turn now to the proof of Stokes’ theorem.
Now any $(k-1)$-form $\omega$ on the standard cube $[0,1]^k$ can be written out as

$$\omega = \sum_{i=1}^k f_i\,du^1\wedge\cdots\wedge\widehat{du^i}\wedge\cdots\wedge du^k$$

where each term omits exactly one of the basic $1$-forms, as marked by the hat. Since everything in sight — the differential operator and both integrals — is $\mathbb{R}$-linear, we can just use one of these terms: $\omega = f\,du^1\wedge\cdots\wedge\widehat{du^i}\wedge\cdots\wedge du^k$. And so we can calculate the pullbacks along the faces $I^k_{j,a}$ of the cube:

$$\left(I^k_{j,a}\right)^*\omega = \left(f\circ I^k_{j,a}\right)\det\left(\frac{\partial\left(I^k_{j,a}\right)^m}{\partial u^n}\right)du^1\wedge\cdots\wedge du^{k-1}$$

where the determinant runs over $m\neq i$ and $1\leq n\leq k-1$.
It takes a bit of juggling with the definition of $I^k_{j,a}$, but we can see that this determinant is $1$ if $i=j$ and $0$ otherwise. Roughly this is because $I^k_{j,a}$ takes the basis vector fields of $[0,1]^{k-1}$ and turns them into all of the basis vector fields of $[0,1]^k$ except the $j$-th one. If $i\neq j$ then some basic $1$-form has to line up against some basis vector field with a different index and everything goes to zero, while if $i=j$ then they can all pair off in exactly one way.
The upshot is that only the two faces of the cube in the $i$-th direction contribute anything at all to the boundary integral, and we find

$$\begin{aligned}\int_{\partial[0,1]^k}\omega &= (-1)^{i+1}\int_{[0,1]^{k-1}}\left(I^k_{i,1}\right)^*\omega + (-1)^i\int_{[0,1]^{k-1}}\left(I^k_{i,0}\right)^*\omega\\&= (-1)^{i+1}\int_{[0,1]^{k-1}}\left[f\circ I^k_{i,1} - f\circ I^k_{i,0}\right]\,d(u^1,\ldots,u^{k-1})\end{aligned}$$
On the other side, we can calculate the differential of $\omega$:

$$d\omega = df\wedge du^1\wedge\cdots\wedge\widehat{du^i}\wedge\cdots\wedge du^k = (-1)^{i-1}\frac{\partial f}{\partial u^i}\,du^1\wedge\cdots\wedge du^k$$

The tricky bit here is that when $j\neq i$ there’s nowhere to put this brand new $du^j$, since it must collide with one of the other basic $1$-forms in the wedge. But when $j=i$ then it can slip right into the “hole” where we’ve left out $du^i$, at a cost of a factor of $(-1)^{i-1}$ to pull the $du^i$ across the first $i-1$ terms in the wedge.
With this result in hand, we calculate the interior integral:

$$\begin{aligned}\int_{[0,1]^k}d\omega &= (-1)^{i-1}\int_{[0,1]^k}\frac{\partial f}{\partial u^i}\,d(u^1,\ldots,u^k)\\&= (-1)^{i-1}\int_{[0,1]^{k-1}}\left[f\circ I^k_{i,1} - f\circ I^k_{i,0}\right]\,d(u^1,\ldots,\widehat{u^i},\ldots,u^k)\end{aligned}$$

where we have used the fundamental theorem of calculus to integrate out the $u^i$ direction first.
which it should be clear is the same as our answer for the boundary integral above. Thus Stokes’ theorem holds for the standard cube.
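For a concrete check in the case $k=2$, here is a sympy verification of both sides for a sample $1$-form $\omega = P\,du + Q\,dv$ of my own choosing on the standard square, with the four boundary faces taken with the signs coming from the boundary operator:

```python
import sympy as sp

u, v, s = sp.symbols('u v s')

# a sample 1-form omega = P du + Q dv on the standard square [0,1]^2
P = u * v**2
Q = sp.sin(u) + v

# interior integral of d(omega) = (dQ/du - dP/dv) du ^ dv
interior = sp.integrate(sp.diff(Q, u) - sp.diff(P, v), (u, 0, 1), (v, 0, 1))

# boundary integral: the four faces of the square, with the signs given by
# the boundary operator (a counterclockwise traversal)
faces = [((s, sp.Integer(0)), 1),   # v = 0, left to right
         ((sp.Integer(1), s), 1),   # u = 1, bottom to top
         ((s, sp.Integer(1)), -1),  # v = 1, traversed backwards
         ((sp.Integer(0), s), -1)]  # u = 0, traversed backwards
boundary = sp.Integer(0)
for (fu, fv), sign in faces:
    integrand = (P.subs({u: fu, v: fv}) * sp.diff(fu, s)
                 + Q.subs({u: fu, v: fv}) * sp.diff(fv, s))
    boundary += sign * sp.integrate(integrand, (s, 0, 1))

print(sp.simplify(interior - boundary))  # 0: the two sides agree
```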
Sorry for the little hiatus. I’ve been busier than usual.
Anyway, now we come to Stokes’ theorem. You may remember something by this name if you took a good multivariable calculus course, but this is not quite the same thing. In fact, the Stokes’ theorem you remember is connected to but one special case of this theorem, which also subsumes Gauss’ theorem, Green’s theorem, and the fundamental theorem of calculus, all in one neat little package. The exact details of the connection, though, require us to move into the realm of differential geometry, so we’ll have to come back to them later.
But anyway, on to the theorem! We know how to integrate a differential $k$-form $\omega$ over a $k$-chain $c$. We also have a differential operator $d$ on differential forms and a boundary operator $\partial$ on chains. We can put these together in two ways: either we start with a $k$-form $\omega$, take its exterior derivative to get the $(k+1)$-form $d\omega$, then integrate that over a $(k+1)$-chain $c$; or we take the boundary of $c$ to get the $k$-chain $\partial c$, then integrate $\omega$ over that. What Stokes’ theorem asserts is that these two give the same answer. As a formula:

$$\int_c d\omega = \int_{\partial c}\omega$$
In a hand-wavy, conceptual way of putting it: integrating a differential form over the boundary of a region is the same as integrating its derivative over the interior. Indeed, if you look back over the results I mentioned above — even just the fundamental theorem of calculus — you can see this concept at work.
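Indeed, the fundamental theorem of calculus is exactly the lowest-dimensional case: a function $F$ is a $0$-form, an interval is a $1$-chain, and its boundary is the formal difference of its endpoints. A quick sympy check with a sample function and interval of my own choosing:

```python
import sympy as sp

x = sp.Symbol('x')
a, b = 0, 2

# a sample 0-form (function) on the interval [a, b]
F = sp.exp(x) * sp.sin(x)

# integrate dF over the interval...
lhs = sp.integrate(sp.diff(F, x), (x, a, b))

# ...and integrate F over the boundary, the 0-chain {b} - {a}
rhs = F.subs(x, b) - F.subs(x, a)

print(sp.simplify(lhs - rhs))  # 0
```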
The definition couldn’t be simpler. We really only need to define the image of a singular $k$-cube $c$ in $M$ and extend by linearity. And since $f:M\to N$ is a function, we can just compose it with $c$ to get a singular $k$-cube $f_*c = f\circ c$ in $N$. What’s the face of this singular $k$-cube? Why it’s

$$(f_*c)_{i,a} = (f\circ c)\circ I^k_{i,a} = f\circ\left(c\circ I^k_{i,a}\right) = f_*(c_{i,a})$$

and so we find that this map commutes with the boundary operation — $\partial(f_*c) = f_*(\partial c)$ — making it a chain map.
We should still check functoriality. The identity map clearly gives us the identity chain map. And if $f:M\to N$ and $g:N\to P$ are two smooth maps, then we can check

$$(g\circ f)_*c = (g\circ f)\circ c = g\circ(f\circ c) = g_*(f_*c)$$
which makes this construction a covariant functor.