Now that we can define the inner product of two vectors using a metric $g$, we want to generalize this to apply to vector fields.
This should be pretty straightforward: if $X$ and $Y$ are vector fields on an open region $U\subseteq M$, they give us vectors $X_p$ and $Y_p$ at each $p\in U$. We can hit these pairs with $g_p$ to get $g_p(X_p,Y_p)$, which is a real number. Since we get such a number at each point $p$, this gives us a function $g(X,Y):U\to\mathbb{R}$.
That this is a bilinear function of $X$ and $Y$ is clear. In fact we’ve already implied this fact when saying that $g$ is a tensor field. But in what sense is it an inner product? It’s symmetric, since each $g_p$ is, and positive definite as well. To be more explicit: $g(X,X)(p)=g_p(X_p,X_p)\geq0$, with equality if and only if $X_p$ is the zero vector in $\mathcal{T}_pM$. Thus the function $g(X,X)$ always takes on nonnegative values, is zero exactly where $X$ is, and is the zero function if and only if $X$ is the zero vector field.
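To see this concretely, here’s a quick numerical sketch in Python. The metric is a made-up example of mine, $g = I + vv^\top$ with $v = (x,y)$, which is positive-definite everywhere; the vector fields are also invented. The point is just to show how the pointwise pairing gives a real-valued function.

```python
import math

# A hypothetical metric on the plane: g = I + v v^T with v = (x, y).
# Its eigenvalues at (x, y) are 1 and 1 + x^2 + y^2, so it is positive-definite.
def g(p):
    x, y = p
    return [[1 + x*x, x*y],
            [x*y, 1 + y*y]]

def pair(p, X, Y):
    """The pointwise pairing g_p(X_p, Y_p): a real number at each point p."""
    G = g(p)
    return sum(G[i][j] * X(p)[i] * Y(p)[j] for i in range(2) for j in range(2))

X = lambda p: (-p[1], p[0])   # a vector field vanishing only at the origin
Y = lambda p: (1.0, 0.0)

p = (0.3, -0.7)
print(pair(p, X, Y))   # varies smoothly with p
print(pair(p, X, X))   # nonnegative; for this particular g it works out to x^2 + y^2
```

Since $X$ is everywhere orthogonal to $v$ here, $g(X,X)$ reduces to $x^2+y^2$: nonnegative, and zero exactly where $X$ vanishes.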
What about nondegeneracy? This is a little trickier. Given a nonzero vector field $X$, we can find some point $p$ where $X_p$ is nonzero, and we know that there is some vector $v\in\mathcal{T}_pM$ such that $g_p(X_p,v)\neq0$. In fact, we can find some region $U$ around $p$ where $X$ is everywhere nonzero, and for each point $q\in U$ we can find a $v_q$ such that $g_q(X_q,v_q)\neq0$. The question is: can we do this in such a way that $q\mapsto v_q$ is a smooth vector field?
The trick is to pick some coordinate map $x$ on $U$, shrinking the region if necessary. Then there must be some coordinate vector field $\frac{\partial}{\partial x^i}$ such that

$\displaystyle g_p\left(X_p,\frac{\partial}{\partial x^i}\bigg\vert_p\right)\neq0$

because otherwise $g_p$ would be degenerate. Now we get a smooth function near $p$:

$\displaystyle q\mapsto g_q\left(X_q,\frac{\partial}{\partial x^i}\bigg\vert_q\right)$
which is nonzero at $p$, and so must be nonzero in some neighborhood of $p$. Letting $Y$ be this coordinate vector field gives us a vector field that, when paired with $X$ using $g$, gives a smooth function $g(X,Y)$ that is not identically zero. Thus $g(X,Y)$ is also nonzero, and $g$ is worthy of the title “inner product” on the module of vector fields over the ring of smooth functions.
Notice that we haven’t used the fact that the $g_p$ are positive-definite except in the proof that $g$ is, which means that if $g$ is merely pseudo-Riemannian then the pairing is still symmetric and nondegenerate. So it’s still sort of like an inner product, just as a symmetric, nondegenerate, but indefinite form is still sort of like an inner product.
Sorry for the delay but it’s been sort of hectic with work, my non-math hobbies, and my latest trip up to DC.
Anyway, now that we’ve introduced the idea of a metric on a manifold, it’s natural to talk about mappings that preserve them. We call such maps “isometries”, since they give the same measurements on tangent vectors before and after their application.
Now, normally there’s no canonical way to translate tensor fields from one manifold to another so that we can compare them, but we’ve seen one case where we can do it: pulling back differential forms. This works because differential forms take (contravariant) vector fields as their arguments, so we can pull back a form by using the derivative to push forward vectors and then evaluating.
So let’s get explicit: say we have a metric $g$ on the manifold $N$, which gives us an inner product $g_q$ on each tangent space $\mathcal{T}_qN$. If we have a smooth map $f:M\to N$, we want to use it to define a metric $f^*g$ on $M$. That is, we want an inner product $(f^*g)_p$ on each tangent space $\mathcal{T}_pM$.
Now, given vectors $v$ and $w$ in $\mathcal{T}_pM$, we can use the derivative $f_{*p}:\mathcal{T}_pM\to\mathcal{T}_{f(p)}N$ to push them forward to $f_{*p}v$ and $f_{*p}w$ in $\mathcal{T}_{f(p)}N$. We can hit these with the inner product $g_{f(p)}$, defining

$\displaystyle (f^*g)_p(v,w)=g_{f(p)}\left(f_{*p}v,f_{*p}w\right)$
It should be straightforward to check that this is indeed an inner product. To be thorough, we must also check that $f^*g$ is actually a tensor field; that is, as we move $p$ continuously around $M$, the inner product $(f^*g)_p$ varies smoothly.
To check this, we will use our trick: let $(U,x)$ be a coordinate patch around $p$, giving us the basic coordinate vector fields $\frac{\partial}{\partial x^i}$ in the patch. If $(V,y)$ is a coordinate patch around $f(p)$, then we know how to calculate the derivative $f_*$ applied to these vectors:

$\displaystyle f_*\frac{\partial}{\partial x^i}=\frac{\partial(y^k\circ f)}{\partial x^i}\frac{\partial}{\partial y^k}$

so we can stick this into the above calculation:

$\displaystyle (f^*g)\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=\frac{\partial(y^k\circ f)}{\partial x^i}\frac{\partial(y^l\circ f)}{\partial x^j}\,g\left(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l}\right)$
Since we assumed that $g$ is a metric, the evaluation on the right side is a smooth function, and thus the left side is as well. So we conclude that $f^*g$ is a smooth tensor field, and thus a metric.
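As a sanity check on this coordinate formula, here’s a sketch (my own example, not part of the argument above): we pull back the Euclidean metric on $\mathbb{R}^3$ along the usual parameterization of the unit sphere and recover the familiar round metric $\mathrm{diag}(1,\sin^2\theta)$. With $g$ Euclidean, the formula above reduces to $(f^*g)_{ij}=\sum_k\frac{\partial f^k}{\partial x^i}\frac{\partial f^k}{\partial x^j}$.

```python
import math

# Sphere parameterization f(theta, phi) -> R^3; we pull back the Euclidean
# metric and compare against the known round metric diag(1, sin^2(theta)).
def f(theta, phi):
    return (math.sin(theta)*math.cos(phi),
            math.sin(theta)*math.sin(phi),
            math.cos(theta))

def jacobian(theta, phi, h=1e-6):
    """Numerical Jacobian: row i is the pushforward of the i-th coordinate field."""
    rows = []
    for dth, dph in ((h, 0.0), (0.0, h)):
        plus = f(theta + dth, phi + dph)
        minus = f(theta - dth, phi - dph)
        rows.append([(a - b) / (2*h) for a, b in zip(plus, minus)])
    return rows  # rows[i][k] = d f^k / d x^i

def pullback_metric(theta, phi):
    J = jacobian(theta, phi)
    # (f*g)_{ij} = sum_k (df^k/dx^i)(df^k/dx^j), since g is the identity on R^3
    return [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(2)]
            for i in range(2)]

G = pullback_metric(0.8, 2.1)
print(G)  # approximately [[1, 0], [0, sin(0.8)^2]]
```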
Now if $M$ comes with its own metric $g_M$, we can ask whether the pull-back $f^*g$ is equal to $g_M$ at each point. If it is, then we call $f$ an isometry. It’s also common to say that $f$ “preserves the metric”, even though the metric gets pulled back, not pushed forward.
Ironically, in order to tie what we’ve been doing back to more familiar material, we actually have to introduce more structure. It’s sort of astonishing in retrospect how much structure comes along with the most basic, intuitive cases, or how much we can do before even using that structure.
In particular, we need to introduce something called a “Riemannian metric”, which will move us into the realm of differential geometry instead of just topology. Everything up until this point has been concerned with manifolds as “shapes”, but we haven’t really had any sense of “size” or “angle” or anything else we could measure. Having these notions — and asking that they be preserved — is the difference between geometry and topology.
Anyway, a Riemannian metric on a manifold $M$ is nothing more than a certain kind of tensor field $g$ of type $(0,2)$ on $M$. At each point $p\in M$, the field gives us a tensor:

$\displaystyle g_p\in\mathcal{T}^*_pM\otimes\mathcal{T}^*_pM$
We can interpret this as a bilinear function which takes in two vectors $v,w\in\mathcal{T}_pM$ and spits out a number $g_p(v,w)$. That is, $g_p$ is a bilinear form on the space $\mathcal{T}_pM$ of tangent vectors at $p$.
So, what makes $g$ into a Riemannian metric? We now add the assumption that $g_p$ is not just a bilinear form, but that it’s an inner product. That is, $g_p$ is symmetric, nondegenerate, and positive-definite. We can let the last condition slip a bit, in which case we call $g$ a “pseudo-Riemannian metric”. When equipped with a metric, we call $M$ a “(pseudo-)Riemannian manifold”.
It’s common to also say “Riemannian” in the case of negative-definite metrics, since there’s little difference between the cases of signature $(n,0)$ and $(0,n)$. Another common special case is that of a “Lorentzian” metric, which has signature $(n-1,1)$ or $(1,n-1)$.
As we might expect, $g$ is called a metric because it lets us measure things. Specifically, since $g_p$ is an inner product it gives us notions of the length and angle for tangent vectors at $p$. We must be careful here; we do not yet have a way of measuring distances between points on the manifold itself. The metric only tells us about the lengths of tangent vectors; it is not a metric in the sense of metric spaces. However, if two curves cross at a point we can use their tangent vectors to define the angle between the curves, so that’s something.
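For instance (a made-up numerical example, not part of the text above): the same pair of tangent vectors can make different angles depending on which inner product measures them.

```python
import math

def angle(g, u, v):
    """Angle between tangent vectors u, v as measured by the inner product g."""
    def ip(a, b):
        return sum(g[i][j] * a[i] * b[j] for i in range(2) for j in range(2))
    return math.acos(ip(u, v) / math.sqrt(ip(u, u) * ip(v, v)))

u, v = (1.0, 0.0), (1.0, 1.0)

euclidean = [[1.0, 0.0], [0.0, 1.0]]
stretched = [[1.0, 0.0], [0.0, 4.0]]  # hypothetical metric: vertical lengths doubled

print(angle(euclidean, u, v))  # pi/4
print(angle(stretched, u, v))  # larger, since v now "leans" in a longer direction
```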
If $M$ is such a manifold of dimension $n$, and if $\omega$ is a compactly-supported $(n-1)$-form on $M$, then as usual we can use a partition of unity to break up the form into pieces, each of which is supported within the image of an orientation-preserving singular $n$-cube. For each singular cube $c$, either the image of $c$ is contained totally within the interior of $M$, or it runs up against the boundary. In the latter case, without loss of generality, we can assume that the intersection of the image with $\partial M$ is exactly the face $c_{n,0}$ of $c$ where the $n$th coordinate is zero.
In the first case, our work is easy:

$\displaystyle \int_{\partial M}\omega=0=\int_{\partial c}\omega=\int_cd\omega=\int_Md\omega$

since $\omega$ is zero everywhere along the image of $\partial c$, and along $\partial M$.
In the other case, the vector fields $\frac{\partial}{\partial u^1},\dots,\frac{\partial}{\partial u^n}$ — in order — give positively-oriented bases of the tangent spaces of the standard $n$-cube. As $c$ is orientation-preserving, the ordered collection $\left(c_*\frac{\partial}{\partial u^1},\dots,c_*\frac{\partial}{\partial u^n}\right)$ gives positively-oriented bases of the tangent spaces of the image of $c$. The basis $\left(c_*\frac{\partial}{\partial u^n},c_*\frac{\partial}{\partial u^1},\dots,c_*\frac{\partial}{\partial u^{n-1}}\right)$ is positively-oriented if and only if $n-1$ is even, since we have to pull the $n$th vector past $n-1$ others, picking up a negative sign for each one. But for a point $x$ in the face with $u^n(x)=0$, we see that

$\displaystyle c_*\frac{\partial}{\partial u^i}\bigg\vert_x\in\mathcal{T}_{c(x)}\partial M$

for all $1\leq i\leq n-1$. That is, these image vectors are all within the tangent space of the boundary, and form a basis of it in this order. And since $-c_*\frac{\partial}{\partial u^n}$ is outward-pointing, this means that $c_{n,0}$ is orientation-preserving if and only if $n$ is even.
Now we can calculate

$\displaystyle \int_{\partial M}\omega=\int_{(-1)^nc_{n,0}}\omega=\int_{\partial c}\omega=\int_cd\omega=\int_Md\omega$

where we use the fact that integrals over orientation-reversing singular cubes pick up negative signs, along with the sign $(-1)^{n+0}$ that comes attached to the face $c_{n,0}$ of a singular $n$-cube, to cancel each other off.
So in general we find

$\displaystyle \int_{\partial M}\omega=\sum_\varphi\int_{\partial M}\varphi\omega=\sum_\varphi\int_Md(\varphi\omega)=\sum_\varphi\int_Md\varphi\wedge\omega+\sum_\varphi\int_M\varphi\,d\omega=\sum_\varphi\int_Md\varphi\wedge\omega+\int_Md\omega$
The last sum is finite, since on a neighborhood of the support of $\omega$ all but finitely many of the partition functions $\varphi$ are constantly zero, meaning that their differentials are zero as well. Since the sum is (locally) finite, we have no problem pulling it all the way inside:

$\displaystyle \sum_\varphi\int_Md\varphi\wedge\omega=\int_M\left(\sum_\varphi d\varphi\right)\wedge\omega=\int_Md\left(\sum_\varphi\varphi\right)\wedge\omega=\int_Md(1)\wedge\omega=0$
so the sum cancels off, leaving just the integral, as we’d expect. That is, under these circumstances,

$\displaystyle \int_{\partial M}\omega=\int_Md\omega$
which is Stokes’ theorem on manifolds.
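As a numerical illustration (my own example): take $\omega = x\,dy$ on the closed unit disk, so that $d\omega = dx\wedge dy$. Stokes’ theorem says the integral of $d\omega$ over the disk should equal the integral of $\omega$ around the boundary circle; both work out to $\pi$.

```python
import math

# Interior: integral of d(omega) = dx ^ dy over the unit disk, computed in polar
# coordinates with the midpoint rule (the polar Jacobian supplies the factor r).
Nr, Nt = 400, 400
dr, dt = 1.0 / Nr, 2 * math.pi / Nt
interior = sum((i + 0.5) * dr * dr * dt
               for i in range(Nr) for _ in range(Nt))

# Boundary: integral of x dy around the unit circle, with x = cos t, dy = cos t dt.
Nb = 4000
db = 2 * math.pi / Nb
boundary = sum(math.cos(k * db) ** 2 * db for k in range(Nb))

print(interior, boundary)  # both close to pi
```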
Let’s take a manifold $M$ with boundary $\partial M$ and give it an orientation. In particular, for each $p\in M$ we can classify any ordered basis of the tangent space $\mathcal{T}_pM$ as either positive or negative with respect to our orientation. It turns out that this gives rise to an orientation on the boundary $\partial M$.
Now, if $p\in\partial M$ is a boundary point, we’ve seen that we can define the tangent space $\mathcal{T}_pM$, which contains — as an $(n-1)$-dimensional subspace — $\mathcal{T}_p\partial M$. This subspace cuts the tangent space into two pieces, which we can distinguish as follows: if $(U,x)$ is a coordinate patch around $p$ with $x(p)=0$, then the image of $\partial M$ near $p$ is a chunk of the hyperplane $u^n=0$. The inside of $M$ corresponds to the area where $u^n>0$, while the outside corresponds to $u^n<0$.
And so the map $x_{*p}$ sends a vector $v\in\mathcal{T}_pM$ to a vector in $\mathcal{T}_0\mathbb{R}^n$, which either points up into the positive half-space, along the hyperplane, or down into the negative half-space. Accordingly, we say that $v$ is “inward-pointing” if $x_{*p}v$ lands in the first category, and “outward-pointing” if it lands in the last. We can tell the difference by measuring the $n$th component — the value $du^n(x_{*p}v)$. If this value is positive the vector is inward-pointing, while if it’s negative the vector is outward-pointing.
This definition may seem to depend on our choice of coordinate patch, but the division of $\mathcal{T}_pM$ into halves is entirely geometric. The only role the coordinate map plays is in giving a handy test to see which half a vector lies in.
Now we are in a position to give an orientation to the boundary $\partial M$, which we do by specifying which bases of $\mathcal{T}_p\partial M$ are “positively oriented” and which are “negatively oriented”. Specifically, if $(v_1,\dots,v_{n-1})$ is a basis of $\mathcal{T}_p\partial M$ then we say it’s positively oriented if for any outward-pointing vector $w$ the basis $(w,v_1,\dots,v_{n-1})$ is positively oriented as a basis of $\mathcal{T}_pM$, and similarly for negatively oriented bases.
We must check that this choice does define an orientation on $\partial M$. Specifically, if $(V,y)$ is another coordinate patch with $V\cap\partial M\neq\emptyset$, then we can set up the same definitions and come up with an orientation at each point of $V\cap\partial M$. If $x$ and $y$ are compatibly oriented, then the two orientations they induce on the boundary must be compatible as well.
So we assume that the Jacobian of the transition function $y\circ x^{-1}$ is everywhere positive on $U\cap V$. That is

$\displaystyle \det\left(\frac{\partial y^i}{\partial x^j}\right)>0$

We can break down $x$ and $y$ to strip off their last components. That is, we write $x=(\hat{x},x^n)$, and similarly for $y$. The important thing here is that when we restrict to the boundary the $\hat{x}$ work as a coordinate map, as do the $\hat{y}$. So if we set $x^n=0$ and vary any of the other $x^i$, the value of $y^n$ remains at zero. And thus we can expand the determinant above:

$\displaystyle \det\begin{pmatrix}\dfrac{\partial\hat{y}}{\partial\hat{x}}&\dfrac{\partial\hat{y}}{\partial x^n}\\0&\dfrac{\partial y^n}{\partial x^n}\end{pmatrix}=\det\left(\frac{\partial\hat{y}}{\partial\hat{x}}\right)\frac{\partial y^n}{\partial x^n}$
The determinant is therefore the determinant of the upper-left $(n-1)\times(n-1)$ submatrix — which is the Jacobian determinant of the transition function on the intersection with the boundary — times the value $\frac{\partial y^n}{\partial x^n}$ in the lower right.
If the orientations induced by those on $(U,x)$ and $(V,y)$ are to be compatible, then this Jacobian determinant on the boundary must be everywhere positive. Since the overall determinant is everywhere positive, this is equivalent to the lower-right component being everywhere positive on the boundary. That is:

$\displaystyle \frac{\partial y^n}{\partial x^n}>0$
But this asks how the $n$th component of $y$ changes as the $n$th component of $x$ increases, that is, as we move away from the boundary. But, at least where we start on the boundary, $y^n$ can do nothing but increase! And thus this partial derivative must be positive, which proves our assertion.
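The expansion used above is just the determinant of a block-triangular matrix. A tiny numerical check, with a made-up $3\times3$ matrix standing in for the Jacobian at a boundary point (its lower-left block vanishes, just as the derivatives of $y^n$ along the boundary do):

```python
# A hypothetical 3x3 "Jacobian on the boundary": the last row is (0, ..., 0, d),
# since y^n stays at zero as the boundary coordinates vary.
A = [[2.0, 1.0, 5.0],
     [1.0, 3.0, 7.0],
     [0.0, 0.0, 4.0]]

def det(m):
    """Determinant by cofactor expansion along the first row (fine for tiny matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

upper_left = [[2.0, 1.0], [1.0, 3.0]]
lower_right = 4.0

print(det(A), det(upper_left) * lower_right)  # equal: 20.0 and 20.0
```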
If we have a manifold $M$ with boundary $\partial M$, then at all the interior points it looks just like a regular manifold, and so the tangent space is just the same as ever. But what happens when we consider a point $p\in\partial M$?
Well, if $(U,x)$ is a chart around $p$ with $x(p)=0$, then we see that the part of the boundary within $U$ — that is, $U\cap\partial M$ — is the preimage of the surface $u^n=0$. The point $0$ has a perfectly good tangent space as a point in $\mathbb{R}^n$: the space $\mathcal{T}_0\mathbb{R}^n$. We will consider this to be the tangent space of the half-space $H^n$ at zero, even though half of its vectors “point outside” the space itself.
We can use this to define the tangent space $\mathcal{T}_pM$. Indeed, the function $x$ goes from $U$ to $H^n$ and takes the point $p$ to $0$; it only makes sense to define $\mathcal{T}_pM$ so that the derivative $x_{*p}$ is an isomorphism onto $\mathcal{T}_0\mathbb{R}^n$.
This is all well and good algebraically, but geometrically it seems that we’re letting tangent vectors spill “off the edge” of $M$. But remember our geometric characterization of tangent vectors as equivalence classes of curves — of “directions” that curves can go through $p$. Indeed, a curve could well run up to the edge of $M$ at the point $p$ in any direction that — if continued — would leave the manifold through its boundary. The geometric definition makes it clear that this is indeed the proper notion of the tangent space at a boundary point.
Now, let $\hat{x}$ be the function we get by restricting $x$ to the boundary $U\cap\partial M$. The function $x$ sends the boundary of $M$ to the boundary of $H^n$ — at least locally — and there is an inclusion $\iota:\partial M\to M$. On the other hand, there is an inclusion $j:\mathbb{R}^{n-1}\to\mathbb{R}^n$ of the hyperplane $u^n=0$, which then sends the image of $\hat{x}$ to the boundary of $H^n$ — again, at least locally. That is, we have the equation

$\displaystyle x\circ\iota=j\circ\hat{x}$

Taking the derivative, we see that

$\displaystyle x_*\circ\iota_*=j_*\circ\hat{x}_*$
But $j_*$ must be the inclusion of the subspace $\mathbb{R}^{n-1}\subseteq\mathbb{R}^n$ into the tangent space $\mathcal{T}_0\mathbb{R}^n$. That is, the tangent vectors to the boundary manifold $\partial M$ are exactly those tangent vectors on the boundary that $x_*$ sends to tangent vectors in $\mathbb{R}^n$ whose $n$th component is zero.
Ever since we started talking about manifolds, we’ve said that they locally “look like” the Euclidean space $\mathbb{R}^n$. We now need to be a little more flexible and let them “look like” the half-space $H^n=\{(u^1,\dots,u^n)\in\mathbb{R}^n\mid u^n\geq0\}$.
Away from the subspace $u^n=0$, $H^n$ is a regular $n$-dimensional manifold — we can always find a small enough ball that stays away from the edge — but on the boundary subspace it’s a different story. Just like we wrote the boundary of a singular cubic chain, we write $\partial H^n$ for this boundary. Any point that gets sent to $\partial H^n$ by a coordinate map must be sent to $\partial H^n$ by every coordinate map. Indeed, if $y$ is another coordinate map on the same patch around such a point, then the transition function $y\circ x^{-1}$ must be a homeomorphism from one open subset of $H^n$ onto another, and so it must send boundary points to boundary points. Thus we can define the boundary $\partial M$ to be the collection of all these points.
Locally, $\partial M$ is an $(n-1)$-dimensional manifold. Indeed, if $(U,x)$ is a coordinate patch around a point $p\in\partial M$ then $x(U)\cap\partial H^n$ is an open subset of $\partial H^n\cong\mathbb{R}^{n-1}$, and thus the preimage $U\cap\partial M$ is an $(n-1)$-dimensional coordinate patch around $p$. Since every point of $\partial M$ is contained in such a patch, $\partial M$ is indeed an $(n-1)$-dimensional manifold.
As for smooth structures on $M$ and $\partial M$, we define them exactly as usual: real-valued functions $f$ on a patch $(U,x)$ of $M$ containing some boundary points are considered smooth if and only if the composition $f\circ x^{-1}$ is smooth as a map from (a portion of) the half-space to $\mathbb{R}$. And such a function is smooth at a boundary point of the half-space if and only if it’s smooth in some neighborhood of the point which extends — slightly — across the boundary.
Let’s say we have a diffeomorphism $f:M\to N$ from one $n$-dimensional manifold to another. Since $f$ is both smooth and has a smooth inverse, we must find that the Jacobian is always invertible; the inverse of the Jacobian of $f$ at $p$ is the Jacobian of $f^{-1}$ at $f(p)$. And so — assuming $M$ is connected — the sign of the Jacobian determinant must be constant. That is, $f$ is either orientation-preserving or orientation-reversing.
Remembering that diffeomorphism is meant to be our idea of what it means for two smooth manifolds to be “equivalent”, this means that $N$ is either equivalent to $M$ or to $-M$. And I say that this equivalence comes out in integrals.
So further, let’s say we have a compactly-supported $n$-form $\omega$ on $N$. We can use $f$ to pull back $\omega$ from $N$ to $M$. Then I say that

$\displaystyle \int_N\omega=\pm\int_Mf^*\omega$

where the positive sign holds if $f$ is orientation-preserving and the negative if $f$ is orientation-reversing.
In fact, we just have to show the orientation-preserving side, since if $f$ is orientation-reversing from $M$ to $N$ then it’s orientation-preserving from $-M$ to $N$, and we already know that integrals over $-M$ are the negatives of those over $M$. Further, we can assume that the support of $\omega$ fits within the image of some singular cube $c$, for if it doesn’t we can chop it up into pieces that do fit into the images of cubes $c_i$, and similarly chop up $f^*\omega$ into pieces that fit within the corresponding singular cubes $f^{-1}\circ c_i$.
But now it’s easy! If $\omega$ is supported within the image of an orientation-preserving singular cube $c$, then $f^*\omega$ must be supported within the image of $f^{-1}\circ c$, which is also orientation-preserving since both $f^{-1}$ and $c$ are, by assumption. Then we find

$\displaystyle \int_Mf^*\omega=\int_{f^{-1}\circ c}f^*\omega=\int_{[0,1]^n}\left(f^{-1}\circ c\right)^*f^*\omega=\int_{[0,1]^n}c^*\left(f^{-1}\right)^*f^*\omega=\int_{[0,1]^n}c^*\omega=\int_c\omega=\int_N\omega$
In this sense we say that integrals are preserved by (orientation-preserving) diffeomorphisms.
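In one dimension this is just the change-of-variables formula. A numerical sketch (my own example): $f(x)=x^2$ is an orientation-preserving diffeomorphism of $(0,1)$ onto itself, and for $\omega=\cos(y)\,dy$ we have $f^*\omega=\cos(x^2)\,2x\,dx$; the two integrals agree.

```python
import math

def midpoint(fn, a, b, n=100000):
    """Midpoint-rule approximation of the integral of fn over [a, b]."""
    h = (b - a) / n
    return sum(fn(a + (k + 0.5) * h) for k in range(n)) * h

# omega = cos(y) dy on N = (0, 1)
lhs = midpoint(math.cos, 0.0, 1.0)

# f(x) = x^2 is orientation-preserving (f' > 0 on (0, 1)), and
# f*omega = cos(f(x)) f'(x) dx on M = (0, 1)
rhs = midpoint(lambda x: math.cos(x**2) * 2*x, 0.0, 1.0)

print(lhs, rhs)  # both close to sin(1)
```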
If we have an oriented manifold $M$, then we know that the underlying manifold has another orientation available; if $\omega$ is a top form that gives $M$ its orientation, then $-\omega$ gives it the opposite orientation. We will write $-M$ for the same underlying manifold equipped with this opposite orientation.
Now it turns out that the integrals over the same manifold with the two different orientations are closely related. Indeed, if $\omega$ is any $n$-form on the oriented $n$-manifold $M$, then we find

$\displaystyle \int_{-M}\omega=-\int_M\omega$
Without loss of generality, we may assume that $\omega$ is supported within the image of a singular cube $c:[0,1]^n\to M$. If not, we break it apart with a partition of unity as usual.
Now, if $c$ is orientation-preserving, then we can come up with another singular cube that reverses the orientation. Indeed, let $r:[0,1]^n\to[0,1]^n$ be defined by $r(u^1,u^2,\dots,u^n)=(1-u^1,u^2,\dots,u^n)$. It’s easy to see that $r^*$ sends $du^1$ to $-du^1$ and preserves all the other $du^i$. Thus it sends $du^1\wedge\cdots\wedge du^n$ to its negative, which shows that $r$ is an orientation-reversing mapping from the standard $n$-cube to itself. Thus we conclude that the composite $c\circ r$ is an orientation-reversing singular cube with the same image as $c$.
But then $c\circ r$ is an orientation-preserving singular cube with respect to $-M$ containing the support of $\omega$, and so we can use it to calculate integrals over $-M$. Working in from each side of our proposed equality we find

$\displaystyle \int_{-M}\omega=\int_{c\circ r}\omega=\int_{[0,1]^n}(c\circ r)^*\omega=\int_{[0,1]^n}r^*c^*\omega$

and

$\displaystyle -\int_M\omega=-\int_c\omega=-\int_{[0,1]^n}c^*\omega$
We know that we can write

$\displaystyle c^*\omega=g\,du^1\wedge\cdots\wedge du^n$

for some function $g$. And as we saw above, $r^*$ sends $du^1\wedge\cdots\wedge du^n$ to its negative. Thus we conclude that

$\displaystyle \int_{[0,1]^n}r^*c^*\omega=-\int_{[0,1]^n}(g\circ r)\,du^1\wedge\cdots\wedge du^n=-\int_{[0,1]^n}g\,du^1\wedge\cdots\wedge du^n=-\int_{[0,1]^n}c^*\omega$

where the middle equality is the ordinary change of variables $u^1\mapsto1-u^1$, whose Jacobian has absolute value $1$,
meaning that when we calculate the integral over $-M$ we’re using the negative of the form on $[0,1]^n$ that we use when calculating the integral over $M$.
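Concretely, in one dimension (a made-up example): with $c^*\omega=g\,du$ and $r(u)=1-u$, the reparameterized cube integrates the negated, reflected coefficient, and the result flips sign.

```python
import math

def midpoint(fn, n=100000):
    """Midpoint-rule integral of fn over [0, 1]."""
    h = 1.0 / n
    return sum(fn((k + 0.5) * h) for k in range(n)) * h

g = lambda u: math.exp(u)   # coefficient of c*omega = g du
r = lambda u: 1.0 - u       # orientation-reversing reparameterization

plus = midpoint(g)                      # integral over c
minus = midpoint(lambda u: -g(r(u)))    # integral over c∘r: coefficient -g(1-u)

print(plus, minus)  # close to e - 1 and -(e - 1)
```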
This makes it even more sensible to identify an orientation-preserving singular cube with its image. When we write out a chain, a positive multiplier has the sense of counting a point in the domain more than once, while a negative multiplier has the sense of counting a point with the opposite orientation. In this sense, integration is “additive” in the domain of integration, as well as linear in the integrand.
The catch is that this only works when $M$ is orientable. When this condition fails we still know how to integrate over chains, but we lose the sense of orientation.
Well, paradoxically, we start by getting smaller. Specifically, I say that we can always find an open cover of our oriented manifold $M$ such that each set in the cover is contained within the image of a singular cube.
We start with any oriented atlas, which gives us a coordinate patch $(U,x)$ around any point $p$ we choose. Without loss of generality we can pick the coordinates such that $x(p)=0$. There must be some open ball around $0$ whose closure is completely contained within $x(U)$; this closure is itself the image of a singular cube, and the ball is obviously contained in its closure. Hitting everything with $x^{-1}$ we get an open set — the inverse image of the ball — contained in the image of a singular cube, all of which contains $p$. Since we can find such a set around any point we can throw them together to get an open cover of $M$.
So, what does this buy us? If $\omega$ is any compactly-supported $n$-form on an $n$-dimensional manifold $M$, we can cover its support with some open subsets of $M$, each of which is contained in the image of a singular $n$-cube. In fact, since the support is compact, we only need a finite number of the open sets to do the job, and we throw in however many others we need to cover the rest of $M$.
We can then find a partition of unity $\{\varphi_i\}$ subordinate to this cover of $M$. We can decompose $\omega$ into a (finite) sum:

$\displaystyle \omega=\sum_i\varphi_i\omega$

which is great because now we can define

$\displaystyle \int_M\omega=\sum_i\int_{c_i}\varphi_i\omega$

where $c_i$ is a singular cube whose image contains the support of $\varphi_i\omega$.
But now we must be careful! What if this definition depends on our choice of a suitable partition of unity? Well, say that $\{\psi_j\}$ is another such partition. Then we can write

$\displaystyle \sum_i\int_M\varphi_i\omega=\sum_i\int_M\left(\sum_j\psi_j\right)\varphi_i\omega=\sum_{i,j}\int_M\psi_j\varphi_i\omega=\sum_j\int_M\left(\sum_i\varphi_i\right)\psi_j\omega=\sum_j\int_M\psi_j\omega$

so we get the same answer no matter which partition we use.
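Here’s a small numerical sketch of that calculation (my own example, with the interval $[0,1]$ standing in for a cube’s image): two different partitions of unity give the same total integral.

```python
import math

def midpoint(fn, n=20000):
    """Midpoint-rule integral of fn over [0, 1]."""
    h = 1.0 / n
    return sum(fn((k + 0.5) * h) for k in range(n)) * h

omega = lambda x: math.sin(math.pi * x) ** 2   # coefficient of a test form on [0, 1]

# Two different (made-up) partitions of unity on [0, 1]: each family is
# nonnegative and sums to 1 everywhere.
phi = [lambda x: x**2, lambda x: 1 - x**2]
psi = [lambda x: math.sin(math.pi*x/2)**2, lambda x: math.cos(math.pi*x/2)**2]

total = midpoint(omega)
via_phi = sum(midpoint(lambda x, p=p: p(x) * omega(x)) for p in phi)
via_psi = sum(midpoint(lambda x, p=p: p(x) * omega(x)) for p in psi)

print(total, via_phi, via_psi)  # all three agree
```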