Well, now we know how to translate $k$-forms by pulling back, and thus we can define another Lie derivative:

$$L_X\omega=\frac{d}{dt}\bigg\vert_{t=0}\left[(\Phi_t)^*\omega\right]$$

where $\Phi$ is the flow of the vector field $X$.
What happens if $\omega$ is a $0$-form, that is, a function $f$? We check:

$$L_Xf=\frac{d}{dt}\bigg\vert_{t=0}\left[(\Phi_t)^*f\right]=\frac{d}{dt}\bigg\vert_{t=0}\left[f\circ\Phi_t\right]=Xf$$

since the curve $t\mapsto\Phi_t(p)$ has tangent vector $X_p$ at $t=0$.
That is, the Lie derivative by $X$ acts on $f$ exactly the same as the vector field $X$ itself does.
I also say that the Lie derivative by $X$ is a degree-zero derivation of the algebra $\Omega(M)$. That is, it's a real-linear transformation, and it satisfies the Leibniz rule:

$$L_X(\alpha\wedge\beta)=(L_X\alpha)\wedge\beta+\alpha\wedge(L_X\beta)$$
for any $k$-form $\alpha$ and $l$-form $\beta$. Linearity is straightforward, and given linearity the Leibniz rule follows if we can show

$$L_X(\alpha_1\wedge\dots\wedge\alpha_k\wedge\beta_1\wedge\dots\wedge\beta_l)=\left[L_X(\alpha_1\wedge\dots\wedge\alpha_k)\right]\wedge\beta_1\wedge\dots\wedge\beta_l+\alpha_1\wedge\dots\wedge\alpha_k\wedge\left[L_X(\beta_1\wedge\dots\wedge\beta_l)\right]$$

for $1$-forms $\alpha_i$ and $\beta_j$. Indeed, we can write $\alpha$ and $\beta$ as linear combinations of such $k$- and $l$-fold wedges, and then the general Leibniz rule follows by linearity.
So, let us calculate, using the fact that pullbacks preserve wedge products:

$$\begin{aligned}L_X(\alpha_1\wedge\alpha_2\wedge\dots\wedge\beta_l)&=\frac{d}{dt}\bigg\vert_{t=0}\left[(\Phi_t)^*(\alpha_1\wedge\alpha_2\wedge\dots\wedge\beta_l)\right]\\&=\frac{d}{dt}\bigg\vert_{t=0}\left[\left[(\Phi_t)^*\alpha_1\right]\wedge\left[(\Phi_t)^*(\alpha_2\wedge\dots\wedge\beta_l)\right]\right]\\&=(L_X\alpha_1)\wedge(\alpha_2\wedge\dots\wedge\beta_l)+\alpha_1\wedge L_X(\alpha_2\wedge\dots\wedge\beta_l)\end{aligned}$$

where the last step is the product rule applied to the derivative of a wedge of two one-parameter families of forms. So we see how we can peel off one of the $1$-forms. A simple induction gives us the general case.
We’ve just seen that smooth real-valued functions are differential forms with grade zero. We also know that functions pull back along smooth maps: if $f$ is a smooth function on an open subset $V\subseteq N$ and if $\varphi:M\to N$ is a smooth map, then $f\circ\varphi$ is a smooth function on $\varphi^{-1}(V)$, and we write $\varphi^*f=f\circ\varphi$.
It turns out that all $k$-forms pull back in a similar way. But the “value” of a $k$-form doesn’t only depend on a point, but on $k$ vectors at that point. Functions pull back because smooth maps push points forward. It turns out that vectors push forward as well, by the derivative. And so we can define the pullback of a $k$-form $\omega$:

$$\left[\varphi^*\omega\right]_p(v_1,\dots,v_k)=\omega_{\varphi(p)}\left(\varphi_{*p}v_1,\dots,\varphi_{*p}v_k\right)$$

Here $\omega$ is a $k$-form on a region $V\subseteq N$, $p$ is a point in $\varphi^{-1}(V)\subseteq M$, and the $v_i$ are vectors in $\mathcal{T}_pM$. Since the differential $\varphi_{*p}:\mathcal{T}_pM\to\mathcal{T}_{\varphi(p)}N$ is a linear function and $\omega_{\varphi(p)}$ is a multilinear function on $\mathcal{T}_{\varphi(p)}N$, $\left[\varphi^*\omega\right]_p$ is a multilinear function on $\mathcal{T}_pM$, as asserted.
This pullback $\varphi^*$ is a homomorphism of graded algebras. Since it sends $k$-forms to $k$-forms, it has degree zero. To show that it’s a homomorphism, we must verify that it preserves addition, scalar multiplication by functions, and exterior multiplication. If $\omega$ and $\eta$ are $k$-forms on $V$, we can check

$$\left[\varphi^*(\omega+\eta)\right]_p(v_1,\dots,v_k)=(\omega+\eta)_{\varphi(p)}(\varphi_{*p}v_1,\dots,\varphi_{*p}v_k)=\left[\varphi^*\omega\right]_p(v_1,\dots,v_k)+\left[\varphi^*\eta\right]_p(v_1,\dots,v_k)$$

so $\varphi^*(\omega+\eta)=\varphi^*\omega+\varphi^*\eta$. Also, if $f$ is a smooth function on $V$ we can check

$$\left[\varphi^*(f\omega)\right]_p(v_1,\dots,v_k)=f(\varphi(p))\,\omega_{\varphi(p)}(\varphi_{*p}v_1,\dots,\varphi_{*p}v_k)=\left[(\varphi^*f)(\varphi^*\omega)\right]_p(v_1,\dots,v_k)$$
As for exterior multiplication, we will use the fact that we can write any $k$-form as a linear combination of $k$-fold products of $1$-forms. Thus we only have to check that

$$\varphi^*(\alpha_1\wedge\dots\wedge\alpha_k)=(\varphi^*\alpha_1)\wedge\dots\wedge(\varphi^*\alpha_k)$$

for $1$-forms $\alpha_i$, which holds because each side evaluates on vectors $v_1,\dots,v_k$ by first pushing all the vectors forward by $\varphi_*$ and then antisymmetrizing. Thus $\varphi^*$ preserves the wedge product as well, and thus gives us a degree-zero homomorphism $\Omega(N)\to\Omega(M)$ of the exterior algebras.
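As a sanity check on multiplicativity, here is a small numerical sketch (the map $\varphi$ and the $1$-forms below are arbitrary choices for illustration, not anything canonical) verifying $\varphi^*(\alpha\wedge\beta)=(\varphi^*\alpha)\wedge(\varphi^*\beta)$ at a single point of $\mathbb{R}^2$:

```python
import numpy as np

# A hypothetical smooth map phi: R^2 -> R^2, with its Jacobian written out by hand.
def phi(p):
    x, y = p
    return np.array([x**2 - y, x*y])

def dphi(p):
    x, y = p
    return np.array([[2*x, -1.0],
                     [y,   x]])

# Two 1-forms on the target, each given by its coefficient covector at a point q:
# alpha = q0 dq0 + q1^2 dq1, beta = dq0 + q0 dq1.
alpha = lambda q: np.array([q[0], q[1]**2])
beta = lambda q: np.array([1.0, q[0]])

# The wedge of two 1-forms evaluated on a pair of vectors:
# (a ^ b)(v, w) = a(v) b(w) - a(w) b(v)
def wedge(a, b, v, w):
    return (a @ v) * (b @ w) - (a @ w) * (b @ v)

p = np.array([1.0, 2.0])
v, w = np.array([1.0, 0.5]), np.array([-2.0, 3.0])
J, q = dphi(p), phi(p)

# phi^*(alpha ^ beta) at p: push the vectors forward, then evaluate at phi(p).
lhs = wedge(alpha(q), beta(q), J @ v, J @ w)
# (phi^* alpha) ^ (phi^* beta) at p: pull each 1-form back (compose with J) first.
rhs = wedge(alpha(q) @ J, beta(q) @ J, v, w)
assert np.isclose(lhs, rhs)
```

Both sides agree because pulling back a $1$-form just composes its coefficient covector with the Jacobian, exactly as in the definition above.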
We’ve defined the exterior bundle $\Lambda^k(\mathcal{T}^*M)$ over a manifold $M$. Given any open $U\subseteq M$ we’ve also defined a $k$-form over $U$ to be a section of this bundle: a function $\omega:U\to\Lambda^k(\mathcal{T}^*M)$ such that $\pi\circ\omega=1_U$. We write $\Omega^k(U)$ for the collection of all such $k$-forms over $U$. It’s straightforward to see that this defines a sheaf $\Omega^k_M$ on $M$.
This isn’t just a sheaf of sets; it’s a sheaf of modules over the structure sheaf $\mathcal{O}_M$ of smooth functions on $M$. We define the necessary operations pointwise:

$$\begin{aligned}(\omega+\eta)(p)&=\omega(p)+\eta(p)\\(f\omega)(p)&=f(p)\,\omega(p)\end{aligned}$$

where the right-hand sides are defined by the vector space structures on the respective fibers $\Lambda^k(\mathcal{T}^*_pM)$.
We can go even further and define the sheaf of differential forms

$$\Omega_M=\bigoplus\limits_{k=0}^n\Omega^k_M$$
This sheaf is not just a sheaf of modules over $\mathcal{O}_M$, it’s a sheaf of algebras. For an $\omega\in\Omega^k(U)$ and an $\eta\in\Omega^l(U)$, we define their exterior product pointwise:

$$(\omega\wedge\eta)(p)=\omega(p)\wedge\eta(p)$$
In fact, this is a graded algebra, and the multiplication has degree zero:

$$\wedge:\Omega^k(U)\times\Omega^l(U)\to\Omega^{k+l}(U)$$
Even better, this is a unital algebra. We see this by considering the zero grade, since the unit must live in the zero grade. Indeed, $\Lambda^0(\mathcal{T}^*_pM)\cong\mathbb{R}$, so sections of $\Lambda^0(\mathcal{T}^*M)$ are simply real-valued functions on $M$. That is, $\Omega^0_M=\mathcal{O}_M$, and the unit is the constant function $1$. Given a function $f$ and a form $\omega$ we will just write $f\omega$ instead of $f\wedge\omega$.
A tensor field $T$ over a manifold $M$ gives us a tensor $T_p$ at each point $p\in M$. And we know that $T_p$ can be considered as a multilinear map. Specifically, if $T$ is a tensor field of type $(r,s)$, then we find

$$T_p\in\left(\mathcal{T}_pM\right)^{\otimes r}\otimes\left(\mathcal{T}^*_pM\right)^{\otimes s}$$

which we can interpret as a multilinear map:

$$T_p:\underbrace{\mathcal{T}^*_pM\times\dots\times\mathcal{T}^*_pM}_{r}\times\underbrace{\mathcal{T}_pM\times\dots\times\mathcal{T}_pM}_{s}\to\mathbb{R}$$

where multilinearity means that $T_p$ is linear in each variable separately.
As we let $p$ vary over $M$, we can interpret $T$ as defining a function which takes $r$ covector fields and $s$ vector fields and gives back a smooth function. Explicitly:

$$\left[T(\lambda_1,\dots,\lambda_r,X_1,\dots,X_s)\right](p)=T_p\left((\lambda_1)_p,\dots,(\lambda_r)_p,(X_1)_p,\dots,(X_s)_p\right)$$
And, in particular, this function is multilinear over $C^\infty(M)$. That is,

$$T(\lambda_1,\dots,\lambda_r,fX_1,X_2,\dots,X_s)=fT(\lambda_1,\dots,\lambda_r,X_1,X_2,\dots,X_s)$$

for any smooth function $f$, and sums likewise pull out of each slot.
And a similar calculation holds for any of the other variables, vector or covector.
So each tensor field gives us a multilinear function $\left(\Omega^1(M)\right)^r\times\left(\mathfrak{X}(M)\right)^s\to C^\infty(M)$, and this multilinearity is not only true over $\mathbb{R}$ but over $C^\infty(M)$ as well.
Conversely, let $F:\left(\Omega^1(M)\right)^r\times\left(\mathfrak{X}(M)\right)^s\to C^\infty(M)$ be an $\mathbb{R}$-multilinear function. If it’s also linear over $C^\infty(M)$ in each variable, then it “lives locally”. That is, if $(\lambda_i)_p=(\mu_i)_p$ for each covector field and $(X_j)_p=(Y_j)_p$ for each vector field, then

$$\left[F(\lambda_1,\dots,\lambda_r,X_1,\dots,X_s)\right](p)=\left[F(\mu_1,\dots,\mu_r,Y_1,\dots,Y_s)\right](p)$$

and so at each $p$ there is some tensor $T_p$ giving these values, so that $F$ is a tensor field.
This is as distinguished from things like differential operators (the operator $f\mapsto Xf$ taking a function to its derivative along a fixed vector field $X$, for instance) which fail on both counts. On the one side, we can calculate

$$X(fg)=fXg+gXf$$

which picks up an extra term. The operator is $\mathbb{R}$-linear but not $C^\infty(M)$-linear. On the other side, the value of this function at $p$ doesn’t just depend on the value of $f$ at $p$, but on how $f$ changes through $p$. That is, this operator does not “live locally”, and is not a tensor field.
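Here is a tiny symbolic sketch of that failure (the particular $X$, $f$, and $g$ are arbitrary choices for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# The operator g -> X(g), for the fixed vector field X = d/dx.
X = lambda h: sp.diff(h, x)

f = x*y        # a smooth "coefficient" function
g = sp.sin(x)  # the function being differentiated

# Leibniz rule: X(fg) = f X(g) + g X(f) -- the derivative picks up an extra term.
assert sp.simplify(X(f*g) - (f*X(g) + g*X(f))) == 0

# That extra term g X(f) is exactly the failure of C^infty-linearity in g.
assert sp.simplify(X(f*g) - f*X(g)) != 0
```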
To prove this assertion, it will suffice to deal with the case where $F$ takes a single vector variable $X$, and we only need to verify that if $X_p=0$ then $\left[F(X)\right](p)=0$. Let $(U,x)$ be a chart around $p$, and write

$$X=\sum\limits_{i=1}^nf^i\frac{\partial}{\partial x^i}$$

where by assumption each $f^i(p)=0$. We let $V$ be a neighborhood of $p$ whose closure is contained in $U$. We know we can find a smooth bump function $\phi$ supported in $U$ and with $\phi\equiv1$ on $\bar{V}$.

Now we define vector fields $X_i=\phi\frac{\partial}{\partial x^i}$ on $U$ and $X_i=0$ off the support of $\phi$. Similarly we define smooth functions $g^i=\phi f^i$ on $U$ and $g^i=0$ off the support of $\phi$. Then we can write

$$\phi^2X=\sum\limits_{i=1}^ng^iX_i$$

and so $C^\infty(M)$-linearity gives

$$\left[F(X)\right](p)=\phi(p)^2\left[F(X)\right](p)=\left[F(\phi^2X)\right](p)=\sum\limits_{i=1}^ng^i(p)\left[F(X_i)\right](p)=0$$

since each $g^i(p)=\phi(p)f^i(p)=0$.
We’ve seen that given a local coordinate patch $(U,x)$ we can decompose tensor fields in terms of the coordinate bases $\left\{\frac{\partial}{\partial x^i}\right\}$ and $\left\{dx^i\right\}$ on $\mathcal{T}U$ and $\mathcal{T}^*U$, respectively. But what happens if we want to pass from the $x$-coordinate system to another coordinate system $y$?
For vectors and covectors, we know the answers. We pass from the $x$-coordinate basis to the $y$-coordinate basis of $\mathcal{T}U$ by using a Jacobian:

$$\frac{\partial}{\partial x^i}=\sum\limits_{j=1}^n\frac{\partial y^j}{\partial x^i}\frac{\partial}{\partial y^j}$$

where we calculate the coefficients by writing the $y$ coordinate functions in terms of the $x$ coordinates. That is, we’re calculating the Jacobian of the transition function $y\circ x^{-1}$.
Similarly, we pass from the $x$-coordinate basis to the $y$-coordinate basis of $\mathcal{T}^*U$ by using another Jacobian:

$$dx^i=\sum\limits_{j=1}^n\frac{\partial x^i}{\partial y^j}dy^j$$

where here we use the Jacobian of the inverse transition function $x\circ y^{-1}$.
Since we build up the coordinates on the tensor bundles as the canonical ones induced on the tensor spaces by the coordinate bases on $\mathcal{T}_pM$ and $\mathcal{T}^*_pM$, we immediately get coordinate transforms for all these bundles.
As one example in particular, given the basis $\{dx^i\}$ and the basis $\{dy^j\}$ on the coordinate patch $U$ we can build up two “top forms” in $\Omega^n(U)$ (“top”, since $n$ is the highest possible degree of a differential form). These are $dx^1\wedge\dots\wedge dx^n$ and $dy^1\wedge\dots\wedge dy^n$, and it turns out there’s a nice formula relating them. We just work it out from the formula above:

$$dx^1\wedge\dots\wedge dx^n=\sum\limits_{j_1,\dots,j_n=1}^n\frac{\partial x^1}{\partial y^{j_1}}\dotsm\frac{\partial x^n}{\partial y^{j_n}}dy^{j_1}\wedge\dots\wedge dy^{j_n}=\det\left(\frac{\partial x^i}{\partial y^j}\right)dy^1\wedge\dots\wedge dy^n$$
That is, the two forms differ at each point by a factor of the Jacobian determinant at that point. This is the differential topology version of the change of basis formula for top forms in exterior algebras.
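As a concrete check of this determinant formula, we can compute the familiar polar-coordinate example $dx\wedge dy=r\,dr\wedge d\theta$ symbolically (this example is a choice of mine, not part of the general argument):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Transition from polar coordinates (r, theta) to Cartesian (x, y).
x = r*sp.cos(th)
y = r*sp.sin(th)

# Jacobian matrix (the partial x^i / partial y^j from the formula above).
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])

det = sp.simplify(J.det())
assert det == r   # so dx ^ dy = r dr ^ dtheta
```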
Just as for vector fields, we need a good condition to identify tensor fields in the wild. And the condition we will use is similar: if $T$ is a smooth tensor field of type $(r,s)$, then for any coordinate patch $(U,x)$ in the domain of $T$, we should be able to write out

$$T=\sum T^{i_1\dots i_r}_{j_1\dots j_s}\frac{\partial}{\partial x^{i_1}}\otimes\dots\otimes\frac{\partial}{\partial x^{i_r}}\otimes dx^{j_1}\otimes\dots\otimes dx^{j_s}$$

for some smooth functions $T^{i_1\dots i_r}_{j_1\dots j_s}$ on $U$, where the sum runs over all the indices. Conversely, this formula defines a smooth tensor field on $U$.
Indeed, we can find these coefficient functions by evaluation:

$$T^{i_1\dots i_r}_{j_1\dots j_s}=T\left(dx^{i_1},\dots,dx^{i_r},\frac{\partial}{\partial x^{j_1}},\dots,\frac{\partial}{\partial x^{j_s}}\right)$$
For, using this definition, if we plug these coordinate vector fields and coordinate covector fields into either the left or the right side of the expression above, we will get the same answer. Any vector or covector fields on $U$ can be written as linear combinations of these coordinate fields with smooth functions as coefficients, and the multilinear properties of tensors ensure that both sides get the same value no matter what fields we evaluate them on.
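At a single point this evaluation recipe is just linear algebra, and we can sketch it numerically for a $(0,2)$-tensor (random coefficients, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # coefficients of a (0,2)-tensor at a point

# The tensor as a bilinear map on R^3.
def T(u, v):
    return u @ A @ v

# Plugging in the coordinate basis recovers the coefficients: T(e_i, e_j) = A[i, j].
e = np.eye(3)
recovered = np.array([[T(e[i], e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(recovered, A)
```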
Similarly, if $\omega$ is a differential $k$-form and $(U,x)$ is a coordinate patch within its domain, then we can write

$$\omega=\sum\limits_{i_1<\dots<i_k}\omega_{i_1\dots i_k}dx^{i_1}\wedge\dots\wedge dx^{i_k}$$

for some smooth functions $\omega_{i_1\dots i_k}$ on $U$. The proof in this case is similar, following from the definition

$$\omega_{i_1\dots i_k}=\omega\left(\frac{\partial}{\partial x^{i_1}},\dots,\frac{\partial}{\partial x^{i_k}}\right)$$
In this case we can pick the indices to be strictly increasing because of the antisymmetry of the tensors.
We have a number of other constructions similar to the tangent bundle that will come in handy. These are all analogues of certain constructions we already know about on vector spaces. Let’s review these first.

First we have the space of tensors of type $(r,s)$ over a vector space $V$:

$$T^r_s(V)=\underbrace{V\otimes\dots\otimes V}_{r}\otimes\underbrace{V^*\otimes\dots\otimes V^*}_{s}$$

where we have $r$ copies of the vector space $V$ and $s$ copies of the dual space $V^*$. Vectors in $V$, then, are tensors of type $(1,0)$, while vectors in the dual space $V^*$ are tensors of type $(0,1)$.
We also know about the space $\Lambda^k(V)$ of antisymmetric tensors of rank $k$ over a vector space $V$. In particular, we’re most interested in carrying this construction out over the dual space: $\Lambda^k(V^*)$. And of course we can take the direct sum of these spaces over all $k$ to get the exterior algebra $\Lambda(V^*)$.
Now, we will take these constructions and apply them to the tangent spaces to a manifold $M$. We define the bundle of tensors of type $(r,s)$ over $M$:

$$T^r_s(M)=\coprod\limits_{p\in M}T^r_s(\mathcal{T}_pM)$$
the “exterior $k$-bundle” over $M$:

$$\Lambda^k(\mathcal{T}^*M)=\coprod\limits_{p\in M}\Lambda^k(\mathcal{T}^*_pM)$$
and the exterior algebra bundle over $M$:

$$\Lambda(\mathcal{T}^*M)=\coprod\limits_{p\in M}\Lambda(\mathcal{T}^*_pM)$$
The trick here is that for each of these constructions, if we have a basis of $V$ we automatically get a basis of each space $T^r_s(V)$, $\Lambda^k(V^*)$, and $\Lambda(V^*)$. If we start with a coordinate patch $(U,x)$ on $M$, we get a basis of $\mathcal{T}_pM$ for each $p\in U$. Then, just as we did with the tangent bundle and the cotangent bundle, we can come up with a coordinate patch “induced by $(U,x)$” on each of our new bundles. In this way, we can turn them from disjoint unions of vector spaces into manifolds in their own right, each with a smooth projection down to $M$.
Now we can define a “tensor field of type $(r,s)$” on an open region $U\subseteq M$ as a section of the projection $\pi:T^r_s(M)\to M$. That is, it’s a smooth map $T:U\to T^r_s(M)$ such that $\pi\circ T=1_U$. Similarly, we define a “differential $k$-form” over $U$ to be a section of the projection $\pi:\Lambda^k(\mathcal{T}^*M)\to M$.
As a nontrivial example of a foliation, I present the “Hopf fibration”. The name I won’t really explain quite yet, but we’ll see it’s a one-dimensional foliation of the three-dimensional sphere.
So, first let’s get our hands on the three-sphere $S^3$. This is by definition the collection of vectors of length $1$ in $\mathbb{R}^4$, but I want to consider this definition slightly differently. Since the complex plane $\mathbb{C}$ is isomorphic to the real plane $\mathbb{R}^2$ as a real vector space, we find the isomorphism $\mathbb{C}^2\cong\mathbb{R}^4$. Now we use the inner product on $\mathbb{C}^2$ to define $S^3$ as the collection of vectors $(z_1,z_2)\in\mathbb{C}^2$ with $\vert z_1\vert^2+\vert z_2\vert^2=1$.
Now for each $\alpha\in\mathbb{R}$ we can define a foliation. The leaf through the point $(z_1,z_2)\in S^3$ is the curve $t\mapsto\left(e^{2\pi it}z_1,e^{2\pi i\alpha t}z_2\right)$. Since multiplying by $e^{2\pi it}$ and $e^{2\pi i\alpha t}$ doesn’t change the norm of a complex number, this whole curve is still contained within $S^3$. Every point in $S^3$ is clearly contained in some such curve, and two points being contained within the same curve is an equivalence relation: any point is in the same curve as itself; if $q$ is in the curve through $p$, then $p$ is in the curve through $q$; and if $q$ is in the curve through $p$ and $r$ is in the curve through $q$, then $r$ is in the curve through $p$. This shows that the curves do indeed partition $S^3$.
Now we need to show that the tangent spaces to the leaves provide a distribution on $S^3$. Since this will be a one-dimensional distribution, we just need to find an everywhere-nonzero vector field tangent to the leaves, and the derivative of the curve through each point will do nicely. At $(z_1,z_2)$ we get the derivative

$$\frac{d}{dt}\bigg\vert_{t=0}\left(e^{2\pi it}z_1,e^{2\pi i\alpha t}z_2\right)=\left(2\pi iz_1,2\pi i\alpha z_2\right)$$

which is never zero, since $(z_1,z_2)\neq(0,0)$ on $S^3$.
It should be clear that this defines a smooth vector field over all of $S^3$, though it may not be clear from the formulas that these vectors are actually tangent to $S^3$. To see this we can either (messily) convert back to real coordinates, or we can think geometrically: the tangent vector to a curve lying within a submanifold must be tangent to that submanifold.
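We can also sketch the tangency check numerically (the sample point and the value of $\alpha$ below are arbitrary choices): the real inner product of the velocity vector with the position vector vanishes, which is exactly the condition for tangency to the sphere.

```python
import numpy as np

alpha = np.sqrt(2.0)  # any parameter works; alpha = 1 gives the Hopf fibration

# A sample point of S^3 in C^2: |z1|^2 + |z2|^2 = 1.
z = np.array([0.6 + 0.0j, 0.0 + 0.8j])
assert np.isclose(np.sum(np.abs(z)**2), 1.0)

# Velocity at t = 0 of the leaf t -> (e^{2 pi i t} z1, e^{2 pi i alpha t} z2).
v = 2j * np.pi * np.array([z[0], alpha * z[1]])

# Tangency to the sphere: the real part of <z, v> vanishes.
assert np.isclose(np.real(np.vdot(z, v)), 0.0)
```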
The Hopf fibration is what results when we pick $\alpha=1$, but the case of irrational $\alpha$ is very interesting. In this case we find that some leaves curve around and meet themselves, forming circles, while others never meet themselves, forming homeomorphic images of the whole real line. What this tells us is that not all the leaves of a foliation have to look like each other.
To see this, we try to solve the equations

$$\begin{aligned}e^{2\pi it}z_1&=z_1\\e^{2\pi i\alpha t}z_2&=z_2\end{aligned}$$

for $t\neq0$. The first equation tells us that either $z_1=0$ or $e^{2\pi it}=1$. In the first case, we simply have the circle $t\mapsto\left(0,e^{2\pi i\alpha t}z_2\right)$. In the second case, $t$ must be an integer, and the second equation tells us that either $z_2=0$ or $e^{2\pi i\alpha t}=1$. The case where $z_2=0$ is similar to the case $z_1=0$, but if neither coordinate is zero then we find that $\alpha t$ must also be an integer, so $\alpha$ must be rational. But we assumed that $\alpha$ is irrational, so we get no nontrivial solutions for $t$ here.
Since the curves don’t change the length of either component, we can get other examples of foliations. For instance, if we let $\vert z_1\vert=r_1$ and $\vert z_2\vert=r_2$ with $r_1^2+r_2^2=1$, then the curve through $(z_1,z_2)$ will stay on the torus where each circle has radius $r_i$ in its copy of $\mathbb{C}$. Looking at all the curves on this surface gives a foliation of the torus. If $\alpha$ is irrational, the curve winds around and around the donut-shaped surface, never quite coming back to touch itself, but eventually coming arbitrarily close to any given point on the surface.
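A quick numerical sketch of this dichotomy (the choices of $\alpha$ and base point are mine, for illustration): for rational $\alpha$ the leaf closes up, while for irrational $\alpha$ the second factor never returns to its starting value at the times when the first one does.

```python
import numpy as np

def leaf_point(t, alpha, z):
    z1, z2 = z
    return np.array([np.exp(2j*np.pi*t)*z1, np.exp(2j*np.pi*alpha*t)*z2])

z0 = np.array([1/np.sqrt(2) + 0j, 1/np.sqrt(2) + 0j])  # a point with r1 = r2

# Rational alpha: the leaf closes up into a circle after finitely many turns.
assert np.allclose(leaf_point(2.0, 0.5, z0), z0)

# Irrational alpha: at integer times the first factor has returned, but the
# second factor never has, so the leaf never closes.
gaps = [abs(np.exp(2j*np.pi*np.sqrt(2)*n) - 1) for n in range(1, 200)]
assert min(gaps) > 1e-3
```

The gaps do shrink as the winding comes arbitrarily close to closing, but they never reach zero.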
A $k$-dimensional “foliation” $\mathcal{F}$ of an $n$-dimensional manifold $M$ is a partition of $M$ into $k$-dimensional connected immersed submanifolds, which are called the “leaves” of the foliation. We also ask that the tangent spaces to the leaves define a $k$-dimensional distribution $\Delta$ on $M$, which we say is “induced” by $\mathcal{F}$, and that any connected integral submanifold of $\Delta$ should be contained in a leaf of $\mathcal{F}$. It makes sense, then, that we should call a leaf of $\mathcal{F}$ a “maximal integral submanifold” of $\Delta$.
One obvious family of foliations arises as follows: let $M=\mathbb{R}^n$, and pick some $k$-dimensional vector subspace $V\subseteq\mathbb{R}^n$. The quotient space $\mathbb{R}^n/V$ consists of all the $k$-dimensional affine subspaces “parallel” to $V$: if we pick a representative point $p$ then $p+V$ is one of the cosets in $\mathbb{R}^n/V$. The decomposition of $\mathbb{R}^n$ into the collection of these cosets $p+V$ is a foliation. At any point $p$ the induced distribution is the subspace $\Delta_p\subseteq\mathcal{T}_p\mathbb{R}^n$ which is the image of $V$ under the standard identification of $\mathbb{R}^n$ with $\mathcal{T}_p\mathbb{R}^n$.
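As a concrete low-dimensional instance (my own choice of slope $m$), take $n=2$ and $k=1$:

$$V=\{(t,mt):t\in\mathbb{R}\}\subseteq\mathbb{R}^2\qquad L_c=\{(t,mt+c):t\in\mathbb{R}\}$$

The leaves $L_c$ are the parallel lines of slope $m$, and at every point the induced distribution is spanned by the single vector field $\frac{\partial}{\partial x}+m\frac{\partial}{\partial y}$.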
Now we have another theorem of Frobenius (prolific guy, wasn’t he?) about foliations: every integrable $k$-dimensional distribution $\Delta$ on a manifold $M$ comes from a foliation of $M$.
Around any point $p\in M$ we know we can find some chart $(U,x)$ so that the slices with $x^{k+1}$ through $x^n$ held constant are all integral submanifolds of $\Delta$. By the assumption that $M$ is second-countable we can find a countable cover of $M$ consisting of these patches.
We let $\mathcal{S}$ be the collection of all the slices from all the patches in this cover, and define an equivalence relation on $\mathcal{S}$. We say that $S\sim S'$ if there is a finite sequence of slices $S=S_0,S_1,\dots,S_m=S'$ so that each $S_{i-1}\cap S_i\neq\emptyset$. Since each slice is a connected, second-countable manifold, it can only intersect another patch in countably many slices, and from here it’s straightforward to show that each equivalence class of $\sim$ can only contain countably many slices. Taking the (countable) union of the slices in each equivalence class gives us a bunch of immersed connected integral manifolds of $\Delta$, and any two of these are either equal or disjoint, thus giving us a partition. And since any connected integral manifold of $\Delta$ must align with one of the slices in any of our coordinate patches it meets, it must be contained in one of these leaves. Thus we have a foliation, which induces $\Delta$.