And so, like any chain map, the Lie derivative defines homomorphisms on cohomology: $L_X\colon H^k(M)\to H^k(M)$. But which homomorphism does it define?
Well, it turns out that Cartan’s formula comes in handy here as well, for it’s exactly what we need to say that the Lie derivative is null-homotopic, with the interior product $\iota_X$ serving as the chain homotopy. And like any null-homotopic map, it defines the zero map on cohomology. That is, if we take some closed $k$-form $\omega$, which defines a cohomology class in $H^k(M)$ — any cohomology class has such a representative $k$-form — and hit it with $L_X$, the result is an exact $k$-form.
Actually, this shouldn’t be very surprising, considering Cartan’s formula. Indeed, we can calculate directly

$$L_X\omega = d(\iota_X\omega) + \iota_X(d\omega) = d(\iota_X\omega)$$

since by assumption $\omega$ is closed, which means that $d\omega = 0$.
It starts with the observation that for a function $f$ and a vector field $X$, the Lie derivative is $L_Xf = Xf$, and the exterior derivative evaluated at $X$ is $\iota_X(df) = df(X) = Xf$. That is, $L_X = \iota_X\circ d$ on functions.
Next we consider the differential $df$ of a function. If we apply $d$ to it, the nilpotency of the exterior derivative tells us that we automatically get zero. On the other hand, if we apply $L_X$, we get $L_X(df)$, which it turns out is $d(Xf)$. To see this, we calculate

$$[L_X(df)](Y) = X\bigl(df(Y)\bigr) - df\bigl([X,Y]\bigr) = X(Yf) - [X,Y]f = Y(Xf)$$

just as if we took $Xf$ and applied $d$ to it.
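If you like, you can watch this identity happen in coordinates. Here’s a quick sketch in Python with sympy on $\mathbb{R}^2$; it takes for granted the coordinate formula $(L_X\omega)_i = X^j\partial_j\omega_i + \omega_j\partial_iX^j$ for the Lie derivative of a $1$-form, which isn’t derived in this post:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
f = sp.Function('f')(x, y)          # an arbitrary smooth function
a = sp.Function('a')(x, y)          # components of the vector field X = a dx + b dy
b = sp.Function('b')(x, y)
X = [a, b]

# df has components (df/dx, df/dy)
df = [sp.diff(f, c) for c in coords]

# Lie derivative of a 1-form in coordinates: (L_X w)_i = X^j d_j w_i + w_j d_i X^j
LX_df = [sum(X[j]*sp.diff(df[i], coords[j]) + df[j]*sp.diff(X[j], coords[i])
             for j in range(2)) for i in range(2)]

# d(Xf) has components d_i(X^j d_j f)
Xf = sum(X[j]*sp.diff(f, coords[j]) for j in range(2))
d_Xf = [sp.diff(Xf, c) for c in coords]

# the two 1-forms agree component by component
print([sp.simplify(LX_df[i] - d_Xf[i]) for i in range(2)])  # [0, 0]
```

Of course this is no substitute for the invariant calculation above, but it’s reassuring to see the components line up.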
So on exact $1$-forms, $\iota_X\circ d$ gives zero while $d\circ\iota_X$ gives the Lie derivative $L_X$. And on functions $\iota_X\circ d$ gives $L_X$, while $d\circ\iota_X$ gives zero. In both cases we find that

$$L_X = d\circ\iota_X + \iota_X\circ d$$

and in fact this holds for all differential forms, which follows from these two base cases by a straightforward induction. This is Cartan’s formula, and it’s the natural extension to all differential forms of the basic identity $L_Xf = \iota_X(df)$ on functions.
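We can also check Cartan’s formula symbolically, at least for $1$-forms on the plane. The sketch below (sympy again, and again assuming the standard coordinate formulas for $L_X$ and for $d$ on $\mathbb{R}^2$, which aren’t derived here) verifies that $d\circ\iota_X + \iota_X\circ d$ agrees with the Lie derivative on a generic $1$-form:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
P, Q = sp.Function('P')(x, y), sp.Function('Q')(x, y)   # omega = P dx + Q dy
a, b = sp.Function('a')(x, y), sp.Function('b')(x, y)   # X = a dx + b dy

# i_X omega is the function omega(X)
iX_w = a*P + b*Q

# d of that function has components (d_x omega(X), d_y omega(X))
d_iX_w = [sp.diff(iX_w, x), sp.diff(iX_w, y)]

# d omega = (d_x Q - d_y P) dx^dy, so i_X(d omega) = (d_x Q - d_y P)(a dy - b dx)
curl = sp.diff(Q, x) - sp.diff(P, y)
iX_dw = [-b*curl, a*curl]

# Cartan's formula: L_X omega = d(i_X omega) + i_X(d omega)
cartan = [d_iX_w[i] + iX_dw[i] for i in range(2)]

# Lie derivative in coordinates: (L_X w)_i = X^j d_j w_i + w_j d_i X^j
w, X = [P, Q], [a, b]
LX_w = [sum(X[j]*sp.diff(w[i], coords[j]) + w[j]*sp.diff(X[j], coords[i])
            for j in range(2)) for i in range(2)]

print([sp.simplify(cartan[i] - LX_w[i]) for i in range(2)])  # [0, 0]
```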
We have yet another operation on the algebra of differential forms: the “interior product”. Given a vector field $X$ and a $k$-form $\omega$, the interior product $\iota_X\omega$ is the $(k-1)$-form defined by

$$[\iota_X\omega](Y_1,\dots,Y_{k-1}) = \omega(X,Y_1,\dots,Y_{k-1})$$

That is, we just take the vector field $X$ and stick it into the first “slot” of a $k$-form. We extend this to functions by just defining $\iota_Xf = 0$.
Two interior products anticommute: $\iota_X\iota_Y = -\iota_Y\iota_X$, which follows easily from the antisymmetry of differential forms. Each one is also clearly linear, and in fact $\iota_X$ is also a graded derivation of $\Omega(M)$ with degree $-1$:

$$\iota_X(\alpha\wedge\beta) = (\iota_X\alpha)\wedge\beta + (-1)^p\alpha\wedge(\iota_X\beta)$$

where $p$ is the degree of $\alpha$. This can be shown by reducing to the case where $\alpha$ and $\beta$ are wedge products of $1$-forms, but rather than go through all that tedious calculation we can think about it like this: sticking $X$ into a slot of $\alpha\wedge\beta$ means either sticking it into a slot of $\alpha$ or into one of $\beta$. In the first case we just get $(\iota_X\alpha)\wedge\beta$, while in the second we have to “move the slot” through all of $\alpha$, which incurs a sign of $(-1)^p$, as asserted.
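For the skeptical, here’s a symbolic check of this graded Leibniz rule in a low-degree case: a generic $2$-form $\alpha$ and $1$-form $\beta$ on $\mathbb{R}^3$, with the wedge product implemented by the sum-over-shuffles formula. The helper names (`wedge`, `interior`) are just for this sketch:

```python
import itertools
import sympy as sp
from sympy.combinatorics import Permutation

def wedge(alpha, p, beta, q):
    """(alpha^beta)(v_1,...,v_{p+q}): sum over (p,q)-shuffles with signs."""
    def result(*vs):
        total = sp.Integer(0)
        for perm in itertools.permutations(range(p + q)):
            # keep only (p,q)-shuffles: each block stays in increasing order
            if all(perm[i] < perm[i + 1] for i in range(p - 1)) and \
               all(perm[i] < perm[i + 1] for i in range(p, p + q - 1)):
                sign = Permutation(list(perm)).signature()
                total += sign * alpha(*(vs[i] for i in perm[:p])) \
                              * beta(*(vs[i] for i in perm[p:]))
        return total
    return result

def interior(X, omega):
    """i_X omega: stick X into the first slot of omega."""
    return lambda *vs: omega(X, *vs)

# a generic 2-form (antisymmetric bilinear) and a generic 1-form on R^3
M = sp.Matrix(3, 3, sp.symbols('a:3:3'))
A = (M - M.T) / 2                       # antisymmetrized coefficient matrix
B = sp.Matrix(sp.symbols('b0:3'))

alpha = lambda u, v: (u.T * A * v)[0]
beta = lambda u: B.dot(u)

X, Y, Z = (sp.Matrix(sp.symbols(c + '0:3')) for c in 'xyz')

# i_X(alpha^beta) versus (i_X alpha)^beta + (-1)^2 alpha^(i_X beta), on (Y, Z)
lhs = interior(X, wedge(alpha, 2, beta, 1))(Y, Z)
rhs = wedge(interior(X, alpha), 1, beta, 1)(Y, Z) \
    + wedge(alpha, 2, lambda: beta(X), 0)(Y, Z)

print(sp.expand(lhs - rhs))  # 0
```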
It turns out that the de Rham cohomology spaces $H^k(M)$ are all contravariant functors on the category of smooth manifolds. We’ve even seen how this construction acts on smooth maps: by pullback. All we really need to do is check that it plays nicely with compositions.
So let’s say we have smooth maps $f\colon M_1\to M_2$ and $g\colon M_2\to M_3$, which give rise to pullbacks $f^*\colon\Omega(M_2)\to\Omega(M_1)$ and $g^*\colon\Omega(M_3)\to\Omega(M_2)$. All we really have to do is show that $(g\circ f)^* = f^*\circ g^*$, because we already know that passing from chain maps to the induced maps on homology is functorial.
As usual, we calculate: for a $k$-form $\omega$ on $M_3$ and tangent vectors $v_1,\dots,v_k$ on $M_1$,

$$[(g\circ f)^*\omega](v_1,\dots,v_k) = \omega\bigl((g\circ f)_*v_1,\dots,(g\circ f)_*v_k\bigr) = \omega\bigl(g_*f_*v_1,\dots,g_*f_*v_k\bigr) = [g^*\omega](f_*v_1,\dots,f_*v_k) = \bigl[f^*(g^*\omega)\bigr](v_1,\dots,v_k)$$

as asserted. And so we get maps $f^*\colon H^k(M_2)\to H^k(M_1)$ and $g^*\colon H^k(M_3)\to H^k(M_2)$ which compose appropriately: $(g\circ f)^* = f^*\circ g^*$.
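In fact you can watch this composition law happen in sympy, at least for one-dimensional “manifolds”, where a $1$-form is just a coefficient function times the coordinate differential and pulling back is the substitution rule from calculus. The particular form and maps below are arbitrary choices, just to have something concrete to compute with:

```python
import sympy as sp

s, t, u = sp.symbols('s t u')

# a 1-form (coeff) dv on a line pulls back along v = phi to (coeff after subs) * phi'
def pullback(coeff, v, phi, new_var):
    return coeff.subs(v, phi) * sp.diff(phi, new_var)

omega = sp.exp(u)          # omega = e^u du on M3 (an arbitrary choice)
g_map = s**2 + s           # g: M2 -> M3, u = g(s)
f_map = sp.sin(t)          # f: M1 -> M2, s = f(t)

g_star = pullback(omega, u, g_map, s)                  # g* omega, a 1-form on M2
f_g_star = pullback(g_star, s, f_map, t)               # f*(g* omega) on M1
gf_star = pullback(omega, u, g_map.subs(s, f_map), t)  # (g o f)* omega directly

print(sp.simplify(f_g_star - gf_star))  # 0
```

The agreement is just the chain rule in disguise, which is exactly the point.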
We’ve seen that if $f\colon M\to N$ is a smooth map of manifolds then we can pull back differential forms, and that this pullback $f^*\colon\Omega(N)\to\Omega(M)$ is a degree-zero homomorphism of graded algebras. But now that we’ve seen that $\Omega(M)$ and $\Omega(N)$ are differential graded algebras, it would be nice if the pullback respected this structure as well. And luckily enough, it does!
Specifically, the pullback commutes with the exterior derivatives on $\Omega(M)$ and $\Omega(N)$, both of which are (somewhat unfortunately) written as $d$. If we temporarily write them as $d_M$ and $d_N$, then we can write our assertion as

$$f^*(d_N\omega) = d_M(f^*\omega)$$

for all $k$-forms $\omega$ on $N$.
First, we show that this is true for a function $\phi$. If we pick a test vector field $X$, then we can check

$$[f^*(d_N\phi)](X) = (d_N\phi)(f_*X) = (f_*X)\phi = X(\phi\circ f) = X(f^*\phi) = [d_M(f^*\phi)](X)$$
For other $k$-forms it will make life easier to write $\omega$ out as a sum of terms of the form

$$\alpha\,d_N\phi_1\wedge\dots\wedge d_N\phi_k$$

for functions $\alpha,\phi_1,\dots,\phi_k$; by linearity it’s enough to handle one such term. Then we can write the left side of our assertion as

$$f^*\bigl(d_N(\alpha\,d_N\phi_1\wedge\dots\wedge d_N\phi_k)\bigr) = f^*\bigl(d_N\alpha\wedge d_N\phi_1\wedge\dots\wedge d_N\phi_k\bigr) = d_M(f^*\alpha)\wedge d_M(f^*\phi_1)\wedge\dots\wedge d_M(f^*\phi_k)$$

using the fact that the pullback is an algebra homomorphism along with the function case above, and the right side as

$$d_M\bigl(f^*(\alpha\,d_N\phi_1\wedge\dots\wedge d_N\phi_k)\bigr) = d_M\bigl(f^*\alpha\,d_M(f^*\phi_1)\wedge\dots\wedge d_M(f^*\phi_k)\bigr) = d_M(f^*\alpha)\wedge d_M(f^*\phi_1)\wedge\dots\wedge d_M(f^*\phi_k)$$

So these really are the same.
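Here’s the function case worked out concretely in sympy, for an arbitrarily chosen smooth map $f\colon\mathbb{R}^2\to\mathbb{R}^2$ and function $\phi$; the `pullback_1form` helper is just the usual coordinate formula for pulling back a $1$-form, stated here without derivation:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# an arbitrary concrete smooth map f: R^2 -> R^2, (x, y) |-> (u, v)
u_map, v_map = x*y, x + y**2
# an arbitrary concrete function phi on the target
phi = sp.sin(u*v)

def pullback_fn(h):
    """Pull back a function: f*h = h composed with f."""
    return h.subs({u: u_map, v: v_map})

def pullback_1form(P, Q):
    """Pull back the 1-form P du + Q dv; returns its (dx, dy) components."""
    Pf, Qf = pullback_fn(P), pullback_fn(Q)
    return (Pf*sp.diff(u_map, x) + Qf*sp.diff(v_map, x),
            Pf*sp.diff(u_map, y) + Qf*sp.diff(v_map, y))

# f*(d_N phi): pull back the 1-form (dphi/du) du + (dphi/dv) dv
lhs = pullback_1form(sp.diff(phi, u), sp.diff(phi, v))
# d_M(f* phi): differentiate the pulled-back function
rhs = tuple(sp.diff(pullback_fn(phi), c) for c in (x, y))

print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])  # [0, 0]
```

Unwound, the agreement is again the multivariable chain rule.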
The useful thing about this fact that pullbacks commute with the exterior derivative is that it makes the pullback $f^*$ into a chain map between the chain complexes of $N$ and $M$. And then immediately we get homomorphisms $H^k(N)\to H^k(M)$, which we also write as $f^*$.
If you want, you can walk the diagrams yourself to verify that a cohomology class in $H^k(N)$ is sent to a unique, well-defined cohomology class in $H^k(M)$, but it’d probably be more worth it to go back to read over the general proof that chain maps give homomorphisms on homology.
The really important thing about the exterior derivative is that it makes the algebra $\Omega(M)$ of differential forms into a “differential graded algebra”. We had the structure of a graded algebra before, but now we have a degree-one derivation $d$ whose square is zero. And as long as we want it to agree with the differential on functions, there’s only one way to do it.
Why does this matter? Well, the algebra $\Omega(M)$ is the direct sum of its grades — the spaces $\Omega^k(M)$ — and for each one we have a map $d\colon\Omega^k(M)\to\Omega^{k+1}(M)$. We can even write them out in a row:

$$0\to\Omega^0(M)\xrightarrow{d}\Omega^1(M)\xrightarrow{d}\Omega^2(M)\xrightarrow{d}\dots\xrightarrow{d}\Omega^n(M)\to 0$$

where we have padded out the sequence with $0$ — the trivial space — in either direction. This is just like a chain complex, except the arrows go backwards! Instead of the indices counting down, they count up. We can deal with this by thinking of these as negative numbers, but really it doesn’t matter.
Anyway, now we can bring all our homological machinery to bear! We say that a differential form $\omega$ is “closed” if $d\omega = 0$, and we write the subspace of closed $k$-forms as $Z^k(M)$. We say that $\omega$ is “exact” if there is some $(k-1)$-form $\alpha$ with $\omega = d\alpha$, and we write the subspace of exact $k$-forms as $B^k(M)$. The fact that $d(d\alpha) = 0$ implies that $B^k(M)\subseteq Z^k(M)$.
And now we can define the $k$-th “de Rham cohomology space” $H^k(M) = Z^k(M)/B^k(M)$. The cohomology space measures the extent to which it is possible for a $k$-form on $M$ to be closed without being exact. If $H^k(M) = 0$, then closed $k$-forms are all exact. And it’s roughly accurate to say that the rank of $H^k(M)$ counts the “number of independent ways” to set up a closed-but-not-exact $k$-form.
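This quotient makes sense for any cochain complex, not just $\Omega(M)$, and in finite dimensions the cohomology dimensions are a simple rank computation: $\dim H^k = \dim\ker d_k - \dim\operatorname{im}d_{k-1}$. Here’s a toy example in Python with numpy, using a simplicial rather than de Rham complex (a combinatorial circle made of three vertices and three edges, an arbitrary illustrative choice):

```python
import numpy as np

# A toy cochain complex 0 -> C0 -> C1 -> 0: cochains on a triangle boundary
# (3 vertices, 3 edges). d0 sends a function on vertices to its differences
# across edges; there is no d1, i.e. d1 = 0, so every 1-cochain is closed.
d0 = np.array([[-1,  1,  0],
               [ 0, -1,  1],
               [ 1,  0, -1]])   # rows: edges, columns: vertices

rank_d0 = np.linalg.matrix_rank(d0)

# dim H^0 = dim ker d0 (closed 0-cochains; nothing is exact in degree 0)
dim_H0 = 3 - rank_d0
# dim H^1 = dim ker d1 - dim im d0 = 3 - rank d0
dim_H1 = 3 - rank_d0

print(dim_H0, dim_H1)   # 1 1 -- one connected component, one independent loop
```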
The really amazing thing, which we will come to understand later, is that this actually tells us a lot about the topology of $M$ itself: combinatorial information about the topology of a manifold is encoded into the algebraic structure of its sheaf of differential forms.
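Here’s the classic first example of this phenomenon, checked in sympy: the “angle form” on the punctured plane is closed, but its integral around the unit circle is $2\pi$ rather than $0$, which an exact form could never manage (the integral of $df$ around any closed loop vanishes). So it represents a nonzero class in $H^1$, detecting the hole at the origin:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# the "angle form" on the punctured plane R^2 minus the origin
P = -y / (x**2 + y**2)      # omega = P dx + Q dy
Q = x / (x**2 + y**2)

# closed: d omega = (dQ/dx - dP/dy) dx^dy vanishes
closed_check = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(closed_check)   # 0

# not exact: integrate around the unit circle x = cos t, y = sin t
pull = (P*sp.diff(sp.cos(t), t) + Q*sp.diff(sp.sin(t), t)).subs(
    [(x, sp.cos(t)), (y, sp.sin(t))])
loop_integral = sp.integrate(sp.simplify(pull), (t, 0, 2*sp.pi))
print(loop_integral)  # 2*pi
```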
It turns out that our exterior derivative is uniquely characterized by some of its properties; it is the only derivation of the algebra $\Omega(M)$ of degree $1$ whose square is zero and which gives the differential on functions. That is, once we specify that $d$ is linear, that $d(\alpha\wedge\beta) = d\alpha\wedge\beta + (-1)^p\alpha\wedge d\beta$ if $\alpha$ is a $p$-form, that $d(d\omega) = 0$, and that $df(X) = Xf$ for functions $f$, then there is no other choice but the exterior derivative we already defined.
First, we want to show that these properties imply another one that’s sort of analytic in character: if $\omega = \eta$ in a neighborhood of $p$ then $d\omega(p) = d\eta(p)$. Equivalently (given linearity), if $\omega = 0$ in a neighborhood $U$ of $p$ then $d\omega(p) = 0$. But then we can pick a bump function $\phi$ which is $1$ on a neighborhood of $p$ and $0$ outside of $U$. Then we have $\phi\omega = 0$ everywhere and

$$0 = d(\phi\omega)(p) = d\phi(p)\wedge\omega(p) + \phi(p)\,d\omega(p) = d\omega(p)$$

since $\omega(p) = 0$ and $\phi(p) = 1$.
And so we may as well throw this property onto the pile. Notice, though, how this condition is different from the way we said that tensor fields live locally. In this case we need to know that $\omega$ vanishes in a whole neighborhood of $p$, not just at $p$ itself.
Next, we show that these conditions are sufficient for determining the value of $d\omega$ for any $k$-form $\omega$. It will help us to pick a local coordinate patch $(U,x)$ around a point $p$, and then we’ll show that the result doesn’t actually depend on this choice. Picking a coordinate patch gives us a canonical basis of the space of $k$-forms over $U$, indexed by “multisets” $I = \{i_1 < \dots < i_k\}$. Any $k$-form $\omega$ over $U$ can be written as

$$\omega = \sum_I \alpha_I\,dx^{i_1}\wedge\dots\wedge dx^{i_k}$$

and so we can calculate

$$d\omega = \sum_I d\alpha_I\wedge dx^{i_1}\wedge\dots\wedge dx^{i_k}$$

where we use the fact that $d(dx^i) = 0$.
Now if $(V,y)$ is a different coordinate patch around $p$ then we get a different decomposition

$$\omega = \sum_J \beta_J\,dy^{j_1}\wedge\dots\wedge dy^{j_k}$$

but both decompositions agree on the intersection $U\cap V$, which is a neighborhood of $p$, and thus when we apply $d$ to them we get the same value at $p$, by the “analytic” property we showed above. Thus the value of $d\omega$ at $p$ only depends on $\omega$ itself (and the point $p$), and not on the choice of coordinates we used to help with the evaluation. And so the exterior derivative is uniquely determined by the four given properties.
One extremely important property of the exterior derivative is that $d(d\omega) = 0$ for all exterior forms $\omega$. This is only slightly less messy to prove than the fact that $d$ is a derivation. But since it’s so extremely important, we soldier onward! If $\omega$ is a $p$-form we calculate

$$\begin{aligned}[d(d\omega)](X_0,\dots,X_{p+1}) ={}&\sum_{i=0}^{p+1}(-1)^iX_i\Bigl([d\omega](X_0,\dots,\hat{X_i},\dots,X_{p+1})\Bigr)\\{}+{}&\sum_{0\le i<j\le p+1}(-1)^{i+j}[d\omega]\bigl([X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1}\bigr)\end{aligned}$$
We now expand out the $d\omega$ on the first line. First we extract an $X_j$ from the list of vector fields. If $j<i$, then we get a term like

$$(-1)^{i+j}X_iX_j\Bigl(\omega(X_0,\dots,\hat{X_j},\dots,\hat{X_i},\dots,X_{p+1})\Bigr)$$

while if $j>i$ then we get a term like

$$(-1)^{i+j-1}X_iX_j\Bigl(\omega(X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1})\Bigr)$$

If we put these together, we get the sum over all $i<j$ of

$$(-1)^{i+j+1}[X_i,X_j]\Bigl(\omega(X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1})\Bigr)$$
We continue expanding the first line by picking out two vector fields $X_j$ and $X_k$ to form their bracket. There are three ways of doing this — $j<k<i$, $j<i<k$, and $i<j<k$ — which give us terms like

$$(-1)^{i+j+k}X_i\Bigl(\omega\bigl([X_j,X_k],X_0,\dots,\hat{X_j},\dots,\hat{X_k},\dots,\hat{X_i},\dots,X_{p+1}\bigr)\Bigr)$$

$$(-1)^{i+j+k-1}X_i\Bigl(\omega\bigl([X_j,X_k],X_0,\dots,\hat{X_j},\dots,\hat{X_i},\dots,\hat{X_k},\dots,X_{p+1}\bigr)\Bigr)$$

$$(-1)^{i+j+k}X_i\Bigl(\omega\bigl([X_j,X_k],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,\hat{X_k},\dots,X_{p+1}\bigr)\Bigr)$$

respectively.
Next we can start expanding the second line. First we pull out the first vector field in the list — the bracket $[X_i,X_j]$ — to get terms like

$$(-1)^{i+j}[X_i,X_j]\Bigl(\omega(X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1})\Bigr)$$

which cancel out against the first group of terms from the expansion of the first line! Progress!
We continue by pulling out an extra vector field $X_k$ from the second line, getting three collections of terms, according to whether $k<i<j$, $i<k<j$, or $i<j<k$:

$$(-1)^{i+j+k+1}X_k\Bigl(\omega\bigl([X_i,X_j],X_0,\dots,\hat{X_k},\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1}\bigr)\Bigr)$$

$$(-1)^{i+j+k}X_k\Bigl(\omega\bigl([X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_k},\dots,\hat{X_j},\dots,X_{p+1}\bigr)\Bigr)$$

$$(-1)^{i+j+k+1}X_k\Bigl(\omega\bigl([X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,\hat{X_k},\dots,X_{p+1}\bigr)\Bigr)$$

It’s less obvious, but each of these groups of terms cancels out one of the groups from the second half of the expansion of the first line! Our sum has reached zero!
Unfortunately, we’re not quite done. We have to finish expanding the second line, and this is where things will get really ugly. We have to pull two more vector fields $X_k$ and $X_l$ out; first we’ll handle the easy case where we avoid the $[X_i,X_j]$ term, and we get a whopping six cases, according to how the pair $k<l$ interleaves with the pair $i<j$. For instance, the case $k<l<i<j$ gives terms like

$$(-1)^{i+j+k+l}\,\omega\bigl([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X_k},\dots,\hat{X_l},\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1}\bigr)$$

while the case $i<j<k<l$ gives terms like

$$(-1)^{i+j+k+l}\,\omega\bigl([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,\hat{X_k},\dots,\hat{X_l},\dots,X_{p+1}\bigr)$$

In each group, we can swap the $[X_i,X_j]$ term with the $[X_k,X_l]$ term to get a different group. These two groups always have the same leading sign, but the antisymmetry of $\omega$ means that this swap brings another negative sign with it, and thus all these terms cancel out with each other!
Finally, we have the dreaded case where we pull the $[X_i,X_j]$ term and one other vector field $X_k$. Here we mercifully have only three cases — $k<i<j$, $i<k<j$, and $i<j<k$:

$$(-1)^{i+j+k+1}\,\omega\bigl(\bigl[[X_i,X_j],X_k\bigr],X_0,\dots,\hat{X_k},\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_{p+1}\bigr)$$

$$(-1)^{i+j+k}\,\omega\bigl(\bigl[[X_i,X_j],X_k\bigr],X_0,\dots,\hat{X_i},\dots,\hat{X_k},\dots,\hat{X_j},\dots,X_{p+1}\bigr)$$

$$(-1)^{i+j+k+1}\,\omega\bigl(\bigl[[X_i,X_j],X_k\bigr],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,\hat{X_k},\dots,X_{p+1}\bigr)$$

Here we can choose to re-index the three vector fields so we always have $i<j<k$. Adding all three terms up we get

$$(-1)^{i+j+k+1}\,\omega\Bigl(\bigl[[X_i,X_j],X_k\bigr] - \bigl[[X_i,X_k],X_j\bigr] + \bigl[[X_j,X_k],X_i\bigr],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,\hat{X_k},\dots,X_{p+1}\Bigr)$$

Taking the linear combination of double brackets out to examine it on its own we find

$$\bigl[[X_i,X_j],X_k\bigr] - \bigl[[X_i,X_k],X_j\bigr] + \bigl[[X_j,X_k],X_i\bigr] = \bigl[[X_i,X_j],X_k\bigr] + \bigl[[X_k,X_i],X_j\bigr] + \bigl[[X_j,X_k],X_i\bigr]$$

Which is zero because of the Jacobi identity!
And so it all comes together: some of the terms from the second row work to cancel out the terms from the first row; the antisymmetry of the exterior form takes care of some remaining terms from the second row; the Jacobi identity mops up the rest.
Now I say again that the reason we’re doing all this messy juggling is that nowhere in here have we had to pick any local coordinates on our manifold. The identity $d(d\omega) = 0$ is purely geometric, even though we will see later that it actually boils down to something that looks a lot simpler — but more analytic — when we write it out in local coordinates.
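For comparison, here’s what the identity boils down to in local coordinates on $\mathbb{R}^3$, checked with sympy: in degree zero it’s the classical fact that a gradient has no curl, and in degree one that a curl has no divergence, both of which are just the equality of mixed partial derivatives. The coordinate formulas for $d$ used here are being assumed, since we haven’t derived them yet:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# d on 1-forms P dx + Q dy + R dz: the "curl" coefficients of dy^dz, dz^dx, dx^dy
def d1(P, Q, R):
    return (sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y))

# d(df) = 0: curl of a gradient vanishes
curl_grad = [sp.simplify(c) for c in d1(sp.diff(f, x), sp.diff(f, y), sp.diff(f, z))]
print(curl_grad)   # [0, 0, 0]

# d(d omega) = 0 for a 1-form omega: divergence of a curl vanishes
P, Q, R = (sp.Function(n)(x, y, z) for n in 'PQR')
A, B, C = d1(P, Q, R)
div_curl = sp.simplify(sp.diff(A, x) + sp.diff(B, y) + sp.diff(C, z))
print(div_curl)    # 0
```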
To further make our case that the exterior derivative deserves its name, I say it’s a derivation of the algebra $\Omega(M)$. But since it takes $k$-forms and sends them to $(k+1)$-forms, it has degree one instead of zero like the Lie derivative. As a consequence, the Leibniz rule looks a little different. If $\alpha$ is a $k$-form and $\beta$ is an $l$-form, I say that:

$$d(\alpha\wedge\beta) = d\alpha\wedge\beta + (-1)^k\alpha\wedge d\beta$$

This is because of a general rule of thumb that when we move objects of degree $p$ and $q$ past each other we pick up a sign of $(-1)^{pq}$.
Anyway, the linearity property of a derivation is again straightforward, and it’s the Leibniz rule that we need to verify. And again it suffices to show that

$$d(\alpha_1\wedge\dots\wedge\alpha_k) = \sum_{i=1}^k(-1)^{i-1}\alpha_1\wedge\dots\wedge d\alpha_i\wedge\dots\wedge\alpha_k$$

for $1$-forms $\alpha_1,\dots,\alpha_k$. If we plug this into both sides of the Leibniz identity, it’s obviously true. And then it suffices to show that we can peel off a single $1$-form from the front of the list. That is, we can just show that the Leibniz identity holds in the case where $\alpha$ is a $1$-form and bootstrap it from there.
So here’s the thing: this is a huge, tedious calculation. I had this thing worked out most of the way; it was already five times as long as this post you see here, and the last steps would make it even more complicated. So I’m just going to assert that if you let $\alpha$ be a $1$-form and $\beta$ be a $k$-form, and if you expand out both sides of the Leibniz rule all the way, you’ll see that they’re the same. To make it up to you, I promise that we can come back to this later once we have a simpler expression for the exterior derivative and show that it works then.
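I can at least back up the assertion with a coordinate check in the lowest interesting case: two $1$-forms on $\mathbb{R}^3$, where the Leibniz rule $d(\alpha\wedge\beta) = d\alpha\wedge\beta - \alpha\wedge d\beta$ unwinds to the classical vector-calculus identity $\nabla\cdot(a\times b) = (\nabla\times a)\cdot b - a\cdot(\nabla\times b)$. The coordinate formulas below are assumed, not derived:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
a = [sp.Function('a%d' % i)(x, y, z) for i in range(3)]   # alpha = a0 dx + a1 dy + a2 dz
b = [sp.Function('b%d' % i)(x, y, z) for i in range(3)]   # beta  = b0 dx + b1 dy + b2 dz

def curl(v):
    """Components of the 2-form dv on the basis (dy^dz, dz^dx, dx^dy)."""
    return [sp.diff(v[2], y) - sp.diff(v[1], z),
            sp.diff(v[0], z) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]

def cross(u, v):
    """Components of the 2-form u^v on the same basis."""
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def div(w):
    """Coefficient of dx^dy^dz in d applied to a 2-form with components w."""
    return sum(sp.diff(w[i], coords[i]) for i in range(3))

# d(alpha^beta) versus d(alpha)^beta - alpha^d(beta)  (here k = 1, so the sign is -1)
lhs = div(cross(a, b))
rhs = sum(curl(a)[i]*b[i] - a[i]*curl(b)[i] for i in range(3))
print(sp.simplify(lhs - rhs))   # 0
```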
The Lie derivative looks sort of familiar as a derivative, but we have another sort of derivative on the algebra of differential forms: the “exterior derivative”. But this one doesn’t really look like a derivative at first, since we’ll define it with some algebraic manipulations.
If $\omega$ is a $k$-form then $d\omega$ is a $(k+1)$-form, defined by

$$\begin{aligned}d\omega(X_0,\dots,X_k) ={}&\sum_{i=0}^k(-1)^iX_i\Bigl(\omega(X_0,\dots,\hat{X_i},\dots,X_k)\Bigr)\\{}+{}&\sum_{0\le i<j\le k}(-1)^{i+j}\omega\bigl([X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_k\bigr)\end{aligned}$$

where a hat over a vector field means we leave it out of the list. There’s a lot going on here: first we take each vector field $X_i$ out of the list, evaluate $\omega$ on the remaining vector fields, and apply $X_i$ to the resulting function. Moving $X_i$ to the front entails moving it past $i$ other vector fields in the list, which gives us a factor of $(-1)^i$, so we include that before adding the results all up. Then, for each pair of vector fields $X_i$ and $X_j$, we remove both from the list, take their bracket $[X_i,X_j]$, and stick that at the head of the list before applying $\omega$. This time we apply a factor of $(-1)^{i+j}$ before adding the results all up, and add this sum to the previous sum.
Wow, that’s really sort of odd, and there’s not much reason to believe that this has anything to do with differentiation! Well, the one hint is that we’re applying a vector field to a function, which is a sort of differential operator. In fact, let’s look at what happens for a $0$-form — a function $f$:

$$df(X) = Xf$$

That is, $df$ is the $1$-form that takes a vector field $X$ and evaluates it on the function $f$. And this is just like the differential of a multivariable function: a new function that takes a point and a vector at that point and gives a number out measuring the derivative of the function in that direction through that point.
As a more detailed example, what if $\omega$ is a $1$-form?

$$d\omega(X,Y) = X\bigl(\omega(Y)\bigr) - Y\bigl(\omega(X)\bigr) - \omega\bigl([X,Y]\bigr)$$
We’ve got two terms that look like we’re taking some sort of derivative, and one extra term that we can’t quite explain yet. But it will become clear how useful this is soon enough.
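In fact we can make sympy verify, on $\mathbb{R}^2$, that this three-term expression matches the coordinate exterior derivative $d\omega = (\partial Q/\partial x - \partial P/\partial y)\,dx\wedge dy$ for completely generic $\omega$, $X$, and $Y$. The coordinate formula here is an assumption, since we haven’t written $d$ out in coordinates yet:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
P, Q = sp.Function('P')(x, y), sp.Function('Q')(x, y)       # omega = P dx + Q dy
Xc = [sp.Function('a')(x, y), sp.Function('b')(x, y)]       # X = a dx + b dy
Yc = [sp.Function('c')(x, y), sp.Function('e')(x, y)]       # Y = c dx + e dy

def apply_vf(V, f):
    """A vector field acting on a function."""
    return sum(V[i]*sp.diff(f, coords[i]) for i in range(2))

def omega(V):
    """omega evaluated on a vector field."""
    return P*V[0] + Q*V[1]

# the bracket [X, Y], component by component
bracket = [apply_vf(Xc, Yc[i]) - apply_vf(Yc, Xc[i]) for i in range(2)]

# the coordinate-free formula: d omega(X, Y) = X(omega(Y)) - Y(omega(X)) - omega([X, Y])
invariant = apply_vf(Xc, omega(Yc)) - apply_vf(Yc, omega(Xc)) - omega(bracket)

# the coordinate formula: d omega = (dQ/dx - dP/dy) dx^dy, evaluated on (X, Y)
coordinate = (sp.diff(Q, x) - sp.diff(P, y))*(Xc[0]*Yc[1] - Xc[1]*Yc[0])

print(sp.simplify(invariant - coordinate))   # 0
```

Notice how all the derivatives of the components of $X$ and $Y$ cancel between the three terms, which is exactly why the bracket term is there: without it, the formula wouldn’t define a tensorial object at all.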