The Unapologetic Mathematician

Mathematics for the interested outsider

The Lie Derivative on Cohomology

With Cartan’s formula in hand we can show that the Lie derivative is a chain map L_X:\Omega(M)\to\Omega(M). That is, it commutes with the exterior derivative. And indeed, it’s easy to calculate

\displaystyle\begin{aligned}L_X\circ d=(d\circ\iota_X+\iota_X\circ d)\circ d&=d\circ\iota_X\circ d\\d\circ L_X=d\circ(d\circ\iota_X+\iota_X\circ d)&=d\circ\iota_X\circ d\end{aligned}

And so, like any chain map, the Lie derivative defines homomorphisms on cohomology: L_X:H^k(M)\to H^k(M). But which homomorphism does it define?

Well, it turns out that Cartan’s formula comes in handy here as well, for it’s exactly what we need to say that the Lie derivative is null-homotopic. And like any null-homotopic map, it defines the zero map on cohomology. That is, if we take some closed k-form \omega\in Z^k(M), which defines a cohomology class in H^k(M) — any cohomology class has such a representative k-form — and hit it with L_X, the result is an exact k-form.

Actually, this shouldn’t be very surprising, considering Cartan’s formula. Indeed, we can calculate directly

\displaystyle L_X\omega=d(\iota_X\omega)+\iota_X(d\omega)=d(\iota_X\omega)

since by assumption \omega is closed, which means that d\omega=0.
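We can even watch this happen symbolically. Here's a quick sympy sketch (not from the post; the particular \omega and X are arbitrary choices of mine, and I'm taking for granted the standard coordinate formula (L_X\omega)_i=X^j\partial_j\omega_i+\omega_j\partial_iX^j for the Lie derivative of a 1-form): starting from a closed 1-form on the plane, L_X\omega comes out exactly equal to the exact form d(\iota_X\omega).

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

# A closed (indeed exact) 1-form: omega = d(x^2 y) = 2xy dx + x^2 dy.
f = x**2 * y
omega = [sp.diff(f, x), sp.diff(f, y)]   # coefficients (a, b) of a dx + b dy

X = [y, x*y]                             # an arbitrary vector field

# Lie derivative of a 1-form in coordinates:
# (L_X omega)_i = X^j d_j omega_i + omega_j d_i X^j
L_omega = [sum(X[j]*sp.diff(omega[i], coords[j]) + omega[j]*sp.diff(X[j], coords[i])
               for j in range(2)) for i in range(2)]

# Cartan's formula says this must be d(iota_X omega), since d(omega) = 0.
iota = sum(X[i]*omega[i] for i in range(2))    # interior product: a function
d_iota = [sp.diff(iota, x), sp.diff(iota, y)]  # its differential: an exact 1-form

assert all(sp.simplify(L_omega[i] - d_iota[i]) == 0 for i in range(2))
```

So L_X\omega is visibly exact here, just as the general argument promises.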

July 28, 2011 Posted by | Differential Topology, Topology

Cartan’s Formula

It turns out that there is a fantastic relationship between the interior product, the exterior derivative, and the Lie derivative.

It starts with the observation that for a function f and a vector field X, the Lie derivative is L_Xf=Xf, while the interior product of X with the exterior derivative is \iota_X(df)=df(X)=Xf. That is, L_X=\iota_X\circ d on functions.

Next we consider the differential df of a function. If we apply \iota_X\circ d to it, the nilpotency of the exterior derivative tells us that we automatically get zero. On the other hand, if we apply d\circ\iota_X, we get d(\iota_X(df))=d(Xf), which turns out to be L_X(df). To see this, we calculate

\displaystyle\begin{aligned}\left[L_X(df)\right](Y)&=\lim\limits_{t\to0}\frac{1}{t}\left(\left[(\Phi_t)^*(df)\right](Y)-df(Y)\right)\\&=\lim\limits_{t\to0}\frac{1}{t}\left(df((\Phi_t)_*Y)-df(Y)\right)\\&=\frac{\partial}{\partial t}\left[(\Phi_t)_*Y\right](f)\bigg\vert_{t=0}\\&=\frac{\partial}{\partial t}Y(f\circ\Phi_t)\bigg\vert_{t=0}\\&=YXf\end{aligned}

just as if we took d(Xf) and applied it to Y.

So on exact 1-forms, \iota_X\circ d gives zero while d\circ\iota_X gives L_X. And on functions \iota_X\circ d gives L_X, while d\circ\iota_X gives zero. In both cases we find that

\displaystyle L_X=d\circ\iota_X+\iota_X\circ d

and in fact this holds for all differential forms, which follows from these two base cases by a straightforward induction. This is Cartan’s formula, and it’s the natural extension to all differential forms of the basic identity L_X(f)=\iota_X(df) on functions.
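Cartan's formula is easy to spot-check symbolically, too. In the sketch below (my own setup, not the author's), the left side uses the standard coordinate formula (L_X\omega)_i=X^j\partial_j\omega_i+\omega_j\partial_iX^j for the Lie derivative of a 1-form, and the right side is d\circ\iota_X+\iota_X\circ d written out on the plane:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

omega = [x*y**2, sp.sin(x) + y]   # an arbitrary 1-form a dx + b dy
X = [x + y, x*y]                  # an arbitrary vector field

# Left side: Lie derivative of a 1-form in coordinates.
lhs = [sum(X[j]*sp.diff(omega[i], coords[j]) + omega[j]*sp.diff(X[j], coords[i])
           for j in range(2)) for i in range(2)]

# Right side: d(iota_X omega) + iota_X(d omega).
iota_omega = X[0]*omega[0] + X[1]*omega[1]          # iota_X omega: a function
d_iota = [sp.diff(iota_omega, c) for c in coords]   # its differential
c2 = sp.diff(omega[1], x) - sp.diff(omega[0], y)    # d omega = c2 dx ^ dy
iota_d = [-c2*X[1], c2*X[0]]                        # iota_X (c2 dx ^ dy)
rhs = [d_iota[i] + iota_d[i] for i in range(2)]

assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```

The two sides agree identically, no matter which coefficients you put into \omega and X.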

July 26, 2011 Posted by | Differential Topology, Topology

The Interior Product

We have yet another operation on the algebra \Omega(M) of differential forms: the “interior product”. Given a vector field X\in\mathfrak{X}(M) and a k-form \omega\in\Omega^k(M), the interior product \iota_X(\omega) is the k-1-form defined by

\displaystyle\left[\iota_X\omega\right](X_1,\dots,X_{k-1})=\omega(X,X_1,\dots,X_{k-1})

That is, we just take the vector field X and stick it into the first “slot” of a k-form. We extend this to functions by just defining \iota_Xf=0.

Two interior products anticommute: \iota_X\iota_Y=-\iota_Y\iota_X, which follows easily from the antisymmetry of differential forms. Each one is also clearly linear, and in fact is also a graded derivation of \Omega(M) with degree -1:

\displaystyle\iota_X(\alpha\wedge\beta)=(\iota_X\alpha)\wedge\beta+(-1)^p\alpha\wedge(\iota_X\beta)

where p is the degree of \alpha. This can be shown by reducing to the case where \alpha and \beta are wedge products of 1-forms, but rather than go through all that tedious calculation we can think about it like this: sticking X into a slot of \alpha\wedge\beta means either sticking it into a slot of \alpha or into one of \beta. In the first case we just get \iota_X\alpha, while in the second we have to “move the slot” past all p slots of \alpha, which incurs a sign of (-1)^p, as asserted.
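Since \iota_X is just “evaluate in the first slot”, the anticommutativity is easy to see in code as well. Here's a tiny sketch (entirely my own setup, not from the post) that models a 2-form at a point as an antisymmetric bilinear function and checks that \iota_X\iota_Y\omega=-\iota_Y\iota_X\omega:

```python
def iota(X, omega):
    # Interior product: stick the vector X into the first "slot" of omega.
    return lambda *vs: omega(X, *vs)

# A sample antisymmetric bilinear form on R^3 -- a 2-form at a point:
def omega(u, v):
    return (u[0]*v[1] - u[1]*v[0]) + 2*(u[1]*v[2] - u[2]*v[1])

X, Y = (1, 2, 3), (4, 5, 6)

# iota_X iota_Y omega is the number omega(Y, X), while iota_Y iota_X omega
# is omega(X, Y); the antisymmetry of omega flips the sign between them.
assert iota(X, iota(Y, omega))() == -iota(Y, iota(X, omega))()
```

The same one-liner `iota` works on forms of any degree, since all it does is fix the first argument.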

July 26, 2011 Posted by | Differential Topology, Topology

De Rham Cohomology is Functorial

It turns out that the de Rham cohomology spaces are all contravariant functors on the category of smooth manifolds. We’ve even seen how it acts on smooth maps. All we really need to do is check that it plays nice with compositions.

So let’s say we have smooth maps g:M_1\to M_2 and f:M_2\to M_3, which give rise to pullbacks g^*:\Omega(M_2)\to\Omega(M_1) and f^*:\Omega(M_3)\to\Omega(M_2). All we really have to do is show that g^*\circ f^*=(f\circ g)^*, because we already know that passing from chain maps to the induced maps on homology is functorial.

As usual, we calculate:

\displaystyle\begin{aligned}\left[\left[\left[g^*\circ f^*\right](\omega)\right](p)\right](v_1,\dots,v_k)&=\left[\left[g^*(f^*\omega)\right](p)\right](v_1,\dots,v_k)\\&=\left[\left[f^*\omega\right](g(p))\right](g_*v_1,\dots,g_*v_k)\\&=\left[\omega(f(g(p)))\right](f_*g_*v_1,\dots,f_*g_*v_k)\\&=\left[\omega(\left[f\circ g\right](p))\right]((f\circ g)_*v_1,\dots,(f\circ g)_*v_k)\\&=\left[\left[(f\circ g)^*\omega\right](p)\right](v_1,\dots,v_k)\end{aligned}

as asserted. And so we get maps f^*=H^k(f):H^k(M_3)\to H^k(M_2) and g^*=H^k(g):H^k(M_2)\to H^k(M_1) which compose appropriately: H^k(g)\circ H^k(f)=H^k(f\circ g).
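Here's a symbolic check of this functoriality for 1-forms (all the maps and names below are my own arbitrary choices): g maps the (u,v)-plane to the (s,t)-plane, f maps onward to the (x,y)-plane, and pulling back in two steps agrees with pulling back along the composite.

```python
import sympy as sp

def pullback(F, src, dst, omega):
    """Pull back the 1-form omega (coefficients in coords dst) along F: src -> dst."""
    sub = list(zip(dst, F))
    return [sum(a.subs(sub) * sp.diff(Fi, xj) for a, Fi in zip(omega, F))
            for xj in src]

u, v = sp.symbols('u v')    # coordinates on M_1
s, t = sp.symbols('s t')    # coordinates on M_2
x, y = sp.symbols('x y')    # coordinates on M_3

g = [u*v, u + v]            # g: M_1 -> M_2, written as (s, t) = g(u, v)
f = [s**2, s*t]             # f: M_2 -> M_3

omega = [x*y, y**2]         # a 1-form on M_3

# g* (f* omega): pull back in two steps.
lhs = pullback(g, [u, v], [s, t], pullback(f, [s, t], [x, y], omega))

# (f . g)* omega: pull back along the composite map.
fg = [fi.subs(list(zip([s, t], g))) for fi in f]
rhs = pullback(fg, [u, v], [x, y], omega)

assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```

The equality is just the chain rule, which is exactly what the calculation above packages up.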

July 23, 2011 Posted by | Differential Topology, Topology

Pullbacks on Cohomology

We’ve seen that if f:M\to N is a smooth map of manifolds that we can pull back differential forms, and that this pullback f^*:\Omega(N)\to\Omega(M) is a degree-zero homomorphism of graded algebras. But now that we’ve seen that \Omega(M) and \Omega(N) are differential graded algebras, it would be nice if the pullback respected this structure as well. And luckily enough, it does!

Specifically, the pullback f^* commutes with the exterior derivatives on \Omega(M) and \Omega(N), both of which are (somewhat unfortunately) written as d. If we temporarily write them as d_M and d_N, then we can write our assertion as f^*(d_N\omega)=d_M(f^*\omega) for all k-forms \omega on N.

First, we show that this is true for a function \phi\in\Omega^0(N). If we pick a test vector field X\in\mathfrak{X}(M), then we can check

\displaystyle\begin{aligned}\left[f^*(d\phi)\right](X)&=\left[d\phi\circ f\right](f_*(X))\\&=\left[f_*(X)\right]\phi\\&=X(\phi\circ f)\\&=\left[d(\phi\circ f)\right](X)\\&=\left[d(f^*\phi)\right](X)\end{aligned}

For other k-forms it will make life easier to write out \omega as a sum

\displaystyle\omega=\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}

Then we can write the left side of our assertion as

\displaystyle\begin{aligned}f^*\left(d\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=f^*\left(\sum\limits_Id\alpha_I\wedge dx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\\&=\sum\limits_If^*(d\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(f^*\alpha_I)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}

and the right side as

\displaystyle\begin{aligned}d\left(f^*\left(\sum\limits_I\alpha_Idx^{i_1}\wedge\dots\wedge dx^{i_k}\right)\right)&=d\left(\sum\limits_I(\alpha_I\circ f)f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\right)\\&=d\left(\sum\limits_I(\alpha_I\circ f)d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\right)\\&=\sum\limits_Id(\alpha_I\circ f)\wedge d(f^*x^{i_1})\wedge\dots\wedge d(f^*x^{i_k})\\&=\sum\limits_Id(\alpha_I\circ f)\wedge f^*(dx^{i_1})\wedge\dots\wedge f^*(dx^{i_k})\end{aligned}

So these really are the same.
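For a concrete sanity check, we can pick an explicit map and 1-form and let sympy verify f^*(d\omega)=d(f^*\omega) in coordinates (the map and form below are my own arbitrary choices, not from the post):

```python
import sympy as sp

x, y = sp.symbols('x y')    # coordinates on M
s, t = sp.symbols('s t')    # coordinates on N

f = [x*y, x + y**2]         # f: M -> N, written as (s, t) = f(x, y)
omega = [s*t, s**2]         # a 1-form on N: s t ds + s^2 dt

sub = [(s, f[0]), (t, f[1])]

# f*(d omega): d omega = (d_s b - d_t a) ds ^ dt, then pull the 2-form back,
# which multiplies by the Jacobian determinant of f.
c = sp.diff(omega[1], s) - sp.diff(omega[0], t)
J = sp.diff(f[0], x)*sp.diff(f[1], y) - sp.diff(f[0], y)*sp.diff(f[1], x)
lhs = c.subs(sub) * J                        # coefficient of dx ^ dy

# d(f* omega): pull back the 1-form, then take d.
pb = [sum(a.subs(sub)*sp.diff(fi, w) for a, fi in zip(omega, f)) for w in (x, y)]
rhs = sp.diff(pb[1], x) - sp.diff(pb[0], y)

assert sp.simplify(lhs - rhs) == 0
```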

The useful thing about the fact that pullbacks commute with the exterior derivative is that it makes the pullback into a chain map between the complexes of the \Omega^k(N) and the \Omega^k(M). And then we immediately get homomorphisms H^k(N)\to H^k(M), which we also write as f^*.

If you want, you can chase the diagrams yourself to verify that a cohomology class in H^k(N) is sent to a unique, well-defined cohomology class in H^k(M), but it’s probably more worthwhile to go back and read over the general proof that chain maps give homomorphisms on homology.

July 21, 2011 Posted by | Differential Topology, Topology

De Rham Cohomology

The really important thing about the exterior derivative is that it makes the algebra of differential forms into a “differential graded algebra”. We had the structure of a graded algebra before, but now we have a degree-one derivation whose square is zero. And as long as we want it to agree with the differential on functions, there’s only one way to do it.

Why does this matter? Well, the algebra \Omega(M) is the direct sum of its grades — the spaces \Omega^k(M), and for each one we have a map d:\Omega^k(M)\to\Omega^{k+1}(M). We can even write them out in a row:

\displaystyle \dots\to\mathbf{0}\to\Omega^0(M)\to\dots\to\Omega^k(M)\to\dots\to\Omega^n(M)\to\mathbf{0}\to\dots

where we have padded out the sequence with \mathbf{0} — the trivial space — in either direction. This is just like a chain complex, except the arrows go backwards! Instead of the indices counting down, they count up. We can deal with this by thinking of these as negative numbers, but really it doesn’t matter.

Anyway, now we can bring all our homological machinery to bear! We say that a differential form \omega\in\Omega^k(M) is “closed” if d\omega=0, and we write the subspace of closed forms as Z^k(M)=\mathrm{Ker}(d)\subseteq\Omega^k(M). We say that \omega is “exact” if there is some \alpha\in\Omega^{k-1}(M) with \omega=d\alpha, and we write the subspace of exact forms as B^k(M)=\mathrm{Im}(d)\subseteq\Omega^k(M). The fact that d^2=0 implies that B^k(M)\subseteq Z^k(M).

And now we can define the k-th “de Rham cohomology space” H^k(M)=Z^k(M)/B^k(M). The cohomology space H^k(M) measures the extent to which it is possible to have a k-form on M be closed without being exact. If H^k(M)=\mathbf{0}, then closed k-forms are all exact. And it’s roughly accurate to say that the rank of H^k(M) counts the “number of independent ways” to set up a closed-but-not-exact k-form.
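The classical example to keep in mind here is the “angle form” on the punctured plane \mathbb{R}^2\setminus\{0\}: it is closed, as the one-line sympy check below confirms, but it is not exact — its integral around the unit circle is 2\pi rather than 0 — which is exactly the statement that H^1 of the punctured plane is nonzero. (The example is classical; the code is my own and only checks closedness, since non-exactness is a genuinely global fact no pointwise computation can see.)

```python
import sympy as sp

x, y = sp.symbols('x y')

# The "angle form" d(theta) = (-y dx + x dy) / (x^2 + y^2), defined away from 0:
omega = [-y/(x**2 + y**2), x/(x**2 + y**2)]   # coefficients (a, b) of a dx + b dy

# Closed: d omega = (d_x b - d_y a) dx ^ dy vanishes identically.
assert sp.simplify(sp.diff(omega[1], x) - sp.diff(omega[0], y)) == 0
```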

The really amazing thing, which we will come to understand later, is that this actually tells us a lot about the topology of M itself: combinatorial information about the topology of a manifold is encoded into the algebraic structure of its sheaf of differential forms.

July 20, 2011 Posted by | Differential Topology, Topology

The Uniqueness of the Exterior Derivative

It turns out that our exterior derivative is uniquely characterized by some of its properties; it is the only degree-one derivation of the algebra \Omega(M) whose square is zero and which gives the differential on functions. That is, once we specify that d:\Omega^k(M)\to\Omega^{k+1}(M), that d(\alpha+\beta)=d\alpha+d\beta, that d(\alpha\wedge\beta)=d\alpha\wedge\beta+(-1)^p\alpha\wedge d\beta if \alpha is a p-form, that d(d\omega)=0, and that df(X)=Xf for functions f\in\Omega^0(M), then there is no other choice but the exterior derivative we already defined.

First, we want to show that these properties imply another one that’s sort of analytic in character: if \alpha=\beta in a neighborhood of p then d\alpha(p)=d\beta(p). Equivalently (given linearity), if \alpha=0 in a neighborhood U of p then d\alpha(p)=0. But then we can pick a bump function \phi which is 0 on a neighborhood of p and 1 outside of U. Then we have \phi\alpha=\alpha and

\displaystyle\begin{aligned}d\alpha(p)&=\left[d(\phi\alpha)\right](p)\\&=d\phi(p)\wedge\alpha(p)+\phi(p)d\alpha(p)\\&=0+0=0\end{aligned}

And so we may as well throw this property onto the pile. Notice, though, how this condition is different from the way we said that tensor fields live locally. In this case we need to know that \alpha vanishes in a whole neighborhood, not just at p itself.

Next, we show that these conditions suffice to determine the value of d\omega for any k-form \omega. It will help us to pick a local coordinate patch (U,x) around a point p, and then we’ll show that the result doesn’t actually depend on this choice. Picking a coordinate patch gives us a canonical basis of the space \Omega^k(U) of k-forms over U, indexed by increasing multi-indices I=\{1\leq i_1<\dots<i_k\leq n\}. Any k-form \omega over U can be written as

\displaystyle\omega(q)=\sum\limits_I\omega_I(q)dx^{i_1}(q)\wedge\dots\wedge dx^{i_k}(q)

and so we can calculate

\displaystyle\begin{aligned}d\omega(p)=&d\left(\sum\limits_I\omega_I(p)dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\\=&\sum\limits_Id\left(\omega_I(p)dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\\=&\sum\limits_I\Bigl(d\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\\&+\omega_I(p)d\left(dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\Bigr)\\=&\sum\limits_I\Biggl(d\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\\&+\omega_I(p)\sum\limits_{j=1}^k(-1)^{j-1}dx^{i_1}(p)\wedge\dots\wedge d\left(dx^{i_j}\right)(p)\wedge\dots\wedge dx^{i_k}(p)\Biggr)\\=&\sum\limits_Id\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\end{aligned}

where we use the fact that d\left(dx^i\right)=0.

Now if (V,y) is a different coordinate patch around p then we get a different decomposition

\displaystyle\omega(q)=\sum\limits_J\omega_J(q)dy^{j_1}(q)\wedge\dots\wedge dy^{j_k}(q)

but both decompositions agree on the intersection U\cap V, which is a neighborhood of p, and thus when we apply d to them we get the same value at p, by the “analytic” property we showed above. Thus the value only depends on \omega itself (and the point p), and not on the choice of coordinates we used to help with the evaluation. And so the exterior derivative d\omega is uniquely determined by the four given properties.
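To see this coordinate-independence in action, we can compute d\omega for a 1-form on the plane once in Cartesian coordinates and once in polar coordinates, and check that the two answers agree under the change of variables dx\wedge dy=r\,dr\wedge d\theta. A sympy sketch (the particular form is my own arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
r, th = sp.symbols('r theta', positive=True)

omega = [x*y, x**2]                                  # a dx + b dy
c_xy = sp.diff(omega[1], x) - sp.diff(omega[0], y)   # d omega = c dx ^ dy

# Change to polar coordinates: x = r cos(theta), y = r sin(theta).
ch = [(x, r*sp.cos(th)), (y, r*sp.sin(th))]
xr, xt = sp.diff(r*sp.cos(th), r), sp.diff(r*sp.cos(th), th)
yr, yt = sp.diff(r*sp.sin(th), r), sp.diff(r*sp.sin(th), th)

# Coefficients of omega in the polar basis dr, dtheta:
a_p = omega[0].subs(ch)*xr + omega[1].subs(ch)*yr
b_p = omega[0].subs(ch)*xt + omega[1].subs(ch)*yt

# d computed directly from the polar coefficients...
c_polar = sp.diff(b_p, r) - sp.diff(a_p, th)

# ...agrees with the Cartesian answer, since dx ^ dy = r dr ^ dtheta.
assert sp.simplify(c_polar - c_xy.subs(ch)*r) == 0
```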

July 19, 2011 Posted by | Differential Topology, Topology

The Exterior Derivative is Nilpotent

One extremely important property of the exterior derivative is that d(d\omega)=0 for all exterior forms \omega. This is only slightly less messy to prove than the fact that d is a derivation. But since it’s so extremely important, we soldier onward! If \omega is a p-form we calculate

\displaystyle\begin{aligned}\left[d(d\omega)\right](X_0,\dots,X_{p+1})=&\sum\limits_{i=0}^{p+1}(-1)^iX_i\left(d\omega(X_0,\dots,\hat{X}_i,\dots,X_{p+1})\right)\\&+\sum\limits_{0\leq i<j\leq p+1}(-1)^{i+j}d\omega\left([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1}\right)\end{aligned}

We now expand out the d\omega on the first line. First we extract an X_j from the list of vector fields. If j<i, then we get a term like

\displaystyle(-1)^{i+j}X_iX_j\omega(X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,X_{p+1})

while if j>i then we get a term like

\displaystyle(-1)^{i+j-1}X_iX_j\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})=(-1)^{i+j-1}X_jX_i\omega(X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,X_{p+1})

If we put these together, we get the sum over all i<j of

\displaystyle(-1)^{i+j}[X_j,X_i]\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})

We continue expanding the first line by picking out two vector fields. There are three ways of doing this, which give us terms like

\displaystyle\begin{aligned}(-1)^{i+j+k}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,X_{p+1})\\(-1)^{i+j+k-1}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_j,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,X_{p+1})\\(-1)^{i+j+k-2}&X_i\omega([X_j,X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}

Next we can start expanding the second line. First we pull out the first vector field to get

\displaystyle(-1)^{i+j}[X_i,X_j]\omega(X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})

which cancels out against the first group of terms from the expansion of the first line! Progress!

We continue by pulling out an extra vector field from the second line, getting three collections of terms:

\displaystyle\begin{aligned}(-1)^{i+j+k+1}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k-1}&X_k\omega([X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}

It’s less obvious, but each of these groups of terms cancels out one of the groups from the second half of the expansion of the first line! Our sum has reached zero!

Unfortunately, we’re not quite done. We have to finish expanding the second line, and this is where things will get really ugly. We have to pull two more vector fields out; first we’ll handle the easy case where we avoid the [X_i,X_j] term, and we get a whopping six cases:

\displaystyle\begin{aligned}(-1)^{i+j+k+l+2}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l+1}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_l,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_l,\dots,X_{p+1})\\(-1)^{i+j+k+l}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k+l-1}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,\hat{X}_l,\dots,X_{p+1})\\(-1)^{i+j+k+l-2}&\omega([X_k,X_l],[X_i,X_j],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,\hat{X}_l,\dots,X_{p+1})\end{aligned}

In each group, we can swap the [X_k,X_l] term with the [X_i,X_j] term to get a different group. These two groups always have the same leading sign, but the antisymmetry of \omega means that this swap brings another negative sign with it, and thus all these terms cancel out with each other!

Finally, we have the dreaded case where we pull the [X_i,X_j] term and one other vector field. Here we mercifully have only three cases:

\displaystyle\begin{aligned}(-1)^{i+j+k+1}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_k,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_k,\dots,\hat{X}_j,\dots,X_{p+1})\\(-1)^{i+j+k-1}&\omega([[X_i,X_j],X_k],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})\end{aligned}

Here we can choose to re-index the three vector fields so we always have 0\leq i<j<k\leq p+1. Adding all three terms up we get

\displaystyle(-1)^{i+j+k}\omega(-[[X_i,X_j],X_k]+[[X_i,X_k],X_j]-[[X_j,X_k],X_i],X_0,\dots,\hat{X}_i,\dots,\hat{X}_j,\dots,\hat{X}_k,\dots,X_{p+1})

Taking the linear combination of double brackets out to examine it on its own we find

\displaystyle-[[X_i,X_j],X_k]+[[X_i,X_k],X_j]-[[X_j,X_k],X_i]=[X_k,[X_i,X_j]]-\left([[X_k,X_i],X_j]+[X_i,[X_k,X_j]]\right)

Which is zero because of the Jacobi identity!

And so it all comes together: some of the terms from the second row work to cancel out the terms from the first row; the antisymmetry of the exterior form \omega takes care of some remaining terms from the second row; and the Jacobi identity mops up the rest.

Now I say again that the reason we’re doing all this messy juggling is that nowhere in here have we had to pick any local coordinates on our manifold. The identity d(d\omega)=0 is purely geometric, even though we will see later that it actually boils down to something that looks a lot simpler — but more analytic — when we write it out in local coordinates.
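As a taste of that simpler coordinate picture: on \mathbb{R}^3, d on 1-forms looks like the curl and d on 2-forms looks like the divergence, so d(d\omega)=0 for a 1-form boils down to the classical identity \mathrm{div}(\mathrm{curl})=0. A sympy spot-check (the particular \omega is my own arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

omega = [sp.sin(x*y), x*z**2, sp.exp(y)]   # an arbitrary 1-form on R^3

# d omega: coefficients of dy^dz, dz^dx, dx^dy -- the curl of (a, b, c).
c = [sp.diff(omega[2], y) - sp.diff(omega[1], z),
     sp.diff(omega[0], z) - sp.diff(omega[2], x),
     sp.diff(omega[1], x) - sp.diff(omega[0], y)]

# d(d omega): coefficient of dx^dy^dz -- the divergence of the curl.
dd = sp.diff(c[0], x) + sp.diff(c[1], y) + sp.diff(c[2], z)

assert sp.simplify(dd) == 0
```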

July 19, 2011 Posted by | Differential Topology, Topology

The Exterior Derivative is a Derivation

To further make our case that the exterior derivative deserves its name, I say it’s a derivation of the algebra \Omega(M). But since it takes k-forms and sends them to k+1-forms, it has degree one instead of zero like the Lie derivative. As a consequence, the Leibniz rule looks a little different. If \alpha is a k-form and \beta is an l-form, I say that:

\displaystyle d(\alpha\wedge\beta)=(d\alpha)\wedge\beta+(-1)^k\alpha\wedge(d\beta)

This is because of a general rule of thumb that when we move objects of degree p and q past each other we pick up a sign of (-1)^{pq}.

Anyway, the linearity property of a derivation is again straightforward, and it’s the Leibniz rule that we need to verify. And again it suffices to show that

\displaystyle d(\alpha_1\wedge\dots\wedge\alpha_k)=\sum\limits_{i=1}^k(-1)^{i-1}\alpha_1\wedge\dots\wedge(d\alpha_i)\wedge\dots\wedge\alpha_k

where each \alpha_i is a 1-form. If we plug this formula into both sides of the Leibniz identity, the two sides visibly agree. And then it suffices to show that we can peel off a single 1-form from the front of the list. That is, we can just show that the Leibniz identity holds in the case where \alpha is a 1-form, and bootstrap it from there.

So here’s the thing: this is a huge, tedious calculation. I had this thing worked out most of the way; it was already five times as long as this post you see here, and the last steps would make it even more complicated. So I’m just going to assert that if you let \alpha be a 1-form and \beta be an l-form, and if you expand out both sides of the Leibniz rule all the way, you’ll see that they’re the same. To make it up to you, I promise that we can come back to this later once we have a simpler expression for the exterior derivative and show that it works then.
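In lieu of the tedious general calculation, here's a symbolic spot-check of the graded Leibniz rule for two 1-forms on \mathbb{R}^3, with the wedge products and d written out explicitly in components (all of this setup is my own, not the author's deferred proof):

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
xs = [x0, x1, x2]

alpha = [x1*x2, x0**2, x0*x1]     # two arbitrary 1-forms
beta = [sp.sin(x0), x2, x1**2]

def d1(om):
    # d of a 1-form: 2-form coefficients, ordered (0,1), (0,2), (1,2).
    return [sp.diff(om[j], xs[i]) - sp.diff(om[i], xs[j])
            for i, j in [(0, 1), (0, 2), (1, 2)]]

def d2(w):
    # d of a 2-form: the single coefficient of dx0 ^ dx1 ^ dx2.
    return sp.diff(w[0], x2) - sp.diff(w[1], x1) + sp.diff(w[2], x0)

def wedge11(a, b):
    # 1-form ^ 1-form, same (0,1), (0,2), (1,2) ordering.
    return [a[i]*b[j] - a[j]*b[i] for i, j in [(0, 1), (0, 2), (1, 2)]]

def wedge21(w, b):
    # 2-form ^ 1-form: top coefficient.
    return w[0]*b[2] - w[1]*b[1] + w[2]*b[0]

def wedge12(a, w):
    # 1-form ^ 2-form: top coefficient.
    return a[0]*w[2] - a[1]*w[1] + a[2]*w[0]

lhs = d2(wedge11(alpha, beta))
# The (-1)^1 sign comes from alpha being a 1-form:
rhs = wedge21(d1(alpha), beta) - wedge12(alpha, d1(beta))

assert sp.simplify(lhs - rhs) == 0
```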

July 16, 2011 Posted by | Differential Topology, Topology

The Exterior Derivative

The Lie derivative looks sort of familiar as a derivative, but we have another sort of derivative on the algebra of differential forms: the “exterior derivative”. But this one doesn’t really look like a derivative at first, since we’ll define it with some algebraic manipulations.

If \omega is a k-form then d\omega is a k+1-form, defined by

\displaystyle\begin{aligned}d\omega(X_0,\dots,X_k)=&\sum\limits_{i=0}^k(-1)^iX_i\left(\omega(X_0,\dots,\hat{X_i},\dots,X_k)\right)\\&+\sum\limits_{0\leq i<j\leq k}(-1)^{i+j}\omega\left([X_i,X_j],X_0,\dots,\hat{X_i},\dots,\hat{X_j},\dots,X_k\right)\end{aligned}

where a hat over a vector field means we leave it out of the list. There’s a lot going on here: first we take each vector field X_i out of the list, evaluate \omega on the k remaining vector fields, and apply X_i to the resulting function. Moving X_i to the front entails moving it past i other vector fields in the list, which gives us a factor of (-1)^i, so we include that before adding the results all up. Then, for each pair of vector fields X_i and X_j, we remove both from the list, take their bracket, and stick that at the head of the list before applying \omega. This time we apply a factor of (-1)^{i+j} before adding the results all up, and add this sum to the previous sum.

Wow, that’s really sort of odd, and there’s not much reason to believe that this has anything to do with differentiation! Well, the one hint is that we’re applying X_i to a function, which is a sort of differential operator. In fact, let’s look at what happens for a 0-form — a function f:

\displaystyle df(X)=X(f)

That is, df is the 1-form that takes a vector field X and evaluates it on the function f. And this is just like the differential of a multivariable function: a new function that takes a point and a vector at that point and gives a number out measuring the derivative of the function in that direction through that point.

As a more detailed example, what if \omega is a 1-form?

\displaystyle d\omega(X,Y)=X\left(\omega(Y)\right)-Y\left(\omega(X)\right)-\omega\left([X,Y]\right)

We’ve got two terms that look like we’re taking some sort of derivative, and one extra term that we can’t quite explain yet. But it will become clear how useful this is soon enough.
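We can at least check that this formula for 1-forms gives something sensible: evaluating it on two arbitrary vector fields on \mathbb{R}^3 — bracket term and all — agrees with the coordinate expression \sum_{i<j}(\partial_i\omega_j-\partial_j\omega_i)\,dx^i\wedge dx^j. A sympy sketch (the form and fields are my own arbitrary choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]

omega = [y*z, x**2, x*y]          # a 1-form
X = [x, y*z, 1]                   # two arbitrary vector fields
Y = [z, x, x*y]

def apply_field(V, fn):
    # V(fn): the directional derivative of a function along V.
    return sum(V[i]*sp.diff(fn, coords[i]) for i in range(3))

def pair(om, V):
    # omega(V): pair a 1-form with a vector field.
    return sum(om[i]*V[i] for i in range(3))

bracket = [apply_field(X, Y[i]) - apply_field(Y, X[i]) for i in range(3)]

# The invariant formula: d omega(X, Y) = X(omega(Y)) - Y(omega(X)) - omega([X, Y]).
invariant = (apply_field(X, pair(omega, Y)) - apply_field(Y, pair(omega, X))
             - pair(omega, bracket))

# The coordinate expression for d omega, contracted with (X, Y).
coord = sum((sp.diff(omega[j], coords[i]) - sp.diff(omega[i], coords[j]))
            * (X[i]*Y[j] - X[j]*Y[i])
            for i in range(3) for j in range(i+1, 3))

assert sp.simplify(invariant - coord) == 0
```

Notice how the bracket term is exactly what makes the invariant expression come out tensorial: without it, the first two terms alone would depend on derivatives of X and Y.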

July 15, 2011 Posted by | Differential Topology, Topology