# The Unapologetic Mathematician

## Orientable Atlases

If we orient a manifold $M$ by picking an everywhere-nonzero top form $\omega$, then it induces an orientation on each coordinate patch $(U,x)$. Since each one also comes with its own orientation form, we can ask whether they’re compatible or not.

And it’s easy to answer; just calculate

$\displaystyle\omega\left(\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\right)$

and if the answer is positive then the two are compatible, while if it’s negative then they’re incompatible. But no matter; just swap two of the coordinates and we have a new coordinate map on $U$ whose own orientation is compatible with $\omega$.
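As a quick sanity check (a sketch of my own, not from the original post): in coordinates on $\mathbb{R}^2$, evaluating a top form on a frame is just taking a determinant of the frame's components, so swapping two coordinates flips the sign.

```python
# A top form on R^2, evaluated on two tangent vectors, is its coefficient
# times the determinant of the matrix of the vectors' components.
def omega(v, w, coeff=1.0):
    return coeff * (v[0] * w[1] - v[1] * w[0])

# The coordinate frame of a patch: d/dx^1 = (1,0), d/dx^2 = (0,1).
e1, e2 = (1.0, 0.0), (0.0, 1.0)

# Positive value: the patch's native orientation is compatible with omega.
assert omega(e1, e2) > 0
# Swapping the two coordinates swaps the frame vectors and flips the sign.
assert omega(e2, e1) == -omega(e1, e2)
```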

This shows that we can find an atlas on $M$ all of whose patches have compatible orientations. Given any atlas at all for $M$, either use a coordinate patch as is or swap two of its coordinates depending on whether its native orientation agrees with $\omega$ or not. In fact, if we’re already using a differentiable structure — containing all possible coordinate patches which are (smoothly) compatible with each other — then we just have to throw out all the patches which are (orientably) incompatible with $\omega$.

The converse, as it happens, is also true: if we can find an atlas for $M$ such that for any two patches $(U,x)$ and $(V,y)$ the Jacobian of the transition function is everywhere positive on the intersection $U\cap V$, then we can find an everywhere-nonzero top form to orient the whole manifold.

Basically, what we want is to patch together enough of the patches’ native orientations to cover the whole manifold. And as usual for this sort of thing, we pick a partition of unity subordinate to our atlas. That is, we have a countable, locally finite collection of functions $\{\phi_i\}$ so that $\phi_i$ is supported within the patch $(U_i,x_i)$. Then we define the $n$-form $\omega_i$ on $U_i$ by

$\displaystyle\omega_i(p)=\phi_i(p)dx_i^1\wedge\dots\wedge dx_i^n$

and by $0$ outside of $U_i$. Adding up all the $\omega_i$ gives us our top form; the sum makes sense because it’s locally finite, and at each point we don’t have to worry about things canceling off because each orientation form $\omega_i$ is a positive multiple of each other one wherever they’re both nonzero.
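Here is a hypothetical one-dimensional sketch of the gluing (my own, not from the post): two overlapping patches on the interval $(0,2)$ with coordinates $x$ and $y=2x$, so both native forms are positive multiples of $dx$, blended by a partition of unity into a nowhere-zero form.

```python
# Hypothetical 1-D sketch: patches on the interval (0, 2) with coordinates
# x and y = 2x, so the native forms dx and dy = 2 dx give the same orientation.
def phi1(t):
    # bump supported in (0, 1.5)
    return max(0.0, min(1.0, 3.0 - 2.0 * t))

def phi2(t):
    # phi1 + phi2 = 1 everywhere; phi2 is supported in (1, 2]
    return 1.0 - phi1(t)

# omega = phi1 dx + phi2 dy has x-coefficient phi1 + 2 phi2, which never
# vanishes: since the two native orientations agree, nothing can cancel.
for k in range(1, 200):
    t = 2.0 * k / 200.0
    assert phi1(t) + 2.0 * phi2(t) > 0
```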

August 31, 2011

## Compatible Orientations

Any coordinate patch $(U,x)$ in a manifold $M$ is orientable. Indeed, the image $x(U)\subseteq\mathbb{R}^n$ is orientable — we can just use $du^1\wedge\dots\wedge du^n$ to orient $\mathbb{R}^n$ — and given a choice of top form on $x(U)$ we can pull it back along $x$ to give an orientation of $U$ itself.

But what happens when we bring two patches $U$ and $V$ together? They may each have orientations given by top forms $\omega_U$ and $\omega_V$. We must ask whether they are “compatible” on their overlap. And compatibility means each one picks out the same end of $\Lambda^*_k(U\cap V)$ at each point. But this just means that, when restricted to the intersection $U\cap V$, we have $\omega_U=f\omega_V$ for some everywhere-positive smooth function $f$.

Another way to look at the same thing is to let $\omega_U$ be the pullback $x^*(du^1\wedge\dots\wedge du^n)=dx^1\wedge\dots\wedge dx^n$, and $\omega_V=dy^1\wedge\dots\wedge dy^n$. Then we must ask what this function $f$ is. It must exist even if the orientations are incompatible, since $\omega_V$ is never zero, but what is it?

A little thought gives us our answer: $f$ is the Jacobian determinant of the coordinate transformation from one patch to the other. Indeed, we use the Jacobian to change bases on the cotangent bundle, and transforming between these top forms amounts to taking the determinant of the transformation between the $1$-forms $dx^i$ and $dy^j$.

So what does this mean? It tells us that if the Jacobian of the coordinate transformation relating two coordinate patches is everywhere positive, then the coordinates have compatible orientations. On the other hand, if the coordinate transformation’s Jacobian is everywhere negative, then the coordinates also give compatible orientations. Why? Because even though the native orientation forms differ, we can just use $\omega_U$ and $-\omega_V$, which do give the same orientation everywhere.
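For a concrete instance (my addition, not from the post), the transition from polar to Cartesian coordinates has Jacobian determinant $r$, which is positive wherever the patches overlap.

```python
import math

def jacobian_det(r, t):
    # d(x, y)/d(r, t) for x = r cos t, y = r sin t:
    #   [[cos t, -r sin t], [sin t, r cos t]]
    return math.cos(t) * (r * math.cos(t)) - (-r * math.sin(t)) * math.sin(t)

# the determinant simplifies to r, positive wherever the patches overlap
for r in (0.5, 1.0, 3.0):
    for t in (-1.0, 0.0, 1.2):
        assert abs(jacobian_det(r, t) - r) < 1e-12
```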

The problem comes up when the Jacobian is sometimes positive and sometimes negative. Now, it can never be zero, but if the intersection has more than one component it may be positive on one and negative on the other. Then if you pick orientations which coincide on one part of the overlap, they must necessarily disagree on the other part, and no coherent orientation can be chosen for the whole manifold.

I won’t go into this example in full detail yet, but this is essentially what happens with the famous Möbius strip: glue two strips of paper together at one end and we can coherently orient their union. But if we give a half-twist to the other ends before gluing them, we cannot coherently orient the result. The Jacobian is positive on the one overlap and negative on the other.

August 29, 2011

## Oriented Manifolds

We recall that if $V$ is an $n$-dimensional vector space then the space $\Lambda^n(V)$ of “top forms” on $V$ is one-dimensional. And since we’re working over $\mathbb{R}$, this means that removing the zero form divides this space into two halves. Choosing one — by choosing a nonzero $n$-form on $V$ or, equivalently, by choosing a basis for $V$ — makes $V$ into an “oriented” vector space.

We can do the same thing, of course, for the tangent space $\mathcal{T}_pM$ of an $n$-dimensional manifold: for each point $p\in M$ the stalk $\Lambda^*_k(M)_p$ is isomorphic to $\mathbb{R}$, and so we can pick a nonzero value at each point. But of course this isn’t really what we want; we want to be able to choose this orientation “smoothly”. That is, we want a top form $\omega\in\Omega^n(M)$ such that $\omega(p)\neq0$ for all $p\in M$.

If we can find such a form, we say that it “orients” $M$, and — along with the choice of orientation — $M$ is “orientable”. This is not always possible; there are “non-orientable” manifolds for which there are no non-vanishing top forms.

It turns out that $M$ is orientable if and only if the bundle $\Lambda^*_k(M)$ is isomorphic to the product $M\times\mathbb{R}$. That is, if we can find a map $f:\Lambda^*_k(M)\to M\times\mathbb{R}$ that plays nicely with the projections down to $M$, and so that the restriction to the stalk $f_p:\Lambda^*_k(M)_p\to\{p\}\times\mathbb{R}$ is a linear isomorphism of one-dimensional real vector spaces.

In the one direction, if we have an orientation given by a top form $\omega$, then at each point we have a nonzero $\omega(p)\in\Lambda^*_k(M)_p$. Any other point in $\Lambda^*_k(M)_p$ is some multiple of $\omega(p)$, so we just define the real component of our transformation $f$ to be this multiple, while the $M$ component is the projection of the point from $\Lambda^*_k(M)_p$ to $M$. The smoothness of $\omega$ guarantees that this map will be smooth.

On the flip side, if we have such a map, then it’s invertible, giving a bundle map $f^{-1}:M\times\mathbb{R}\to\Lambda^*_k(M)$. We can take the section of the product bundle sending each $p\in M$ to $1\in\mathbb{R}$ and feed it through this inverse map: $\omega(p)=f^{-1}_p(1)\in\Lambda^*_k(M)_p$.

August 25, 2011

## An Example (part 3)

Now we can take our differential form and our singular cube and put them together. That is, we can integrate the $1$-form $\omega$ over the circle $c_a$.

First we write down the definition:

$\int\limits_{c_a}\omega=\int\limits_{[0,1]}{c_a}^*\omega$

To find this pullback of $\omega$ we must work out how to push forward vectors from $[0,1]$. That is, we must work out the derivative of $c_a$.

This actually isn’t that hard; there’s only the one basis vector $\frac{d}{dt}$ to consider, and we find

$\displaystyle {c_a}_{*t}\left(\frac{d}{dt}\right)=-2\pi a\sin(2\pi t)\frac{\partial}{\partial x}+2\pi a\cos(2\pi t)\frac{\partial}{\partial y}$

We also have to calculate the composition

$\displaystyle\begin{aligned}\omega(c_a(t))&=-\frac{a\sin(2\pi t)}{(a\cos(2\pi t))^2+(a\sin(2\pi t))^2}dx+\frac{a\cos(2\pi t)}{(a\cos(2\pi t))^2+(a\sin(2\pi t))^2}dy\\&=\frac{1}{a}\left(-\sin(2\pi t)dx+\cos(2\pi t)dy\right)\end{aligned}$

This lets us calculate

$\displaystyle\begin{aligned}\int\limits_{c_a}\omega&=\int\limits_{[0,1]}{c_a}^*\omega\\&=\int\limits_{[0,1]}\left[\omega(c_a(t))\right]\left({c_a}_{*t}\left(\frac{d}{dt}\right)\right)\,dt\\&=\int\limits_{[0,1]}\left[\frac{1}{a}\left(-\sin(2\pi t)dx+\cos(2\pi t)dy\right)\right]\left(2\pi a\left(-\sin(2\pi t)\frac{\partial}{\partial x}+\cos(2\pi t)\frac{\partial}{\partial y}\right)\right)\,dt\\&=2\pi\int\limits_0^1\sin(2\pi t)^2+\cos(2\pi t)^2\,dt=2\pi\int\limits_0^1\,dt=2\pi\end{aligned}$
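
We can corroborate this numerically (a sketch of my own, not part of the original computation): a midpoint-rule approximation of the pulled-back integral lands right on $2\pi$, independent of the radius $a$.

```python
import math

a = 0.7  # radius; the answer should not depend on it

def integrand(t):
    # the pullback c_a* omega applied to d/dt, using the formulas above
    x = a * math.cos(2 * math.pi * t)
    y = a * math.sin(2 * math.pi * t)
    dx = -2 * math.pi * a * math.sin(2 * math.pi * t)  # pushforward components
    dy = 2 * math.pi * a * math.cos(2 * math.pi * t)
    return (-y / (x * x + y * y)) * dx + (x / (x * x + y * y)) * dy

# midpoint-rule approximation of the integral over [0, 1]
n = 1000
total = sum(integrand((k + 0.5) / n) for k in range(n)) / n
assert abs(total - 2 * math.pi) < 1e-9
```

The integrand is in fact the constant $2\pi$, which is why even a crude rule hits the answer exactly.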

So, what conclusions can we draw from this? Well, Stokes’ theorem now tells us that the $1$-form $\omega$ cannot be the differential of any $0$-form — any function — on $\mathbb{R}^2\setminus\{0\}$. Why? Well, if we had $\omega=df$, then we would find

$\displaystyle\int\limits_{c_a}\omega=\int\limits_{c_a}df=\int\limits_{\partial c_a}f=\int\limits_{0}f=0$

which we now know not to be the case. Similarly, $c_a$ cannot be the boundary of any $2$-chain, for if $c_a=\partial c$ then

$\displaystyle\int\limits_{c_a}\omega=\int\limits_{\partial c}\omega=\int\limits_{c}d\omega=\int\limits_{c}0=0$

It turns out that there’s a deep connection between the two halves of this example. Further, in a sense every failure of a closed $k$-form to be the differential of a $k-1$-form and every failure of a closed $k$-chain to be the boundary of a $k+1$-chain comes in a pair like this one.

August 24, 2011

## An Example (part 2)

We follow yesterday’s example of an interesting differential form with a (much simpler) example of some $1$-chains. Specifically, we’ll talk about circles!

More specifically, we consider the circle of radius $a$ around the origin in the “punctured” plane. I used this term yesterday, but I should define it now: a “punctured” space is a topological space with a point removed. There are also “twice-punctured” or “$n$-times punctured” spaces, and as long as the space is a nice connected manifold it doesn’t really matter much which point is removed. But since we’re talking about the plane $\mathbb{R}^2$ it comes with an identified point — the origin — and so it makes sense to “puncture” the plane there.

Now the circle of radius $a$ will be a singular $1$-cube. That is, it’s a curve in the plane that never touches the origin. Specifically, we’ll parameterize it by:

$\displaystyle c_a(t)=(a\cos(2\pi t),a\sin(2\pi t))$

so as $t$ ranges from $0$ to $1$ we traverse the whole circle. There are two $0$-dimensional “faces”, which we get by setting $t=0$ and $t=1$:

$\displaystyle\begin{aligned}c_a(0)&=(a,0)\\c_a(1)&=(a,0)\end{aligned}$

When we calculate the boundary of $c_a$, these get different signs:

$\displaystyle\begin{aligned}\partial c_a&=(-1)^{1+0}c_a(0)+(-1)^{1+1}c_a(1)\\&=-(a,0)+(a,0)=0\end{aligned}$

We must be very careful here; these are not vectors and the addition is not vector addition. These are merely points in the plane — $0$-cubes — and the addition is purely formal. Still, the same point shows up once with a positive and once with a negative sign, so it cancels out to give zero. Thus the boundary of $c_a$ is empty.
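The formal-sum bookkeeping is easy to mimic in code; here is a small sketch (my own, not from the post) representing a $0$-chain as a map from points to integer coefficients.

```python
import math

def boundary_of_1cube(c):
    # ∂c = (-1)^{1+0} c(0) + (-1)^{1+1} c(1): a formal sum of 0-cubes
    chain = {}
    for point, sign in ((c(0.0), -1), (c(1.0), +1)):
        chain[point] = chain.get(point, 0) + sign
    return {p: n for p, n in chain.items() if n != 0}

a = 2.0
def c_a(t):
    # round so the two endpoint values compare equal despite float error
    return (round(a * math.cos(2 * math.pi * t), 9),
            round(a * math.sin(2 * math.pi * t), 9))

# the same point appears with signs +1 and -1, so everything cancels:
assert boundary_of_1cube(c_a) == {}
```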

On the other hand, we will see that this circle cannot be the boundary of any $2$-chain. The obvious thing it might be the boundary of is the disk of radius $a$, but this cannot work because there is a hole at the origin, and the disk cannot cross that hole. However this does not constitute a proof; maybe there is some weird chain that manages to have the circle as its boundary without crossing the origin. But the proof will have to wait.

August 24, 2011

## An Example (part 1)

After all this talk about integration I think we need an example. This is going to take a while to do in gory detail, but I think it’s very illustrative.

First, let’s start with a function. Let $\theta$ be a function defined on the real plane $\mathbb{R}^2$ with the negative $x$-axis and origin cut out. We define it as the angle that the vector from the origin to $(x,y)$ makes with the positive $x$-axis, just like in polar coordinates. Anything on the positive $x$-axis gets the value $0$; anything in the upper half-plane gets a positive value of $\theta$, approaching $\pi$ as we get near the negative $x$-axis from above; anything in the lower half-plane gets a negative value of $\theta$, approaching $-\pi$ as we get near the negative $x$-axis from below. The function cannot be defined smoothly across the negative axis, nor can it be defined consistently at the origin.

What we do know is that $\tan(\theta)=\frac{y}{x}$. We will now take the differential of both sides of this equation. On the left, we take the partial derivative with respect to $x$:

$\displaystyle\begin{aligned}\frac{\partial}{\partial x}\tan(\theta(x,y))&=\frac{1}{\cos(\theta)^2}\frac{\partial\theta}{\partial x}\\&=\frac{x^2+y^2}{x^2}\frac{\partial\theta}{\partial x}\end{aligned}$

and a similar formula holding true for the partial derivative with respect to $y$. On the right, we calculate:

$\displaystyle\begin{aligned}\frac{\partial}{\partial x}\frac{y}{x}&=-\frac{y}{x^2}\\\frac{\partial}{\partial y}\frac{y}{x}&=\frac{1}{x}\end{aligned}$

Putting these all together we get

$\displaystyle\frac{x^2+y^2}{x^2}\frac{\partial\theta}{\partial x}dx+\frac{x^2+y^2}{x^2}\frac{\partial\theta}{\partial y}dy=-\frac{y}{x^2}dx+\frac{1}{x}dy$

Since $dx$ and $dy$ are independent we get two equations:

$\displaystyle\begin{aligned}\frac{x^2+y^2}{x^2}\frac{\partial\theta}{\partial x}&=-\frac{y}{x^2}\\\frac{x^2+y^2}{x^2}\frac{\partial\theta}{\partial y}&=\frac{1}{x}\end{aligned}$

which tell us:

$\displaystyle\begin{aligned}\frac{\partial\theta}{\partial x}&=-\frac{y}{x^2+y^2}\\\frac{\partial\theta}{\partial y}&=\frac{x}{x^2+y^2}\end{aligned}$

and so we have the differential:

$\displaystyle d\theta=-\frac{y}{x^2+y^2}dx+\frac{x}{x^2+y^2}dy$
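
These partials can be spot-checked numerically; here is a sketch (my addition) using `math.atan2` as the angle function, with central differences, staying away from the cut along the negative $x$-axis.

```python
import math

def theta(x, y):
    # the angle function, realized via atan2 (cut along the negative x-axis)
    return math.atan2(y, x)

h = 1e-6
for x, y in [(1.0, 0.3), (0.2, 1.5), (1.0, -2.0), (3.0, 0.0)]:
    r2 = x * x + y * y
    dth_dx = (theta(x + h, y) - theta(x - h, y)) / (2 * h)
    dth_dy = (theta(x, y + h) - theta(x, y - h)) / (2 * h)
    assert abs(dth_dx - (-y / r2)) < 1e-6
    assert abs(dth_dy - (x / r2)) < 1e-6
```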

Now we still can’t make sense of these formulas at $(0,0)$, but there’s no problem along the negative $x$-axis. In fact, if we’d chosen a different curve to cut along when we’d defined the $\theta$ function, we’d get the same formula for the differential. This suggests that we define the $1$-form

$\displaystyle\omega=-\frac{y}{x^2+y^2}dx+\frac{x}{x^2+y^2}dy$

on all of $\mathbb{R}^2\setminus\{(0,0)\}$. Some authors will even still call this “$d\theta$”, even though it cannot be the differential of any single smooth function defined on the whole punctured plane. We will soon see that this is the case.

Even so, the differential $d\omega$ is going to be identically zero. Away from the “branch curve” on which we cut in our setup — the negative real axis here — this should be obvious, because here we have $d\omega=d^2\theta=0$ since the square of the exterior derivative is automatically zero.

It would be hard to imagine it being nonzero along the branch curve, but just as an exercise let’s calculate. For the $dx$ term only the partial derivative with respect to $y$ matters — the one with respect to $x$ will give another $dx$ term which will cancel out — and similarly for the $dy$ term. So we calculate:

$\displaystyle\begin{aligned}\frac{\partial}{\partial y}\frac{-y}{x^2+y^2}&=\frac{(x^2+y^2)(-1)-(-y)(2y)}{(x^2+y^2)^2}\\&=\frac{-x^2+y^2}{(x^2+y^2)^2}\\\frac{\partial}{\partial x}\frac{x}{x^2+y^2}&=\frac{(x^2+y^2)-(x)(2x)}{(x^2+y^2)^2}\\&=\frac{-x^2+y^2}{(x^2+y^2)^2}\end{aligned}$

and thus:

$\displaystyle\begin{aligned}d\omega&=\frac{\partial}{\partial y}\frac{-y}{x^2+y^2}dy\wedge dx+\frac{\partial}{\partial x}\frac{x}{x^2+y^2}dx\wedge dy\\&=-\frac{-x^2+y^2}{(x^2+y^2)^2}dx\wedge dy+\frac{-x^2+y^2}{(x^2+y^2)^2}dx\wedge dy\\&=0\end{aligned}$

Just as asserted.
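
As a numeric double-check (my own addition): writing $\omega=P\,dx+Q\,dy$, the $dx\wedge dy$ coefficient of $d\omega$ is $\partial_x Q-\partial_y P$, and central differences confirm it vanishes even near the negative $x$-axis.

```python
def P(x, y):
    # the dx coefficient of omega
    return -y / (x * x + y * y)

def Q(x, y):
    # the dy coefficient of omega
    return x / (x * x + y * y)

# d(omega) = (dQ/dx - dP/dy) dx∧dy; the coefficient should vanish everywhere
h = 1e-5
for x, y in [(0.4, 0.9), (-1.0, 0.01), (-2.0, -0.5), (1.5, 0.0)]:
    dQdx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    dPdy = (P(x, y + h) - P(x, y - h)) / (2 * h)
    assert abs(dQdx - dPdy) < 1e-6
```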

August 22, 2011

## Stokes’ Theorem (proof part 2)

Now that we’ve proven Stokes’ theorem in the case of the standard cube, we can now tackle the general case.

Of course, we only have to handle the case of a general singular cube, since we defined the boundary operator $\partial$ to be additive; if $c$ is a general chain — a formal sum of singular cubes — then $\partial c$ is the same formal sum of the boundaries of these cubes. Since integration is also defined to be additive on the chain over which we integrate, everything works out:

$\displaystyle\begin{aligned}\int\limits_cd\omega&=\int\limits_{a_1c_1+\dots+a_lc_l}d\omega\\&=a_1\int\limits_{c_1}d\omega+\dots+a_l\int\limits_{c_l}d\omega\\&=a_1\int\limits_{\partial c_1}\omega+\dots+a_l\int\limits_{\partial c_l}\omega\\&=\int\limits_{a_1\partial c_1+\dots+a_l\partial c_l}\omega\\&=\int\limits_{\partial(a_1c_1+\dots+a_lc_l)}\omega\\&=\int\limits_{\partial c}\omega\end{aligned}$

where we have used the special case of singular cubes to pass from the second line to the third.

So let’s tackle this special case:

$\displaystyle\begin{aligned}\int\limits_{\partial c}\omega&=\sum\limits_{i=1}^k\sum\limits_{\alpha=0,1}(-1)^{i+\alpha}\int\limits_{c\circ I^{k-1}_{i,\alpha}}\omega\\&=\sum\limits_{i=1}^k\sum\limits_{\alpha=0,1}(-1)^{i+\alpha}\int\limits_{[0,1]^{k-1}}(c\circ I^{k-1}_{i,\alpha})^*\omega\\&=\sum\limits_{i=1}^k\sum\limits_{\alpha=0,1}(-1)^{i+\alpha}\int\limits_{[0,1]^{k-1}}{I^{k-1}_{i,\alpha}}^*c^*\omega\\&=\sum\limits_{i=1}^k\sum\limits_{\alpha=0,1}(-1)^{i+\alpha}\int\limits_{I^{k-1}_{i,\alpha}}c^*\omega\\&=\int\limits_{\partial([0,1]^k)}c^*\omega\\&=\int\limits_{[0,1]^k}dc^*\omega\\&=\int\limits_{[0,1]^k}c^*d\omega\\&=\int\limits_cd\omega\end{aligned}$

Basically, it all works out for the same reason parameterization invariance and the change of variables formula do. Passing from the boundary of the singular cube back to the boundary of the standard one transforms the integral one way; passing from the standard cube itself back to the singular cube undoes this transformation. And so Stokes’ theorem is proved.

August 20, 2011

## Stokes’ Theorem (proof part 1)

We turn now to the proof of Stokes’ theorem.

$\displaystyle\int\limits_cd\omega=\int\limits_{\partial c}\omega$

We start by considering the case where $c$ is the standard cube $[0,1]^k\subseteq\mathbb{R}^k$. Whipping out the definition of the boundary operator $\partial$, the integral on the right proceeds as follows:

$\displaystyle\int\limits_{\partial([0,1]^k)}\omega=\sum\limits_{j,\alpha}(-1)^{j+\alpha}\int\limits_{[0,1]^{k-1}}\left[{I^k_{j,\alpha}}^*\omega\right]\left(\frac{\partial}{\partial u^1},\dots,\frac{\partial}{\partial u^{k-1}}\right)$

Now any $k-1$-form $\omega$ can be written out as

$\displaystyle\omega=\sum\limits_{i=1}^kf^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k$

where each term omits exactly one of the basic $1$-forms. Since everything in sight — the differential operator and both integrals — is $\mathbb{R}$-linear, we can just use one of these terms. And so we can calculate the pullbacks:

$\displaystyle\begin{aligned}\left[{I^k_{j,\alpha}}^*f^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\right]\left(\frac{\partial}{\partial u^1},\dots,\frac{\partial}{\partial u^{k-1}}\right)&=\\(f^i\circ I^k_{j,\alpha})du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\left({I^k_{j,\alpha}}_*\frac{\partial}{\partial u^1},\dots,{I^k_{j,\alpha}}_*\frac{\partial}{\partial u^{k-1}}\right)&=\\(f^i\circ I^k_{j,\alpha})\det\left(\frac{\partial(v^p\circ I^k_{j,\alpha})}{\partial u^q}\right)&\end{aligned}$

It takes a bit of juggling with the definition of $I^k_{j,\alpha}$, but we can see that this determinant is $1$ if $j=i$ and $0$ otherwise. Roughly this is because $I^k_{j,\alpha}$ takes the $k-1$ basis vector fields of $\mathcal{T}[0,1]^{k-1}$ and turns them into all of the basis vector fields of $\mathcal{T}[0,1]^k$ except the $j$-th one. If $i\neq j$ then some basis $1$-form has to line up against some basis vector field with a different index and everything goes to zero, while if $i=j$ then they can all pair off in exactly one way.
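The determinant claim can be checked directly; here is a sketch of my own that builds the face maps $I^k_{j,\alpha}$, forms the matrix of partials $\partial(v^p\circ I^k_{j,\alpha})/\partial u^q$ with the row $p=i$ deleted (the face maps are affine, so the entries come out exactly), and confirms the determinant is $1$ when $i=j$ and $0$ otherwise.

```python
def face_map(k, j, alpha):
    # I^k_{j,alpha}: [0,1]^{k-1} -> [0,1]^k, inserting alpha in slot j (1-based)
    return lambda u: u[:j - 1] + (alpha,) + u[j - 1:]

def det(m):
    # Laplace expansion along the first row; fine for small matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def jac_det(k, i, j, alpha):
    # determinant of d(v^p ∘ I^k_{j,alpha})/du^q with the row p = i deleted;
    # the face map is affine, so the columns are exact images of basis vectors
    I = face_map(k, j, alpha)
    base = I(tuple(0 for _ in range(k - 1)))
    cols = [I(tuple(1 if r == q else 0 for r in range(k - 1)))
            for q in range(k - 1)]
    rows = [[cols[q][p] - base[p] for q in range(k - 1)]
            for p in range(k) if p != i - 1]
    return det(rows)

# the determinant is 1 exactly when i = j, and 0 otherwise
for k in (2, 3, 4):
    for alpha in (0, 1):
        for i in range(1, k + 1):
            for j in range(1, k + 1):
                assert jac_det(k, i, j, alpha) == (1 if i == j else 0)
```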

The upshot is that only the two faces of the cube in the $i$ direction contribute anything at all to the boundary integral, and we find

$\displaystyle\begin{aligned}\int\limits_{\partial([0,1]^k)}\omega=&(-1)^{i+1}\int\limits_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},1,u^i,\dots,u^{k-1})\,d(u^1,\dots,u^{k-1})\\&+(-1)^i\int\limits_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},0,u^i,\dots,u^{k-1})\,d(u^1,\dots,u^{k-1})\end{aligned}$

On the other side, we can calculate the differential of $\omega$:

$\displaystyle\begin{aligned}d\left(f^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\right)&=df^i\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=\left(\sum\limits_{j=1}^k\frac{\partial f^i}{\partial u^j}du^j\right)\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=\left(\frac{\partial f^i}{\partial u^i}du^i\right)\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=(-1)^{i-1}\frac{\partial f^i}{\partial u^i}du^1\wedge\dots\wedge du^k\end{aligned}$

The tricky bit here is that when $j\neq i$ there’s nowhere to put this brand new $du^j$, since it must collide with one of the other basic $1$-forms in the wedge. But when $j=i$ then it can slip right into the “hole” where we’ve left out $du^i$, at a cost of a factor of $(-1)^{i-1}$ to pull the $du^i$ across the first $i-1$ terms in the wedge.

With this result in hand, we calculate the interior integral:

$\displaystyle\int_{[0,1]^k}d\omega=(-1)^{i-1}\int_{[0,1]^k}\frac{\partial f^i}{\partial u^i}\,d(u^1,\dots,u^k)$

We can turn this into an iterated integral, which Fubini’s theorem tells us we can evaluate in any order we want:

$\displaystyle\begin{aligned}\int_{[0,1]^k}d\omega=&(-1)^{i-1}\int_{[0,1]^{k-1}}\int\limits_0^1\frac{\partial f^i}{\partial u^i}\,du^i\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\\=&(-1)^{i-1}\int_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},1,u^{i+1},\dots,u^k)\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\\&-(-1)^{i-1}\int_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},0,u^{i+1},\dots,u^k)\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\end{aligned}$

which it should be clear is the same as our answer for the boundary integral above. Thus Stokes’ theorem holds for the standard cube.
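
As a numeric illustration (my own, with a hypothetical test form, not from the post): take $\omega=x^2y\,dy$ on the standard square, so $d\omega=2xy\,dx\wedge dy$; only the two faces with $x$ held constant contribute to the boundary side, with signs $(-1)^{1+1}=+1$ at $x=1$ and $(-1)^{1+0}=-1$ at $x=0$, and both sides come out to $1/2$.

```python
# omega = x²y dy on the standard square (a hypothetical test form), so
# d(omega) = 2xy dx∧dy; both sides of Stokes' theorem should equal 1/2.
n = 400

def f(x, y):
    return x * x * y

# interior integral of the d(omega) coefficient 2xy over [0,1]², midpoint rule
interior = sum(2 * ((i + 0.5) / n) * ((j + 0.5) / n)
               for i in range(n) for j in range(n)) / (n * n)

# boundary integral: omega has only a dy term, so only the faces with x held
# constant contribute, with sign +1 at x = 1 and -1 at x = 0
boundary = (sum(f(1.0, (j + 0.5) / n) for j in range(n)) / n
            - sum(f(0.0, (j + 0.5) / n) for j in range(n)) / n)

assert abs(interior - 0.5) < 1e-9
assert abs(boundary - 0.5) < 1e-9
```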

August 18, 2011

## Stokes’ Theorem (statement)

Sorry for the little hiatus. I’ve been busier than usual.

Anyway, now we come to Stokes’ theorem. You may remember something by this name if you took a good multivariable calculus course, but this is not quite the same thing. In fact, the Stokes’ theorem you remember is connected to but one special case of this theorem, which also subsumes Gauss’ theorem, Green’s theorem, and the fundamental theorem of calculus, all in one neat little package. The exact details of the connection, though, require us to move into the realm of differential geometry, so we’ll have to come back to them later.

But anyway, on to the theorem! We know how to integrate a differential $k$-form $\omega$ over a $k$-chain $c$. We also have a differential operator on differential forms and a boundary operator on chains. We can put these together in two ways: either we start with a $k-1$-form $\omega$, take its exterior derivative to get the $k$-form $d\omega$, then integrate that over the $k$-chain $c$; or we take the boundary of $c$ to get the $k-1$-chain $\partial c$, then integrate $\omega$ over that. What Stokes’ theorem asserts is that these two give the same answer. As a formula:

$\displaystyle\int\limits_cd\omega=\int\limits_{\partial c}\omega$

In a hand-wavy, conceptual way of putting it: integrating a differential form over the boundary of a region is the same as integrating its derivative over the interior. Indeed, if you look back over the results I mentioned above — even just the fundamental theorem of calculus — you can see this concept at work.
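For instance (a quick check of my own), with a $0$-form $f$ and the $1$-cube $c(t)=a+(b-a)t$, the theorem reads $\int_c df=f(b)-f(a)$, which is exactly the fundamental theorem of calculus.

```python
import math

# Stokes for a 1-cube c(t) = a + (b-a)t and a 0-form f reads
# ∫_c df = f(b) - f(a): the fundamental theorem of calculus.
a, b = 0.25, 1.75
n = 100000

# midpoint-rule approximation of ∫_a^b f'(x) dx with f = sin, f' = cos
integral = sum(math.cos(a + (b - a) * (k + 0.5) / n)
               for k in range(n)) * (b - a) / n

assert abs(integral - (math.sin(b) - math.sin(a))) < 1e-8
```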

August 17, 2011

## Functoriality of Cubic Singular Homology

We want to show that the cubic singular homology we’ve constructed is actually a functor. That is, given a smooth map $f:M\to N$ we want a chain map $C_k(f):C_k(M)\to C_k(N)$, which then will induce a map on homology: $H_k(f):H_k(M)\to H_k(N)$.

The definition couldn’t be simpler. We really only need to define the image of a singular $k$-cube $c$ in $M$ and extend by linearity. And since $c:I^k\to M$ is a function, we can just compose it with $f$ to get a singular $k$-cube $f\circ c:I^k\to N$. What’s the $(i,j)$ face of this singular $k$-cube? Why it’s

$\displaystyle(f\circ c)_{i,j}=(f\circ c)\circ I^k_{i,j}=f\circ(c\circ I^k_{i,j})=f\circ c_{i,j}$

and so we find that this map commutes with the boundary operation $\partial$, making it a chain map.

We should still check functoriality. The identity map clearly gives us the identity chain map. And if $f:M_1\to M_2$ and $g:M_2\to M_3$ are two smooth maps, then we can check

$\displaystyle\left[C_k(g\circ f)\right](c)=(g\circ f)\circ c=g\circ(f\circ c)=\left[C_k(g)\circ C_k(f)\right](c)$

which makes this construction a covariant functor.
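
These identities are concrete enough to check mechanically; here is a sketch (my own, with arbitrary test cubes and an arbitrary test map) verifying that the induced map on $1$-chains commutes with the boundary operator.

```python
# Chains as lists of (coefficient, cube); a singular 1-cube is a function
# [0,1] -> R^2.  The cubes and the map f below are arbitrary test data.
def push(f, chain):
    # C_k(f): compose every cube in the chain with f
    return [(a, lambda t, c=c: f(c(t))) for a, c in chain]

def boundary(chain):
    # ∂ of a 1-chain: the formal sum -c(0) + c(1) over its cubes
    out = {}
    for a, c in chain:
        for point, sign in ((c(0.0), -1), (c(1.0), +1)):
            out[point] = out.get(point, 0) + sign * a
    return {p: n for p, n in out.items() if n != 0}

def f(p):
    x, y = p
    return (x + 2 * y, x * y)

chain = [(1, lambda t: (t, 1.0 - t)), (2, lambda t: (1.0 - t, t))]

# the chain map commutes with the boundary operator: ∂(C_1(f)c) = C_0(f)(∂c)
lhs = boundary(push(f, chain))
rhs = {}
for p, n in boundary(chain).items():
    q = f(p)
    rhs[q] = rhs.get(q, 0) + n
assert lhs == {p: n for p, n in rhs.items() if n != 0}
```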

August 10, 2011