# The Unapologetic Mathematician

## The Lie Derivative on Forms

We’ve defined the Lie derivative $L_XY$ of one vector field $Y$ by another, $X$. This worked by using the flow of $X$ to compare nearby points, and by using the derivative of the flow to translate vectors.

Well now we know how to translate $k$-forms by pulling back, and thus we can define another Lie derivative:

$\displaystyle L_X\omega=\lim\limits_{t\to0}\frac{1}{t}\left((\Phi_t)^*(\omega)-\omega\right)$

What happens if $\omega$ is a $0$-form — a function $f$? We check

\displaystyle\begin{aligned}\left[L_Xf\right](p)&=\lim\limits_{t\to0}\frac{1}{t}\left(\left[(\Phi_t)^*(f)\right](p)-f(p)\right)\\&=\lim\limits_{t\to0}\frac{1}{t}\left(f\left(\Phi_t(p)\right)-f(p)\right)\\&=X_p(f)=Xf(p)\end{aligned}

That is, the Lie derivative by $X$ acts on $\Omega^0(M)$ exactly the same as the vector field $X$ does itself.
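Nothing in this computation is hard to check by machine. Here is an illustrative sketch using sympy, where the rotation field and the test function are arbitrary choices, and the limit defining the Lie derivative is computed as a $t$-derivative of the pulled-back function at $t=0$ (equivalent for a smooth flow):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# The rotation field X = -y d/dx + x d/dy; its flow rotates the plane by angle t
flow = (x*sp.cos(t) - y*sp.sin(t), x*sp.sin(t) + y*sp.cos(t))

f = x**2 + 3*y  # an arbitrary smooth test function

# Pull back f along the flow and differentiate at t = 0 ...
pullback = f.subs([(x, flow[0]), (y, flow[1])], simultaneous=True)
L_X_f = sp.diff(pullback, t).subs(t, 0)

# ... which should agree with applying X to f directly
Xf = -y*sp.diff(f, x) + x*sp.diff(f, y)

assert sp.simplify(L_X_f - Xf) == 0
```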

We can also say that the Lie derivative by $X$ is a degree-zero derivation of the algebra $\Omega(M)$. That is, it’s a real-linear transformation, and it satisfies the Leibniz rule:

$\displaystyle L_X(\alpha\wedge\beta)=\left(L_X\alpha\right)\wedge\beta+\alpha\wedge\left(L_X\beta\right)$

for any $k$-form $\alpha$ and $l$-form $\beta$. Linearity is straightforward, and given linearity the Leibniz rule follows if we can show

$\displaystyle L_X(\alpha_1\wedge\dots\wedge\alpha_k)=\sum\limits_{i=1}^k\alpha_1\wedge\dots\wedge\left(L_X\alpha_i\right)\wedge\dots\wedge\alpha_k$

for $1$-forms $\alpha_i$. Indeed, we can write $\alpha$ and $\beta$ as linear combinations of such $k$- and $l$-fold wedges, and then the Leibniz rule is obvious.

So, let us calculate:

\displaystyle\begin{aligned}L_X\left(\alpha_1\wedge\dots\wedge\alpha_k\right)=&\lim\limits_{t\to0}\frac{1}{t}\left((\Phi_t)^*\left(\alpha_1\wedge\dots\wedge\alpha_k\right)-\alpha_1\wedge\dots\wedge\alpha_k\right)\\=&\lim\limits_{t\to0}\frac{1}{t}\left(\left((\Phi_t)^*\alpha_1\right)\wedge\dots\wedge\left((\Phi_t)^*\alpha_k\right)-\alpha_1\wedge\dots\wedge\alpha_k\right)\\=&\lim\limits_{t\to0}\frac{1}{t}\left(\left((\Phi_t)^*\alpha_1\right)\wedge\dots\wedge\left((\Phi_t)^*\alpha_k\right)-\alpha_1\wedge\dots\wedge\alpha_{k-1}\wedge\left((\Phi_t)^*\alpha_k\right)\right)\\&+\lim\limits_{t\to0}\frac{1}{t}\left(\alpha_1\wedge\dots\wedge\alpha_{k-1}\wedge\left((\Phi_t)^*\alpha_k\right)-\alpha_1\wedge\dots\wedge\alpha_k\right)\\=&\lim\limits_{t\to0}\left(\frac{1}{t}\left(\left((\Phi_t)^*\alpha_1\right)\wedge\dots\wedge\left((\Phi_t)^*\alpha_{k-1}\right)-\alpha_1\wedge\dots\wedge\alpha_{k-1}\right)\wedge\left((\Phi_t)^*\alpha_k\right)\right)\\&+\alpha_1\wedge\dots\wedge\alpha_{k-1}\wedge\lim\limits_{t\to0}\frac{1}{t}\left(\left((\Phi_t)^*\alpha_k\right)-\alpha_k\right)\\=&\left(\lim\limits_{t\to0}\frac{1}{t}\left(\left((\Phi_t)^*\alpha_1\right)\wedge\dots\wedge\left((\Phi_t)^*\alpha_{k-1}\right)-\alpha_1\wedge\dots\wedge\alpha_{k-1}\right)\right)\wedge\alpha_k\\&+\alpha_1\wedge\dots\wedge\alpha_{k-1}\wedge L_X\alpha_k\\&=L_X\left(\alpha_1\wedge\dots\wedge\alpha_{k-1}\right)\wedge\alpha_k+\alpha_1\wedge\dots\wedge\alpha_{k-1}\wedge L_X\alpha_k\end{aligned}

So we see how we can peel off one of the $1$-forms. A simple induction gives us the general case.
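The whole derivation can also be verified symbolically in a low-dimensional case. The following sketch (illustrative names throughout; the rotation flow on $\mathbb{R}^2$ again, with $1$-forms and $2$-forms stored as their coefficient arrays) checks $L_X(\alpha\wedge\beta)=(L_X\alpha)\wedge\beta+\alpha\wedge(L_X\beta)$ for a pair of arbitrary $1$-forms:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
p = (x, y)

# Flow of the rotation field X = -y d/dx + x d/dy
flow = (x*sp.cos(t) - y*sp.sin(t), x*sp.sin(t) + y*sp.cos(t))
J = sp.Matrix(2, 2, lambda i, j: sp.diff(flow[i], p[j]))  # Jacobian of the flow

def lie_1form(a):
    """L_X of the 1-form a[0] dx + a[1] dy, via d/dt of the pullback at t=0."""
    comp = lambda g: g.subs([(x, flow[0]), (y, flow[1])], simultaneous=True)
    pulled = [sum(comp(a[i]) * J[i, j] for i in range(2)) for j in range(2)]
    return [sp.diff(c, t).subs(t, 0) for c in pulled]

def lie_2form(w):
    """L_X of the 2-form w dx^dy; its pullback coefficient picks up det(J)."""
    comp = w.subs([(x, flow[0]), (y, flow[1])], simultaneous=True)
    return sp.diff(comp * J.det(), t).subs(t, 0)

wedge = lambda a, b: a[0]*b[1] - a[1]*b[0]  # coefficient of dx^dy

alpha = [x*y, sp.sin(x)]
beta = [y**2, x + y]

lhs = lie_2form(wedge(alpha, beta))
rhs = wedge(lie_1form(alpha), beta) + wedge(alpha, lie_1form(beta))
assert sp.simplify(lhs - rhs) == 0
```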

July 13, 2011

## Pulling Back Forms

We’ve just seen that smooth real-valued functions are differential forms with grade zero. We also know that functions pull back along smooth maps; if $g\in\mathcal{O}_NV$ is a smooth function on an open subset $V\subseteq N$ and if $f:M\to N$ is a smooth map, then $g\circ f:f^{-1}(V)\to\mathbb{R}$ is a smooth function — $g\circ f\in\mathcal{O}_{f^{-1}(V)}M$.

It turns out that all $k$-forms pull back in a similar way. But the “value” of a $k$-form doesn’t only depend on a point, but on $k$ vectors at that point. Functions pull back because smooth maps push points forward. It turns out that vectors push forward as well, by the derivative. And so we can define the pullback of a $k$-form $\alpha$:

$\displaystyle \left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)=\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))$

Here $\alpha$ is a $k$-form on a region $V\subseteq N$, $p$ is a point in $f^{-1}(V)\subseteq M$, and the $v_i$ are $k$ vectors in $\mathcal{T}_pM$. Since the differential $f_{*p}:\mathcal{T}_pM\to\mathcal{T}_{f(p)}N$ is a linear function and $\alpha(f(p))$ is a multilinear function on $\mathcal{T}_{f(p)}N^{\otimes k}$, $\left[f^*\alpha\right](p)$ is a multilinear function on $\mathcal{T}_pM^{\otimes k}$, as asserted.
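For a concrete instance of this definition, we can take the familiar polar-coordinates map $f(r,\theta)=(r\cos\theta,r\sin\theta)$ as an example (the map and the form are arbitrary choices). Since the pullback feeds $f_{*p}(v)$ to $\alpha$, the components of a $1$-form transform by the transpose of the Jacobian matrix of $f$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# An example smooth map f(r, theta) = (r cos theta, r sin theta) into the plane
f = (r*sp.cos(th), r*sp.sin(th))
J = sp.Matrix(2, 2, lambda i, j: sp.diff(f[i], (r, th)[j]))  # the differential f_*

# Pull back the 1-form alpha = dx, with components (1, 0) in the target:
# [f^* alpha](v) = alpha(f_* v), so components transform by the transpose of J
alpha = sp.Matrix([1, 0])
pulled = J.T * alpha

# f^*(dx) = cos(theta) dr - r sin(theta) dtheta, i.e. exactly d(r cos theta)
assert list(pulled) == [sp.cos(th), -r*sp.sin(th)]
```

That the answer is $d(r\cos\theta)$ is no accident: pulling back commutes with taking differentials of functions.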

This pullback $f^*:\Omega_N(V)\to\Omega_M(f^{-1}(V))$ is a homomorphism of graded algebras. Since it sends $k$-forms to $k$-forms, it has degree zero. To show that it’s a homomorphism, we must verify that it preserves addition, scalar multiplication by functions, and exterior multiplication. If $\alpha$ and $\beta$ are $k$-forms in $\Omega_N(V)$, we can check

\displaystyle\begin{aligned}\left[\left[f^*(\alpha+\beta)\right](p)\right](v_1,\dots,v_k)&=\left[[\alpha+\beta](f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\alpha(f(p))+\beta(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))+\left[\beta(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)+\left[\left[f^*\beta\right](p)\right](v_1,\dots,v_k)\end{aligned}

so $f^*(\alpha+\beta)=f^*\alpha+f^*\beta$. Also if $g\in\mathcal{O}(V)$ we can check

\displaystyle\begin{aligned}\left[\left[f^*(g\alpha)\right](p)\right](v_1,\dots,v_k)&=\left[[g\alpha](f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[g(f(p))\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=g(f(p))\left[\alpha(f(p))\right](f_{*p}(v_1),\dots,f_{*p}(v_k))\\&=\left[f^*g\right](p)\left[\left[f^*\alpha\right](p)\right](v_1,\dots,v_k)\end{aligned}

As for exterior multiplication, we will use the fact that we can write any $k$-form $\alpha$ as a linear combination of $k$-fold products of $1$-forms. Thus we only have to check that

\displaystyle\begin{aligned}\left[\left[f^*(\alpha^1\wedge\dots\wedge\alpha^k)\right](p)\right](v_1,\dots,v_k)&=\left[(\alpha^1\wedge\dots\wedge\alpha^k)(f(p))\right](f_{*p}v_1,\dots,f_{*p}v_k)\\&=\det\left(\left[\alpha^i(f(p))\right](f_{*p}v_j)\right)\\&=\det\left(\left[\left[f^*\alpha^i\right](p)\right](v_j)\right)\\&=\left[\left[(f^*\alpha^1)\wedge\dots\wedge(f^*\alpha^k)\right](p)\right](v_1,\dots,v_k)\end{aligned}

Thus $f^*$ preserves the wedge product as well, and so gives us a degree-zero homomorphism of the exterior algebras.
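We can spot-check the wedge-preservation step in a concrete case. In this sketch (illustrative, with the polar-coordinates map playing the role of $f$) the pullbacks of $dx$ and $dy$ are computed separately and wedged, then compared with pulling back $dx\wedge dy$ directly, whose single coefficient transforms by the determinant of the differential:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

f = (r*sp.cos(th), r*sp.sin(th))  # the polar-coordinates map again
J = sp.Matrix(2, 2, lambda i, j: sp.diff(f[i], (r, th)[j]))

# Pull back dx and dy separately: a 1-form's components transform by J^T
f_dx = J.T * sp.Matrix([1, 0])
f_dy = J.T * sp.Matrix([0, 1])

# Wedge of the pullbacks, as a coefficient of dr^dtheta
wedge = f_dx[0]*f_dy[1] - f_dx[1]*f_dy[0]

# Pullback of dx^dy directly: its single coefficient transforms by det(J)
direct = J.det()

assert sp.simplify(wedge - direct) == 0
```

Both sides come out to $r$, recovering the familiar $f^*(dx\wedge dy)=r\,dr\wedge d\theta$.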

July 13, 2011

## The Algebra of Differential Forms

We’ve defined the exterior bundle $\Lambda^*_k(M)$ over a manifold $M$. Given any open $U\subseteq M$ we’ve also defined a $k$-form over $U$ to be a section of this bundle: a function $\alpha:U\to\Lambda^*_k(U)$ such that $\pi\circ\alpha=I_U:U\to U$. We write $\Omega^k(U)=\Omega_M^k(U)$ for the collection of all such $k$-forms over $U$. It’s straightforward to see that this defines a sheaf on $M$.

This isn’t just a sheaf of sets; it’s a sheaf of modules over the structure sheaf $\mathcal{O}_M$ of smooth functions on $M$. We define the necessary operations pointwise:

\displaystyle\begin{aligned}\left[\alpha+\beta\right](p)&=\alpha(p)+\beta(p)\\\left[f\alpha\right](p)&=f(p)\alpha(p)\end{aligned}

where the right hand sides are defined by the vector space structures on the respective fibers $\Lambda_k(\mathcal{T}^*_pM)$.

We can go even further and define the sheaf of differential forms

$\displaystyle\Omega_M(U)=\Omega(U)=\bigoplus\limits_{k=0}^n\Omega^k(U)$

This sheaf $\Omega_M$ is not just a sheaf of modules over $\mathcal{O}_M$, it’s a sheaf of algebras. For an $\alpha\in\Omega^k(U)$ and a $\beta\in\Omega^l(U)$, we define their exterior product pointwise:

$\displaystyle\left[\alpha\wedge\beta\right](p)=\alpha(p)\wedge\beta(p)$

In fact, this is a graded algebra, and the multiplication has degree zero:

$\displaystyle\wedge:\Omega^k(U)\otimes\Omega^l(U)\to\Omega^{k+l}(U)$

Even better, this is a unital algebra. We see this by considering the zero grade, since the unit must live in the zero grade. Indeed, each fiber of $\Lambda^*_0(U)$ is just $\mathbb{R}$, so sections of $\Lambda^*_0(U)$ are simply functions on $U$. That is, $\Omega^0(U)\cong\mathcal{O}(U)$, and the constant function $1$ serves as the unit. Given a function $f\in\mathcal{O}(U)$ we will just write $f\alpha$ instead of $f\wedge\alpha$.
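Since all of these operations are defined pointwise, they translate directly into coefficient arithmetic. A minimal sketch on $\mathbb{R}^2$ (the names and sample forms are arbitrary), checking that wedging with a $0$-form is just scalar multiplication by that function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# 1-forms on R^2 as coefficient pairs for (dx, dy); 2-forms as the single
# coefficient of dx^dy; 0-forms are just smooth functions
add = lambda a, b: tuple(ai + bi for ai, bi in zip(a, b))    # [alpha+beta](p)
scale = lambda f, a: tuple(f*ai for ai in a)                 # [f alpha](p)
wedge11 = lambda a, b: a[0]*b[1] - a[1]*b[0]                 # 1-form ^ 1-form

alpha = (x*y, sp.sin(x))
beta = (y, x**2)
f = x + y

# wedging with the 0-form f is the same as scaling either factor by f
assert sp.simplify(wedge11(scale(f, alpha), beta) - f*wedge11(alpha, beta)) == 0
```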

July 12, 2011

## Tensor Fields and Multilinear Maps

A tensor field $T$ over a manifold $M$ gives us a tensor $T_p$ at each point $p\in M$. And we know that $T_p$ can be considered as a multilinear map. Specifically, if $T$ is a tensor field of type $(r,s)$, then we find

$\displaystyle T_p\in\mathcal{T}_pM^{\otimes r}\otimes\mathcal{T}^*_pM^{\otimes s}$

which we can interpret as a multilinear map:

$\displaystyle T_p:\mathcal{T}^*_pM^{\times r}\times\mathcal{T}_pM^{\times s}\to\mathbb{R}$

where each of the $r$ copies of $\mathcal{T}_pM$ pairs with a covector, each of the $s$ copies of $\mathcal{T}^*_pM$ pairs with a vector, and multilinearity means that $T_p$ is linear in each variable separately.

As we let $p$ vary over $M$, we can interpret $T$ as defining a function which takes $r$ covector fields and $s$ vector fields and gives a function. Explicitly:

$\displaystyle\left[T(\alpha^1,\dots,\alpha^r,X_1,\dots,X_s)\right](p)=T_p(\alpha^1(p),\dots,\alpha^r(p),X_1(p),\dots,X_s(p))$

And, in particular, this function is multilinear over $\mathcal{O}M$. That is,

\displaystyle\begin{aligned}\left[T(f\alpha^1,\dots,\alpha^r,X_1,\dots,X_s)\right](p)&=T_p(\left[f\alpha^1\right](p),\dots,\alpha^r(p),X_1(p),\dots,X_s(p))\\&=T_p(f(p)\alpha^1(p),\dots,\alpha^r(p),X_1(p),\dots,X_s(p))\\&=f(p)T_p(\alpha^1(p),\dots,\alpha^r(p),X_1(p),\dots,X_s(p))\\&=\left[fT(\alpha^1,\dots,\alpha^r,X_1,\dots,X_s)\right](p)\end{aligned}

And a similar calculation holds for any of the other variables, vector or covector.

So each tensor field gives us a multilinear function $T:\mathfrak{X}^*M^{\times r}\times\mathfrak{X}M^{\times s}\to\mathcal{O}M$, and this multilinearity holds not only over $\mathbb{R}$ but over $\mathcal{O}M$ as well.

Conversely, let $T:\mathfrak{X}^*M^{\times r}\times\mathfrak{X}M^{\times s}\to\mathcal{O}M$ be an $\mathbb{R}$-multilinear function. If it’s also linear over $\mathcal{O}M$ in each variable, then it “lives locally”. That is, if $\alpha^i(p)=\beta^i(p)$ and $X_j(p)=Y_j(p)$ then

$\displaystyle\left[T(\alpha^1,\dots,\alpha^r,X_1,\dots,X_s)\right](p)=\left[T(\beta^1,\dots,\beta^r,Y_1,\dots,Y_s)\right](p)$

and so at each $p$ there is some tensor $T_p\in T^r_s\left(\mathcal{T}_pM\right)$ so that $T$ is a tensor field.

This is as distinguished from things like differential operators — $X\mapsto L_YX$, for instance — which fail on both counts. On the one hand, we can calculate

\displaystyle\begin{aligned}L_Y(fX)&=[Y,fX]\\&=f[Y,X]+Y(f)X\\&=fL_Y(X)+Y(f)X\end{aligned}

which picks up an extra term. It’s $\mathbb{R}$-linear but not $\mathcal{O}M$-linear. On the other hand, the value of this operator at $p$ doesn’t just depend on the value of $X$ at $p$, but on how $X$ changes through $p$. That is, this operator does not “live locally”, and is not a tensor field.
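The failure of $\mathcal{O}M$-linearity is easy to exhibit symbolically. This sketch (arbitrary sample fields on $\mathbb{R}^2$; the helper `bracket` is an illustrative name) confirms the identity $L_Y(fX)=[Y,fX]=f[Y,X]+Y(f)X$, whose second term is exactly the obstruction:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def bracket(X, Y):
    """Lie bracket [X, Y] of vector fields given as component lists on R^2."""
    d = lambda g, V: V[0]*sp.diff(g, x) + V[1]*sp.diff(g, y)
    return [d(Y[i], X) - d(X[i], Y) for i in range(2)]

X = [y, x*y]           # sample vector fields (arbitrary choices)
Y = [x**2, sp.sin(y)]
f = x + y**2           # sample smooth function

LY_fX = bracket(Y, [f*X[0], f*X[1]])   # L_Y(fX) = [Y, fX]
f_LYX = [f*c for c in bracket(Y, X)]   # f [Y, X]
Yf = Y[0]*sp.diff(f, x) + Y[1]*sp.diff(f, y)
extra = [Yf*c for c in X]              # the extra term Y(f) X

# L_Y(fX) = f L_Y(X) + Y(f) X, component by component
assert sp.simplify(LY_fX[0] - (f_LYX[0] + extra[0])) == 0
assert sp.simplify(LY_fX[1] - (f_LYX[1] + extra[1])) == 0
```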

To prove this assertion, it will suffice to deal with the case where $T$ takes a single vector variable $X$, and we only need to verify that if $X_p=0$ then $\left[T(X)\right](p)=0$. Let $(U,x)$ be a chart around $p$, and write

$\displaystyle X=\sum\limits_{i=1}^nf^i\frac{\partial}{\partial x^i}$

where by assumption each $f^i(p)=0$. We let $V$ be a neighborhood of $p$ whose closure is contained in $U$. We know we can find a smooth bump function $\phi$ supported in $U$ and with $\phi(q)=1$ for all $q\in\bar{V}$.

Now we define vector fields $X_i=\phi\frac{\partial}{\partial x^i}$ on $U$ and $0$ on $M\setminus U$. Similarly we define $g^i=\phi f^i$ on $U$ and $0$ on $M\setminus U$. Then we can write

$\displaystyle X=\phi^2X+(1-\phi^2)X=\sum\limits_{i=1}^ng^iX_i+(1-\phi^2)X$

and thus

\displaystyle\begin{aligned}\left[T(X)\right](p)&=\sum\limits_{i=1}^ng^i(p)\left[T(X_i)\right](p)+(1-\phi(p)^2)\left[T(X)\right](p)\\&=\sum\limits_{i=1}^n\phi(p)f^i(p)\left[T(X_i)\right](p)+(1-\phi(p)^2)\left[T(X)\right](p)\\&=\sum\limits_{i=1}^n1\cdot0\cdot\left[T(X_i)\right](p)+(1-1^2)\left[T(X)\right](p)=0\end{aligned}

as asserted.

July 9, 2011

## Change of Variables for Tensor Fields

We’ve seen that given a local coordinate patch $(U,x)$ we can decompose tensor fields in terms of the coordinate bases $\frac{\partial}{\partial x^i}$ and $dx^i$ on $\mathcal{T}_pM$ and $\mathcal{T}^*_pM$, respectively. But what happens if we want to pass from the $x$-coordinate system to another coordinate system $y$?

For vectors and covectors, we know the answers. We pass from the $x$-coordinate basis to the $y$-coordinate basis of $\mathcal{T}_pM$ by using a Jacobian:

$\displaystyle\frac{\partial}{\partial x^i}=\sum\limits_{j=1}^n\frac{\partial y^j}{\partial x^i}\frac{\partial}{\partial y^j}$

where we calculate the coefficients by writing the coordinate function $y^j$ in terms of the $x$ coordinates. That is, we’re calculating the Jacobian of the function $y\circ x^{-1}:\mathbb{R}^n\to\mathbb{R}^n$.

Similarly, we pass from the $x$-coordinate basis to the $y$-coordinate basis of $\mathcal{T}^*_pM$ by using another Jacobian:

$\displaystyle dx^i=\sum\limits_{j=1}^n\frac{\partial x^i}{\partial y^j}dy^j$

Where here we use the Jacobian of the inverse transformation $x\circ y^{-1}:\mathbb{R}^n\to\mathbb{R}^n$.

Since we build up the coordinates on the tensor bundles as the canonical ones induced on the tensor spaces by the coordinate bases on $\mathcal{T}_pM$ and $\mathcal{T}^*_pM$, we immediately get coordinate transforms for all these bundles.

As one example in particular, given the basis $\{dx^i\}$ and the basis $\{dy^j\}$ on a region $U$ where both coordinate systems are defined, we can build up two “top forms” in $\Lambda^*_n(U)$ — top, since $n$ is the highest possible degree of a differential form. These are $dx^1\wedge\dots\wedge dx^n$ and $dy^1\wedge\dots\wedge dy^n$, and it turns out there’s a nice formula relating them. We just work it out from the formula above:

\displaystyle\begin{aligned}dx^1\wedge\dots\wedge dx^n&=\left(\sum\limits_{j_1=1}^n\frac{\partial x^1}{\partial y^{j_1}}dy^{j_1}\right)\wedge\dots\wedge\left(\sum\limits_{j_n=1}^n\frac{\partial x^n}{\partial y^{j_n}}dy^{j_n}\right)\\&=\sum\limits_{j_1,\dots,j_n=1}^n\frac{\partial x^1}{\partial y^{j_1}}\dots\frac{\partial x^n}{\partial y^{j_n}}dy^{j_1}\wedge\dots\wedge dy^{j_n}\\&=\sum\limits_{\pi\in S_n}\prod\limits_{i=1}^n\frac{\partial x^i}{\partial y^{\pi(i)}}dy^{\pi(1)}\wedge\dots\wedge dy^{\pi(n)}\\&=\left(\sum\limits_{\pi\in S_n}\prod\limits_{i=1}^n\frac{\partial x^i}{\partial y^{\pi(i)}}\mathrm{sgn}(\pi)\right)dy^1\wedge\dots\wedge dy^n\\&=\det\left(\frac{\partial x^i}{\partial y^j}\right)dy^1\wedge\dots\wedge dy^n\end{aligned}

That is, the two forms differ at each point by a factor of the Jacobian determinant at that point. This is the differential topology version of the change of basis formula for top forms in exterior algebras.
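As a sanity check on this formula, we can compute the Jacobian determinant for the familiar change from spherical coordinates $(r,\theta,\phi)$ to Cartesian ones, where the classical answer is $r^2\sin\theta$. An illustrative sympy sketch:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates x^i written in terms of the spherical coordinates y^j
cart = (r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th))
sph = (r, th, ph)

# The Jacobian matrix (dx^i / dy^j) of the change of variables
J = sp.Matrix(3, 3, lambda i, j: sp.diff(cart[i], sph[j]))

# dx^dy^dz = det(J) dr^dtheta^dphi
jac = sp.simplify(J.det())
assert sp.simplify(jac - r**2*sp.sin(th)) == 0
```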

July 8, 2011

## Identifying Tensor Fields

Just as for vector fields, we need a good condition to identify tensor fields in the wild. And the condition we will use is similar: if $T$ is a smooth tensor field of type $(r,s)$, then for any coordinate patch $(U,x)$ in the domain of $T$, we should be able to write out

$\displaystyle T\vert_U=\sum\limits_{i_1,\dots,i_r,j_1,\dots,j_s=1}^nT^{i_1\dots i_r}{}_{j_1\dots j_s}\frac{\partial}{\partial x^{i_1}}\otimes\dots\otimes\frac{\partial}{\partial x^{i_r}}\otimes dx^{j_1}\otimes\dots\otimes dx^{j_s}$

for some smooth functions $T^{i_1\dots i_r}{}_{j_1\dots j_s}$ on $U$. Conversely, this formula defines a smooth tensor field on $U$.

Indeed, we can find these coefficient functions by evaluation:

$\displaystyle T^{i_1\dots i_r}{}_{j_1\dots j_s}=T\left(dx^{i_1},\dots,dx^{i_r},\frac{\partial}{\partial x^{j_1}},\dots,\frac{\partial}{\partial x^{j_s}}\right)$

for, using this definition, if we plug these coordinate vector fields and coordinate covector fields into either the left or the right side of the expression above, we will get the same answer. Any vector or covector fields on $U$ can be written as linear combinations of these coordinate fields with smooth functions as coefficients, and the multilinear properties of tensors ensure that both sides get the same value no matter what fields we evaluate them on.

Similarly, if $\alpha$ is a differential $k$-form and $(U,x)$ is a coordinate patch within its domain, then we can write

$\displaystyle\alpha\vert_U=\sum\limits_{1\leq i_1<\dots<i_k\leq n}\alpha_{i_1\dots i_k}dx^{i_1}\wedge\dots\wedge dx^{i_k}$

for some smooth functions $\alpha_{i_1\dots i_k}$ on $U$. The proof in this case is similar, following from the definition

$\displaystyle\alpha_{i_1\dots i_k}=\alpha\left(\frac{\partial}{\partial x^{i_1}},\dots,\frac{\partial}{\partial x^{i_k}}\right)$

In this case we can pick the indices to be strictly increasing because of the antisymmetry of the tensors.
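This evaluation recipe is easy to mimic in coordinates. Here is a sketch on $\mathbb{R}^3$ (the particular $2$-form is an arbitrary example) storing a $2$-form as an antisymmetric matrix of coefficient functions and recovering the $\alpha_{i_1i_2}$ by plugging in the coordinate basis vectors:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A sample 2-form on R^3: alpha(u, v) = u^T A v with A antisymmetric, so the
# entries above the diagonal are the coefficients of dx^i ^ dx^j for i < j
A = sp.Matrix([[0, x3, -x2], [-x3, 0, x1], [x2, -x1, 0]])

def alpha(u, v):
    return (u.T * A * v)[0]

e = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]

# Plugging coordinate vector fields into alpha recovers the coefficients,
# with strictly increasing indices thanks to antisymmetry
coeffs = {(i, j): alpha(e[i], e[j]) for i in range(3) for j in range(i + 1, 3)}
assert coeffs == {(0, 1): x3, (0, 2): -x2, (1, 2): x1}
```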

July 7, 2011

## Tensor Bundles

We have a number of other constructions similar to the tangent bundle that will come in handy. These are all sort of analogues of certain constructions we already know about on vector spaces. Let’s review these first.

Taking the tensor product of vector spaces is old hat by now, as is using the dual space $V^*$. We’ll put them together by defining the space of “tensors of type $(r,s)$” as

$\displaystyle T^r_s(V)=V\otimes\dots\otimes V\otimes V^*\otimes\dots\otimes V^*$

where we have $r$ copies of the vector space $V$ and $s$ copies of the dual space $V^*$. Vectors in $V$, then, are tensors of type $(1,0)$, while vectors in the dual space are tensors of type $(0,1)$.

We also know about the space of antisymmetric tensors of rank $k$ over a vector space. In particular, we’re most interested in carrying this construction out over the dual space: $\Lambda^*_k(V)=\Lambda_k(V^*)$. And of course we can take the direct sum of these spaces over all $k$ to get the exterior algebra $\Lambda^*(V)$.

Now, we will take these constructions and apply them to the tangent spaces to a manifold. We define the bundle of tensors of type $(r,s)$ over $M$:

$\displaystyle T^r_s(M)=\bigcup\limits_{p\in M}T^r_s(\mathcal{T}_pM)$

the “exterior $k$-bundle” over $M$:

$\displaystyle \Lambda^*_k(M)=\bigcup\limits_{p\in M}\Lambda_k(\mathcal{T}^*_pM)$

and the exterior algebra bundle over $M$:

$\displaystyle \Lambda^*(M)=\bigcup\limits_{p\in M}\Lambda(\mathcal{T}^*_pM)$

The trick here is that for each of these constructions, if we have a basis of $V$ we automatically get a basis of each space $T^r_s(V)$, $\Lambda^*_k(V)$, and $\Lambda^*(V)$. If we start with a coordinate patch $(U,x)$ on $M$, we get a basis $\frac{\partial}{\partial x^i}$ of $\mathcal{T}_pM$ for each $p\in U$. Then, just as we did with the tangent bundle and the cotangent bundle, we can come up with a coordinate patch “induced by $(U,x)$” on each of our new bundles. In this way, we can turn them from disjoint unions of vector spaces into manifolds in their own right, each with a smooth projection down to $M$.
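Concretely, a basis of $V$ gives $n^{r+s}$ basis tensors of $T^r_s(V)$ and $\binom{n}{k}$ basis antisymmetric tensors of $\Lambda^*_k(V)$, which is exactly what makes the induced coordinate patches work. A quick illustrative count (the dimension $n$ and the type $(r,s)$ are arbitrary choices):

```python
from math import comb

n = 4        # dimension of the tangent space, e.g. on a 4-manifold
r, s = 1, 2  # tensor type (r, s)

# Fiber dimensions of the bundles built from an n-dimensional tangent space
dim_T_rs = n**(r + s)                         # tensors of type (r, s)
dim_ext = [comb(n, k) for k in range(n + 1)]  # exterior k-bundles, k = 0..n

assert dim_T_rs == 64
assert dim_ext == [1, 4, 6, 4, 1]
assert sum(dim_ext) == 2**n  # the whole exterior algebra bundle
```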

Now we can define a “tensor field of type $(r,s)$” on an open region $U\subseteq M$ as a section of the projection $\pi:T^r_s(U)\to U$. That is, it’s a smooth map $t:U\to T^r_s(U)$ such that $\pi(t(p))=p$. Similarly, we define a “differential $k$-form” over $U$ to be a section of the projection $\pi:\Lambda^*_k(U)\to U$.

July 6, 2011

## The Hopf Fibration

As a nontrivial example of a foliation, I present the “Hopf fibration”. The name I won’t really explain quite yet, but we’ll see it’s a one-dimensional foliation of the three-dimensional sphere.

So, first let’s get our hands on the three-sphere $S^3$. This is by definition the collection of vectors of length $1$ in $\mathbb{R}^4$, but I want to consider this definition slightly differently. Since the complex plane $\mathbb{C}$ is isomorphic to the real plane $\mathbb{R}^2$ as a real vector space, we find the isomorphism $\mathbb{R}^4\cong\mathbb{C}^2$. Now we use the standard inner product on $\mathbb{C}^2$ to define $S^3$ as the collection of vectors $(z_1,z_2)$ with $\lvert z_1\rvert^2+\lvert z_2\rvert^2=1$.

Now for each $\alpha\in\mathbb{R}$ we can define a foliation. The leaf through the point $(z_1,z_2)$ is the curve $\left(z_1e^{it},z_2e^{i\alpha t}\right)$. Since multiplying by $e^{it}$ and $e^{i\alpha t}$ doesn’t change the norm of a complex number, this whole curve is still contained within $S^3$. Every point in $S^3$ is clearly contained in some such curve, and two points being contained within the same curve is an equivalence relation: any point is in the same curve as itself; if $w_1=z_1e^{it}$ and $w_2=z_2e^{i\alpha t}$, then $z_1=w_1e^{i(-t)}$ and $z_2=w_2e^{i\alpha(-t)}$; and if $w_1=z_1e^{is}$, $w_2=z_2e^{i\alpha s}$, $x_1=w_1e^{it}$ and $x_2=w_2e^{i\alpha t}$, then $x_1=z_1e^{i(s+t)}$ and $x_2=z_2e^{i\alpha(s+t)}$. This shows that the curves do indeed partition $S^3$.

Now we need to show that the tangent spaces to the leaves provide a distribution on $S^3$. Since this will be a one-dimensional distribution, we just need to find an everywhere nonzero vector field tangent to the leaves, and the derivative of the curve through each point will do nicely. At $(z_1,z_2)$ we get the derivative

$\displaystyle\frac{d}{dt}\left(z_1e^{it},z_2e^{i\alpha t}\right)\Big\vert_0=(iz_1,i\alpha z_2)$

It should be clear that this defines a smooth vector field over all of $S^3$, though it may not be clear from the formulas that these vectors are actually tangent to $S^3$. To see this we can either (messily) convert back to real coordinates or we can think geometrically and see that the tangent to a curve within a submanifold must be tangent to that submanifold.
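In fact the tangency is quick to confirm symbolically: a vector $v$ at a point $p\in S^3$ is tangent to the sphere exactly when the real part of the Hermitian inner product $\langle p,v\rangle$ vanishes. An illustrative sympy check:

```python
import sympy as sp

a, b, c, d, alpha = sp.symbols('a b c d alpha', real=True)

# A point (z1, z2) of C^2, with real and imaginary parts spelled out
z1 = a + sp.I*b
z2 = c + sp.I*d

# The candidate tangent vector (i z1, i alpha z2) from differentiating the curve
v1 = sp.I*z1
v2 = sp.I*alpha*z2

# Tangency to S^3: Re(conj(z1) v1 + conj(z2) v2) = 0 at every point
inner = sp.re(sp.expand(sp.conjugate(z1)*v1 + sp.conjugate(z2)*v2))
assert sp.simplify(inner) == 0
```

The inner product comes out purely imaginary, $i\left(\lvert z_1\rvert^2+\alpha\lvert z_2\rvert^2\right)$, so its real part vanishes identically.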

The Hopf fibration is what results when we pick $\alpha=1$, but the case of irrational $\alpha$ is very interesting. In this case we find that some leaves curve around and meet themselves, forming circles, while others never meet themselves, forming homeomorphic images of the whole real line. What this tells us is that not all the leaves of a foliation have to look like each other.

To see this, we try to solve the equations

\displaystyle\begin{aligned}z_1&=z_1e^{it}\\z_2&=z_2e^{i\alpha t}\end{aligned}

The first equation tells us that either $z_1=0$ or $t=2\pi n$. In the first case, we simply have the circle $\lvert z_2\rvert=1$. In the second case, the second equation tells us that either $z_2=0$ or $2\pi m=\alpha t=2\pi\alpha n$. The case where $z_2=0$ is similar to the case $z_1=0$, but if neither coordinate is zero then we find $\alpha=\frac{m}{n}$. But we assumed that $\alpha$ is irrational, so we get no nontrivial solutions for $t$ here.

Since the curves don’t change the length of either component, we can get other examples of foliations. For instance, if we let $\lvert z_1\rvert=\lvert z_2\rvert=\frac{1}{\sqrt{2}}$, then the curve will stay on the torus $S^1\times S^1$ where each circle has radius $\frac{1}{\sqrt{2}}$ in its copy of $\mathbb{C}$. Looking at all the curves on this surface gives a foliation of the torus. If $\alpha$ is irrational, the curve winds around and around the donut-shaped surface, never quite coming back to touch itself, but eventually coming arbitrarily close to any given point on the surface.

July 4, 2011

## Foliations

A $k$-dimensional “foliation” $\mathcal{F}$ of an $n$-dimensional manifold $M$ is a partition of $M$ into $k$-dimensional connected immersed submanifolds, which are called the “leaves” of the foliation. We also ask that the tangent spaces to the leaves define a $k$-dimensional distribution $\Delta$ on $M$, which we say is “induced” by $\mathcal{F}$, and that any connected integral submanifold of $\Delta$ should be contained in a leaf of $\mathcal{F}$. It makes sense, then, that we should call a leaf of $\mathcal{F}$ a “maximal integral submanifold” of $\Delta$.

One obvious family of foliations arises as follows: let $M=\mathbb{R}^n$, and pick some $k$-dimensional vector subspace $N\subseteq M$. The quotient space $M/N$ consists of all the $k$-dimensional affine spaces “parallel” to $N$ — if we pick a representative point $a\in M$ then $a+N = \{a+n\vert n\in N\}$ is one of the cosets in $M/N$. The decomposition of $M$ into these parallel affine subspaces is a foliation. At any point $a\in M$ the induced distribution $\Delta$ has $\Delta_a\subseteq\mathcal{T}_aM$ equal to the image of $N$ under the standard identification of $M$ with $\mathcal{T}_aM$.

Now we have another theorem of Frobenius (prolific guy, wasn’t he?) about foliations: every integrable distribution $\Delta$ on $M$ comes from a foliation of $M$.

Around any point we know we can find some chart $(U,x)$ so that the slices $\{q\in U\vert x^{k+j}(q)=a_{k+j}\}$ are all integral submanifolds of $\Delta$. By the assumption that $M$ is second-countable we can find a countable cover of $M$ consisting of these patches.

We let $\mathcal{S}$ be the collection of all the slices from all the patches in this cover, and define an equivalence relation $\sim$ on $\mathcal{S}$. We say that $S\sim S'$ if there is a finite sequence $S=S_0,S_1,\dots,S_l=S'$ of slices so that $S_i\cap S_{i+1}\neq\emptyset$. Since each $S\subseteq U$ is a manifold, it can only intersect another chart $(V,y)$ in countably many slices, and from here it’s straightforward to show that each equivalence class of $\mathcal{S}/\sim$ can only contain countably many slices. Taking the (countable) union of each equivalence class gives us a bunch of immersed connected integral manifolds of $\Delta$, and any two of these are either equal or disjoint, thus giving us a partition. And since any connected integral manifold of $\Delta$ must align with one of the slices in any of our coordinate patches it meets, it must be contained in one of these leaves. Thus we have a foliation, which induces $\Delta$.

July 1, 2011