# The Unapologetic Mathematician

## Mathematics for the interested outsider

Since Lie groups are groups, they have representations — homomorphisms to the general linear group of some vector space or another. But since $GL(V)$ is a Lie group, we can use this additional structure as well. And so we say that a representation of a Lie group should not only be a group homomorphism, but a smooth map of manifolds as well.

As a first example, we define a representation that every Lie group has: the adjoint representation. To define it, we start by defining conjugation by $g\in G$. As we might expect, this is the map $\tau_g=L_g\circ R_{g^{-1}}:G\to G$ — that is, $\tau_g(h)=ghg^{-1}$. This is a diffeomorphism from $G$ back to itself, and in particular it has the identity $e\in G$ as a fixed point: $\tau_g(e)=e$. Thus the derivative sends the tangent space at $e$ back to itself: $\tau_{g*e}:\mathcal{T}_eG\to\mathcal{T}_eG$. But we know that this tangent space is canonically isomorphic to the Lie algebra $\mathfrak{g}$. That is, $\tau_{g*e}\in GL(\mathfrak{g})$. So now we can define $\mathrm{Ad}:G\to GL(\mathfrak{g})$ by $\mathrm{Ad}(g)=\tau_{g*e}$. We call this the “adjoint representation” of $G$.

To get even more specific, we can consider the adjoint representation of $GL_n(\mathbb{R})$ on its Lie algebra $\mathfrak{gl}_n(\mathbb{R})\cong M_n(\mathbb{R})$. I say that $\mathrm{Ad}(g)$ is just conjugation by $g$ itself. That is, if we view $GL_n(\mathbb{R})$ as an open subset of $M_n(\mathbb{R})$ then we can identify $\mathcal{I}_e:M_n(\mathbb{R})\cong\mathfrak{gl}_n(\mathbb{R})$. The claim is that the square formed by these maps commutes: $\mathcal{I}_e\circ\tau_g=\mathrm{Ad}(g)\circ\mathcal{I}_e$, meaning that $\tau_g$ and $\mathrm{Ad}(g)$ are “the same” transformation, under this identification of these two vector spaces.

Put more simply: to calculate the adjoint action of $g\in GL_n(\mathbb{R})$ on the element of $\mathfrak{gl}_n(\mathbb{R})$ corresponding to $A\in M_n(\mathbb{R})$, it suffices to calculate the conjugate $gAg^{-1}$; then

$\displaystyle\left[\mathrm{Ad}(g)\right](\mathcal{I}_e(A))=\mathcal{I}_e(\tau_g(A))=\mathcal{I}_e(gAg^{-1})$
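This is concrete enough to check numerically. A minimal sketch (the matrices and the random seed are arbitrary choices for illustration): under the identification above, $\mathrm{Ad}(g)$ acts by conjugation, and $\mathrm{Ad}$ is a group homomorphism, $\mathrm{Ad}(gh)=\mathrm{Ad}(g)\mathrm{Ad}(h)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def Ad(g):
    """Adjoint action of g on gl_n, identified with conjugation: A -> g A g^{-1}."""
    g_inv = np.linalg.inv(g)
    return lambda A: g @ A @ g_inv

# Random invertible matrices (shifting by 3I keeps them well away from singular)
g = rng.standard_normal((3, 3)) + 3 * np.eye(3)
h = rng.standard_normal((3, 3)) + 3 * np.eye(3)
A = rng.standard_normal((3, 3))

# Ad is a homomorphism: Ad(gh) = Ad(g) ∘ Ad(h)
lhs = Ad(g @ h)(A)
rhs = Ad(g)(Ad(h)(A))
print(np.max(np.abs(lhs - rhs)))  # ≈ 0 up to roundoff
```

The check is just the associativity computation $(gh)A(gh)^{-1}=g\left(hAh^{-1}\right)g^{-1}$ carried out in floating point.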

June 13, 2011

## The Lie Algebra of a General Linear Group

Since $GL_n(\mathbb{R})$ is an open submanifold of $M_n(\mathbb{R})$, the tangent space of $GL_n(\mathbb{R})$ at any matrix $A$ is the same as the tangent space to $M_n(\mathbb{R})$ at $A$. And since $M_n(\mathbb{R})$ is (isomorphic to) a Euclidean space, we can identify $M_n(\mathbb{R})$ with $\mathcal{T}_AM_n(\mathbb{R})$ using the canonical isomorphism $\mathcal{I}_A:M_n(\mathbb{R})\to\mathcal{T}_AM_n(\mathbb{R})$. In particular, we can identify it with the tangent space at the identity matrix $I$, and thus with the Lie algebra $\mathfrak{gl}_n(\mathbb{R})$ of $GL_n(\mathbb{R})$:

$\displaystyle M_n(\mathbb{R})\cong\mathcal{T}_IGL_n(\mathbb{R})\cong\mathfrak{gl}_n(\mathbb{R})$

But this only covers the vector space structures. Since $M_n(\mathbb{R})$ is an associative algebra it automatically has a bracket: the commutator. Is this the same as the bracket on $\mathfrak{gl}_n(\mathbb{R})$ under this vector space isomorphism? Indeed it is.

To see this, let $A$ be a matrix in $M_n(\mathbb{R})$ and assign $X(I)=\mathcal{I}_I(A)\in\mathcal{T}_IGL_n(\mathbb{R})$. This specifies the value of the vector field $X$ at the identity in $GL_n(\mathbb{R})$. We extend this to a left-invariant vector field by setting

$\displaystyle X(g)=L_{g*}X(I)=L_{g*}\mathcal{I}_I(A)=\mathcal{I}_{g}(gA)$

where we subtly slip from left-translation by $g$ within $GL_n(\mathbb{R})$ to left-translation within the larger manifold $M_n(\mathbb{R})$. We do the same thing to go from another matrix $B$ to another left-invariant vector field $Y$.

Now we have our hands on two left-invariant vector fields $X$ and $Y$ coming from two matrices $A$ and $B$. We will calculate the Lie bracket $[X,Y]$ — we know that it must be left-invariant — and verify that its value at $I$ indeed corresponds to the commutator $AB-BA$.

Let $u^{ij}:GL_n(\mathbb{R})\to\mathbb{R}$ be the function sending an $n\times n$ matrix to its $(i,j)$ entry. We hit it with one of our vector fields:

$\displaystyle Yu^{ij}(g)=Y_gu^{ij}=\mathcal{I}_g(gB)u^{ij}=u^{ij}(gB)$

That is, $Yu^{ij}=u^{ij}\circ R_B$, where $R_B$ is right-translation by $B$. To apply the vector $X_I=\mathcal{I}_I(A)$ to this function, we must take its derivative at $I$ in the direction of $A$. If we consider the curve through $I$ defined by $c(t)=I+tA$ we find that

$\displaystyle X_IYu^{ij}=\dot{c}(0)(u^{ij}\circ R_B)=\frac{d}{dt}(u^{ij}(B+tAB))\Big\vert_{t=0}=(AB)_{i,j}$

Similarly, we find that $Y_IXu^{ij}=(BA)_{i,j}$. And thus

$\displaystyle [X,Y]_Iu^{ij}=(AB-BA)_{i,j}=\mathcal{I}_I(AB-BA)(u^{ij})$

Of course, for any $Q\in M_n(\mathbb{R})$ we have the decomposition

$\displaystyle\mathcal{I}_IQ=\sum\limits_{i,j=1}^n\mathcal{I}_IQ(u^{ij})\frac{\partial}{\partial u^{ij}}\Big\vert_I$

Therefore, since we’ve calculated $[\mathcal{I}_IA,\mathcal{I}_IB](u^{ij})=\mathcal{I}_I(AB-BA)(u^{ij})$ we know these two vectors have all the same components, and thus are the same vector. And so we conclude that the Lie bracket on $\mathfrak{gl}_n(\mathbb{R})$ agrees with the commutator on $M_n(\mathbb{R})$, and thus that these two are isomorphic as Lie algebras.
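The derivative computations above are easy to spot-check numerically. A sketch with arbitrarily chosen matrices: central differences of $t\mapsto(I+tA)B$ at $t=0$ recover $AB$ entrywise, and the bracket of the two fields at the identity comes out to the commutator $AB-BA$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I = np.eye(n)

h = 1e-6
# X_I Y u^{ij}: derivative of t -> (I + tA) B at t = 0, entry by entry
XY = (((I + h * A) @ B) - ((I - h * A) @ B)) / (2 * h)
# Y_I X u^{ij}: derivative of t -> (I + tB) A at t = 0, entry by entry
YX = (((I + h * B) @ A) - ((I - h * B) @ A)) / (2 * h)

print(np.max(np.abs(XY - A @ B)))                    # ≈ 0: matches (AB)_{ij}
print(np.max(np.abs((XY - YX) - (A @ B - B @ A))))   # bracket = commutator
```

Since the curves are linear in $t$, the central differences here are exact up to floating-point roundoff.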

June 9, 2011

## General Linear Groups are Lie Groups

One of the most important examples of a Lie group we’ve already seen: the general linear group $GL(V)$ of a finite dimensional vector space $V$. Of course for the vector space $\mathbb{R}^n$ this is the same as — or at least isomorphic to — the group $GL_n(\mathbb{R})$ of all invertible $n\times n$ real matrices, so that’s a Lie group we can really get our hands on. And if $V$ has dimension $n$, then $V\cong\mathbb{R}^n$, and thus $GL(V)\cong GL_n(\mathbb{R})$.

So, how do we know that it’s a Lie group? Well, obviously it’s a group, but what about the topology? The matrix group $GL_n(\mathbb{R})$ sits inside the algebra $M_n(\mathbb{R})$ of all $n\times n$ matrices, which is an $n^2$-dimensional vector space. Even better, it’s an open subset, which we can see by considering the (continuous) map $\mathrm{det}:M_n(\mathbb{R})\to\mathbb{R}$. Since $GL_n(\mathbb{R})$ is the preimage of $\mathbb{R}\setminus\{0\}$, which is an open subset of $\mathbb{R}$, we conclude that $GL_n(\mathbb{R})$ is an open subset of $M_n(\mathbb{R})$.

So we can conclude that $GL_n(\mathbb{R})$ is an open submanifold of $M_n(\mathbb{R})$, which comes equipped with the standard differentiable structure on $\mathbb{R}^{n^2}$. Matrix multiplication is clearly smooth, since we can write each component of a product matrix $AB$ as a (quadratic) polynomial in the entries of $A$ and $B$. As for inversion, Cramer’s rule expresses each entry of the inverse matrix $A^{-1}$ as the quotient of a (degree $n-1$) polynomial in the entries of $A$ by the determinant of $A$. Both are smooth functions, and so long as $A$ is invertible the denominator is nonzero, and thus the quotient is smooth at $A$.
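Cramer’s rule can be sketched directly, for illustration only (in practice one would just call np.linalg.inv): each entry of the adjugate is a signed cofactor, a polynomial in the entries of $A$, and dividing by the determinant gives the inverse.

```python
import numpy as np

def cramer_inverse(A):
    """Inverse via Cramer's rule: (A^{-1})_{ij} = cofactor_{ji}(A) / det(A)."""
    n = A.shape[0]
    det = np.linalg.det(A)
    adj = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then sign by (-1)^{i+j}
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)  # adjugate = transposed cofactors
    return adj / det

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.max(np.abs(cramer_inverse(A) - np.linalg.inv(A))))  # ≈ 0
```

Every entry of the adjugate is polynomial in the entries of $A$, so the only obstruction to smoothness is the vanishing of the determinant, exactly as in the argument above.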

June 9, 2011

## The Lie Algebra of a Lie Group

Since a Lie group $G$ is a smooth manifold we know that the collection of vector fields $\mathfrak{X}G$ forms a Lie algebra. But this is a big, messy object because smoothness isn’t a very stringent requirement on a vector field. The value can’t vary too wildly from point to point, but it can still flop around a huge amount. What we really want is something more tightly controlled, and hopefully something related to the algebraic structure on $G$ to boot.

To this end, we consider the “left-invariant” vector fields on $G$. A vector field $X\in\mathfrak{X}G$ is left-invariant if the diffeomorphism $L_h:G\to G$ of left-translation intertwines $X$ with itself for all $h\in G$. That is, $X$ must satisfy $L_{h*}\circ X=X\circ L_h$; or to put it another way: $L_{h*}\left(X(g)\right)=X(hg)$. This is a very strong condition indeed, for any left-invariant vector field is determined by its value at the identity $e\in G$. Just set $g=e$ and find that $X(h)=L_{h*}\left(X(e)\right)$.
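For the matrix group $GL_n(\mathbb{R})$, where $L_{h*}$ acts on tangent vectors (identified with matrices) by left multiplication, the left-invariant field generated by a matrix $A$ is $X(g)=gA$, and left-invariance boils down to associativity: $h(gA)=(hg)A$. A numerical sketch, with arbitrarily chosen matrices and the pushforward computed by differentiating a curve:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
g = rng.standard_normal((n, n)) + 3 * np.eye(n)  # shifted to stay invertible
h = rng.standard_normal((n, n)) + 3 * np.eye(n)

def X(g):
    """Left-invariant field on GL_n generated by A, under the usual identification."""
    return g @ A

# L_{h*} at g: differentiate t -> h (g + t v) at t = 0, along v = X(g)
eps = 1e-6
v = X(g)
pushforward = (h @ (g + eps * v) - h @ (g - eps * v)) / (2 * eps)

print(np.max(np.abs(pushforward - X(h @ g))))  # ≈ 0: L_{h*}(X(g)) = X(hg)
```

Since left-translation is linear on $M_n(\mathbb{R})$, the central difference is exact up to roundoff, and the invariance identity holds on the nose.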

The really essential thing for our purposes is that left-invariant vector fields form a Lie subalgebra. That is, if $X$ and $Y$ are left-invariant vector fields, then so is their sum $X+Y$, any scalar multiple $cX$ — where $c$ is a constant and not a function varying as we move around $G$ — and their bracket $[X,Y]$. And indeed left-invariance of sums and scalar multiples is obvious, using the formula $X(h)=L_{h*}\left(X(e)\right)$ and the fact that $L_{h*}$ is linear on individual tangent spaces. As for brackets, this follows from the lemma we proved when we first discussed maps intertwining vector fields.

So given a Lie group $G$ we get a Lie algebra we’ll write as $\mathfrak{g}$. In general Lie groups are written with capital Roman letters, while their corresponding Lie algebras are written with corresponding lowercase fraktur letters. When $G$ has dimension $n$, $\mathfrak{g}$ also has dimension $n$ — this time as a vector space — since each vector field in $\mathfrak{g}$ is uniquely determined by a single vector in $\mathcal{T}_eG$.

We should keep in mind that while $\mathfrak{g}$ is canonically isomorphic to $\mathcal{T}_eG$ as a vector space, the Lie algebra structure comes not from that tangent space itself, but from the way left-invariant vector fields interact with each other.

And of course there’s the glaring asymmetry that we’ve chosen left-invariant vector fields instead of right-invariant vector fields. Indeed, we could have set everything up in terms of right-invariant vector fields and the right-translation diffeomorphism $R_g:G\to G$. But it turns out that the inversion diffeomorphism $i:G\to G$ interchanges left- and right-invariant vector fields, and so we end up in the same place anyway.

How does the inversion $i$ act on vector fields? We recognize that $i^{-1}=i$, and find that it sends the vector field $X$ to $i_*\circ X\circ i$. Now if $X$ is left-invariant then $L_{h*}\circ X=X\circ L_h$ for all $h\in G$. We can then calculate

$\displaystyle\begin{aligned}R_{h*}\circ\left(i_*\circ X\circ i\right)&=\left(R_h\circ i\right)_*\circ X\circ i\\&=\left(i\circ L_{h^{-1}}\right)_*\circ X\circ i\\&=i_*\circ L_{h^{-1}*}\circ X\circ i\\&=i_*\circ X\circ L_{h^{-1}}\circ i\\&=\left(i_*\circ X\circ i\right)\circ R_h\end{aligned}$

where the identities $R_h\circ i=i\circ L_{h^{-1}}$ and $L_{h^{-1}}\circ i=i\circ R_h$ reflect the simple group equations $g^{-1}h=\left(h^{-1}g\right)^{-1}$ and $h^{-1}g^{-1}=\left(gh\right)^{-1}$, respectively. Thus we conclude that if $X$ is left-invariant then $i_*\circ X\circ i$ is right-invariant. The proof of the converse is similar.

The one thing that’s left is proving that if $X$ and $Y$ are left-invariant then their right-invariant images have the same bracket. This will follow from the fact that $i_*(X(e))=-X(e)$, but rather than prove this now we’ll just push ahead and use left-invariant vector fields.
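For $GL_n(\mathbb{R})$ the claim $i_*(X(e))=-X(e)$ can be seen concretely: differentiating the curve $t\mapsto(I+tA)^{-1}$ at $t=0$ gives $-A$. A finite-difference sketch (the matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
I = np.eye(n)

h = 1e-6
# Derivative of the inversion map i at the identity, in the direction A:
# differentiate t -> (I + tA)^{-1} at t = 0 by central differences.
d_inv = (np.linalg.inv(I + h * A) - np.linalg.inv(I - h * A)) / (2 * h)

print(np.max(np.abs(d_inv + A)))  # ≈ 0: i_{*e} sends A to -A
```

This is just the expansion $(I+tA)^{-1}=I-tA+t^2A^2-\cdots$ read off at first order.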

June 8, 2011

## Lie Groups

Now we come to one of the most broadly useful and fascinating structures in all of mathematics: Lie groups. These are objects which are both smooth manifolds and groups in a compatible way. The fancy way to say it is, of course, that a Lie group is a group object in the category of smooth manifolds.

To be a little more explicit, a Lie group $G$ is a smooth $n$-dimensional manifold equipped with a multiplication $G\times G\to G$ and an inversion $G\to G$ which satisfy all the usual group axioms (wow, it’s been a while since I wrote that stuff down) and are also smooth maps between manifolds. Of course, when we write $G\times G$ we mean the product manifold.

We can use these to construct some other useful maps. For instance, if $h\in G$ is any particular element we know that we have a smooth inclusion $G\to G\times G$ defined by $g\mapsto (h,g)$. Composing this with the multiplication map we get a smooth map $L_h:G\to G$ defined by $L_h(g)=hg$, which we call “left-translation by $h$”. Similarly we get a smooth right-translation $R_h(g)=gh$.

June 6, 2011

## Maps Intertwining Vector Fields

Let $f:M\to N$ be a smooth map between manifolds, with derivative $f_*:\mathcal{T}M\to\mathcal{T}N$, and let $X:M\to\mathcal{T}M$ and $Y:N\to\mathcal{T}N$ be smooth vector fields. We can compose them as $f_*\circ X:M\to\mathcal{T}N$ and $Y\circ f:M\to\mathcal{T}N$, and it makes sense to ask if these are the same map.

To put it another way, $Y\circ f:p\mapsto Y_{f(p)}$ is the vector $Y$ specifies for the point $f(p)$. On the other hand, $f_*\circ X:p\mapsto f_*(X_p)$ is the image of the vector $X$ specifies for the point $p$. If these two vectors are the same for every $p\in M$, then we say that $f$ “intertwines” the two vector fields, or that $X$ and $Y$ are “$f$-related”. The latter term is a bit awkward, which is why I prefer the former, especially since it does have that same commutative-diagram feel as intertwinors between representations.

Anyway, in the case that $f$ is a diffeomorphism we can actually use this to transfer vector fields from one manifold to the other. Given a point $q\in N$, which point should it be compared to? The inverse image $f^{-1}(q)$, of course. This point gets the vector $X\left(f^{-1}(q)\right)$, which then gets sent to $f_*\left(X\left(f^{-1}(q)\right)\right)$. That is, if we define $Y=f_*\circ X\circ f^{-1}$, then $f$ is guaranteed to intertwine $X$ and $Y$.

Since $f_*$ is a linear map on each fiber it’s clear that if $f$ intertwines $X_1$ and $Y_1$, as well as $X_2$ and $Y_2$, then $f$ intertwines $c_1X_1+c_2X_2$ and $c_1Y_1+c_2Y_2$. But we’ve just seen that vector fields form a Lie algebra, and it would be nice if we could say the same for $[X_1,X_2]$ and $[Y_1,Y_2]$. The catch is that we don’t just compute these point-by-point.

Let’s pick a test function $\phi\in\mathcal{O}N$ and a point $p\in M$. We first check that

$\displaystyle\begin{aligned}{}[(Y_i\phi)\circ f](q)&=(Y_i\phi)(f(q))\\&=Y_{i,f(q)}\phi\\&=[f_*X_{i,q}]\phi\\&=X_{i,q}(\phi\circ f)\end{aligned}$

Now we can calculate

$\displaystyle\begin{aligned}{}[Y_1,Y_2]_{f(p)}\phi&=Y_{1,f(p)}(Y_2\phi)-Y_{2,f(p)}(Y_1\phi)\\&=f_*X_{1,p}(Y_2\phi)-f_*X_{2,p}(Y_1\phi)\\&=X_{1,p}((Y_2\phi)\circ f)-X_{2,p}((Y_1\phi)\circ f)\\&=X_{1,p}(X_2(\phi\circ f))-X_{2,p}(X_1(\phi\circ f))\\&=[X_1,X_2]_p(\phi\circ f)\\&=(f_*[X_1,X_2]_p)\phi\end{aligned}$

So $f$ intertwines $[X_1,X_2]$ and $[Y_1,Y_2]$, as we asserted. In the case where $f$ is a diffeomorphism, this means that the construction above gives us a homomorphism of Lie algebras from $\mathfrak{X}M$ to $\mathfrak{X}N$.
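In one dimension we can watch this happen explicitly. A sketch with the arbitrarily chosen diffeomorphism $f(x)=e^x$ from $\mathbb{R}$ onto the positive reals: a field $X$ pushes forward to $Y(y)=f'(f^{-1}(y))X(f^{-1}(y))$, the bracket of two fields on the line is $[X_1,X_2]=X_1X_2'-X_2X_1'$, and the bracket of the pushforwards matches the pushforward of the bracket.

```python
import numpy as np

def push(f, df, finv, X):
    """Push a field X on R forward along a diffeomorphism f: Y = f_* X f^{-1}."""
    return lambda y: df(finv(y)) * X(finv(y))

def bracket(X1, X2, h=1e-5):
    """[X1, X2] = X1 X2' - X2 X1' on the line, via central differences."""
    d = lambda F: lambda x: (F(x + h) - F(x - h)) / (2 * h)
    return lambda x: X1(x) * d(X2)(x) - X2(x) * d(X1)(x)

f, df, finv = np.exp, np.exp, np.log

X1 = lambda x: 1.0          # the constant field d/dx
X2 = lambda x: x            # the Euler field x d/dx

Y1 = push(f, df, finv, X1)
Y2 = push(f, df, finv, X2)

y = 2.5
lhs = bracket(Y1, Y2)(y)                       # bracket of the pushforwards
rhs = push(f, df, finv, bracket(X1, X2))(y)    # pushforward of the bracket
print(abs(lhs - rhs))  # small
```

Here $[X_1,X_2]=1$, and both sides come out to $y$ itself, up to finite-difference error.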

June 3, 2011

## The Lie Bracket of Vector Fields

We know that any vector field $X\in\mathfrak{X}U$ can act as an endomorphism on the space $\mathcal{O}U$ of smooth functions on $U$. What happens if we act by one vector field followed by another? To really make things explicit, let’s say that $(U,x)$ is a coordinate patch, so we can write

$\displaystyle Xf(p)=\sum\limits_{i=1}^nX^i(p)\frac{\partial f}{\partial x^i}\bigg\vert_p$

where $X^i$ is a coefficient function and $\frac{\partial f}{\partial x^i}\vert_p$ measures how fast $f$ is changing as we increase the $i$th coordinate function through $p$. Now if we hit this with another vector field $Y$ we find

$\displaystyle YXf=\sum\limits_{i=1}^nY\left(X^i\frac{\partial f}{\partial x^i}\right)$

At each point $p$ the field $Y$ gives a vector $Y_p$, which acts as a derivation on the ring of smooth functions at $p$. That is

$\displaystyle\begin{aligned}YXf&=\sum\limits_{i=1}^nY\left(X^i\right)\frac{\partial f}{\partial x^i}+X^iY\left(\frac{\partial f}{\partial x^i}\right)\\&=\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^j\frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i}+X^iY^j\frac{\partial^2f}{\partial x^j\partial x^i}\end{aligned}$

Now, this is obviously an endomorphism on $\mathcal{O}U$ since it’s the composite of two endomorphisms. But it is not a vector field, since at a given point $p$ we don’t get a derivation of the ring of smooth functions at $p$. Indeed, what happens if we give it the product of two functions?

$\displaystyle\begin{aligned}YX(fg)&=\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^j\frac{\partial X^i}{\partial x^j}\frac{\partial(fg)}{\partial x^i}+X^iY^j\frac{\partial}{\partial x^j}\frac{\partial(fg)}{\partial x^i}\\&=\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^j\frac{\partial X^i}{\partial x^j}\left(\frac{\partial f}{\partial x^i}g+f\frac{\partial g}{\partial x^i}\right)+X^iY^j\frac{\partial}{\partial x^j}\left(\frac{\partial f}{\partial x^i}g+f\frac{\partial g}{\partial x^i}\right)\\&=\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^j\frac{\partial X^i}{\partial x^j}\left(\frac{\partial f}{\partial x^i}g+f\frac{\partial g}{\partial x^i}\right)+X^iY^j\left(\frac{\partial^2f}{\partial x^j\partial x^i}g+\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}+f\frac{\partial^2g}{\partial x^j\partial x^i}\right)\\&=YX(f)g+fYX(g)+\sum\limits_{i=1}^n\sum\limits_{j=1}^nX^iY^j\left(\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}\right)\end{aligned}$

We’ve got a bunch of terms left over at the end! But one thing is nice about it: the leftover terms are symmetric between $X$ and $Y$:

$\displaystyle\begin{aligned}XY(fg)&=XY(f)g+fXY(g)+\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^iX^j\left(\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}\right)\\&=XY(f)g+fXY(g)+\sum\limits_{i=1}^n\sum\limits_{j=1}^nX^iY^j\left(\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}\right)\end{aligned}$

So what would happen if instead of using the regular composition product of these endomorphisms, we used the associated Lie bracket? We’d find

$\displaystyle\begin{aligned}{}[X,Y](fg)=&XY(fg)-YX(fg)\\=&XY(f)g+fXY(g)+\sum\limits_{i=1}^n\sum\limits_{j=1}^nX^iY^j\left(\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}\right)\\&-YX(f)g-fYX(g)-\sum\limits_{i=1}^n\sum\limits_{j=1}^nX^iY^j\left(\frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}+\frac{\partial f}{\partial x^j}\frac{\partial g}{\partial x^i}\right)\\=&XY(f)g-YX(f)g+fXY(g)-fYX(g)\\=&[X,Y](f)g+f[X,Y](g)\end{aligned}$

That is, the Lie bracket $[X,Y]$ of $X$ and $Y$ is another vector field! Indeed, let’s see what it looks like in coordinates:

$\displaystyle\begin{aligned}{}[X,Y]f=&XYf-YXf\\=&\sum\limits_{i=1}^n\sum\limits_{j=1}^nX^j\frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i}+Y^iX^j\frac{\partial^2f}{\partial x^j\partial x^i}\\&-\sum\limits_{i=1}^n\sum\limits_{j=1}^nY^j\frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i}+X^iY^j\frac{\partial^2f}{\partial x^j\partial x^i}\\=&\sum\limits_{i=1}^n\left(\sum\limits_{j=1}^nX^j\frac{\partial Y^i}{\partial x^j}-Y^j\frac{\partial X^i}{\partial x^j}\right)\frac{\partial f}{\partial x^i}\end{aligned}$

where we can cancel off the two second partial derivatives because we’re assuming that $f$ is “smooth”, which in this case entails “has mixed second partial derivatives which commute” in any local coordinate system.

And so we might appropriately write

$\displaystyle[X,Y]^i=\sum\limits_{j=1}^nX^j\frac{\partial Y^i}{\partial x^j}-Y^j\frac{\partial X^i}{\partial x^j}$
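This coordinate formula is easy to test numerically. A sketch with two arbitrarily chosen fields on $\mathbb{R}^2$, comparing $XYf-YXf$ (computed by nested finite differences) against the first-order operator given by the formula:

```python
import numpy as np

def directional(f, p, v, h=1e-5):
    """Central-difference derivative of f at p in the direction v."""
    return (f(p + h * v) - f(p - h * v)) / (2 * h)

def apply_field(X, f):
    """The function Xf: p -> X_p f, where X(p) is the coefficient vector."""
    return lambda p: directional(f, p, X(p))

# Two vector fields on R^2, chosen arbitrarily for this check
X = lambda p: np.array([-p[1], p[0]])          # X = -y d/dx + x d/dy
Y = lambda p: np.array([p[0] ** 2, p[1]])      # Y = x^2 d/dx + y d/dy

# The coordinate formula, worked out by hand for these two fields:
# [X,Y]^1 = -2xy + y,  [X,Y]^2 = x - x^2
XY = lambda p: np.array([-2 * p[0] * p[1] + p[1], p[0] - p[0] ** 2])

f = lambda p: p[0] ** 2 * p[1] + np.sin(p[1])  # a test function
p = np.array([0.7, -0.3])

lhs = apply_field(X, apply_field(Y, f))(p) - apply_field(Y, apply_field(X, f))(p)
rhs = directional(f, p, XY(p))
print(abs(lhs - rhs))  # small
```

The second-derivative terms cancel in the difference, just as in the derivation above, which is why a first-order field reproduces the second-order-looking operator $XY-YX$.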

Of course, even where we don’t have local coordinates we can still write $[X,Y]f=XYf-YXf$ or $[X,Y]=XY-YX$ and get a vector field. We may also find it useful to write down the value of this field at a point: $[X,Y]_p=X_pY-Y_pX$. Indeed we can check that this behaves like a vector at $p$:

$\displaystyle\begin{aligned}{}[X,Y]_p(fg)=&X_p(Y(fg))-Y_p(X(fg))\\=&X_p(Y(f)g+fY(g))-Y_p(X(f)g+fX(g))\\=&X_p(Yf)g(p)+Yf(p)X_p(g)+X_p(f)Yg(p)+f(p)X_p(Yg)\\&-Y_p(Xf)g(p)-Xf(p)Y_p(g)-Y_p(f)Xg(p)-f(p)Y_p(Xg)\\=&X_p(Yf)g(p)+Y_p(f)X_p(g)+X_p(f)Y_p(g)+f(p)X_p(Yg)\\&-Y_p(Xf)g(p)-X_p(f)Y_p(g)-Y_p(f)X_p(g)-f(p)Y_p(Xg)\\=&\left[X_pY-Y_pX\right](f)g(p)+f(p)\left[X_pY-Y_pX\right](g)\\=&[X,Y]_p(f)g(p)+f(p)[X,Y]_p(g)\end{aligned}$

And so the space $\mathfrak{X}U$ of smooth vector fields on $U$ forms a Lie subalgebra of the Lie algebra of endomorphisms of the vector space $\mathcal{O}U$.

June 2, 2011

## Vector Fields on Compact Manifolds are Complete

It turns out that any vector field on a compact manifold is complete. That is, starting at any point we can follow the vector field and construct its integral curve as far forward or as far backward as we want. I’ll show this two ways.

First of all, let’s say that there is some open interval $I$ containing a closed interval $[-\delta,\delta]$ such that $I\times M$ is contained in the $W$ from the maximal flow of $X$. That is, assume that no matter where we start we can flow forward or backward by $\delta$. But if we start at $p_0$ this means we can flow forward by $\delta$ to reach a point

$\displaystyle p_1=\Phi_\delta(p_0)=\Phi(\delta,p_0)$

But then by assumption we can flow forward again by $\delta$ to reach

$\displaystyle p_2=\Phi_\delta(p_1)=\Phi_\delta(\Phi_\delta(p_0))=\Phi_{2\delta}(p_0)$

And we can keep flowing forward to reach $p_n=\Phi_{n\delta}(p_0)$ for arbitrarily large $n$. Similarly we can flow backward as far as we want.
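The composition rule $\Phi_\delta\circ\Phi_\delta=\Phi_{2\delta}$ that drives this argument can be watched numerically. A sketch with a classical RK4 integrator and the arbitrarily chosen complete field $x'=\sin x$ on the line:

```python
import numpy as np

def flow(X, p, t, steps=1000):
    """Approximate the flow Phi_t(p) of x' = X(x) with classical RK4."""
    h = t / steps
    x = p
    for _ in range(steps):
        k1 = X(x)
        k2 = X(x + h * k1 / 2)
        k3 = X(x + h * k2 / 2)
        k4 = X(x + h * k3)
        x = x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

X = np.sin          # a complete field on the line, chosen for illustration
p0, delta = 0.5, 0.7

p1 = flow(X, p0, delta)                  # flow forward by delta...
p2 = flow(X, p1, delta)                  # ...and then by delta again
print(abs(p2 - flow(X, p0, 2 * delta)))  # ≈ 0: Phi_delta(Phi_delta(p)) = Phi_{2delta}(p)
```

Negative $t$ works just as well, which is the “flow backward” direction used above.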

So, is there such an interval? In general there doesn’t have to be, but if $M$ is compact there is. Indeed, at each point $p$ we can define the interval $I_p=\left(-a(p),b(p)\right)$ as the maximal open interval on which the integral curve based at $p$ is defined. Here, each $a(p)$ and $b(p)$ is a positive real number or $\infty$. The local existence theorem for integral curves gives each point a neighborhood and an $\epsilon>0$ such that every integral curve starting in that neighborhood is defined at least on $(-\epsilon,\epsilon)$. Since $M$ is compact, finitely many such neighborhoods cover it, and if $\delta$ is the smallest of the corresponding $\epsilon$s then the interval $(-\delta,\delta)$ is contained in every $I_p$, as asserted.

A little more analytically, let $\phi:[\alpha,\beta)\to M$ be an integral curve of a vector field $X$, and suppose that there is a sequence $t_n$ increasing to $\beta$ for which $\phi(t_n)\to p$. Then it can only make sense to define $\bar{\phi}:[\alpha,\beta]\to M$ by defining $\bar{\phi}(t)=\phi(t)$ for $\alpha\leq t<\beta$ and $\bar{\phi}(\beta)=p$, for continuity. But now if $c:I\to M$ is the maximal integral curve of $X$ with $c(\beta)=p$ then uniqueness tells us that $c(t)=\bar{\phi}(t)$ for $\alpha\leq t\leq\beta$.

So what happens when $M$ is compact? In this case, any sequence $t_n$ increasing to $\beta$ gives a sequence $\phi(t_n)$ with a subsequence converging to some point $p$. Thus we can always extend any integral curve on a right-open interval $[\alpha,\beta)$ to the closed interval $[\alpha,\beta]$, and then past $\beta$ to a somewhat longer right-open interval, and so on as far as we want to go. And thus, again, every integral curve can be extended forever in either direction, which makes $X$ complete.

June 1, 2011