# The Unapologetic Mathematician

## Gronwall’s Inequality

We’re going to need another analytic lemma, this one called “Gronwall’s inequality”. If $v:[0,\alpha]\to\mathbb{R}$ is a continuous, nonnegative function, and if $C$ and $K$ are nonnegative constants such that

$\displaystyle v(t)\leq C+\int\limits_0^tKv(s)\,ds$

for all $t\in[0,\alpha]$ then for all $t$ in this interval we have

$\displaystyle v(t)\leq Ce^{Kt}$

That is, we can conclude that $v$ grows no faster than an exponential function. Exponential growth may seem fast, but at least it doesn’t blow up to an infinite singularity in finite time, no matter what Kurzweil seems to think.

Anyway, first let’s deal with strictly positive $C$. If we define

$\displaystyle V(t)=C+\int\limits_0^tKv(s)\,ds>0$

then by assumption we have $v(t)\leq V(t)$. Differentiating, we find $V'(t)=Kv(t)$, and thus

$\displaystyle\frac{d}{dt}\left(\log(V(t))\right)=\frac{V'(t)}{V(t)}=\frac{Kv(t)}{V(t)}\leq K$

Integrating, we find

$\displaystyle\log(V(t))\leq\log(V(0))+Kt=\log(C)+Kt$

Finally we can exponentiate to find

$\displaystyle v(t)\leq V(t)\leq Ce^{Kt}$

proving Gronwall’s inequality.

If $C=0$, then the hypothesis holds with any $\bar{C}>0$ in place of $C$, and so we see that $v(t)\leq\bar{C}e^{Kt}$ for every positive $\bar{C}$, which forces $v(t)=0$, as required by Gronwall’s inequality in this case.
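Since everything in sight is an integral of a continuous function, we can sanity-check the inequality numerically. Here’s a sketch in Python (the trapezoid rule, the particular functions, and the constants are all my own illustrative choices, not part of the lemma) verifying on a grid that whenever the integral hypothesis holds, so does the exponential bound:

```python
import math

def trapezoid(f, a, b, n=1000):
    # Composite trapezoid rule for a numerical integral of f over [a, b].
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def check_gronwall(v, C, K, alpha, n=50):
    """On a grid of t values in [0, alpha], check that whenever the
    hypothesis v(t) <= C + int_0^t K v(s) ds holds (numerically), the
    conclusion v(t) <= C e^{Kt} holds as well."""
    for i in range(n + 1):
        t = alpha * i / n
        hypothesis = v(t) <= C + trapezoid(lambda s: K * v(s), 0, t) + 1e-9
        if hypothesis:
            assert v(t) <= C * math.exp(K * t) + 1e-6
    return True

C, K = 2.0, 0.5
# Equality case: v(t) = C e^{Kt} satisfies the integral equation exactly.
check_gronwall(lambda t: C * math.exp(K * t), C, K, alpha=3.0)
# Slack case: a constant function satisfies the hypothesis with room to spare.
check_gronwall(lambda t: C, C, K, alpha=3.0)
```

The first call exercises the equality case, where $v$ solves the integral equation on the nose; the second a case with slack.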

May 11, 2011

## Another Existence Proof

I’d like to go back and give a different proof that the Picard iteration converges — one which is closer to the spirit of Newton’s method. In that case, we proved that Newton’s method converged by showing that the derivative of the iterating function was less than one at the desired solution, making it an attracting fixed point.

In this case, however, we don’t have a derivative because our iteration runs over functions rather than numbers. We will replace it with a similar construction called the “functional derivative”, which is a fundamental part of the “calculus of variations”. I’m not really going to go too deep into this field right now, and I’m not going to prove the analogous result that a small functional derivative means an attracting fixed point, but it’s a nice exercise and introduction anyway.

Recall that our iteration applies the operator

$\displaystyle P[v](t)=a+\int\limits_0^tF(v(s))\,ds$

We consider what happens when we add an adjustment to $v$:

\displaystyle\begin{aligned}P[v+h](t)&=a+\int\limits_0^tF(v(s)+h(s))\,ds\\&\approx a+\int\limits_0^tF(v(s))+dF(v(s))h(s)\,ds\\&=a+\int\limits_0^tF(v(s))\,ds+\int\limits_0^tdF(v(s))h(s)\,ds\\&=P[v](t)+\int\limits_0^tdF(v(s))h(s)\,ds\end{aligned}

We call the small change the “variation” of $v$, and we write $\delta v=h$. Similarly, we call the difference between $P[v+\delta v]$ and $P[v]$ the variation of $P$ and write $\delta P$. It turns out that controlling the size of the variation $\delta v$ gives us some control over the size of the variation $\delta P$. To wit, if $\lVert\delta v\rVert_\infty\leq d$ then we find

\displaystyle\begin{aligned}\left\lVert\int\limits_0^tdF(v(s))\delta v(s)\,ds\right\rVert&\leq\int\limits_0^t\lVert dF(v(s))\delta v(s)\rVert\,ds\\&\leq\int\limits_0^t\lVert dF(v(s))\rVert_\text{op}\lVert\delta v(s)\rVert\,ds\\&\leq d\int\limits_0^t\lVert dF(v(s))\rVert_\text{op}\,ds\end{aligned}

Now our proof that $F$ is locally Lipschitz involved showing that there’s a neighborhood of $a$ where we can bound $\lVert dF(x)\rVert_\text{op}$ by $K$. Again we can pick a small enough $c$ so that $\lvert s\rvert\leq c$ implies that $v(s)$ stays within this neighborhood, and also such that $cK<1$. And then we conclude that $\lVert\delta P\rVert_\infty\leq cKd<d$, which we can suggestively write as

$\displaystyle\frac{\delta P}{\delta v}<1$

Now, admittedly this argument is a bit handwavy as it stands. Still, it does go to show the basic idea of the technique, and it’s a nice little introduction to the calculus of variations.
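Handwavy or not, the contraction is easy to watch numerically. Here’s a sketch in Python; the choice $F=\sin$ (so $\lVert dF\rVert_\text{op}\leq K=1$), the interval, and the two trial functions are all my own illustrative choices. Applying $P$ to two nearby functions brings them closer together in the supremum norm, by at least the factor $cK$:

```python
import math

def picard_map(F, a, ts, v):
    # P[v](t) = a + int_0^t F(v(s)) ds, computed on the grid ts by a
    # cumulative trapezoid rule.
    out = [a]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        out.append(out[-1] + 0.5 * dt * (F(v[i]) + F(v[i - 1])))
    return out

F = math.sin                      # |dF(x)| = |cos x| <= 1, so K = 1 works
K, a, c, n = 1.0, 0.5, 0.5, 200   # cK = 0.5 < 1
ts = [c * i / n for i in range(n + 1)]

v = [a] * (n + 1)                        # one trial function
w = [a + 0.1 * math.cos(t) for t in ts]  # a nearby variation of it

sup_in = max(abs(x - y) for x, y in zip(v, w))
Pv, Pw = picard_map(F, a, ts, v), picard_map(F, a, ts, w)
sup_out = max(abs(x - y) for x, y in zip(Pv, Pw))

# The variation of P is smaller than the variation of v by the factor cK.
assert sup_out <= c * K * sup_in + 1e-9
```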

May 10, 2011

## Uniqueness of Solutions to Differential Equations

The convergence of the Picard iteration shows the existence part of our existence and uniqueness theorem. Now we prove the uniqueness part.

Let’s say that $u(t)$ and $v(t)$ are both solutions of the differential equation — $u'(t)=F(u(t))$ and $v'(t)=F(v(t))$ — and that they both satisfy the initial condition — $u(0)=v(0)=a$ — on the same interval $J=[-c,c]$ from the existence proof above. We will show that $u(t)=v(t)$ for all $t\in J$ by measuring the $L^\infty$ norm of their difference:

$\displaystyle Q=\lVert u-v\rVert_\infty=\max\limits_{t\in J}\lvert u(t)-v(t)\rvert$

Since $J$ is a closed interval, this maximum must be attained at a point $t_1\in J$. We can calculate

\displaystyle\begin{aligned}Q&=\lvert u(t_1)-v(t_1)\rvert\\&=\left\lvert\int\limits_0^{t_1}u'(s)-v'(s)\,ds\right\rvert\\&\leq\int\limits_0^{t_1}\lvert F(u(s))-F(v(s))\rvert\,ds\\&\leq\int\limits_0^{t_1}K\lvert u(s)-v(s)\rvert\,ds\\&\leq cKQ\end{aligned}

but by assumption we know that $cK<1$, which makes this inequality impossible unless $Q=0$. Thus the distance between $u$ and $v$ is $0$, and the two functions must be equal on this interval, proving uniqueness.

May 9, 2011

## The Picard Iteration Converges

Now that we’ve defined the Picard iteration, we have a sequence of functions $v_i:J\to B_\rho$ from a closed neighborhood of $0\in\mathbb{R}$ to a closed neighborhood of $a\in\mathbb{R}^n$. Recall that we defined $M$ to be an upper bound of $\lVert F\rVert$ on $B_\rho$, $K$ to be a Lipschitz constant for $F$ on $B_\rho$, $c$ less than both $\frac{\rho}{M}$ and $\frac{1}{K}$, and $J=[-c,c]$.

Specifically, we’ll show that the sequence converges in the supremum norm on $J$. That is, we’ll show that there is some $v:J\to B_\rho$ so that the maximum of the difference $\lVert v_k(t)-v(t)\rVert$ for $t\in J$ decreases to zero as $k$ increases. And we’ll do this by showing that the individual functions $v_i$ and $v_j$ get closer and closer in the supremum norm. Then they’ll form a Cauchy sequence, which we know must converge because the metric space defined by the supremum norm is complete, as are all the $L^p$ spaces.

Anyway, let $L=\lVert v_1-v_0\rVert_\infty$ be exactly the supremum norm of the difference between the first two functions in the sequence. I say that $\lVert v_{i+1}-v_i\rVert_\infty\leq(cK)^iL$. Indeed, we calculate inductively

\displaystyle\begin{aligned}\lVert v_{i+1}(t)-v_i(t)\rVert&\leq\int\limits_0^t\lVert F(v_i(s))-F(v_{i-1}(s))\rVert\,ds\\&\leq K\int\limits_0^t\lVert v_i(s)-v_{i-1}(s)\rVert\,ds\\&\leq K\int\limits_0^t(cK)^{i-1}L\,ds\\&\leq(cK)(cK)^{i-1}L\\&=(cK)^iL\end{aligned}

Now we can bound the distance between any two functions in the sequence. If $i<j$ are two indices we calculate:

\displaystyle\begin{aligned}\lVert v_j-v_i\rVert_\infty&=\left\lVert\sum\limits_{k=i}^{j-1}v_{k+1}-v_k\right\rVert_\infty\\&\leq\sum\limits_{k=i}^{j-1}\lVert v_{k+1}-v_k\rVert_\infty\\&\leq\sum\limits_{k=i}^{j-1}(cK)^kL\end{aligned}

But this is a chunk of a geometric series; since $cK<1$, the series must converge, and so we can make this sum as small as we please by choosing $i$ and $j$ large enough.

This then tells us that our sequence of functions is $L^\infty$-Cauchy, and thus $L^\infty$-convergent, which implies uniform pointwise convergence. The uniformity is important because it means that we can exchange integration with the limiting process. That is,

$\displaystyle\lim\limits_{k\to\infty}\int\limits_0^tv_k(s)\,ds=\int\limits_0^t\lim\limits_{k\to\infty}v_k(s)\,ds=\int\limits_0^tv(s)\,ds$

So now we can return to the recursive definition of our sequence

$\displaystyle v_{k+1}(t)=a+\int\limits_0^tF(v_k(s))\,ds$

and take the limit of both sides

\displaystyle\begin{aligned}v(t)&=\lim\limits_{k\to\infty}v_{k+1}(t)\\&=\lim\limits_{k\to\infty}\left(a+\int\limits_0^tF(v_k(s))\,ds\right)\\&=a+\int\limits_0^t\lim\limits_{k\to\infty}F(v_k(s))\,ds\\&=a+\int\limits_0^tF(v(s))\,ds\end{aligned}

where we have used the continuity of $F$. This shows that the limiting function $v$ does indeed satisfy the integral equation, and thus the original initial value problem.
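To see the geometric decay of $\lVert v_{i+1}-v_i\rVert_\infty$ in action, here’s a small numerical sketch in Python; the equation $v'=v$ with $v(0)=1$, whose solution is $e^t$, and the grid sizes are my own illustrative choices:

```python
import math

def picard_step(F, a, ts, v):
    # v_{i+1}(t) = a + int_0^t F(v_i(s)) ds, via a cumulative trapezoid rule
    out = [a]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        out.append(out[-1] + 0.5 * dt * (F(v[i]) + F(v[i - 1])))
    return out

F = lambda x: x           # v' = v with v(0) = 1, so v(t) = e^t and K = 1
a, c, n = 1.0, 0.5, 400   # cK = 0.5 < 1
ts = [c * i / n for i in range(n + 1)]

v = [a] * (n + 1)
sups = []
for _ in range(8):
    v_next = picard_step(F, a, ts, v)
    sups.append(max(abs(x - y) for x, y in zip(v_next, v)))
    v = v_next

# Successive sup-norm differences shrink at least geometrically (ratio cK).
for s_prev, s_next in zip(sups, sups[1:]):
    assert s_next <= 0.5 * s_prev + 1e-12

# And the iterates approach the true solution e^t.
err = max(abs(v[i] - math.exp(ts[i])) for i in range(n + 1))
assert err < 1e-4
```

In fact, for this example the decay is much faster than $(cK)^i$, since each iterate picks up one more term of the Taylor series of $e^t$; the geometric rate is just the guarantee.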

May 6, 2011

## The Picard Iteration

Now we can start actually closing in on a solution to our initial value problem. Recall the setup:

\displaystyle\begin{aligned}v'(t)&=F(v(t))\\v(0)&=a\end{aligned}

The first thing we’ll do is translate this into an integral equation. Integrating both sides of the first equation and using the second equation we find

$\displaystyle v(t)=a+\int\limits_0^tF(v(s))\,ds$

Conversely, if $v$ satisfies this equation then clearly it satisfies the two conditions in our initial value problem.

Now the nice thing about this formulation is that it expresses $v$ as the fixed point of a certain operation. To find it, we will use an iterative method. We start with $v_0(t)=a$ and define the “Picard iteration”

$\displaystyle v_{i+1}(t)=a+\int\limits_0^tF(v_i(s))\,ds$

This is sort of like Newton’s method, where we express the point we’re looking for as the fixed point of a function, and then find the fixed point by iterating that very function.

The one catch is, how are we sure that this is well-defined? What could go wrong? Well, how do we know that $v_i(s)$ is in the domain of $F$? We have to make some choices to make sure this works out.

First, let $B_\rho$ be the closed ball of radius $\rho$ centered on $a$. We pick $\rho$ so that $F$ satisfies a Lipschitz condition on $B_\rho$ with some constant $K$, which we know we can do because $F$ is locally Lipschitz. Since this is a closed ball and $F$ is continuous, we can find an upper bound $M\geq\lVert F(x)\rVert$ for $x\in B_\rho$. Finally, we pick a $c<\min\left(\frac{\rho}{M},\frac{1}{K}\right)$ and set $J=[-c,c]$. I assert that $v_i:J\to B_\rho$ is well-defined.

First of all, $v_0(t)=a\in B_\rho$ for all $t\in J$, so that’s good. We now assume that $v_i$ is well-defined and prove that $v_{i+1}$ is as well. It’s clearly well-defined as a function, since $v_i(t)\in B_\rho$ by assumption, and $B_\rho$ is contained within the domain of $F$. The integral makes sense since the integrand is continuous, and then we can add $a$. But is $v_{i+1}(t)\in B_\rho$?

So we calculate

\displaystyle\begin{aligned}\left\lVert\int\limits_0^tF(v_i(s))\,ds\right\rVert&\leq\int\limits_0^t\lVert F(v_i(s))\rVert\,ds\\&\leq\int\limits_0^tM\,ds\\&\leq Mc\\&\leq\rho\end{aligned}

which shows that the difference between $v_{i+1}(t)$ and $a$ has length at most $\rho$ for any $t\in J$. Thus $v_{i+1}:J\to B_\rho$, as asserted, and the Picard iteration is well-defined.
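As a concrete check of this setup, here’s a sketch in Python for $v'=v^2+1$ with $v(0)=0$, whose solution $\tan(t)$ does blow up in finite time. The constants $\rho$, $M$, $K$, and $c$ are chosen as in the argument above; for simplicity the grid covers only the right half $[0,c]$ of $J$, the left half being symmetric. Every iterate stays inside $B_\rho$:

```python
import math

F = lambda x: x * x + 1.0   # v' = v^2 + 1, v(0) = 0; the solution is tan(t)
a, rho = 0.0, 1.0           # B_rho = [-1, 1]
M, K = 2.0, 2.0             # |F| <= M and |F'| = |2x| <= K on B_rho
c = 0.4                     # c < min(rho / M, 1 / K) = 1/2

n = 200
ts = [c * i / n for i in range(n + 1)]

def picard_step(v):
    # v_{i+1}(t) = a + int_0^t F(v_i(s)) ds, cumulative trapezoid rule
    out = [a]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        out.append(out[-1] + 0.5 * dt * (F(v[i]) + F(v[i - 1])))
    return out

v = [a] * (n + 1)
for _ in range(10):
    v = picard_step(v)
    # Each iterate stays inside the closed ball B_rho, as the argument asserts.
    assert max(abs(x - a) for x in v) <= rho

# The iterates are in fact converging to tan(t) on this interval.
err = max(abs(v[i] - math.tan(ts[i])) for i in range(n + 1))
assert err < 1e-3
```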

May 5, 2011

## Continuously Differentiable Functions are Locally Lipschitz

It turns out that our existence proof will actually hinge on our function satisfying a Lipschitz condition. So let’s show that we will have this property anyway.

More specifically, we are given a $C^1$ function $f:U\to\mathbb{R}^n$ defined on an open region $U\subseteq\mathbb{R}^n$. We want to show that around any point $p\in U$ we have some neighborhood $N$ where $f$ satisfies a Lipschitz condition. That is, there is a constant $K$ such that for all $x$ and $y$ in the neighborhood $N$ we have the inequality

$\displaystyle\lVert f(y)-f(x)\rVert\leq K\lVert y-x\rVert$

We don’t have to use the same $K$ for each neighborhood, but every point should have a neighborhood with some $K$.

Infinitesimally, this is obvious. The differential $df(p):\mathbb{R}^n\to\mathbb{R}^n$ is a linear transformation. Since it goes between finite-dimensional vector spaces it’s bounded, which means we have an inequality

$\displaystyle\lVert df(p)v\rVert\leq\lVert df(p)\rVert_\text{op}\lVert v\rVert$

where $\lVert df(p)\rVert_\text{op}$ is the operator norm of $df(p)$. What this lemma says is that if the function is $C^1$ we can make this work out over finite distances, not just for infinitesimal displacements.

So, given our point $p$ let $B_\epsilon$ be the closed ball of radius $\epsilon$ around $p$, and choose $\epsilon$ so small that $B_\epsilon$ is contained within $U$. Since the function $df$, which takes points to the space of linear operators, is continuous by our assumption, the function $p\mapsto\lVert df(p)\rVert_\text{op}$ is continuous as well. The extreme value theorem tells us that since $B_\epsilon$ is compact this continuous function must attain a maximum on it, which we call $K$.

The ball is also “convex”, meaning that given points $x$ and $y$ in the ball the whole segment $x+t(y-x)$ for $0\leq t\leq1$ is contained within the ball. We define a function $g(t)=f(x+t(y-x))$ and use the chain rule to calculate

$\displaystyle g'(t)=df(x+t(y-x))\frac{d}{dt}(x+t(y-x))=df(x+t(y-x))(y-x)$

Then we calculate

\displaystyle\begin{aligned}f(y)-f(x)&=g(1)-g(0)\\&=\int\limits_0^1g'(t)\,dt\\&=\int\limits_0^1df(x+t(y-x))(y-x)\,dt\end{aligned}

And from this we conclude

\displaystyle\begin{aligned}\lVert f(y)-f(x)\rVert&=\left\lVert\int\limits_0^1df(x+t(y-x))(y-x)\,dt\right\rVert\\&\leq\int\limits_0^1\lVert df(x+t(y-x))(y-x)\rVert\,dt\\&\leq\int\limits_0^1\lVert df(x+t(y-x))\rVert_\text{op}\lVert(y-x)\rVert\,dt\\&\leq\int\limits_0^1K\lVert(y-x)\rVert\,dt\\&=K\lVert(y-x)\rVert\end{aligned}

That is, the separation between the outputs is expressible as an integral, the integrand of which is bounded by our infinitesimal result above. Integrating up we get the bound we seek.
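The lemma is easy to test numerically. In this sketch (Python) the map $f$, the size $\epsilon$, and the use of a square neighborhood in place of the ball (it’s equally compact and convex) are all my own illustrative choices; the Frobenius norm of the Jacobian dominates the operator norm, so its maximum over the neighborhood serves as the constant $K$:

```python
import math, random

def f(x, y):
    # A concrete C^1 map R^2 -> R^2, chosen purely for illustration.
    return (math.sin(x) * math.cos(y), x * y)

def jac_frobenius(x, y):
    # Frobenius norm of the analytic Jacobian of f; it dominates the
    # operator norm, so any bound on it is also a valid Lipschitz constant.
    rows = [(math.cos(x) * math.cos(y), -math.sin(x) * math.sin(y)),
            (y, x)]
    return math.sqrt(sum(e * e for r in rows for e in r))

eps = 0.5   # the square [-eps, eps]^2: compact and convex, like the ball

# K = maximum of the Jacobian norm over a grid on the neighborhood.
grid = [-eps + 2 * eps * i / 20 for i in range(21)]
K = max(jac_frobenius(x, y) for x in grid for y in grid)

# Check the finite inequality ||f(y) - f(x)|| <= K ||y - x|| on random pairs.
random.seed(0)
for _ in range(1000):
    p = (random.uniform(-eps, eps), random.uniform(-eps, eps))
    q = (random.uniform(-eps, eps), random.uniform(-eps, eps))
    assert math.dist(f(*p), f(*q)) <= K * math.dist(p, q) + 1e-9
```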

May 4, 2011

## The Existence and Uniqueness Theorem of Ordinary Differential Equations (statement)

I have to take a little detour for now to prove an important result: the existence and uniqueness theorem of ordinary differential equations. This is one of those hard analytic nubs that differential geometry takes as a building block, but it still needs to be proven once before we can get back away from this analysis.

Anyway, we consider a continuously differentiable function $F:U\to\mathbb{R}^n$ defined on an open region $U\subseteq\mathbb{R}^n$, and the initial value problem:

\displaystyle\begin{aligned}v'(t)&=F(v(t))\\v(0)&=a\end{aligned}

for some fixed initial value $a\in U$. I say that there is a unique solution to this problem, in the sense that there is some interval $(-c,c)$ around $0$ and a unique function $v:(-c,c)\to\mathbb{R}^n$ satisfying both conditions.

In fact, more is true: the solution varies continuously with the starting point. That is, there is an interval $I$ around $0\in\mathbb{R}$, some neighborhood $W$ of $a$ and a continuously differentiable function $\psi:I\times W\to U$ called the “flow” of the system defined by the differential equation $v'=F(v)$, which satisfies the two conditions:

\displaystyle\begin{aligned}\frac{\partial}{\partial t}\psi(t,u)&=F(\psi(t,u))\\\psi(0,u)&=u\end{aligned}

Then for any $w\in W$ we can get a curve $v_w:I\to U$ defined by $v_w(t)=\psi(t,w)$. The two conditions on the flow then tell us that $v_w$ is a solution of the initial value problem with initial value $w$.
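For a linear system $F(v)=Av$ the flow is available in closed form, $\psi(t,u)=e^{tA}u$, which makes the two conditions easy to check directly. Here’s a sketch in Python with $A$ the generator of rotations, my own illustrative choice, so that $e^{tA}$ is rotation by the angle $t$:

```python
import math

# For F(v) = Av with A = [[0, -1], [1, 0]], the flow psi(t, u) = e^{tA} u
# is rotation of the plane by angle t.
def F(v):
    x, y = v
    return (-y, x)

def psi(t, u):
    x, y = u
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

u = (1.0, 2.0)

# Condition 2: psi(0, u) = u.
assert psi(0.0, u) == u

# Condition 1: d/dt psi(t, u) = F(psi(t, u)), checked by central differences.
h = 1e-6
for t in [0.0, 0.3, 1.0, 2.5]:
    p_plus, p_minus = psi(t + h, u), psi(t - h, u)
    dpsi = tuple((a - b) / (2 * h) for a, b in zip(p_plus, p_minus))
    rhs = F(psi(t, u))
    assert all(abs(a - b) < 1e-6 for a, b in zip(dpsi, rhs))
```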

This will take us a short while, but then we can put it behind us and get back to differential geometry. Incidentally, the approach I will use generally follows that of Hirsch and Smale.

May 4, 2011

## Submersions

Another quick definition: we say that a smooth map $f:M^m\to N^n$ is a “submersion” if it is surjective, and if every point $p\in M$ is a regular point of $f$. Despite the similarity of the terms “immersion” and “submersion”, these are very different concepts, so be careful to keep them separate.

The nice thing about submersions is that every point of $N$ is a regular value, and every one has a nonempty preimage. Thus our extension of the implicit function theorem applies to show that $f^{-1}(q)$ is an $(m-n)$-dimensional submanifold of $M$.

One obvious example of a submersion is the projection from a product manifold. As we’ve seen, the derivative of this projection is always a surjection; in fact, it’s a projection itself.

Another example is the projection from the tangent bundle $\mathcal{T}M$ down to its base manifold $M$. Indeed, given any tangent vector $v$ at $p$ we can pick a coordinate patch $U\subseteq M$ around $p$ and the corresponding patch $\pi^{-1}(U)$ of $\mathcal{T}M$. Within these coordinates we can easily calculate the derivative of $\pi$ and see that it’s just a projection onto the first $\dim(M)$ components, which is surjective. In this case, the preimages $\pi^{-1}(p)$ are the stalks of the tangent bundle $\mathcal{T}_pM$.
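The coordinate computation can be mimicked numerically. In this sketch (Python) the map forgetting the last coordinate stands in for $\pi$ in a chart, with two base coordinates and one fiber coordinate, all my own illustrative choices; its derivative comes out as the constant matrix $\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}$, a projection onto the first two components, hence surjective:

```python
def proj(p):
    # Coordinate picture of the bundle projection: forget the fiber coordinate.
    x, y, z = p
    return (x, y)

def jacobian(f, p, m, h=1e-6):
    # Numerical Jacobian of f: R^3 -> R^m at p, by central differences.
    J = []
    for i in range(m):
        row = []
        for j in range(3):
            q_plus = list(p); q_plus[j] += h
            q_minus = list(p); q_minus[j] -= h
            row.append((f(q_plus)[i] - f(q_minus)[i]) / (2 * h))
        J.append(row)
    return J

# At a handful of sample points the derivative is [[1,0,0],[0,1,0]]:
# itself a projection, and in particular surjective onto R^2.
for p in [(0.0, 0.0, 0.0), (1.0, -2.0, 3.0), (0.5, 0.5, -7.0)]:
    J = jacobian(proj, p, 2)
    expected = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    assert all(abs(J[i][j] - expected[i][j]) < 1e-5
               for i in range(2) for j in range(3))
```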

May 2, 2011