The Unapologetic Mathematician

Mathematics for the interested outsider

Differentiability Implies Continuity

In the course of showing that the differential of a function at a point — if it exists at all — is unique (and thus we can say “the” differential), we showed that given an orthonormal basis we have all partial derivatives. We even have all directional derivatives, with pretty much the same proof: we replace e_k with an arbitrary vector u, and pick the scalar \tau so that 0<\lvert\tau\rvert<\frac{\delta}{\lVert u\rVert}, which keeps \lVert\tau u\rVert<\delta. We find that \left[D_uf\right](x)=df(x;u)=df(x)u. So when a function is differentiable, not only do all directional derivatives exist, they’re all given by a single linear functional applied to the direction vector. Notice that this does not hold for the pathological example we used to show that having all directional derivatives didn’t imply continuity.
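To make this concrete, here is a quick numerical sketch (my own illustration, not part of the original argument), using an example function f(x,y)=x^2+3xy of my choosing: the difference quotient along any direction u converges to the single linear functional df(x), represented by the row vector of partial derivatives, applied to u.

import numpy as np

def f(p):
    # an example differentiable function of two variables (my choice)
    x, y = p
    return x**2 + 3*x*y

def df(p):
    # the differential df(x) as a row vector of partial derivatives
    x, y = p
    return np.array([2*x + 3*y, 3*x])

x = np.array([1.0, 2.0])
u = np.array([0.6, 0.8])    # an arbitrary direction vector

tau = 1e-6                  # a small nonzero scalar
quotient = (f(x + tau*u) - f(x)) / tau   # approximates [D_u f](x)
linear = df(x) @ u                       # df(x) applied to u

print(quotient, linear)     # agree up to O(tau): both about 7.2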

Okay, so now does having a differential at a point imply that a function is continuous there? Remember that this was the major reason we rejected both partial and directional derivatives as insufficient generalizations of differentiation in one variable. But, happily, it does. First, we’re going to pick a basis and show that the function satisfies a Lipschitz condition (five minutes of furtive laughter in any advanced calculus class starts…. now) (it’s worse than doing quantum mechanics with bras in front of high schoolers). That is to say, there is a positive number M and some neighborhood N of 0 so that if t\in N but t\neq0, then \lvert f(x+t)-f(x)\rvert<M\lVert t\rVert. Or, in more conceptual terms: any displacement near enough to 0 gets stretched by a factor of at most M when run through f. This gives us some control on what the function does as we move our input point around.

So, first we take \epsilon=1 in the definition of the differential. This gives us some \delta>0 (the ball of radius \delta is the neighborhood N we want) so that for 0<\lVert t\rVert<\delta we have

\displaystyle\lvert\left[f(x+t)-f(x)\right]-df(x;t)\rvert<\lVert t\rVert

We can add \lvert df(x;t)\rvert to both sides and use the triangle inequality to find

\displaystyle\lvert f(x+t)-f(x)\rvert\leq\lvert\left[f(x+t)-f(x)\right]-df(x;t)\rvert+\lvert df(x;t)\rvert<\lvert df(x;t)\rvert+\lVert t\rVert

But once we pick a basis, we can write out the differential as

\displaystyle\lvert df(x;t)\rvert=\left\lvert\sum\limits_{i=1}^n\left[D_{e_i}f\right](x)t^i\right\rvert\leq\sum\limits_{i=1}^n\left\lvert\left[D_{e_i}f\right](x)\right\rvert\lvert t^i\rvert\leq\left(\sum\limits_{i=1}^n\left\lvert\left[D_{e_i}f\right](x)\right\rvert\right)\lVert t\rVert

I’ve written out the sum explicitly here because the last step needs it: each component satisfies \lvert t^i\rvert\leq\lVert t\rVert, so we can bound every term and pull out a single factor of \lVert t\rVert. So if we pick

\displaystyle M=1+\sum\limits_{i=1}^n\left\lvert\left[D_{e_i}f\right](x)\right\rvert

then we have the Lipschitz condition we want.
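Here’s a numerical sanity check of the argument (again my own sketch with the same illustrative f as above, not the post’s): we compute M=1+\sum_i\lvert\left[D_{e_i}f\right](x)\rvert and confirm both the \epsilon=1 instance of the definition of the differential and the resulting Lipschitz bound on many small displacements.

import numpy as np

def f(p):
    x, y = p
    return x**2 + 3*x*y

x = np.array([1.0, 2.0])
partials = np.array([2*x[0] + 3*x[1], 3*x[0]])  # [D_{e_i}f](x) = (8, 3)
M = 1 + np.abs(partials).sum()                  # M = 1 + 8 + 3 = 12

rng = np.random.default_rng(0)
for _ in range(10_000):
    t = rng.uniform(-1e-3, 1e-3, size=2)        # small displacement near 0
    nt = np.linalg.norm(t)
    if nt == 0:
        continue
    # the epsilon = 1 instance of the definition of the differential
    assert abs(f(x + t) - f(x) - partials @ t) < nt
    # the Lipschitz condition with our chosen M
    assert abs(f(x + t) - f(x)) < M * nt
print("both bounds hold on all samples; M =", M)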

And then it just so happens that a Lipschitz condition implies continuity. Indeed, given \epsilon>0, pick \delta>0 small enough that \delta<\frac{\epsilon}{M} and the ball of radius \delta fits inside the neighborhood N from the Lipschitz condition. Then for 0<\lVert t\rVert<\delta we find

\displaystyle\lvert f(x+t)-f(x)\rvert<M\lVert t\rVert<M\delta<M\frac{\epsilon}{M}=\epsilon

and we have continuity.
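The \epsilon-\delta bookkeeping can be traced in the same sketch (continuing with my illustrative f, the value M=12 computed above, and a neighborhood of radius 10^{-3}):

import numpy as np

def f(p):
    x, y = p
    return x**2 + 3*x*y

x = np.array([1.0, 2.0])
M, radius = 12.0, 1e-3      # M from above; radius of the neighborhood N

eps = 1e-4
delta = 0.99 * min(eps / M, radius)   # strictly below eps/M; ball fits in N

rng = np.random.default_rng(1)
for _ in range(10_000):
    t = rng.uniform(-delta, delta, size=2)
    if 0 < np.linalg.norm(t) < delta:
        assert abs(f(x + t) - f(x)) < eps   # continuity at x
print("continuity check passed; delta =", delta)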

Now, to really understand this, go back and walk it through with a function of one variable. See if you can find where the old proof that having a derivative implies continuity is sitting inside this Lipschitz condition proof.

September 30, 2009 | Analysis, Calculus