# The Unapologetic Mathematician

## Oscillation

Oscillation in a function is sort of a local and non-directional version of variation. If $f:X\rightarrow\mathbb{R}$ is a bounded function on some region $X\subseteq\mathbb{R}^n$, and if $T$ is a nonempty subset of $X$, then we define the oscillation of $f$ on $T$ by the formula

$\displaystyle\Omega_f(T)=\sup\limits_{x,y\in T}\left\{f(y)-f(x)\right\}$

measuring the greatest difference in values of $f$ on $T$.

We also want a version that’s localized to a single point $x\in X$. To do this, we first note that the collection of all subsets $N$ of $X$ which contain $x$ form a poset as usual by inclusion. But we want to reverse this order and say that $M\preceq N$ if and only if $N\subseteq M$.

Now for any two subsets $x\in N_1\subseteq X$ and $x\in N_2\subseteq X$, their intersection $N_1\cap N_2$ is another such subset containing $x$. And since it’s contained in both $N_1$ and $N_2$, it’s above both of them in our partial order, which makes this poset a directed set, and the oscillation of $f$ is a net.

In fact, it’s easy to see that if $N\subseteq M$ then $\Omega_f(N)\leq\Omega_f(M)$, so this net is monotonically decreasing as the subset gets smaller and smaller. Further, we can see that $\Omega_f(N)\geq0$, since we can always consider the difference $f(t)-f(t)=0$ for any $t\in N$, and so the supremum must be at least this big.

Anyhow, a monotonically decreasing net that is bounded below must converge, so now we know that the net has a limit, and we define

$\displaystyle\omega_f(x)=\lim\Omega_f(N)$

where $N$ is a subset of $X$ containing $x$, and we take the limit as $N$ gets smaller and smaller.

In fact, this is slightly overdoing it. Our domain is a topological subspace of $\mathbb{R}^n$, and is thus a metric space. If we want we can just work with metric balls and define

$\displaystyle\omega_f(x)=\lim\limits_{r\rightarrow0^+}\Omega_f(N(x;r)\cap X)$

where $N(x;r)$ is the ball of radius $r$ around $x$. These definitions are exactly equivalent in metric spaces, but the net definition works in more general topological spaces, and it’s extremely useful in its own right later, so it’s worth thinking about now.
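As a quick sanity check, we can estimate $\omega_f(x)$ numerically by computing the oscillation over smaller and smaller sampled balls. This is only a sketch (the supremum is taken over a finite sample of each ball, and the names `oscillation` and `omega_at` are mine), but it shows a jump discontinuity registering a positive oscillation while a continuous function’s oscillation shrinks to zero.

```python
import math

def oscillation(f, points):
    """Omega_f(T), approximated: sup of f(y) - f(x) over a finite sample of T."""
    values = [f(p) for p in points]
    return max(values) - min(values)

def omega_at(f, x, r0=1.0, halvings=20, samples=401):
    """Estimate omega_f(x): the oscillation over the ball N(x; r) as r -> 0+."""
    est = float("inf")
    r = r0
    for _ in range(halvings):
        ball = [x - r + 2 * r * k / (samples - 1) for k in range(samples)]
        est = oscillation(f, ball)  # non-increasing as the ball shrinks
        r /= 2
    return est

step = lambda t: 1.0 if t >= 0 else 0.0  # jump discontinuity at 0
```

Here `omega_at(step, 0.0)` comes out as $1$, matching the size of the jump, while `omega_at(math.sin, 0.0)` is tiny, consistent with continuity at ${0}$.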

Oscillation provides a nice way to restate our condition for continuity, and it works either using the metric space definition or the neighborhood definition of continuity. I’ll work it out in the latter case for generality, but it’s worth writing out the parallel proof for the $\epsilon$-$\delta$ definition.

Our assertion is that $f$ is continuous at a point $x$ if and only if $\omega_f(x)=0$. If $f$ is continuous, then for every $\epsilon$ there is some neighborhood $N$ of $x$ so that $\lvert f(y)-f(x)\rvert<\frac{\epsilon}{3}$ for all $y\in N$. Then we can check that

$f(y_2)-f(y_1)=\left(f(y_2)-f(x)\right)+\left(f(x)-f(y_1)\right)<\frac{\epsilon}{3}+\frac{\epsilon}{3}=\frac{2}{3}\epsilon<\epsilon$

for all $y_1$ and $y_2$ in $N$, and so $\Omega_f(N)<\epsilon$. Further, any smaller neighborhood of $x$ will also satisfy this inequality, so the net is eventually within $\epsilon$ of ${0}$. Since this holds for any $\epsilon$, we find that the net has limit ${0}$.

Conversely, let’s assume that the oscillation of $f$ at $x$ is zero. That is, for any $\epsilon$ we have some neighborhood $N$ of $x$ so that $\Omega_f(N)<\frac{\epsilon}{2}$, and the same will automatically hold for smaller neighborhoods. This tells us that $f(y)-f(x)<\epsilon$ for all $y\in N$, and also $f(x)-f(y)<\epsilon$. Together, these tell us that $\lvert f(y)-f(x)\rvert<\epsilon$, and so $f$ is continuous at $x$.

December 7, 2009 Posted by | Analysis, Calculus | 4 Comments

## Upper and Lower Integrals and Riemann’s Condition

Yesterday we defined the Riemann integral of a multivariable function defined on an interval $[a,b]\subseteq\mathbb{R}^n$. We want to move towards some understanding of when a given function $f$ is integrable on a given interval $[a,b]$.

First off, we remember how we set up Darboux sums. These were given by prescribing specific methods of tagging a given partition $P$. In one, we always picked the point in the subinterval where $f$ attained its maximum within that subinterval, and in the other we always picked the point where $f$ attained its minimum. We even extended these to the Riemann-Stieltjes case and built up upper and lower integrals. And we can do the same thing again.

Given a partition $P$ of $[a,b]$ and a function $f$ defined on $[a,b]$, we define the upper Riemann sum by

$\displaystyle U_P(f)=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\mathrm{vol}(I_{i_1\dots i_n})$

In each subinterval we pick a sample point which gives the largest possible sample function value in that subinterval. We similarly define a lower Riemann sum by

$\displaystyle L_P(f)=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\mathrm{vol}(I_{i_1\dots i_n})$

As before, any Riemann sum must fall between these upper and lower sums, since the value of the function on each subinterval is somewhere between its maximum and minimum.

Just like when we did this for single-variable Riemann-Stieltjes integrals, we find that these nets are monotonic. That is, if $P'$ is a refinement of $P$, then $U_{P'}(f)\leq U_P(f)$ and $L_{P'}(f)\geq L_P(f)$. As we refine the partition, the upper sum can only get smaller and smaller, while the lower sum can only get larger and larger. And so we define

\displaystyle\begin{aligned}\overline{I}_{[a,b]}(f)&=\inf\limits_{P\in\mathcal{P}[a,b]}\{U_P(f)\}\\\underline{I}_{[a,b]}(f)&=\sup\limits_{P\in\mathcal{P}[a,b]}\{L_P(f)\}\end{aligned}

The upper integral is the infimum of the upper sums, while the lower integral is the supremum of the lower sums.

Again, as before we find that the upper integral is convex (subadditive) over its integrand, while the lower integral is concave (superadditive)

\displaystyle\begin{aligned}\overline{I}_{[a,b]}(f+g)&\leq\overline{I}_{[a,b]}(f)+\overline{I}_{[a,b]}(g)\\\underline{I}_{[a,b]}(f+g)&\geq\underline{I}_{[a,b]}(f)+\underline{I}_{[a,b]}(g)\end{aligned}

and if we break up an interval into a collection of nonoverlapping subintervals, the upper and lower integrals over the large interval are the sums of the upper and lower integrals over each of the subintervals, respectively.

And, finally, we have Riemann’s condition. The function $f$ satisfies Riemann’s condition on $[a,b]$ if we can make upper and lower sums arbitrarily close; that is, if for every $\epsilon>0$ there is some partition $P_\epsilon$ so that $U_{P_\epsilon}(f)-L_{P_\epsilon}(f)<\epsilon$. In this case, the upper and lower integrals will coincide, and we can show that $f$ is actually integrable over $[a,b]$. The proof is almost exactly the same one we gave before, and so I’ll just refer you back there.
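To see Riemann’s condition in action, here’s a small numerical sketch (the function names are mine, and I cheat a little by taking the max and min over the four corners of each subinterval, which is only valid because $xy$ is monotone in each variable separately). For $f(x,y)=xy$ on $[0,1]^2$ with the $m\times m$ grid partition, the gap $U_P(f)-L_P(f)$ works out to exactly $1/m$, so it can be made as small as we please.

```python
def upper_lower_sums(f, m):
    """Upper and lower sums for f on [0,1]^2 over the m-by-m grid partition.
    The max/min over each subinterval is taken over its four corners, which
    is exact here because f(x, y) = x*y is monotone in each variable."""
    x = [k / m for k in range(m + 1)]
    vol = (1.0 / m) ** 2
    U = L = 0.0
    for i in range(m):
        for j in range(m):
            corners = [f(x[i], x[j]), f(x[i + 1], x[j]),
                       f(x[i], x[j + 1]), f(x[i + 1], x[j + 1])]
            U += max(corners) * vol
            L += min(corners) * vol
    return U, L

f = lambda x, y: x * y  # its integral over [0,1]^2 is 1/4
```

Doubling $m$ refines the partition (every old cut point is kept), so the upper sums fall, the lower sums rise, and their gap halves.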

December 2, 2009 Posted by | Analysis, Calculus | 9 Comments

## Higher-Dimensional Riemann Integrals

Our coverage of multiple integrals will actually parallel our earlier coverage of Riemann integrals pretty closely. Only now we have to change our notion of “interval” to a higher-dimensional version.

For the moment, the only shapes we’ll integrate over will be closed rectangular $n$-dimensional parallelepipeds with sides parallel to the coordinate axes in $\mathbb{R}^n$. In one dimension, these are just coordinate intervals, but there aren’t many other obvious shapes to use in that case. In two dimensions, we are (for the moment) throwing out circles, ellipses, triangles, and anything else that’s not a rectangle; and even rectangles that are tilted with respect to the coordinate axes! Don’t worry, we’ll get them back later.

Anyway, such a shape in $n$-dimensional space is the product of $n$ closed intervals:

$\displaystyle[a^1,b^1]\times\dots\times[a^n,b^n]=\{(x^1,\dots,x^n)\in\mathbb{R}^n\vert a^1\leq x^1\leq b^1,\dots,a^n\leq x^n\leq b^n\}$

In this context, we’ll write this as $[a,b]$ for $a=(a^1,\dots,a^n)$ and $b=(b^1,\dots,b^n)$. This looks identical to, but is not to be confused with, the closed line segment from $a$ to $b$ we used in the mean value theorem. I’ll try to keep them separate by always referring to this rectangular parallelepiped as an “interval” and never using that term for a line segment. Of course, in one dimension the two are exactly the same.

Now we define a partition of an interval. We’ll do this just by partitioning each side of the interval. That is, we pick points

\displaystyle\begin{aligned}a^1={x^1}_0<&\dots<{x^1}_{m_1}=b^1\\&\vdots\\a^n={x^n}_0<&\dots<{x^n}_{m_n}=b^n\end{aligned}

where we cut the $i$th side into $m_i$ pieces, and each of the $m_i$ could be different. If we index the subintervals of the partition by $1\leq i_1\leq m_1$ through $1\leq i_n\leq m_n$, we can write

$\displaystyle I_{i_1\dots i_n}=[{x^1}_{i_1-1},{x^1}_{i_1}]\times\dots\times[{x^n}_{i_n-1},{x^n}_{i_n}]$

picking out the $i_k$th slice of the $k$th side. This cuts the whole interval up into a bunch of smaller pieces, which are themselves rectangular parallelepipeds. Again, we have the notion of a tagged partition, which picks out a sample point $t_{i_1\dots i_n}\in I_{i_1\dots i_n}$ for each subinterval. And again we say that one tagged partition is a “refinement” of another one if every partition point of the coarser partition is also one in the finer partition on the same side, and if every sample point in the coarser partition is one in the finer partition as well.

There’s a lot of numbers here and a lot of notation, but relax: most of it is going to go away very quickly. Go back and look at how partitions, tagged partitions, and refinements were defined in one variable, and just think of partitioning (and refining) the one-dimensional interval that makes up each side of the $n$-dimensional interval. Of course, like I said before, the number of parts on each side might have nothing to do with each other. Further, there’s no reason for the tags to have anything to do with each other. One might think of tagging the partition on each side of the interval, and then letting the sample points in the subintervals have those tags as coordinates. This is a perfectly valid way to come up with a collection of sample points, but the sample points don’t have to arise in this manner at all, so long as there’s exactly one within each subinterval.

So, just like in the one-dimensional case, the collection $\mathcal{P}[a,b]$ of all tagged partitions of an interval $[a,b]$ form a directed set, where we say that $P\preceq P'$ if $P'$ is a refinement of $P$. And again we define nets on this directed set; given a function $f$ defined on the interval $[a,b]$ and a partition $P\in\mathcal{P}[a,b]$, we define the Riemann sum

$\displaystyle f_P=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}f(t_{i_1\dots i_n})\mathrm{vol}(I_{i_1\dots i_n})$

by sampling the function $f$ at the specified point $t_{i_1\dots i_n}$ in the subinterval $I_{i_1\dots i_n}$, multiplying by the $n$-dimensional volume of the subinterval (which is just the product of its side-lengths), and summing this over all the subintervals in $[a,b]$.
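Here’s a minimal sketch of such a Riemann sum in two dimensions (the names and the particular tagging functions are my own choices). Whatever sample points we pick in each subinterval, the sum is squeezed between the lower and upper sums and converges to the integral as the grid refines.

```python
import random

def riemann_sum(f, m, tag):
    """Riemann sum f_P for f over [0,1]^2 with an m-by-m grid partition.
    `tag` picks the sample point in each subinterval, given its lower-left
    corner (x, y) and its side length h."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        for j in range(m):
            tx, ty = tag(i * h, j * h, h)
            total += f(tx, ty) * h * h  # f(sample) times vol(subinterval)
    return total

f = lambda x, y: x + y  # its integral over [0,1]^2 is 1
midpoint = lambda x, y, h: (x + h / 2, y + h / 2)
random_tag = lambda x, y, h: (x + h * random.random(), y + h * random.random())
```

For this (linear) integrand the midpoint tags give the exact answer, and even completely random tags land within the upper/lower gap of $2/m$.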

And again, as before, if this net converges to a limiting value $s$, we say that $f$ is Riemann integrable on the interval $[a,b]$ and we write

$\displaystyle\int\limits_{[a,b]}f\,dx=\int\limits_{[a,b]}f(x)\,dx=\int\limits_{[a,b]}f(x^1,\dots,x^n)\,d(x^1,\dots,x^n)=s$

in three different notations, depending on what we want to emphasize. The first emphasizes the function as an object in and of itself; the second, the function’s dependence on the vector variable $x$; and the third, the function’s dependence on each individual real variable. In two and three dimensions, we often write the “double integral” and “triple integral” as

\displaystyle\begin{aligned}\iint\limits_{[a,b]}&f(x)\,dx\\\iiint\limits_{[a,b]}&f(x)\,dx\end{aligned}

as visual reminders that these are multiple integrals. This gets unwieldy very quickly, and so we usually just write one integral sign for each integration.

December 1, 2009 Posted by | Analysis, Calculus | 10 Comments

## Extrema with Constraints II

As we said last time, we have an idea for a necessary condition for finding local extrema subject to constraints of a certain form. To be explicit, we assume that $f:X\rightarrow\mathbb{R}$ is continuously differentiable on an open region $X\subset\mathbb{R}^n$, and we also assume that $g:X\rightarrow\mathbb{R}^m$ is a continuously differentiable vector-valued function on the same region (and that $m<n$). We use $g$ to define the region $A\subseteq X$ consisting of those points $x\in X$ so that $g(x)=0$. We assume that $a\in A$ is a point with a neighborhood $N$ so that for all $x\in N\cap A$ we have $f(x)\leq f(a)$ (or $f(x)\geq f(a)$ for all such $x$). And, finally, we assume that the $m\times m$ determinant

$\displaystyle\det\left(\frac{\partial g^i}{\partial x^j}\right)$

is nonzero at $a$. Then we have reason to believe that there will exist real numbers $\lambda_1,\dots,\lambda_m$, one for each component of the constraint function, so that the $n$ equations

$\displaystyle\frac{\partial f}{\partial x^j}+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}=0\qquad(j=1,\dots,n)$

are satisfied at $a$.

Well, first off we can solve the first $m$ equations and determine the $\lambda_i$ right off. Rewrite them as

$\displaystyle\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}\bigg\vert_a=-\frac{\partial f}{\partial x^j}\bigg\vert_a\qquad(j=1,\dots,m)$

a system of $m$ equations in the $m$ unknowns $\lambda_i$. Since the matrix has a nonzero determinant (by assumption) we can solve this system uniquely to determine the $\lambda_i$. What’s left is to verify that this choice of the $\lambda_i$ also satisfies the remaining $n-m$ equations.
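A tiny numerical illustration of this first step, with $n=2$, $m=1$, and a constraint and objective of my own choosing: maximizing $f(x,y)=x+y$ on the circle $g(x,y)=x^2+y^2-1=0$, the first equation determines $\lambda$, and the one remaining equation is then satisfied automatically at the extremum $a=(1/\sqrt{2},1/\sqrt{2})$.

```python
import numpy as np

# Toy instance with n = 2, m = 1: objective f(x, y) = x + y, constraint
# g(x, y) = x^2 + y^2 - 1 = 0, constrained maximum at a = (1, 1)/sqrt(2).
a = np.array([1.0, 1.0]) / np.sqrt(2.0)
grad_f = np.array([1.0, 1.0])  # (df/dx^1, df/dx^2) at a
grad_g = 2.0 * a               # (dg/dx^1, dg/dx^2) at a

# First m equations:  sum_i lambda_i dg^i/dx^j = -df/dx^j  for j = 1, ..., m.
A = np.array([[grad_g[0]]])    # the m-by-m matrix (dg^i/dx^j), here 1-by-1
lam = np.linalg.solve(A, np.array([-grad_f[0]]))

# The remaining n - m equations should now hold automatically at a:
residual = grad_f[1] + lam[0] * grad_g[1]
```

The nonzero-determinant hypothesis is exactly what makes the `solve` call legitimate.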

To take care of this, we’ll write $t^k=x^{m+k}$, so we can write the point $x=(x^1,\dots,x^n)$ as $(x';t)=(x^1,\dots,x^m;t^1,\dots,t^{n-m})$ and particularly $a=(a';b)$. Now we can invoke the implicit function theorem! We find an $n-m$-dimensional neighborhood $T$ of $b$ and a unique continuously differentiable function $h:T\rightarrow\mathbb{R}^m$ so that $h(b)=a'$ and $g(h(t);t)=0$ for all $t\in T$. Without loss of generality, we can choose $T$ so that $(h(t);t)\in N\cap A$, where $N$ is the neighborhood from the assumptions above.

This is the parameterization we discussed last time, and we can now substitute these functions into the function $f$. That is, we can define

\displaystyle\begin{aligned}F(t^1,\dots,t^{n-m})&=f\left(h^1(t^1,\dots,t^{n-m}),\dots,h^m(t^1,\dots,t^{n-m});t^1,\dots,t^{n-m}\right)\\G^i(t^1,\dots,t^{n-m})&=g^i\left(h^1(t^1,\dots,t^{n-m}),\dots,h^m(t^1,\dots,t^{n-m});t^1,\dots,t^{n-m}\right)\qquad(i=1,\dots,m)\end{aligned}

Or if we define $H(t)=(h(t);t)$ we can say this more succinctly as $F(t)=f(H(t))$ and $G^i(t)=g^i(H(t))$.

Anyhow, now all of these $G^i$ are identically zero on $T$ as a consequence of the implicit function theorem, and so each partial derivative $\frac{\partial G^i}{\partial t^j}$ is identically zero as well. But since the $G^i$ are composite functions we can also use the chain rule to evaluate these partial derivatives. We find

\displaystyle\begin{aligned}0&=\frac{\partial G^i}{\partial t^j}\\&=\sum\limits_{k=1}^n\frac{\partial g^i}{\partial x^k}\frac{\partial H^k}{\partial t^j}\\&=\sum\limits_{k=1}^m\frac{\partial g^i}{\partial x^k}\frac{\partial h^k}{\partial t^j}+\sum\limits_{k=m+1}^n\frac{\partial g^i}{\partial x^k}\frac{\partial t^{k-m}}{\partial t^j}\\&=\sum\limits_{k=1}^m\frac{\partial g^i}{\partial x^k}\frac{\partial h^k}{\partial t^j}+\sum\limits_{k=1}^{n-m}\frac{\partial g^i}{\partial x^{k+m}}\delta_j^k\\&=\sum\limits_{k=1}^m\frac{\partial g^i}{\partial x^k}\frac{\partial h^k}{\partial t^j}+\frac{\partial g^i}{\partial x^{j+m}}\end{aligned}

Similarly, since $F$ has a local extremum (as a function of the $t^j$) at $b$, its partial derivatives must vanish at that point. That is

$\displaystyle0=\frac{\partial F}{\partial t^j}\bigg\vert_{t=b}=\sum\limits_{k=1}^m\frac{\partial f}{\partial x^k}\bigg\vert_{x=H(b)}\frac{\partial h^k}{\partial t^j}\bigg\vert_{t=b}+\frac{\partial f}{\partial x^{j+m}}\bigg\vert_{x=H(b)}$

Now let’s take the previous equation involving $g^i$, evaluate it at $t=b$, multiply it by $\lambda_i$, sum over $i$, and add it to this latest equation. We find

$\displaystyle0=\sum\limits_{k=1}^m\left[\frac{\partial f}{\partial x^k}\bigg\vert_{x=H(b)}+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^k}\bigg\vert_{x=H(b)}\right]\frac{\partial h^k}{\partial t^j}\bigg\vert_{t=b}+\frac{\partial f}{\partial x^{j+m}}\bigg\vert_{x=H(b)}+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^{j+m}}\bigg\vert_{x=H(b)}$

Now, the expression in brackets is zero because that’s actually how we defined the $\lambda_i$ way back at the start of the proof! And then what remains is exactly the equations we need to complete the proof.

November 27, 2009 Posted by | Analysis, Calculus | 2 Comments

## Extrema with Constraints I

We can consider the problem of maximizing or minimizing a function, as we have been, but insisting that our solution satisfy some constraint.

For instance, we might have a function $f:\mathbb{R}^3\rightarrow\mathbb{R}$ to maximize, but we’re only concerned with unit-length vectors on $S^2\subseteq\mathbb{R}^3$. More generally, we’ll be concerned with constraints imposed by setting a function $g$ equal to zero. In the example, we might set $g(x,y,z)=x^2+y^2+z^2-1$. If we want to impose more conditions, we can make $g$ a vector-valued function with as many components as constraint functions we want to set equal to zero.

Now, we might be able to parameterize the collection of points satisfying $g(x)=0$. In the example, we could use the usual parameterization of the sphere by latitude and longitude, writing

\displaystyle\begin{aligned}x&=\sin(\theta)\cos(\phi)\\y&=\sin(\theta)\sin(\phi)\\z&=\cos(\theta)\end{aligned}

where I’ve used the physicists’ convention on the variables instead of the common one in multivariable calculus classes. Then we could plug these expressions for $x$, $y$, and $z$ into our function $f$, and get a composite function of the variables $\phi$ and $\theta$, which we can then attack with the tools from the last couple days, being careful about when we can and can’t trust Cauchy’s invariant rule, since the second differential can transform oddly.

Even setting that care aside, it may not be possible to parameterize the surface, or it may be extremely difficult. At least we do know that such a parameterization will often exist. Indeed, the implicit function theorem tells us that if we have $m$ continuously differentiable constraint functions $g^i$ whose zeroes describe a collection of points in an $n$-dimensional space $\mathbb{R}^n$, and the $m\times m$ determinant

$\displaystyle\det\left(\frac{\partial g^i}{\partial x^j}\right)$

is nonzero at some point $a\in\mathbb{R}^n$ satisfying $g(a)=0$, then we can “solve” these equations for the first $m$ variables as functions of the last $n-m$. This gives us exactly such a parameterization, and in principle we could use it. But the calculations get amazingly painful.

Instead, we want to think about this problem another way. We want to consider a point $a\in\mathbb{R}^n$ satisfying $g(a)=0$ which has a neighborhood $N$ so that for all $x\in N$ satisfying $g(x)=0$ we have $f(a)\geq f(x)$. This does not say that there are no nearby points to $a$ where $f$ takes on larger values, but it does say that to reach any such point we must leave the region described by $g(x)=0$.

Now, let’s think about this sort of heuristically. As we look in various directions $v$ from $a$, some of them are tangent to the region described by $g(x)=0$. These are the directions satisfying $[D_vg](a)=dg(a;v)=0$ — where to first order the value of $g$ is not changing in the direction of $v$. I say that in none of these directions can $f$ change (again, to first order) either. For if it did, either $f$ would increase in that direction or not. If it did, then we could find a path in the region where $g(x)=0$ along which $f$ was increasing, contradicting our assertion that we’d have to leave the region for this to happen. But if $f$ decreased to first order, then it would increase to first order in the opposite direction, and we’d have the same problem. That is, we must have $df(a;v)=0$ whenever $dg(a;v)=0$. And so we find that

$\mathrm{Ker}(dg(a))=\bigcap\limits_{i=1}^m\mathrm{Ker}(dg^i(a))\subseteq\mathrm{Ker}(df(a))$

The kernel of $df(a)$ consists of all vectors orthogonal to the gradient vector $\nabla f(a)$, and the line it spans is the orthogonal complement to the kernel. Similarly, the kernel of $dg(a)$ consists of all vectors orthogonal to each of the gradient vectors $\nabla g^i(a)$, and is thus the orthogonal complement to the entire subspace they span. The kernel of $dg(a)$ is contained in the kernel of $df(a)$, and orthogonal complements are order-reversing, which means that $\nabla f(a)$ must lie within the span of the $\nabla g^i(a)$. That is, there must be real numbers $\lambda_i$ so that

$\displaystyle\nabla f(a)=\sum\limits_{i=1}^m\lambda_i\nabla g^i(a)$

or, passing back to differentials

$\displaystyle df(a)=\sum\limits_{i=1}^m\lambda_idg^i(a)$

So in the presence of constraints we replace the condition $df(a)=0$ by this one. We call the $\lambda_i$ “Lagrange multipliers”, and for every one of these variables we add to the system of equations, we also add the constraint equation $g^i(a)=0$, so we should still get an isolated collection of points.
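For a concrete check of this condition (the example is my own): maximizing $f(x,y,z)=x+y+z$ on the unit sphere, the maximum occurs at $a=(1,1,1)/\sqrt{3}$, and there the gradient of $f$ really is a multiple of the gradient of the constraint, with $\lambda=\sqrt{3}/2$.

```python
import math

# f(x, y, z) = x + y + z on the unit sphere g(x, y, z) = x^2 + y^2 + z^2 - 1 = 0.
# The constrained maximum sits at a = (1, 1, 1)/sqrt(3).
s = 1.0 / math.sqrt(3.0)
a = (s, s, s)

grad_f = (1.0, 1.0, 1.0)            # gradient of f (constant)
grad_g = tuple(2.0 * c for c in a)  # gradient of the constraint at a

lam = grad_f[0] / grad_g[0]         # read lambda off one component...
residuals = [grad_f[k] - lam * grad_g[k] for k in range(3)]  # ...check the rest
```

The single multiplier works in all three equations at once, which is the content of $\nabla f(a)=\sum_i\lambda_i\nabla g^i(a)$ here with $m=1$.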

Now, we reached this conclusion by a rather handwavy argument about being able to find increasing directions and so on within the region $g(x)=0$. This line of reasoning could possibly be firmed up, but we’ll find our proof next time in a slightly different approach.

November 25, 2009 Posted by | Analysis, Calculus | 4 Comments

## Classifying Critical Points

So let’s say we’ve got a critical point of a multivariable function $f:X\rightarrow\mathbb{R}$. That is, a point $a\in X$ where the differential $df(a)$ vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We’ll assume here that $f$ is twice continuously differentiable in some region $S$ around $a$.

The analogue of the second derivative for multivariable functions is the second differential $d^2f(x)$. This function assigns to every point a bilinear function of two displacement vectors $u$ and $v$, and it measures the rate at which the directional derivative in the direction of $v$ is changing as we move in the direction of $u$. That is,

$\displaystyle d^2f(x;u,v)=\left[D_u\left(D_vf\right)\right](x)$

If we choose coordinates on $X$ given by an orthonormal basis $\{e_i\}_{i=1}^n$, we can write the second differential in terms of coordinates

$\displaystyle d^2f(x)=\frac{\partial^2f}{\partial x^i\partial x^j}dx^idx^j$

The matrix of these second partial derivatives is often called the “Hessian” of $f$ at the point $x$.

As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.

So let’s go back and assume we’re working with such a basis. This means that our second partial derivatives are particularly simple. We find that for $i\neq j$ we have

$\displaystyle\frac{\partial^2f}{\partial x^i\partial x^j}=0$

and for $i=j$, the second partial derivative is an eigenvalue

$\displaystyle\frac{\partial^2f}{{\partial x^i}^2}=\lambda_i$

which we can assume (without loss of generality) are nondecreasing. That is, $\lambda_1\leq\lambda_2\leq\dots\leq\lambda_n$.

Now, if all of these eigenvalues are positive at a critical point $a$, then the Hessian is positive-definite. That is, given any direction $v$ we have $d^2f(a;v,v)>0$. On the other hand, if all of the eigenvalues are negative, the Hessian is negative definite; given any direction $v$ we have $d^2f(a;v,v)<0$. In the former case, we’ll find that $f$ has a local minimum in a neighborhood of $a$, and in the latter case we’ll find that $f$ has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at $a$ we’ll call a “saddle” (sketch the graph of $f(x,y)=xy$ near $(0,0)$ to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can’t be a local extremum.

We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call “generic” critical points, since if we “wiggle” the function a bit (while maintaining a critical point at $a$) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test. Unfortunately, the analogy doesn’t extend, in that the sign of the Hessian determinant isn’t instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign (denoting a local maximum or a local minimum), while a negative determinant denotes eigenvalues of different signs (denoting a saddle). This much is included in multivariable calculus courses, although usually without a clear explanation of why it works.
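This classification is easy to mechanize; here’s a sketch (the function name and the “degenerate” label are my own conventions) that reads the behavior off the eigenvalues of the Hessian, using the three model surfaces $x^2+y^2$, $-x^2-y^2$, and $xy$.

```python
import numpy as np

def classify(hessian):
    """Classify a critical point by the eigenvalues of its symmetric Hessian."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eigenvalues > 0):
        return "local minimum"
    if np.all(eigenvalues < 0):
        return "local maximum"
    if np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
        return "saddle"
    return "degenerate"  # a zero eigenvalue: this test is silent

# Hessians at the origin of x^2 + y^2, -(x^2 + y^2), and xy:
H_min = [[2.0, 0.0], [0.0, 2.0]]
H_max = [[-2.0, 0.0], [0.0, -2.0]]
H_saddle = [[0.0, 1.0], [1.0, 0.0]]  # eigenvalues +1 and -1
```

Note that `eigvalsh` is exactly the spectral theorem at work: it diagonalizes the symmetric form for us.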

So, suppose we have a direction vector $v$ with $d^2f(a;v,v)>0$. Since $f$ is in $C^2(S)$, there will be some neighborhood $N$ of $a$ so that $d^2f(x;v,v)>0$ for all $x\in N$. In particular, there will be some range of $t$ so that $b=a+tv\in N$. For any such point we can use Taylor’s theorem with $m=2$ to tell us that

$\displaystyle f(b)-f(a)=\frac{1}{2}d^2f(\xi;tv,tv)=\frac{t^2}{2}d^2f(\xi;v,v)$

for some $\xi\in[a,b]\subseteq N$. And from this we see that $f(b)>f(a)$ for every $b\in N$ so that $b-a=tv$. A similar argument shows that if $d^2f(a;v,v)<0$ then $f(b)<f(a)$ for any $b$ near $a$ in the direction of $v$.

Now if the Hessian is positive-definite then every direction $v$ from $a$ gives us $d^2f(a;v,v)>0$, and so every point $b$ near $a$ satisfies $f(b)>f(a)$. If the Hessian is negative-definite, then every point $b$ near $a$ satisfies $f(b)<f(a)$. And if the Hessian has both positive and negative eigenvalues then within any neighborhood we can find some directions in which $f(b)>f(a)$ and some in which $f(b)<f(a)$.

November 24, 2009 Posted by | Analysis, Calculus | 4 Comments

## Local Extrema in Multiple Variables

Just like in one variable, we’re interested in local maxima and minima of a function $f:X\rightarrow\mathbb{R}$, where $X$ is an open region in $\mathbb{R}^n$. Again, we say that $f$ has a local minimum at a point $a\in X$ if there is some neighborhood $N$ of $a$ so that $f(a)\leq f(x)$ for all $x\in N$. A maximum is similarly defined, except that we require $f(a)\geq f(x)$ in the neighborhood. As I alluded to recently, we can bring Fermat’s theorem to bear to determine a necessary condition.

Specifically, if we have coordinates on $\mathbb{R}^n$ given by a basis $\{e_i\}_{i=1}^n$, we can regard $f$ as a function of the $n$ variables $x^i$. We can fix $n-1$ of these variables $x^i=a^i$ for $i\neq k$ and let $x^k$ vary in a neighborhood of $a^k$. If $f$ has a local extremum at $x=a$, then in particular it has a local extremum along this coordinate line at $x^k=a^k$. And so we can use Fermat’s theorem to draw conclusions about the derivative of this restricted function at $x^k=a^k$, which of course is the partial derivative $\frac{\partial f}{\partial x^k}\big\vert_{x=a}$.

So what can we say? For each variable $x^k$, the partial derivative $\frac{\partial f}{\partial x^k}$ either does not exist or is equal to zero at $x=a$. And because the differential subsumes the partial derivatives, if any of them fail to exist the differential must fail to exist as well. On the other hand, if they all exist they’re all zero, and so $df(a)=0$ as well. Incidentally, we can again make the connection to the usual coverage in a multivariable calculus course by remembering that the gradient $\nabla f(a)$ is the vector that corresponds to the linear functional of the differential $df(a)$. So at a local extremum we must have $\nabla f(a)=0$.

As was the case with Fermat’s theorem, this provides a necessary, but not a sufficient condition to have a local extremum. Anything that can go wrong in one dimension can be copied here. For instance, we could define $f(x,y)=x^2+y^3$. Then we find $df=2x\,dx+3y^2\,dy$, which is zero at $(0,0)$. But any neighborhood of this point will contain points $(0,t)$ and $(0,-t)$ for small enough $t>0$, and we see that $f(0,t)>f(0,0)>f(0,-t)$, so the origin cannot be a local extremum.
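Here is the two-line machine check of that last claim:

```python
def f(x, y):
    return x**2 + y**3

# df = 2x dx + 3y^2 dy vanishes at the origin, but f changes sign along
# the y-axis, so the origin is no extremum:
above = f(0.0, 1e-3)   # positive
below = f(0.0, -1e-3)  # negative
```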

But weirder things can happen. We might ask that $f$ have a local minimum at $a$ along any line, like we tried with directional derivatives. But even this can go wrong. If we define

$\displaystyle f(x,y)=(y-x^2)(y-3x^2)=y^2-4x^2y+3x^4$

we can calculate

$\displaystyle df=\left(-8xy+12x^3\right)dx+\left(2y-4x^2\right)dy$

which again is zero at $(0,0)$. Along any slanted line through the origin $y=kx$ we find

\displaystyle\begin{aligned}f(t,kt)&=3t^4-4kt^3+k^2t^2\\\frac{d}{dt}f(t,kt)&=12t^3-12kt^2+2k^2t\\\frac{d^2}{dt^2}f(t,kt)&=36t^2-24kt+2k^2\end{aligned}

and so the second derivative is always positive at the origin, except along the $x$-axis. For the vertical line, we find

\displaystyle\begin{aligned}f(0,t)&=t^2\\\frac{d}{dt}f(0,t)&=2t\\\frac{d^2}{dt^2}f(0,t)&=2\end{aligned}

so along all of these lines we have a local minimum at the origin by the second derivative test. And along the $x$-axis, we have $f(x,0)=3x^4$, which has the origin as a local minimum.

Unfortunately, it’s still not a local minimum in the plane, since any neighborhood of the origin must contain points of the form $(t,2t^2)$ for small enough $t$. For these points we find

$\displaystyle f(t,2t^2)=-t^4<0=f(0,0)$

and so $f$ cannot have a local minimum at the origin.
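We can verify both halves of this example numerically (the particular sampling choices are mine): along a spread of lines through the origin, including both axes, nearby points give positive values, yet along the parabola $y=2x^2$ the values are negative.

```python
def f(x, y):
    return (y - x**2) * (y - 3 * x**2)

# Along a spread of lines through the origin (including the axes), points
# near the origin give positive values...
directions = [(1.0, k / 4.0) for k in range(-8, 9)] + [(0.0, 1.0)]
lines_positive = all(
    f(t * cx, t * cy) > 0.0
    for (cx, cy) in directions
    for t in (1e-3, -1e-3)
)

# ...but along the parabola y = 2x^2 the values are negative:
parabola_values = [f(t, 2 * t**2) for t in (1e-3, 1e-2, 1e-1)]
```

Of course the finite sample of lines proves nothing by itself; the point is that it agrees with the calculation above, where the second derivative test succeeds on every line but the parabola slips between the two bounding parabolas $y=x^2$ and $y=3x^2$.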

What we’ll do is content ourselves with this analogue and extension of Fermat’s theorem as a necessary condition, and then develop tools that can distinguish the common behaviors near such critical points, analogous to the second derivative test.

November 23, 2009 Posted by | Analysis, Calculus | 4 Comments

## The Implicit Function Theorem II

Okay, today we’re going to prove the implicit function theorem. We’re going to think of our function $f$ as taking an $n$-dimensional vector $x$ and an $m$-dimensional vector $t$ and giving back an $n$-dimensional vector $f(x;t)$. In essence, what we want to do is see how this output vector must change as we change $t$, and then undo that by making a corresponding change in $x$. And to do that, we need to know how changing the output changes $x$, at least in a neighborhood of $f(x;t)=0$. That is, we’ve got to invert a function, and we’ll need to use the inverse function theorem.

But we’re not going to apply it directly as the above heuristic suggests. Instead, we’re going to “puff up” the function $f:S\rightarrow\mathbb{R}^n$ into a bigger function $F:S\rightarrow\mathbb{R}^{n+m}$ that will give us some room to maneuver. For $1\leq i\leq n$ we define

$\displaystyle F^i(x;t)=f^i(x;t)$

just copying over our original function. Then we continue by defining for $1\leq j\leq m$

$\displaystyle F^{n+j}(x;t)=t^j$

That is, the new $m$ component functions are just the coordinate functions $t^j$. We can easily calculate the Jacobian matrix

$\displaystyle dF=\begin{pmatrix}\frac{\partial f^i}{\partial x^j}&\frac{\partial f^i}{\partial t^j}\\{0}&I_m\end{pmatrix}$

where ${0}$ is the $m\times n$ zero matrix and $I_m$ is the $m\times m$ identity matrix. From here it’s straightforward to find the Jacobian determinant

$\displaystyle J_F(x;t)=\det\left(dF\right)=\det\left(\frac{\partial f^i}{\partial x^j}\right)$

which is exactly the determinant we assert to be nonzero at $(a;b)$. We also easily see that $F(a;b)=(0;b)$.
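As a sanity check on this construction, here is a minimal numerical sketch for the case $n=m=1$, using the illustrative choice $f(x;t)=x^2+t^2-1$ (the circle relation; nothing in the proof fixes this choice):

```python
# A toy instance of the "puffed up" map F, with n = m = 1 and the
# illustrative choice f(x; t) = x^2 + t^2 - 1 (not fixed by the proof).
def f(x, t):
    return x * x + t * t - 1.0

def F(x, t):
    # the first n components copy f; the last m components copy t
    return (f(x, t), t)

def jacobian_F(x, t, h=1e-6):
    # central-difference Jacobian matrix of F at (x; t)
    col_x = [(F(x + h, t)[i] - F(x - h, t)[i]) / (2 * h) for i in range(2)]
    col_t = [(F(x, t + h)[i] - F(x, t - h)[i]) / (2 * h) for i in range(2)]
    return [[col_x[0], col_t[0]],
            [col_x[1], col_t[1]]]

a, b = 0.6, 0.8                      # a point with f(a; b) = 0
J = jacobian_F(a, b)
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]

assert abs(det - 2 * a) < 1e-5       # det(dF) = df/dx = 2a, as computed above
assert abs(J[1][0]) < 1e-9           # bottom row is (0, 1)
assert abs(J[1][1] - 1.0) < 1e-9
assert F(a, b)[1] == b               # F(a; b) = (0; b): second slot is b
assert abs(F(a, b)[0]) < 1e-12       # ... and the first slot vanishes
```
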

And so the inverse function theorem tells us that there are neighborhoods $X$ of $(a;b)$ and $Y$ of $(0;b)$ so that $F$ is injective on $X$ and $Y=F(X)$, and that there is a continuously differentiable inverse function $G:Y\rightarrow X$ so that $G(F(x;t))=(x;t)$ for all $(x;t)\in X$. We want to study this inverse function to recover our implicit function from it.

First off, we can write $G(y;s)=(v(y;s);w(y;s))$ for two functions: $v$ which takes $n$-dimensional vector values, and $w$ which takes $m$-dimensional vector values. Our inverse relation tells us that

$\displaystyle\begin{aligned}v(F(x;t))&=x\\w(F(x;t))&=t\end{aligned}$

But since $F$ is injective from $X$ onto $Y$, we can write any point $(y;s)\in Y$ as $(y;s)=F(x;t)$, and in this case we must have $s=t$ by the definition of $s$. That is, we have

$\displaystyle\begin{aligned}v(y;t)&=v(F(x;t))=x\\w(y;t)&=w(F(x;t))=t\end{aligned}$

And so we see that $G(y;t)=(x;t)$, where $x$ is the $n$-dimensional vector so that $y=f(x;t)$. We thus have $f(v(y;t);t)=y$ for every $(y;t)\in Y$.

Now define $T\subseteq\mathbb{R}^m$ to be the collection of vectors $t$ so that $(0;t)\in Y$, and for each such $t\in T$ define $g(t)=v(0;t)$, so that $f(g(t);t)=0$. As a slice of the open set $Y$ in the product topology on $\mathbb{R}^n\times\mathbb{R}^m$, the set $T$ is open in $\mathbb{R}^m$. Further, $g$ is continuously differentiable on $T$ since $G$ is continuously differentiable on $Y$, and the components of $g$ are taken directly from those of $G$. Finally, $b$ is in $T$ since $(a;b)\in X$, and so $F(a;b)=(0;b)\in Y=F(X)$. This also shows that $g(b)=a$.

The only thing left is to show that $g$ is uniquely defined. But there can only be one such function, by the injectivity of $F$ on $X$. If there were another such function $h$ then we’d have $F(g(t);t)=(0;t)=F(h(t);t)$, and thus $(g(t);t)=(h(t);t)$, or $g(t)=h(t)$ for every $t\in T$.
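Tracing the proof through the simplest toy case may help. Here is a sketch with $n=m=1$ and $f(x;t)=x^2+t^2-1$ (our own illustrative choice), near the point $(a;b)=(0.6,0.8)$, where $F$ and its inverse $G$ can be written in closed form:

```python
import math

# Toy case n = m = 1, f(x; t) = x^2 + t^2 - 1 (an illustrative choice),
# near (a; b) = (0.6, 0.8), where F and G have closed forms.
def F(x, t):
    return (x * x + t * t - 1.0, t)

def G(y, s):
    # invert F: solve y = x^2 + s^2 - 1 for x, taking the branch x > 0
    return (math.sqrt(1.0 + y - s * s), s)

# G undoes F near (a; b); its components play the roles of v and w above
x, t = 0.6, 0.8
y, s = F(x, t)
v, w = G(y, s)
assert abs(v - x) < 1e-12 and w == t

# the implicit function: g(t) = v(0; t), here sqrt(1 - t^2)
def g(t):
    return G(0.0, t)[0]

for t in [0.7, 0.75, 0.8, 0.85]:
    assert abs(g(t) ** 2 + t * t - 1.0) < 1e-12    # f(g(t); t) = 0
```
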

November 20, 2009 Posted by | Analysis, Calculus | 1 Comment

## The Implicit Function Theorem I

Let’s consider the function $F(x,y)=x^2+y^2-1$. The collection of points $(x,y)$ so that $F(x,y)=0$ defines a curve in the plane: the unit circle. Unfortunately, this relation is not a function. Neither is $y$ defined as a function of $x$, nor is $x$ defined as a function of $y$ by this curve. However, if we consider a point $(a,b)$ on the curve (that is, with $F(a,b)=0$), then near this point we usually do have a graph of $x$ as a function of $y$ (except at the two points $(0,\pm1)$, where points with both signs of $x$ lie arbitrarily close by). That is, as we move $y$ near the value $b$ then we have to adjust $x$ to maintain the relation $F(x,y)=0$. There is some function $f(y)$ defined “implicitly” in a neighborhood of $b$ satisfying the relation $F(f(y),y)=0$.
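To make the circle example concrete, here is a small numerical check; the branch choice $f(y)=\sqrt{1-y^2}$ is ours, and is valid near any point of the circle with $a>0$:

```python
import math

# The circle relation F(x, y) = x^2 + y^2 - 1, with x implicitly a
# function of y near a point (a, b) on the curve. The branch
# f(y) = sqrt(1 - y^2) works near any point with a > 0.
def F(x, y):
    return x * x + y * y - 1.0

def f(y):
    return math.sqrt(1.0 - y * y)

a, b = 0.6, 0.8
assert abs(F(a, b)) < 1e-12       # (a, b) lies on the circle
assert abs(f(b) - a) < 1e-12      # the implicit function passes through it
for y in [0.7, 0.75, 0.8, 0.85]:
    assert abs(F(f(y), y)) < 1e-12    # F(f(y), y) = 0 near b

# At (0, 1) no such branch exists: the circle contains points with both
# signs of x arbitrarily close by, so x is not a function of y there.
```
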

We want to generalize this situation. Given a system of $n$ functions of $n+m$ variables

$\displaystyle f^i(x;t)=f^i(x^1,\dots,x^n;t^1,\dots,t^m)$

we consider the collection of points $(x;t)$ in $n+m$-dimensional space satisfying $f(x;t)=0$.

If this were a linear system, the rank-nullity theorem would tell us that our solution space is (generically) $m$ dimensional. Indeed, we could use Gauss-Jordan elimination to put the system into reduced row echelon form, and (usually) find the resulting matrix starting with an $n\times n$ identity matrix, like

$\displaystyle\begin{pmatrix}1&0&0&2&1\\{0}&1&0&3&0\\{0}&0&1&-1&1\end{pmatrix}$

This makes finding solutions to the system easy. We put our $n+m$ variables into a column vector and write

$\displaystyle\begin{pmatrix}1&0&0&2&1\\{0}&1&0&3&0\\{0}&0&1&-1&1\end{pmatrix}\begin{pmatrix}x^1\\x^2\\x^3\\t^1\\t^2\end{pmatrix}=\begin{pmatrix}x^1+2t^1+t^2\\x^2+3t^1\\x^3-t^1+t^2\end{pmatrix}=\begin{pmatrix}0\\{0}\\{0}\end{pmatrix}$

and from this we find

$\displaystyle\begin{aligned}x^1&=-2t^1-t^2\\x^2&=-3t^1\\x^3&=t^1-t^2\end{aligned}$

Thus we can use the $m$ variables $t^j$ as parameters on the space of solutions, and define each of the $x^i$ as a function of the $t^j$.
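If we want to double-check that parametrization, plugging it back into the matrix should give the zero vector for every choice of the parameters:

```python
# The reduced system above, checked numerically: substituting the claimed
# parametrization into the matrix should give zero for all t^1, t^2.
A = [[1, 0, 0, 2, 1],
     [0, 1, 0, 3, 0],
     [0, 0, 1, -1, 1]]

def x_of_t(t1, t2):
    # x^1 = -2t^1 - t^2,  x^2 = -3t^1,  x^3 = t^1 - t^2
    return [-2 * t1 - t2, -3 * t1, t1 - t2]

for t1, t2 in [(1, 0), (0, 1), (2, -3), (5, 7)]:
    v = x_of_t(t1, t2) + [t1, t2]
    residual = [sum(A[i][j] * v[j] for j in range(5)) for i in range(3)]
    assert residual == [0, 0, 0]
```
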

But in general we don’t have a linear system. Still, we want to know some circumstances under which we can do something similar and write each of the $x^i$ as a function of the other variables $t^j$, at least near some known point $(a;b)$.

The key observation is that we can perform the Gauss-Jordan elimination above and get a matrix with rank $n$ if and only if the leading $n\times n$ matrix is invertible. And this is generalized to asking that some Jacobian determinant of our system of functions is nonzero.

Specifically, let’s assume that all of the $f^i$ are continuously differentiable on some region $S$ in $n+m$-dimensional space, and that $(a;b)$ is some point in $S$ where $f(a;b)=0$, and at which the determinant

$\displaystyle\det\left(\frac{\partial f^i}{\partial x^j}\bigg\vert_{(a;b)}\right)\neq0$

where both indices $i$ and $j$ run from $1$ to $n$ to make a square matrix. Then I assert that there is some $m$-dimensional neighborhood $T$ of $b$ and a uniquely defined, continuously differentiable, vector-valued function $g:T\rightarrow\mathbb{R}^n$ so that $g(b)=a$ and $f(g(t);t)=0$.

That is, near $(a;b)$ we can use the variables $t^j$ as parameters on the space of solutions to our system of equations. Near this point, the solution set looks like the graph of the function $x=g(t)$, which is implicitly defined by the need to stay on the solution set as we vary $t$. This is the implicit function theorem, and we will prove it next time.

November 19, 2009 Posted by | Analysis, Calculus | 4 Comments

## The Inverse Function Theorem

At last we come to the theorem that I promised. Let $f:S\rightarrow\mathbb{R}^n$ be continuously differentiable on an open region $S\subseteq\mathbb{R}^n$, and $T=f(S)$. If the Jacobian determinant $J_f(a)\neq0$ at some point $a\in S$, then there is a uniquely determined function $g$ and two open sets $X\subseteq S$ and $Y\subseteq T$ so that

• $a\in X$, and $f(a)\in Y$
• $Y=f(X)$
• $f$ is injective on $X$
• $g$ is defined on $Y$, $g(Y)=X$, and $g(f(x))=x$ for all $x\in X$
• $g$ is continuously differentiable on $Y$

The Jacobian determinant $J_f(x)$ is continuous as a function of $x$, so there is some neighborhood $N_1$ of $a$ on which the Jacobian is nonzero. Our second lemma tells us that there is a smaller neighborhood $N\subseteq N_1$ on which $f$ is injective. We pick some closed ball $\overline{K}\subseteq N$ centered at $a$, with interior $K$, and use our first lemma to find that $f(K)$ must contain an open neighborhood $Y$ of $f(a)$. Then we define $X=f^{-1}(Y)\cap K$, which is open since both $K$ and $f^{-1}(Y)$ are (the latter by the continuity of $f$). Since $f$ is injective on the compact set $\overline{K}\subseteq N$, it has a uniquely-defined continuous inverse $g$ on $Y\subseteq f(\overline{K})$. This establishes the first four conditions of the theorem.

Now the hard part is showing that $g$ is continuously differentiable on $Y$. To this end, like we did in our second lemma, we define the function

$\displaystyle h(z_1,\dots,z_n)=\det\left(\frac{\partial f^i}{\partial x^j}\bigg\vert_{x=z_i}\right)$

along with a neighborhood $N_2$ of $a$ so that as long as all the $z_i$ are within $N_2$ this function is nonzero. Without loss of generality we can go back and choose our earlier neighborhood $N$ so that $N\subseteq N_2$, and thus that $\overline{K}\subseteq N_2$.

To show that the partial derivative $\frac{\partial g^i}{\partial y^j}$ exists at a point $y\in Y$, we consider the difference quotient

$\displaystyle\frac{g^i(y+\lambda e_j)-g^i(y)}{\lambda}$

with $y+\lambda e_j$ also in $Y$ for sufficiently small $\lvert\lambda\rvert$. Then writing $x_1=g(y)$ and $x_2=g(y+\lambda e_j)$ we find $f(x_2)-f(x_1)=\lambda e_j$. The mean value theorem then tells us that

$\displaystyle\begin{aligned}\delta_j^k&=\frac{f^k(x_2)-f^k(x_1)}{\lambda}\\&=df^k(\xi_k)\left(\frac{1}{\lambda}(x_2-x_1)\right)\\&=\frac{\partial f^k}{\partial x^i}\bigg\vert_{x=\xi_k}\frac{x_2^i-x_1^i}{\lambda}\\&=\frac{\partial f^k}{\partial x^i}\bigg\vert_{x=\xi_k}\frac{g^i(y+\lambda e_j)-g^i(y)}{\lambda}\end{aligned}$

for some $\xi_k\in[x_1,x_2]\subseteq K$ (no summation on $k$). As usual, $\delta_j^k$ is the Kronecker delta.

This is a linear system of equations, which has a unique solution since the determinant of its matrix is $h(\xi_1,\dots,\xi_n)\neq0$. We use Cramer’s rule to solve it, and get an expression for our difference quotient as a quotient of two determinants. This is why we want the form of the solution given by Cramer’s rule, rather than one found by a more computationally-efficient method like Gaussian elimination.
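The $2\times2$ case makes the structure of Cramer’s rule easy to see: each unknown is a quotient of two determinants, the numerator replacing one column of the matrix by the right-hand side. A minimal sketch:

```python
# Cramer's rule for a 2x2 system: each unknown is a ratio of two
# determinants, which is the form of the solution used in the proof.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(A, b):
    d = det2(A)
    # replace each column of A in turn by the right-hand side b
    x0 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d
    x1 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d
    return (x0, x1)

A = [[2.0, 1.0], [5.0, 3.0]]    # det = 1, so the system has a unique solution
b = [4.0, 11.0]
x = cramer2(A, b)
assert abs(x[0] - 1.0) < 1e-12 and abs(x[1] - 2.0) < 1e-12
```
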

As $\lambda$ approaches zero, continuity of $g$ tells us that $x_2$ approaches $x_1$, and thus so do all of the $\xi_k$. Therefore the determinant in the denominator of Cramer’s rule approaches $h(x_1,\dots,x_1)=J_f(x_1)\neq0$ in the limit, and thus the limits of the solutions given by Cramer’s rule actually do exist.

This establishes that the partial derivative $\frac{\partial g^i}{\partial y^j}$ exists at each $y\in Y$. Further, since we found the limit of the difference quotient by Cramer’s rule, we have an expression given by the quotient of two determinants, each of which only involves the partial derivatives of $f$, which are themselves all continuous. Therefore the partial derivatives of $g$ not only exist but are in fact continuous.
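We can watch this conclusion in action numerically. The map below is our own toy choice (complex squaring), not anything from the theorem; its Jacobian matrix at $(1,1)$ is $\begin{pmatrix}2&-2\\2&2\end{pmatrix}$ with determinant $8\neq0$, and the Jacobian of the local inverse should be the inverse matrix:

```python
import math

# Toy choice (not from the post): complex squaring
# f(x1, x2) = (x1^2 - x2^2, 2 x1 x2), inverted locally near (1, 1).
def f(x1, x2):
    return (x1 * x1 - x2 * x2, 2 * x1 * x2)

def g(y1, y2):
    # local inverse: the principal square root of y1 + i*y2, in polar form
    r = math.hypot(y1, y2)
    th = math.atan2(y2, y1)
    return (math.sqrt(r) * math.cos(th / 2), math.sqrt(r) * math.sin(th / 2))

def jac(fun, p, h=1e-6):
    # central-difference Jacobian matrix of fun at the point p
    J = []
    for i in range(2):
        row = []
        for j in range(2):
            above = list(p); above[j] += h
            below = list(p); below[j] -= h
            row.append((fun(*above)[i] - fun(*below)[i]) / (2 * h))
        J.append(row)
    return J

x = (1.0, 1.0)
y = f(*x)
A = jac(f, x)                        # df at x
B = jac(g, y)                        # dg at f(x)

# dg(f(x)) is the inverse matrix of df(x): their product is the identity
prod = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-4
```
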

November 18, 2009 Posted by | Analysis, Calculus | 4 Comments