## Oscillation

Oscillation in a function is sort of a local and non-directional version of variation. If $f$ is a bounded function on some region $S\subseteq\mathbb{R}^n$, and if $T$ is a nonempty subset of $S$, then we define the oscillation of $f$ on $T$ by the formula

$$\omega_f(T)=\sup\limits_{x,y\in T}\left(f(x)-f(y)\right)$$

measuring the greatest difference in values of $f$ on $T$.
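As a quick numerical sketch (my own, not part of the definition), we can estimate the oscillation on a set by sampling it and taking the difference between the largest and smallest sampled values, which equals the supremum of $f(x)-f(y)$ over sampled pairs:

```python
# A numerical sketch (not from the post): estimate omega_f(T), the
# oscillation of f on T, by sampling T and computing sup f - inf f,
# which equals the supremum of f(x) - f(y) over pairs of samples.

def oscillation(f, points):
    values = [f(x) for x in points]
    return max(values) - min(values)

# f(x) = x^2 on a fine sample of [0, 2]: the oscillation is 4
samples = [2 * i / 1000 for i in range(1001)]
print(oscillation(lambda x: x * x, samples))  # 4.0
```

The sampling is of course only a stand-in for the true supremum over all of $T$.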

We also want a version that’s localized to a single point $x_0\in S$. To do this, we first note that the collection of all subsets of $S$ which contain $x_0$ forms a poset as usual by inclusion. But we want to reverse this order and say that $T_1\preceq T_2$ if and only if $T_1\supseteq T_2$.

Now for any two such subsets $T_1$ and $T_2$, their intersection $T_1\cap T_2$ is another such subset containing $x_0$. And since it’s contained in both $T_1$ and $T_2$, it’s above both of them in our partial order, which makes this poset a directed set, and the oscillation of $f$ is a net on it.

In fact, it’s easy to see that if $T_1\supseteq T_2$ then $\omega_f(T_1)\geq\omega_f(T_2)$, so this net is monotonically decreasing as the subset gets smaller and smaller. Further, we can see that $\omega_f(T)\geq0$, since for $x\in T$ we can always consider the difference $f(x)-f(x)=0$, and the supremum must be at least this big.

Anyhow, now we know that the net has a limit, and we define the oscillation of $f$ at $x_0$:

$$\omega_f(x_0)=\lim\limits_{T}\omega_f(T)$$

where $T$ ranges over the subsets of $S$ containing $x_0$, and we take the limit as $T$ gets smaller and smaller.

In fact, this is slightly overdoing it. Our domain $S$ is a topological subspace of $\mathbb{R}^n$, and is thus a metric space. If we want, we can just work with metric balls and define

$$\omega_f(x_0)=\lim\limits_{r\to0^+}\omega_f\left(S\cap B(x_0;r)\right)$$

where $B(x_0;r)$ is the ball of radius $r$ around $x_0$. These definitions are exactly equivalent in metric spaces, but the net definition works in more general topological spaces, and it’s extremely useful in its own right later, so it’s worth thinking about now.

Oscillation provides a nice way to restate our condition for continuity, and it works either using the metric space definition or the neighborhood definition of continuity. I’ll work it out in the latter case for generality, but it’s worth writing out the parallel proof for the $\epsilon$-$\delta$ definition.

Our assertion is that $f$ is continuous at a point $x_0$ if and only if $\omega_f(x_0)=0$. If $f$ is continuous, then for every $\epsilon>0$ there is some neighborhood $N$ of $x_0$ so that $|f(x)-f(x_0)|<\epsilon$ for all $x\in N$. Then we can check that

$$f(x)-f(y)\leq|f(x)-f(x_0)|+|f(x_0)-f(y)|<2\epsilon$$

for all $x$ and $y$ in $N$, and so $\omega_f(N)\leq2\epsilon$. Further, any smaller neighborhood of $x_0$ will also satisfy this inequality, so the net is eventually within $2\epsilon$ of $0$. Since this holds for any $\epsilon>0$, we find that the net has limit $0$.

Conversely, let’s assume that the oscillation of $f$ at $x_0$ is zero. That is, for any $\epsilon>0$ we have some neighborhood $N$ of $x_0$ so that $\omega_f(N)<\epsilon$, and the same will automatically hold for smaller neighborhoods. This tells us that $f(x)-f(x_0)<\epsilon$ for all $x\in N$, and also $f(x_0)-f(x)<\epsilon$. Together, these tell us that $|f(x)-f(x_0)|<\epsilon$, and so $f$ is continuous at $x_0$.
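Here is a hedged numerical sketch of the metric-ball version (my example, with an arbitrary sampling resolution standing in for the true supremum): a step function has oscillation $1$ at its jump no matter how small the ball, while at a point of continuity the oscillation falls to $0$ as the radius shrinks.

```python
# Sample omega_f over the ball of radius r around x0 (a sketch; the
# finite sampling approximates the supremum over the whole ball).

def osc_at(f, x0, r, n=2001):
    xs = [x0 - r + 2 * r * i / (n - 1) for i in range(n)]
    vals = [f(x) for x in xs]
    return max(vals) - min(vals)

step = lambda x: 1.0 if x >= 0 else 0.0

print(osc_at(step, 0.0, 0.001))  # 1.0: discontinuous at 0
print(osc_at(step, 0.5, 0.001))  # 0.0: continuous at 0.5
```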

## Upper and Lower Integrals and Riemann’s Condition

Yesterday we defined the Riemann integral of a multivariable function $f$ defined on an interval $[a,b]\subseteq\mathbb{R}^n$. We want to move towards some understanding of when a given function $f$ is integrable on a given interval $[a,b]$.

First off, we remember how we set up Darboux sums. These were given by prescribing specific methods of tagging a given partition $P$. In one, we always picked the point in the subinterval where $f$ attained its maximum within that subinterval, and in the other we always picked the point where $f$ attained its minimum. We even extended these to the Riemann-Stieltjes case and built up upper and lower integrals. And we can do the same thing again.

Given a partition $P$ of $[a,b]$ and a function $f$ defined on $[a,b]$, we define the upper Riemann sum by

$$U_P(f)=\sum\limits_{I\in P}\left(\sup\limits_{x\in I}f(x)\right)\mathrm{vol}(I)$$

In each subinterval we pick a sample point which gives the largest possible sample function value in that subinterval. We similarly define a lower Riemann sum by

$$L_P(f)=\sum\limits_{I\in P}\left(\inf\limits_{x\in I}f(x)\right)\mathrm{vol}(I)$$

As before, any Riemann sum must fall between these upper and lower sums, since the value of the function on each subinterval is somewhere between its maximum and minimum.

Just like when we did this for single-variable Riemann-Stieltjes integrals, we find that these nets are monotonic. That is, if $P'$ is a refinement of $P$, then $U_{P'}(f)\leq U_P(f)$ and $L_{P'}(f)\geq L_P(f)$. As we refine the partition, the upper sum can only get smaller and smaller, while the lower sum can only get larger and larger. And so we define

$$\overline{I}_{[a,b]}(f)=\inf\limits_PU_P(f)\qquad\qquad\underline{I}_{[a,b]}(f)=\sup\limits_PL_P(f)$$

The upper integral is the infimum of the upper sums, while the lower integral is the supremum of the lower sums.

Again, as before we find that the upper integral is convex over its integrand, while the lower integral is concave:

$$\overline{I}_{[a,b]}(f+g)\leq\overline{I}_{[a,b]}(f)+\overline{I}_{[a,b]}(g)\qquad\qquad\underline{I}_{[a,b]}(f+g)\geq\underline{I}_{[a,b]}(f)+\underline{I}_{[a,b]}(g)$$

and if we break up an interval into a collection of nonoverlapping subintervals, the upper and lower integrals over the large interval are the sums of the upper and lower integrals over each of the subintervals, respectively.

And, finally, we have Riemann’s condition. The function $f$ satisfies Riemann’s condition on $[a,b]$ if we can make upper and lower sums arbitrarily close. That is, if for every $\epsilon>0$ there is some partition $P_\epsilon$ so that $U_{P_\epsilon}(f)-L_{P_\epsilon}(f)<\epsilon$. In this case, the upper and lower integrals will coincide, and we can show that $f$ is actually integrable over $[a,b]$. The proof is almost exactly the same one we gave before, and so I’ll just refer you back there.
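To make this concrete, here is a small sketch of my own: upper and lower Darboux sums for $f(x,y)=xy$ on $[0,1]\times[0,1]$ over a uniform $m\times m$ grid. Since $f$ increases in each variable, the supremum and infimum on each cell sit at opposite corners, so both sums can be computed exactly, and their gap shrinks under refinement just as Riemann’s condition requires.

```python
# Upper and lower Darboux sums for f(x, y) = x * y on [0,1]x[0,1]
# with a uniform m-by-m partition (a sketch; since f is increasing
# in each variable, sup/inf sit at opposite corners of every cell).

def darboux_sums(m):
    h = 1.0 / m
    upper = lower = 0.0
    for i in range(m):
        for j in range(m):
            upper += ((i + 1) * h) * ((j + 1) * h) * h * h  # sup corner
            lower += (i * h) * (j * h) * h * h              # inf corner
    return upper, lower

for m in (1, 10, 100):
    u, lo = darboux_sums(m)
    print(m, u, lo, u - lo)   # the gap u - lo shrinks toward 0
```

Both sums squeeze toward the true integral $1/4$ as $m$ grows.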

## Higher-Dimensional Riemann Integrals

Our coverage of multiple integrals will actually parallel our earlier coverage of Riemann integrals pretty closely. Only now we have to change our notion of “interval” to a higher-dimensional version.

For the moment, the *only* shapes we’ll integrate over will be closed rectangular $n$-dimensional parallelepipeds with sides parallel to the coordinate axes in $\mathbb{R}^n$. In one dimension, these are just closed intervals, but there aren’t many other obvious shapes to use in that case. In two dimensions, we are (for the moment) throwing out circles, ellipses, triangles, and anything else that’s not a rectangle; and even rectangles that are tilted with respect to the coordinate axes! Don’t worry, we’ll get them back later.

Anyway, such a shape in $n$-dimensional space is the product of $n$ closed intervals:

$$[a,b]=[a^1,b^1]\times\dots\times[a^n,b^n]$$

In this context, we’ll write this as $[a,b]$ for $a=(a^1,\dots,a^n)$ and $b=(b^1,\dots,b^n)$. This looks identical to, but is not to be confused with, the closed line segment $[a,b]$ from $a$ to $b$ we used in the mean value theorem. I’ll try to keep them separate by always referring to this rectangular parallelepiped as an “interval” and never using that term for a line segment. Of course, in one dimension the two are exactly the same.

Now we define a partition of an interval. We’ll do this just by partitioning each side of the interval. That is, for each $i$ we pick points

$$a^i=x^i_0<x^i_1<\dots<x^i_{m_i}=b^i$$

where we cut the $i$th side into $m_i$ pieces, and each of the $m_i$ could be different. If we index the subintervals of the partition by $j_1$ through $j_n$, we can write

$$I_{j_1\dots j_n}=[x^1_{j_1-1},x^1_{j_1}]\times\dots\times[x^n_{j_n-1},x^n_{j_n}]$$

picking out the $j_i$th slice of the $i$th side. This cuts the whole interval up into a bunch of smaller pieces, which are themselves rectangular parallelepipeds. Again, we have the notion of a tagged partition, which picks out a sample point for each subinterval. And again we say that one tagged partition is a “refinement” of another one if every partition point of the coarser partition is also one in the finer partition on the same side, and if every sample point in the coarser partition is one in the finer partition as well.

There’s a lot of numbers here and a lot of notation, but relax: most of it is going to go away very quickly. Go back and look at how partitions, tagged partitions, and refinements were defined in one variable, and just think of partitioning (and refining) the one-dimensional interval that makes up each side of the $n$-dimensional interval. Of course, like I said before, the numbers of parts on the various sides might have nothing to do with each other. Further, there’s no reason for the tags to have anything to do with each other. One might think of tagging the partition on each side of the interval, and then letting the sample points in the subintervals have those tags as coordinates. This is a perfectly valid way to come up with a collection of sample points, but the sample points don’t have to arise in this manner at all, so long as there’s exactly one within each subinterval.

So, just like in the one-dimensional case, the collection of all tagged partitions of an interval forms a directed set, where we say that $(P,t)\preceq(P',t')$ if $(P',t')$ is a refinement of $(P,t)$. And again we define nets on this directed set; given a function $f$ defined on the interval $[a,b]$ and a tagged partition $(P,t)$, we define the Riemann sum

$$f_{P,t}=\sum\limits_{j_1,\dots,j_n}f\left(t_{j_1\dots j_n}\right)\mathrm{vol}\left(I_{j_1\dots j_n}\right)$$

by sampling the function at the specified point $t_{j_1\dots j_n}$ in the subinterval $I_{j_1\dots j_n}$, multiplying by the $n$-dimensional volume of the subinterval (which is just the product of its side-lengths), and summing this over all the subintervals in $P$.

And again, as before, if this net converges to a limiting value $I$, we say that $f$ is Riemann integrable on the interval $[a,b]$ and we write

$$I=\int\limits_{[a,b]}f=\int\limits_{[a,b]}f(x)\,dx=\int\limits_{[a,b]}f(x^1,\dots,x^n)\,dx^1\cdots dx^n$$

in three different notations, depending on what we want to emphasize. The first emphasizes the function as an object in and of itself; the second, the function’s dependence on the vector variable $x$; and the third, the function’s dependence on each individual real variable $x^i$. In two and three dimensions, we often write the “double integral” and “triple integral” as

$$\iint f(x,y)\,dx\,dy\qquad\qquad\iiint f(x,y,z)\,dx\,dy\,dz$$

as visual reminders that these are multiple integrals. This gets unwieldy very quickly, and so we usually just write one integral sign for each integration.
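As a sketch of how these Riemann sums look in code (my own example, with midpoint tags chosen purely for convenience), here is the $n=2$ sum for $f(x,y)=x+y$ over the interval $[0,1]\times[0,2]$, whose integral is $3$:

```python
# The n = 2 Riemann sum: cut [0,1]x[0,2] into m1*m2 cells, tag each
# cell with its midpoint, and add up f(tag) * vol(cell).

def riemann_sum(f, m1, m2):
    h1, h2 = 1.0 / m1, 2.0 / m2          # cell side lengths
    total = 0.0
    for i in range(m1):
        for j in range(m2):
            tx = (i + 0.5) * h1           # midpoint tag
            ty = (j + 0.5) * h2
            total += f(tx, ty) * h1 * h2  # sample times cell volume
    return total

print(riemann_sum(lambda x, y: x + y, 50, 80))  # 3.0 (up to rounding)
```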

## Extrema with Constraints II

As we said last time, we have an idea for a necessary condition for finding local extrema subject to constraints of a certain form. To be explicit, we assume that $f$ is continuously differentiable on an open region $S\subseteq\mathbb{R}^n$, and we also assume that $g=(g^1,\dots,g^m)$ is a continuously differentiable vector-valued function on the same region (and that $m<n$). We use $g$ to define the region $A\subseteq S$ consisting of those points $x$ so that $g(x)=0$. Now suppose $a\in A$ is a point with a neighborhood $N$ so that for all $x\in N\cap A$ we have $f(x)\leq f(a)$ (or $f(x)\geq f(a)$ for all such $x$). And, finally, we assume that the determinant

$$\det\left(\frac{\partial g^i}{\partial x^j}\right)\qquad i,j=1,\dots,m$$

is nonzero at $a$. Then we have reason to believe that there will exist real numbers $\lambda_1,\dots,\lambda_m$, one for each component of the constraint function, so that the $n$ equations

$$\frac{\partial f}{\partial x^j}(a)+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}(a)=0\qquad j=1,\dots,n$$

are satisfied at $a$.

Well, first off we can solve the first $m$ equations and determine the $\lambda_i$ right off. Rewrite them as

$$\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}(a)=-\frac{\partial f}{\partial x^j}(a)\qquad j=1,\dots,m$$

a system of $m$ equations in the $m$ unknowns $\lambda_i$. Since the matrix $\left(\frac{\partial g^i}{\partial x^j}\right)$ has a nonzero determinant (by assumption) we can solve this system uniquely to determine the $\lambda_i$. What’s left is to verify that this choice of the $\lambda_i$ also satisfies the remaining $n-m$ equations.

To take care of this, we’ll write $k=n-m$, so we can write the point $x$ as $(x';t)$ with $x'\in\mathbb{R}^m$ and $t\in\mathbb{R}^k$, and particularly $a=(a';b)$. Now we can invoke the implicit function theorem! We find a $k$-dimensional neighborhood $T$ of $b$ and a unique continuously differentiable function $h:T\to\mathbb{R}^m$ so that $h(b)=a'$ and $g(h(t);t)=0$ for all $t\in T$. Without loss of generality, we can choose $T$ so that $(h(t);t)\in N$, where $N$ is the neighborhood from the assumptions above.

This is the parameterization we discussed last time, and we can now substitute these functions into the function $f$. That is, we can define

$$F(t)=f\left(h(t);t\right)\qquad\qquad G^i(t)=g^i\left(h(t);t\right)$$

Or if we define $H(t)=(h(t);t)$ we can say this more succinctly as $F=f\circ H$ and $G^i=g^i\circ H$.

Anyhow, now all of the $G^i$ are identically zero on $T$ as a consequence of the implicit function theorem, and so each partial derivative $\frac{\partial G^i}{\partial t^r}$ is identically zero as well. But since the $G^i$ are composite functions we can also use the chain rule to evaluate these partial derivatives. We find

$$0=\frac{\partial G^i}{\partial t^r}=\sum\limits_{j=1}^m\frac{\partial g^i}{\partial x^j}\frac{\partial h^j}{\partial t^r}+\frac{\partial g^i}{\partial t^r}$$
Similarly, since $F$ has a local minimum (as a function of the $t^r$) at $b$ we must find its partial derivatives zero at that point. That is

$$0=\frac{\partial F}{\partial t^r}=\sum\limits_{j=1}^m\frac{\partial f}{\partial x^j}\frac{\partial h^j}{\partial t^r}+\frac{\partial f}{\partial t^r}$$
Now let’s take the previous equation involving $g^i$, evaluate it at $b$, multiply it by $\lambda_i$, sum over $i$, and add it to this latest equation. We find

$$0=\sum\limits_{j=1}^m\left[\frac{\partial f}{\partial x^j}+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}\right]\frac{\partial h^j}{\partial t^r}+\left[\frac{\partial f}{\partial t^r}+\sum\limits_{i=1}^m\lambda_i\frac{\partial g^i}{\partial t^r}\right]$$

Now, the first expression in brackets is zero because that’s actually how we defined the $\lambda_i$ way back at the start of the proof! And then what remains is exactly the system of remaining $n-m$ equations we need to complete the proof.

## Extrema with Constraints I

We can consider the problem of maximizing or minimizing a function, as we have been, but insisting that our solution satisfy some constraint.

For instance, we might have a function $f(x,y,z)$ to maximize, but we’re only concerned with unit-length vectors on the sphere $x^2+y^2+z^2=1$. More generally, we’ll be concerned with constraints imposed by setting a function $g$ equal to zero. In the example, we might set $g(x,y,z)=x^2+y^2+z^2-1$. If we want to impose more conditions, we can make $g$ a vector-valued function with as many components as constraint functions we want to set equal to zero.

Now, we might be able to parameterize the collection of points satisfying $g(x)=0$. In the example, we could use the usual parameterization of the sphere by latitude and longitude, writing

$$\begin{aligned}x&=\sin\theta\cos\phi\\y&=\sin\theta\sin\phi\\z&=\cos\theta\end{aligned}$$

where I’ve used the physicists’ convention on the variables instead of the common one in multivariable calculus classes. Then we could plug these expressions for $x$, $y$, and $z$ into our function $f$, and get a composite function of the variables $\theta$ and $\phi$, which we can then attack with the tools from the last couple days, being careful about when we can and can’t trust Cauchy’s invariant rule, since the second differential can transform oddly.

Besides even that care that must be taken, it may not even be possible to parameterize the surface, or it may be extremely difficult. At least we do know that such a parameterization will often exist. Indeed, the implicit function theorem tells us that if we have $m$ continuously differentiable constraint functions $g^i$ whose zeroes describe a collection of points in an $n$-dimensional space (with $m<n$), and the determinant

$$\det\left(\frac{\partial g^i}{\partial x^j}\right)\qquad i,j=1,\dots,m$$

is nonzero at some point $a$ satisfying $g(a)=0$, then we can “solve” these equations for the first $m$ variables as functions of the last $n-m$. This gives us exactly such a parameterization, and in principle we could use it. But the calculations get amazingly painful.

Instead, we want to think about this problem another way. We want to consider a point $a$ satisfying $g(a)=0$ which has a neighborhood $N$ so that for all $x\in N$ satisfying $g(x)=0$ we have $f(x)\leq f(a)$. This does *not* say that there are no nearby points to $a$ where $f$ takes on larger values, but it does say that to reach any such point we must leave the region described by $g(x)=0$.

Now, let’s think about this sort of heuristically. As we look in various directions $v$ from $a$, some of them are tangent to the region described by $g(x)=0$. These are the directions satisfying $dg(a)v=0$ — where to first order the value of $g$ is not changing in the direction of $v$. I say that in none of these directions can $f$ change (again, to first order) either. For if it did, either $f$ would increase in that direction or not. If it did, then we could find a path in the region where $g(x)=0$ along which $f$ was increasing, contradicting our assertion that we’d have to leave the region for this to happen. But if $f$ decreased to first order, then it would increase to first order in the opposite direction, and we’d have the same problem. That is, we must have $df(a)v=0$ whenever $dg(a)v=0$. And so we find that

$$\mathrm{Ker}\left(dg(a)\right)\subseteq\mathrm{Ker}\left(df(a)\right)$$
The kernel of $df(a)$ consists of all vectors orthogonal to the gradient vector $\nabla f(a)$, and the line this gradient spans is the orthogonal complement to the kernel. Similarly, the kernel of $dg(a)$ consists of all vectors orthogonal to *each* of the gradient vectors $\nabla g^i(a)$, and is thus the orthogonal complement to the entire subspace they span. The kernel of $dg(a)$ is contained in the kernel of $df(a)$, and orthogonal complements are order-reversing, which means that $\nabla f(a)$ must lie within the span of the $\nabla g^i(a)$. That is, there must be real numbers $\lambda_1,\dots,\lambda_m$ so that

$$\nabla f(a)=\sum\limits_{i=1}^m\lambda_i\nabla g^i(a)$$

or, passing back to differentials

$$df(a)=\sum\limits_{i=1}^m\lambda_i\,dg^i(a)$$
So in the presence of constraints we replace the condition $df(a)=0$ by this one. We call the $\lambda_i$ the “Lagrange multipliers”, and for every one of these $m$ new variables we add to the system of equations, we also add the constraint equation $g^i(x)=0$, so we should still get an isolated collection of points.
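Here is a small worked check of this condition (my own example, not the post’s): maximize $f(x,y,z)=x+2y+3z$ on the unit sphere $g(x,y,z)=x^2+y^2+z^2-1=0$. The condition $\nabla f=\lambda\nabla g$ forces $(x,y,z)$ to be proportional to $(1,2,3)$, and the constraint then fixes the scale.

```python
# Lagrange's condition for f = x + 2y + 3z on the unit sphere:
# grad f = (1, 2, 3) and grad g = 2(x, y, z), so the candidate
# extrema are (x, y, z) = +-(1, 2, 3)/sqrt(14).

import math

s = math.sqrt(14.0)
x, y, z = 1 / s, 2 / s, 3 / s     # candidate maximizer
lam = s / 2                       # from the first equation 1 = 2*lam*x

# all three gradient equations and the constraint hold
assert abs(1 - 2 * lam * x) < 1e-12
assert abs(2 - 2 * lam * y) < 1e-12
assert abs(3 - 2 * lam * z) < 1e-12
assert abs(x * x + y * y + z * z - 1) < 1e-12

print(x + 2 * y + 3 * z)          # sqrt(14), the maximum value
```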

Now, we reached this conclusion by a rather handwavy argument about being able to find increasing directions and so on within the region $g(x)=0$. This line of reasoning could possibly be firmed up, but we’ll find our proof next time in a slightly different approach.

## Classifying Critical Points

So let’s say we’ve got a critical point $x_0$ of a multivariable function $f$. That is, a point where the differential $df(x_0)$ vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We’ll assume here that $f$ is twice continuously differentiable in some region $S$ around $x_0$.

The analogue of the second derivative for multivariable functions is the second differential $d^2f$. This function assigns to every point a bilinear function of two displacement vectors $u$ and $v$, and it measures the rate at which the directional derivative in the direction of $v$ is changing as we move in the direction of $u$. That is,

$$d^2f(x_0)(u,v)=\left[D_u\left(D_vf\right)\right](x_0)$$

If we choose coordinates on $\mathbb{R}^n$ given by an orthonormal basis $\{e_i\}$, we can write the second differential in terms of coordinates

$$d^2f(x_0)(u,v)=\sum\limits_{i=1}^n\sum\limits_{j=1}^n\frac{\partial^2f}{\partial x^i\partial x^j}\bigg\vert_{x_0}u^iv^j$$

This matrix of second partial derivatives $\left(\frac{\partial^2f}{\partial x^i\partial x^j}\right)$ is often called the “Hessian” of $f$ at the point $x_0$.

As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.

So let’s go back and assume we’re working with such a basis. This means that our second partial derivatives are particularly simple. We find that for $i\neq j$ we have

$$\frac{\partial^2f}{\partial x^i\partial x^j}\bigg\vert_{x_0}=0$$

and for $i=j$, the second partial derivative is an eigenvalue

$$\frac{\partial^2f}{\left(\partial x^i\right)^2}\bigg\vert_{x_0}=\lambda_i$$

which we can assume (without loss of generality) are nondecreasing. That is, $\lambda_1\leq\lambda_2\leq\dots\leq\lambda_n$.

Now, if all of these eigenvalues are positive at a critical point $x_0$, then the Hessian is positive-definite. That is, given any direction $v$ we have $d^2f(x_0)(v,v)>0$. On the other hand, if all of the eigenvalues are negative, the Hessian is negative-definite; given any direction $v$ we have $d^2f(x_0)(v,v)<0$. In the former case, we’ll find that $f$ has a local minimum in a neighborhood of $x_0$, and in the latter case we’ll find that $f$ has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at $x_0$ we’ll call a “saddle” (sketch the graph of $f(x,y)=x^2-y^2$ near $(0,0)$ to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can’t be a local extremum.

We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call “generic” critical points, since if we “wiggle” the function a bit (while maintaining a critical point at $x_0$) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test. Unfortunately, the analogy doesn’t extend, in that the sign of the Hessian determinant isn’t instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign — denoting a local maximum or a local minimum — while a negative determinant denotes eigenvalues of different signs — denoting a saddle. This much is included in multivariable calculus courses, although usually without a clear explanation why it works.

So, given a direction vector $v$ so that $d^2f(x_0)(v,v)>0$, then since $f$ is twice continuously differentiable, there will be some neighborhood $N$ of $x_0$ so that $d^2f(x)(v,v)>0$ for all $x\in N$. In particular, there will be some range of $t$ so that $x_0+tv\in N$. For any such point we can use Taylor’s theorem with $df(x_0)=0$ to tell us that

$$f(x_0+tv)-f(x_0)=\frac{1}{2}d^2f(\xi)(tv,tv)$$

for some $\xi=x_0+\theta tv$ with $0<\theta<1$. And from this we see that $f(x_0+tv)>f(x_0)$ for every $t$ so that $x_0+tv\in N$. A similar argument shows that if $d^2f(x_0)(v,v)<0$ then $f(x_0+tv)<f(x_0)$ for any such point near $x_0$ in the direction of $v$.

Now if the Hessian is positive-definite then *every* direction $v$ from $x_0$ gives us $d^2f(x_0)(v,v)>0$, and so every point $x$ near enough to $x_0$ satisfies $f(x)>f(x_0)$. If the Hessian is negative-definite, then every point $x$ near enough to $x_0$ satisfies $f(x)<f(x_0)$. And if the Hessian has both positive and negative eigenvalues then within any neighborhood we can find some directions in which $f(x)>f(x_0)$ and some in which $f(x)<f(x_0)$.
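The two-dimensional case can be sketched directly (my own illustration): for a symmetric $2\times2$ Hessian the eigenvalues come from the characteristic polynomial, and their signs give exactly the classification described above.

```python
# Classify a critical point from a symmetric 2x2 Hessian [[a,b],[b,c]]
# via its eigenvalues, the roots of t^2 - (a+c) t + (ac - b^2).

import math

def classify(a, b, c):
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)  # real, since the matrix is symmetric
    lo, hi = (tr - disc) / 2, (tr + disc) / 2
    if lo > 0:
        return "local minimum"   # positive-definite
    if hi < 0:
        return "local maximum"   # negative-definite
    if lo < 0 < hi:
        return "saddle"          # eigenvalues of mixed sign
    return "degenerate"          # some eigenvalue is zero

print(classify(2, 0, 3))    # local minimum, e.g. f = x^2 + (3/2)y^2
print(classify(1, 2, 1))    # det = -3 < 0: saddle
print(classify(-1, 0, -2))  # local maximum
```

Note how a negative determinant immediately means mixed signs, matching the two-dimensional rule from multivariable calculus.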

## Local Extrema in Multiple Variables

Just like in one variable, we’re interested in local maxima and minima of a function $f:X\to\mathbb{R}$, where $X$ is an open region in $\mathbb{R}^n$. Again, we say that $f$ has a local minimum at a point $a\in X$ if there is some neighborhood $N$ of $a$ so that $f(a)\leq f(x)$ for all $x\in N$. A maximum is similarly defined, except that we require $f(a)\geq f(x)$ in the neighborhood. As I alluded to recently, we can bring Fermat’s theorem to bear to determine a necessary condition.

Specifically, if we have coordinates on $\mathbb{R}^n$ given by a basis $\{e_i\}$, we can regard $f$ as a function of the $n$ variables $x^1,\dots,x^n$. We can fix $n-1$ of these variables as $x^i=a^i$ for $i\neq k$ and let $x^k$ vary in a neighborhood of $a^k$. If $f$ has a local extremum at $a$, then in particular it has a local extremum along this coordinate line at $x^k=a^k$. And so we can use Fermat’s theorem to draw conclusions about the derivative of this restricted function at $a^k$, which of course is the partial derivative $\frac{\partial f}{\partial x^k}(a)$.

So what can we say? For each variable $x^k$, the partial derivative $\frac{\partial f}{\partial x^k}$ either does not exist or is equal to zero at $a$. And because the differential subsumes the partial derivatives, if any of them fail to exist the differential must fail to exist as well. On the other hand, if they all exist they’re all zero, and so $df(a)=0$ as well. Incidentally, we can again make the connection to the usual coverage in a multivariable calculus course by remembering that the gradient $\nabla f(a)$ is the vector that corresponds to the linear functional of the differential $df(a)$. So at a local extremum we must have $\nabla f(a)=0$.

As was the case with Fermat’s theorem, this provides a necessary, but not a sufficient condition to have a local extremum. Anything that can go wrong in one dimension can be copied here. For instance, we could define $f(x,y)=x^3$. Then we find $df=3x^2\,dx$, which is zero at $(0,0)$. But any neighborhood of this point will contain points $(t,0)$ and $(-t,0)$ for small enough $t>0$, and we see that $f(-t,0)<f(0,0)<f(t,0)$, so the origin cannot be a local extremum.

But weirder things can happen. We might ask that $f$ have a local minimum at $(0,0)$ along any line, like we tried with directional derivatives. But even this can go wrong. If we define

$$f(x,y)=(y-x^2)(y-2x^2)=y^2-3x^2y+2x^4$$

we can calculate

$$df=\left(8x^3-6xy\right)dx+\left(2y-3x^2\right)dy$$

which again is zero at $(0,0)$. Along any slanted line through the origin $(t,mt)$ with $m\neq0$ we find

$$f(t,mt)=m^2t^2-3mt^3+2t^4$$

and so the second derivative in $t$ is $2m^2$, always positive at the origin, except along the $x$-axis. For the vertical line, we find

$$f(0,t)=t^2$$

so along all of these lines we have a local minimum at the origin by the second derivative test. And along the $x$-axis, we have $f(t,0)=2t^4$, which has the origin as a local minimum.

Unfortunately, it’s *still* not a local minimum in the plane, since any neighborhood of the origin must contain points of the form $\left(t,\frac{3}{2}t^2\right)$ for small enough $t$. For these points we find

$$f\left(t,\tfrac{3}{2}t^2\right)=\left(\tfrac{3}{2}t^2-t^2\right)\left(\tfrac{3}{2}t^2-2t^2\right)=-\tfrac{1}{4}t^4$$

and so $f$ cannot have a local minimum at the origin.
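A quick numerical check of this phenomenon (assuming, as a standard instance, the Peano-type surface $f(x,y)=(y-x^2)(y-2x^2)$): along every straight line through the origin the function is nonnegative near $0$, yet it is negative at points on the parabola $y=\tfrac{3}{2}x^2$ arbitrarily close to the origin.

```python
# f has a local minimum at the origin along every line through it,
# but takes negative values on the parabola y = (3/2) x^2.

def f(x, y):
    return (y - x * x) * (y - 2 * x * x)

# nonnegative near 0 along slanted lines, the x-axis, and the y-axis
for m in (0.0, 0.5, -2.0):
    assert all(f(t, m * t) >= 0 for t in (0.01, 0.001, -0.001))
assert f(0.0, 0.001) >= 0

print(f(0.01, 1.5 * 0.01 ** 2))   # negative: not a local minimum
```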

What we’ll do is content ourselves with this analogue and extension of Fermat’s theorem as a necessary condition, and then develop tools that can distinguish the common behaviors near such critical points, analogous to the second derivative test.

## The Implicit Function Theorem II

Okay, today we’re going to prove the implicit function theorem. We’re going to think of our function $f$ as taking an $n$-dimensional vector $x$ and a $k$-dimensional vector $t$ and giving back an $n$-dimensional vector $f(x;t)$. In essence, what we want to do is see how this output vector must change as we change $t$, and then undo that by making a corresponding change in $x$. And to do that, we need to know how changing $x$ changes the output, at least in a neighborhood of $(a;b)$. That is, we’ve got to invert a function, and we’ll need to use the inverse function theorem.

But we’re not going to apply it directly as the above heuristic suggests. Instead, we’re going to “puff up” the function $f$ into a bigger function $F$ that will give us some room to maneuver. For $1\leq j\leq n$ we define

$$F^j(x;t)=f^j(x;t)$$

just copying over our original function. Then we continue by defining for $1\leq j\leq k$

$$F^{n+j}(x;t)=t^j$$

That is, the new component functions are just the coordinate functions $t^j$. We can easily calculate the Jacobian matrix

$$dF=\begin{pmatrix}\dfrac{\partial f^i}{\partial x^j}&\dfrac{\partial f^i}{\partial t^j}\\0_{k\times n}&I_k\end{pmatrix}$$

where $0_{k\times n}$ is the zero matrix and $I_k$ is the identity matrix. From here it’s straightforward to find the Jacobian determinant

$$\det(dF)=\det\left(\frac{\partial f^i}{\partial x^j}\right)$$

which is exactly the determinant we assert to be nonzero at $(a;b)$. We also easily see that $F(a;b)=(0;b)$.

And so the inverse function theorem tells us that there are neighborhoods $X$ of $(a;b)$ and $Y$ of $(0;b)$ so that $F$ is injective on $X$ and $F(X)=Y$, and that there is a continuously differentiable inverse function $G:Y\to X$ so that $G(F(x;t))=(x;t)$ for all $(x;t)\in X$. We want to study this inverse function to recover our implicit function from it.

First off, we can write $G(u;v)=\left(G_1(u;v);G_2(u;v)\right)$ for two functions: $G_1$, which takes $n$-dimensional vector values, and $G_2$, which takes $k$-dimensional vector values. Our inverse relation tells us that

$$(x;t)=G(F(x;t))=\left(G_1(F(x;t));G_2(F(x;t))\right)$$

But since $F$ is injective from $X$ onto $Y$, we can write any point $(u;v)\in Y$ as $(u;v)=F(x;t)$, and in this case we must have $v=t$ by the definition of $F$. That is, we have

$$G(u;v)=\left(G_1(u;v);v\right)$$

And so we see that $x=G_1(u;v)$, where $x$ is the $n$-dimensional vector so that $f(x;v)=u$. We thus have $f\left(G_1(u;v);v\right)=u$ for every $(u;v)\in Y$.

Now define $T$ to be the collection of vectors $t$ so that $(0;t)\in Y$, and for each such $t$ define $h(t)=G_1(0;t)$, so $f(h(t);t)=0$. As a slice of the open set $Y$ in the product topology on $\mathbb{R}^n\times\mathbb{R}^k$, the set $T$ is open in $\mathbb{R}^k$. Further, $h$ is continuously differentiable on $T$ since $G$ is continuously differentiable on $Y$, and the components of $h$ are taken directly from those of $G$. Finally, $b$ is in $T$ since $F(a;b)=(0;b)\in Y$, and $G(0;b)=(a;b)$ by the inversion. This also shows that $h(b)=a$.

The only thing left is to show that $h$ is uniquely defined. But there can only be one such function, by the injectivity of $F$. If there were another such function $\tilde{h}$ then we’d have $f(\tilde{h}(t);t)=0=f(h(t);t)$, and thus $F(\tilde{h}(t);t)=(0;t)=F(h(t);t)$, or $\tilde{h}(t)=h(t)$ for every $t\in T$.

## The Implicit Function Theorem I

Let’s consider the function $g(x,y)=x^2+y^2-1$. The collection of points $(x,y)$ so that $g(x,y)=0$ defines a curve in the plane: the unit circle. Unfortunately, this relation is not a function. Neither is $y$ defined as a function of $x$, nor is $x$ defined as a function of $y$ by this curve. However, if we consider a point $(x_0,y_0)$ on the curve (that is, with $g(x_0,y_0)=0$), then *near* this point we usually *do* have a graph of $y$ as a function of $x$ (except for a few isolated points). That is, as we move $x$ near the value $x_0$ then we have to adjust $y$ to maintain the relation $g(x,y)=0$. There is some function $y=h(x)$ defined “implicitly” in a neighborhood of $x_0$ satisfying the relation $g(x,h(x))=0$.

We want to generalize this situation. Given a system of $n$ functions of $n+k$ variables

$$f^j\left(x^1,\dots,x^n;t^1,\dots,t^k\right)\qquad j=1,\dots,n$$

we consider the collection of points in $(n+k)$-dimensional space satisfying the system of equations $f^j(x;t)=0$.

If this were a linear system, the rank-nullity theorem would tell us that our solution space is (generically) $k$-dimensional. Indeed, we could use Gauss-Jordan elimination to put the system into reduced row echelon form, and (usually) find the resulting matrix starting with an $n\times n$ identity matrix, like

$$\left(\begin{array}{ccc|ccc}1&&&b^1_1&\cdots&b^1_k\\&\ddots&&\vdots&&\vdots\\&&1&b^n_1&\cdots&b^n_k\end{array}\right)$$

This makes finding solutions to the system easy. We put our variables into a column vector and write

$$\begin{pmatrix}I_n&B\end{pmatrix}\begin{pmatrix}x\\t\end{pmatrix}=0$$

and from this we find

$$x=-Bt$$

Thus we can use the variables $t^i$ as parameters on the space of solutions, and define each of the $x^j$ as a function of the $t^i$.

But in general we don’t have a linear system. Still, we want to know some circumstances under which we can do something similar and write each of the first $n$ variables $x^j$ as a function of the other variables $t^i$, at least near some known point $(a;b)$.

The key observation is that we can perform the Gauss-Jordan elimination above and get a matrix of rank $n$ if and only if the leading $n\times n$ matrix is invertible. And this is generalized by asking that some Jacobian determinant of our system of functions be nonzero.

Specifically, let’s assume that all of the $f^j$ are continuously differentiable on some region in $(n+k)$-dimensional space, and that $(a;b)$ is some point in this region where $f(a;b)=0$, and at which the determinant

$$\det\left(\frac{\partial f^i}{\partial x^j}\right)$$

is nonzero, where both indices $i$ and $j$ run from $1$ to $n$ to make a square matrix. Then I assert that there is some $k$-dimensional neighborhood $T$ of $b$ and a uniquely defined, continuously differentiable, vector-valued function $h:T\to\mathbb{R}^n$ so that $h(b)=a$ and $f(h(t);t)=0$ for all $t\in T$.

That is, near $(a;b)$ we can use the variables $t^i$ as parameters on the space of solutions to our system of equations. Near this point, the solution set looks like the graph of the function $x=h(t)$, which is implicitly defined by the need to stay on the solution set as we vary $t$. This is the implicit function theorem, and we will prove it next time.
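Before the proof, a tiny numerical sketch of the statement (my example): for $g(x,y)=x^2+y^2-1$ near the point $(0.6,0.8)$ we have $\frac{\partial g}{\partial y}=1.6\neq0$, so $y$ is locally a function of $x$; here we recover that implicit function by bisection rather than the closed form $\sqrt{1-x^2}$.

```python
# Recover the implicit function y = h(x) with g(x, h(x)) = 0 for
# g(x, y) = x^2 + y^2 - 1, near (0.6, 0.8), by bisection in y.

def g(x, y):
    return x * x + y * y - 1.0

def h(x, lo=0.0, hi=1.0, steps=60):
    for _ in range(steps):
        mid = (lo + hi) / 2
        if g(x, lo) * g(x, mid) <= 0:   # sign change in [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(h(0.6))                # 0.8, matching sqrt(1 - 0.36)
print(abs(g(0.6, h(0.6))))   # ~0: the relation is maintained
```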

## The Inverse Function Theorem

At last we come to the theorem that I promised. Let $f:S\to\mathbb{R}^n$ be continuously differentiable on an open region $S\subseteq\mathbb{R}^n$, and let $T=f(S)$. If the Jacobian determinant $J_f(a)\neq0$ at some point $a\in S$, then there is a uniquely determined function $g$ and two open sets $X\subseteq S$ and $Y\subseteq T$ so that

- $a\in X$, and $Y=f(X)$
- $f$ is injective on $X$
- $g$ is defined on $Y$, $g(Y)=X$, and $g(f(x))=x$ for all $x\in X$
- $g$ is continuously differentiable on $Y$

The Jacobian determinant $J_f(x)$ is continuous as a function of $x$, so there is some neighborhood of $a$ on which the Jacobian is nonzero. Our second lemma tells us that there is a smaller neighborhood on which $f$ is injective. We pick some closed ball $\overline{K}$ centered at $a$ within this neighborhood, and use our first lemma to find that $f\left(\overline{K}\right)$ must contain an open neighborhood $Y$ of $f(a)$. Then we define $X=\mathrm{int}\left(\overline{K}\right)\cap f^{-1}(Y)$, which is open since both $\mathrm{int}\left(\overline{K}\right)$ and $f^{-1}(Y)$ are (the latter by the continuity of $f$). Since $f$ is injective on the compact set $\overline{K}$, it has a uniquely-defined continuous inverse $g$ on $Y$. This establishes all but the last of the conditions of the theorem.

Now the hard part is showing that $g$ is continuously differentiable on $Y$. To this end, like we did in our second lemma, we define the function

$$h(z_1,\dots,z_n)=\det\left(\frac{\partial f^i}{\partial x^j}\bigg\vert_{z_i}\right)$$

of $n$ points, evaluating the $i$th row of the Jacobian matrix at the point $z_i$, along with a neighborhood $N$ of $a$ so that as long as all the $z_i$ are within $N$ this function is nonzero. Without loss of generality we can go back and choose our earlier neighborhood so that it lies within $N$, and thus that $\overline{K}\subseteq N$.

To show that the partial derivative $\frac{\partial g^k}{\partial y^j}$ exists at a point $y\in Y$, we consider the difference quotient

$$\frac{g^k(y+\lambda e_j)-g^k(y)}{\lambda}$$

with $y+\lambda e_j$ also in $Y$ for sufficiently small $\lambda$. Then writing $x_1=g(y)$ and $x_2=g(y+\lambda e_j)$ we find $f(x_2)-f(x_1)=\lambda e_j$. The mean value theorem then tells us that

$$\delta_j^k=\frac{f^k(x_2)-f^k(x_1)}{\lambda}=\sum\limits_{i=1}^n\frac{\partial f^k}{\partial x^i}\bigg\vert_{\xi_k}\frac{x_2^i-x_1^i}{\lambda}$$

for some $\xi_k$ on the line segment between $x_1$ and $x_2$ (no summation on $k$). As usual, $\delta_j^k$ is the Kronecker delta.

This is a linear system of equations in the difference quotients $\frac{x_2^i-x_1^i}{\lambda}$, which has a unique solution since the determinant of its matrix is $h(\xi_1,\dots,\xi_n)\neq0$. We use Cramer’s rule to solve it, and get an expression for our difference quotient as a quotient of two determinants. This is why we want the form of the solution given by Cramer’s rule, and not by a more computationally-efficient method like Gaussian elimination.

As $\lambda$ approaches zero, continuity of $g$ tells us that $x_2$ approaches $x_1$, and thus so do all of the $\xi_k$. Therefore the determinant in the denominator of Cramer’s rule approaches $J_f(x_1)\neq0$ in the limit, and thus the limits of the solutions given by Cramer’s rule actually do exist.

This establishes that the partial derivative $\frac{\partial g^k}{\partial y^j}$ exists at each $y\in Y$. Further, since we found the limit of the difference quotient by Cramer’s rule, we have an expression given by the quotient of two determinants, each of which only involves the partial derivatives of $f$, which are themselves all continuous. Therefore the partial derivatives of $g$ not only exist but are in fact continuous.
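To close with a sketch of what the theorem buys us (my own example, with the map and the Newton iteration chosen purely for illustration): near the origin the map $f(x,y)=(x+y^3,\,y+x^3)$ has Jacobian determinant $1-9x^2y^2\neq0$, and its local inverse can be computed numerically using that Jacobian.

```python
# Numerically invert f(x, y) = (x + y^3, y + x^3) near the origin by
# Newton's method with the Jacobian [[1, 3y^2], [3x^2, 1]].

def f(x, y):
    return x + y ** 3, y + x ** 3

def f_inverse(u, v, steps=30):
    x, y = u, v                         # initial guess
    for _ in range(steps):
        fx, fy = f(x, y)
        rx, ry = fx - u, fy - v         # residual f(x, y) - (u, v)
        b, c = 3 * y * y, 3 * x * x     # off-diagonal Jacobian entries
        det = 1 - b * c                 # Jacobian determinant
        x -= (rx - b * ry) / det        # one Newton step:
        y -= (ry - c * rx) / det        # subtract J^{-1} * residual
    return x, y

u, v = f(0.2, 0.3)
print(f_inverse(u, v))   # recovers (0.2, 0.3)
```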