Extrema with Constraints II
As we said last time, we have an idea for a necessary condition for finding local extrema subject to constraints of a certain form. To be explicit, we assume that $f$ is continuously differentiable on an open region $S\subseteq\mathbb{R}^n$, and we also assume that $g=(g^1,\dots,g^m)$ is a continuously differentiable vector-valued function on the same region (and that $m<n$). We use $g$ to define the region $A$ consisting of those points $x\in S$ so that $g(x)=0$. Now suppose $x_0\in A$ is a point with a neighborhood $N$ so that for all $x\in N\cap A$ we have $f(x)\leq f(x_0)$ (or $f(x)\geq f(x_0)$ for all such $x$). And, finally, we assume that the $m\times m$ determinant

$$\det\left(\frac{\partial g^i}{\partial x^j}\right)\qquad i,j=1,\dots,m$$

is nonzero at $x_0$. Then we have reason to believe that there will exist real numbers $\lambda_1,\dots,\lambda_m$, one for each component of the constraint function, so that the $n$ equations

$$\frac{\partial f}{\partial x^j}(x_0)+\sum_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}(x_0)=0\qquad j=1,\dots,n$$

are satisfied at $x_0$.
Well, first off we can solve the first $m$ equations and determine the $\lambda_i$ right off. Rewrite them as

$$\sum_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^j}(x_0)=-\frac{\partial f}{\partial x^j}(x_0)\qquad j=1,\dots,m$$

a system of $m$ equations in the $m$ unknowns $\lambda_i$. Since the matrix has a nonzero determinant (by assumption) we can solve this system uniquely to determine the $\lambda_i$. What's left is to verify that this choice of the $\lambda_i$ also satisfies the remaining $n-m$ equations.
To take care of this, we'll write $n=m+k$, so we can write the point $x$ as $(x;t)=(x^1,\dots,x^m;t^1,\dots,t^k)$ and particularly $x_0=(a;b)$. Now we can invoke the implicit function theorem! We find a $k$-dimensional neighborhood $T$ of $b$ and a unique continuously differentiable function $h:T\to\mathbb{R}^m$ so that $h(b)=a$ and $g(h(t);t)=0$ for all $t\in T$. Without loss of generality, we can choose $T$ so that $(h(t);t)\in N$, where $N$ is the neighborhood from the assumptions above.
This is the parameterization we discussed last time, and we can now substitute these functions into the function $f$. That is, we can define

$$F(t)=f(h(t);t)\qquad G^i(t)=g^i(h(t);t)$$

Or if we define $H(t)=(h(t);t)$ we can say this more succinctly as $F=f\circ H$ and $G^i=g^i\circ H$.
Anyhow, now all of these $G^i$ are identically zero on $T$ as a consequence of the implicit function theorem, and so each partial derivative $\frac{\partial G^i}{\partial t^j}$ is identically zero as well. But since the $G^i$ are composite functions we can also use the chain rule to evaluate these partial derivatives. We find

$$0=\frac{\partial G^i}{\partial t^j}=\sum_{l=1}^m\frac{\partial g^i}{\partial x^l}\frac{\partial h^l}{\partial t^j}+\frac{\partial g^i}{\partial t^j}$$

Similarly, since $F$ has a local minimum (as a function of the $t^j$) at $b$ we must find its partial derivatives zero at that point. That is

$$0=\frac{\partial F}{\partial t^j}(b)=\sum_{l=1}^m\frac{\partial f}{\partial x^l}(x_0)\frac{\partial h^l}{\partial t^j}(b)+\frac{\partial f}{\partial t^j}(x_0)$$

Now let's take the previous equation involving $g^i$, evaluate it at $b$, multiply it by $\lambda_i$, sum over $i$, and add it to this latest equation. We find

$$0=\sum_{l=1}^m\left[\frac{\partial f}{\partial x^l}(x_0)+\sum_{i=1}^m\lambda_i\frac{\partial g^i}{\partial x^l}(x_0)\right]\frac{\partial h^l}{\partial t^j}(b)+\frac{\partial f}{\partial t^j}(x_0)+\sum_{i=1}^m\lambda_i\frac{\partial g^i}{\partial t^j}(x_0)$$

Now, the expression in brackets is zero because that's actually how we defined the $\lambda_i$ way back at the start of the proof! And then what remains is exactly the remaining $n-m$ equations we need to complete the proof.
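As a quick sanity check of this condition (my own sketch, not part of the original argument), here is a minimal Python verification on an assumed example: maximize $f(x,y,z)=x+2y+3z$ on the unit sphere $g(x,y,z)=x^2+y^2+z^2-1=0$, whose maximizer is $x_0=(1,2,3)/\sqrt{14}$. We solve the first of the $n$ equations for the single multiplier and check that the same multiplier satisfies the rest.

```python
import numpy as np

# Hypothetical example: f(x, y, z) = x + 2y + 3z on the unit sphere g = |x|^2 - 1 = 0.
def grad_f(p):
    return np.array([1.0, 2.0, 3.0])

def grad_g(p):
    return 2.0 * p

# The constrained maximizer is the unit vector in the direction (1, 2, 3).
x0 = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)

# Solve the first equation df/dx^1 + lambda * dg/dx^1 = 0 for lambda...
lam = -grad_f(x0)[0] / grad_g(x0)[0]

# ...and check that the same lambda satisfies the remaining equations.
residual = grad_f(x0) + lam * grad_g(x0)
print("lambda =", lam)
print("residual of df + lambda dg at x0:", residual)  # should be ~0 in every slot
```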
Extrema with Constraints I
We can consider the problem of maximizing or minimizing a function, as we have been, but insisting that our solution satisfy some constraint. For instance, we might have a function $f(x,y,z)$ to maximize, but we're only concerned with unit-length vectors on $S^2\subseteq\mathbb{R}^3$. More generally, we'll be concerned with constraints imposed by setting a function $g$ equal to zero. In the example, we might set $g(x,y,z)=x^2+y^2+z^2-1$. If we want to impose more conditions, we can make $g$ a vector-valued function with as many components as constraint functions we want to set equal to zero.
Now, we might be able to parameterize the collection of points satisfying $g(x)=0$. In the example, we could use the usual parameterization of the sphere by latitude and longitude, writing

$$\begin{aligned}x&=\sin(\theta)\cos(\phi)\\y&=\sin(\theta)\sin(\phi)\\z&=\cos(\theta)\end{aligned}$$

where I've used the physicists' convention on the variables instead of the common one in multivariable calculus classes. Then we could plug these expressions for $x$, $y$, and $z$ into our function $f$, and get a composite function of the variables $\theta$ and $\phi$, which we can then attack with the tools from the last couple days, being careful about when we can and can't trust Cauchy's invariant rule, since the second differential can transform oddly.
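For illustration only (not from the original post), here is a small sympy sketch of this substitution, assuming the concrete objective $f(x,y,z)=xz$: it builds the composite function of $\theta$ and $\phi$ and its first-order conditions.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Physicists' convention: theta is the polar angle, phi the azimuthal angle.
x = sp.sin(theta) * sp.cos(phi)
y = sp.sin(theta) * sp.sin(phi)
z = sp.cos(theta)

# Hypothetical objective to maximize on the sphere.
f = x * z

# The composite function of theta and phi, and its first-order conditions.
print("F(theta, phi) =", sp.simplify(f))
print("dF/dtheta     =", sp.simplify(sp.diff(f, theta)))
print("dF/dphi       =", sp.simplify(sp.diff(f, phi)))
```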
Besides even that care that must be taken, it may not even be possible to parameterize the surface, or it may be extremely difficult. At least we do know that such a parameterization will often exist. Indeed, the implicit function theorem tells us that if we have continuously differentiable constraint functions $g^i$ whose zeroes describe a collection of points in an $n$-dimensional space $\mathbb{R}^n$, and the $m\times m$ determinant

$$\det\left(\frac{\partial g^i}{\partial x^j}\right)\qquad i,j=1,\dots,m$$

is nonzero at some point satisfying $g(x)=0$, then we can "solve" these equations for the first $m$ variables as functions of the last $n-m$. This gives us exactly such a parameterization, and in principle we could use it. But the calculations get amazingly painful.
Instead, we want to think about this problem another way. We want to consider a point $x_0$ satisfying $g(x_0)=0$ which has a neighborhood $N$ so that for all $x\in N$ satisfying $g(x)=0$ we have $f(x)\leq f(x_0)$. This does not say that there are no nearby points to $x_0$ where $f$ takes on larger values, but it does say that to reach any such point we must leave the region described by $g(x)=0$.
Now, let's think about this sort of heuristically. As we look in various directions $v$ from $x_0$, some of them are tangent to the region described by $g(x)=0$. These are the directions satisfying $dg(x_0;v)=0$ — where to first order the value of $g$ is not changing in the direction of $v$. I say that in none of these directions can $f$ change (again, to first order) either. For if it did, $f$ would either increase to first order in that direction or decrease. If it increased, then we could find a path in the region where $g(x)=0$ along which $f$ was increasing, contradicting our assertion that we'd have to leave the region for this to happen. But if $f$ decreased to first order, then it would increase to first order in the opposite direction, and we'd have the same problem. That is, we must have $df(x_0;v)=0$ whenever $dg(x_0;v)=0$.
The kernel of $df(x_0)$ consists of all vectors orthogonal to the gradient vector $\nabla f(x_0)$, and the line it spans is the orthogonal complement to the kernel. Similarly, the kernel of $dg(x_0)$ consists of all vectors orthogonal to each of the gradient vectors $\nabla g^i(x_0)$, and is thus the orthogonal complement to the entire subspace they span. The kernel of $dg(x_0)$ is contained in the kernel of $df(x_0)$, and orthogonal complements are order-reversing, which means that $\nabla f(x_0)$ must lie within the span of the $\nabla g^i(x_0)$. That is, there must be real numbers $\lambda_i$ so that

$$\nabla f(x_0)=\sum_{i=1}^m\lambda_i\nabla g^i(x_0)$$

or, passing back to differentials,

$$df(x_0)=\sum_{i=1}^m\lambda_i\,dg^i(x_0)$$
So in the presence of constraints we replace the condition $df(x_0)=0$ by this one. We call the $\lambda_i$ "Lagrange multipliers", and for every one of these variables we add to the system of equations, we also add the constraint equation $g^i(x_0)=0$, so we should still get an isolated collection of points.
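To make the bookkeeping concrete (an illustration of mine, not from the original post), here is a small sympy sketch assuming the example $f(x,y)=xy$ with the single constraint $g(x,y)=x+y-1=0$: one multiplier is added, one constraint equation joins the system, and the solution set is an isolated point.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x * y                 # hypothetical objective
g = x + y - 1             # hypothetical constraint g = 0

# Lagrange system: df = lambda * dg, together with the constraint itself.
equations = [
    sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
    sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
    sp.Eq(g, 0),
]
print(sp.solve(equations, [x, y, lam], dict=True))
# Expect the isolated point x = y = 1/2 with lambda = 1/2.
```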
Now, we reached this conclusion by a rather handwavy argument about being able to find increasing directions and so on within the region $g(x)=0$. This line of reasoning could possibly be firmed up, but we'll find our proof next time in a slightly different approach.
Classifying Critical Points
So let's say we've got a critical point $x_0$ of a multivariable function $f$. That is, a point where the differential $df(x_0)$ vanishes. We want something like the second derivative test that might tell us more about the behavior of the function near that point, and to identify (some) local maxima and minima. We'll assume here that $f$ is twice continuously differentiable in some region $S$ around $x_0$.
The analogue of the second derivative for multivariable functions is the second differential $d^2f$. This function assigns to every point a bilinear function of two displacement vectors $u$ and $v$, and it measures the rate at which the directional derivative in the direction of $v$ is changing as we move in the direction of $u$. That is,

$$d^2f(x;u,v)=\left[D_u\left(D_vf\right)\right](x)$$

If we choose coordinates on $\mathbb{R}^n$ given by an orthonormal basis $\{e_i\}$, we can write the second differential in terms of coordinates

$$d^2f(x;u,v)=\sum_{i,j=1}^n\frac{\partial^2f}{\partial x^i\partial x^j}(x)u^iv^j$$

This matrix of second partial derivatives is often called the "Hessian" of $f$ at the point $x$.
As I said above, this is a bilinear form. Further, Clairaut’s theorem tells us that it’s a symmetric form. Then the spectral theorem tells us that we can find an orthonormal basis with respect to which the Hessian is actually diagonal, and the diagonal entries are the eigenvalues of the matrix.
So let's go back and assume we're working with such a basis. This means that our second partial derivatives are particularly simple. We find that for $i\neq j$ we have

$$\frac{\partial^2f}{\partial x^i\partial x^j}(x_0)=0$$

and for $i=j$, the second partial derivative is an eigenvalue

$$\frac{\partial^2f}{\partial x^i\partial x^i}(x_0)=\lambda_i$$

which we can assume (without loss of generality) are nondecreasing. That is, $\lambda_1\leq\lambda_2\leq\dots\leq\lambda_n$.
Now, if all of these eigenvalues are positive at a critical point $x_0$, then the Hessian is positive-definite. That is, given any direction $v$ we have $d^2f(x_0;v,v)>0$. On the other hand, if all of the eigenvalues are negative, the Hessian is negative-definite; given any direction $v$ we have $d^2f(x_0;v,v)<0$. In the former case, we'll find that $f$ has a local minimum in a neighborhood of $x_0$, and in the latter case we'll find that $f$ has a local maximum there. If some eigenvalues are negative and others are positive, then the function has a mixed behavior at $x_0$ we'll call a "saddle" (sketch the graph of $f(x,y)=x^2-y^2$ near $(0,0)$ to see why). And if any eigenvalues are zero, all sorts of weird things can happen, though at least if we can find one positive and one negative eigenvalue we know that the critical point can't be a local extremum.
We remember that the determinant of a diagonal matrix is the product of its eigenvalues, so if the determinant of the Hessian is nonzero then either we have a local maximum, we have a local minimum, or we have some form of well-behaved saddle. These behaviors we call "generic" critical points, since if we "wiggle" the function a bit (while maintaining a critical point at $x_0$) the Hessian determinant will stay nonzero. If the Hessian determinant is zero, wiggling the function a little will make it nonzero, and so this sort of critical point is not generic. This is the sort of unstable situation analogous to a failure of the second derivative test. Unfortunately, the analogy doesn't extend, in that the sign of the Hessian determinant isn't instantly meaningful. In two dimensions a positive determinant means both eigenvalues have the same sign — denoting a local maximum or a local minimum — while a negative determinant denotes eigenvalues of different signs — denoting a saddle. This much is included in multivariable calculus courses, although usually without a clear explanation of why it works.
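As a quick illustration (my own sketch, not from the original post), here is how one might classify a critical point numerically: build the matrix of second partial derivatives and look at the signs of its eigenvalues. The function $f(x,y)=x^2+3xy+y^2$ is an assumption chosen just to exercise the code; its Hessian at the origin has one positive and one negative eigenvalue.

```python
import numpy as np

# Hypothetical function with a critical point at the origin.
def f(x, y):
    return x**2 + 3*x*y + y**2

def hessian(func, p, h=1e-4):
    """Central-difference approximation of the matrix of second partials at p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (func(*(p + ei + ej)) - func(*(p + ei - ej))
                       - func(*(p - ei + ej)) + func(*(p - ei - ej))) / (4 * h**2)
    return H

H = hessian(f, (0.0, 0.0))
eigenvalues = np.linalg.eigvalsh(H)   # the Hessian is symmetric by Clairaut's theorem
print("eigenvalues:", eigenvalues)
if np.all(eigenvalues > 0):
    print("positive-definite: local minimum")
elif np.all(eigenvalues < 0):
    print("negative-definite: local maximum")
elif np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
    print("mixed signs: saddle")
else:
    print("some zero eigenvalues: the test is inconclusive")
```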
So, given a direction vector $v$ so that $d^2f(x_0;v,v)>0$, then since $f$ is in $C^2(S)$, there will be some neighborhood $N$ of $x_0$ so that $d^2f(x;v,v)>0$ for all $x\in N$. In particular, there will be some range of $t$ so that $x_0+tv\in N$. For any such point we can use Taylor's theorem (along with the fact that $df(x_0)=0$) to tell us that

$$f(x_0+tv)-f(x_0)=\frac{1}{2}d^2f(\xi;tv,tv)=\frac{t^2}{2}d^2f(\xi;v,v)$$

for some $\xi$ on the segment between $x_0$ and $x_0+tv$. And from this we see that $f(x_0+tv)>f(x_0)$ for every $t$ so that $x_0+tv\in N$. A similar argument shows that if $d^2f(x_0;v,v)<0$ then $f(x_0+tv)<f(x_0)$ for any $x_0+tv$ near $x_0$ in the direction of $v$.
Now if the Hessian is positive-definite then every direction $v$ from $x_0$ gives us $d^2f(x_0;v,v)>0$, and so every point $x_0+tv$ near $x_0$ satisfies $f(x_0+tv)>f(x_0)$. If the Hessian is negative-definite, then every point $x_0+tv$ near $x_0$ satisfies $f(x_0+tv)<f(x_0)$. (Strictly speaking, to get a single neighborhood that works for every direction at once we should appeal to the compactness of the unit sphere of directions and the continuity of $d^2f$, but the conclusion is the same.) And if the Hessian has both positive and negative eigenvalues then within any neighborhood we can find some directions in which $f(x)>f(x_0)$ and some in which $f(x)<f(x_0)$.
Local Extrema in Multiple Variables
Just like in one variable, we're interested in local maxima and minima of a function $f:X\to\mathbb{R}$, where $X$ is an open region in $\mathbb{R}^n$. Again, we say that $f$ has a local minimum at a point $a\in X$ if there is some neighborhood $N$ of $a$ so that $f(a)\leq f(x)$ for all $x\in N$. A maximum is similarly defined, except that we require $f(a)\geq f(x)$ in the neighborhood. As I alluded to recently, we can bring Fermat's theorem to bear to determine a necessary condition.
Specifically, if we have coordinates on $\mathbb{R}^n$ given by a basis $\{e_i\}$, we can regard $f$ as a function of the $n$ variables $x^1,\dots,x^n$. We can fix $n-1$ of these variables $x^j=a^j$ for $j\neq i$ and let $x^i$ vary in a neighborhood of $a^i$. If $f$ has a local extremum at $a$, then in particular it has a local extremum along this coordinate line at $x^i=a^i$. And so we can use Fermat's theorem to draw conclusions about the derivative of this restricted function at $a^i$, which of course is the partial derivative $\frac{\partial f}{\partial x^i}(a)$.
So what can we say? For each variable $x^i$, the partial derivative $\frac{\partial f}{\partial x^i}$ either does not exist or is equal to zero at $a$. And because the differential subsumes the partial derivatives, if any of them fail to exist the differential must fail to exist as well. On the other hand, if they all exist they're all zero, and so $df(a)=0$ as well. Incidentally, we can again make the connection to the usual coverage in a multivariable calculus course by remembering that the gradient $\nabla f(a)$ is the vector that corresponds to the linear functional of the differential $df(a)$. So at a local extremum we must have $\nabla f(a)=0$.
As was the case with Fermat's theorem, this provides a necessary, but not a sufficient condition to have a local extremum. Anything that can go wrong in one dimension can be copied here. For instance, we could define $f(x,y)=x^2-y^2$. Then we find $df=2x\,dx-2y\,dy$, which is zero at $(0,0)$. But any neighborhood of this point will contain points $(t,0)$ and $(0,t)$ for small enough $t$, and we see that $f(t,0)=t^2>0>-t^2=f(0,t)$, so the origin cannot be a local extremum.
But weirder things can happen. We might ask that $f$ have a local minimum at $(0,0)$ along any line, like we tried with directional derivatives. But even this can go wrong. If we define

$$f(x,y)=(y-x^2)(y-3x^2)=y^2-4x^2y+3x^4$$

we can calculate

$$df=(12x^3-8xy)dx+(2y-4x^2)dy$$

which again is zero at $(0,0)$. Along any slanted line through the origin $y=kx$ (with $k\neq0$) we find

$$f(t,kt)=3t^4-4kt^3+k^2t^2$$

and so the second derivative is always positive at the origin, except along the $x$-axis. For the vertical line, we find $f(0,t)=t^2$, so along all of these lines we have a local minimum at the origin by the second derivative test. And along the $x$-axis, we have $f(t,0)=3t^4$, which has the origin as a local minimum.

Unfortunately, it's still not a local minimum in the plane, since any neighborhood of the origin must contain points of the form $(t,2t^2)$ for small enough $t$. For these points we find

$$f(t,2t^2)=(2t^2-t^2)(2t^2-3t^2)=-t^4<0=f(0,0)$$

and so $f$ cannot have a local minimum at the origin.
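Here is a brief numerical check of this example (my own sketch, not part of the original post): along sample lines through the origin the function is positive near $t=0$, but along the parabola $y=2x^2$ it is negative arbitrarily close to the origin.

```python
import numpy as np

def f(x, y):
    return (y - x**2) * (y - 3 * x**2)

t = 1e-3  # a small parameter; any small nonzero value will do

# Along lines through the origin the nearby values are positive...
for k in [0.5, 1.0, 2.0]:             # slanted lines y = k x
    print(f"y = {k}x:", f(t, k * t) > 0)
print("x-axis:", f(t, 0.0) > 0)        # f(t, 0) = 3 t^4 > 0
print("y-axis:", f(0.0, t) > 0)        # f(0, t) = t^2 > 0

# ...but along the parabola y = 2 x^2 the function is negative: f(t, 2t^2) = -t^4.
print("parabola y = 2x^2:", f(t, 2 * t**2) < 0)
```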
What we’ll do is content ourselves with this analogue and extension of Fermat’s theorem as a necessary condition, and then develop tools that can distinguish the common behaviors near such critical points, analogous to the second derivative test.
The Implicit Function Theorem II
Okay, today we're going to prove the implicit function theorem. We're going to think of our function as taking an $n$-dimensional vector $x=(x^1,\dots,x^n)$ and a $k$-dimensional vector $t=(t^1,\dots,t^k)$ and giving back an $n$-dimensional vector $f(x;t)$. In essence, what we want to do is see how this output vector must change as we change $t$, and then undo that by making a corresponding change in $x$. And to do that, we need to know how changing the output changes $x$, at least in a neighborhood of $(a;b)$. That is, we've got to invert a function, and we'll need to use the inverse function theorem.
But we're not going to apply it directly as the above heuristic suggests. Instead, we're going to "puff up" the function into a bigger function $F:S\to\mathbb{R}^{n+k}$ that will give us some room to maneuver. For $1\leq i\leq n$ we define

$$F^i(x;t)=f^i(x;t)$$

just copying over our original function. Then we continue by defining for $1\leq i\leq k$

$$F^{n+i}(x;t)=t^i$$

That is, the new component functions are just the coordinate functions $t^i$. We can easily calculate the Jacobian matrix

$$dF=\begin{pmatrix}\dfrac{\partial f^i}{\partial x^j}&\dfrac{\partial f^i}{\partial t^j}\\0&I_k\end{pmatrix}$$

where $0$ is the $k\times n$ zero matrix and $I_k$ is the $k\times k$ identity matrix. From here it's straightforward to find the Jacobian determinant

$$\det(dF)=\det\left(\frac{\partial f^i}{\partial x^j}\right)$$

which is exactly the determinant we assert to be nonzero at $(a;b)$. We also easily see that $F(a;b)=(0;b)$.
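To see the construction in a small case (an illustration of mine, not from the original post), take the single constraint $f(x;t)=x^2+t^2-1$ from the circle example: the puffed-up map is $F(x;t)=(x^2+t^2-1,\;t)$, and its Jacobian determinant reduces to $\partial f/\partial x$.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Single constraint from the circle example, with x playing the "solved-for" role.
f = x**2 + t**2 - 1

# The puffed-up map F(x; t) = (f(x; t), t).
F = sp.Matrix([f, t])
J = F.jacobian([x, t])
print(J)            # Matrix([[2*x, 2*t], [0, 1]])
print(J.det())      # 2*x, i.e. exactly df/dx
```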
And so the inverse function theorem tells us that there are neighborhoods $X$ of $(a;b)$ and $Y$ of $F(a;b)=(0;b)$ so that $F$ is injective on $X$ and $Y=F(X)$, and that there is a continuously differentiable inverse function $G:Y\to X$ so that $G(F(x;t))=(x;t)$ for all $(x;t)\in X$. We want to study this inverse function to recover our implicit function from it.
First off, we can write $G=(G_1;G_2)$ for two functions: $G_1$, which takes $n$-dimensional vector values, and $G_2$, which takes $k$-dimensional vector values. Our inverse relation tells us that

$$(x;t)=G(F(x;t))=\left(G_1(f(x;t);t);G_2(f(x;t);t)\right)$$

But since $F$ is injective from $X$ onto $Y$, we can write any point $(u;v)\in Y$ as $(u;v)=F(x;t)$, and in this case we must have $v=t$ by the definition of $F$. That is, we have

$$G(u;v)=G(F(x;t))=(x;t)=(x;v)$$

And so we see that $G_1(u;v)=x$, where $x$ is the $n$-dimensional vector so that $f(x;v)=u$. We thus have

$$f(G_1(u;v);v)=u$$

for every $(u;v)\in Y$.
Now define $T$ to be the collection of vectors $t$ so that $(0;t)\in Y$, and for each such $t$ define $g(t)=G_1(0;t)$, so $f(g(t);t)=0$. As a slice of the open set $Y$ in the product topology on $\mathbb{R}^n\times\mathbb{R}^k$, the set $T$ is open in $\mathbb{R}^k$. Further, $g$ is continuously differentiable on $T$ since $G$ is continuously differentiable on $Y$, and the components of $g$ are taken directly from those of $G$. Finally, $b$ is in $T$ since $(0;b)=F(a;b)\in Y$, because $f(a;b)=0$ by assumption. This also shows that $g(b)=G_1(0;b)=a$.
The only thing left is to show that $g$ is uniquely defined. But there can only be one such function, by the injectivity of $F$. If there were another such function $\bar{g}$ then we'd have $F(\bar{g}(t);t)=(0;t)=F(g(t);t)$, and thus $(\bar{g}(t);t)=(g(t);t)$, or $\bar{g}(t)=g(t)$ for every $t\in T$.
The Implicit Function Theorem I
Let's consider the function $f(x,y)=x^2+y^2-1$. The collection of points $(x,y)$ so that $f(x,y)=0$ defines a curve in the plane: the unit circle. Unfortunately, this relation is not a function. Neither is $y$ defined as a function of $x$, nor is $x$ defined as a function of $y$ by this curve. However, if we consider a point $(x_0,y_0)$ on the curve (that is, with $f(x_0,y_0)=0$), then near this point we usually do have a graph of $y$ as a function of $x$ (except for a few isolated points). That is, as we move $x$ near the value $x_0$ then we have to adjust $y$ to maintain the relation $f(x,y)=0$. There is some function $y=g(x)$ defined "implicitly" in a neighborhood of $x_0$ satisfying the relation $f(x,g(x))=0$.
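As a concrete check (my sketch, not from the original post), near the point $(x_0,y_0)=(3/5,4/5)$ the implicit function is $g(x)=\sqrt{1-x^2}$, and implicit differentiation of $f(x,g(x))=0$ gives $g'(x)=-x/g(x)$; the code below verifies both facts numerically.

```python
import numpy as np

def f(x, y):
    return x**2 + y**2 - 1

# Near (3/5, 4/5) the circle is the graph of this implicit function.
def g(x):
    return np.sqrt(1 - x**2)

x0 = 0.6
print("f(x0, g(x0)) =", f(x0, g(x0)))                # stays on the curve: ~0

# Implicit differentiation: 2x dx + 2y dy = 0 gives dy/dx = -x/y.
h = 1e-6
numeric_slope = (g(x0 + h) - g(x0 - h)) / (2 * h)
print("numeric g'(x0) =", numeric_slope)
print("-x0 / g(x0)    =", -x0 / g(x0))               # agrees with the formula
```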
We want to generalize this situation. Given a system of $n$ functions of $n+k$ variables

$$f^j(x^1,\dots,x^n;t^1,\dots,t^k)\qquad j=1,\dots,n$$

we consider the collection of points in $(n+k)$-dimensional space satisfying $f^j(x;t)=0$.
If this were a linear system, the rank-nullity theorem would tell us that our solution space is (generically) $k$-dimensional. Indeed, we could use Gauss-Jordan elimination to put the system into reduced row echelon form, and (usually) find the resulting matrix starting with an $n\times n$ identity matrix, like

$$\left(\begin{array}{c|c}I_n&B\end{array}\right)$$

for some $n\times k$ matrix $B$. This makes finding solutions to the system easy. We put our variables into a column vector and write

$$\left(\begin{array}{c|c}I_n&B\end{array}\right)\begin{pmatrix}x\\t\end{pmatrix}=x+Bt=0$$

and from this we find

$$x=-Bt$$

Thus we can use the variables $t^1,\dots,t^k$ as parameters on the space of solutions, and define each of the $x^i$ as a function of the $t^j$.
But in general we don't have a linear system. Still, we want to know some circumstances under which we can do something similar and write each of the $x^i$ as a function of the other variables $t^j$, at least near some known point $(a;b)$.
The key observation is that we can perform the Gauss-Jordan elimination above and get a matrix with rank $n$ if and only if the leading $n\times n$ matrix is invertible. And this is generalized to asking that some Jacobian determinant of our system of functions is nonzero.
Specifically, let's assume that all of the $f^j$ are continuously differentiable on some region $S$ in $(n+k)$-dimensional space, and that $(a;b)$ is some point in $S$ where $f(a;b)=0$, and at which the determinant

$$\det\left(\frac{\partial f^j}{\partial x^i}\right)\neq0$$

where both indices $i$ and $j$ run from $1$ to $n$ to make a square matrix. Then I assert that there is some $k$-dimensional neighborhood $T$ of $b$ and a uniquely defined, continuously differentiable, vector-valued function $g:T\to\mathbb{R}^n$ so that $g(b)=a$ and $f(g(t);t)=0$ for every $t\in T$.
That is, near $(a;b)$ we can use the variables $t^j$ as parameters on the space of solutions to our system of equations. Near this point, the solution set looks like the graph of the function $x=g(t)$, which is implicitly defined by the need to stay on the solution set as we vary $t$. This is the implicit function theorem, and we will prove it next time.
The Inverse Function Theorem
At last we come to the theorem that I promised. Let $f:S\to\mathbb{R}^n$ be continuously differentiable on an open region $S\subseteq\mathbb{R}^n$, and $T=f(S)$. If the Jacobian determinant $J_f(a)\neq0$ at some point $a\in S$, then there is a uniquely determined function $g$ and two open sets $X\subseteq S$ and $Y\subseteq T$ so that

- $a\in X$, and $f(a)\in Y$
- $Y=f(X)$
- $f$ is injective on $X$
- $g$ is defined on $Y$, $g(Y)=X$, and $g(f(x))=x$ for all $x\in X$
- $g$ is continuously differentiable on $Y$
The Jacobian determinant $J_f(x)$ is continuous as a function of $x$, so there is some neighborhood $N_1$ of $a$ so that the Jacobian is nonzero within $N_1$. Our second lemma tells us that there is a smaller neighborhood $N\subseteq N_1$ on which $f$ is injective. We pick some closed ball $\overline{K}\subseteq N$ centered at $a$, and use our first lemma to find that $f(\overline{K})$ must contain an open neighborhood $Y$ of $f(a)$. Then we define $X=K\cap f^{-1}(Y)$, which is open since both $K$ and $f^{-1}(Y)$ are (the latter by the continuity of $f$). Since $f$ is injective on the compact set $\overline{K}$, it has a uniquely-defined continuous inverse $g$ on $Y$. This establishes the first four of the conditions of the theorem.
Now the hard part is showing that $g$ is continuously differentiable on $Y$. To this end, like we did in our second lemma, we define the function

$$d(z_1,\dots,z_n)=\det\left(\frac{\partial f^i}{\partial x^j}(z_i)\right)$$

along with a neighborhood $N_2$ of $a$ so that as long as all the $z_i$ are within $N_2$ this function is nonzero. Without loss of generality we can go back and choose our earlier neighborhood $N$ so that $N\subseteq N_2$, and thus that $\overline{K}\subseteq N_2$.
To show that the partial derivative $\frac{\partial g^i}{\partial y^j}$ exists at a point $y\in Y$, we consider the difference quotient

$$\frac{g^i(y+he_j)-g^i(y)}{h}$$

with $y+he_j$ also in $Y$ for sufficiently small $h$. Then writing $x_1=g(y)$ and $x_2=g(y+he_j)$ we find $f(x_2)-f(x_1)=he_j$. The mean value theorem then tells us that

$$h\delta_j^i=f^i(x_2)-f^i(x_1)=\sum_{l=1}^n\frac{\partial f^i}{\partial x^l}(\xi_i)\left(x_2^l-x_1^l\right)$$

for some $\xi_i$ on the segment from $x_1$ to $x_2$ (no summation on $i$). As usual, $\delta_j^i$ is the Kronecker delta.
This is a linear system of equations, which has a unique solution since the determinant of its matrix is $d(\xi_1,\dots,\xi_n)\neq0$. We use Cramer's rule to solve it, and get an expression for our difference quotient as a quotient of two determinants. This is why we want the form of the solution given by Cramer's rule, and not by a more computationally-efficient method like Gaussian elimination.
As $h$ approaches zero, continuity of $g$ tells us that $x_2$ approaches $x_1$, and thus so do all of the $\xi_i$. Therefore the determinant in the denominator of Cramer's rule is in the limit $d(x_1,\dots,x_1)=J_f(x_1)\neq0$, and thus the limits of the solutions given by Cramer's rule actually do exist.
This establishes that the partial derivative $\frac{\partial g^i}{\partial y^j}$ exists at each $y\in Y$. Further, since we found the limit of the difference quotient by Cramer's rule, we have an expression given by the quotient of two determinants, each of which only involves the partial derivatives of $f$, which are themselves all continuous. Therefore the partial derivatives of $g$ not only exist but are in fact continuous.
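As a numerical illustration of the conclusion (my own sketch, not part of the original post), take the polar-coordinate map $f(r,\theta)=(r\cos\theta,r\sin\theta)$: away from $r=0$ its Jacobian determinant is nonzero, and the Jacobian of the local inverse is the matrix inverse of the Jacobian of $f$, by the chain rule applied to $g\circ f$.

```python
import numpy as np

def f(p):
    """Polar-to-Cartesian map; its Jacobian determinant is r, nonzero away from r = 0."""
    r, theta = p
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def g(q):
    """Local inverse (Cartesian-to-polar), valid near the chosen point."""
    x, y = q
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def jacobian(func, p, h=1e-6):
    """Crude central-difference Jacobian of func at p."""
    p = np.asarray(p, dtype=float)
    cols = []
    for j in range(len(p)):
        e = np.zeros_like(p); e[j] = h
        cols.append((func(p + e) - func(p - e)) / (2 * h))
    return np.column_stack(cols)

a = np.array([2.0, 0.3])      # a point with r != 0
Jf = jacobian(f, a)
Jg = jacobian(g, f(a))
print(np.round(Jg @ Jf, 6))   # approximately the identity matrix
```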
Cramer’s Rule
We're trying to invert a function $f:X\to\mathbb{R}^n$ which is continuously differentiable on some region $X\subseteq\mathbb{R}^n$. That is, we know that if $a$ is a point where $J_f(a)\neq0$, then there is a ball $N$ around $a$ where $f$ is one-to-one onto some neighborhood $f(N)$ around $f(a)$. Then if $y$ is a point in $f(N)$, we've got a system of equations

$$f^i(x^1,\dots,x^n)=y^i$$

that we want to solve for all the $x^j$.
We know how to handle this if $f$ is defined by a linear transformation, represented by a matrix $A=\left(a_{ij}\right)$:

$$y^i=f^i(x)=\sum_{j=1}^na_{ij}x^j$$

In this case, the Jacobian transformation is just the function $f$ itself, and so the Jacobian determinant $\det\left(a_{ij}\right)$ is nonzero if and only if the matrix $A$ is invertible. And so our solution depends on finding the inverse $A^{-1}$ and solving

$$x=A^{-1}y$$

This is the approach we'd like to generalize. But to do so, we need a more specific method of finding the inverse.
This is where Cramer's rule comes in, and it starts by analyzing the way we calculate the determinant of a matrix $A$. This formula

$$\det(A)=\sum_{\pi\in S_n}\operatorname{sgn}(\pi)a_{1\pi(1)}\cdots a_{n\pi(n)}$$

involves a sum over all the permutations $\pi$, and we want to consider the order in which we add up these terms. If we fix an index $j$, we can factor out each matrix entry in the $j$th column:

$$\det(A)=\sum_{i=1}^na_{ij}\left(\sum_{\pi(i)=j}\operatorname{sgn}(\pi)a_{1\pi(1)}\cdots\widehat{a_{i\pi(i)}}\cdots a_{n\pi(n)}\right)$$

where the hat indicates that we omit the $i$th term in the product. For a given value of $i$, we can consider the restricted sum

$$\tilde{A}_{ji}=\sum_{\pi(i)=j}\operatorname{sgn}(\pi)a_{1\pi(1)}\cdots\widehat{a_{i\pi(i)}}\cdots a_{n\pi(n)}$$

which is $(-1)^{i+j}$ times the determinant of the $i$–$j$ "minor" of the matrix $A$. That is, if we strike out the row and column of $A$ which contain $a_{ij}$ and take the determinant of the remaining $(n-1)\times(n-1)$ matrix, we multiply this by $(-1)^{i+j}$ to get $\tilde{A}_{ji}$. These are the entries in the "adjugate" matrix $\tilde{A}$.
What we've shown is that

$$\det(A)=\sum_{i=1}^n\tilde{A}_{ji}a_{ij}$$

(no summation on $j$). It's not hard to show, however, that if we use a different row from the adjugate matrix we find

$$\sum_{i=1}^n\tilde{A}_{ki}a_{ij}=0\qquad k\neq j$$

That is, the adjugate times the original matrix is the determinant of $A$ times the identity matrix. And so if $\det(A)\neq0$ we find

$$A^{-1}=\frac{1}{\det(A)}\tilde{A}$$
So what does this mean for our system of equations? We can write

$$x^i=\frac{1}{\det(A)}\sum_{j=1}^n\tilde{A}_{ij}y^j$$

But how does this sum differ from the one

$$\det(A)=\sum_{j=1}^n\tilde{A}_{ij}a_{ji}$$

we used before (without summing on $i$) to calculate the determinant of $A$? We've replaced the $i$th column of $A$ by the column vector $y$, and so this is just another determinant, taken after performing this replacement!
Here's an example. Let's say we've got a system written in matrix form

$$\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}p\\q\end{pmatrix}$$

The entry in the $i$th row and $j$th column of the adjugate matrix is calculated by striking out the $i$th column and $j$th row of our original matrix, taking the determinant of the remaining matrix, and multiplying by $(-1)^{i+j}$. We get

$$\tilde{A}=\begin{pmatrix}d&-b\\-c&a\end{pmatrix}$$

and thus we find

$$\begin{pmatrix}x\\y\end{pmatrix}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\begin{pmatrix}p\\q\end{pmatrix}$$

where we note that

$$ad-bc=\det\begin{pmatrix}a&b\\c&d\end{pmatrix}$$

In other words, our solution is given by ratios of determinants:

$$x=\frac{\det\begin{pmatrix}p&b\\q&d\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}\qquad y=\frac{\det\begin{pmatrix}a&p\\c&q\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}$$

and similar formulae hold for larger systems of equations.
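Here is a short Python sketch of Cramer's rule (my own illustration, not from the original post): solve $Ax=y$ by replacing each column of $A$ with $y$ in turn and taking ratios of determinants, then compare with a library solver. The example matrix and right-hand side are assumptions.

```python
import numpy as np

def cramer_solve(A, y):
    """Solve A x = y by Cramer's rule; requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's rule needs a nonzero determinant")
    x = np.empty(len(y))
    for i in range(len(y)):
        Ai = A.copy()
        Ai[:, i] = y          # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
y = np.array([5.0, 10.0])
print(cramer_solve(A, y))         # ratios of determinants
print(np.linalg.solve(A, y))      # same answer from the library solver
```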
Another Lemma on Nonzero Jacobians
Sorry for the late post. I didn’t get a chance to get it up this morning before my flight.
Brace yourself. Just like last time we’ve got a messy technical lemma about what happens when the Jacobian determinant of a function is nonzero.
This time we'll assume that $f$ is not only continuous, but continuously differentiable on a region $S\subseteq\mathbb{R}^n$. We also assume that the Jacobian $J_f(a)\neq0$ at some point $a\in S$. Then I say that there is some neighborhood $N$ of $a$ so that $f$ is injective on $N$.
First, we take $n$ points $z_1,\dots,z_n$ in $S$ and make a function of them

$$d(z_1,\dots,z_n)=\det\left(\frac{\partial f^i}{\partial x^j}(z_i)\right)$$

That is, we take the $j$th partial derivative of the $i$th component function and evaluate it at the $i$th sample point to make a matrix $\left(\frac{\partial f^i}{\partial x^j}(z_i)\right)$, and then we take the determinant of this matrix. As a particular value, we have

$$d(a,\dots,a)=J_f(a)\neq0$$

Since each partial derivative is continuous, and the determinant is a polynomial in its entries, this function is continuous where it's defined. And so there's some ball $N$ around $a$ so that if all the $z_i$ are in $N$ we have $d(z_1,\dots,z_n)\neq0$. We want to show that $f$ is injective on $N$.
So, let's take two points $x$ and $y$ in $N$ so that $f(x)=f(y)$. Since the ball is convex, the line segment $[x,y]$ is completely contained within $N$, and so we can bring the mean value theorem to bear. For each component function we can write

$$0=f^i(y)-f^i(x)=\sum_{j=1}^n\frac{\partial f^i}{\partial x^j}(\xi_i)\left(y^j-x^j\right)$$

for some $\xi_i$ in $[x,y]$ (no summation here on $i$). But like last time we now have a linear system of equations described by an invertible matrix. Here the matrix has determinant

$$d(\xi_1,\dots,\xi_n)$$

which is nonzero because all the $\xi_i$ are inside the ball $N$. Thus the only possible solution to the system of equations is $y^j-x^j=0$. And so if $f(x)=f(y)$ for points within the ball $N$, we must have $x=y$, and thus $f$ is injective.
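A small sketch of this construction (mine, not from the original post), assuming the concrete map $f(x,y)=(x+y^2,\;xy)$: build the matrix whose $i$th row is the gradient of $f^i$ evaluated at the $i$th sample point, and note that with all sample points equal to $a$ it recovers the ordinary Jacobian determinant.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Hypothetical map f : R^2 -> R^2.
f = [x + y**2, x * y]
variables = [x, y]

def d(sample_points):
    """Determinant of the matrix whose i-th row is grad f^i evaluated at the i-th point."""
    rows = []
    for i, fi in enumerate(f):
        zi = dict(zip(variables, sample_points[i]))
        rows.append([sp.diff(fi, v).subs(zi) for v in variables])
    return sp.Matrix(rows).det()

a = (1, 2)
print(d([a, a]))                 # equals the Jacobian determinant of f at a
print(sp.Matrix(f).jacobian(variables).subs({x: a[0], y: a[1]}).det())
print(d([(1, 2), (1.1, 2.1)]))   # still nonzero for nearby sample points
```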
A Lemma on Nonzero Jacobians
Okay, let’s dive right in with a first step towards proving the inverse function theorem we talked about at the end of yesterday’s post. This is going to get messy.
We start with a function $f:\mathbb{R}^n\to\mathbb{R}^n$ and first ask that it be continuous and injective on the closed ball $\overline{K}$ of radius $r$ around the point $a$. Then we ask that all the partial derivatives of $f$ exist within the open interior $K$ — note that this is weaker than our existence condition for the differential of $f$ — and that the Jacobian determinant $J_f(x)\neq0$ on $K$. Then I say that the image $f(\overline{K})$ actually contains a neighborhood of $f(a)$. That is, the image doesn't "flatten out" near $f(a)$.
The boundary of the ball $\overline{K}$ is the sphere of radius $r$:

$$\partial K=\left\{x\in\mathbb{R}^n\,\big\vert\,\lVert x-a\rVert=r\right\}$$

Now the Heine-Borel theorem says that this sphere, being both closed and bounded, is a compact subset of $\mathbb{R}^n$. We'll define a function on this sphere by

$$g(x)=\lVert f(x)-f(a)\rVert$$

which must be continuous and strictly positive, since if $g(x)=0$ then $f(x)=f(a)$, but we assumed that $f$ is injective on $\overline{K}$. But we also know that the image of a continuous real-valued function on a compact, connected space must be a closed interval. That is, $g(\partial K)=[m,M]$, and there exists some point $x_m$ on the sphere where this minimum is actually attained: $g(x_m)=m>0$.
Now we're going to let $T$ be the ball of radius $\frac{m}{2}$ centered at $f(a)$. We will show that $T\subseteq f(\overline{K})$, and is thus a neighborhood of $f(a)$ contained within $f(\overline{K})$. To this end, we'll pick $y\in T$ and show that $y\in f(\overline{K})$.
So, given such a point $y$, we define a new function on the closed ball $\overline{K}$ by

$$h(x)=\lVert f(x)-y\rVert$$

This function is continuous on the compact ball $\overline{K}$, so it again has an absolute minimum. I say that it happens somewhere in the interior $K$.
At the center of the ball, we have $h(a)=\lVert f(a)-y\rVert<\frac{m}{2}$ (since $y\in T$), so the minimum must be even less. But on the boundary $\partial K$, we find

$$h(x)=\lVert f(x)-y\rVert\geq\lVert f(x)-f(a)\rVert-\lVert f(a)-y\rVert>m-\frac{m}{2}=\frac{m}{2}$$

so the minimum can't happen on the boundary. So this minimum of $h$ happens at some point $x_0$ in the open ball $K$, and so does the minimum of the square of $h$:

$$h(x)^2=\lVert f(x)-y\rVert^2=\sum_{i=1}^n\left(f^i(x)-y^i\right)^2$$

Now we can vary each component $x^j$ of $x$ separately, and use Fermat's theorem to tell us that the derivative in terms of $x^j$ must be zero at the minimum value $x_0$. That is, each of the partial derivatives of $h^2$ must be zero (we'll come back to this more generally later):

$$\frac{\partial}{\partial x^j}\left[h^2\right](x_0)=\sum_{i=1}^n2\left(f^i(x_0)-y^i\right)\frac{\partial f^i}{\partial x^j}(x_0)=0$$
This is the product of the vector $2\left(f^i(x_0)-y^i\right)$ by the matrix $\left(\frac{\partial f^i}{\partial x^j}(x_0)\right)$. And the determinant of this matrix is $J_f(x_0)$: the Jacobian determinant at $x_0$, which we assumed to be nonzero way back at the beginning! Thus the matrix must be invertible, and the only possible solution to this system of equations is for $f^i(x_0)-y^i=0$, and so $y=f(x_0)\in f(\overline{K})$.
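To see the minimization step in action (my own sketch, not part of the original post), take the assumed map $f(x,y)=(x+y^2,\;y+x^2)$, whose Jacobian determinant is nonzero on a small ball around the origin, pick a target point close to $f(0,0)=(0,0)$, and minimize $\lVert f(x)-y\rVert^2$ over the closed ball with an off-the-shelf constrained optimizer: the minimizer sits in the interior and the minimum value is (numerically) zero, so the target is hit.

```python
import numpy as np
from scipy.optimize import minimize

def f(p):
    """Hypothetical map with nonzero Jacobian determinant near the origin."""
    x, y = p
    return np.array([x + y**2, y + x**2])

a = np.array([0.0, 0.0])
target = np.array([0.05, -0.03])        # a point near f(a) = (0, 0)

# Minimize |f(x) - target|^2 over the closed ball of radius 0.5 around a.
objective = lambda p: np.sum((f(p) - target)**2)
constraint = {'type': 'ineq', 'fun': lambda p: 0.5 - np.linalg.norm(p - a)}
result = minimize(objective, a, constraints=[constraint])

print("minimizer:", result.x)                    # lies well inside the ball
print("distance from center:", np.linalg.norm(result.x - a))
print("f(minimizer):", f(result.x))              # approximately the target point
```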