## The Gradient Vector

The most common first approach to differential calculus in more than one variable starts by defining partial derivatives and directional derivatives, as we did. But instead of defining the differential, it simply collects the partial derivatives together as the components of a vector called the gradient of $f$, written $\nabla f$.

We showed that these partial derivatives are the components of the differential (when it exists), and so there should be some connection between the two concepts. And indeed there is.

As a bilinear form, our inner product defines an isomorphism from the space of displacements to its dual space, sending a vector $v$ to the linear functional $\langle v,\cdot\rangle$. This isomorphism sends the basis vector $e_i$ to the dual basis vector $\eta^i$, since we can check both

$$\langle e_i,e_i\rangle=1=\eta^i(e_i)$$

and

$$\langle e_i,e_j\rangle=0=\eta^i(e_j)\qquad(j\neq i)$$

That is, the linear functional $\langle e_i,\cdot\rangle$ is the same as the linear functional $\eta^i$.

So under this isomorphism the differential $df(x)$ corresponds to the vector

$$\nabla f(x)=\frac{\partial f}{\partial x^i}\bigg\vert_xe_i$$

We can remove the function from this notation to write the operator on its own as

$$\nabla=e_i\frac{\partial}{\partial x^i}$$

We also write the gradient vector at a given point as $\nabla f(x)$, where we have to remember to parse this as evaluating the function $\nabla f$ at the point $x$ rather than as applying the operator $\nabla$ to the value $f(x)$.

Now, under our approach the differential is more fundamental and more useful than the gradient vector. However, there is some meaning to the geometric interpretation of the gradient as a displacement vector.

First of all, let’s ask that $u$ be a unit vector. Then we can calculate the directional derivative $\left[D_uf\right](x)=df(x)(u)$. But the linear functional $df(x)$ given by the differential is the same as the linear functional $\langle\nabla f(x),\cdot\rangle$. Thus we also find that $\left[D_uf\right](x)=\langle\nabla f(x),u\rangle$. And we can interpret this inner product in terms of the length of $\nabla f(x)$ and the angle $\theta$ between $\nabla f(x)$ and $u$:

$$\left[D_uf\right](x)=\langle\nabla f(x),u\rangle=\lVert\nabla f(x)\rVert\lVert u\rVert\cos\theta=\lVert\nabla f(x)\rVert\cos\theta$$

since the length of $u$ is automatically $1$.

The cosine term has a maximum value of $1$ when $u$ points in the same direction as $\nabla f(x)$, so that $\theta=0$. That is, the direction that gives us the largest directional derivative is the direction of $\nabla f(x)$, and the rate at which the function increases in that direction is the length of the gradient $\lVert\nabla f(x)\rVert$.

So for most purposes we’ll stick to using the differential, but in practice it’s often useful to think of the gradient vector to get some geometric intuition about what the differential means.
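To make the steepest-ascent property concrete, here is a small numeric sketch. The function $f(x,y)=x^2+3y$ and the point $(1,2)$ are illustrative choices, not from the text: we scan unit vectors $u$ and check that the largest directional derivative occurs in the direction of the gradient, with value $\lVert\nabla f\rVert$.

```python
import math

# A hypothetical test function f(x, y) = x^2 + 3y with gradient (2x, 3).
def f(x, y):
    return x**2 + 3*y

def grad_f(x, y):
    return (2*x, 3.0)

def directional_derivative(x, y, ux, uy, h=1e-6):
    # Central-difference approximation of D_u f at (x, y) for a unit vector u.
    return (f(x + h*ux, y + h*uy) - f(x - h*ux, y - h*uy)) / (2*h)

p = (1.0, 2.0)
gx, gy = grad_f(*p)
norm = math.hypot(gx, gy)  # ||grad f(p)|| = sqrt(13)

# Scan unit vectors u = (cos t, sin t); the largest D_u f should occur
# when u points along the gradient, with value ||grad f||.
best = max(
    (directional_derivative(*p, math.cos(t), math.sin(t)), t)
    for t in [2*math.pi*k/3600 for k in range(3600)]
)
print(round(best[0], 3), round(norm, 3))  # both approximately 3.606
```

The scan never beats the gradient direction; the maximum directional derivative agrees with the length of the gradient up to the angular grid resolution.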

## Examples and Notation

Okay, let’s do some simple examples of differentials, which will lead to some notational “syntactic sugar”.

First of all, if we pick an orthonormal basis $\{e_i\}$ we can write any point as $x=x^ie_i$. This gives us $n$ nice functions to consider: $x^i$ is the function that takes a point and returns its $i$th coordinate. This is actually a sort of subtle point that’s important to consider deeply. We’re used to thinking of $x^i$ as a variable, which stands in for some real number. I’m saying that we want to consider it as a function in its own right. In a way, this is just extending what we did when we considered polynomials as functions, and we can do everything algebraically with abstract “variables” just as we can with specific “functions” as our $x^i$.

Analytically, though, we can ask how the function $x^i$ behaves as we move our input point around. It’s easy to find the partial derivatives. If $j\neq i$ then

$$\left[D_{e_j}x^i\right](x)=\lim\limits_{t\to0}\frac{x^i(x+te_j)-x^i(x)}{t}=\lim\limits_{t\to0}\frac{0}{t}=0$$

since moving in the $e_j$ direction doesn’t change the $i$th component. On the other hand, if $j=i$ then

$$\left[D_{e_i}x^i\right](x)=\lim\limits_{t\to0}\frac{x^i(x+te_i)-x^i(x)}{t}=\lim\limits_{t\to0}\frac{t}{t}=1$$

since moving a distance $t$ in the $e_i$ direction adds exactly $t$ to the $i$th component. That is, we can write $\left[D_{e_j}x^i\right](x)=\delta_j^i$, the Kronecker delta.

Of course, since $0$ and $1$ are both constant, they’re clearly continuous everywhere. Thus by the condition we worked out yesterday the differential of $x^i$ exists, and we find

$$d(x^i)(x)(v)=\left[D_{e_j}x^i\right](x)\,v^j=\delta_j^iv^j=v^i$$

We can also write the differential as a linear functional $dx^i$. Since this takes a vector and returns its $i$th component, it is exactly the dual basis element $\eta^i$. That is, once we pick an orthonormal basis $\{e_i\}$ for our vector space of displacements, we can actually write the dual basis of linear functionals as the differentials $dx^i$. And from now on that’s exactly what we’ll do.
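A quick numeric check of the computation above, using central differences in pure Python (the sample point is an arbitrary choice): the partial derivatives of the coordinate functions come out as the Kronecker delta.

```python
# Numeric check that the coordinate functions x^i have partial
# derivatives given by the Kronecker delta.
def coordinate(i):
    # x^i as a function in its own right: point -> i-th component.
    return lambda p: p[i]

def partial(fn, p, j, h=1e-6):
    # Central-difference approximation of the j-th partial derivative at p.
    plus = list(p); plus[j] += h
    minus = list(p); minus[j] -= h
    return (fn(plus) - fn(minus)) / (2*h)

p = [0.3, -1.2, 2.7]  # an arbitrary sample point
deltas = [[round(partial(coordinate(i), p, j)) for j in range(3)]
          for i in range(3)]
print(deltas)  # the identity pattern: [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```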

So, for example, let’s say we’ve got a differentiable function $f:\mathbb{R}^n\rightarrow\mathbb{R}$. Then we can write its differential as a linear functional

$$df(x)(v)=\left[D_{e_i}f\right](x)\,v^i=\left[D_{e_i}f\right](x)\,dx^i(v)$$

In the one-dimensional case, we write $df=f'(x)\,dx$, leading us to the standard Leibniz notation

$$f'(x)=\frac{df}{dx}$$

If we have to evaluate this function, we use an “evaluation bar” $\frac{df}{dx}\Big\vert_{x=a}$, or $\frac{df}{dx}\Big\vert_a$, telling us to substitute $a$ for $x$ in the formula for $\frac{df}{dx}$. We can also write the operator that takes in a function and returns its derivative by simply removing the function from this Leibniz notation: $\frac{d}{dx}$.

Now when it comes to more than one variable, we can’t just “divide” $df$ by one of the differentials $dx^i$, but we’re going to use something like this notation to read off the coefficients anyway. In order to remind us that we’re not really dividing, and that there are other variables floating around, we replace the $d$ with a curly version: $\partial$. Then we can write the partial derivative

$$\left[D_{e_i}f\right](x)=\frac{\partial f}{\partial x^i}\bigg\vert_x$$

and the whole differential as

$$df=\frac{\partial f}{\partial x^i}\,dx^i$$

Notice here that when we see an upper index in the denominator of this notation, we consider it to be a *lower* index. Similarly, if we find a lower index in the denominator, we’ll consider it to be like an *upper* index for the purposes of the summation convention. We can even incorporate evaluation bars

$$df\big\vert_a=\frac{\partial f}{\partial x^i}\bigg\vert_a\,dx^i$$

or strip out the function altogether to write the “differential operator”

$$d=dx^i\frac{\partial}{\partial x^i}$$
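As a sketch of how this notation cashes out computationally (the function $f$ and the point below are hypothetical choices for illustration): the differential at a point is a linear functional whose coefficients are the partial derivatives, and applying it to the basis vector $e_j$ reads off the coefficient of $dx^j$.

```python
import math

# An illustrative function f(x, y) = x*y + sin(x), not from the text.
def f(p):
    x, y = p
    return x*y + math.sin(x)

def partials(p, h=1e-6):
    # Central-difference approximations of the partial derivatives at p.
    x, y = p
    return [(f([x + h, y]) - f([x - h, y])) / (2*h),
            (f([x, y + h]) - f([x, y - h])) / (2*h)]

def df(p):
    # df = (partial f / partial x^i) dx^i as a linear functional:
    # dx^i is the dual-basis functional v -> v^i, so df(v) = sum_i c_i v^i.
    coeffs = partials(p)
    return lambda v: sum(c * vi for c, vi in zip(coeffs, v))

p = [1.0, 2.0]
functional = df(p)
# Applying df to the basis vector e_j reads off the coefficient of dx^j,
# i.e. the partial derivative of f with respect to x^j at p.
print(round(functional([1, 0]), 4))  # y + cos(x) at p, about 2.5403
print(round(functional([0, 1]), 4))  # x at p, about 1.0
```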
## An Existence Condition for the Differential

To this point we’ve seen what happens when a function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ does have a differential at a given point $x$, but we haven’t yet seen any conditions that tell us that such a differential exists. We know from the uniqueness proof that if it *does* exist, then given an orthonormal basis $\{e_i\}$ we have all partial derivatives, and the differential must be given by the formula

$$df(x;t)=\frac{\partial f}{\partial x^i}\bigg\vert_xt^i$$

where $\frac{\partial f}{\partial x^i}\big\vert_x$ is the partial derivative of $f$ in the $i$th coordinate direction, evaluated at $x$. This is clearly linear in the displacement $t$, so all that remains is to see whether the inequality in the definition of the differential can be satisfied.

We must have all partial derivatives to write down this formula, but their mere existence can’t be sufficient for differentiability, because if it were then having all partial derivatives would imply continuity, and we know that it doesn’t. What *will* be sufficient is to ask that all the partial derivatives not only exist at $x$, but exist in some neighborhood of $x$ and are continuous at $x$ itself. Note, though, that I’m not asserting that this condition is necessary for a function to be differentiable. Indeed, it’s possible to construct functions which are differentiable at $x$, but whose partial derivatives are not continuous at $x$. This is an example of the way that analysis tends to be shot through with “counterexamples”, as Michael was talking about recently.

Okay, so let’s assume that all these partial derivatives exist and are continuous at $x$. We have to show that for any $\epsilon>0$ there is some $\delta>0$ so that if $0<\lVert t\rVert<\delta$ we have the inequality

$$\left\lvert f(x+t)-f(x)-\frac{\partial f}{\partial x^i}\bigg\vert_xt^i\right\rvert<\epsilon\lVert t\rVert$$

We’re going to take the difference $f(x+t)-f(x)$ and break it into $n$ terms, each of which will approximate one of the partial derivative terms.

First off, since each $\frac{\partial f}{\partial x^i}$ is continuous at $x$, there is some $\delta$ so that if $\lVert y-x\rVert<\delta$ then $\left\lvert\frac{\partial f}{\partial x^i}\big\vert_y-\frac{\partial f}{\partial x^i}\big\vert_x\right\rvert<\frac{\epsilon}{n}$. In fact, there’s a separate $\delta_i$ for each index $i$, but we can just take the smallest of them all, and that one will work for each index. From this point on, we’ll assume that $\lVert t\rVert$ is actually less than this $\delta$. We’ll write $t=\lambda u$, where $u$ is a unit vector and $\lambda$ is a scalar so that $0<\lambda<\delta$. We’ll also write $u=u^ie_i$ in terms of our orthonormal basis $\{e_i\}$.

Now we can build up our displacement direction $u$ step-by-step as a sequence of vectors $v_0=0$, $v_1$, and so on up to $v_n=u$, stepping in the $i$th direction on the $i$th step: $v_i=v_{i-1}+u^ie_i$ (not summing on $i$ here). So we can break up the difference of function values as

$$f(x+t)-f(x)=\sum\limits_{i=1}^n\left[f(x+\lambda v_i)-f(x+\lambda v_{i-1})\right]$$

So now each step only changes the $i$th coordinate, and the points at each end both lie within the ball of radius $\delta$ around $x$, since each $v_i$ is no longer than $u$, which has unit length. To look closer at the step from $x+\lambda v_{i-1}$ to $x+\lambda v_i$, we introduce a new function of one real variable:

$$g(s)=f(x+\lambda v_{i-1}+se_i)$$

for $s$ between $0$ and $\lambda u^i$. This lets us write our step as $f(x+\lambda v_i)-f(x+\lambda v_{i-1})=g(\lambda u^i)-g(0)$. It turns out that everywhere in this closed interval, the function $g$ is differentiable! Indeed, we have

$$\frac{g(s+h)-g(s)}{h}=\frac{f\left(x+\lambda v_{i-1}+(s+h)e_i\right)-f\left(x+\lambda v_{i-1}+se_i\right)}{h}$$
So as $h$ goes to zero, we find $g'(s)=\frac{\partial f}{\partial x^i}\big\vert_{x+\lambda v_{i-1}+se_i}$, which exists because we’re in a small enough ball around $x$. Now the mean value theorem can be brought to bear, which says

$$g(\lambda u^i)-g(0)=\lambda u^i\,g'(s_i)=\lambda u^i\frac{\partial f}{\partial x^i}\bigg\vert_{x+\lambda v_{i-1}+s_ie_i}$$

for some $s_i$ between $0$ and $\lambda u^i$. And now the difference of function values can be written

$$f(x+t)-f(x)=\sum\limits_{i=1}^n\lambda u^i\frac{\partial f}{\partial x^i}\bigg\vert_{x+\lambda v_{i-1}+s_ie_i}=\sum\limits_{i=1}^nt^i\frac{\partial f}{\partial x^i}\bigg\vert_{x+\lambda v_{i-1}+s_ie_i}$$

since $\lambda u^i=t^i$.
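The telescoping decomposition above is easy to see numerically. The function and displacement below are arbitrary illustrative choices: stepping from one point to another one coordinate at a time, the single-coordinate differences sum to the full difference of function values.

```python
import math

# An arbitrary illustrative function of three variables.
def f(p):
    return math.exp(p[0]) * p[1] + p[2]**2

x = [0.1, 0.5, -0.3]
t = [0.01, -0.02, 0.03]

# Intermediate points x + lambda*v_0, ..., x + lambda*v_n:
# change one coordinate per step.
points = [x[:]]
for i in range(3):
    q = points[-1][:]
    q[i] += t[i]
    points.append(q)

# Each term changes only the i-th coordinate; the sum telescopes.
steps = [f(points[i + 1]) - f(points[i]) for i in range(3)]
total = f([xi + ti for xi, ti in zip(x, t)]) - f(x)
print(abs(sum(steps) - total) < 1e-12)  # True: the steps telescope
```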

Now $\lVert\lambda v_{i-1}+s_ie_i\rVert<\delta$, so each of these partial derivatives is evaluated at a point within $\delta$ of $x$, and thus each of these differences of partial derivative evaluations is less than $\frac{\epsilon}{n}$. And so

$$\begin{aligned}\left\lvert f(x+t)-f(x)-\frac{\partial f}{\partial x^i}\bigg\vert_xt^i\right\rvert&=\left\lvert\sum\limits_{i=1}^nt^i\left(\frac{\partial f}{\partial x^i}\bigg\vert_{x+\lambda v_{i-1}+s_ie_i}-\frac{\partial f}{\partial x^i}\bigg\vert_x\right)\right\rvert\\&\leq\sum\limits_{i=1}^n\lvert t^i\rvert\left\lvert\frac{\partial f}{\partial x^i}\bigg\vert_{x+\lambda v_{i-1}+s_ie_i}-\frac{\partial f}{\partial x^i}\bigg\vert_x\right\rvert\\&<\sum\limits_{i=1}^n\lVert t\rVert\frac{\epsilon}{n}=\epsilon\lVert t\rVert\end{aligned}$$

which establishes the inequality we need.
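Finally, a numeric sketch of the inequality just established (the function, point, and direction below are illustrative assumptions): for a function with continuous partials, the error of the linear approximation shrinks faster than $\lVert t\rVert$, so the ratio of error to $\lVert t\rVert$ goes to zero.

```python
import math

# An illustrative smooth function with continuous partial derivatives.
def f(p):
    return math.sin(p[0]) * math.exp(p[1])

def partials(p, h=1e-7):
    # Central-difference approximations of the partial derivatives at p.
    x, y = p
    return [(f([x + h, y]) - f([x - h, y])) / (2*h),
            (f([x, y + h]) - f([x, y - h])) / (2*h)]

x = [0.4, -0.2]
grad = partials(x)
u = [0.6, 0.8]  # a unit direction, so ||t|| = lambda below

ratios = []
for lam in [1e-1, 1e-2, 1e-3]:
    t = [lam * ui for ui in u]
    # Error of the linear approximation f(x) + (partial_i f) t^i.
    err = abs(f([a + b for a, b in zip(x, t)]) - f(x)
              - sum(g * ti for g, ti in zip(grad, t)))
    ratios.append(err / lam)

print(all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:])))  # True: ratios shrink
```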