## Partial Derivatives

Okay, we want to move towards some analogue of the derivative that applies to functions of more than one variable. For the moment we’ll stick to single real outputs. As a goal, we want “differentiability” to be a refinement of the idea of smoothness we started building with “continuity”, so an important check is that it’s a stronger condition. That is, a differentiable function should be continuous.

For functions with a single real input we defined the derivative of the function $f$ at the point $x$ by the limit of the difference quotient

$\displaystyle f'(x) = \lim\limits_{h\to 0} \frac{f(x+h) - f(x)}{h}$
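As a quick numerical illustration (a sketch of the definition, with an example function of my own choosing, not from the post), the difference quotient really does settle down to the derivative as the step shrinks:

```python
# Numerical sketch of the single-variable difference quotient:
# f'(x) is the limit of (f(x + h) - f(x)) / h as h -> 0.

def diff_quotient(f, x, h):
    """One-sided difference quotient of f at x with step h."""
    return (f(x + h) - f(x)) / h

# Illustrative example: f(x) = x**3, whose derivative at x = 2 is 12.
f = lambda x: x**3
for h in (0.1, 0.01, 0.001):
    print(h, diff_quotient(f, 2.0, h))
# The quotients approach 12 as h shrinks.
```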

The problem here is that for vector inputs we can’t “divide” by the vector $h$. So we need some other way around this problem.

Our first attempt may be familiar from calculus classes: we’ll just look at one variable at a time. That is, if we have a function $f$ of $n$ real variables and we keep all of them fixed except the $i$th one, we can try to take the limit

$\displaystyle \frac{\partial f}{\partial x_i}(a_1,\dots,a_n) = \lim\limits_{t\to 0} \frac{f(a_1,\dots,a_i+t,\dots,a_n) - f(a_1,\dots,a_n)}{t}$

That is, we fix the values of all the other variables and get a function of the single remaining variable. We then take the single-variable derivative as normal.
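In code, this “freeze everything but one variable” recipe is just a one-variable difference quotient taken in the chosen coordinate. A minimal sketch (the helper `partial_derivative` and the test function are my own illustrations, not from the post):

```python
def partial_derivative(f, a, i, h=1e-6):
    """Approximate the i-th partial derivative of f at the point a
    by a central difference taken in the i-th coordinate only."""
    a_plus, a_minus = list(a), list(a)
    a_plus[i] += h   # nudge only the i-th coordinate forward
    a_minus[i] -= h  # and backward; all other coordinates stay fixed
    return (f(*a_plus) - f(*a_minus)) / (2 * h)

# Illustrative example: f(x, y) = x**2 * y has df/dx = 2xy and df/dy = x**2.
f = lambda x, y: x**2 * y
print(partial_derivative(f, (3.0, 2.0), 0))  # close to 2*3*2 = 12
print(partial_derivative(f, (3.0, 2.0), 1))  # close to 3**2 = 9
```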

The first problem here is that having these partial derivatives (even having a partial derivative for each variable) doesn’t make a function continuous. Let’s look at the first pathological example of a limit we discussed:

$\displaystyle f(x,y) = \frac{x^2 - y^2}{x^2 + y^2}$

If we consider the point $(0,0)$, we can calculate both partial derivatives here. First we fix $y=0$ and find $f(x,0)=1$. Thus it’s easy to check that $\frac{\partial f}{\partial x}(0,0)=0$. Similarly, we can fix $x=0$ to find $f(0,y)=-1$, and thus that $\frac{\partial f}{\partial y}(0,0)=0$. So both partial derivatives exist at $(0,0)$, but the function doesn’t even have a limit there, much less one which equals its value.

The problem is the same one we saw in the case of multivariable limits: we can’t take a limit as one input point approaches another along a single path and just blithely expect that it’s going to mean anything. Here we’re just picking out two paths towards the same point and establishing that the function is continuous when we restrict to those paths, which doesn’t establish continuity in general.
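To see the failure numerically, here is a short check using $f(x,y) = (x^2-y^2)/(x^2+y^2)$ as a stand-in for the pathological example (my reading of the function discussed above): it is constant along each axis, so each axis restriction is flat, yet the diagonal gives a different limiting value.

```python
def f(x, y):
    """Stand-in for the pathological example; undefined at (0, 0)."""
    return (x**2 - y**2) / (x**2 + y**2)

# Sample along three straight-line paths into the origin.
for t in (0.1, 0.01, 0.001):
    print(f(t, 0.0), f(0.0, t), f(t, t))
# Along the x-axis f is identically 1, along the y-axis identically -1,
# and along the diagonal identically 0: no single limit at the origin.
```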

There’s a deeper problem with partial derivatives, though. Implicit in the whole set-up is *choosing a basis* of our space. To write $f$ as a function of $n$ real variables instead of one $n$-dimensional vector variable means picking a basis. In practice we often have no problem with this. Indeed, many problems come to us in terms of a collection of variables which we bind together to make a single vector variable. But in principle, anything with any geometric meaning should be independent of artificial choices of coordinates. We can’t even *talk* about partial derivatives without making such a choice, and so they clearly don’t get to the heart of any sensible notion of “differentiability”.

“We can’t even talk about partial derivatives without making such a choice, and so they clearly don’t get to the heart of any sensible notion of “differentiability”.”

Well, now I want to know how we can talk about differentiability.

Comment by David | September 21, 2009 |

Patience, grasshopper. We’ve got one more blind alley before we get to the right answer.

Comment by John Armstrong | September 21, 2009 |

So according to the definition of partial derivative you need a value for f(0,0) to compute its partial derivatives at (0,0). But you do not have any such value.

Comment by Johan Richter | September 29, 2009 |

Sorry, Johan. Easiest way around is to patch it at that point by defining $f(0,0)=0$. Then the partial derivatives exist, but the limit doesn’t. I did something similar when talking about directional derivatives.

Comment by John Armstrong | September 29, 2009 |

With that definition I do not think either limit defining the partial derivatives exist.

For the x-derivative, for example, I get:

$\displaystyle \lim\limits_{t\to 0} \frac{f(t,0) - f(0,0)}{t} = \lim\limits_{t\to 0} \frac{1}{t}$

which does not exist.

Comment by Johan Richter | September 29, 2009 |

Okay, Johan, here’s one that’s a lot more hacky but should finally settle this: let $f(x,y)=0$ if either $x$ or $y$ is $0$, and $f(x,y)=1$ otherwise. Then both partials exist but the limit doesn’t.

Or go back to the original function and use the definition of the derivative that takes sample points approaching the given point from either side, which is a common alternative when the function fails to exist at the point in question.

Enough.

Comment by John Armstrong | September 29, 2009 |