With Clairaut’s theorem comes the first common example of a smoothness assumption. It’s a good time to say just what I mean by this.
Let’s look at an open region $U\subseteq\mathbb{R}^n$. We can now define a tower of algebras of functions on this set. We start by setting out the real-valued functions which are continuous at each point of $U$, and write this as $C^0(U)$. It’s an algebra under pointwise addition and multiplication of functions.
Next we consider those functions which have all partial derivatives at every point of $U$, and whose partial derivatives are themselves continuous throughout $U$. We’ve seen that this implies that such a function has a differential at each point of $U$. This gives us a subalgebra of $C^0(U)$ which we write as $C^1(U)$. That is, these functions have “one continuous derivative”, or are “once (continuously) differentiable”.
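We can see this numerically. Here is a small sketch (the function $f(x,y)=x^2y$ is my own illustration, not one from the text): since $f$ has continuous partials, it is differentiable, so the error of the linear approximation $f(p)+\nabla f(p)\cdot h$ should shrink faster than $|h|$ does.

```python
# f(x, y) = x^2 y has continuous partial derivatives everywhere,
# so it has a differential; we check that the linearization error
# divided by |h| tends to 0.
def f(x, y):
    return x * x * y

def grad_f(x, y):
    return (2.0 * x * y, x * x)

px, py = 1.0, 2.0
gx, gy = grad_f(px, py)
ratios = []
for h in (1e-1, 1e-2, 1e-3):
    err = f(px + h, py + h) - f(px, py) - (gx * h + gy * h)
    ratios.append(err / h)  # err/|h| -> 0 confirms differentiability
print(ratios)
```

For this $f$ the ratio works out to $4h + h^2$ exactly, so each shrink of $h$ by a factor of ten shrinks the ratio by (roughly) the same factor.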
Continuing on, we consider those functions which have all second partial derivatives, with these second partials themselves continuous at each point of $U$. Clairaut’s theorem tells us that the mixed second partials are equal, since they’re both continuous, and so we can define the second differential. These functions form a subalgebra of $C^1(U)$ (and thus a further subalgebra of $C^0(U)$) which we write as $C^2(U)$. These functions have “two continuous derivatives”, or are “twice (continuously) differentiable”.
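Clairaut’s theorem is easy to watch in action symbolically. A quick sketch with sympy (the particular test function is my own pick, not one from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
# An arbitrary C^2 function to test with:
f = sp.exp(x * y) * sp.sin(x + y**2)

fxy = sp.diff(f, x, y)  # differentiate in x, then in y
fyx = sp.diff(f, y, x)  # differentiate in y, then in x
print(sp.simplify(fxy - fyx))  # 0: the mixed partials agree
```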
From here it’s clear how to proceed, defining functions with higher and higher differentials. We get algebras $C^3(U)$, $C^4(U)$, and so on. We can even define the algebra $C^\infty(U)$ of infinitely differentiable functions to be the limit (in the categorical sense) of this process. It consists of all the functions that are in $C^k(U)$ for every natural number $k$. Taking any directional derivative (with a constant direction) of a function in $C^k(U)$ lands us in $C^{k-1}(U)$, although such differentiation sends $C^\infty(U)$ back into itself.
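To see that differentiation really does step down the tower, consider $|x|^3$ (a standard example, though not one named in the text): it lies in $C^2$ but not $C^3$, since its third derivative jumps at the origin. A sketch with sympy, writing the function piecewise so each branch differentiates cleanly:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
# |x|^3, written piecewise: x^3 for x >= 0 and -x^3 for x < 0.
f = sp.Piecewise((x**3, x >= 0), (-x**3, True))

f2 = sp.diff(f, x, 2)  # 6|x|: continuous, so f is in C^2
f3 = sp.diff(f, x, 3)  # +6 on one side of 0, -6 on the other
print(f3.subs(x, -1), f3.subs(x, 1))  # -6 and 6: no limit at 0, so f is not in C^3
```

Each derivative of this function sits one rung lower: $f\in C^2$, $f'\in C^1$, $f''\in C^0$, and $f'''$ fails to exist at the origin.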
Is this the end? Not quite. Just as in one variable, we have analytic functions. Once we’re in $C^\infty(U)$ and we have all higher derivatives, we can use Taylor’s theorem to write out the Taylor series of our function. But this series may or may not converge back to the function itself. If it does at every point of $U$ we say that the function is analytic on $U$. The collection of all analytic functions forms a subalgebra of $C^\infty(U)$ which we write as $C^\omega(U)$.
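The classic function that is infinitely differentiable but not analytic (standard, though not named in the text) is $g(x)=e^{-1/x^2}$, extended by $g(0)=0$. Every derivative of $g$ vanishes at the origin, so its Taylor series at $0$ is identically zero, yet $g$ is nonzero at every other point; the series converges, but not back to $g$. A sympy sketch:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
g = sp.exp(-1 / x**2)

# The limit of each derivative as x -> 0 is 0, since the exponential
# decay beats every power of 1/x.
derivs_at_0 = [sp.limit(sp.diff(g, x, k), x, 0, '+') for k in range(4)]
print(derivs_at_0)  # [0, 0, 0, 0]
```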
It’s interesting to observe that at each step, “most” functions fail to fall into the finer subalgebra, just as “most” points in the $x$-$y$ plane are not on the $x$-axis. An arbitrary function selected from $C^k(U)$ will actually lie within $C^{k+1}(U)$ with probability zero. An arbitrary infinitely differentiable function is analytic with probability zero. Pretty much every example we show students in calculus classes is infinitely differentiable, if not analytic, and yet such functions make up a vanishingly small portion of even the once-differentiable functions.
So, what does it mean to say that a function is “smooth”? It’s often said to mean that a function is in $C^\infty$, that is, that it has derivatives of all orders. But in practice, this seems to be just a convenience. What “smooth” actually means is a subtler point.
Let’s say I’m working in some situation where I’m going to be taking first and second partial derivatives of a function $f$ in a region $U$, and I’m going to want the mixed partials to commute by Clairaut’s theorem. If I say that $f$ is infinitely differentiable on $U$, this will certainly do the trick. But I’ve excluded a huge number of functions. All I really need is for $f$ to fall into $C^2(U)$, of which $C^\infty(U)$ is an incredibly tiny subalgebra.
In practice, then, “smooth” effectively means “has enough derivatives to do what I want with it”. It’s a way of saying that we understand that it’s possible to come up with pathological cases which break the theorem we’re stating, but as long as we have sufficiently many derivatives (where “sufficiently many” is some fixed natural number we don’t care to work out in detail) the pathological cases can be excluded. Saying that “smooth” means “infinitely differentiable” accomplishes this goal, and it’s usually easier than trying to stomach the idea that “smoothness” is a highly context-dependent term of art rather than a nice, well-defined mathematical concept.