The Unapologetic Mathematician

Mathematics for the interested outsider


Okay, so we’ve got one of our real-valued functions defined on some domain D\subseteq\mathbb{R}: f:D\rightarrow\mathbb{R}. Let’s start analyzing it!

We start with some point x_0\in D, and we can crank out the value the function takes at that point: f(x_0). What we want to understand is how the value of the function changes as we change x. More specifically, we want to understand how it changes as we vary our input continuously. Of course, “continuous” means we’re just moving around a little bit in some neighborhood of the point we started with, and neighborhoods in \mathbb{R} basically come down to open intervals. So let’s just assume that our domain D is an open interval containing the point we’re looking at. If D already contains an open interval around the point, we can just restrict to it; and if it doesn’t contain a neighborhood of our point, then we can’t vary the input continuously, so we aren’t interested in that case.

The simplest sort of function is just a constant f(x)=c. In this case, the value doesn’t change. That’s what it means to be constant! A little more complex is a linear function f(x)=ax+b for real numbers a and b. Then if we move our point over a bit by adding an amount \Delta x to it our function takes the value

f(x_0+\Delta x)=a(x_0+\Delta x)+b = (ax_0+b)+a\Delta x = f(x_0)+a\Delta x

That is, adding \Delta x to our input adds the constant multiple a\Delta x to our output. It’s easy to understand how this sort of function changes as we change the input. We can characterize this behavior by calling the change in the output \Delta f=a\Delta x, and considering the constant \frac{\Delta f}{\Delta x}=a.
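To see this numerically, here’s a quick Python sketch (the slope, intercept, and base point are just illustrative choices, not from the discussion above): for a linear function, the ratio \frac{\Delta f}{\Delta x} comes out to the slope a no matter which \Delta x we pick.

```python
# For a linear function f(x) = a*x + b, the ratio Δf/Δx is always
# the slope a, whatever Δx we choose (up to floating-point rounding).
a, b = 3.0, 2.0          # illustrative slope and intercept
f = lambda x: a * x + b

x0 = 1.0
for dx in (0.5, 0.1, -0.25):
    df = f(x0 + dx) - f(x0)   # the change in the output
    print(df / dx)            # ≈ 3.0 every time
```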

Now, let’s consider an arbitrary continuous function. We can still tweak our input by adding \Delta x to it, and now we get a new output f(x_0+\Delta x). Subtracting off f(x_0) we get the change in the output: \Delta f=f(x_0+\Delta x)-f(x_0). This won’t in general be a constant like it was for the linear functions above: if we pick different values for \Delta x we may get different values for \Delta f. But we can still ask how the changes in the input and output are related by calculating the “difference quotient” \frac{\Delta f}{\Delta x}. This gives us a function of the amount by which we changed our input.
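To make that dependence on \Delta x concrete, here’s a small sketch (the quadratic and the base point x_0=1 are my own illustrative picks): for f(x)=x^2 at x_0=1 the difference quotient works out to 2+\Delta x, so different choices of \Delta x really do give different ratios.

```python
# The difference quotient of an arbitrary function is itself a
# function of Δx.  For f(x) = x**2 at x0 = 1 it equals 2 + Δx,
# so it is no longer a single constant.
def difference_quotient(f, x0, dx):
    return (f(x0 + dx) - f(x0)) / dx

f = lambda x: x * x
for dx in (1.0, 0.5, 0.1):
    print(difference_quotient(f, 1.0, dx))  # ≈ 3.0, 2.5, 2.1
```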

Let’s look back at the difference quotient for a linear function: \frac{\Delta f}{\Delta x}=\frac{a\Delta x}{\Delta x}=a. But it’s not really the constant function a! There’s a hole in the function at \Delta x=0, which we can patch by taking the limit \lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}. Since the difference quotient is a everywhere around the hole, the limit exists and equals a.

There’s also a hole at \Delta x=0 in all our difference quotient functions, and we’d love to patch them up by taking a limit just like we did above. But can we always do this? Look at the function f(x)=|x| near x=0. For positive inputs the function just gives the input back again, for negative inputs it gives back the negative of the input, and at zero it gives back zero again. So let’s look at \frac{f(0+\Delta x)-f(0)}{\Delta x}=\frac{|\Delta x|}{\Delta x}. When \Delta x is positive this is 1, when \Delta x is negative this is -1, and of course there’s a hole at \Delta x=0. But now we see that there’s no limit as \Delta x approaches zero, since the image of a sequence approaching from the left converges to -1, while the image of one approaching from the right converges to 1. Since they don’t agree, we can’t unambiguously patch the hole.
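Here’s a quick numerical look at the same phenomenon (the helper name is just for illustration): the quotient \frac{|\Delta x|}{\Delta x} sits at +1 for every positive \Delta x and at -1 for every negative one, so the two one-sided limits disagree.

```python
# The difference quotient of f(x) = |x| at x0 = 0 is |Δx|/Δx:
# +1 on the right of zero and -1 on the left, so no single value
# can patch the hole at Δx = 0.
def quotient(dx):
    return (abs(0 + dx) - abs(0)) / dx

print([quotient(dx) for dx in (0.1, 0.01, 0.001)])     # [1.0, 1.0, 1.0]
print([quotient(dx) for dx in (-0.1, -0.01, -0.001)])  # [-1.0, -1.0, -1.0]
```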

On the other hand, sometimes we can patch the hole by taking a limit. When we can, we say that f is “differentiable” at x_0, and the limit of the difference quotient is called the “derivative” of f at x_0. We write this as

\displaystyle{\frac{df}{dx}\bigg\vert_{x=x_0}}=\lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}=\lim\limits_{\Delta x\rightarrow 0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}

Another notation for the derivative that shows up is f'(x_0). This hints at the fact that as we change the point x_0 we started with we may get different values for the derivative. That is, the derivative is a new function! In analogy with continuity, we say that a function is differentiable on a region D if it is differentiable — if the difference quotient has a limit — at each point x_0\in D. The linear functions we considered above are differentiable everywhere in \mathbb{R}, with f'(x)=a for all x. On the other hand, the absolute value function is continuous everywhere, but differentiable only where x\neq0. In this case, the derivative f'(x) is the constant 1 when x is positive and the constant -1 when x is negative.
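Here’s a rough numerical sketch of this derivative-as-a-function idea (the helper approx_derivative and the step size 10^{-8} are my own illustrative choices, not a rigorous limit): approximating the limit with one small \Delta x recovers +1 and -1 for the absolute value, away from zero.

```python
# Approximating the derivative with a single small Δx.  For
# f(x) = |x| this lands near +1 at positive points and -1 at
# negative points, matching the sign function away from zero.
def approx_derivative(f, x0, dx=1e-8):
    return (f(x0 + dx) - f(x0)) / dx

print(round(approx_derivative(abs, 2.0), 6))   # 1.0
print(round(approx_derivative(abs, -2.0), 6))  # -1.0
```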

It’s worth pointing out that if a function f is differentiable at a point x_0 then it must be continuous there. Indeed, we can write \Delta f=\frac{\Delta f}{\Delta x}\Delta x, and if the difference quotient converges to the derivative, then this product converges to f'(x_0)\cdot0=0. That is, \lim\limits_{\Delta x\rightarrow0}\left(f(x_0+\Delta x)-f(x_0)\right)=0, and this just asserts that the limit of f at x_0 is its value there. So differentiability implies continuity, but continuity doesn’t imply differentiability, as we saw from the absolute value above.
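The same product argument can be glanced at numerically (again just an illustrative sketch, using f(x)=x^2 and x_0=1): as \Delta x shrinks, \Delta f shrinks along with it, which is exactly continuity at the point.

```python
# Since Δf = (Δf/Δx) * Δx, a convergent difference quotient forces
# Δf to shrink to 0 as Δx does — continuity at x0.
f = lambda x: x * x
x0 = 1.0
for dx in (0.1, 0.01, 0.001):
    df = f(x0 + dx) - f(x0)
    print(dx, df)   # Δf shrinks right along with Δx
```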

December 21, 2007 Posted by | Analysis, Calculus | 9 Comments


