The Unapologetic Mathematician

Mathematics for the interested outsider


Okay, so we’ve got one of our real-valued functions defined on some domain D\subseteq\mathbb{R}: f:D\rightarrow\mathbb{R}. Let’s start analyzing it!

We start with some point x_0\in D, and we can crank out the value the function takes at that point: f(x_0). What we want to understand is how the value of the function changes as we change x. More specifically, we want to understand how it changes as we vary our input continuously. Of course, “continuous” means we’re just moving around a little bit in some neighborhood of the point we started with, and neighborhoods in \mathbb{R} basically come down to open intervals. So let’s just assume that our domain D is some open interval containing the point we’re looking at. If D contains such an open interval we can just restrict to it, and if it doesn’t contain any neighborhood of our point then we can’t vary the input continuously, so we aren’t interested in that case.

The simplest sort of function is just a constant f(x)=c. In this case, the value doesn’t change. That’s what it means to be constant! A little more complex is a linear function f(x)=ax+b for real numbers a and b. Then if we move our point over a bit by adding an amount \Delta x to it our function takes the value

f(x_0+\Delta x)=a(x_0+\Delta x)+b = (ax_0+b)+a\Delta x = f(x_0)+a\Delta x

That is, adding \Delta x to our input adds the constant multiple a\Delta x to our output. It’s easy to understand how this sort of function changes as we change the input. We can characterize this behavior by calling the change in the output \Delta f=a\Delta x, and considering the constant \frac{\Delta f}{\Delta x}=a.
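We can check this constancy numerically. Here is a quick sketch in Python; the particular values a=3, b=1, and the step sizes are just illustrative choices, not anything from the discussion above:

```python
# Difference quotient of a linear function f(x) = a*x + b.
# For any nonzero step dx, (f(x0 + dx) - f(x0)) / dx equals a exactly.
def f(x, a=3.0, b=1.0):
    return a * x + b

def difference_quotient(f, x0, dx):
    return (f(x0 + dx) - f(x0)) / dx

x0 = 2.0
for dx in (1.0, 0.5, 0.001, -0.25):
    # Every step size gives the same ratio: the slope a = 3 (up to rounding).
    print(dx, difference_quotient(f, x0, dx))
```

No matter which \Delta x we pick, positive or negative, large or small, the ratio comes out to the slope a.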

Now, let’s consider an arbitrary continuous function. We can still tweak our input by adding \Delta x to it, and now we get a new output f(x_0+\Delta x). Subtracting off f(x_0) we get the change in the output: \Delta f=f(x_0+\Delta x)-f(x_0). This won’t in general be a constant like it was for the linear functions above: if we pick different values for \Delta x we may get different values for \Delta f. But we can still ask how the changes in the input and output are related by calculating the “difference quotient” \frac{\Delta f}{\Delta x}. This gives us a function of the amount by which we changed our input.
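To see the difference quotient genuinely varying with \Delta x, we can try a nonlinear function. A sketch in Python, using f(x)=x^2 at x_0=1 as an illustrative example (for this function the quotient works out to 2x_0+\Delta x):

```python
# The difference quotient of f(x) = x**2 at x0 depends on the step dx:
# (f(x0 + dx) - f(x0)) / dx = 2*x0 + dx, so different dx give different values.
def f(x):
    return x * x

def difference_quotient(f, x0, dx):
    return (f(x0 + dx) - f(x0)) / dx

x0 = 1.0
for dx in (1.0, 0.1, 0.01):
    # Prints roughly 3.0, 2.1, 2.01 -- a function of dx, not a constant.
    print(dx, difference_quotient(f, x0, dx))
```

Unlike the linear case, each choice of \Delta x gives a different ratio, which is exactly why we end up needing a limit.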

Let’s look back at the difference quotient for a linear function: \frac{\Delta f}{\Delta x}=\frac{a\Delta x}{\Delta x}=a. But it’s not really the constant function a! There’s a hole in the function at \Delta x=0, which we can patch by taking the limit \lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}. Since the difference quotient is a everywhere around the hole, the limit exists and equals a.

There’s also a hole at \Delta x=0 in all our difference quotient functions, and we’d love to patch them up by taking a limit just like we did above. But can we always do this? Look at the function f(x)=|x| near x=0. For positive inputs the function just gives the input back again, for negative inputs it gives back the negative of the input, and at zero it gives back zero again. So let’s look at \frac{f(0+\Delta x)-f(0)}{\Delta x}=\frac{|\Delta x|}{\Delta x}. When \Delta x is positive this is 1, when \Delta x is negative this is -1, and of course there’s a hole at \Delta x=0. But now we see that there’s no limit as \Delta x approaches zero, since the image of a sequence approaching from the left converges to -1, while the image of one approaching from the right converges to 1. Since they don’t agree, we can’t unambiguously patch the hole.
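This two-sided disagreement is easy to watch numerically. A small sketch in Python (the particular sequences of step sizes are just illustrative):

```python
# Difference quotient of f(x) = |x| at x0 = 0: |dx|/dx is the sign of dx,
# so it is +1 for every positive step and -1 for every negative step.
def quotient(dx):
    return abs(dx) / dx

# Approaching zero from the right: every value is 1.0.
print([quotient(dx) for dx in (0.1, 0.01, 0.001)])
# Approaching zero from the left: every value is -1.0.
print([quotient(dx) for dx in (-0.1, -0.01, -0.001)])
```

The right-hand values sit at 1 and the left-hand values at -1 no matter how small the steps get, so no single number can fill the hole at \Delta x=0.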

On the other hand, maybe we can patch the hole by taking a limit. If we can, then we say that f is “differentiable” at x_0, and the limit of the difference quotient is called the “derivative” of f at x_0. We write this as

\displaystyle{\frac{df}{dx}\bigg\vert_{x=x_0}}=\lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}=\lim\limits_{\Delta x\rightarrow 0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}

Another notation for the derivative that shows up is f'(x_0). This hints at the fact that as we change the point x_0 we started with we may get different values for the derivative. That is, the derivative is a new function! In analogy with continuity, we say that a function is differentiable on a region D if it is differentiable — if the difference quotient has a limit — for each point x_0\in D. The linear functions we considered above are differentiable everywhere in \mathbb{R}, with f'(x)=a for all x. On the other hand, the absolute value function is continuous everywhere, but differentiable only where x_0\neq0. In this case, the derivative f'(x) is the constant 1 when x is positive and the constant -1 when x is negative.
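The limit in the definition also suggests a crude numerical scheme: evaluate the difference quotient at a very small \Delta x and treat the result as an estimate of f'(x_0). A sketch in Python, again using f(x)=x^2 (so the true derivative is 2x_0) as an illustrative example; the step 10^{-6} is an arbitrary small choice:

```python
# Approximating the derivative *function* by evaluating the difference
# quotient at a small fixed step dx. For f(x) = x**2 the true value is 2*x0.
def f(x):
    return x * x

def derivative_estimate(f, x0, dx=1e-6):
    return (f(x0 + dx) - f(x0)) / dx

for x0 in (-2.0, 0.0, 1.5):
    # Each estimate lands close to 2*x0: roughly -4, 0, and 3.
    print(x0, derivative_estimate(f, x0))
```

Note how the estimate depends on the base point x_0: as the post says, the derivative is a new function of x_0, not a single number.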

It’s worth pointing out that if a function f is differentiable at a point x_0 then it must be continuous there. Indeed, if \lim\limits_{\Delta x\rightarrow0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x} is to have any chance of converging, we must have \lim\limits_{\Delta x\rightarrow0}f(x_0+\Delta x)-f(x_0)=0, and this just asserts that the limit of f at x_0 is its value there. So differentiability implies continuity, but continuity doesn’t imply differentiability, as we saw from the absolute value above.
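The step hidden in that argument can be written out explicitly: the change in output factors as the difference quotient times \Delta x, and the product rule for limits finishes the job:

```latex
\lim_{\Delta x\rightarrow 0}\bigl(f(x_0+\Delta x)-f(x_0)\bigr)
  = \lim_{\Delta x\rightarrow 0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}\cdot\Delta x
  = f'(x_0)\cdot 0 = 0
```

The first factor converges precisely because f is differentiable at x_0, and the second factor goes to zero, so the product does too.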

December 21, 2007 - Posted by | Analysis, Calculus


  1. Nice post. Now you are talking about things that I understand well, I can appreciate that you are a great expositor.

    Comment by Jake | December 23, 2007 | Reply

  2. Thanks. Glad to hear it.

    Comment by John Armstrong | December 23, 2007 | Reply

  3. […] Geometric Meaning of the Derivative Now we know what the derivative of a function is, and we have some tools to help us calculate them. But what does the derivative […]

    Pingback by The Geometric Meaning of the Derivative « The Unapologetic Mathematician | December 28, 2007 | Reply

  4. […] side note here: we define the derivative of a complex function by exactly the same limit of a difference quotient as before. There’s a lot to be said about derivatives of complex functions, but we’ll […]

    Pingback by Derivatives of Power Series « The Unapologetic Mathematician | September 17, 2008 | Reply

  5. […] took a real number in and gave a real number back out, and the two main aspects to this study: differentiation and […]

    Pingback by What’s Next? « The Unapologetic Mathematician | September 10, 2009 | Reply

  6. […] Okay, we want to move towards some analogue of the derivative of a function that applies to functions of more than one variable. For the moment we’ll stick […]

    Pingback by Partial Derivatives « The Unapologetic Mathematician | September 21, 2009 | Reply

  7. […] this looks a lot like our familiar derivative. Indeed, if we’re working in and we set we recover our regular derivative. And we have the […]

    Pingback by Directional Derivatives « The Unapologetic Mathematician | September 23, 2009 | Reply

  8. […] Now we know how to modify the notion of the derivative of a function to deal with vector inputs by defining the differential. But what about functions […]

    Pingback by Vector-Valued Functions « The Unapologetic Mathematician | October 6, 2009 | Reply

  9. […] given a smooth function we use this as if we were taking a derivative from all the way back in single-variable calculus: measure at , flow forward by and measure at , […]

    Pingback by The Lie Derivative « The Unapologetic Mathematician | June 15, 2011 | Reply
