The Unapologetic Mathematician

The Geometric Meaning of the Derivative

Now we know what the derivative of a function is, and we have some tools to help us calculate it. But what does the derivative mean? Here’s a picture:

In green I’ve drawn a function $f(x)$ defined on (at least) the interval $(0,4)$ of real numbers between ${0}$ and $4$. The specifics of the function don’t matter. In fact having a formula around to fall back on would be detrimental to understanding what’s going on.

In red I’ve drawn the line with equation $y=g(x)=f(2)+f'(2)(x-2)$. This describes a function with two very important properties. First, when $x=2$ we get $g(2)=f(2)$, so the two functions take the same value there. Second, the derivative $g'(x)=f'(2)$ everywhere, and in particular $g'(2)=f'(2)$. That is, not only do both graphs pass through the same point above $x=2$, they’re pointing in the same direction. As they pass through the point, the line “touches” the graph of $f$, and we call it the “tangent” line after the Latin tangere: “to touch”.

So the derivative $f'(x_0)$ seems to describe the direction of the tangent line to the graph of $f$ at the point $x_0$. Indeed, if we change our input by adding $\Delta x$, the tangent line predicts a change in output of $f'(x_0)\Delta x$. Remember, it’s this simple relation between changes in input and changes in output that makes lines lines. But the graph of the function is not its tangent line, and the function $f$ is not the same as the function $g$ defined by $g(x)=f(x_0)+f'(x_0)(x-x_0)$. How do they differ?

Well, we can subtract them. At $x_0$, we get a difference of ${0}$ because of how we define the function $g$, so let’s push away to the point $x_0+\Delta x$. There we find a difference of $f(x_0+\Delta x)-f(x_0)-f'(x_0)\Delta x$. But we saw this already in the lead-up to the chain rule! This is the function $\epsilon(\Delta x)\Delta x$, where $\lim\limits_{\Delta x\rightarrow0}\epsilon(\Delta x)=0$. That is, not only does the difference go to zero — the line and the graph pass through the same point — but it goes fast enough that the difference divided by $\Delta x$ still goes to zero — the line and the graph point in the same direction.

Let’s try to understand why the tangent line works like this. It’s pretty difficult to draw a tangent line, except in some simple geometric circumstances. So how can we get ahold of it? Well instead of trying to draw a line that touches the graph at that point, let’s imagine drawing one that cuts through at $x=x_0$, and also at the nearby point $x=x_0+\Delta x$. We’ll call it the “secant” line after the Latin secare: “to cut”. Now along this line we changed our input by $\Delta x$ and changed our output by $f(x_0+\Delta x)-f(x_0)$. That is, the relationship between inputs and outputs along this secant line is just the difference quotient $\frac{\Delta f}{\Delta x}$!

We know that the derivative $f'(x_0)$ is the limit of the difference quotient as $\Delta x$ goes to ${0}$. In the same way, the tangent line is the limit of the secant lines as we pick our second point closer and closer to $x_0$ — as long as our function is well-behaved. It might happen that the secants don’t approach any one tangent line, in which case our function is not differentiable at that point. In fact, that’s exactly what it means for a function to fail to be differentiable.
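
To make this concrete, here’s a quick numerical sketch (not from the original post — the function $x^2$ and the point $x_0=1$ are my own choices) watching secant slopes settle down toward the tangent slope:

```python
# A numerical sketch: secant slopes approaching the tangent slope.
# The sample function f(x) = x**2 and the base point x0 = 1 are
# arbitrary choices; the derivative there should be 2.
def secant_slope(f, x0, dx):
    """Slope of the secant line through (x0, f(x0)) and (x0+dx, f(x0+dx))."""
    return (f(x0 + dx) - f(x0)) / dx

f = lambda x: x**2
slopes = [secant_slope(f, 1.0, 10**-k) for k in range(1, 6)]
# slopes: 2.1, 2.01, 2.001, ... closing in on the tangent slope 2
```

Each time we shrink $\Delta x$ by a factor of ten, the secant slope moves another decimal place closer to $2$.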

So in terms of the graph of a function, the derivative of a function at a point describes the tangent line to the graph of the function through that point. In particular, it gives us the “slope” — the constant relationship between inputs and outputs along the line.

December 28, 2007 Posted by | Analysis, Calculus | 2 Comments

The Chain Rule

Today we get another rule for manipulating derivatives. Along the way we’ll see another way of viewing the definition of the derivative which will come in handy in the future.

Okay, we defined the derivative of the function $f$ at the point $x$ as the limit of the difference quotient:
$\displaystyle f'(x)=\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$
The point of the derivative-as-limit-of-difference-quotient is that if we adjust our input by $\Delta x$, we adjust our output “to first order” by $f'(x)\Delta x$. That is, the change in output is roughly the change in input times the derivative, and we have a good idea of how to control the error:
$\displaystyle\left(f(x+\Delta x)-f(x)\right)-f'(x)\Delta x=\epsilon(\Delta x)\Delta x$
where $\epsilon$ is a function of $\Delta x$ satisfying $\lim\limits_{\Delta x\rightarrow0}\epsilon(\Delta x)=0$. This means the difference between the actual change in output and the change predicted by the derivative not only goes to zero as we look closer and closer to $x$, but it goes to zero fast enough that we can divide it by $\Delta x$ and still it goes to zero. (Does that make sense?)
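
It may help to compute $\epsilon$ for a concrete function. In this sketch (my own example, not the post’s) we take $f(x)=x^2$, for which a little algebra shows the error term is exactly $\epsilon(\Delta x)=\Delta x$:

```python
# Sketch of the first-order error term. For the sample function
# f(x) = x**2 with f'(x) = 2x (my choice of example),
# epsilon(dx) = ((x+dx)**2 - x**2 - 2x*dx)/dx = dx, which goes to zero.
def epsilon(f, fprime, x, dx):
    """(Actual change in output - predicted change) / dx."""
    return (f(x + dx) - f(x) - fprime(x) * dx) / dx

f = lambda x: x**2
fp = lambda x: 2 * x
errors = [epsilon(f, fp, 3.0, 10**-k) for k in range(1, 6)]
# errors: roughly 0.1, 0.01, 0.001, ... shrinking linearly with dx
```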

Okay, so now we can use this viewpoint on the derivative to look at what happens when we follow one function by another. We want to consider the composite function $f\circ g$ at the point $x_0$ where $f$ is differentiable. We’re also going to assume that $g$ is differentiable at the point $f(x_0)$. The differentiability of $f$ at $x_0$ tells us that
$\displaystyle\left(f(x_0+\Delta x)-f(x_0)\right)=f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x$
and the differentiability of $g$ at $y_0$ tells us that
$\displaystyle\left(g(y_0+\Delta y)-g(y_0)\right)=g'(y_0)\Delta y+\eta(\Delta y)\Delta y$
where $\lim\limits_{\Delta x\rightarrow0}\epsilon(\Delta x)=0$, and similarly for $\eta$. Now when we compose the functions $f$ and $g$ we set $y_0=f(x_0)$, and $\Delta y$ is exactly the value described in the first line! That is,
$\displaystyle \left[f\circ g\right](x_0+\Delta x)-\left[f\circ g\right](x_0)=g(f(x_0)+f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x)-g(f(x_0))=$
$\displaystyle g'(f(x_0))\left(f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x\right)+\eta(\Delta y)\left(f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x\right)=$
$\displaystyle g'(f(x_0))f'(x_0)\Delta x+\left(g'(f(x_0))\epsilon(\Delta x)+\eta(\Delta y)\left(f'(x_0)+\epsilon(\Delta x)\right)\right)\Delta x$

The last quantity in parentheses which we multiply by $\Delta x$ goes to zero as $\Delta x$ does. First, $\epsilon(\Delta x)$ does by assumption. Then as $\Delta x$ goes to zero, so does $\Delta y$, since $f$ must be continuous. Thus $\eta(\Delta y)$ must go to zero, and the whole quantity is then zero in the limit. This establishes that not only is $f\circ g$ differentiable at $x_0$, but that its derivative there is
$\displaystyle\left[f\circ g\right]'(x_0)=\frac{d}{dx}g(f(x))\bigg|_{x=x_0}=g'(f(x_0))f'(x_0)$
This means that since “to first order” we get the change in the output of $f$ by multiplying the change in its input by $f'(x_0)$, and “to first order” we get the change in the output of $g$ by multiplying the change in its input by $g'(y_0)$, we get the change in the output of their composite by multiplying first by $f'(x_0)$ and then by $g'(y_0)=g'(f(x_0))$.
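
Here’s a hedged numerical spot-check of the formula; the functions $f(x)=x^2$ and $g(y)=\sin y$ are example choices of mine, and I follow the post’s convention that the composite applies $f$ first:

```python
import math

# Numerical check of the chain rule. Following the post's convention,
# the composite applies f first: h(x) = g(f(x)), here with the example
# choices f(x) = x**2 and g(y) = sin(y).
f, fp = lambda x: x**2, lambda x: 2 * x
g, gp = math.sin, math.cos

def numeric_derivative(h, x, dx=1e-6):
    """Symmetric difference quotient as a stand-in for the limit."""
    return (h(x + dx) - h(x - dx)) / (2 * dx)

x0 = 0.7
lhs = numeric_derivative(lambda x: g(f(x)), x0)   # direct difference quotient
rhs = gp(f(x0)) * fp(x0)                          # g'(f(x0)) * f'(x0)
# lhs and rhs agree to many decimal places
```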

Another way we often write the chain rule is by setting $y=f(x)$ and $z=g(y)$. Then the derivative $f'(x)$ is written $\frac{dy}{dx}$, while $g'(y)$ is written $\frac{dz}{dy}$. The chain rule then says:
$\displaystyle \frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}$
This is nice since it looks like we’re multiplying fractions. The drawback is that we have to remember in our heads where to evaluate each derivative.

Now we can take this rule and use it to find the derivative of the inverse of an invertible function $f$. More specifically, if a function $f$ is one-to-one in some neighborhood of a point $x_0$, we can find another function $f^{-1}$ whose domain is the set of values $f$ takes — the range of $f$ — and so that $f(f^{-1}(x))=x=f^{-1}(f(x))$. Then if the function is differentiable at $x_0$ and the derivative $f'(x_0)$ is not zero, the inverse function will be differentiable at $f(x_0)$, with a derivative we will calculate.

First we set $y=f(x)$ and $x=f^{-1}(y)$. Then we take the derivative of the defining equation of the inverse to get $\frac{df^{-1}}{dy}\frac{df}{dx}=1$, which we could write even more suggestively as $\frac{dx}{dy}\frac{dy}{dx}=1$. That is, the derivative of the composition inverse of our function is the multiplicative inverse of the derivative. But as we noted above, we have to remember where to evaluate everything. So let’s do it again in the other notation.

Since $f^{-1}(f(x))=x$, we differentiate to find $\left[f^{-1}\right]'(f(x))f'(x)=1$. Then we substitute $x=f^{-1}(y)$ and juggle some algebra to write
$\displaystyle\left[f^{-1}\right]'(y)=\frac{1}{f'(f^{-1}(y))}$
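
As a sanity check (with an example function of my own choosing), we can compare this formula against a direct difference quotient of a numerically computed inverse:

```python
# A numerical sanity check of the inverse-derivative formula. The function
# f(x) = x**3 + x is my own example: it is strictly increasing, hence
# one-to-one, and f'(x) = 3x^2 + 1 never vanishes.
def f(x):
    return x**3 + x

def fprime(x):
    return 3 * x**2 + 1

def f_inverse(y, lo=-100.0, hi=100.0):
    """Invert f by bisection; valid because f is increasing on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y0 = 10.0                                   # f(2) = 10, so f_inverse(y0) = 2
formula = 1 / fprime(f_inverse(y0))         # 1 / f'(f^{-1}(y0)) = 1/13
dy = 1e-6
quotient = (f_inverse(y0 + dy) - f_inverse(y0 - dy)) / (2 * dy)
# formula and quotient agree to several decimal places
```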

December 27, 2007 Posted by | Analysis, Calculus | 43 Comments

Algebraic Laws of Differentiation

Just like we had the laws of limits we have a collection of rules to help us calculate derivatives. Let’s start with the most basic functions.

As we said while defining the derivative, any linear function $f(x)=ax+b$ has the derivative $f'(x)=a$ at each point. We’ll separate this out into two rules:

• $\displaystyle\frac{d}{dx}c=0$
• $\displaystyle\frac{d}{dx}x=1$

That is, the derivative of any constant function is the constant function ${0}$, and the derivative of the identity function is the constant function $1$.

The next two rules should be perfectly straightforward to establish, so we’ll skip their proofs:

• $\displaystyle\frac{d}{dx}\left(f(x)+g(x)\right)=f'(x)+g'(x)$
• $\displaystyle\frac{d}{dx}\left(cf(x)\right)=cf'(x)$

That is, the derivative of the sum of two functions is the sum of their derivatives, and the derivative of a constant multiple of a function is the same constant multiple of its derivative. In particular, we can use these rules along with the basic pieces above to recalculate the derivative $\frac{d}{dx}(ax+b)=\frac{d}{dx}(ax)+\frac{d}{dx}(b)=a\frac{d}{dx}(x)+b\frac{d}{dx}(1)=a\cdot1+b\cdot0=a$.

Multiplication is a bit tougher. We might hope to simply split derivatives along products like we do for limits, and even the inventors of the calculus originally thought this would work. But look what happens for $f(x)=x^2$. If we split the derivative of this function along its product we get $\frac{dx}{dx}\frac{dx}{dx}=1$. But we know that this function doesn’t always go up at this constant rate. In fact, for negative values of $x$, the function actually goes down. So this rule doesn’t work.

Let’s go back to the definition of the derivative as the limit of a difference quotient:
$\displaystyle\lim\limits_{\Delta x\rightarrow0}\frac{\left[fg\right](x+\Delta x)-\left[fg\right](x)}{\Delta x}=\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x}$
Now the trick we’ll use to evaluate this limit is to add and subtract $f(x+\Delta x)g(x)$ to the numerator here. That is, in effect we’re adding zero and leaving it alone, but the formula will be easier to work with. In particular, we can start splitting it up using the laws of limits.
$\displaystyle\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)g(x+\Delta x)-f(x+\Delta x)g(x)+f(x+\Delta x)g(x)-f(x)g(x)}{\Delta x}=$
$\displaystyle\left(\lim\limits_{\Delta x\rightarrow0}f(x+\Delta x)\right)\left(\lim\limits_{\Delta x\rightarrow0}\frac{g(x+\Delta x)-g(x)}{\Delta x}\right)+\left(\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}\right)\left(\lim\limits_{\Delta x\rightarrow0}g(x)\right)$
Of these four limits, the fourth is the limit of a continuous function because $g(x)$ doesn’t depend on $\Delta x$. The second and third are just the definitions of $g'(x)$ and $f'(x)$, respectively. The first limit goes to $f(x)$ because, as we showed, all differentiable functions are continuous. And so we have the rule

• $\displaystyle\frac{d}{dx}\left(f(x)g(x)\right)=f'(x)g(x)+f(x)g'(x)$

In general, we can take the derivative of the product of a bunch of functions by taking the derivative of each one and multiplying by the other functions, then adding up all the results. As a special case we get the “power rule”:

• $\displaystyle\frac{d}{dx}x^n=nx^{n-1}$
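
A quick numerical check of the power rule; the exponent and the sample point below are arbitrary choices of mine:

```python
# Compare a symmetric difference quotient of x**n against the power rule
# n * x**(n-1). The choices n = 5 and x0 = 1.3 are arbitrary examples.
def diff_quotient(f, x, dx=1e-7):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

n, x0 = 5, 1.3
approx = diff_quotient(lambda x: x**n, x0)
exact = n * x0**(n - 1)        # the power rule: d/dx x^n = n x^(n-1)
# approx and exact match to several decimal places
```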

If $f(x)$ is differentiable at $x_0$, but doesn’t take the value ${0}$ there, then its reciprocal will also be differentiable. We want to calculate its derivative. We could try evaluating the limit of the difference quotient again, but instead we will proceed as follows. Define the reciprocal to be $g(x)=\frac{1}{f(x)}$. Then we have the equation $f(x)g(x)=1$. Taking the derivative of both sides at $x_0$ and using the product rule we find $f'(x_0)g(x_0)+f(x_0)g'(x_0)=0$. We can now solve this to find $g'(x_0)=-\frac{f'(x_0)g(x_0)}{f(x_0)}$, or:

• $\displaystyle\frac{d}{dx}\frac{1}{f(x)}=\frac{-f'(x)}{f(x)^2}$

Combining this with the product rule we find the “quotient rule”:

• $\displaystyle\frac{d}{dx}\frac{f(x)}{g(x)}=\frac{f'(x)g(x)-f(x)g'(x)}{g(x)^2}$
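
Here’s a small numerical sketch of the quotient rule (the functions $\sin x$ and $x^2+1$ are my own examples; note the denominator never vanishes):

```python
import math

# Numerical sanity check of the quotient rule, using the example choices
# f(x) = sin(x) and g(x) = x**2 + 1 (so g is never zero).
f, fp = math.sin, math.cos
g, gp = lambda x: x**2 + 1, lambda x: 2 * x

def quotient_rule(x):
    """(f' g - f g') / g^2 at the point x."""
    return (fp(x) * g(x) - f(x) * gp(x)) / g(x)**2

x0, dx = 0.9, 1e-6
ratio = lambda x: f(x) / g(x)
approx = (ratio(x0 + dx) - ratio(x0 - dx)) / (2 * dx)
# approx matches quotient_rule(x0)
```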

Now we have all the tools in hand to take the derivative of any “rational” function. That is, a function of the form $f(x)=\frac{P(x)}{Q(x)}$, where $P$ and $Q$ are polynomials. We can take the derivative of any polynomial by using the power rule, constant multiple rule, and addition rule. Then we can take the derivative of $f$ with the quotient rule.
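
The polynomial part of this is mechanical enough to sketch in code. Representing a polynomial as a list of coefficients, lowest degree first (a representation of my own choosing), the power rule and linearity give the derivative term by term:

```python
# Differentiate a polynomial given as a coefficient list [c0, c1, c2, ...]
# meaning c0 + c1*x + c2*x**2 + ... (my own encoding, for illustration).
def poly_derivative(coeffs):
    """Power rule term by term: d/dx sum(c_k x^k) = sum(k * c_k * x^(k-1))."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

# P(x) = 4 + 3x + 5x^3  ->  P'(x) = 3 + 15x^2
print(poly_derivative([4, 3, 0, 5]))   # [3, 0, 15]
```

A rational function $\frac{P(x)}{Q(x)}$ then just carries the pair of coefficient lists through the quotient rule.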

A little anecdote about the quotient rule: often it’s written in the form $d(\frac{u}{v})=\frac{vdu-udv}{v^2}$. My first calculus teacher wrote this formula on the board, then noted that one must remember which order the top comes in or you’ll get the wrong sign. “V-D-U-D”. At this point he fake-tiptoed his way to the door, closed it gently, and in a loud stage whisper said, “Venereal Disease is an Ugly Disease”. To this day I can’t teach the quotient rule, and can barely use it myself, without remembering that line.

December 26, 2007 Posted by | Analysis, Calculus | 9 Comments

Derivatives

Okay, so we’ve got one of our real-valued functions defined on some domain $D\subseteq\mathbb{R}$: $f:D\rightarrow\mathbb{R}$. Let’s start analyzing it!

We start with some point $x_0\in D$, and we can crank out the value the function takes at that point: $f(x_0)$. What we want to understand is how the value of the function changes as we change $x$. More specifically, we want to understand how it changes as we vary our input continuously. Of course, “continuous” means we’re just moving around a little bit in some neighborhood of the point we started with, and neighborhoods in $\mathbb{R}$ basically come down to open intervals. So let’s just assume that our domain $D$ is some open interval containing the point we’re looking at. If it contains an open interval already we can just restrict it, and if it doesn’t contain a neighborhood of our point then we can’t vary the input continuously, so we aren’t interested in that case.

The simplest sort of function is just a constant $f(x)=c$. In this case, the value doesn’t change. That’s what it means to be constant! A little more complex is a linear function $f(x)=ax+b$ for real numbers $a$ and $b$. Then if we move our point over a bit by adding an amount $\Delta x$ to it our function takes the value

$f(x_0+\Delta x)=a(x_0+\Delta x)+b = (ax_0+b)+a\Delta x = f(x_0)+a\Delta x$

That is, adding $\Delta x$ to our input adds the constant multiple $a\Delta x$ to our output. It’s easy to understand how this sort of function changes as we change the input. We can characterize this behavior by calling the change in the output $\Delta f=a\Delta x$, and considering the constant $\frac{\Delta f}{\Delta x}=a$.

Now, let’s consider an arbitrary continuous function. We can still tweak our input by adding $\Delta x$ to it, and now we get a new output $f(x_0+\Delta x)$. Subtracting off $f(x_0)$ we get the change in the output: $\Delta f=f(x_0+\Delta x)-f(x_0)$. This won’t in general be a constant like it was for the linear functions above: if we pick different values for $\Delta x$ we may get different values for $\Delta f$. But we can still ask how the changes in the input and output are related by calculating the “difference quotient” $\frac{\Delta f}{\Delta x}$. This gives us a function of the amount by which we changed our input.

Let’s look back at the difference quotient for a linear function: $\frac{\Delta f}{\Delta x}=\frac{a\Delta x}{\Delta x}=a$. But it’s not really the constant function $a$! There’s a hole in the function at $\Delta x=0$, which we can patch by taking the limit $\lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}$. Since the difference quotient is $a$ everywhere around the hole, the limit exists and equals $a$.

There’s also a hole at $\Delta x=0$ in all our difference quotient functions, and we’d love to patch them up by taking a limit just like we did above. But can we always do this? Look at the function $f(x)=|x|$ near $x=0$. For positive inputs the function just gives the input back again, for negative inputs it gives back the negative of the input, and at zero it gives back zero again. So let’s look at $\frac{f(0+\Delta x)-f(0)}{\Delta x}=\frac{|\Delta x|}{\Delta x}$. When $\Delta x$ is positive this is $1$, when $\Delta x$ is negative this is $-1$, and of course there’s a hole at $\Delta x=0$. But now we see that there’s no limit as $\Delta x$ approaches zero, since the image of a sequence approaching from the left converges to $-1$, while the image of one approaching from the right converges to $1$. Since they don’t agree, we can’t unambiguously patch the hole.
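
The two one-sided behaviors are easy to watch numerically:

```python
# The difference quotient of |x| at 0: it is +1 for every positive dx
# and -1 for every negative dx, so no single limit exists at dx = 0.
def abs_quotient(dx):
    return (abs(0 + dx) - abs(0)) / dx

right = [abs_quotient(10**-k) for k in range(1, 6)]     # all 1.0
left = [abs_quotient(-(10**-k)) for k in range(1, 6)]   # all -1.0
```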

On the other hand, maybe we can patch the hole by taking a limit. If we can, then we say that $f$ is “differentiable” at $x_0$, and the limit of the difference quotient is called the “derivative” of $f$ at $x_0$. We write this as

$\displaystyle{\frac{df}{dx}\bigg\vert_{x=x_0}}=\lim\limits_{\Delta x\rightarrow 0}\frac{\Delta f}{\Delta x}=\lim\limits_{\Delta x\rightarrow 0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}$

Another notation for the derivative that shows up is $f'(x_0)$. This hints at the fact that as we change the point $x_0$ we started with we may get different values for the derivative. That is, the derivative is a new function! In analogy with continuity, we say that a function is differentiable on a region $D$ if it is differentiable — if the difference quotient has a limit — for each point $x_0\in D$. The linear functions we considered above are differentiable everywhere in $\mathbb{R}$, with $f'(x)=a$ for all $x$. On the other hand, the absolute value function is continuous everywhere, but differentiable only where $x_0\neq0$. In this case, the derivative $f'(x)$ is the constant $1$ when $x$ is positive and the constant $-1$ when $x$ is negative.

It’s worth pointing out that if a function $f$ is differentiable at a point $x_0$ then it must be continuous there. Indeed, if $\lim\limits_{\Delta x\rightarrow0}\frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}$ is to have any chance at converging, we must have $\lim\limits_{\Delta x\rightarrow0}f(x_0+\Delta x)-f(x_0)=0$, and this just asserts that the limit of $f$ at $x_0$ is its value there. So differentiability implies continuity, but continuity doesn’t imply differentiability, as we saw from the absolute value above.

December 21, 2007 Posted by | Analysis, Calculus | 9 Comments

Laws of Limits

Okay, we know how to define the limit of a function at a point in the closure of its domain. But we don’t always want to invoke the whole machinery of all sequences converging to that point or that of neighborhoods with the $\epsilon$-$\delta$ definition. Luckily, we have some shortcuts.

First off, we know that the constant function $f(x)=1$ and the identity function $f(x)=x$ are continuous and defined everywhere, so we immediately see that $\lim\limits_{x\rightarrow x_0}1=1$ and $\lim\limits_{x\rightarrow x_0}x=x_0$. Those are the basic functions we defined. We also defined some ways of putting functions together, and we’ll have a rule for each one telling us how to build limits for more complicated functions from limits for simpler ones.

We can multiply a function by a constant real number. If we have $\lim\limits_{x\rightarrow x_0}f(x)=L$ then we find $\lim\limits_{x\rightarrow x_0}\left[cf\right](x)=cL$. If $c=0$ this is trivial, so assume $c\neq0$, and let’s say we’re given an error bound $\epsilon$. Then we can consider $\frac{\epsilon}{|c|}$, and use the assumption about the limit of $f$ to find a $\delta$ so that $0<|x-x_0|<\delta$ implies that $|f(x)-L|<\frac{\epsilon}{|c|}$. This, in turn, implies that $|\left[cf\right](x)-cL|=|c||f(x)-L|<|c|\frac{\epsilon}{|c|}=\epsilon$, and so the assertion is proved.

Similarly, we can add functions. If $\lim\limits_{x\rightarrow x_0}f_1(x)=L_1$ and $\lim\limits_{x\rightarrow x_0}f_2(x)=L_2$, then we find $\lim\limits_{x\rightarrow x_0}\left[f_1+f_2\right](x)=L_1+L_2$. Here we start with an $\epsilon$ and find $\delta_1$ and $\delta_2$ so that $0<|x-x_0|<\delta_i$ implies $|f_i(x)-L_i|<\frac{\epsilon}{2}$ for $i=1,2$. Then if we set $\delta$ to be the smaller of $\delta_1$ and $\delta_2$, we see that $0<|x-x_0|<\delta$ implies $|\left[f_1+f_2\right](x)-(L_1+L_2)|\leq|f_1(x)-L_1|+|f_2(x)-L_2|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.

From these two we can see that the process of taking a limit at a point is linear. In particular, we also see that $\lim\limits_{x\rightarrow x_0}\left[f_1-f_2\right](x)=\lim\limits_{x\rightarrow x_0}f_1(x)-\lim\limits_{x\rightarrow x_0}f_2(x)$ by combining the two rules above. Similarly we can show that $\lim\limits_{x\rightarrow x_0}\left[f_1f_2\right](x)=\lim\limits_{x\rightarrow x_0}f_1(x)\lim\limits_{x\rightarrow x_0}f_2(x)$, which I’ll leave to you to verify as we did the rule for addition above.

Another way to combine functions that I haven’t mentioned yet is composition. Let’s say we have functions $f_1:D_1\rightarrow\mathbb{R}$ and $f_2:D_2\rightarrow\mathbb{R}$. Then we can pick out those points $x\in D_1$ so that $f_1(x)\in D_2$ and call this collection $D$. Then we can apply the second function to get $f_1\circ f_2:D\rightarrow\mathbb{R}$, defined by $\left[f_1\circ f_2\right](x)=f_2(f_1(x))$. Our limit rule here is that if $f_2$ is continuous at $\lim\limits_{x\rightarrow x_0}f_1(x)$, then $\lim\limits_{x\rightarrow x_0}f_2(f_1(x))=f_2(\lim\limits_{x\rightarrow x_0}f_1(x))$. That is, we can pull limits past continuous functions. This is just a reflection of the fact that continuous functions are exactly those which preserve limits of sequences. In particular, a continuous function equals its own limit wherever it’s defined: $\lim\limits_{x\rightarrow x_0}f(x)=f(\lim\limits_{x\rightarrow x_0}x)=f(x_0)$.

As an application of this fact, we can check that $f(x)=\frac{1}{x}$ is continuous for all nonzero $x$. Then the limit rule tells us that as long as $\lim\limits_{x\rightarrow x_0}f(x)\neq0$, then $\lim\limits_{x\rightarrow x_0}\frac{1}{f(x)}=\frac{1}{\lim\limits_{x\rightarrow x_0}f(x)}$. Combining this with the rule for multiplication we see that as long as the limit of $g$ at $x_0$ is nonzero then $\lim\limits_{x\rightarrow x_0}\frac{f(x)}{g(x)}=\frac{\lim\limits_{x\rightarrow x_0}f(x)}{\lim\limits_{x\rightarrow x_0}g(x)}$.

Another thing that limits play well with is the order on the real numbers. If $f(x)\geq g(x)$ on their common domain $D$ then $\lim\limits_{x\rightarrow x_0}f(x)\geq\lim\limits_{x\rightarrow x_0}g(x)$ as long as both limits exist. Indeed, since both limits exist we can take any sequence converging to $x_0$. The image sequence under $f$ is always above the image sequence under $g$, and so the limits of the sequences are in the same order. Notice that we really just need $f(x)\geq g(x)$ to hold on some neighborhood of $x_0$, since we can then restrict to that neighborhood.

Similarly if we have three functions $f(x)$, $g(x)$, and $h(x)$ with $f(x)\geq g(x)\geq h(x)$ on a common domain $D$ containing a neighborhood of $a$, and if $\lim\limits_{x\rightarrow a}f(x)=L=\lim\limits_{x\rightarrow a}h(x)$, then the limit of $g$ at $a$ exists and is also equal to $L$. Given any sequence $x_n\in D$ converging to $a$, our hypothesis tells us that $f(x_n)\geq g(x_n)\geq h(x_n)$. Given any neighborhood of $L$, $f(x_n)$ and $h(x_n)$ are both within the neighborhood for sufficiently large $n$, and then so will $g(x_n)$ be in the neighborhood. Thus the image of the sequence under $g$ is “squeezed” between the images under $f$ and $h$, and converges to $L$ as well.
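
A classic example of the squeeze in action (my choice of functions, not the post’s): $x^2\sin(1/x)$ is trapped between $-x^2$ and $x^2$, so its limit at ${0}$ is ${0}$ even though $\sin(1/x)$ oscillates wildly:

```python
import math

# g(x) = x**2 * sin(1/x) is squeezed between -x**2 and x**2, so even
# though sin(1/x) has no limit at 0, the sandwich forces g toward 0.
def g(x):
    return x**2 * math.sin(1 / x)

samples = [g(10**-k) for k in range(1, 8)]
# the values are bounded by x**2 and shrink rapidly toward 0
```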

These rules for limits suffice to calculate almost all the limits that we care about without having to mess around with the raw definitions. In fact, many calculus classes these days only skim the definition if they mention it at all. We can more or less get away with this while we’re only dealing with a single real variable, but later on the full power of the definition comes in handy.

There’s one more situation I should be a little more explicit about. If we are given a function $f$ on some domain $D$ and we want to find its limit at a border point $a$ (which includes the case of a single-point hole in the domain) and we can extend the function to a continuous function $\hat{f}$ on a larger domain $\hat{D}$ which contains a neighborhood of the point in question, then $\lim\limits_{x\rightarrow a}f(x)=\hat{f}(a)$. Indeed, given any sequence $x_n\in D$ converging to $a$ we have $f(x_n)=\hat{f}(x_n)$ (since they agree on $D$), and the limit of $\hat{f}$ is just its value at $a$. This extends what we did before to handle the case of $\frac{x}{x}$ at $x=0$, and similar situations will come up over and over in the future.

December 20, 2007 Posted by | Analysis, Calculus | 4 Comments

Limits of Functions

Okay, we know what it is for a net to have a limit, and then we used that to define continuity in terms of nets. Continuity just says that the function’s value is exactly what it takes to preserve convergence of nets.

But what if we have a bunch of nets and no function value? Like, if there’s a hole in our domain — as there is at ${0}$ for the function $\frac{x}{x}$ — we certainly shouldn’t penalize this function just on a technicality of how we presented it. Well there may be a hole in the domain, but we still have sequences in the domain that converge to where that hole is. So let’s take a domain $D\subseteq\mathbb{R}$, a function $f:D\rightarrow\mathbb{R}$, and a point $p\in\overline{D}$. In particular, we’re interested in what happens when $p$ is in the closure of $D$, but not in $D$ itself.

Now we look at all sequences $x_n\in D$ which converge to $p$. There’s at least one of them because $p\in\overline{D}$, but there may be quite a few. Each one of these sequences has an opinion on what the value of $f$ should be at $p$. If they all agree, then we can define the limit of the function $\lim\limits_{x\rightarrow p}f(x)=\lim\limits_{n\rightarrow\infty}f(x_n)$ where $x_n$ is any one of these sequences. In the case of $\frac{x}{x}$ we see that at every point other than ${0}$ our function takes the value $1$. Thus on any sequence converging to ${0}$ (but never taking $x_n=0$) the function gives the constant sequence $1$. Since they all agree, we can define the limit $\lim\limits_{x\rightarrow0}\frac{x}{x}=1$.
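
In code, we can watch several different sequences converging to ${0}$ (the particular sequences are my own choices) all agree on the value of $\frac{x}{x}$:

```python
# Several sequences converging to 0, none ever hitting 0 itself. The
# image of each under f(x) = x/x is the constant sequence 1, so every
# sequence votes for the same limit value.
def f(x):
    return x / x          # defined only for x != 0

seqs = [
    [10**-n for n in range(1, 10)],          # from the right
    [-(10**-n) for n in range(1, 10)],       # from the left
    [(-1)**n / n for n in range(1, 10)],     # alternating sides
]
values = [[f(x) for x in seq] for seq in seqs]
# every image sequence is constantly 1.0
```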

If a function has a limit at a hole in its domain, we can use that limit to patch up the hole. That is, if our point $p$ is in the closure of $D$ but not in $D$ itself, and if our function $f$ has a limit at $p$, then we can extend our function to $D\cup\{p\}$ by setting $f(p)=\lim\limits_{x\rightarrow p}f(x)$. Just like we by default set the domain of a function to be wherever it makes sense, we will just assume that the domain has been extended to whatever boundary points the function takes a limit at.

On the other hand, we can also describe limits in terms of neighborhoods instead of sequences. Here we end up with formulas that look like those we saw when we defined continuity in metric spaces. A function $f$ has a limit $L$ at the point $p$ if for every $\epsilon>0$ there is a $\delta>0$ so that $0<|x-p|<\delta$ implies $|f(x)-L|<\epsilon$. Going back and forth from this definition to the one in terms of sequences behaves just the same as going back and forth between net and neighborhood definitions of continuity.

To a certain extent we’re starting to see a little more clearly the distinct feels of the two different approaches. Using nets tells us about approaching a point in various systematic ways, and having a limit at a point tells us that we can understand the function at that point by understanding any system along which we can approach it. We can even replace the limiting point by the convergent net and say that the net is the point, as we did when first defining the real numbers. Using neighborhoods, on the other hand, feels more like giving error tolerances. A limit is the value the function is trying to get to, and if we’re willing to live with being wrong by $\epsilon$, there’s a way to pick a $\delta$ for how wrong our input can be and still come at least that close to the target.

December 19, 2007 Posted by | Analysis, Calculus | 4 Comments

Movie news

I just heard this:

• MGM and New Line will co-finance and co-distribute two films, The Hobbit and a sequel to The Hobbit. New Line will distribute in North America and MGM will distribute internationally.
• Peter Jackson and Fran Walsh will serve as Executive Producers of two films based on The Hobbit. New Line will manage the production of the films, which will be shot simultaneously.
• Peter Jackson and New Line have settled all litigation relating to the Lord of the Rings Trilogy.

So great. We’re going to finally have a Hobbit movie. And a seq…what?

Look, I love Tolkien as much as the next guy, and in an intellectual (opp. fantasy fanboy) way. I grew up with it, I’ve dabbled in Quenya and Sindarin, I’ve read the archives of Vinyar Tengwar, and I wasn’t horribly disappointed by the LotR trilogy. But honestly, people, there’s just not that much there in the Hobbit. It won’t really support two movies on its own. So either they’re reeeeeeeeeally stretching the script to squeeze the money out; they’re bringing in a lot of stuff from Unfinished Tales, or maybe even HoME (unlikely given how much of LotR proper was cut); or they’re creating new Hobbit material out of whole cloth.

A friend of mine says that he trusts Jackson’s vision. I trusted Lucas’ vision, and see where that got us. This may be good, but buckle up just in case.

December 18, 2007 Posted by | Uncategorized | 5 Comments

Real-Valued Functions of a Single Real Variable

At long last we can really start getting into one of the most basic kinds of functions: those which take a real number in and spit a real number out. Quite a lot of mathematics is based on a good understanding of how to take these functions apart and put them together in different ways — to analyze them. And so we have the topic of “real analysis”. At our disposal we have a toolbox with various methods for calculating and dealing with these sorts of functions, which we call “calculus”. Really, all calculus is is a collection of techniques for understanding what makes these functions tick.

Sitting behind everything else, we have the real number system $\mathbb{R}$ — the unique ordered topological field which is big enough to contain limits of all Cauchy sequences (so it’s a complete uniform space) and least upper bounds for all nonempty subsets which have any upper bounds at all (so the order is Dedekind complete), and yet small enough to exclude infinitesimals and infinites (so it’s Archimedean).

Because the properties that make the real numbers do their thing are all wrapped up in the topology, it’s no surprise that we’re really interested in continuous functions, and we have quite a lot of them. At the most basic, the constant function $f(x)=1$ for all real numbers $x$ is continuous, as is the identity function $f(x)=x$.

We also have ways of combining continuous functions, many of which are essentially inherited from the field structure on $\mathbb{R}$. We can add and multiply functions just by adding and multiplying their values, and we can multiply a function by a real number too.

• $\left[f+g\right](x)=f(x)+g(x)$
• $\left[fg\right](x)=f(x)g(x)$
• $\left[cf\right](x)=cf(x)$

Since all the nice properties of these algebraic constructions carry over from $\mathbb{R}$, this makes the collection of continuous functions into an algebra over the field of real numbers. We get additive inverses as usual in a module by multiplying by $-1$, so we have an $\mathbb{R}$-module using addition and scalar multiplication. We have a bilinear multiplication because of the distributive law holding in the ring $\mathbb{R}$ where our functions take their values. We also have a unit for multiplication — the constant function $1$ — and a commutative law for multiplication. I’ll leave you to verify that all these operations give back continuous functions when we start with continuous functions.
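As a concrete sketch (my own illustration; the helper names are not from the post), these pointwise operations can be modeled directly on Python functions:

```python
def add(f, g):
    """Pointwise sum: [f+g](x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def mul(f, g):
    """Pointwise product: [fg](x) = f(x)g(x)."""
    return lambda x: f(x) * g(x)

def scale(c, f):
    """Scalar multiple: [cf](x) = c*f(x)."""
    return lambda x: c * f(x)

# The two basic continuous functions from the text.
one = lambda x: 1.0   # the constant function, the unit for multiplication
ident = lambda x: x   # the identity function

# Build x^2 + 3x out of the basic pieces.
h = add(mul(ident, ident), scale(3.0, ident))
```

Associativity, commutativity, and distributivity of these operations all reduce to the corresponding facts about real numbers, which is exactly the sense in which the algebra structure is inherited from $\mathbb{R}$.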

What we don’t have is division. Multiplicative inverses are tough because we can’t invert any function which takes the value zero anywhere. Even the reciprocal function $f(x)=\frac{1}{x}$ is very much not continuous at $x=0$. In fact, it’s not even defined there! So how can we deal with this?

Well, the answer is sitting right there. The function $\frac{1}{x}$ is not continuous at that point. We have two definitions (by neighborhood systems and by nets) of what it means for a function between two topological spaces to be continuous at one point or another, and we said a function is continuous if it’s continuous at every point in its domain. So we can throw out some points and restrict our attention to a subspace where the function is continuous. Here, for instance, we can define a function $f:\mathbb{R}\setminus\{0\}\rightarrow\mathbb{R}$ by $f(x)=\frac{1}{x}$, and this function is continuous at each point in its domain.

So what we should really be considering is this: for each subspace $X\subseteq\mathbb{R}$ we have a collection $C^0(X)$ of those real-valued functions which are continuous on $X$. Each of these is a commutative $\mathbb{R}$-algebra, just like we saw for the collection of functions continuous on all of $\mathbb{R}$.

But we may come up with two functions over different domains that we want to work with. How do we deal with them together? Well, let’s say we have a function $f\in C^0(X)$ and another one $g\in C^0(Y)$, where $Y\subseteq X$. We may not be able to work with $g$ at the points in $X$ that aren’t in $Y$, but we can certainly work with $f$ at just those points of $X$ that happen to be in $Y$. That is, we can restrict the function $f$ to the function $f|_Y$. It’s the exact same function, except it’s only defined on $Y$ instead of all of $X$. This gives us a homomorphism of $\mathbb{R}$-algebras $\underline{\hphantom{X}}|_Y:C^0(X)\rightarrow C^0(Y)$. (If you’ve been reading along for a while, how would a category theorist say this?)

As an example, we have the identity function $f(x)=x$ in $C^0(\mathbb{R})$ and the reciprocal function $g(x)=\frac{1}{x}$ in $C^0(\mathbb{R}\setminus\{0\})$. We can restrict the identity function by forgetting that it has a value at ${0}$ to get another function $f|_{\mathbb{R}\setminus\{0\}}$, which we will also denote by $x$. Then the product $f|_{\mathbb{R}\setminus\{0\}}g$ is the constant function $1\in C^0(\mathbb{R}\setminus\{0\})$. Notice that this is not the constant function on $\mathbb{R}$, because it’s not defined at ${0}$.
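As a minimal sketch of this domain bookkeeping (the class and its names are my own invention, not the post’s), we can model a function together with its domain and let multiplication restrict to the intersection:

```python
class PartialFn:
    """A real-valued function carried along with its domain, given as a
    membership predicate. Illustrative sketch only; the class name and
    interface are my own, not the post's."""
    def __init__(self, rule, in_domain=lambda x: True):
        self.rule = rule
        self.in_domain = in_domain

    def __call__(self, x):
        if not self.in_domain(x):
            raise ValueError("outside the domain")
        return self.rule(x)

    def __mul__(self, other):
        # The product only makes sense on the intersection of the domains.
        return PartialFn(
            lambda x: self.rule(x) * other.rule(x),
            lambda x: self.in_domain(x) and other.in_domain(x),
        )

x_fn = PartialFn(lambda x: x)                            # x, on all of R
recip = PartialFn(lambda x: 1.0 / x, lambda x: x != 0)   # 1/x, on R minus {0}
prod = x_fn * recip   # equals 1, but only on R minus {0}
```

Here `prod` evaluates to $1$ wherever it is defined, but asking for its value at ${0}$ fails, matching the automatic restriction to the intersection of domains.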

Now as far as language goes, we usually drop all mention of domains and assume by default that the domain is “wherever the function makes sense”. That is, whenever we see $\frac{1}{x}$ we automatically restrict to nonzero real numbers, and whenever we combine two functions on different domains we automatically restrict to the intersection of their domains, all without explicit comment.

We do have to be a bit careful here, though, because when we see $\frac{x}{x}$, we also restrict to nonzero real numbers. This is not the constant function $1:\mathbb{R}\rightarrow\mathbb{R}$ because as it stands it’s not defined for $x=0$. Clearly, this is a little nutty and pedantic, so tomorrow we’ll come back and see how to cope with it.

December 18, 2007 Posted by | Analysis, Calculus | 2 Comments

The Orbit Method

Over at Not Even Wrong, there’s a discussion of David Vogan’s talks at Columbia about the “orbit method” or “orbit philosophy”. This is the view that there is — or at least there should be — a correspondence between unitary irreps of a Lie group $G$ and the orbits of the coadjoint action of $G$ on the dual of its Lie algebra. As Woit puts it:

This is described as a “method” or “philosophy” rather than a theorem because it doesn’t always work, and remains poorly understood in some cases, while at the same time having shown itself to be a powerful source of inspiration in representation theory.

What he doesn’t say in so many words (but which I’m just rude enough to) is that the same statement applies to a lot of theoretical physics. Path integrals are, as they currently stand, prima facie nonsense. In some cases we’ve figured out how to make sense of them, and to give real meaning to the conceptual framework of what should happen. And this isn’t a bad thing. Path integrals have proven to be a powerful source of inspiration, and a lot of actual, solid mathematics and physics has come out of trying to determine what the hell they’re supposed to mean.

Where this becomes a problem is when people take the conceptual framework as literal truth rather than as the inspirational jumping-off point it properly is.

December 18, 2007

Archimedean Groups and the Largest Archimedean Field

Okay, I’d promised to get back to the fact that the real numbers form the “largest” Archimedean field. More precisely, any Archimedean field is order-isomorphic to a subfield of $\mathbb{R}$.

There’s an interesting side note here. I was thinking about this and couldn’t quite see my way forward. So I started asking around Tulane’s math department and seeing if anyone knew. Someone pointed me towards Mike Mislove, and when I asked him, he suggested we ask Laszlo Fuchs around the corner from him. Dr. Fuchs, it turned out, did know the answer, and it was in a book he’d written himself: Partially Ordered Algebraic Systems. It’s an interesting little volume, which I may come back and mine later for more topics.

Anyhow, we’ll do this a little more generally. First let’s talk about Archimedean ordered groups a bit. In a totally-ordered group $G$ we’ll say two elements $a$ and $b$ are “Archimedean equivalent” ($a\sim b$) if there are natural numbers $m$ and $n$ so that $|a|<|b|^m$ and $|b|<|a|^n$ (here I’m using the absolute value that comes with any totally-ordered group). That is, neither one is infinitesimal with respect to the other. This can be shown to be an equivalence relation, so it chops the elements of $G$ into equivalence classes. There are always at least two in any nontrivial group because the identity element is infinitesimal with respect to everything else. We say a group is Archimedean if there are only two Archimedean equivalence classes. That is, for any $a$ and $b$ other than the identity, there is a natural number $n$ with $|a|<|b|^n$.

Now we have a theorem of Hölder which says that any Archimedean group is order-isomorphic to a subgroup of the real numbers with addition. In particular, we will see that any Archimedean group is commutative.

Now either $G$ has a least positive element $g$ or it doesn’t. If it does, then $e\leq x<g$ implies that $x=e$ ($e$ is the identity of the group). By the Archimedean property, any element $a$ has an integer $n$ so that $g^n\leq a<g^{n+1}$. Then we can multiply by $g^{-n}$ to find that $e\leq g^{-n}a<g$, so $g^{-n}a=e$. Every element is thus some power of $g$, and the group is isomorphic to the integers $\mathbb{Z}\subseteq\mathbb{R}$.

On the other hand, what if, given a positive $x$, we can always find a positive $y$ with $y<x$? Now $y^2$ may be greater than $x$, but in that case we can show that $(xy^{-1})^2\leq x$, and $xy^{-1}$ is itself less than $x$; so either way we have an element $z$ with $e<z$ and $z^2\leq x$.

Now if two positive elements $a$ and $b$ fail to commute then without loss of generality we can assume $ba<ab$. Then we pick $x=aba^{-1}b^{-1}>e$ and choose a $z$ to go with this $x$. By the Archimedean property we’ll have numbers $m$ and $n$ with $z^m\leq a<z^{m+1}$ and $z^n\leq b<z^{n+1}$. Thus we find that $x<z^2$, which contradicts how we picked $z$. And thus $G$ is commutative.
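Spelling out the final estimate, which the prose compresses: the Archimedean bounds for $a$ and $b$ combine, using only monotonicity of multiplication, as

```latex
z^m \leq a < z^{m+1}, \qquad z^n \leq b < z^{n+1}
\quad\Longrightarrow\quad
ab < z^{m+n+2}, \qquad a^{-1}b^{-1} \leq z^{-(m+n)},
```

so that $x=(ab)(a^{-1}b^{-1})<z^{m+n+2}z^{-(m+n)}=z^2\leq x$, which is impossible.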

So we can pick some positive element $a\in G$ and just set $f(a)=1\in\mathbb{R}$. Now we need to find where to send every other element. To do this, note that for any $b\in G$ and any rational number $\frac{m}{n}\in\mathbb{Q}$ we’ll either have $a^m\leq b^n$ or $a^m\geq b^n$, and both of these situations must arise by the Archimedean property. This separates the rational numbers into two nonempty collections — a cut! So we define $f(b)$ to be the real number specified by this cut. It’s straightforward now to show that $f(bc)=f(b)+f(c)$, and thus establish the order isomorphism.
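Here is a hedged numerical sketch of this cut construction for one concrete Archimedean group: the positive rationals under multiplication, written multiplicatively just as above. Taking $a$ and $b$ to be integers at least $2$ keeps the arithmetic exact; the function and its names are my own, and the true value of $f(b)$ here is $\log_a b$.

```python
from fractions import Fraction

def cut_value(a, b, max_den=50):
    """Approximate f(b), normalized so that f(a) = 1, as the largest
    rational m/n (with n <= max_den) in the lower cut {m/n : a**m <= b**n}.
    Assumes integers a, b >= 2, so all comparisons are exact."""
    best = Fraction(0)
    for n in range(1, max_den + 1):
        m = 0
        # Find the largest m with a**m <= b**n.
        while a ** (m + 1) <= b ** n:
            m += 1
        best = max(best, Fraction(m, n))
    return best
```

For each denominator $n$ the cut pins $f(b)$ between $\frac{m}{n}$ and $\frac{m+1}{n}$, so the error of this approximation is below $\frac{1}{50}$ by default; additivity $f(bc)=f(b)+f(c)$ can be checked numerically the same way.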

So all Archimedean groups are just subgroups of $\mathbb{R}$ with addition as their operation. In fact, homomorphisms of such groups are just as simple.

Say that we have a nontrivial Archimedean group $A\subseteq\mathbb{R}$, a (possibly trivial) Archimedean group $B\subseteq\mathbb{R}$, and a homomorphism $f:A\rightarrow B$. If $f(a)=0$ for some positive $a\in A$ then this is just the trivial homomorphism sending everything to zero, since for any positive $x$ there is a natural number $n$ so that $x<na$, and so $0\leq f(x)\leq nf(a)=0$. In this case the homomorphism is “multiply by ${0}$”.

On the other hand, take any two positive elements $a_1,a_2\in A$ and consider the quotients (in $\mathbb{R}$) $\frac{a_1}{a_2}$ and $\frac{f(a_1)}{f(a_2)}$. If they’re different (say, $\frac{f(a_1)}{f(a_2)}<\frac{a_1}{a_2}$) then we can pick a rational number $\frac{m}{n}$ between them. Then $nf(a_1)<mf(a_2)$, while $ma_2<na_1$, which contradicts the order-preserving property of the homomorphism! Thus we find the ratio $\frac{f(a)}{a}$ must be a constant $r>0$, and the homomorphism is “multiply by $r$”.

Now let’s move up to Archimedean rings, whose definition is the same as that for Archimedean fields. In this case, either the product of any two elements is ${0}$ (we have a “zero ring”) and the additive group is order-isomorphic to a subgroup of $\mathbb{R}$, or the ring is order-isomorphic to a subring of $\mathbb{R}$. If we have a zero ring, then the only data left is an Archimedean group, which the above discussion handles, so we’ll just assume that we have some nonzero product and show that we have an order-isomorphism with a subring of $\mathbb{R}$.

So we’ve got some Archimedean ring $R$ and its additive group $R_+$. By the theorem above, $R_+$ is order-isomorphic to a subgroup of $\mathbb{R}$. We also know that for any positive $a\in R$ the operation $\lambda_a(x)=a\cdot x$ (the dot will denote the product in $R$) is an order-homomorphism from $R_+$ to itself. Thus there is some non-negative real number $r_a$ so that $\lambda_a(x)=r_ax$. If we define $r_{-a}=-r_a$ then the assignment $a\mapsto r_a$ gives us an order-homomorphism from $R_+$ to some group $S_+\subseteq\mathbb{R}$.

Again, we must have $r_a=sa$ for some non-negative real number $s$. If $s=0$ then all multiplications in $R$ would give zero, so $0<s$, and so the assignment is invertible. Now we see that $a\cdot b=r_ab=sab$. Similarly, we have $r_{a\cdot b}=s(a\cdot b)=(sa)(sb)=r_ar_b$, and so the function $a\mapsto r_a$ is an order-isomorphism of rings.

In particular, a field $\mathbb{F}$ can’t be a zero ring, and so there must be an injective order-homomorphism $\mathbb{F}\rightarrow\mathbb{R}$. In fact, there can be only one, for if there were more than one the images would be related by multiplication by some positive $r\in\mathbb{R}$: $\phi_1(a)=r\phi_2(a)$. But then $r\phi_2(a)\phi_2(b)=r\phi_2(a\cdot b)=\phi_1(a\cdot b)=\phi_1(a)\phi_1(b)=r^2\phi_2(a)\phi_2(b)$, and so $r=1$.

We can sum this up by saying that the real numbers $\mathbb{R}$ are a terminal object in the category of Archimedean fields.

December 17, 2007