# The Unapologetic Mathematician

## Limits at Infinity

One of our fundamental concepts is the limit of a function at a point. But soon we’ll need to consider what happens as we let the input to a function grow without bound.

So let’s consider a function $f(x)$ defined for $x\in\left(a,\infty\right)$, where this interval means the set $\left\{x\in\mathbb{R}|a<x\right\}$. It really doesn’t matter here what $a$ is, just that we’ve got some point where $f$ is defined for all larger numbers. We want to come up with a sensible definition for $\lim\limits_{x\rightarrow\infty}f(x)$.

When we took a limit at a point $p$ we said that $\lim\limits_{x\rightarrow p}f(x)=L$ if for every $\epsilon>0$ there is a $\delta>0$ so that $0<\left|x-p\right|<\delta$ implies $\left|f(x)-L\right|<\epsilon$. But this talk of $\epsilon$ and $\delta$ is all designed to stand in for neighborhoods in a metric space. Picking a $\delta$ defines a neighborhood of the point $p$. All we need is to come up with a notion of a “neighborhood” of $\infty$.

What we’ll use is a ray just like the one above: $\left(R,\infty\right)$. This seems to make sense as the collection of real numbers “near” infinity. So let’s drop it into our definition: the limit of a function at infinity, $\lim\limits_{x\rightarrow\infty}f(x)$ is $L$ if for every $\epsilon>0$ there is an $R$ so that $x>R$ implies $\left|f(x)-L\right|<\epsilon$. It’s straightforward to verify from here that this definition of limit satisfies the same laws of limits as the earlier definition.

Finally, we can define neighborhoods of $-\infty$ as leftward rays $\left(-\infty,R\right)=\left\{x\in\mathbb{R}|x<R\right\}$. Then we get a similar definition of the limit of a function at $-\infty$.

One particular limit that’s useful to have as a starting point is $\lim\limits_{x\rightarrow\infty}\frac{1}{x}=0$. Indeed, given $\epsilon>0$ we can set $R=\frac{1}{\epsilon}$. Then if $x>\frac{1}{\epsilon}$ we see that $\epsilon>\frac{1}{x}>0$, establishing the limit.
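As a quick numeric sanity check (an illustration only, not part of the argument), we can pick a few values of $\epsilon$, set $R=\frac{1}{\epsilon}$ as above, and sample points beyond $R$ in Python:

```python
# Sanity check for lim_{x -> infinity} 1/x = 0: given epsilon, the choice
# R = 1/epsilon should force |1/x - 0| < epsilon at every sampled x > R.
def within_epsilon_beyond_R(epsilon, samples=1000):
    R = 1.0 / epsilon
    xs = [R * (1.0 + k / 100.0) for k in range(1, samples + 1)]  # points beyond R
    return all(abs(1.0 / x - 0.0) < epsilon for x in xs)

print(all(within_epsilon_beyond_R(eps) for eps in (0.5, 1e-3, 1e-6)))  # True
```

Of course a finite sample proves nothing; the implication $x>\frac{1}{\epsilon}\Rightarrow\frac{1}{x}<\epsilon$ does all the real work.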

From here we can handle the limit at infinity of any rational function $f(x)=\frac{P(x)}{Q(x)}$. Let’s split off the top degree terms from the polynomials $P(x)=ax^m+p(x)$ and $Q(x)=bx^n+q(x)$. Divide through top and bottom by $bx^n$ to write

$\displaystyle f(x)=\frac{\frac{a}{b}x^{m-n}+\frac{p(x)}{bx^n}}{1+\frac{q(x)}{bx^n}}$

Now every term in $q(x)$ has degree less than $n$, so each is a multiple of some power of $\frac{1}{x}$. The laws of limits then tell us that they go to ${0}$, and the limit of the denominator of $f$ is $1$. Thus our limit is the limit of the numerator.

If $m>n$ we have a positive power of $x$ as our leading term, which goes up to $\infty$ or down to $-\infty$ (depending on the sign of $\frac{a}{b}$). If $m<n$, all the powers are negative, and thus the limit is ${0}$. And if $m=n$, then all the other powers are negative, and the limit is $\frac{a}{b}$.

So if the numerator of $f$ has the higher degree, we have $\lim\limits_{x\rightarrow\infty}f(x)=\pm\infty$. If the denominator has higher degree, then $\lim\limits_{x\rightarrow\infty}f(x)=0$. If the degrees are equal, we compare the leading coefficients and find $\lim\limits_{x\rightarrow\infty}f(x)=\frac{a}{b}$.
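This trichotomy is easy to mechanize. Here's a small Python sketch (the coefficient-list encoding, lowest degree first with nonzero leading coefficient, is my own convention for this illustration):

```python
from fractions import Fraction

def rational_limit_at_infinity(P, Q):
    """Limit at +infinity of P(x)/Q(x).  P and Q are coefficient lists
    [c_0, c_1, ..., c_deg], lowest degree first, with nonzero leading
    coefficient.  Returns 0, a Fraction a/b, or the string '+inf'/'-inf'."""
    m, n = len(P) - 1, len(Q) - 1   # degrees of numerator and denominator
    a, b = P[-1], Q[-1]             # leading coefficients
    if m < n:
        return 0
    if m == n:
        return Fraction(a, b)
    return '+inf' if a * b > 0 else '-inf'

print(rational_limit_at_infinity([1, 0, 3], [5, 0, 0, 2]))  # (3x^2+1)/(2x^3+5): 0
print(rational_limit_at_infinity([1, 0, 3], [5, 2]))        # (3x^2+1)/(2x+5): +inf
print(rational_limit_at_infinity([0, 0, 6], [1, 0, 4]))     # 6x^2/(4x^2+1): 3/2
```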

April 17, 2008 Posted by | Analysis, Calculus | 4 Comments

## Differentiable Convex Functions

We showed that all convex functions are continuous. Now let’s assume that we’ve got one that’s differentiable too. Actually, this isn’t a very big imposition. It turns out that a result called Rademacher’s Theorem will tell us that any Lipschitz function is differentiable “almost everywhere”.

Okay, so what does differentiability mean? Remember our secant-slope function:

$\displaystyle s(\left[a,b\right])=\frac{f(b)-f(a)}{b-a}$

Differentiability says that as we shrink the interval $\left[a,b\right]$ down to a single point $c$ the function has a limit, and that limit is $f'(c)$.

So now take $a<b$. We can pick a $c$ between them and points $x$ and $y$ so that $a<x<c<y<b$. Now we compare slopes to find

$s(\left[a,x\right])\leq s(\left[a,c\right])\leq s(\left[c,b\right])\leq s(\left[y,b\right])$

so as we let $x$ approach $a$ and $y$ approach $b$ we find

$f'(a)\leq s(\left[a,c\right])\leq s(\left[c,b\right])\leq f'(b)$

And so the derivative of $f$ must be nondecreasing.

Let’s look at the statement $f'(a)\leq s(\left[a,x\right])$ a little more closely. We can expand this out to say

$\displaystyle f'(a)\leq\frac{f(x)-f(a)}{x-a}$

which we can rewrite as $f(a)+f'(a)(x-a)\leq f(x)$. That is, while the function lies below any of its secants it lies above any of its tangents. In particular, if we have a local minimum where $f'(a)=0$ then $f(a)\leq f(x)$, and the point is also a global minimum.
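Both halves of that picture — below the secants, above the tangents — are easy to spot-check numerically for a specific convex function. Here's an illustration (not a proof) with $f(x)=x^2$ in Python:

```python
# f(x) = x^2 is convex.  Check on a grid that it lies above the tangent
# line at a = 1 and below the secant over [x1, x2] = [-1, 2].
def f(x): return x * x
def fprime(x): return 2 * x   # the derivative, known in closed form here

a = 1.0
above_tangent = all(f(a) + fprime(a) * (x - a) <= f(x) + 1e-12
                    for x in [a + 0.01 * k for k in range(-100, 101)])

x1, x2 = -1.0, 2.0
slope = (f(x2) - f(x1)) / (x2 - x1)
below_secant = all(f(x) <= f(x1) + slope * (x - x1) + 1e-12
                   for x in [x1 + (x2 - x1) * k / 200 for k in range(201)])

print(above_tangent, below_secant)  # True True
```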

If the derivative $f'(x)$ is itself differentiable, then the differential mean-value theorem tells us that $f''(x)\geq0$ since $f'(x)$ is nondecreasing. This leads us back to the second derivative test to distinguish maxima and minima, since a function is convex near a local minimum.

April 16, 2008 Posted by | Analysis, Calculus | 4 Comments

## Convex Functions are Continuous

Yesterday we defined a function $f$ defined on an open interval $I$ to be “convex” if its graph lies below all of its secants. That is, given any $x_1<x_2$ in $I$, for any point $x\in\left[x_1,x_2\right]$ we have

$\displaystyle f(x)\leq f(x_1)+\frac{f(x_2)-f(x_1)}{x_2-x_1}(x-x_1)$

which we can rewrite as

$\displaystyle\frac{f(x)-f(x_1)}{x-x_1}\leq\frac{f(x_2)-f(x_1)}{x_2-x_1}$

or (with a bit more effort) as

$\displaystyle\frac{f(x_2)-f(x_1)}{x_2-x_1}\leq\frac{f(x_2)-f(x)}{x_2-x}$

That is, the slope of the secant above $\left[x_1,x\right]$ is less than that above $\left[x_1,x_2\right]$, which is less than that above $\left[x,x_2\right]$. Here’s a graph to illustrate what I mean:

The slope of the red line segment is less than that of the green, which is less than that of the blue.

In fact, we can push this a bit further. Let $s$ be the function which takes a subinterval $\left[a,b\right]\subseteq I$ and gives back the slope of the secant over that subinterval:

$\displaystyle s(\left[a,b\right])=\frac{f(b)-f(a)}{b-a}$

Now if $\left[x_1,x_2\right]$ and $\left[x_3,x_4\right]$ are two subintervals of $I$ with $x_1\leq x_3$ and $x_2\leq x_4$ then we find

$s(\left[x_1,x_2\right])\leq s(\left[x_1,x_4\right])\leq s(\left[x_3,x_4\right])$

by using the above restatements of the convexity property. Roughly, as we move to the right our secants get steeper.

If $\left[a,b\right]$ is a subinterval of $I$, I claim that we can find a constant $C$ such that $\left|s(\left[x_1,x_2\right])\right|\leq C$ for all $\left[x_1,x_2\right]\subseteq\left[a,b\right]$. Indeed, since $I$ is open we can find points $a'$ and $b'$ in $I$ with $a'<a$ and $b<b'$. Then since secants get steeper we find that

$s(\left[a',a\right])\leq s(\left[x_1,x_2\right])\leq s(\left[b,b'\right])$

giving us the bound we need. This tells us that within $\left[a,b\right]$ we have $|f(x_2)-f(x_1)|\leq C|x_2-x_1|$ (the technical term here is that $f$ is “Lipschitz”, which is what Mr. Livshits kept blowing up about), and it’s straightforward from here to show that $f$ must be uniformly continuous on $\left[a,b\right]$, and thus continuous everywhere in $I$ (but maybe not uniformly so!)
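Here's the bound in action for a concrete convex function — an illustration only, with $f(x)=x^2$, $\left[a,b\right]=\left[-1,2\right]$ inside $I$, and $a'=-1.5$, $b'=2.5$:

```python
# Every secant slope over a subinterval of [a,b] = [-1,2] should be
# squeezed between s([a',a]) and s([b,b']) for a' = -1.5, b' = 2.5.
def f(x): return x * x
def s(p, q): return (f(q) - f(p)) / (q - p)   # secant slope over [p,q]

a, b, ap, bp = -1.0, 2.0, -1.5, 2.5
lo, hi = s(ap, a), s(b, bp)
grid = [a + (b - a) * k / 50 for k in range(51)]
ok = all(lo <= s(x1, x2) <= hi
         for i, x1 in enumerate(grid) for x2 in grid[i + 1:])

C = max(abs(lo), abs(hi))   # a Lipschitz constant for f on [a,b]
print(ok, C)  # True 4.5
```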

April 15, 2008 Posted by | Analysis, Calculus | 2 Comments

## R.I.P., Dr. Wheeler

John Archibald Wheeler died last night. This will take some getting used to, since for years whenever I’ve been reminded of him I’ve thought, “surely he’s not still kicking around, is he?” A truly singular individual.

April 14, 2008 Posted by | Uncategorized | Leave a comment

## Unapologetic and Unavailable in Brazil

(I accidentally wrote this as a page, I just noticed. This new WordPress interface takes a bit of getting used to.)

Evidently my Brazilian readers (if I have any) will be readers no longer, now that a Brazilian court has ordered all ISPs to block WordPress. Officially it’s about one well-known and powerful lawyer there being insulted by something one WordPress blogger said. Me, I think it’s all part of a cunning plan on Isabel‘s part to dominate the Brazilian blathosphere.

Hat tip to Frank Pasquale at Concurring Opinions.

April 12, 2008 Posted by | Uncategorized | 7 Comments

## Exponentials and Powers

The exponential function $\exp$ is, as might be expected, closely related to the operation of taking powers. In fact, any of our functions satisfying the exponential property will have a similar relation.

To this end, consider such a function $f$ and define a positive number $b=f(1)$. Then we can calculate $f(2)=f(1)f(1)=b^2$, $f(3)=b^3$, and so on. Since $f(\frac{1}{2})f(\frac{1}{2})=f(1)=b$ we see that $f(\frac{1}{2})=b^{\frac{1}{2}}$, and similarly for all other rational numbers.

So we have a function $f(x)$ defined on all real numbers, and we have a function $b^x$ defined on all rational numbers, and where both functions are defined they agree. Since the rationals are dense in the reals (the latter being the uniform completion of the former) there can be only one continuous extension of $b^x$ to the whole real line. We’ll discard the function $f(x)$ and just write $b^x$ for this extension from now on. In particular, the function $\exp$ gives us a special number $e=\exp(1)$, and we write $e^x=\exp(x)$.

Like we saw before, we can use the exponential function $e^x$ to give all the other exponentials $b^x$. We know that $b^x=e^{C_bx}$ for some constant $C_b$, but which? If we take the natural logarithm of both sides we see that $\ln(b^x)=C_b x$. In particular, setting $x=1$ we find $C_b=\ln(b)$. That is, given any positive real number $b$ we can define the exponential $b^x$ as $e^{\ln(b)x}$.
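This identity is easy to check numerically — a quick illustration using Python's math library:

```python
import math

# Check b^x == exp(ln(b) * x) for a few positive bases and exponents.
def pow_via_exp(b, x):
    return math.exp(math.log(b) * x)

checks = [(2.0, 10.0), (0.5, -3.0), (math.pi, 2.5)]
print(all(math.isclose(b ** x, pow_via_exp(b, x)) for b, x in checks))  # True
```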

April 11, 2008 Posted by | Analysis, Calculus | 1 Comment

## Differentiable Exponential Functions

The exponential property is actually a rather stringent condition on a differentiable function $f:\left(0,\infty\right)\rightarrow\mathbb{R}$. Let’s start by assuming that $f$ is a differentiable exponential function and see what happens.

We calculate the derivative as usual by taking the limit of the difference quotient

$\displaystyle f'(x)=\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$

Then the exponential property says that our derivative is

$\displaystyle f'(x)=\lim\limits_{\Delta x\rightarrow0}\frac{f(x)f(\Delta x)-f(0)f(x)}{\Delta x}=f(x)\lim\limits_{\Delta x\rightarrow0}\frac{f(\Delta x)-f(0)}{\Delta x}=f(x)f'(0)$

So we have a tight relationship between the function and its own derivative. Let’s see what happens for the exponential function $\exp$. Since it’s the functional inverse of $\ln$ we can use the chain rule to calculate

$\displaystyle\exp'(y)=\frac{1}{\ln'(\exp(y))}=\frac{1}{\frac{1}{\exp(y)}}=\exp(y)$

Showing that this function is its own derivative. That is, this is the exponential function with $f'(0)=1$.
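We can watch $\exp'=\exp$ happen numerically with a symmetric difference quotient — an illustration, leaning on Python's built-in `math.exp`:

```python
import math

# Estimate exp'(y) by a symmetric difference quotient and compare
# against exp(y) itself at a few sample points.
h = 1e-6
ys = [-2.0, 0.0, 1.5]
ok = all(math.isclose((math.exp(y + h) - math.exp(y - h)) / (2 * h),
                      math.exp(y), rel_tol=1e-8)
         for y in ys)
print(ok)  # True
```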

Since a general (differentiable) exponential function $f$ is a homomorphism from the additive group of reals to the multiplicative group of positive reals, we can follow it by the natural logarithm. This gives a differentiable homomorphism from the additive reals to themselves, which must be multiplication by some constant $C_f$. That is: $\ln(f(x))=C_fx$. How can we calculate this constant? Take derivatives!

$\displaystyle\frac{d}{dx}\ln(f(x))=\frac{1}{f(x)}f'(x)=\frac{f(x)f'(0)}{f(x)}=f'(0)$

So our constant is the derivative $f'(0)$ from before. Of course we could also write

$\ln(f(x))=f'(0)x=\ln(\exp(f'(0)x))$

And since $\ln$ is invertible this tells us that $f(x)=\exp(f'(0)x)$. That is, every differentiable exponential function comes from $\exp$ by taking some constant multiple of the input.
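For instance — an illustration only — take $f(x)=2^x$. Estimating $f'(0)$ numerically should recover $\ln(2)$, and then $f(x)$ should match $\exp(f'(0)x)$:

```python
import math

f = lambda x: 2.0 ** x               # a differentiable exponential function
h = 1e-6
fprime0 = (f(h) - f(-h)) / (2 * h)   # symmetric difference quotient at 0

print(math.isclose(fprime0, math.log(2), rel_tol=1e-8))             # True
print(math.isclose(f(3.7), math.exp(fprime0 * 3.7), rel_tol=1e-8))  # True
```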

By the usual yoga of inverse functions we can then see that every differentiable logarithmic function (an inverse to some differentiable exponential function) is a constant multiple of the natural logarithm $\ln$. That is, if $g(x)$ satisfies the logarithmic property, then $g(x)=C_g\ln(x)$.

April 10, 2008 Posted by | Analysis, Calculus | 7 Comments

## The Exponential Property

We’ve defined the natural logarithm and shown that it is, in fact, a logarithm. That is, it’s a homomorphism from the multiplicative group of positive real numbers to the additive group of all real numbers. Now I assert that this function is in fact an isomorphism.

First off, the derivative of $\ln(x)$ is $\frac{1}{x}$, which is always positive for positive $x$. Thus it’s always strictly increasing. That is, if $0<x<y$ then $\ln(x)<\ln(y)$. So no two distinct numbers ever have the same natural logarithm, and the function is thus injective.

Flipping this around tells us that we definitely have some nonzero values for the function. For example, we know that $0<\ln(2)$. Now, since the real numbers are an Archimedean field, no matter how big a number $y>0$ we pick, there will be some natural number $n$ so that $y<n\ln(2)=\ln\left(2^n\right)$, where the latter equality follows from the logarithmic property.

That is, no matter how large a number we pick, $\ln$ takes values at least that large. But because $\ln$ is continuous on a connected interval there must be some number $x$ with $\ln(x)=y$. Similarly, if $y<0$ then there will be some $x$ with $\ln(x)=-y$, and thus $\ln(\frac{1}{x})=y$. Thus the natural logarithm is surjective.
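The surjectivity argument is really an intermediate-value argument, and bisection makes it constructive. Here's a sketch (using Python's `math.log` to stand in for $\ln$; the doubling loop is the Archimedean step above):

```python
import math

def inverse_ln(y, tol=1e-12):
    """Find x > 0 with ln(x) = y by bisection."""
    lo, hi = 1.0, 2.0
    while math.log(hi) < y:   # Archimedean step: ln(2^n) = n ln(2) passes any y
        hi *= 2.0
    while math.log(lo) > y:   # similarly downward for negative y
        lo /= 2.0
    while hi - lo > tol:      # intermediate value theorem, made concrete
        mid = (lo + hi) / 2
        if math.log(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(math.isclose(inverse_ln(1.0), math.e))         # True
print(math.isclose(inverse_ln(-2.0), math.exp(-2)))  # True
```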

So, since our function is one-to-one and onto, it has an inverse function. We will call this function the “exponential” (denoted $\exp$), and define it to be the unique function satisfying

$\exp(\ln(x))=x$
$\ln(\exp(y))=y$

for all positive real $x$ and all real $y$.

From here it’s straightforward to see that $\exp$ must be the inverse homomorphism. That is, given two real numbers $y_1$ and $y_2$ we know there must be (unique!) positive real numbers $x_1$ and $x_2$ with $\ln(x_i)=y_i$. Then we calculate

$\exp(y_1+y_2)=\exp(\ln(x_1)+\ln(x_2))=\exp(\ln(x_1 x_2))=$
$x_1x_2=\exp(\ln(x_1))\exp(\ln(x_2))=\exp(y_1)\exp(y_2)$

And it’s clear from here that $\exp(0)=1$. A homomorphism from the additive reals to the multiplicative positive reals like this is said to satisfy the “exponential property”, which is just the reverse of the logarithmic property from last time.
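A quick numerical check of the exponential property — an illustration with Python's `math.exp`:

```python
import math

# exp(y1 + y2) should equal exp(y1) * exp(y2), and exp(0) should be 1.
pairs = [(0.3, 1.7), (-2.5, 0.5), (10.0, -10.0)]
ok = all(math.isclose(math.exp(y1 + y2), math.exp(y1) * math.exp(y2))
         for y1, y2 in pairs)
print(ok, math.exp(0.0) == 1.0)  # True True
```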

April 10, 2008 Posted by | Analysis, Calculus | 8 Comments

## The Logarithmic Property

Whoops… Between preparing my exam, practicing my rumba, and adapting to the new WordPress interface, I forgot to actually post today’s installment.

Yesterday we defined the natural logarithm as the function

$\displaystyle\ln(x)=\int\limits_1^x\frac{dt}{t}$

on the interval $\left(0,\infty\right)$. This function is differentiable everywhere in this interval, and its derivative is $\frac{1}{x}$ at each point $x$.

We call this function a logarithm because it satisfies the “logarithmic property”. Simply put, it’s a homomorphism of groups from the group of positive real numbers under multiplication to the group of all real numbers under addition.

That is, since the real numbers are an ordered field they are a fortiori a group if we just throw away the multiplication and order structures. Also, if we get rid of that pesky noninvertible ${0}$ element, they’re a group under multiplication, and the positive elements are a subgroup. The logarithm takes elements of this group and sends them to the additive group, and the homomorphism property reads: $f(xy)=f(x)+f(y)$. In particular, we must have $f(1)=0$.

So is our “natural logarithm” a logarithm? First off, we can easily check that

$\displaystyle\ln(1)=\int\limits_1^1\frac{dt}{t}=0$

As for the other property, let’s write

$\displaystyle\ln(xy)=\int\limits_1^{xy}\frac{dt}{t}=\int\limits_1^x\frac{dt}{t}+\int\limits_x^{xy}\frac{dt}{t}=\ln(x)+\int\limits_x^{xy}\frac{dt}{t}$

Now let’s take the second term on the right here and perform a change of variables, setting $u=\frac{t}{x}$. Then we have $du=\frac{dt}{x}$, and as $t$ runs over $\left[x,xy\right]$ the new variable $u$ runs over $\left[1,y\right]$. That is, we have

$\displaystyle\int\limits_x^{xy}\frac{dt}{t}=\int\limits_1^y\frac{du}{u}=\ln(y)$

and the logarithmic property holds.
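We can even check the logarithmic property without ever invoking a logarithm function, by computing the integrals themselves with a midpoint Riemann sum — a numerical illustration of the proof, not part of it:

```python
# Approximate ln(u) = integral_1^u dt/t with a midpoint Riemann sum,
# then test additivity: ln(xy) should equal ln(x) + ln(y).
def ln_via_integral(u, n=200_000):
    h = (u - 1.0) / n
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(n))

x, y = 2.0, 3.5
lhs = ln_via_integral(x * y)
rhs = ln_via_integral(x) + ln_via_integral(y)
print(abs(lhs - rhs) < 1e-8)  # True
```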

April 9, 2008 Posted by | Analysis, Calculus | 9 Comments

## The Natural Logarithm

Before this little break, we saw how to define a function on the interval of integration by letting the upper limit of integration vary. We proved some properties about the functions we get like this, lining them up against the Fundamental Theorem of Calculus. In particular, integrating like this can construct antiderivatives.

Now let’s consider some of the most basic functions of one variable — monomials — and their derivatives. We know that the derivative of $x^n$ is $nx^{n-1}$ whenever $n$ is an integer. Let’s try running this backwards by using Riemann integration.

First for $n\geq0$ we know that $x^n$ is defined everywhere, so we can consider the function defined for any real $x$ by

$\displaystyle f(x)=\int\limits_0^xt^ndt$

Whatever function this is will have $x^n$ as its derivative. We can see that $\frac{x^{n+1}}{n+1}$ has this derivative, and we know that any two antiderivatives differ by a constant. That is, $f(x)=\frac{x^{n+1}}{n+1}+C$ for some real constant $C$. But we can also tell that $f(0)=0$ because in that case we’re integrating over a degenerate interval of zero width. This tells us that $0=f(0)=\frac{0^{n+1}}{n+1}+C=C$, and we’ve determined our constant.

How about for $n\leq-2$? Now our integrand $x^n$ has an asymptote at $x=0$ so we can’t integrate across it. Let’s start at $1$ and define a function for all positive real $x$ by

$\displaystyle f(x)=\int\limits_1^xt^ndt$

Again we know that the derivative $f'(x)$ will be $x^n$, and that $\frac{x^{n+1}}{n+1}$ is such an antiderivative. We also know that $f(1)=0$, which tells us that $0=f(1)=\frac{1^{n+1}}{n+1}+C=\frac{1}{n+1}+C$ so our constant of integration is $-\frac{1}{n+1}$. That is, we’ve defined the function $f(x)=\frac{x^{n+1}-1}{n+1}$ on the interval $\left(0,\infty\right)$.
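As a numerical cross-check (illustration only), a midpoint Riemann sum for $\int_1^x t^n\,dt$ should line up with the closed form $\frac{x^{n+1}-1}{n+1}$:

```python
# Compare the numeric integral of t^n from 1 to x against the closed
# form (x^(n+1) - 1)/(n + 1) derived above, here with n = -2.
def integral_power(x, n, steps=100_000):
    h = (x - 1.0) / steps
    return sum(h * (1.0 + (k + 0.5) * h) ** n for k in range(steps))

n, x = -2, 4.0
closed_form = (x ** (n + 1) - 1.0) / (n + 1)   # equals 3/4 here
print(abs(integral_power(x, n) - closed_form) < 1e-8)  # True
```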

Now what happens when we take this exact same procedure and apply it to the function $\frac{1}{x}$? There is no monomial whose derivative is a scalar multiple of $\frac{1}{x}$, so the above procedure breaks down. Still, there’s some function out there. Indeed, consider the integral

$\displaystyle F(x)=\int\limits_1^x\frac{1}{t}dt$

For any positive real number $x$ the integrator $t$ is of bounded variation on $\left[1,x\right]$ (in fact it’s monotone), and $\frac{1}{t}$ is continuous for positive $t$, so the integral is indeed defined. Since the integrator is differentiable for all positive values, the integral ${F}$ must be as well, and $F'(x)=\frac{1}{x}$.

That is, this procedure has defined for us an antiderivative of $\frac{1}{x}$ on the interval $\left(0,\infty\right)$. We call this function the “natural logarithm” and denote it $\ln(x)$. Tomorrow we’ll start exploring some of its properties.
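We can see the defining property $F'(x)=\frac{1}{x}$ numerically by differencing a Riemann-sum approximation of the integral — an illustration, not part of the construction:

```python
# F(x) = integral_1^x dt/t, computed by a midpoint Riemann sum; a
# symmetric difference quotient of F should approximate 1/x.
def F(x, steps=50_000):
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(steps))

x, d = 3.0, 1e-4
deriv = (F(x + d) - F(x - d)) / (2 * d)
print(abs(deriv - 1.0 / x) < 1e-6)  # True
```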

As a side note, those of you who have been paying close attention will notice that I have yet to use any function more complicated than a rational power of the variable. I’m following the pattern of “late transcendentals” in presenting the calculus. The alternative — “early transcendentals” — is to give a hand-waving (but not rigorous) definition of exponentials and logarithms early on to get more examples into the students’ hands. I advocate that position for college-level calculus classes for a number of reasons, but ultimately delaying the transcendentals makes for less unlearning later on.

April 7, 2008 Posted by | Analysis, Calculus | 4 Comments