The Unapologetic Mathematician

Laws of Limits

Okay, we know how to define the limit of a function at a point in the closure of its domain. But we don’t always want to invoke the whole machinery of all sequences converging to that point, or that of neighborhoods with the $\epsilon$-$\delta$ definition. Luckily, we have some shortcuts.

First off, we know that the constant function $f(x)=1$ and the identity function $f(x)=x$ are continuous and defined everywhere, so we immediately see that $\lim\limits_{x\rightarrow x_0}1=1$ and $\lim\limits_{x\rightarrow x_0}x=x_0$. Those are the basic functions we defined. We also defined some ways of putting functions together, and we’ll have a rule for each one telling us how to build limits for more complicated functions from limits for simpler ones.

We can multiply a function by a constant real number. If we have $\lim\limits_{x\rightarrow x_0}f(x)=L$ then we find $\lim\limits_{x\rightarrow x_0}\left[cf\right](x)=cL$. The case $c=0$ is trivial, since then $cf$ is the constant zero function, so assume $c\neq0$. Let’s say we’re given an error bound $\epsilon$. Then we can consider $\frac{\epsilon}{|c|}$, and use the assumption about the limit of $f$ to find a $\delta$ so that $0<|x-x_0|<\delta$ implies that $|f(x)-L|<\frac{\epsilon}{|c|}$. This, in turn, implies that $|\left[cf\right](x)-cL|=|c||f(x)-L|<|c|\frac{\epsilon}{|c|}=\epsilon$, and so the assertion is proved.
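As a quick sanity check (not a proof), we can verify an instance of the constant-multiple rule symbolically. This assumes the SymPy library, which is of course not part of the post itself:

```python
# Checking the constant-multiple rule lim [cf](x) = c lim f(x) with SymPy.
from sympy import Symbol, limit

x = Symbol('x')
c = 3                      # a fixed real constant
f = x**2 + 1               # a sample function with a known limit at x0 = 2

L = limit(f, x, 2)         # lim_{x -> 2} f(x) = 5
cL = limit(c * f, x, 2)    # lim_{x -> 2} [cf](x)

assert cL == c * L         # the rule: the limit of cf is c times the limit of f
```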

Similarly, we can add functions. If $\lim\limits_{x\rightarrow x_0}f_1(x)=L_1$ and $\lim\limits_{x\rightarrow x_0}f_2(x)=L_2$, then we find $\lim\limits_{x\rightarrow x_0}\left[f_1+f_2\right](x)=L_1+L_2$. Here we start with an $\epsilon$ and find $\delta_1$ and $\delta_2$ so that $0<|x-x_0|<\delta_i$ implies $|f_i(x)-L_i|<\frac{\epsilon}{2}$ for $i=1,2$. Then if we set $\delta$ to be the smaller of $\delta_1$ and $\delta_2$, the triangle inequality tells us that $0<|x-x_0|<\delta$ implies $|\left[f_1+f_2\right](x)-(L_1+L_2)|\leq|f_1(x)-L_1|+|f_2(x)-L_2|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$.
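The $\delta=\min(\delta_1,\delta_2)$ construction can be illustrated numerically. Here’s a small sketch (an illustration, not a proof) with the sample functions $f_1(x)=x$ and $f_2(x)=2x$ at $x_0=1$ — these particular functions are my choice, not anything from the argument above:

```python
# Numeric illustration of the sum rule's delta = min(delta_1, delta_2) trick.
# f1(x) = x and f2(x) = 2x at x0 = 1, so L1 = 1, L2 = 2, and L1 + L2 = 3.
def f1(x): return x
def f2(x): return 2 * x

x0, L1, L2 = 1.0, 1.0, 2.0
eps = 0.01

delta1 = eps / 2    # |f1(x) - L1| = |x - 1| < eps/2 whenever |x - x0| < eps/2
delta2 = eps / 4    # |f2(x) - L2| = 2|x - 1| < eps/2 whenever |x - x0| < eps/4
delta = min(delta1, delta2)

# sample points in the punctured delta-neighborhood of x0
for t in [x0 - 0.9 * delta, x0 + 0.5 * delta, x0 + 0.99 * delta]:
    assert 0 < abs(t - x0) < delta
    assert abs((f1(t) + f2(t)) - (L1 + L2)) < eps   # sum stays within eps of 3
```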

From these two we can see that the process of taking a limit at a point is linear. In particular, we also see that $\lim\limits_{x\rightarrow x_0}\left[f_1-f_2\right](x)=\lim\limits_{x\rightarrow x_0}f_1(x)-\lim\limits_{x\rightarrow x_0}f_2(x)$ by combining the two rules above. Similarly we can show that $\lim\limits_{x\rightarrow x_0}\left[f_1f_2\right](x)=\lim\limits_{x\rightarrow x_0}f_1(x)\lim\limits_{x\rightarrow x_0}f_2(x)$, which I’ll leave to you to verify as we did the rule for addition above.
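All three algebraic rules can be spot-checked on sample functions. Again this leans on SymPy (an assumption on my part), and the particular functions are just illustrative choices:

```python
# Spot-checking the sum, difference, and product rules for limits with SymPy.
from sympy import Symbol, limit, sin, cos

x = Symbol('x')
f1, f2 = sin(x) + 2, cos(x)    # sample functions, both with limits at x0 = 0

L1 = limit(f1, x, 0)           # 2
L2 = limit(f2, x, 0)           # 1

assert limit(f1 + f2, x, 0) == L1 + L2
assert limit(f1 - f2, x, 0) == L1 - L2
assert limit(f1 * f2, x, 0) == L1 * L2
```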

Another way to combine functions that I haven’t mentioned yet is composition. Let’s say we have functions $f_1:D_1\rightarrow\mathbb{R}$ and $f_2:D_2\rightarrow\mathbb{R}$. Then we can pick out those points $x\in D_1$ so that $f_1(x)\in D_2$ and call this collection $D$. Then we can apply the second function to get $f_2\circ f_1:D\rightarrow\mathbb{R}$, defined by $\left[f_2\circ f_1\right](x)=f_2(f_1(x))$. Our limit rule here is that if $f_2$ is continuous at $\lim\limits_{x\rightarrow x_0}f_1(x)$, then $\lim\limits_{x\rightarrow x_0}f_2(f_1(x))=f_2(\lim\limits_{x\rightarrow x_0}f_1(x))$. That is, we can pull limits past continuous functions. This is just a reflection of the fact that continuous functions are exactly those which preserve limits of sequences. In particular, a continuous function equals its own limit wherever it’s defined: $\lim\limits_{x\rightarrow x_0}f(x)=f(\lim\limits_{x\rightarrow x_0}x)=f(x_0)$.
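Pulling a limit past a continuous function can also be checked on an example. Here I use the everywhere-continuous exponential as the outer function (my choice of example, with SymPy assumed as before):

```python
# Pulling a limit past a continuous function: lim exp(f1) = exp(lim f1).
from sympy import Symbol, limit, exp

x = Symbol('x')
f1 = x**2 + 1                   # inner function; lim_{x -> 2} f1(x) = 5

inner = limit(f1, x, 2)         # pull the limit inside the continuous exp...
outer = limit(exp(f1), x, 2)    # ...or take the limit of the whole composite

assert outer == exp(inner)      # both sides equal e^5
```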

As an application of this fact, we can check that $f(x)=\frac{1}{x}$ is continuous for all nonzero $x$. Then the limit rule tells us that as long as $\lim\limits_{x\rightarrow x_0}f(x)\neq0$, then $\lim\limits_{x\rightarrow x_0}\frac{1}{f(x)}=\frac{1}{\lim\limits_{x\rightarrow x_0}f(x)}$. Combining this with the rule for multiplication we see that as long as the limit of $g$ at $x_0$ is nonzero then $\lim\limits_{x\rightarrow x_0}\frac{f(x)}{g(x)}=\frac{\lim\limits_{x\rightarrow x_0}f(x)}{\lim\limits_{x\rightarrow x_0}g(x)}$.
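Here’s the quotient rule on a concrete sample pair (again a SymPy-based sketch, with the functions chosen by me so that the denominator’s limit is nonzero):

```python
# The quotient rule: lim f/g = (lim f)/(lim g) when lim g is nonzero.
from sympy import Symbol, limit

x = Symbol('x')
f, g = x**2 + 1, x + 3    # at x0 = 2 the denominator's limit is 5, nonzero

Lf = limit(f, x, 2)       # 5
Lg = limit(g, x, 2)       # 5
assert Lg != 0

Q = limit(f / g, x, 2)
assert Q == Lf / Lg       # the quotient of the limits
```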

Another thing that limits play well with is the order on the real numbers. If $f(x)\geq g(x)$ on their common domain $D$ then $\lim\limits_{x\rightarrow x_0}f(x)\geq\lim\limits_{x\rightarrow x_0}g(x)$ as long as both limits exist. Indeed, since both limits exist we can take any sequence converging to $x_0$. The image sequence under $f$ is always above the image sequence under $g$, and so the limits of the sequences are in the same order. Notice that we really just need $f(x)\geq g(x)$ to hold on some neighborhood of $x_0$, since we can then restrict to that neighborhood.

Similarly, if we have three functions $f(x)$, $g(x)$, and $h(x)$ with $f(x)\geq g(x)\geq h(x)$ on a common domain $D$ containing a neighborhood of $a$, and if $\lim\limits_{x\rightarrow a}f(x)=L=\lim\limits_{x\rightarrow a}h(x)$, then the limit of $g$ at $a$ exists and is also equal to $L$. Given any sequence $x_n\in D$ converging to $a$, our hypothesis tells us that $f(x_n)\geq g(x_n)\geq h(x_n)$. Given any interval around $L$, both $f(x_n)$ and $h(x_n)$ lie within it for sufficiently large $n$, and then $g(x_n)$, trapped between them, lies in it as well. Thus the image of the sequence under $g$ is “squeezed” between the images under $f$ and $h$, and converges to $L$ as well.
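The classic example of this squeeze is $x^2\sin(1/x)$ at $0$, trapped between $x^2$ and $-x^2$. A SymPy check (again an assumption; the library is not part of the post):

```python
# The squeeze theorem on g(x) = x^2 sin(1/x), trapped between x^2 and -x^2.
from sympy import Symbol, limit, sin

x = Symbol('x')
g = x**2 * sin(1 / x)    # oscillates wildly near 0, but is squeezed

# the outer limits at 0 agree...
assert limit(x**2, x, 0) == limit(-x**2, x, 0) == 0

# ...so the squeezed limit exists and equals the common value
Lg = limit(g, x, 0)
assert Lg == 0
```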

These rules for limits suffice to calculate almost all the limits that we care about without having to mess around with the raw definitions. In fact, many calculus classes these days only skim the definition if they mention it at all. We can more or less get away with this while we’re only dealing with a single real variable, but later on the full power of the definition comes in handy.

There’s one more situation I should be a little more explicit about. If we are given a function $f$ on some domain $D$ and we want to find its limit at a border point $a$ (which includes the case of a single-point hole in the domain) and we can extend the function to a continuous function $\hat{f}$ on a larger domain $\hat{D}$ which contains a neighborhood of the point in question, then $\lim\limits_{x\rightarrow a}f(x)=\hat{f}(a)$. Indeed, given any sequence $x_n\in D$ converging to $a$ we have $f(x_n)=\hat{f}(x_n)$ (since they agree on $D$), and the limit of $\hat{f}$ is just its value at $a$. This extends what we did before to handle the case of $\frac{x}{x}$ at $x=0$, and similar situations will come up over and over in the future.
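As an example of filling a single-point hole, $\frac{x^2-1}{x-1}$ is undefined at $x=1$ but extends continuously to $x+1$, so its limit at the hole is $2$. One more SymPy sketch (the library and the particular function are my choices):

```python
# A removable hole: f(x) = (x^2 - 1)/(x - 1) is undefined at x = 1, but it
# agrees with the continuous extension fhat(x) = x + 1 away from the hole,
# so the limit at 1 is fhat(1) = 2.
from sympy import Symbol, limit

x = Symbol('x')
f = (x**2 - 1) / (x - 1)

Lhole = limit(f, x, 1)
assert Lhole == 2
```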

December 20, 2007 - Posted by | Analysis, Calculus
