The Unapologetic Mathematician

Mathematics for the interested outsider

Albert Hofmann, R.I.P.

And in the “who knew he was still alive” category: Albert Hofmann died yesterday at the age of 102! Man, I wonder what he was taking that kept him alive so long. You don’t suppose…

April 30, 2008 Posted by | Uncategorized | 1 Comment

Abel’s Partial Summation Formula

When we consider an infinite series we construct the sequence of partial sums of the series. This is something like the indefinite integral of the sequence of terms of the series.

What’s the analogue of differentiation? We simply take a sequence A_n and write a_0=A_0 and a_n=A_n-A_{n-1} for n\geq1. Then we can take the sequence of partial sums

\displaystyle\sum\limits_{k=0}^na_k=A_0+\sum\limits_{k=1}^n\left(A_k-A_{k-1}\right)=A_n
Similarly, we can take the sequence of differences of a sequence of partial sums

\displaystyle s_n-s_{n-1}=\sum\limits_{k=0}^na_k-\sum\limits_{k=0}^{n-1}a_k=a_n
This behaves a lot like the Fundamental Theorem of Calculus, in that constructing the sequence of partial sums and constructing the sequence of differences invert each other.

Now how far can we push this analogy? Let’s take two sequences, a_n and B_n. We define the sequence of partial sums A_n=\sum_{k=0}^na_k and the sequence of differences b_0=B_0 and b_n=B_n-B_{n-1}. We calculate

\displaystyle\sum\limits_{k=0}^na_kB_k=A_nB_{n+1}-\sum\limits_{k=0}^nA_kb_{k+1}
This is similar to the formula for integration by parts, and is referred to as Abel’s partial summation formula. In particular, it tells us that the series \sum_{k=0}^\infty a_kB_k converges if both the series \sum_{k=0}^\infty A_kb_{k+1} and the sequence A_nB_{n+1} converge.
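The formula is easy to check numerically. Here's a quick Python sketch (the helper names and the sequences are my own made-up test data, not part of the derivation):

```python
# Check Abel's partial summation formula on sample data:
#   sum_{k=0}^n a_k B_k  =  A_n B_{n+1} - sum_{k=0}^n A_k b_{k+1}
# where A_k are the partial sums of a_k and b_{k+1} = B_{k+1} - B_k.

def abel_lhs(a, B, n):
    return sum(a[k] * B[k] for k in range(n + 1))

def abel_rhs(a, B, n):
    A, total = [], 0.0
    for term in a:
        total += term
        A.append(total)          # A_k = a_0 + ... + a_k
    boundary = A[n] * B[n + 1]   # the "A_n B_{n+1}" term
    series = sum(A[k] * (B[k + 1] - B[k]) for k in range(n + 1))
    return boundary - series

# arbitrary test sequences; B needs one extra entry for B_{n+1}
a = [1.0, -2.5, 3.0, 0.5, 4.0]
B = [2.0, 1.0, -1.0, 3.0, 0.5, 2.5]
```

The two sides agree up to floating-point rounding, just as the telescoping argument says they must.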

April 30, 2008 Posted by | Analysis, Calculus | 2 Comments

Examples of Convergent Series

Today I want to give two examples of convergent series that turn out to be extremely useful for comparisons.

First we have the geometric series, whose terms are the sequence a_n=a_0r^n for some constant ratio r. The sequence of partial sums is

\displaystyle s_n=\sum\limits_{k=0}^na_0r^k
If r\neq1 we can multiply this sum by \frac{1-r}{1-r} to find

\displaystyle s_n=a_0\frac{1-r^{n+1}}{1-r}
Then as n goes to infinity, this sequence either blows up (for |r|>1) or converges to \frac{a_0}{1-r} (for |r|<1). In the border case r=\pm1 we can also see that the sequence of partial sums fails to converge. Thus the geometric series converges if and only if |r|<1, and we have a nice simple formula telling us the sum.
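To see the formula in action, here's a small Python check (my own sketch, not from the post) comparing brute-force partial sums against the closed form and the limit:

```python
# Geometric series: partial sums versus the closed form
#   s_n = a_0 (1 - r^{n+1}) / (1 - r),  with limit a_0 / (1 - r) for |r| < 1

def geometric_partial_sum(a0, r, n):
    return sum(a0 * r**k for k in range(n + 1))

a0, r, n = 3.0, 0.5, 20
closed_form = a0 * (1 - r**(n + 1)) / (1 - r)
limit = a0 / (1 - r)   # = 6.0 for these sample values
```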

The other one I want to hit is the so-called p-series, whose terms are a_n=n^{-p} starting at n=1. Here we use the integral test to see that

\displaystyle\lim\limits_{n\rightarrow\infty}\left(\sum\limits_{k=1}^nk^{-p}-\int\limits_1^nx^{-p}dx\right)=D
so the sum and integral either converge or diverge together. If p\neq1 the integral gives \frac{n^{1-p}-1}{1-p}, which converges for p>1 and diverges for p<1.

If p=1 we get \ln(n), which diverges. In this case, though, we have a special name for the limit of the difference D. We call it “Euler’s constant”, and denote it by \gamma. That is, we can write

\displaystyle\sum\limits_{k=1}^n\frac{1}{k}=\ln(n)+\gamma+e(n)
where e(n) is an error term whose magnitude is bounded by \frac{1}{n}.
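This formula gives a practical way to estimate \gamma. A rough Python sketch (mine, not part of the post):

```python
import math

# gamma ≈ H_n - ln(n); since the error e(n) is bounded in magnitude
# by 1/n, taking n = 10^6 pins gamma down to about six decimal places.

def gamma_estimate(n):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n)

approx = gamma_estimate(10**6)   # gamma itself is 0.5772156649...
```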

In general we have no good value for the sums of these series, even where they converge. It takes a bit of doing to find \sum\frac{1}{n^2}=\frac{\pi^2}{6}, as Euler did in 1735 (solving the “Basel Problem”, which had stood for almost a century), and we now have values for all even natural number values of p. The sum \sum\frac{1}{n^3} is known as Apéry’s constant, after Roger Apéry, who showed that it was irrational in 1979. Yes, until 30 years ago we didn’t even know whether it was a rational number or not. We know essentially nothing about the remaining odd integer values of p.

If we say s instead of p, and let s take complex values (no, I haven’t talked about complex numbers yet, but some of you know what they are) we get Riemann’s function \zeta(s)=\sum\frac{1}{n^s}, which is connected to some of the deepest outstanding questions in mathematics today.

April 29, 2008 Posted by | Analysis, Calculus | 4 Comments

The Integral Test

Sorry for the delay. Students are panicking on the last day of classes and I have to write up a make-up exam for one who has a conflict at the scheduled time…

We can forge a direct connection between the sum of an infinite series and the improper integral of a function using the famed integral test for convergence.

I spent a goodly amount of time last week trying to craft a proof hinging on converting the infinite sum to an improper integral using the integrator \lfloor x\rfloor, and comparing that one to those using the integrators x and x-1. But it doesn’t seem to be working. If you can make a go of it, I’ll be glad to hear it. Instead, here’s a proof adapted from Apostol.

We let f be a positive decreasing function defined on some ray. For our purposes, let’s let it be \left[1,\infty\right), but we could use any other and adapt the proof accordingly. What we require in any case, though, is that \lim\limits_{x\rightarrow\infty}f(x)=0. We define three sequences:

\displaystyle s_n=\sum\limits_{k=1}^nf(k)
\displaystyle t_n=\int\limits_1^nf(x)dx
\displaystyle d_n=s_n-t_n

First off, I assert that d_n is nonincreasing, and sits between f(n) and f(1). That is, we have the inequalities

0<f(n+1)\leq d_{n+1}\leq d_n\leq f(1)

To see this, first let’s write the integral defining t_{n+1} as a sum of integrals over unit steps and notice that f(k) gives an upper bound to the size of f on the interval \left[k,k+1\right]. Thus we see:

\displaystyle t_{n+1}=\sum\limits_{k=1}^n\int\limits_k^{k+1}f(x)dx\leq\sum\limits_{k=1}^n\int\limits_k^{k+1}f(k)dx=\sum\limits_{k=1}^nf(k)=s_n
From here we find that f(n+1)=s_{n+1}-s_n\leq s_{n+1}-t_{n+1}=d_{n+1}.

On the other hand, we see that d_n-d_{n+1}=t_{n+1}-t_n-(s_{n+1}-s_n). Reusing some pieces from before, we see that this is

\displaystyle d_n-d_{n+1}=\int\limits_n^{n+1}f(x)dx-f(n+1)\geq\int\limits_n^{n+1}f(n+1)dx-f(n+1)=0
which verifies that the sequence d_n is nonincreasing. And it’s easy to check that d_1=f(1), which completes our verification of these inequalities.

Now d_n is a monotonically decreasing sequence, which is bounded below by {0}, and so it must converge to some finite limit D. This D is the difference between the sum of the infinite series and the improper integral. Thus if either the sum or the integral converges, then the other one must as well.

We can actually do a little better, even, than simply showing that the sum and integral either both converge or both diverge. We can get some control on how fast the sequence d_n converges to D. Specifically, we have the inequalities 0\leq d_k-D\leq f(k), so the difference converges as fast as the function goes to zero.

To get here, we look back at the difference of two terms in the sequence:

\displaystyle0\leq d_n-d_{n+1}\leq\int\limits_n^{n+1}f(n)dx-f(n+1)=f(n)-f(n+1)

So take this inequality for n=k and add it to that for n=k+1. We see then that 0\leq d_k-d_{k+2}\leq f(k)-f(k+2). Then add the inequality for n=k+2, and so on. At each step we find 0\leq d_k-d_{k+l}\leq f(k)-f(k+l). So as l goes to infinity, we get the asserted inequalities.
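For a concrete look at these inequalities, take f(x)=\frac{1}{x}, where the limit D is Euler’s constant. A quick Python check (my own sketch, not from the post):

```python
import math

# For f(x) = 1/x we have d_n = s_n - t_n = H_n - ln(n), which decreases
# to D = gamma, with 0 <= d_k - D <= f(k) = 1/k at every stage.

def d(n):
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

D = 0.5772156649015329   # Euler's constant, the limit of d_n here

for k in (10, 100, 1000):
    assert 0 <= d(k) - D <= 1.0 / k   # the error bound in action
```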

April 29, 2008 Posted by | Analysis, Calculus | 1 Comment

Convergence Tests for Infinite Series

Now that we’ve seen infinite series as improper integrals, we can immediately import our convergence tests and apply them in this special case.

Take two sequences a_k and b_k with b_k\geq a_k\geq0 for all k beyond some point N. Now if the series \sum_{k=0}^\infty a_k diverges then the series \sum_{k=0}^\infty b_k does too, and if the series \sum_{k=0}^\infty b_k converges to b then the series \sum_{k=0}^\infty a_k converges to some a\leq b.

[UPDATE]: I overstated things a bit here. If the series of b_k converges, then so does that of a_k, but the inequality only holds for the tail beyond N. That is:

\displaystyle\sum\limits_{k=N}^\infty a_k\leq\sum\limits_{k=N}^\infty b_k

but the terms of the sequence a_k before N may, of course, be so large as to swamp the series of b_k.

If we have two nonnegative sequences a_k and b_k so that \lim\limits_{k\rightarrow\infty}\frac{a_k}{b_k}=c\neq0 then the series \sum_{k=0}^\infty a_k and \sum_{k=0}^\infty b_k either both converge or both diverge.

We read in Cauchy’s condition as follows: the series \sum_{k=0}^\infty a_k converges if and only if for every \epsilon>0 there is an N so that for all n\geq m\geq N the sum \left|\sum_{k=m}^n a_k\right|<\epsilon.

We also can import the notion of absolute convergence. We say that a series \sum_{k=0}^\infty a_k is absolutely convergent if the series \sum_{k=0}^\infty|a_k| is convergent (which implies that the original series converges). We say that a series is conditionally convergent if it converges, but the series of its absolute values diverges.
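The standard illustration of the difference is the alternating harmonic series. Here's a small Python sketch (my example, not the post’s):

```python
# sum (-1)^{k+1}/k converges (to ln 2 ≈ 0.6931), but the series of
# absolute values is the harmonic series, which diverges -- so the
# alternating series is conditionally, not absolutely, convergent.

def alternating_harmonic(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

# harmonic(n) grows like ln(n): it passes 14 by n = 10^6 and never levels off
```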

April 25, 2008 Posted by | Analysis, Calculus | 6 Comments

Infinite Series

And now we come to one of the most hated parts of second-semester calculus: infinite series. An infinite series is just the sum of a (countably) infinite number of terms, and we usually collect those terms together as the image of a sequence. That is, given a sequence a_k of real numbers, we define the sequence of “partial sums”:

\displaystyle s_n=\sum\limits_{k=0}^na_k

and then define the sum of the series as the limit of this sequence:

\displaystyle\sum\limits_{k=0}^\infty a_k=\lim\limits_{n\rightarrow\infty}s_n

Notice, though, that we’ve seen a way to get finite sums before: using step functions as integrators. So let’s use the step function \lfloor x\rfloor, which is defined for any real number x as the largest integer less than or equal to x.

This function has jumps of unit size at each integer, and is continuous from the right at the jumps. Further, over any finite interval, its total variation is finite. Thus if f is any function continuous from the left at every integer it will be integrable with respect to \lfloor x\rfloor over any finite interval. Further, we can easily see

\displaystyle\int\limits_a^bf(x)d\lfloor x\rfloor=\sum\limits_{\substack{k\in\mathbb{Z}\\a<k\leq b}}f(k)

Now given any sequence a_n we can define a function f by setting f(x)=a_{\lceil x\rceil} for any x>-1. That is, we round each number up to the nearest integer n and then give the value a_n. This gives us a step function with the value a_n on the subinterval \left(n-1,n\right], which we see is continuous from the left at each jump. Thus we can always define the integral

\displaystyle\int\limits_{-\frac{1}{2}}^bf(x)d\lfloor x\rfloor=\sum\limits_{k=0}^{\lfloor b\rfloor}a_k=s_{\lfloor b\rfloor}

Then as we let b go to infinity, \lfloor b\rfloor goes to infinity with it. Thus the sum of the series is the same as the improper integral.

So this shows that any infinite series can be thought of as a Riemann-Stieltjes integral of an appropriate function. Of course, in many cases the terms a_k of the sequence are already given as values f(k) of some function, and in that case we can just use that function instead of this step-function we’ve cobbled together.
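We can watch this collapse happen numerically. The sketch below (my own, with an arbitrary choice of f) forms a crude Riemann-Stieltjes sum against \lfloor x\rfloor and recovers a partial sum of the p-series with p=2:

```python
import math

# Riemann-Stieltjes sum for ∫ f d(floor(x)) on [a, b]: over a fine
# partition the integrator floor(x) only jumps on subintervals that
# contain an integer, so the sum collapses to f summed over integers.

def rs_sum_floor(f, a, b, steps=100000):
    total, prev_floor = 0.0, math.floor(a)
    for i in range(1, steps + 1):
        x = a + (b - a) * i / steps
        fl = math.floor(x)
        total += f(x) * (fl - prev_floor)   # f sampled at the right endpoint
        prev_floor = fl
    return total

# integrating 1/x^2 against floor over [0.5, 100.5] gives s_100 for p = 2
approx = rs_sum_floor(lambda x: 1.0 / x**2, 0.5, 100.5)
exact = sum(1.0 / k**2 for k in range(1, 101))
```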

April 24, 2008 Posted by | Analysis, Calculus | 8 Comments

Cauchy’s Condition

We defined the real numbers to be a complete uniform space, meaning that sequences are convergent if and only if they are Cauchy. Let’s write these two conditions out in full:

  • A sequence a_n is convergent if there is some L so that for every \epsilon>0 there is an N such that n>N implies |a_n-L|<\epsilon.
  • A sequence a_n is Cauchy if for every \epsilon>0 there is an N such that m>N and n>N implies |a_m-a_n|<\epsilon.

See how similar the two definitions are. Convergent means that the points of the sequence are getting closer and closer to some fixed L. Cauchy means that the points of the sequence are getting closer to each other.

Now there’s no reason we can’t try the same thing when we’re taking the limit of a function at \infty. In fact, the definition of convergence of such a limit is already pretty close to the above definition. How can we translate the Cauchy condition? Simple. We just require that for every \epsilon>0 there exist some R so that for any two points x,y>R we have |f(x)-f(y)|<\epsilon.

So let’s consider a function f defined in the ray \left[a,\infty\right). If the limit \lim\limits_{x\rightarrow\infty}f(x) exists, with value L, then for every \epsilon>0 there is an R so that x>R implies |f(x)-L|<\frac{\epsilon}{2}. Then taking y>R as well, we see that

\displaystyle|f(x)-f(y)|\leq|f(x)-L|+|L-f(y)|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon
and so the Cauchy condition holds.

Now let’s assume that the Cauchy condition holds. Define the sequence a_n=f(a+n). This is now a Cauchy sequence, and so it converges to a limit A, which I assert is also the limit of f. Given an \epsilon>0, choose a B so that

  • |f(x)-f(y)|<\frac{\epsilon}{2} for any two points x and y above B
  • |a_n-A|<\frac{\epsilon}{2} whenever a+n\geq B

Just take a B for each condition, and go with the larger one. In fact, we may as well round B up so that B=a+N for some natural number N. Then for any b>B we have

\displaystyle|f(b)-A|\leq|f(b)-a_N|+|a_N-A|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon
and so the limit at infinity exists.

In the particular case of an improper integral, we have I(b)=\int_a^bf\,d\alpha. Then I(c)-I(b)=\int_b^cf\,d\alpha. Our condition then reads:

For every \epsilon>0 there is a B so that c>b>B implies \left|\int_b^cfd\alpha\right|<\epsilon.
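For the integral f(x)=\frac{1}{x^2} on \left[1,\infty\right), this condition is easy to see explicitly. A small sketch of mine, using the antiderivative:

```python
# Cauchy condition for ∫_1^∞ dx/x^2: the tail ∫_b^c dx/x^2 = 1/b - 1/c
# is positive and below 1/b, so taking B = 1/eps makes every tail
# beyond B smaller than eps.

def tail(b, c):
    return 1.0 / b - 1.0 / c   # exact value of the integral from b to c

eps = 1e-3
B = 1.0 / eps   # any c > b > B now gives tail(b, c) < eps
```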

April 23, 2008 Posted by | Analysis, Calculus | 3 Comments

Absolute Convergence

Let’s apply one of the tests from last time. Let \alpha be a nondecreasing integrator on the ray \left[a,\infty\right), and f be a function integrable with respect to \alpha over every interval \left[a,b\right]. If the improper integral \int_a^\infty|f|d\alpha converges, then so does \int_a^\infty f\,d\alpha.

To see this, notice that -|f(x)|\leq f(x)\leq|f(x)|, and so 0\leq|f(x)|+f(x)\leq2|f(x)|. Then since \int_a^\infty2|f|\,d\alpha converges we see that \int_a^\infty|f|+f\,d\alpha converges. Subtracting off the integral of |f| we get our result. (Technically to do this, we need to extend the linearity properties of Riemann-Stieltjes integrals to improper integrals, but this is straightforward).

When the integral of |f| converges like this, we say that the integral of f is “absolutely convergent”. The above theorem shows us that absolute convergence implies convergence, but it doesn’t necessarily hold the other way around. If the integral of f converges, but that of |f| doesn’t, we say that the former is “conditionally convergent”.
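As an example (mine, not from the post): the integral of \sin(x)/x^2 over \left[1,\infty\right) is absolutely convergent, since |\sin(x)/x^2|\leq\frac{1}{x^2} and the latter has a convergent integral. A rough midpoint-rule check in Python:

```python
import math

# ∫_1^∞ sin(x)/x^2 dx converges absolutely: |sin(x)/x^2| <= 1/x^2 and
# ∫_1^∞ dx/x^2 = 1, so the integral of |f| is bounded above by 1.

def midpoint_integral(f, a, b, steps=200000):
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

abs_part = midpoint_integral(lambda x: abs(math.sin(x)) / x**2, 1.0, 1000.0)
# abs_part stays below 1 no matter how far out we truncate
```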

April 22, 2008 Posted by | Analysis, Calculus | 3 Comments

Convergence Tests for Improper Integrals

We have a few tests that will come in handy for determining if an improper integral converges. In all of these we’ll have an integrator \alpha on the ray \left[a,\infty\right), and a function f which is integrable with respect to \alpha on the interval \left[a,b\right] for all b>a.

First, say \alpha is nondecreasing and f is nonnegative. Then the integral \int_a^\infty f\,d\alpha converges if and only if there is a constant M>0 so that

\displaystyle I(b)=\int\limits_a^bfd\alpha\leq M

for every b\geq a. This follows because the function I(b) is then nondecreasing, and a nondecreasing function bounded above must have a finite limit at infinity. Indeed, the set of values of I must be bounded above, and so there is a least upper bound \sup\limits_{x\geq a}I(x). It’s straightforward to show that the limit \lim\limits_{x\rightarrow\infty}I(x) is this least upper bound.

Now if \alpha is nondecreasing and f(x)\leq g(x) are two nonnegative functions, then if the improper integral of g converges then so does that of f, and we have the inequality

\displaystyle\int\limits_a^\infty fd\alpha\leq\int\limits_a^\infty gd\alpha

since for every b\geq a we have

\displaystyle\int\limits_a^bfd\alpha\leq\int\limits_a^bgd\alpha\leq\int\limits_a^\infty gd\alpha

On the other hand, if the improper integral of f diverges, then that of g must diverge.

If \alpha is nondecreasing and we have two nonnegative functions f and g so that

\displaystyle\lim\limits_{x\rightarrow\infty}\frac{f(x)}{g(x)}=1
then their improper integrals either both converge or both diverge. This limit implies there must be some R beyond which we have \frac{1}{2}\leq\frac{f(x)}{g(x)}\leq2. Equivalently, for x\geq R we have \frac{1}{2}g(x)\leq f(x)\leq2g(x), and the result follows by two applications of the previous theorem.

Notice that this last theorem also follows if the limit of the ratio converges to any nonzero number. Also notice how the convergence of the integral only depends on the behavior of our functions in some neighborhood of \infty: we used their behavior in the ray \left[R,\infty\right) even though we started by looking for convergence over the ray \left[a,\infty\right).
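As an illustration of the limit comparison (my example, not the post’s): f(x)=\frac{1}{x^2+1} and g(x)=\frac{1}{x^2} have ratio tending to 1, and since the integral of g converges, so must that of f. A rough midpoint-rule check in Python:

```python
# ∫_1^∞ dx/(x^2 + 1) converges by limit comparison with ∫_1^∞ dx/x^2;
# its exact value is arctan(∞) - arctan(1) = pi/4 ≈ 0.7853982.

def midpoint_integral(f, a, b, steps=200000):
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# truncating at 10^4 leaves a tail under 10^-4
approx = midpoint_integral(lambda x: 1.0 / (x * x + 1.0), 1.0, 10000.0)
```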

April 21, 2008 Posted by | Analysis, Calculus | 3 Comments

Improper Integrals I

We’ve dealt with Riemann integrals and their extensions to Riemann-Stieltjes integrals. But these are both defined to integrate a function over a finite interval. What if we want to integrate over an infinite ray, like all positive numbers?

As a specific example, let’s consider the function f(x)=\frac{1}{x^2}, and let it be defined on the ray \left[1,\infty\right). For any real number b>1 we can pick some b'>b. In the interval \left[1,b'\right] the function f is continuous and of bounded variation (in fact it’s decreasing), and so it’s integrable with respect to x. Then it’s integrable over the subinterval \left[1,b\right]. Why not just start by saying it’s integrable over \left[1,b\right]? Because now we have a function on \left[1,b'\right] defined by

\displaystyle F(x)=\int\limits_1^x\frac{1}{t^2}dt

Since the integrator x is differentiable and \frac{1}{t^2} is continuous at b, we see that F is differentiable there, and its derivative is F'(b)=\frac{1}{b^2}. This result is independent of the b' we picked.

Since we can do this for any b>1 we get a function F(b) defined for b\in\left[1,\infty\right). Its derivative must be \frac{1}{b^2}, and we can check that -\frac{1}{b} also has this derivative, so these two functions can only differ by a constant. Clearly we want F(1)=0, since at that point we’re “integrating” over a degenerate interval consisting of a single point. This fixes our function as F(b)=1-\frac{1}{b}.

Now our question is, what happens as we take b to get larger and larger? Our intervals \left[1,b\right] get bigger and bigger, trying to fill out the whole ray \left[1,\infty\right). And for each one we have a value for the integral: 1-\frac{1}{b}. So we take the limit as b approaches infinity: \lim\limits_{b\rightarrow\infty}\left(1-\frac{1}{b}\right)=1. This will be the value of the integral over the entire ray.
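The whole calculation can be mimicked numerically. Here's a quick Python sketch (mine) approximating F(b) with a midpoint Riemann sum and watching it approach 1:

```python
# F(b) = ∫_1^b dt/t^2 = 1 - 1/b, approximated by a midpoint Riemann sum.

def F(b, steps=100000):
    h = (b - 1.0) / steps
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) ** 2 for i in range(steps))

# F(10) ≈ 0.9 and F(1000) ≈ 0.999, creeping up toward the limit 1
```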

We turn this rubric into a definition: given a function f that is integrable with respect to \alpha over the interval \left[a,b\right] for all b>a, we can define a function F on \left[a,\infty\right) by

\displaystyle F(b)=\int\limits_a^bfd\alpha

We define the improper integral to be the limit

\displaystyle\int\limits_a^\infty fd\alpha=\lim\limits_{b\rightarrow\infty}\int_a^bfd\alpha

if this limit exists. Otherwise we say that the integral diverges.

We can similarly define improper integrals for leftward rays as

\displaystyle\int\limits_{-\infty}^bfd\alpha=\lim\limits_{a\rightarrow-\infty}\int\limits_a^bfd\alpha
And over the entire real line by choosing an arbitrary point c and defining

\displaystyle\int\limits_{-\infty}^\infty fd\alpha=\int\limits_{-\infty}^cfd\alpha+\int\limits_c^\infty fd\alpha

That is, we take the two bounds of integration to go to their respective infinities separately. It must be noted that the limit where they go to infinity together:

\displaystyle\int\limits_{-\infty}^\infty fd\alpha=\lim\limits_{b\rightarrow\infty}\int\limits_{-b}^bfd\alpha

may exist even if the improper integral diverges. In this case we call it the “Cauchy principal value” of the integral, but it is not the only justifiable value we could assign to the integral. For example, with f(x)=x it’s easy to check that

\displaystyle\lim\limits_{b\rightarrow\infty}\int\limits_{-b}^bxdx=\lim\limits_{b\rightarrow\infty}0=0
so the Cauchy principal value is {0}. However, we might also consider

\displaystyle\lim\limits_{b\rightarrow\infty}\int\limits_{-b}^{2b}xdx=\lim\limits_{b\rightarrow\infty}\frac{3b^2}{2}

which diverges.
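With f(x)=x the two limits are simple enough to write down in closed form. A tiny sketch of mine, using the antiderivative \frac{x^2}{2}:

```python
# ∫_{-b}^{b} x dx = 0 for every b, so the Cauchy principal value is 0,
# while ∫_{-b}^{2b} x dx = 3b^2/2 grows without bound.

def symmetric(b):
    return (b * b) / 2.0 - (b * b) / 2.0       # antiderivative at b minus at -b

def lopsided(b):
    return (2 * b) ** 2 / 2.0 - (b * b) / 2.0  # = 3b^2/2
```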

April 18, 2008 Posted by | Analysis, Calculus | 8 Comments

