Albert Hofmann, R.I.P.
And in the “who knew he was still alive” category: Albert Hofmann died yesterday at the age of 102! Man, I wonder what he was taking that kept him alive so long. You don’t suppose…
Abel’s Partial Summation Formula
When we consider an infinite series we construct the sequence of partial sums of the series. This is something like the indefinite integral of the sequence of terms of the series.
What’s the analogue of differentiation? We simply take a sequence $a_n$ and write $(\Delta a)_n = a_n - a_{n-1}$ for $n \geq 1$, and $(\Delta a)_0 = a_0$. Then we can take the sequence of partial sums

$$\sum_{k=0}^n (\Delta a)_k = a_n$$

Similarly, we can take the sequence of differences of a sequence of partial sums

$$s_n - s_{n-1} = a_n \qquad\text{where } s_n = \sum_{k=0}^n a_k$$

This behaves a lot like the Fundamental Theorem of Calculus, in that constructing the sequence of partial sums and constructing the sequence of differences invert each other.
Now how far can we push this analogy? Let’s take two sequences, $a_n$ and $b_n$. We define the sequence of partial sums $A_n = \sum_{k=1}^n a_k$ and the sequence of differences $(\Delta b)_k = b_{k+1} - b_k$. We calculate

$$\sum_{k=1}^n a_k b_k = A_n b_{n+1} - \sum_{k=1}^n A_k \left(b_{k+1} - b_k\right)$$

This is similar to the formula for integration by parts, and is referred to as Abel’s partial summation formula. In particular, it tells us that the series $\sum a_k b_k$ converges if both the series $\sum A_k (b_{k+1} - b_k)$ and the sequence $A_n b_{n+1}$ converge.
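To see the formula in action, here’s a quick numerical check — my own sketch, not part of the original argument — verifying the identity above for concrete sequences, using exact rational arithmetic to sidestep rounding questions:

```python
from fractions import Fraction
from itertools import accumulate

def abel_lhs(a, b):
    # Direct evaluation of sum_{k=1}^n a_k b_k
    return sum(ak * bk for ak, bk in zip(a, b))

def abel_rhs(a, b, b_next):
    # Abel's formula: A_n b_{n+1} - sum_{k=1}^n A_k (b_{k+1} - b_k),
    # where A_k are the partial sums of a and b_next plays the role of b_{n+1}.
    A = list(accumulate(a))
    bs = b + [b_next]
    return A[-1] * b_next - sum(A[k] * (bs[k + 1] - bs[k]) for k in range(len(a)))

a = [Fraction(1, k) for k in range(1, 6)]   # a_k = 1/k
b = [Fraction(k * k) for k in range(1, 6)]  # b_k = k^2
assert abel_lhs(a, b) == abel_rhs(a, b, Fraction(36))  # b_6 = 6^2
```

Both sides agree exactly, as the telescoping argument behind the formula predicts.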
Examples of Convergent Series
Today I want to give two examples of convergent series that turn out to be extremely useful for comparisons.
First we have the geometric series, whose terms are the sequence $a_n = x^n$ for some constant ratio $x$. The sequence of partial sums is

$$s_n = \sum_{k=0}^n x^k$$

We can multiply this sum by $(1 - x)$ to find

$$(1 - x)s_n = 1 - x^{n+1} \qquad\text{so}\qquad s_n = \frac{1 - x^{n+1}}{1 - x}$$

Then as $n$ goes to infinity, this sequence either blows up (for $|x| > 1$) or converges to $\frac{1}{1-x}$ (for $|x| < 1$). In the border case $|x| = 1$ we can also see that the sequence of partial sums fails to converge. Thus the geometric series converges if and only if $|x| < 1$, and we have a nice simple formula telling us the sum.
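As a sanity check — a sketch of my own, not from the post — we can compare the closed form against a direct sum and watch the partial sums settle at $\frac{1}{1-x}$:

```python
# Check the closed form for geometric partial sums against a direct sum,
# and watch the partial sums approach 1/(1-x) for |x| < 1.
x = 0.5
n = 50
direct = sum(x**k for k in range(n + 1))
closed = (1 - x**(n + 1)) / (1 - x)
assert abs(direct - closed) < 1e-12
assert abs(direct - 1 / (1 - x)) < 1e-12  # already within 2^-50 of the limit
```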
The other one I want to hit is the so-called $p$-series, whose terms are $\frac{1}{n^p}$ starting at $n = 1$. Here we use the integral test to compare the sum with the improper integral

$$\int_1^\infty \frac{dx}{x^p}$$

so the sum and integral either converge or diverge together. If $p \neq 1$ the integral gives

$$\int_1^b \frac{dx}{x^p} = \frac{b^{1-p} - 1}{1 - p}$$

which converges for $p > 1$ and diverges for $p < 1$.

If $p = 1$ we get $\int_1^b \frac{dx}{x} = \ln(b)$, which diverges. In this case, though, we have a special name for the limit of the difference

$$\lim_{n\to\infty}\left(\sum_{k=1}^n \frac{1}{k} - \ln(n)\right)$$

We call it “Euler’s constant”, and denote it by $\gamma$. That is, we can write

$$\sum_{k=1}^n \frac{1}{k} = \ln(n) + \gamma + e_n$$

where $e_n$ is an error term whose magnitude is bounded by $\frac{1}{n}$.
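We can watch this numerically. This little sketch — my own, with $\gamma$ hard-coded to double precision — checks that the error term for $n = 10000$ is positive and below $\frac{1}{n}$:

```python
import math

# Partial sums of the harmonic series minus ln(n) approach Euler's constant,
# and the error term e_n = H_n - ln(n) - gamma stays below 1/n.
gamma = 0.5772156649015329  # Euler's constant, to double precision
H = 0.0
for n in range(1, 10001):
    H += 1.0 / n
e_n = H - math.log(10000) - gamma
assert 0 < e_n < 1.0 / 10000
```

In fact the error behaves like $\frac{1}{2n}$, comfortably inside the bound.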
In general we have no good value for the sums of these series, even where they converge. It takes a bit of doing to find $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$, as Euler did in 1735 (solving the “Basel Problem” that had stood for almost a century), and now we have values for other even natural number values of $p$. The sum

$$\sum_{n=1}^\infty \frac{1}{n^3}$$

is known as Apéry’s constant, after Roger Apéry who showed that it was irrational in 1979. Yes, we didn’t even know whether it was a rational number until about 30 years ago. We have basically nothing about odd integer values of $p$.
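Numerically the convergence to $\frac{\pi^2}{6}$ is quite slow, exactly as the integral-test error bound predicts. A quick sketch of my own:

```python
import math

# Partial sums of 1/k^2 approach pi^2/6 slowly: the tail beyond n is
# squeezed between 1/(n+1) and 1/n by comparison with the integral.
n = 1000
s = sum(1.0 / k**2 for k in range(1, n + 1))
assert 0 < math.pi**2 / 6 - s < 1.0 / n
```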
If we say instead of
, and let
take complex values (no, I haven’t talked about complex numbers yet, but some of you know what they are) we get Riemann’s function
, which is connected to some of the deepest outstanding questions in mathematics today.
The Integral Test
Sorry for the delay. Students are panicking on the last day of classes and I have to write up a make-up exam for one who has a conflict at the scheduled time…
We can forge a direct connection between the sum of an infinite series and the improper integral of a function using the famed integral test for convergence.
I spent a goodly amount of time last week trying to craft a proof hinging on converting the infinite sum to an improper integral using the integrator $\lfloor x\rfloor$, and comparing that one to those using the integrators $x$ and $\lceil x\rceil$. But it doesn’t seem to be working. If you can make a go of it, I’ll be glad to hear it. Instead, here’s a proof adapted from Apostol.
We let $f$ be a positive decreasing function defined on some ray. For our purposes, let’s let it be $\left[1,\infty\right)$, but we could use any other and adapt the proof accordingly. What we require in any case, though, is that the limit $\lim\limits_{x\to\infty} f(x) = 0$. We define three sequences:

$$s_n = \sum_{k=1}^n f(k) \qquad t_n = \int_1^n f(x)\,dx \qquad d_n = s_n - t_n$$
First off, I assert that $d_n$ is nonincreasing, and sits between $f(n)$ and $f(1)$. That is, we have the inequalities

$$f(n) \leq d_n \leq f(1)$$

To see this, first let’s write the integral defining $t_n$ as a sum of integrals over unit steps, and notice that $f(k)$ gives an upper bound to the size of $f$ on the interval $\left[k,k+1\right]$. Thus we see:

$$t_n = \sum_{k=1}^{n-1} \int_k^{k+1} f(x)\,dx \leq \sum_{k=1}^{n-1} f(k) = s_{n-1}$$

From here we find that $d_n = s_n - t_n \geq s_n - s_{n-1} = f(n)$.
On the other hand, we see that $d_n - d_{n+1} = \int_n^{n+1} f(x)\,dx - f(n+1)$. Reusing some pieces from before — $f(n+1)$ is a lower bound to the size of $f$ on $\left[n,n+1\right]$ — we see that this is nonnegative, which verifies that the sequence $d_n$ is nonincreasing. And it’s easy to check that $d_1 = f(1)$, which completes our verification of these inequalities.
Now $d_n$ is a monotonically nonincreasing sequence, which is bounded below by $0$, and so it must converge to some finite limit $D$. This $D$ is the difference between the sum of the infinite series and the improper integral. Thus if either the sum or the integral converges, then the other one must as well.
We can actually do a little better, even, than simply showing that the sum and integral either both converge or both diverge. We can get some control on how fast the sequence $d_n$ converges to $D$. Specifically, we have the inequalities

$$0 \leq d_n - D \leq f(n)$$

so the difference converges as fast as the function goes to zero.
To get here, we look back at the difference of two terms in the sequence:

$$0 \leq d_n - d_{n+1} = \int_n^{n+1} f(x)\,dx - f(n+1) \leq f(n) - f(n+1)$$

So take this inequality for $n$ and add it to that for $n+1$. We see then that $0 \leq d_n - d_{n+2} \leq f(n) - f(n+2)$. Then add the inequality for $n+2$, and so on. At each step we find

$$0 \leq d_n - d_{n+k} \leq f(n) - f(n+k)$$

So as $k$ goes to infinity, we get the asserted inequalities.
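The inequalities are easy to watch numerically. Here’s a sketch of my own using $f(x) = \frac{1}{x}$, where $s_n$ is the partial sum, $t_n = \ln(n)$ exactly, and $d_n = s_n - t_n$:

```python
import math

# For f(x) = 1/x: s_n = H_n, t_n = ln(n), d_n = s_n - t_n.
# Check that d_n is nonincreasing and squeezed between f(n) and f(1).
f = lambda x: 1.0 / x
d = []
s = 0.0
for n in range(1, 101):
    s += f(n)
    d.append(s - math.log(n))
assert all(d[i] >= d[i + 1] for i in range(len(d) - 1))
assert all(f(n) <= d[n - 1] <= f(1) for n in range(1, 101))
```

Here $D$ is Euler’s constant $\gamma$, which ties back to the previous post.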
Convergence Tests for Infinite Series
Now that we’ve seen infinite series as improper integrals, we can immediately import our convergence tests and apply them in this special case.
Take two sequences $a_n$ and $b_n$ with $0 \leq a_n \leq b_n$ for all $n$ beyond some point $N$. Now if the series $\sum a_n$ diverges then the series $\sum b_n$ does too, and if the series $\sum b_n$ converges to $B$ then the series $\sum a_n$ converges to some sum $A \leq B$.
[UPDATE]: I overstated things a bit here. If the series of $b_n$ converges, then so does that of $a_n$, but the inequality only holds for the tail beyond $N$. That is:

$$\sum_{n=N}^\infty a_n \leq \sum_{n=N}^\infty b_n$$

but the terms of the sequence $a_n$ before $N$ may, of course, be so large as to swamp the series of $b_n$.
If we have two nonnegative sequences $a_n$ and $b_n$ so that

$$\lim_{n\to\infty} \frac{a_n}{b_n} = 1$$

then the series $\sum a_n$ and $\sum b_n$ either both converge or both diverge.
We can also read in Cauchy’s condition as follows: the series $\sum a_n$ converges if and only if for every $\epsilon > 0$ there is an $N$ so that for all $n > m \geq N$ the sum satisfies

$$\left|\sum_{k=m+1}^n a_k\right| < \epsilon$$
We also can import the notion of absolute convergence. We say that a series $\sum a_n$ is absolutely convergent if the series $\sum |a_n|$ is convergent (which implies that the original series converges). We say that a series is conditionally convergent if it converges, but the series of its absolute values diverges.
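The standard example is the alternating harmonic series, which converges (to $\ln(2)$) while the series of its absolute values — the harmonic series — diverges. A numerical sketch of my own:

```python
import math

# The alternating harmonic series converges to ln(2), while the series of
# absolute values (the harmonic series) grows without bound: a conditionally
# convergent series.
n = 100000
alt = sum((-1)**(k + 1) / k for k in range(1, n + 1))
absolute = sum(1.0 / k for k in range(1, n + 1))
assert abs(alt - math.log(2)) < 1.0 / n  # alternating series error bound
assert absolute > 11                     # H_n ~ ln(n) + gamma, about 12.09 here
```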
Infinite Series
And now we come to one of the most hated parts of second-semester calculus: infinite series. An infinite series is just the sum of a (countably) infinite number of terms, and we usually collect those terms together as the image of a sequence. That is, given a sequence $a_k$ of real numbers, we define the sequence of “partial sums”:

$$s_n = \sum_{k=1}^n a_k$$

and then define the sum of the series as the limit of this sequence:

$$\sum_{k=1}^\infty a_k = \lim_{n\to\infty} s_n$$

Notice, though, that we’ve seen a way to get finite sums before: using step functions as integrators. So let’s use the step function $\lfloor x\rfloor$, which is defined for any real number $x$ as the largest integer less than or equal to $x$.
This function has jumps of unit size at each integer, and is continuous from the right at the jumps. Further, over any finite interval, its total variation is finite. Thus if $f$ is any function continuous from the left at every integer it will be integrable with respect to $\lfloor x\rfloor$ over any finite interval. Further, we can easily see

$$\int_0^n f(x)\,d\lfloor x\rfloor = \sum_{k=1}^n f(k)$$
Now given any sequence $a_k$ we can define a function $f$ by setting $f(x) = a_{\lceil x\rceil}$ for any $x > 0$. That is, we round each number up to the nearest integer $\lceil x\rceil$ and then give the value $a_{\lceil x\rceil}$. This gives us a step function with the value $a_k$ on the subinterval $\left(k-1,k\right]$, which we see is continuous from the left at each jump. Thus we can always define the integral

$$\int_0^b f(x)\,d\lfloor x\rfloor = \sum_{k=1}^{\lfloor b\rfloor} a_k$$

Then as we let $b$ go to infinity, $\lfloor b\rfloor$ goes to infinity with it. Thus the sum of the series is the same as the improper integral:

$$\sum_{k=1}^\infty a_k = \int_0^\infty f(x)\,d\lfloor x\rfloor$$
So this shows that any infinite series can be thought of as a Riemann-Stieltjes integral of an appropriate function. Of course, in many cases the terms of the sequence are already given as values $a_k = f(k)$ of some function, and in that case we can just use that function instead of this step function we’ve cobbled together.
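Here’s a rough numerical sketch of my own of the idea: in a Riemann-Stieltjes sum against $\lfloor x\rfloor$, only the subintervals where the integrator actually jumps contribute, and what survives is precisely a sum of values of $f$ at the integers.

```python
import math

def rs_integral_floor(f, b, steps_per_unit=1000):
    """Riemann-Stieltjes sum of f against the floor integrator over [0, b],
    on a fine partition; only subintervals containing a jump contribute."""
    total = 0.0
    n = int(b * steps_per_unit)
    for i in range(n):
        x0, x1 = i / steps_per_unit, (i + 1) / steps_per_unit
        jump = math.floor(x1) - math.floor(x0)
        if jump:
            total += f(x1) * jump  # evaluate at the right endpoint of the step
    return total

# Against f(x) = x^2 this reproduces 1^2 + 2^2 + ... + 5^2 = 55.
assert abs(rs_integral_floor(lambda x: x * x, 5) - 55) < 1e-6
```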
Cauchy’s Condition
We defined the real numbers to be a complete uniform space, meaning that sequences are convergent if and only if they are Cauchy. Let’s write these two out in full:
- A sequence $a_n$ is convergent if there is some $L$ so that for every $\epsilon > 0$ there is an $N$ such that $n \geq N$ implies $|a_n - L| < \epsilon$.
- A sequence $a_n$ is Cauchy if for every $\epsilon > 0$ there is an $N$ such that $m \geq N$ and $n \geq N$ imply $|a_m - a_n| < \epsilon$.
See how similar the two definitions are. Convergent means that the points of the sequence are getting closer and closer to some fixed $L$. Cauchy means that the points of the sequence are getting closer to each other.

Now there’s no reason we can’t try the same thing when we’re taking the limit of a function at $\infty$. In fact, the definition of convergence of such a limit is already pretty close to the above definition. How can we translate the Cauchy condition? Simple. We just require that for every $\epsilon > 0$ there exist some $X$ so that for any two points $x \geq X$ and $y \geq X$ we have $|f(x) - f(y)| < \epsilon$.
So let’s consider a function $f$ defined in the ray $\left[a,\infty\right)$. If the limit $\lim\limits_{x\to\infty} f(x)$ exists, with value $L$, then for every $\epsilon > 0$ there is an $X$ so that $x \geq X$ implies $|f(x) - L| < \frac{\epsilon}{2}$. Then taking $y \geq X$ as well, we see that

$$|f(x) - f(y)| \leq |f(x) - L| + |L - f(y)| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$

and so the Cauchy condition holds.
Now let’s assume that the Cauchy condition holds. Define the sequence $x_n = f(a + n)$. This is now a Cauchy sequence, and so it converges to a limit $L$, which I assert is also the limit of $f$ at $\infty$. Given an $\epsilon > 0$, choose an $X$ so that $|f(x) - f(y)| < \frac{\epsilon}{2}$ for any two points $x$ and $y$ above $X$, and also so that $|x_n - L| < \frac{\epsilon}{2}$ whenever $a + n \geq X$. Just take an $X$ for each condition, and go with the larger one. In fact, we may as well round $X$ up so that $X = a + N$ for some natural number $N$. Then for any $x \geq X$ we have

$$|f(x) - L| \leq |f(x) - f(a + N)| + |x_N - L| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$

and so the limit at infinity exists.
In the particular case of an improper integral, we have $F(x) = \int_a^x f\,d\alpha$. Then $F(y) - F(x) = \int_x^y f\,d\alpha$. Our condition then reads:

For every $\epsilon > 0$ there is an $X$ so that $y > x \geq X$ implies

$$\left|\int_x^y f\,d\alpha\right| < \epsilon$$
Absolute Convergence
Let’s apply one of the tests from last time. Let $\alpha$ be a nondecreasing integrator on the ray $\left[a,\infty\right)$, and $f$ be any function integrable with respect to $\alpha$ through the whole ray. Then if the improper integral $\int_a^\infty |f|\,d\alpha$ converges, then so does $\int_a^\infty f\,d\alpha$.
To see this, notice that $-|f(x)| \leq f(x) \leq |f(x)|$, and so $0 \leq f(x) + |f(x)| \leq 2|f(x)|$. Then since $\int_a^\infty 2|f|\,d\alpha$ converges we see that $\int_a^\infty \left(f + |f|\right)d\alpha$ converges. Subtracting off the integral of $|f|$ we get our result. (Technically, to do this we need to extend the linearity properties of Riemann-Stieltjes integrals to improper integrals, but this is straightforward.)
When the integral of $|f|$ converges like this, we say that the integral of $f$ is “absolutely convergent”. The above theorem shows us that absolute convergence implies convergence, but it doesn’t necessarily hold the other way around. If the integral of $f$ converges, but that of $|f|$ doesn’t, we say that the former is “conditionally convergent”.
Convergence Tests for Improper Integrals
We have a few tests that will come in handy for determining if an improper integral converges. In all of these we’ll have an integrator $\alpha$ on the ray $\left[a,\infty\right)$, and a function $f$ which is integrable with respect to $\alpha$ on the interval $\left[a,b\right]$ for all $b > a$.
First, say $\alpha$ is nondecreasing and $f$ is nonnegative. Then the integral $\int_a^\infty f\,d\alpha$ converges if and only if there is a constant $M$ so that

$$\int_a^b f\,d\alpha \leq M$$

for every $b \geq a$. This follows because the function $F(b) = \int_a^b f\,d\alpha$ is then nondecreasing, and a nondecreasing function bounded above must have a finite limit at infinity. Indeed, the set of values of $F$ must be bounded above, and so there is a least upper bound. It’s straightforward to show that the limit $\lim\limits_{b\to\infty} F(b)$ is this least upper bound.
Now if $\alpha$ is nondecreasing and $f$ and $g$ are two nonnegative functions with $f(x) \leq g(x)$, then if the improper integral of $g$ converges then so does that of $f$, and we have the inequality

$$\int_a^\infty f\,d\alpha \leq \int_a^\infty g\,d\alpha$$

since for every $b$ we have

$$\int_a^b f\,d\alpha \leq \int_a^b g\,d\alpha$$

On the other hand, if the improper integral of $f$ diverges, then that of $g$ must diverge.
If $\alpha$ is nondecreasing and we have two nonnegative functions $f$ and $g$ so that

$$\lim_{x\to\infty} \frac{f(x)}{g(x)} = 1$$

then their improper integrals either both converge or both diverge. This limit implies there must be some $X$ beyond which we have $\frac{1}{2} \leq \frac{f(x)}{g(x)} \leq 2$. That is, for $x \geq X$ we have both $f(x) \leq 2g(x)$ and $g(x) \leq 2f(x)$, and the result follows by two applications of the previous theorem.
Notice that this last theorem also holds if the limit of the ratio converges to any nonzero number. Also notice how the convergence of the integral only depends on the behavior of our functions in some neighborhood of $\infty$: we use their behavior in the ray $\left[X,\infty\right)$ even though we started by looking for convergence over the ray $\left[a,\infty\right)$.
Improper Integrals I
We’ve dealt with Riemann integrals and their extensions to Riemann-Stieltjes integrals. But these are both defined to integrate a function over a finite interval. What if we want to integrate over an infinite ray, like all positive numbers?
As a specific example, let’s consider the function $f(x) = \frac{1}{x^2}$, and let it be defined on the ray $\left[1,\infty\right)$. For any real number $b \geq 1$ we can pick some $c > b$. In the interval $\left[1,c\right]$ the function $\frac{1}{x^2}$ is continuous and of bounded variation (in fact it’s decreasing), and so it’s integrable with respect to $x$. Then it’s integrable over the subinterval $\left[1,b\right]$. Why not just start by saying it’s integrable over $\left[1,b\right]$? Because now we have a function on $\left[1,c\right]$ defined by

$$F(b) = \int_1^b \frac{dx}{x^2}$$

Since $x$ is differentiable and $\frac{1}{x^2}$ is continuous at $b$, we see that $F$ is differentiable here, and its derivative is $F'(b) = \frac{1}{b^2}$. This result is independent of the $c$ we picked.
Since we can do this for any $b \geq 1$ we get a function $F$ defined for $b \geq 1$. Its derivative must be $\frac{1}{b^2}$, and we can check that $-\frac{1}{b}$ also has this derivative, so these two functions can only differ by a constant. Clearly we want $F(1) = 0$, since at that point we’re “integrating” over a degenerate interval consisting of a single point. This fixes our function as

$$F(b) = 1 - \frac{1}{b}$$
Now our question is, what happens as we take $b$ to get larger and larger? Our intervals $\left[1,b\right]$ get bigger and bigger, trying to fill out the whole ray $\left[1,\infty\right)$. And for each one we have a value for the integral: $F(b) = 1 - \frac{1}{b}$. So we take the limit as $b$ approaches infinity:

$$\lim_{b\to\infty} F(b) = \lim_{b\to\infty}\left(1 - \frac{1}{b}\right) = 1$$

This will be the value of the integral over the entire ray.
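A numerical sketch of my own, comparing a crude Riemann sum for the integral of $\frac{1}{x^2}$ over $\left[1,b\right]$ against the closed form $F(b) = 1 - \frac{1}{b}$ and watching the values approach $1$:

```python
# F(b) = 1 - 1/b is the integral of 1/x^2 over [1, b]; numerically the
# integral approaches 1 as b grows.
def F(b):
    return 1.0 - 1.0 / b

for b in (10, 100, 1000):
    # Compare the closed form against a midpoint Riemann sum on [1, b].
    n = 100000
    h = (b - 1) / n
    riemann = sum(1.0 / (1 + (i + 0.5) * h)**2 for i in range(n)) * h
    assert abs(riemann - F(b)) < 1e-3
assert abs(F(10**9) - 1.0) < 1e-8
```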
We turn this rubric into a definition: given a function $f$ that is integrable with respect to $\alpha$ over the interval $\left[a,b\right]$ for all $b \geq a$, we can define a function $F$ on $\left[a,\infty\right)$ by

$$F(b) = \int_a^b f\,d\alpha$$

We define the improper integral to be the limit

$$\int_a^\infty f\,d\alpha = \lim_{b\to\infty} F(b)$$

if this limit exists. Otherwise we say that the integral diverges.
We can similarly define improper integrals for leftward rays as

$$\int_{-\infty}^b f\,d\alpha = \lim_{a\to-\infty} \int_a^b f\,d\alpha$$

And over the entire real line by choosing an arbitrary point $c$ and defining

$$\int_{-\infty}^\infty f\,d\alpha = \int_{-\infty}^c f\,d\alpha + \int_c^\infty f\,d\alpha$$

That is, we take the two bounds of integration to go to their respective infinities separately. It must be noted that the limit where they go to infinity together:

$$\lim_{b\to\infty} \int_{-b}^b f\,d\alpha$$

may exist even if the improper integral diverges. In this case we call it the “Cauchy principal value” of the integral, but it is not the only justifiable value we could assign to the integral. For example, it’s easy to check that

$$\int_{-b}^b x\,dx = 0$$

so the Cauchy principal value is $0$. However, we might also consider

$$\lim_{b\to\infty} \int_{-b}^{2b} x\,dx = \lim_{b\to\infty} \frac{3b^2}{2}$$

which diverges.
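Since the antiderivative of $x$ is $\frac{x^2}{2}$, all of these integrals can be evaluated exactly, and a tiny sketch (my own) makes the asymmetry plain:

```python
# The Cauchy principal value of the integral of x over the whole line is 0,
# but letting the bounds go to infinity at different rates gives a different
# answer: the two-sided improper integral does not exist.
def integral_of_x(a, b):
    # Exact: the antiderivative of x is x^2/2.
    return (b * b - a * a) / 2.0

for b in (1.0, 10.0, 1000.0):
    assert integral_of_x(-b, b) == 0.0          # symmetric bounds: always 0
assert integral_of_x(-1000.0, 2000.0) == 1.5e6  # [-b, 2b] grows like 3b^2/2
```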