Taylor’s Theorem
I’ve decided I really do need one convergence result for Taylor series. In the form we’ll consider today, it’s an extension of the ideas in the Fundamental Theorem of Calculus.
Recall that if the function $f$ has a continuous derivative $f'$, then the Fundamental Theorem of Calculus states that

$$\int_{x_0}^x f'(t)\,dt = f(x) - f(x_0)$$

Or, rearranging a bit

$$f(x) = f(x_0) + \int_{x_0}^x f'(t)\,dt$$

That is, we start with the value at $x_0$, and we can integrate up the derivative to find how to adjust and find the value at the nearby point $x$. Now if $f'$ is itself continuously differentiable we can integrate by parts to find

$$\int_{x_0}^x f'(t)\,dt = xf'(x) - x_0f'(x_0) - \int_{x_0}^x tf''(t)\,dt$$

Then we use the FToC to replace

$$f'(x) = f'(x_0) + \int_{x_0}^x f''(t)\,dt$$

and collecting terms we get

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \int_{x_0}^x f''(t)(x - t)\,dt$$

And if $f''$ is itself continuously differentiable we can proceed to find

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2}(x - x_0)^2 + \frac{1}{2}\int_{x_0}^x f'''(t)(x - t)^2\,dt$$

Is this starting to look familiar?

At the $n$th step we've got

$$f(x) = \sum_{k=0}^n \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + \frac{1}{n!}\int_{x_0}^x f^{(n+1)}(t)(x - t)^n\,dt$$

and if $f^{(n+1)}$ is continuously differentiable we can integrate by parts and use the FToC to find

$$f(x) = \sum_{k=0}^{n+1} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + \frac{1}{(n+1)!}\int_{x_0}^x f^{(n+2)}(t)(x - t)^{n+1}\,dt$$

The sum is the $n$th Taylor polynomial for $f$ — the beginning of the Taylor series of $f$ — at the point $x_0$, and the integral we call the "integral remainder term"

$$R_n(x) = \frac{1}{n!}\int_{x_0}^x f^{(n+1)}(t)(x - t)^n\,dt$$

For infinitely-differentiable functions we can define $R_n(x)$ for all $n$ and get a sequence. The function $f$ is then analytic if this sequence of errors converges to $0$ in a neighborhood of $x_0$.
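To make the remainder concrete, here's a minimal numerical sketch of my own (not part of the argument above), using $f = \exp$ about $x_0 = 0$, where every derivative at the center equals $1$. The remainders $R_n(1) = e - T_n(1)$ should shrink to $0$, since the exponential is analytic everywhere.

```python
import math

# Taylor polynomials of exp about x0 = 0: T_n(x) = sum_{k<=n} x^k / k!.
# The remainder R_n(x) = exp(x) - T_n(x) should tend to 0 as n grows.
def taylor_exp(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

for n in range(0, 13, 3):
    print(f"n = {n:2d}   R_n(1) = {math.exp(1.0) - taylor_exp(1.0, n):.3e}")
```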
Analytic Functions
Okay, we know that power series define functions, and that the functions so defined have derivatives, which have power series expansions. And thus these derivatives have derivatives themselves, and so on. Thus a function defined by a power series in a given disk is actually infinitely differentiable within that same disk.
What about the other way? Say we have a function $f$ with arbitrarily high derivatives at the point $z_0$. We know that if this function has a power series about $z_0$, then the only possible sequence of coefficients is given by the formula

$$a_n = \frac{f^{(n)}(z_0)}{n!}$$

But does this sequence actually give a power series expansion of $f$? That is, does the (in)famous "Taylor series"

$$\sum_{n=0}^\infty \frac{f^{(n)}(z_0)}{n!}(z - z_0)^n$$

converge to the function $f$ in any neighborhood of $z_0$? If so, we'll call the function "analytic" at $z_0$.

So, are all infinitely-differentiable functions analytic? Are all functions for which the Taylor series at $z_0$ makes sense actually the limit of said Taylor series near $z_0$? Well, the fact that we have a special name should give a hint that the answer isn't always "yes".
We’ve been working with complex power series, but let’s specialize now to real power series. That is, all the coefficients are real, we center them around real points, and they converge within a real disk — an interval — of a given radius.
Now in this context we can consider the function defined by $f(x) = e^{-1/x^2}$ away from $x = 0$. It's straightforward to calculate

$$f'(x) = \frac{2}{x^3}\,e^{-1/x^2}$$

And if we define $f(0) = 0$ it turns out to even be differentiable there. The derivative turns out to be $f'(0) = 0$. And we can also calculate

$$f''(x) = \left(\frac{4}{x^6} - \frac{6}{x^4}\right)e^{-1/x^2}$$

And so on. The $n$th derivative will be $f^{(n)}(x) = p_n\!\left(\frac{1}{x}\right)e^{-1/x^2}$, where $p_n$ is a polynomial. We can calculate

$$f^{(n+1)}(x) = \left(\frac{2}{x^3}\,p_n\!\left(\frac{1}{x}\right) - \frac{1}{x^2}\,p_n'\!\left(\frac{1}{x}\right)\right)e^{-1/x^2}$$

That is, we can set $p_0(u) = 1$ and

$$p_{n+1}(u) = 2u^3 p_n(u) - u^2 p_n'(u)$$

and thus recursively define a sequence of polynomials.

Thus for each degree we have a polynomial in $\frac{1}{x}$ multiplied by $e^{-1/x^2}$, and in the limit as $x$ goes to $0$ the latter clearly wins the race. For each derivative we can fill in the "gap" at $x = 0$ by defining $f^{(n)}(0) = 0$.

But now when we set up the Taylor series around $0$ what happens? The series is

$$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = \sum_{n=0}^\infty \frac{0}{n!}x^n$$

which clearly converges to the constant function $0$. That is, the Taylor series of this function at $0$ converges to nothing like the function itself. This function is infinitely differentiable at $0$, but it is not analytic there.
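Here's a quick numerical illustration of my own, using the recursion just derived: every derivative of $e^{-1/x^2}$ gets squeezed to $0$ as $x$ approaches $0$, which is why every Taylor coefficient at $0$ vanishes.

```python
import math

# Build the polynomials p_n via p_{n+1}(u) = 2u^3 p_n(u) - u^2 p_n'(u)
# and evaluate f^{(n)}(x) = p_n(1/x) exp(-1/x^2) near 0.
def next_poly(p):
    """p is a coefficient list (p[k] is the u^k coefficient)."""
    out = [2 * c for c in [0, 0, 0] + p]   # 2 u^3 p(u)
    for k, c in enumerate(p):
        if k >= 1:
            out[k + 1] -= k * c            # minus u^2 p'(u)
    return out

def nth_derivative(n, x):
    p = [1]                                # p_0 = 1
    for _ in range(n):
        p = next_poly(p)
    u = 1 / x
    return sum(c * u**k for k, c in enumerate(p)) * math.exp(-u * u)

for x in (0.5, 0.2, 0.1, 0.05):
    print(f"x = {x:4}   f'''(x) = {nth_derivative(3, x):.3e}")
# each derivative vanishes as x -> 0, so all Taylor coefficients at 0 are 0
```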
There are a lot of theorems about what conditions on an infinitely-differentiable function make it analytic, but I’m going to leave them alone for now.
Inverses of Power Series
Now that we know how to compose power series, we can invert them. But against expectations I’m talking about multiplicative inverses instead of compositional ones.
More specifically, say we have a power series expansion

$$f(z) = \sum_{n=0}^\infty a_n z^n$$

within the radius $r$, and such that $a_0 = f(0) \neq 0$. Then there is some radius $\delta > 0$ within which the reciprocal has a power series expansion

$$\frac{1}{f(z)} = \sum_{n=0}^\infty b_n z^n$$

In particular, we have $b_0 = \frac{1}{a_0}$.

In the proof we may assume that $a_0 = 1$ — we can just divide the series through by $a_0$ — and so $b_0 = 1$. We can set

$$g(z) = \sum_{n=1}^\infty a_n z^n = f(z) - 1$$

within the radius $r$. Since we know that this series vanishes at $z = 0$, continuity tells us that there's a $\delta$ so that $|z| < \delta$ implies $\sum_{n=1}^\infty |a_n z^n| < 1$.

Now we set

$$\frac{1}{f(z)} = \frac{1}{1 + g(z)} = \sum_{k=0}^\infty (-1)^k g(z)^k$$

And then we can find a power series expansion of $\frac{1}{f(z)}$ by composing the geometric series with the series for $g$.

It's interesting to note that you might expect a reciprocal formula to follow from the multiplication formula. Set the product of $\sum_{n=0}^\infty a_n z^n$ and an undetermined series $\sum_{n=0}^\infty b_n z^n$ equal to the power series $1$, and get an infinite sequence of algebraic conditions determining the $b_n$ in terms of the $a_n$. Showing that these can all be solved is possible, but it's easier to come around the side like this.
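That "undetermined coefficients" route is easy to see in code, though. Here's a small sketch of mine that solves the conditions $a_0 b_0 = 1$ and $\sum_{k=0}^n a_k b_{n-k} = 0$ term by term, using the series for $e^z$ as a test case; its reciprocal should be the series for $e^{-z}$.

```python
from fractions import Fraction
from math import factorial

# Solve a_0 b_0 = 1 and, for n >= 1, sum_{k=0}^n a_k b_{n-k} = 0,
# one coefficient at a time.
def reciprocal_coeffs(a, terms):
    b = [1 / Fraction(a[0])]
    for n in range(1, terms):
        s = sum(a[k] * b[n - k] for k in range(1, n + 1))
        b.append(-s / a[0])
    return b

a = [Fraction(1, factorial(n)) for n in range(8)]   # e^z
print(reciprocal_coeffs(a, 8))                      # (-1)^n / n!, i.e. e^{-z}
```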
Composition of Power Series
Now that we can take powers of functions defined by power series, and define them by power series in the same radii… well, we're all set to compose functions defined by power series!
Let's say we have two power series expansions about $0$:

$$f(z) = \sum_{k=0}^\infty a_k z^k$$

within the radius $r$, and

$$g(z) = \sum_{n=0}^\infty b_n z^n$$

within the radius $s$.

Now let's take a $z$ with $|z| < s$ and $\sum_{n=0}^\infty |b_n z^n| < r$. Then we have a power series expansion for the composite:

$$f(g(z)) = \sum_{n=0}^\infty c_n z^n$$

The coefficients are defined as follows: first, define $b_n^{(k)}$ to be the coefficient of $z^n$ in the expansion of $g(z)^k$, then we set

$$c_n = \sum_{k=0}^\infty a_k b_n^{(k)}$$

To show this, first note that the hypothesis on $z$ assures that $|g(z)| \leq \sum_{n=0}^\infty |b_n z^n| < r$, so we can write

$$f(g(z)) = \sum_{k=0}^\infty a_k g(z)^k = \sum_{k=0}^\infty a_k \sum_{n=0}^\infty b_n^{(k)} z^n$$

If we are allowed to exchange the order of summation, then formally the result follows. To justify this (at least as well as we've been justifying such rearrangements recently) we need to show that

$$\sum_{k=0}^\infty \sum_{n=0}^\infty \left|a_k b_n^{(k)} z^n\right|$$

converges. But remember that each of the coefficients $b_n^{(k)}$ is itself a finite sum, so we find

$$\left|b_n^{(k)}\right| \leq \sum_{n_1 + \dots + n_k = n} |b_{n_1}| \dots |b_{n_k}| = \bar{b}_n^{(k)}$$

On the other hand, in parallel with our computation last time we find that

$$\sum_{n=0}^\infty \bar{b}_n^{(k)} |z|^n = \left(\sum_{n=0}^\infty |b_n| |z|^n\right)^k$$

where $\bar{b}_n^{(k)}$ is exactly the coefficient of $z^n$ in the $k$th power of the series of absolute values. So we find

$$\sum_{k=0}^\infty \sum_{n=0}^\infty \left|a_k b_n^{(k)} z^n\right| \leq \sum_{k=0}^\infty |a_k| \left(\sum_{n=0}^\infty |b_n z^n|\right)^k$$

which must then converge, since by hypothesis $\sum_{n=0}^\infty |b_n z^n| < r$ lies within the radius of absolute convergence of the series for $f$.
Breathe!
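To make the bookkeeping concrete, here's a sketch of my own: truncated series arithmetic that builds the $b_n^{(k)}$ as coefficients of $g(z)^k$ and assembles $c_n = \sum_k a_k b_n^{(k)}$. The test case composes $f(w) = e^w$ with $g(z) = \log(1+z)$, where the composite should collapse to $1 + z$.

```python
from fractions import Fraction
from math import factorial

N = 8   # number of coefficients to track; exact here since b_0 = 0

def multiply(p, q):
    """Cauchy product of two coefficient lists, truncated to N terms."""
    out = [Fraction(0)] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                out[i + j] += pi * qj
    return out

def compose(a, b):
    power = [Fraction(0)] * N
    power[0] = Fraction(1)            # g(z)^0 = 1
    c = [Fraction(0)] * N
    for k in range(N):                # with b_0 = 0, g^k only touches z^n
        for n in range(N):            # for n >= k, so k < N suffices
            c[n] += a[k] * power[n]
        power = multiply(power, b)    # advance to g^{k+1}
    return c

a = [Fraction(1, factorial(k)) for k in range(N)]                        # e^w
b = [Fraction(0)] + [Fraction((-1) ** (n + 1), n) for n in range(1, N)]  # log(1+z)
print(compose(a, b))   # expect [1, 1, 0, 0, ...] since e^{log(1+z)} = 1 + z
```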
Products of Power Series
Formally, we defined the product of two power series to be the series you get when you multiply out all the terms and collect terms of the same degree. Specifically, consider the series $\sum_{m=0}^\infty a_m z^m$ and $\sum_{n=0}^\infty b_n z^n$. Their product will be the series $\sum_{n=0}^\infty c_n z^n$, where the coefficients are defined by

$$c_n = \sum_{k=0}^n a_k b_{n-k}$$

Now if the series converge within radii $r_1$ and $r_2$, respectively, it wouldn't make sense for the product of the functions to be anything but whatever this converges to. But in what sense is this the case?

Like when we translated power series, I'm going to sort of wave my hands here, motivating it by the fact that absolute convergence makes things nice.

Let's take a point $z$ inside both of the radii of convergence. Then we know that the series

$$\sum_{m=0}^\infty a_m z^m \qquad \text{and} \qquad \sum_{n=0}^\infty b_n z^n$$

both converge absolutely. We want to consider the product of these limits

$$\left(\sum_{m=0}^\infty a_m z^m\right)\left(\sum_{n=0}^\infty b_n z^n\right)$$

Since the limit of the first sequence converges, we'll just take it as a constant and distribute it over the other:

$$\sum_{n=0}^\infty \left(\sum_{m=0}^\infty a_m z^m\right) b_n z^n$$

And now we'll just distribute each $b_n z^n$ across the sum it appears with:

$$\sum_{n=0}^\infty \sum_{m=0}^\infty a_m b_n z^{m+n}$$

And now we'll use the fact that all the series in sight converge absolutely to rearrange this sum, adding up all the terms of the same total degree at once, and pull out factors of $z$:

$$\sum_{n=0}^\infty \left(\sum_{k=0}^n a_k b_{n-k}\right) z^n = \sum_{n=0}^\infty c_n z^n$$

As a special case, we can work out powers of power series. Say that $f(z) = \sum_{n=0}^\infty a_n z^n$ within a radius $r$ of $0$. Then within the same radius of $0$ we have

$$f(z)^k = \sum_{n=0}^\infty a_n^{(k)} z^n$$

where the coefficients are defined by

$$a_n^{(k)} = \sum_{n_1 + \dots + n_k = n} a_{n_1} \dots a_{n_k}$$
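As a concrete check (my own sketch, not from the discussion above), here's the convolution formula applied to the geometric series: squaring $\frac{1}{1-z} = \sum z^n$ should give coefficients $n+1$, matching the series for $\frac{1}{(1-z)^2}$.

```python
# Quick check of c_n = sum_{k<=n} a_k b_{n-k}: squaring the geometric
# series sum z^n (all coefficients 1) should give sum (n+1) z^n.
N = 8
a = [1] * N
b = [1] * N
c = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]
print(c)   # [1, 2, 3, 4, 5, 6, 7, 8]
```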
Uniqueness of Power Series Expansions
Sorry for the delay. Grading.
Now we have power series expansions of functions around various points, and within various radii of convergence. We even have formulas to relate expansions about nearby points. But when we move from one point to a nearby point, the resulting series is only guaranteed to converge in a disk contained within the original disk. But then moving back to the original point we are only guaranteed convergence in an even smaller disk. Something seems amiss here.
Let's look closely at the power series expansion about a given point $z_0$:

$$f(z) = \sum_{n=0}^\infty a_n (z - z_0)^n$$

converging for $|z - z_0| < r$. We know that this function has a derivative, which again has a power series expansion about $z_0$:

$$f'(z) = \sum_{n=0}^\infty (n+1) a_{n+1} (z - z_0)^n$$

converging in the same radius. And so on, we find arbitrarily high derivatives

$$f^{(k)}(z) = \sum_{n=0}^\infty \frac{(n+k)!}{n!} a_{n+k} (z - z_0)^n$$

Now, we can specialize this by evaluating at the central point $z_0$ to find $f^{(k)}(z_0) = k!\,a_k$. That is, we have the formula for the power series coefficients:

$$a_k = \frac{f^{(k)}(z_0)}{k!}$$

This formula specifies the sequence of coefficients of a power series expansion of a function about a point uniquely in terms of the derivatives of the function at that point. That is, there is at most one power series expansion of any function about a given point.
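To see the formula in action, here's a small sketch of mine using $f(z) = \frac{1}{1-z}$, whose derivatives have the closed form $f^{(k)}(z) = \frac{k!}{(1-z)^{k+1}}$. About $0$ the formula recovers the geometric series coefficients, all $1$; about $-\frac{1}{2}$ it gives $\left(\frac{2}{3}\right)^{k+1}$.

```python
from fractions import Fraction
from math import factorial

# a_k = f^(k)(z0)/k! for f(z) = 1/(1-z), via f^(k)(z) = k!/(1-z)^{k+1}.
def coeff(k, z0):
    f_k = factorial(k) / (1 - z0) ** (k + 1)   # k-th derivative at z0
    return f_k / factorial(k)

print([coeff(k, Fraction(0)) for k in range(5)])       # geometric: all 1
print([coeff(k, Fraction(-1, 2)) for k in range(5)])   # (2/3)^{k+1}
```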
So in our first example, of moving away from a point and back, the resulting series has the same coefficients we started with. Thus even though we were only assured that the series would converge in a much smaller disk, it actually converges in a larger disk than our formula guaranteed. In fact, this happens a lot: moving from one point to another we actually break new ground and “analytically continue” the function to a larger domain.
That is, we now have two overlapping disks, and each one contains points the other misses. Each disk has a power series expansion of a function. These expansions agree on the parts of the disks that overlap, so it doesn’t matter which rule we use to compute the function in that region. We thus have expanded the domain of our function by choosing different points about which to expand a power series.
Derivatives of Power Series
The uniform convergence of a power series establishes that the function it represents must be continuous. Not only that, but it turns out that the limiting function must be differentiable.
A side note here: we define the derivative of a complex function by exactly the same limit of a difference quotient as before. There’s a lot to be said about derivatives of complex functions, but we’ll set the rest aside until later.
Now, to be specific: if the power series $\sum_{n=0}^\infty a_n (z - z_0)^n$ converges for $|z - z_0| < r$ to a function $f(z)$, then $f$ has a derivative $f'(z)$, which itself has a power series expansion

$$f'(z) = \sum_{n=0}^\infty (n+1) a_{n+1} (z - z_0)^n$$

which converges within the same radius $r$.

Given a point $z_1$ within $r$ of $z_0$, we can expand $f$ as a power series about $z_1$:

$$f(z) = \sum_{n=0}^\infty b_n (z - z_1)^n$$

convergent within some radius $\delta$ of $z_1$. Then for $z$ in this smaller disk of convergence we have

$$\frac{f(z) - f(z_1)}{z - z_1} = b_1 + \sum_{n=2}^\infty b_n (z - z_1)^{n-1}$$

by manipulations we know to work for series. Then the series on the right must converge to a continuous function, and continuity tells us that each term with $n \geq 2$ vanishes as $z$ approaches $z_1$. Thus $f'(z_1)$ exists and equals $b_1$. But our formula for $b_1$ tells us

$$f'(z_1) = b_1 = \sum_{n=1}^\infty n a_n (z_1 - z_0)^{n-1} = \sum_{n=0}^\infty (n+1) a_{n+1} (z_1 - z_0)^n$$

Finally, we can apply the root test again. The terms are now $(n+1)a_{n+1}(z - z_0)^n$, and

$$\sqrt[n]{(n+1)|a_{n+1}|} = \sqrt[n]{n+1}\,\sqrt[n]{|a_{n+1}|}$$

Since the first radical expression goes to $1$, the limit superior is the same as in the original series for $f$: $\frac{1}{r}$. Thus the derived series has the same radius of convergence.

Notice now that we can apply the exact same reasoning to $f'$, and find that it has a derivative $f''(z)$, which has a power series expansion

$$f''(z) = \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} (z - z_0)^n$$

which again converges within the same radius. And so on, we determine that the limiting function of the power series has derivatives of arbitrarily large orders.
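Here's a quick numerical sanity check of my own, using the geometric series $\sum z^n$: its term-by-term derivative $\sum (n+1)z^n$ should converge to $\frac{1}{(1-z)^2}$ inside the unit disk.

```python
# Term-by-term differentiation check for f(z) = sum z^n = 1/(1-z):
# the derived series sum (n+1) z^n should converge to 1/(1-z)^2.
N = 200
a = [1] * N
da = [(n + 1) * a[n + 1] for n in range(N - 1)]   # derived coefficients

z = 0.5
partial = sum(c * z**n for n, c in enumerate(da))
print(partial, 1 / (1 - z) ** 2)                  # both approximately 4.0
```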
Translating Power Series
So we know that we can have two power series expansions of the same function about different points. How are they related? An important step in this direction is given by the following theorem.
Suppose that the power series $\sum_{n=0}^\infty a_n (z - z_0)^n$ converges for $|z - z_0| < r$, and that it represents the function $f(z)$ in some open subset $U$ of this disk. Then for every point $z_1 \in U$ there is some open disk around $z_1$ of radius $\delta$ contained in $U$, in which $f$ has a power series expansion

$$f(z) = \sum_{k=0}^\infty b_k (z - z_1)^k$$

where

$$b_k = \sum_{n=k}^\infty \binom{n}{k} a_n (z_1 - z_0)^{n-k}$$

The proof is almost straightforward. We expand

$$f(z) = \sum_{n=0}^\infty a_n \left((z_1 - z_0) + (z - z_1)\right)^n = \sum_{n=0}^\infty \sum_{k=0}^n \binom{n}{k} a_n (z_1 - z_0)^{n-k} (z - z_1)^k$$
Now we need to interchange the order of summation. Strictly speaking, we haven’t established a condition that will allow us to make this move. However, I hope you’ll find it plausible that if this double series converges absolutely, we can adjust the order of summations freely. Indeed, we’ve seen examples of other rearrangements that all go through as soon as the convergence is absolute.
Now we consider the absolute values

$$\sum_{n=0}^\infty \sum_{k=0}^n \left|\binom{n}{k} a_n (z_1 - z_0)^{n-k} (z - z_1)^k\right| = \sum_{n=0}^\infty |a_n| \left(|z_1 - z_0| + |z - z_1|\right)^n = \sum_{n=0}^\infty |a_n| \rho^n$$

where we set $\rho = |z_1 - z_0| + |z - z_1|$. But then $\rho < |z_1 - z_0| + \delta \leq r$, where the last inequality holds because the disk around $z_1$ of radius $\delta$ fits within $U$, which fits within the disk of radius $r$ around $z_0$. And so this series of absolute values must converge, and we'll take it on faith for the moment (to be shored up when we attack double series more thoroughly) that we can now interchange the order of summations.
This result allows us to recenter our power series expansions, but it only assures that the resulting series will converge in a disk which is contained within the original disk of convergence, so we haven’t necessarily gotten anything new. Yet.
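As a numerical sketch of the recentering formula (my own example), take the geometric series $a_n = 1$ about $z_0 = 0$ and move it to $z_1 = -\frac{1}{2}$. Truncating the sum for $b_k$, the coefficients should approach $\left(\frac{2}{3}\right)^{k+1}$, agreeing with the expansion of $\frac{1}{1-z}$ about $-\frac{1}{2}$.

```python
from math import comb

# Recentering b_k = sum_{n>=k} C(n,k) a_n (z1 - z0)^{n-k}, truncated at
# N terms, for the geometric series (a_n = 1, z0 = 0) moved to z1 = -1/2.
N = 200
z1 = -0.5
for k in range(5):
    b_k = sum(comb(n, k) * z1 ** (n - k) for n in range(k, N))
    print(k, round(b_k, 12), (2 / 3) ** (k + 1))
```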
Power Series Expansions
Up to this point we've been talking about power series like $\sum_{n=0}^\infty c_n z^n$, where "power" refers to powers of $z$. This led us to show that when we evaluate a power series, the result converges in a disk centered at $0$. But what's so special about zero?

Indeed, we could just as well write a series like $\sum_{n=0}^\infty c_n (z - z_0)^n$ for any point $z_0$. The result is just like picking up our original power series and carrying it over a bit. In particular, it still converges — and within the same radius — but now in a disk centered at $z_0$.
So when we have an equation like $f(z) = \sum_{n=0}^\infty c_n (z - z_0)^n$, where the given series converges within the radius $r$, we say that the series "represents" $f$ in the disk of convergence. Alternately, we call the series itself a "power series expansion" of $f$ about $z_0$.
For example, consider the series

$$\sum_{n=0}^\infty \frac{2^{n+1}}{3^{n+1}}\left(z + \frac{1}{2}\right)^n$$

A simple application of the root test tells us that this series converges in the disk $\left|z + \frac{1}{2}\right| < \frac{3}{2}$, of radius $\frac{3}{2}$ about the point $-\frac{1}{2}$. Some algebra shows us that if we multiply this series by $1 - z$ we get $1$. Thus the series is a power series expansion of $\frac{1}{1-z}$ about $-\frac{1}{2}$.
This new power series expansion actually subsumes the old one (the geometric series $\sum_{n=0}^\infty z^n$), since every point within $1$ of $0$ is also within $\frac{3}{2}$ of $-\frac{1}{2}$. But sometimes disks overlap only partly. Then each expansion describes the behavior of the function at values of $z$ that the other one cannot. And of course no power series expansion can describe what happens at a discontinuity.
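To close with a numerical check of my own: partial sums of the recentered series should match $\frac{1}{1-z}$ even at points like $z = -1.4$, which lie in the new disk of radius $\frac{3}{2}$ but outside the unit disk where the geometric series converges.

```python
# The recentered series sum (2/3)^{n+1} (z + 1/2)^n should equal 1/(1-z)
# throughout |z + 1/2| < 3/2, including points outside the unit disk.
def recentered(z, N=500):
    return sum((2 / 3) ** (n + 1) * (z + 0.5) ** n for n in range(N))

for z in (0.25, -0.9, -1.4):
    print(z, recentered(z), 1 / (1 - z))
```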