Now that we know how to compose power series, we can invert them. But against expectations I’m talking about multiplicative inverses instead of compositional ones.
More specifically, say we have a power series expansion
$$f(z)=\sum_{n=0}^\infty a_nz^n$$
within the radius $r$, and such that $f(0)=a_0\neq0$. Then there is some radius within which the reciprocal has a power series expansion
$$\frac{1}{f(z)}=\sum_{n=0}^\infty b_nz^n$$
In particular, we have $b_0=\frac{1}{a_0}$.
In the proof we may assume that $a_0=1$ — we can just divide the series through by $a_0$ — and so $f(0)=1$. We can set
$$g(z)=1-f(z)=-\sum_{n=1}^\infty a_nz^n$$
within the radius $r$. Since we know that $g(0)=0$, and since $\sum_{n=1}^\infty|a_n||z|^n$ is continuous in $|z|$ and vanishes at $z=0$, there’s some $\delta>0$ so that $|z|<\delta$ implies $\sum_{n=1}^\infty|a_n||z|^n<1$.
Now we set
$$h(w)=\sum_{n=0}^\infty w^n=\frac{1}{1-w}$$
which converges for $|w|<1$. And then we can find a power series expansion of
$$h(g(z))=\frac{1}{1-g(z)}=\frac{1}{f(z)}$$
It’s interesting to note that you might expect a reciprocal formula to follow from the multiplication formula. Set the product of $f(z)=\sum_{n=0}^\infty a_nz^n$ and an undetermined $\sum_{n=0}^\infty b_nz^n$ to the power series $1$, and get an infinite sequence of algebraic conditions determining the $b_n$ in terms of the $a_n$. Showing that these can all be solved is possible, but it’s easier to come around the side like this.
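For what it’s worth, those algebraic conditions can be solved term by term. Here’s a quick sketch in Python (the function name and truncation scheme are mine) using exact rational arithmetic: the product condition $a_0b_n+a_1b_{n-1}+\dots+a_nb_0=0$ for $n\geq1$ gives a recurrence for the $b_n$.

```python
from fractions import Fraction

def reciprocal_coeffs(a, n_terms):
    """First n_terms coefficients b of 1/f, where f has coefficients a, a[0] != 0.

    Solves a_0 b_n + a_1 b_{n-1} + ... + a_n b_0 = 0 for n >= 1 --
    exactly the infinite sequence of algebraic conditions from the
    multiplication formula."""
    b = [Fraction(1, 1) / a[0]]
    for n in range(1, n_terms):
        s = sum(a[k] * b[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# f(z) = 1 - z, so 1/f(z) = 1/(1-z) = 1 + z + z^2 + ...
print(reciprocal_coeffs([Fraction(1), Fraction(-1)], 6))  # six 1's
```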
Now that we can take powers of functions defined by power series, and define those powers by power series in the same radii, we’re all set to compose functions defined by power series!
Let’s say we have two power series expansions about $z_0=0$:
$$f(z)=\sum_{k=0}^\infty a_kz^k$$
within the radius $r$, and
$$g(z)=\sum_{n=0}^\infty b_nz^n$$
within the radius $s$.
Now let’s take a $z$ with $|z|<s$ and $\sum_{n=0}^\infty|b_n||z|^n<r$. Then we have a power series expansion for the composite:
$$f(g(z))=\sum_{n=0}^\infty c_nz^n$$
The coefficients $c_n$ are defined as follows: first, define $b_{n,k}$ to be the coefficient of $z^n$ in the expansion of $g(z)^k$, then we set
$$c_n=\sum_{k=0}^\infty a_kb_{n,k}$$
To show this, first note that the hypothesis on $z$ assures that
$$|g(z)|\leq\sum_{n=0}^\infty|b_n||z|^n<r$$
so we can write
$$f(g(z))=\sum_{k=0}^\infty a_kg(z)^k=\sum_{k=0}^\infty a_k\sum_{n=0}^\infty b_{n,k}z^n$$
If we are allowed to exchange the order of summation, then formally the result follows. To justify this (at least as well as we’ve been justifying such rearrangements recently) we need to show that
$$\sum_{k=0}^\infty|a_k|\sum_{n=0}^\infty|b_{n,k}||z|^n$$
converges. But remember that each of the coefficients $b_{n,k}$ is itself a finite sum, so we find
$$|b_{n,k}|\leq\sum_{m_1+\dots+m_k=n}|b_{m_1}|\cdots|b_{m_k}|$$
On the other hand, in parallel with our computation last time we find that
$$\sum_{n=0}^\infty\left(\sum_{m_1+\dots+m_k=n}|b_{m_1}|\cdots|b_{m_k}|\right)|z|^n=\left(\sum_{n=0}^\infty|b_n||z|^n\right)^k$$
So we find
$$\sum_{k=0}^\infty|a_k|\sum_{n=0}^\infty|b_{n,k}||z|^n\leq\sum_{k=0}^\infty|a_k|\left(\sum_{n=0}^\infty|b_n||z|^n\right)^k$$
and since $\sum_{n=0}^\infty|b_n||z|^n<r$ lies within the radius of convergence of $f$, the series on the right converges, so the series on the left must then converge.
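To see the double sum in action, here’s a sketch in Python (the truncation scheme and names are mine): it builds the $b_{n,k}$ as coefficients of powers of $g$ and assembles the $c_n$, using the fact that when $g(0)=0$ only finitely many $k$ contribute to each $c_n$.

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q, n_terms):
    """Multiply two truncated series, keeping terms below degree n_terms."""
    r = [Fraction(0)] * n_terms
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n_terms:
                r[i + j] += pi * qj
    return r

def compose(a, b, n_terms):
    """Coefficients c_n of f(g(z)): c_n = sum_k a_k b_{n,k}, where b_{n,k}
    is the z^n coefficient of g(z)^k.  With b[0] = 0 only k <= n contribute,
    so truncating at n_terms powers loses nothing below degree n_terms."""
    assert b[0] == 0
    c = [Fraction(0)] * n_terms
    g_pow = [Fraction(1)] + [Fraction(0)] * (n_terms - 1)   # g(z)^0 = 1
    for k in range(n_terms):
        if k < len(a):
            for n in range(n_terms):
                c[n] += a[k] * g_pow[n]
        g_pow = poly_mul(g_pow, b, n_terms)
    return c

# f(w) = exp(w) truncated, g(z) = 2z; the composite should be exp(2z),
# whose coefficients are 2^n / n!
a = [Fraction(1, factorial(k)) for k in range(6)]
b = [Fraction(0), Fraction(2)]
print(compose(a, b, 6))
```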
Formally, we defined the product of two power series to be the series you get when you multiply out all the terms and collect terms of the same degree. Specifically, consider the series $\sum_{n=0}^\infty a_nz^n$ and $\sum_{n=0}^\infty b_nz^n$. Their product will be the series $\sum_{n=0}^\infty c_nz^n$, where the coefficients $c_n$ are defined by
$$c_n=\sum_{k=0}^n a_kb_{n-k}$$
Now if the series converge within radii $r_1$ and $r_2$, respectively, it wouldn’t make sense for the product of the functions to be anything but whatever this converges to. But in what sense is this the case?
Let’s take a point $z$ inside both of the radii of convergence. Then we know that the series $\sum_{m=0}^\infty a_mz^m$ and $\sum_{n=0}^\infty b_nz^n$ both converge absolutely. We want to consider the product of these limits
$$\left(\sum_{m=0}^\infty a_mz^m\right)\left(\sum_{n=0}^\infty b_nz^n\right)$$
Since the first series converges, we’ll just take its limit as a constant and distribute it over the other:
$$\sum_{n=0}^\infty\left(\sum_{m=0}^\infty a_mz^m\right)b_nz^n$$
And now we’ll just distribute each $b_nz^n$ across the sum it appears with:
$$\sum_{n=0}^\infty\sum_{m=0}^\infty a_mb_nz^{m+n}$$
And now we’ll use the fact that all the series in sight converge absolutely to rearrange this sum, adding up all the terms of the same total degree at once, and pulling out factors of $z$:
$$\sum_{n=0}^\infty\left(\sum_{k=0}^n a_kb_{n-k}\right)z^n=\sum_{n=0}^\infty c_nz^n$$
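This convolution formula is easy to try on truncated coefficient lists; a minimal sketch in Python (names mine):

```python
def cauchy_product(a, b, n_terms):
    """c_n = sum_{k=0}^n a_k b_{n-k}: all the terms of total degree n at once."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

# (sum z^n)(sum z^n) should be sum (n+1) z^n, the expansion of 1/(1-z)^2
ones = [1] * 6
print(cauchy_product(ones, ones, 6))   # [1, 2, 3, 4, 5, 6]
```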
As a special case, we can work out powers of power series. Say that $f(z)=\sum_{n=0}^\infty a_nz^n$ within a radius $r$ of $0$. Then within the same radius of $0$ we have
$$f(z)^k=\sum_{n=0}^\infty a_{n,k}z^n$$
where the coefficients $a_{n,k}$ are defined by
$$a_{n,k}=\sum_{m_1+\dots+m_k=n}a_{m_1}\cdots a_{m_k}$$
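A sketch of the power formula via repeated products (names mine); the multinomial sum above is exactly what repeated convolution computes:

```python
from math import comb

def cauchy_product(a, b):
    """Coefficients of the product, collected by total degree (truncated)."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

def power(a, k):
    """Coefficients of f(z)^k by repeated multiplication."""
    result = [1] + [0] * (len(a) - 1)   # f(z)^0 = 1
    for _ in range(k):
        result = cauchy_product(result, a)
    return result

# (1/(1-z))^3 = sum C(n+2, 2) z^n: the triangular-ish numbers 1, 3, 6, 10, ...
ones = [1] * 6
print(power(ones, 3))   # [1, 3, 6, 10, 15, 21]
```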
Sorry for the delay. Grading.
Now we have power series expansions of functions around various points, and within various radii of convergence. We even have formulas to relate expansions about nearby points. But when we move from one point to a nearby point, the resulting series is only guaranteed to converge in a disk contained within the original disk. But then moving back to the original point we are only guaranteed convergence in an even smaller disk. Something seems amiss here.
Let’s look closely at the power series expansion about a given point $z_0$:
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n$$
converging for $|z-z_0|<r$. We know that this function has a derivative, which again has a power series expansion about $z_0$:
$$f'(z)=\sum_{n=1}^\infty na_n(z-z_0)^{n-1}$$
converging in the same radius. And so on, we find arbitrarily high derivatives
$$f^{(k)}(z)=\sum_{n=k}^\infty\frac{n!}{(n-k)!}a_n(z-z_0)^{n-k}$$
Now, we can specialize this by evaluating at the central point $z_0$ to find $f^{(k)}(z_0)=k!a_k$. That is, we have the formula for the power series coefficients:
$$a_k=\frac{f^{(k)}(z_0)}{k!}$$
This formula specifies the sequence of coefficients of a power series expansion of a function about a point uniquely in terms of the derivatives of the function at that point. That is, there is at most one power series expansion of any function about a given point.
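We can check this coefficient formula on a truncated series in Python (the helper name is mine), using the exponential series, where every derivative at the center is $1$:

```python
from fractions import Fraction
from math import factorial

def derive(a):
    """Termwise derivative: the n a_n become the coefficients of degree n-1."""
    return [n * a[n] for n in range(1, len(a))]

# exp(z) about 0 has a_n = 1/n!.  Differentiating k times and reading off the
# constant term (the value at the center) should give k! a_k.
a = [Fraction(1, factorial(n)) for n in range(8)]
for k in range(5):
    d = a
    for _ in range(k):
        d = derive(d)
    assert d[0] == factorial(k) * a[k]
print("a_k = f^(k)(z_0)/k! checks out for k = 0, ..., 4")
```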
So in our first example, of moving away from a point and back, the resulting series has the same coefficients we started with. Thus even though we were only assured that the series would converge in a much smaller disk, it actually converges in a larger disk than our formula guaranteed. In fact, this happens a lot: moving from one point to another we actually break new ground and “analytically continue” the function to a larger domain.
That is, we now have two overlapping disks, and each one contains points the other misses. Each disk has a power series expansion of a function. These expansions agree on the parts of the disks that overlap, so it doesn’t matter which rule we use to compute the function in that region. We thus have expanded the domain of our function by choosing different points about which to expand a power series.
The uniform convergence of a power series establishes that the function it represents must be continuous. Not only that, but it turns out that the limiting function must be differentiable.
A side note here: we define the derivative of a complex function by exactly the same limit of a difference quotient as before. There’s a lot to be said about derivatives of complex functions, but we’ll set the rest aside until later.
Now, to be specific: if the power series $\sum_{n=0}^\infty a_n(z-z_0)^n$ converges for $|z-z_0|<r$ to a function $f(z)$, then $f$ has a derivative $f'$, which itself has a power series expansion
$$f'(z)=\sum_{n=1}^\infty na_n(z-z_0)^{n-1}$$
which converges within the same radius $r$.
Given a point $z_1$ within $r$ of $z_0$, we can expand $f$ as a power series about $z_1$:
$$f(z)=\sum_{n=0}^\infty b_n(z-z_1)^n$$
convergent within some radius of $z_1$. Then for $z$ in this smaller disk of convergence we have
$$\frac{f(z)-f(z_1)}{z-z_1}=b_1+\sum_{n=2}^\infty b_n(z-z_1)^{n-1}$$
by manipulations we know to work for series. Then the series on the right must converge to a continuous function, and continuity tells us that each term with $n\geq2$ vanishes as $z$ approaches $z_1$. Thus $f'(z_1)$ exists and equals $b_1$. But our formula for $b_1$ tells us
$$f'(z_1)=b_1=\sum_{n=1}^\infty na_n(z_1-z_0)^{n-1}$$
Finally, we can apply the root test again. The terms of the derived series have norm $n|a_n||z-z_0|^{n-1}$, and so we consider
$$\sqrt[n]{n|a_n||z-z_0|^{n-1}}=\sqrt[n]{n}\,\sqrt[n]{|a_n|}\,|z-z_0|^{\frac{n-1}{n}}$$
Since the first radical expression $\sqrt[n]{n}$ goes to $1$, the limit superior is the same as in the original series for $f$: $|z-z_0|\limsup_{n\to\infty}\sqrt[n]{|a_n|}$. Thus the derived series has the same radius of convergence.
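That $\sqrt[n]{n}$ tends to $1$ is easy to see numerically:

```python
# the nth root of n creeps back down toward 1 as n grows
for n in (10, 1000, 100000):
    print(n, n ** (1 / n))
```

By $n=100000$ the root is already below $1.0002$.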
Notice now that we can apply the exact same reasoning to $f'$, and find that it has a derivative $f''$, which has a power series expansion
$$f''(z)=\sum_{n=2}^\infty n(n-1)a_n(z-z_0)^{n-2}$$
which again converges within the same radius. And so on, we determine that the limiting function of the power series has derivatives of arbitrarily large orders.
So we know that we can have two power series expansions of the same function about different points. How are they related? An important step in this direction is given by the following theorem.
Suppose that the power series $\sum_{n=0}^\infty a_nz^n$ converges for $|z|<r$, and that it represents the function $f(z)$ in some open subset $S$ of this disk. Then for every point $z_1\in S$ there is some open disk around $z_1$ of radius $r_1$ contained in $S$, in which $f$ has a power series expansion
$$f(z)=\sum_{k=0}^\infty b_k(z-z_1)^k\qquad\text{with}\qquad b_k=\sum_{n=k}^\infty\binom{n}{k}a_nz_1^{n-k}$$
The proof is almost straightforward. We expand
$$f(z)=\sum_{n=0}^\infty a_n\big((z-z_1)+z_1\big)^n=\sum_{n=0}^\infty a_n\sum_{k=0}^n\binom{n}{k}z_1^{n-k}(z-z_1)^k$$
Now we need to interchange the order of summation. Strictly speaking, we haven’t established a condition that will allow us to make this move. However, I hope you’ll find it plausible that if this double series converges absolutely, we can adjust the order of summations freely. Indeed, we’ve seen examples of other rearrangements that all go through as soon as the convergence is absolute.
Now we consider the absolute values
$$\sum_{n=0}^\infty|a_n|\sum_{k=0}^n\binom{n}{k}|z_1|^{n-k}|z-z_1|^k=\sum_{n=0}^\infty|a_n|\big(|z_1|+|z-z_1|\big)^n=\sum_{n=0}^\infty|a_n|\rho^n$$
where we set $\rho=|z_1|+|z-z_1|$. But then $\rho<|z_1|+r_1\leq r$, where the last inequality holds because the disk around $z_1$ of radius $r_1$ fits within $S$, which fits within the disk of radius $r$ around $0$. And so this series of absolute values must converge, and we’ll take it on faith for the moment (to be shored up when we attack double series more thoroughly) that we can now interchange the order of summations.
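The recentering formula $b_k=\sum_{n\geq k}\binom{n}{k}a_nz_1^{n-k}$ can be tried out numerically; here is a sketch (the truncation level and names are mine) using the geometric series:

```python
from math import comb, isclose

def recenter_coeff(a, z1, k, n_max=200):
    """b_k = sum_{n=k}^{n_max} C(n,k) a(n) z1^{n-k}, a truncation of the
    recentering sum; a is a function giving the original coefficients."""
    return sum(comb(n, k) * a(n) * z1 ** (n - k) for n in range(k, n_max))

# Recenter 1/(1-z) = sum z^n at z1 = 1/2.  There
# 1/(1-z) = 2/(1 - 2(z - 1/2)), so b_k should be 2^{k+1}.
for k in range(5):
    print(k, recenter_coeff(lambda n: 1.0, 0.5, k), 2 ** (k + 1))
```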
This result allows us to recenter our power series expansions, but it only assures that the resulting series will converge in a disk which is contained within the original disk of convergence, so we haven’t necessarily gotten anything new. Yet.
Up to this point we’ve been talking about power series like $\sum_{n=0}^\infty a_nz^n$, where “power” refers to powers of $z$. This led us to show that when we evaluate a power series, the result converges in a disk centered at $0$. But what’s so special about zero?
Indeed, we could just as well write a series like $\sum_{n=0}^\infty a_n(z-z_0)^n$ for any point $z_0$. The result is just like picking up our original power series and carrying it over a bit. In particular, it still converges — and within the same radius — but now in a disk centered at $z_0$.
So when we have an equation like
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n$$
where the given series converges within the radius $r$, we say that the series “represents” $f$ in the disk of convergence. Alternately, we call the series itself a “power series expansion” of $f$ about $z_0$.
For example, consider the series
$$\sum_{n=0}^\infty\frac{(-1)^n}{2^{n+1}}(z-1)^n$$
A simple application of the root test tells us that this series converges in the disk $|z-1|<2$, of radius $2$ about the point $1$. Some algebra shows us that if we multiply this series by $1+z$ we get $1$. Thus the series is a power series expansion of $\frac{1}{1+z}$ about $1$.
This new power series expansion actually subsumes the old geometric one, $\frac{1}{1+z}=\sum_{n=0}^\infty(-1)^nz^n$, since every point within $1$ of $0$ is also within $2$ of $1$. But sometimes disks overlap only partly. Then each expansion describes the behavior of the function at values of $z$ that the other one cannot. And of course no power series expansion can describe what happens at a discontinuity.
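Assuming the expansion is the one with coefficients $(-1)^n/2^{n+1}$ about $1$ (my reading of the example), the multiplication by $1+z$ can be checked on truncations. In the shifted variable $w=z-1$ the product telescopes:

```python
from fractions import Fraction

def cauchy_product(a, b):
    """Coefficients of the product, collected by total degree (truncated)."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

# In the shifted variable w = z - 1 the series has coefficients
# (-1)^n / 2^{n+1}, and 1 + z = 2 + w has coefficients [2, 1, 0, ...].
N = 8
series = [Fraction((-1) ** n, 2 ** (n + 1)) for n in range(N)]
two_plus_w = [Fraction(2), Fraction(1)] + [Fraction(0)] * (N - 2)
print(cauchy_product(series, two_plus_w))   # constant term 1, all else 0
```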
We’ll just consider our power series to be in $\mathbb{C}[[z]]$, because if we have a real power series we can consider each coefficient as a complex number instead. Now we take a complex number $z_0$ and try to evaluate the power series at this point. We get a series of complex numbers
$$\sum_{n=0}^\infty a_nz_0^n$$
by evaluating each polynomial truncation of the power series at $z_0$ and taking the limit of the sequence. For some $z_0$ this series may converge and for others it may not. The amazing fact is, though, that we can always draw a circle in the complex plane — $|z|=R$ — within which the series always converges absolutely, and outside of which it always diverges. We’ll say nothing in general about whether it converges on the circle, though.
The tool here is the root test. We take the $n$th root of the size of the $n$th term in the series to find $\sqrt[n]{|a_nz_0^n|}=\sqrt[n]{|a_n|}\,|z_0|$. Then we can pull the $z_0$-dependence completely out of the limit superior to write
$$\limsup_{n\to\infty}\sqrt[n]{|a_nz_0^n|}=|z_0|\limsup_{n\to\infty}\sqrt[n]{|a_n|}$$
The root test tells us that if this is less than $1$ the series will converge absolutely, while if it’s greater than $1$ the series will diverge.
So let’s define
$$\frac{1}{R}=\limsup_{n\to\infty}\sqrt[n]{|a_n|}$$
The root test now says that if $|z_0|<R$ we have absolute convergence, while if $|z_0|>R$ the series diverges. Thus $R$ is the radius of convergence that we seek.
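We can estimate this numerically, though only heuristically, since a single large $n$ has to stand in for the limit superior (the function name is mine):

```python
def radius_estimate(a, n=500):
    """Crude estimate of R = 1 / limsup |a_n|^{1/n}, sampling one large n;
    a heuristic sketch only -- the true definition needs the limit superior."""
    return 1 / (abs(a(n)) ** (1.0 / n))

print(radius_estimate(lambda n: 2.0 ** n))   # ~0.5, the radius of sum (2z)^n
print(radius_estimate(lambda n: 1.0))        # 1.0, the geometric series
```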
Now there are examples of series with all sorts of behavior on the boundary of this disk. The series $\sum_{n=0}^\infty z^n$ has radius of convergence $1$ (as we can tell from the above procedure), but it doesn’t converge anywhere on the boundary circle. On the other hand, the series $\sum_{n=1}^\infty\frac{z^n}{n^2}$ has the same radius of convergence, but it converges everywhere on the boundary circle. And, just to be perverse, the series $\sum_{n=1}^\infty\frac{z^n}{n}$ has the same radius of convergence but converges everywhere on the boundary but the single point $z=1$.
First we have to get down an explicit condition for convergence of complex sequences. In any metric space we can say that the sequence $(z_n)$ converges to a limit $z$ if for every $\epsilon>0$ there is some $N$ so that $d(z_n,z)<\epsilon$ for all $n\geq N$. Of course, here we’ll be using our complex distance function $d(z,w)=|z-w|$. Now we just have to replace any reference to real absolute values with complex absolute values and we should be good.
Cauchy’s condition comes in to say that the series $\sum_{k=0}^\infty z_k$ converges if and only if for every $\epsilon>0$ there is an $N$ so that for all $n\geq m\geq N$ the sum $\left|\sum_{k=m}^n z_k\right|<\epsilon$.
Similarly, we say that the series $\sum_{k=0}^\infty z_k$ is absolutely convergent if the series $\sum_{k=0}^\infty|z_k|$ is convergent, and this implies that the original series converges.
Since the complex norm is multiplicative, everything for the geometric series goes through again: $\sum_{n=0}^\infty z^n=\frac{1}{1-z}$ if $|z|<1$, and it diverges if $|z|>1$. The case where $|z|=1$ is more complicated, but since the terms don’t go to zero there, the series can be shown to diverge as well.
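A quick numerical check of the complex geometric series, at a point chosen inside the unit disk:

```python
# partial sums of the geometric series at a point inside the unit disk
z = 0.3 + 0.4j                          # |z| = 0.5 < 1
partial = sum(z ** k for k in range(60))
print(partial)
print(1 / (1 - z))                      # the two agree to float precision
```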
The ratio and root tests are basically proven by comparing series of norms with geometric series. Since once we take the norm we’re dealing with real numbers, and since the norm is multiplicative, we find that the proofs go through again.
Now that we’ve got some topological fields to use as examples, let’s focus in on power series over $\mathbb{R}$ or $\mathbb{C}$.
Remember that a power series is like an infinite polynomial. In fact, we introduced a topology so we could see in any power series a sequence of polynomials that converged to it. To be explicit, we write the series as a limit
$$\sum_{k=0}^\infty c_kX^k=\lim_{n\to\infty}\sum_{k=0}^n c_kX^k$$
where the $c_k$ are coefficients selected from our base field.
Now evaluation of power series is specified by two conditions: it should agree with evaluation of polynomials when we’ve got a power series that cuts off after a finite number of terms, and it should be continuous.
The first condition says that each of our approximating polynomials should evaluate just the same as it did before. That is, if we cut off after the degree-$n$ term and evaluate at the point $x$ in the base field, we should get $\sum_{k=0}^n c_kx^k$.
The second condition says that evaluation should preserve limits. And we’ve got a sequence right here: the $n$th term is the evaluation of the $n$th approximating polynomial! So the power series should evaluate to the limit
$$\lim_{n\to\infty}\sum_{k=0}^n c_kx^k$$
If this limit exists, that is. And that’s why we need a topological field to make sense of evaluations.
Now we’re back in the realm of infinite series, taking the limit of a sequence of partial sums. The series in question has as its $k$th term the evaluated monomial $c_kx^k$. We can start using our old techniques to sum these series.
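As a sketch of this evaluation procedure in Python (names mine), using the exponential series as the example:

```python
from math import factorial, exp

def evaluate(coeffs, x):
    """Sum the evaluated monomials c_k x^k of a truncated power series."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

# the exponential series, cut off after degree 20, evaluated at x = 1
coeffs = [1 / factorial(k) for k in range(21)]
print(evaluate(coeffs, 1.0), exp(1.0))
```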