What is it that makes the exponential what it is? We defined it as the inverse of the logarithm, and this is defined by integrating $\frac{1}{x}$. But the important thing we immediately showed is that it satisfies the exponential property: $\exp(x+y)=\exp(x)\exp(y)$.
But now we know the Taylor series of the exponential function at $0$:

$$\exp(x)=\sum\limits_{k=0}^\infty\frac{x^k}{k!}$$
In fact, we can work out the series around any other point $a$ the same way. Since all the derivatives are the exponential function back again, we find

$$\exp(x)=\sum\limits_{k=0}^\infty\frac{\exp(a)}{k!}(x-a)^k$$
Or we could also write this by expanding around $x$ and writing the relation as a series in the displacement $y$:

$$\exp(x+y)=\sum\limits_{k=0}^\infty\frac{\exp(x)}{k!}y^k$$
Then we can expand out the $\exp(x)$ part as a series itself:

$$\exp(x+y)=\sum\limits_{k=0}^\infty\frac{1}{k!}\left(\sum\limits_{j=0}^\infty\frac{x^j}{j!}\right)y^k$$
But then (with our usual handwaving about rearranging series) we can pull out the inner series since it doesn’t depend on the outer summation variable at all:

$$\exp(x+y)=\left(\sum\limits_{j=0}^\infty\frac{x^j}{j!}\right)\left(\sum\limits_{k=0}^\infty\frac{y^k}{k!}\right)$$
And these series are just the series defining $\exp(x)$ and $\exp(y)$, respectively. That is, we have shown the exponential property $\exp(x+y)=\exp(x)\exp(y)$ directly from the series expansion.
That is, whatever function the power series $\sum_{k=0}^\infty\frac{x^k}{k!}$ defines, it satisfies the exponential property. In a sense, the fact that the inverse of this function turns out to be the logarithm is a big coincidence. But it’s a coincidence we’ll tease out tomorrow.
For now I’ll note that this important exponential property follows directly from the series. And we can write down the series anywhere we can add, subtract, multiply, divide (at least by integers), and talk about convergence. That is, the exponential series makes sense in any topological ring of characteristic zero. For example, we can define the exponential of complex numbers by the series

$$\exp(z)=\sum\limits_{k=0}^\infty\frac{z^k}{k!}$$
Finally, this series will have the exponential property as above, so long as the ring is commutative (like it is for the complex numbers). In more general rings there’s a generalized version of the exponential property, but I’ll leave that until we eventually need to use it.
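As a quick numerical sanity check (a Python sketch of my own, not part of the argument), a truncated version of the series already exhibits the exponential property to rounding error:

```python
import math

def exp_series(x, terms=40):
    """Evaluate the exponential power series sum_k x^k / k!, truncated."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)  # turns x^k/k! into x^(k+1)/(k+1)!
    return total

# The exponential property exp(x+y) = exp(x)exp(y), checked numerically:
x, y = 0.7, -1.3
lhs = exp_series(x + y)
rhs = exp_series(x) * exp_series(y)
print(abs(lhs - rhs) < 1e-12)  # True
```

Forty terms is overkill for arguments of this size, but it makes the truncation error negligible next to floating-point roundoff.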
Sorry for the lack of a post yesterday, but I was really tired after this weekend.
So what functions might we try finding a power series expansion for? Polynomials would be boring, because they already are power series that cut off after a finite number of terms. What other interesting functions do we have? Well, there’s the exponential function $\exp$, all of whose derivatives are $\exp$ back again, and which satisfies $\exp(0)=1$.
So we can construct the Taylor series at $0$. The coefficient formula tells us

$$a_k=\frac{\exp^{(k)}(0)}{k!}=\frac{\exp(0)}{k!}=\frac{1}{k!}$$
which gives us the series

$$\sum\limits_{k=0}^\infty\frac{x^k}{k!}$$
The ratio of the sizes of successive terms is $\frac{|x|^{k+1}/(k+1)!}{|x|^k/k!}=\frac{|x|}{k+1}$, which goes to $0$ as $k$ grows. Thus the series converges absolutely no matter what value we pick for $x$. The radius of convergence is thus infinite, and the series converges everywhere.
But does this series converge back to the exponential function? Taylor’s Theorem tells us that

$$\exp(x)=\sum\limits_{k=0}^n\frac{x^k}{k!}+R_n(x)$$
where there is some $\xi$ between $0$ and $x$ so that $R_n(x)=\frac{\exp(\xi)}{(n+1)!}x^{n+1}$.
Now the derivative of $\exp$ is $\exp$ again, and $\exp$ takes only positive values. And so we know that $\exp$ is everywhere increasing. What does this mean? Well, if $x\geq0$ then $\xi\leq x$, and so $\exp(\xi)\leq\exp(x)$. On the other hand if $x\leq0$ then $\xi\leq0$, and so $\exp(\xi)\leq\exp(0)=1$. Either way, we have some uniform bound $M$ on $\exp(\xi)$ no matter what the $\xi$ are.
So now we know $|R_n(x)|\leq M\frac{|x|^{n+1}}{(n+1)!}$. And it’s not too hard to see (though I can’t seem to find it in my archives) that $(n+1)!$ grows much faster than $|x|^{n+1}$ for any fixed $x$. Basically, the idea is that each time you’re multiplying by $\frac{|x|}{n+1}$, which eventually gets less than $\frac{1}{2}$ and stays less than one. The upshot is that the remainder term must converge to $0$ for any fixed $x$, and so the series indeed converges to the function $\exp(x)$.
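We can watch this bound collapse numerically. A quick Python sketch (my own illustration, with $x=5$ standing in for a "large" fixed argument):

```python
import math

# The remainder bound M * |x|^(n+1) / (n+1)! goes to zero because the
# factorial outruns any fixed power: each step multiplies the previous
# term by |x|/(n+1), which drops below 1 once n+1 exceeds |x|.
x = 5.0
bounds = [x ** (n + 1) / math.factorial(n + 1) for n in range(30)]
print(bounds[4], bounds[29])  # grows at first, then collapses toward zero
```

The terms grow while $n+1<|x|$, then decay faster than any geometric series.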
Now that we know how to compose power series, we can invert them. But against expectations I’m talking about multiplicative inverses instead of compositional ones.
More specifically, say we have a power series expansion

$$f(x)=\sum\limits_{k=0}^\infty a_kx^k$$
within the radius $r$, and such that $f(0)=a_0\neq0$. Then there is some radius within which the reciprocal has a power series expansion

$$\frac{1}{f(x)}=\sum\limits_{k=0}^\infty b_kx^k$$
In particular, we have $b_0=\frac{1}{a_0}$.
In the proof we may assume that $a_0=1$ (we can just divide the series through by $a_0$), and so $b_0=1$. We can set

$$g(x)=1-f(x)=-\sum\limits_{k=1}^\infty a_kx^k$$
within the radius $r$. Since we know that $g(0)=0$, continuity tells us that there’s some $\delta$ so that $|x|<\delta$ implies $\sum_{k=1}^\infty|a_k||x|^k<1$, and so in particular $|g(x)|<1$.
Now we set

$$h(y)=\frac{1}{1-y}=\sum\limits_{n=0}^\infty y^n$$
And then we can find a power series expansion of $\frac{1}{f(x)}=\frac{1}{1-g(x)}=h(g(x))$ by composing the two series.
It’s interesting to note that you might expect a reciprocal formula to follow from the multiplication formula. Set the product of $f(x)$ and an undetermined series $\sum b_kx^k$ to the constant power series $1$, and get an infinite sequence of algebraic conditions determining the $b_k$ in terms of the $a_k$. Showing that these can all be solved is possible, but it’s easier to come around the side like this.
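In fact the undetermined-coefficients route is easy to carry out mechanically. Here’s a Python sketch (an illustration of my own, using exact rational arithmetic): the conditions $\sum_ja_jb_{k-j}=0$ for $k\geq1$ solve recursively for each $b_k$ in turn.

```python
from fractions import Fraction

def reciprocal_coeffs(a, n_terms):
    """Coefficients b of 1/f, where f has power series coefficients a
    with a[0] != 0, from the conditions sum_j a[j]*b[k-j] = [k == 0]."""
    b = [Fraction(1, 1) / a[0]]
    for k in range(1, n_terms):
        # Solve a[0]*b[k] = -sum_{j>=1} a[j]*b[k-j] for b[k].
        s = sum(a[j] * b[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# f(x) = 1 - x has reciprocal 1/(1-x) = 1 + x + x^2 + ...
a = [Fraction(1), Fraction(-1)]
print(reciprocal_coeffs(a, 5))  # all coefficients are 1
```

The recursion shows the conditions are always solvable when $a_0\neq0$; what it doesn’t show is where the resulting series converges, which is why the compositional argument above earns its keep.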
Now that we can take powers of functions defined by power series, and define those powers by power series in the same radii, we’re all set to compose functions defined by power series!
Let’s say we have two power series expansions about $0$:

$$f(x)=\sum\limits_{k=0}^\infty a_kx^k$$
within the radius $r$, and

$$g(x)=\sum\limits_{k=0}^\infty b_kx^k$$
within the radius $s$.
Now let’s take an $x$ with $|x|<s$ and $\sum_{k=0}^\infty|b_k||x|^k<r$. Then we have a power series expansion for the composite:

$$f(g(x))=\sum\limits_{n=0}^\infty c_nx^n$$
The coefficients $c_n$ are defined as follows: first, define $b_k(n)$ to be the coefficient of $x^n$ in the expansion of $g(x)^k$; then we set

$$c_n=\sum\limits_{k=0}^\infty a_kb_k(n)$$
To show this, first note that the hypothesis on $x$ assures that $|g(x)|\leq\sum_{k=0}^\infty|b_k||x|^k<r$, so we can write

$$f(g(x))=\sum\limits_{k=0}^\infty a_kg(x)^k=\sum\limits_{k=0}^\infty a_k\sum\limits_{n=0}^\infty b_k(n)x^n$$
If we are allowed to exchange the order of summation, then formally the result follows. To justify this (at least as well as we’ve been justifying such rearrangements recently) we need to show that

$$\sum\limits_{k=0}^\infty|a_k|\sum\limits_{n=0}^\infty|b_k(n)||x|^n$$
converges. But remember that each of the coefficients $b_k(n)$ is itself a finite sum of products of the $b_j$, so we find $|b_k(n)|\leq B_k(n)$, where $B_k(n)$ is the coefficient of $x^n$ in the expansion of $\left(\sum_{j=0}^\infty|b_j|x^j\right)^k$.
On the other hand, in parallel with our computation last time we find that

$$\sum\limits_{n=0}^\infty B_k(n)|x|^n=\left(\sum\limits_{n=0}^\infty|b_n||x|^n\right)^k$$
So we find

$$\sum\limits_{k=0}^\infty|a_k|\sum\limits_{n=0}^\infty|b_k(n)||x|^n\leq\sum\limits_{k=0}^\infty|a_k|\left(\sum\limits_{n=0}^\infty|b_n||x|^n\right)^k$$
which must then converge, since $\sum_{n=0}^\infty|b_n||x|^n<r$ puts it within the radius of absolute convergence of the series for $f$.
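The coefficient formula $c_n=\sum_ka_kb_k(n)$ is concrete enough to compute. Here’s a Python sketch (my own illustration, restricted to the case $b_0=0$ so that each $c_n$ is a finite sum and truncation is exact):

```python
from fractions import Fraction

def series_mul(p, q, n):
    """First n coefficients of the product of power series p and q."""
    return [sum(p[j] * q[k - j] for j in range(k + 1)
                if j < len(p) and k - j < len(q))
            for k in range(n)]

def compose(a, b, n):
    """First n coefficients of f(g(x)), where f, g have coefficient
    lists a, b, via c_n = sum_k a_k * (coefficient of x^n in g^k)."""
    assert b[0] == 0  # with g(0) = 0, only finitely many k contribute
    c = [Fraction(0)] * n
    g_pow = [Fraction(1)] + [Fraction(0)] * (n - 1)  # g^0 = 1
    for k in range(min(len(a), n)):
        for m in range(n):
            c[m] += a[k] * g_pow[m]
        g_pow = series_mul(g_pow, b, n)  # step up to g^(k+1)
    return c

# f(y) = 1/(1-y) = sum y^k and g(x) = x/(1-x) = x + x^2 + ... compose
# to f(g(x)) = (1-x)/(1-2x) = 1 + x + 2x^2 + 4x^3 + 8x^4 + ...
a = [Fraction(1)] * 6
b = [Fraction(0)] + [Fraction(1)] * 5
print(compose(a, b, 6))  # coefficients 1, 1, 2, 4, 8, 16
```

With $b_0\neq0$ each $c_n$ becomes an infinite sum over $k$, which is exactly why the absolute-convergence argument above is needed.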
Formally, we defined the product of two power series to be the series you get when you multiply out all the terms and collect those of the same degree. Specifically, consider the series $\sum_{k=0}^\infty a_kx^k$ and $\sum_{k=0}^\infty b_kx^k$. Their product will be the series $\sum_{k=0}^\infty c_kx^k$, where the coefficients are defined by

$$c_k=\sum\limits_{j=0}^ka_jb_{k-j}$$
Now if the series converge within radii and , respectively, it wouldn’t make sense for the product of the functions to be anything but whatever this converges to. But in what sense is this the case?
Let’s take a point $x$ inside both of the radii of convergence. Then we know that the series $\sum a_kx^k$ and $\sum b_kx^k$ both converge absolutely. We want to consider the product of these limits

$$\left(\sum\limits_{j=0}^\infty a_jx^j\right)\left(\sum\limits_{k=0}^\infty b_kx^k\right)$$
Since the first series converges, we’ll just take its limit as a constant and distribute it over the other:

$$\sum\limits_{k=0}^\infty\left(\sum\limits_{j=0}^\infty a_jx^j\right)b_kx^k$$
And now we’ll just distribute each $b_kx^k$ across the sum it appears with:

$$\sum\limits_{k=0}^\infty\sum\limits_{j=0}^\infty a_jb_kx^{j+k}$$
And now we’ll use the fact that all the series in sight converge absolutely to rearrange this sum, adding up all the terms of the same total degree at once, and pull out factors of $x$:

$$\sum\limits_{n=0}^\infty\left(\sum\limits_{j=0}^na_jb_{n-j}\right)x^n=\sum\limits_{n=0}^\infty c_nx^n$$
As a special case, we can work out powers of power series. Say that $f(x)=\sum_{k=0}^\infty a_kx^k$ within a radius of $r$. Then within the same radius we have

$$f(x)^n=\sum\limits_{k=0}^\infty c_kx^k$$
where the coefficients are defined by

$$c_k=\sum\limits_{k_1+\dots+k_n=k}a_{k_1}a_{k_2}\cdots a_{k_n}$$

with the sum running over $n$-tuples of nonnegative integers adding up to $k$.
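The convolution formula is one line of code. A Python sketch (my own, using exact rationals), squaring the geometric series:

```python
from fractions import Fraction

def cauchy_product(a, b, n):
    """First n coefficients c_k = sum_{j=0}^{k} a_j * b_{k-j} of the
    product of two power series with coefficient lists a and b."""
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

# Squaring the geometric series 1/(1-x) = sum x^k should give
# 1/(1-x)^2 = sum (k+1) x^k:
ones = [Fraction(1)] * 6
print(cauchy_product(ones, ones, 6))  # coefficients 1, 2, 3, 4, 5, 6
```

This matches the known expansion $\frac{1}{(1-x)^2}=\sum_{k=0}^\infty(k+1)x^k$, as the degree-by-degree collection above says it must.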
Sorry for the delay. Grading.
Now we have power series expansions of functions around various points, and within various radii of convergence. We even have formulas to relate expansions about nearby points. But when we move from one point to a nearby point, the resulting series is only guaranteed to converge in a disk contained within the original disk. But then moving back to the original point we are only guaranteed convergence in an even smaller disk. Something seems amiss here.
Let’s look closely at the power series expansion about a given point $x_0$:

$$f(x)=\sum\limits_{k=0}^\infty a_k(x-x_0)^k$$
converging for $|x-x_0|<r$. We know that this function has a derivative, which again has a power series expansion about $x_0$:

$$f'(x)=\sum\limits_{k=1}^\infty ka_k(x-x_0)^{k-1}$$
converging in the same radius. And so on, we find arbitrarily high derivatives

$$f^{(n)}(x)=\sum\limits_{k=n}^\infty\frac{k!}{(k-n)!}a_k(x-x_0)^{k-n}$$
Now, we can specialize this by evaluating at the central point $x_0$ to find $f^{(n)}(x_0)=n!a_n$. That is, we have the formula for the power series coefficients:

$$a_n=\frac{f^{(n)}(x_0)}{n!}$$
This formula specifies the sequence of coefficients of a power series expansion of a function about a point uniquely in terms of the derivatives of the function at that point. That is, there is at most one power series expansion of any function about a given point.
So in our first example, of moving away from a point and back, the resulting series has the same coefficients we started with. Thus even though we were only assured that the series would converge in a much smaller disk, it actually converges in a larger disk than our formula guaranteed. In fact, this happens a lot: moving from one point to another we actually break new ground and “analytically continue” the function to a larger domain.
That is, we now have two overlapping disks, and each one contains points the other misses. Each disk has a power series expansion of a function. These expansions agree on the parts of the disks that overlap, so it doesn’t matter which rule we use to compute the function in that region. We thus have expanded the domain of our function by choosing different points about which to expand a power series.
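A quick numerical illustration of this (a Python sketch of my own, using the expansion of $\frac{1}{1-x}$ about $-\frac{1}{2}$, whose coefficients work out to $\left(\frac{2}{3}\right)^{k+1}$):

```python
# The recentred geometric series reaches points the original disk
# misses: x = -1.2 lies outside |x| < 1, where sum x^k diverges, but
# inside |x + 1/2| < 3/2, where the expansion about -1/2 still
# converges to 1/(1-x).
x = -1.2
about_half = sum((2 / 3) ** (k + 1) * (x + 0.5) ** k for k in range(400))
print(abs(about_half - 1 / (1 - x)) < 1e-12)  # True
```

The original series is useless at this point, but the recentred one computes the function just fine: analytic continuation in action.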
The uniform convergence of a power series establishes that the function it represents must be continuous. Not only that, but it turns out that the limiting function must be differentiable.
A side note here: we define the derivative of a complex function by exactly the same limit of a difference quotient as before. There’s a lot to be said about derivatives of complex functions, but we’ll set the rest aside until later.
Now, to be specific: if the power series $\sum_{k=0}^\infty a_k(x-x_0)^k$ converges for $|x-x_0|<r$ to a function $f(x)$, then $f$ has a derivative $f'$, which itself has a power series expansion

$$f'(x)=\sum\limits_{k=1}^\infty ka_k(x-x_0)^{k-1}$$
which converges within the same radius $r$.
Given a point $x_1$ within $r$ of $x_0$, we can expand $f$ as a power series about $x_1$:

$$f(x)=\sum\limits_{k=0}^\infty b_k(x-x_1)^k$$
convergent within some radius of $x_1$. Then for $x$ in this smaller disk of convergence we have

$$\frac{f(x)-f(x_1)}{x-x_1}=b_1+\sum\limits_{k=2}^\infty b_k(x-x_1)^{k-1}$$
by manipulations we know to work for series. Then the series on the right must converge to a continuous function, and continuity tells us that every term after $b_1$ vanishes as $x$ approaches $x_1$. Thus $f'(x_1)$ exists and equals $b_1$. But our formula for the recentered coefficient $b_1$ tells us

$$f'(x_1)=b_1=\sum\limits_{k=1}^\infty ka_k(x_1-x_0)^{k-1}$$
Finally, we can apply the root test again. The terms are now $ka_k(x-x_0)^{k-1}$, and we compute

$$\limsup\limits_{k\to\infty}\sqrt[k]{k|a_k|}=\limsup\limits_{k\to\infty}\sqrt[k]{k}\sqrt[k]{|a_k|}$$

Since the first radical expression $\sqrt[k]{k}$ goes to $1$, the limit superior is the same as in the original series for $f$: $\frac{1}{r}$. Thus the derived series has the same radius of convergence.
Notice now that we can apply the exact same reasoning to $f'$, and find that it has a derivative $f''$, which has a power series expansion

$$f''(x)=\sum\limits_{k=2}^\infty k(k-1)a_k(x-x_0)^{k-2}$$
which again converges within the same radius. And so on, we determine that the limiting function of the power series has derivatives of arbitrarily large orders.
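Term-by-term differentiation is easy to watch in action. A Python sketch (an illustration of my own): the exponential series is its own derivative, so shifting coefficients by $a_k\mapsto ka_k$ should give back the same list.

```python
import math

def derive_coeffs(a):
    """Coefficients of f' given coefficients a of f about the same
    point: k*a_k becomes the (k-1)-st coefficient of the new series."""
    return [k * a[k] for k in range(1, len(a))]

# The exponential series has a_k = 1/k!, and differentiating it
# term-by-term reproduces the very same coefficient list:
a = [1.0 / math.factorial(k) for k in range(10)]
d = derive_coeffs(a)
print(all(abs(a[k] - d[k]) < 1e-12 for k in range(9)))  # True
```

This is just $k\cdot\frac{1}{k!}=\frac{1}{(k-1)!}$, seen numerically.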
So we know that we can have two power series expansions of the same function about different points. How are they related? An important step in this direction is given by the following theorem.
Suppose that the power series $\sum_{k=0}^\infty a_k(x-x_0)^k$ converges for $|x-x_0|<r$, and that it represents the function $f(x)$ in this disk. Then for every point $x_1$ with $|x_1-x_0|<r$ there is some open disk around $x_1$, of radius $r-|x_1-x_0|$ and contained in the original disk, in which $f$ has a power series expansion

$$f(x)=\sum\limits_{k=0}^\infty b_k(x-x_1)^k\qquad\textrm{where}\qquad b_k=\sum\limits_{n=k}^\infty\binom{n}{k}a_n(x_1-x_0)^{n-k}$$
The proof is almost straightforward. We expand

$$f(x)=\sum\limits_{n=0}^\infty a_n\left((x-x_1)+(x_1-x_0)\right)^n=\sum\limits_{n=0}^\infty a_n\sum\limits_{k=0}^n\binom{n}{k}(x_1-x_0)^{n-k}(x-x_1)^k$$
Now we need to interchange the order of summation. Strictly speaking, we haven’t established a condition that will allow us to make this move. However, I hope you’ll find it plausible that if this double series converges absolutely, we can adjust the order of summations freely. Indeed, we’ve seen examples of other rearrangements that all go through as soon as the convergence is absolute.
Now we consider the absolute values

$$\sum\limits_{n=0}^\infty|a_n|\sum\limits_{k=0}^n\binom{n}{k}|x_1-x_0|^{n-k}|x-x_1|^k=\sum\limits_{n=0}^\infty|a_n|\left(|x-x_1|+|x_1-x_0|\right)^n=\sum\limits_{n=0}^\infty|a_n|\rho^n$$
where we set $\rho=|x-x_1|+|x_1-x_0|$. But then $\rho<r$, where the last inequality holds because $x$ lies in the disk around $x_1$ of radius $r-|x_1-x_0|$, which fits within the disk of radius $r$ around $x_0$. And so this series of absolute values must converge, and we’ll take it on faith for the moment (to be shored up when we attack double series more thoroughly) that we can now interchange the order of summations.
This result allows us to recenter our power series expansions, but it only assures that the resulting series will converge in a disk which is contained within the original disk of convergence, so we haven’t necessarily gotten anything new. Yet.
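The recentering formula can be checked numerically. A Python sketch (my own illustration) applied to the geometric series $\frac{1}{1-x}=\sum x^n$, so $a_n=1$ and $x_0=0$:

```python
import math

# Recentre the geometric series at x1 = -1/2 via the formula
# b_k = sum_{n >= k} C(n, k) a_n (x1 - x0)^(n - k).
x1 = -0.5
N = 100  # truncation of the inner sum; its terms shrink geometrically
b = [sum(math.comb(n, k) * x1 ** (n - k) for n in range(k, N))
     for k in range(5)]
# These should match b_k = (2/3)^(k+1), the coefficients of the
# expansion of 1/(1-x) about -1/2:
print(all(abs(b[k] - (2 / 3) ** (k + 1)) < 1e-12 for k in range(5)))  # True
```

The target values come from $\sum_{n\geq k}\binom{n}{k}t^{n-k}=\frac{1}{(1-t)^{k+1}}$ with $t=-\frac{1}{2}$.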
Up to this point we’ve been talking about power series like $\sum_{k=0}^\infty a_kx^k$, where “power” refers to powers of $x$. This led us to show that when we evaluate a power series, the result converges in a disk centered at $0$. But what’s so special about zero?
Indeed, we could just as well write a series like $\sum_{k=0}^\infty a_k(x-x_0)^k$ for any point $x_0$. The result is just like picking up our original power series and carrying it over a bit. In particular, it still converges, and within the same radius, but now in a disk centered at $x_0$.
So when we have an equation like $f(x)=\sum_{k=0}^\infty a_k(x-x_0)^k$, where the given series converges within the radius $r$, we say that the series “represents” $f$ in the disk of convergence. Alternately, we call the series itself a “power series expansion” of $f$ about $x_0$.
For example, consider the series

$$\sum\limits_{k=0}^\infty\frac{2^{k+1}}{3^{k+1}}\left(x+\frac{1}{2}\right)^k$$

A simple application of the root test tells us that this series converges in the disk $\left|x+\frac{1}{2}\right|<\frac{3}{2}$, of radius $\frac{3}{2}$ about the point $-\frac{1}{2}$. Some algebra shows us that if we multiply this series by $1-x$ we get $1$. Thus the series is a power series expansion of $\frac{1}{1-x}$ about $-\frac{1}{2}$.
This new power series expansion actually subsumes the familiar geometric series expansion $\frac{1}{1-x}=\sum_{k=0}^\infty x^k$ about $0$, since every point within $1$ of $0$ is also within $\frac{3}{2}$ of $-\frac{1}{2}$. But sometimes disks overlap only partly. Then each expansion describes the behavior of the function at values of $x$ that the other one cannot. And of course no power series expansion can describe what happens at a discontinuity: here, neither disk can reach past the point $x=1$, where the function blows up.
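We can see the two expansions agreeing on their overlap numerically. A Python sketch (my own illustration):

```python
# Evaluate both expansions of 1/(1-x) at a point in the overlap of the
# two disks (|x| < 1 and |x + 1/2| < 3/2):
x = 0.3
about_zero = sum(x ** k for k in range(200))
about_half = sum((2 / 3) ** (k + 1) * (x + 0.5) ** k for k in range(200))
print(abs(about_zero - 1 / (1 - x)) < 1e-12,
      abs(about_half - 1 / (1 - x)) < 1e-12)  # True True
```

Either series computes the same function wherever both converge, so it doesn’t matter which one we use there.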
We’ll just consider our power series to be in $\mathbb{C}$, because if we have a real power series we can consider each coefficient as a complex number instead. Now we take a complex number $z$ and try to evaluate the power series at this point. We get a series of complex numbers

$$\sum\limits_{k=0}^\infty a_kz^k$$
by evaluating each polynomial truncation of the power series at $z$ and taking the limit of the sequence. For some $z$ this series may converge and for others it may not. The amazing fact is, though, that we can always draw a circle $|z|=r$ in the complex plane, within which the series always converges absolutely, and outside of which it always diverges. We’ll say nothing in general about whether it converges on the circle, though.
The tool here is the root test. We take the $k$th root of the size of the $k$th term in the series to find $\sqrt[k]{|a_kz^k|}=\sqrt[k]{|a_k|}|z|$. Then we can pull the $z$-dependence completely out of the limit superior to write

$$\limsup\limits_{k\to\infty}\sqrt[k]{|a_k|}|z|=|z|\limsup\limits_{k\to\infty}\sqrt[k]{|a_k|}$$

The root test tells us that if this is less than $1$ the series will converge absolutely, while if it’s greater than $1$ the series will diverge.
So let’s define

$$\frac{1}{r}=\limsup\limits_{k\to\infty}\sqrt[k]{|a_k|}$$

The root test now says that if $|z|<r$ we have absolute convergence, while if $|z|>r$ the series diverges. Thus $r$ is the radius of convergence that we seek.
Now there are examples of series with all sorts of behavior on the boundary of this disk. The series $\sum_{k=0}^\infty z^k$ has radius of convergence $1$ (as we can tell from the above procedure), but it doesn’t converge anywhere on the boundary circle. On the other hand, the series $\sum_{k=1}^\infty\frac{z^k}{k^2}$ has the same radius of convergence, but it converges everywhere on the boundary circle. And, just to be perverse, the series $\sum_{k=1}^\infty\frac{z^k}{k}$ has the same radius of convergence but converges everywhere on the boundary but the single point $z=1$.
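We can even estimate $r$ numerically. A Python sketch (a crude illustration of my own: it samples one large-$k$ root, which stands in for the limit superior only when $\sqrt[k]{|a_k|}$ actually converges):

```python
def root_test_radius(coeff, K=1000):
    """Estimate the radius of convergence 1/limsup |a_k|^(1/k) from a
    single K-th coefficient of the series."""
    return 1.0 / abs(coeff(K)) ** (1.0 / K)

print(root_test_radius(lambda k: 2.0 ** k))      # ~0.5 for sum (2z)^k
print(root_test_radius(lambda k: 1.0 / k))       # ~1.0 for sum z^k / k
print(root_test_radius(lambda k: 1.0 / k ** 2))  # ~1.0 for sum z^k / k^2
```

Notice that the last two series get the same radius even though, as above, they behave completely differently on the boundary circle: the radius sees nothing of what happens there.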