The Unapologetic Mathematician

Mathematics for the interested outsider

Taylor’s Theorem

I’ve decided I really do need one convergence result for Taylor series. In the form we’ll consider today, it’s an extension of the ideas in the Fundamental Theorem of Calculus.

Recall that if the function f has a continuous derivative f', then the Fundamental Theorem of Calculus states that

\displaystyle f(x)-f(x_0)=\int\limits_{x_0}^xf'(t)dt

Or, rearranging a bit

\displaystyle f(x)=f(x_0)+\int\limits_{x_0}^xf'(t)dt

That is, we start with the value at x_0 and integrate up the derivative to find how much to adjust to get the value at the nearby point x. Now if f' is itself continuously differentiable we can integrate by parts to find

\displaystyle f(x)=f(x_0)+xf'(x)-x_0f'(x_0)-\int\limits_{x_0}^xtf''(t)dt

Then we use the FToC to replace f'(x)

\displaystyle\begin{aligned}f(x)=f(x_0)+x\left(f'(x_0)+\int\limits_{x_0}^xf''(t)dt\right)-x_0f'(x_0)-\int\limits_{x_0}^xtf''(t)dt\\=f(x_0)+(x-x_0)f'(x_0)+\int\limits_{x_0}^x(x-t)f''(t)dt\end{aligned}

And if f'' is itself continuously differentiable we can proceed to find

\displaystyle f(x)=f(x_0)+(x-x_0)f'(x_0)+\frac{1}{2}(x-x_0)^2f''(x_0)+\frac{1}{2}\int\limits_{x_0}^x(x-t)^2f'''(t)dt

Is this starting to look familiar?

At the nth step we’ve got

\displaystyle f(x)=\left(\sum\limits_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\right)+\int\limits_{x_0}^x\frac{f^{(n+1)}(t)}{n!}(x-t)^ndt

and if f^{(n+1)} is continuously differentiable we can integrate by parts and use the FToC to find

\displaystyle f(x)=\left(\sum\limits_{k=0}^{n+1}\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k\right)+\int\limits_{x_0}^x\frac{f^{(n+2)}(t)}{(n+1)!}(x-t)^{n+1}dt

The sum is the nth Taylor polynomial for f — the beginning of the Taylor series of f — at the point x_0, and the integral we call the “integral remainder term” R_n(x). For infinitely-differentiable functions we can define R_n for all n and get a sequence. The function f is then analytic if this sequence of errors converges to {0} in a neighborhood of x_0.
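To see the pieces fit together numerically, here's a quick sketch (not from the post itself) checking the nth-step formula for f(x)=e^x, where every derivative is again e^x, with the remainder integral approximated by a simple midpoint rule:

from math import exp, factorial

def taylor_plus_remainder(x0, x, n, steps=10_000):
    # nth Taylor polynomial of exp about x0
    poly = sum(exp(x0) / factorial(k) * (x - x0)**k for k in range(n + 1))
    # integral remainder: int_{x0}^{x} exp(t) (x - t)^n / n! dt, midpoint rule
    h = (x - x0) / steps
    mids = (x0 + (i + 0.5) * h for i in range(steps))
    rem = h * sum(exp(t) * (x - t)**n for t in mids) / factorial(n)
    return poly + rem

print(taylor_plus_remainder(0.0, 1.0, 3))  # ~2.718281828..., matching exp(1)

The polynomial alone gives about 2.6667; the remainder integral supplies the missing 0.0516 or so, exactly as the formula promises.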

September 30, 2008 | Analysis, Calculus

Analytic Functions

Okay, we know that power series define functions, and that the functions so defined have derivatives, which have power series expansions. And thus these derivatives have derivatives themselves, and so on. Thus a function defined by a power series in a given disk is actually infinitely differentiable within that same disk.

What about the other way? Say we have a function f with derivatives of arbitrarily high order at the point z_0. We know that if this function has a power series about z_0, then the only possible sequence of coefficients is given by the formula

\displaystyle a_k=\frac{f^{(k)}(z_0)}{k!}

But does this sequence actually give a power series expansion of f? That is, does the (in)famous “Taylor series”

\displaystyle\sum\limits_{k=0}^\infty\frac{f^{(k)}(z_0)}{k!}(z-z_0)^k

converge to the function f in any neighborhood of z_0? If so, we’ll call the function “analytic” at z_0.

So, are all infinitely-differentiable functions analytic? That is, is every function with a Taylor series at z_0 actually the limit of that Taylor series near z_0? Well, the fact that we have a special name should give a hint that the answer isn’t always “yes”.

We’ve been working with complex power series, but let’s specialize now to real power series. That is, all the coefficients are real, we center them around real points, and they converge within a real disk — an interval — of a given radius.

Now in this context we can consider the function defined by f(x)=e^{-x^{-2}} away from x=0. It’s straightforward to calculate

\displaystyle\lim\limits_{x\rightarrow0}e^{-x^{-2}}=0

And if we define f(0)=0 it turns out to even be differentiable there. The derivative turns out to be \left(2x^{-3}\right)e^{-x^{-2}}. And we can also calculate

\displaystyle\lim\limits_{x\rightarrow0}2x^{-3}e^{-x^{-2}}=0

And so on. The nth derivative will be P_n(x^{-1})e^{-x^{-2}}, where P_n is a polynomial. We can calculate

\frac{d}{dx}P_n(x^{-1})e^{-x^{-2}}=\left(P'_n(x^{-1})(-x^{-2})+P_n(x^{-1})(2x^{-3})\right)e^{-x^{-2}}

That is, we can set P_0=1 and P_{n+1}=2X^3P_n-X^2P_n', and thus recursively define a sequence of polynomials.

Thus each derivative is a polynomial in x^{-1} multiplied by e^{-x^{-2}}, and as x approaches {0} the exponential factor clearly wins the race, sending the product to {0}. So for each derivative we can fill in the “gap” at x=0 by defining f^{(n)}(0)=0.
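If you'd like to check the recursion symbolically, here's a small sketch (assuming the sympy library; none of this is from the original post):

import sympy as sp

x, X = sp.symbols('x X')
f = sp.exp(-x**-2)

# build P_4 from P_0 = 1 via the recursion P_{n+1} = 2X^3 P_n - X^2 P_n'
P = sp.Integer(1)
for _ in range(4):
    P = sp.expand(2 * X**3 * P - X**2 * sp.diff(P, X))

# away from 0 the 4th derivative is P_4(1/x) e^{-1/x^2}, and its limit at 0 is 0
assert sp.simplify(sp.diff(f, x, 4) - P.subs(X, 1 / x) * f) == 0
print(sp.limit(sp.diff(f, x, 4), x, 0))  # 0, so we set f^{(4)}(0) = 0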

But now when we set up the Taylor series around x_0=0 what happens? The series is

\displaystyle\sum\limits_{k=0}^\infty\frac{f^{(k)}(0)}{k!}x^k=\sum\limits_{k=0}^\infty\frac{0}{k!}x^k

Which clearly converges to the constant function {0}. That is, the Taylor series of this function at x_0=0 converges to nothing like the function itself. This function is infinitely differentiable at {0}, but it is not analytic there.

There are a lot of theorems about what conditions on an infinitely-differentiable function make it analytic, but I’m going to leave them alone for now.

September 27, 2008 | Analysis, Calculus

Inverses of Power Series

Now that we know how to compose power series, we can invert them. But, contrary to what the setup might suggest, I’m talking about multiplicative inverses instead of compositional ones.

More specifically, say we have a power series expansion

\displaystyle p(z)=\sum\limits_{n=0}^\infty p_nz^n

within the radius r, and such that p(0)=p_0\neq0. Then there is some radius \delta within which the reciprocal has a power series expansion

\displaystyle\frac{1}{p(z)}=\sum\limits_{n=0}^\infty q_nz^n

In particular, we have q_0=\frac{1}{p_0}.

In the proof we may assume that p_0=1 — we can just divide the series through by p_0 — and so p(0)=1. We can set

\displaystyle P(z)=1+\sum\limits_{n=1}^\infty\left|p_nz^n\right|

within the same radius r. Since we know that P(0)=1, continuity tells us that there’s some \delta so that |z|<\delta implies |P(z)-1|<1.

Now we set

\displaystyle f(z)=\frac{1}{1-z}=\sum\limits_{n=0}^\infty z^n
\displaystyle g(z)=1-p(z)=\sum\limits_{n=1}^\infty -p_nz^n

For |z|<\delta we have \sum\limits_{n=1}^\infty\left|p_nz^n\right|=P(z)-1<1, which is exactly the hypothesis of the composition theorem. And then we can find a power series expansion of f\left(g(z)\right)=\frac{1}{1-(1-p(z))}=\frac{1}{p(z)}.

It’s interesting to note that you might expect a reciprocal formula to follow from the multiplication formula. Set the product of p(z) and an undetermined q(z) to the power series 1+0z+0z^2+..., and get an infinite sequence of algebraic conditions determining q_n in terms of the p_i. Showing that these can all be solved is possible, but it’s easier to come around the side like this.
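For the curious, here is what solving those conditions looks like in practice (a minimal sketch with hypothetical names, not the route the proof above takes): the coefficient of z^n in p(z)q(z) must vanish for every n\geq1, which determines each q_n from the earlier ones.

def reciprocal_series(p, terms):
    # q_0 = 1/p_0; for n >= 1, p_0 q_n + sum_{k=1}^{n} p_k q_{n-k} = 0
    q = [1.0 / p[0]]
    for n in range(1, terms):
        s = sum(p[k] * q[n - k] for k in range(1, min(n, len(p) - 1) + 1))
        q.append(-s / p[0])
    return q

# p(z) = 1 - z, so 1/p(z) should be the geometric series 1 + z + z^2 + ...
print(reciprocal_series([1.0, -1.0], 5))  # [1.0, 1.0, 1.0, 1.0, 1.0]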

September 24, 2008 | Analysis, Calculus, Power Series

Sudocube

DO WANT

Tipped off to its existence by Alexandre Borovik.

September 24, 2008 | Rubik’s Cube

Composition of Power Series

Now that we can take powers of functions defined by power series, and the results are again defined by power series in the same radii, we’re all set to compose functions defined by power series!

Let’s say we have two power series expansions about z=0:

\displaystyle f(z)=\sum\limits_{n=0}^\infty a_nz^n

within the radius r, and

\displaystyle g(z)=\sum\limits_{n=0}^\infty b_nz^n

within the radius R.

Now let’s take a z_1 with |z_1|<R and \sum\limits_{n=0}^\infty\left|b_nz_1^n\right|<r. Then we have a power series expansion for the composite:

\displaystyle f\left(g(z)\right)=\sum\limits_{n=0}^\infty c_nz^n.

The coefficients c_n are defined as follows: first, define b_n(k) to be the coefficient of z^n in the expansion of g(z)^k, then we set

\displaystyle c_n=\sum\limits_{k=0}^\infty a_kb_n(k)

To show this, first note that the hypothesis on z_1 assures that |g(z_1)|<r, so we can write

\displaystyle f\left(g(z_1)\right)=\sum\limits_{k=0}^\infty a_kg(z_1)^k=\sum\limits_{k=0}^\infty\sum\limits_{n=0}^\infty a_kb_n(k)z_1^n

If we are allowed to exchange the order of summation, then formally the result follows. To justify this (at least as well as we’ve been justifying such rearrangements recently) we need to show that

\displaystyle\sum\limits_{k=0}^\infty\sum\limits_{n=0}^\infty\left|a_kb_n(k)z_1^n\right|=\sum\limits_{k=0}^\infty\left|a_k\right|\sum\limits_{n=0}^\infty\left|b_n(k)z_1^n\right|

converges. But remember that each of the coefficients b_n(k) is itself a finite sum, so we find

\displaystyle\left|b_n(k)\right|\leq\sum\limits_{m_1+...+m_k=n}\left|b_{m_1}\right|...\left|b_{m_k}\right|

On the other hand, in parallel with our computation last time we find that

\displaystyle\left(\sum\limits_{n=0}^\infty\left|b_n\right|z^n\right)^k=\sum\limits_{n=0}^\infty B_n(k)z^n

where

\displaystyle B_n(k)=\sum\limits_{m_1+...+m_k=n}\left|b_{m_1}\right|...\left|b_{m_k}\right|

So we find

\displaystyle\begin{aligned}\sum\limits_{k=0}^\infty\left|a_k\right|\sum\limits_{n=0}^\infty\left|b_n(k)z_1^n\right|\leq\sum\limits_{k=0}^\infty\left|a_k\right|\sum\limits_{n=0}^\infty B_n(k)\left|z_1^n\right|\\=\sum\limits_{k=0}^\infty\left|a_k\right|\left(\sum\limits_{n=0}^\infty\left|b_nz_1^n\right|\right)^k\end{aligned}

which must then converge: by hypothesis \sum\limits_{n=0}^\infty\left|b_nz_1^n\right|<r, and the series for f converges absolutely at any point within the radius r.

Breathe!
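And once you’ve caught it: here’s a truncated numeric sketch of the bookkeeping (hypothetical helper names, and assuming g has zero constant term so that only the powers g^0,\dots,g^n contribute to the coefficient of z^n):

def mul(a, b, terms):
    # first `terms` coefficients of the product series
    c = [0.0] * terms
    for i in range(min(len(a), terms)):
        for j in range(min(len(b), terms - i)):
            c[i + j] += a[i] * b[j]
    return c

def compose(a, b, terms):
    # c_n = sum_k a_k b_n(k), where b_n(k) is the z^n coefficient of g(z)^k
    c = [0.0] * terms
    g_pow = [1.0] + [0.0] * (terms - 1)  # g(z)^0 = 1
    for k in range(min(len(a), terms)):
        for n in range(terms):
            c[n] += a[k] * g_pow[n]
        g_pow = mul(g_pow, b, terms)     # advance to g^{k+1}
    return c

# f(z) = 1/(1-z) composed with g(z) = z^2 should give 1/(1-z^2)
print(compose([1.0] * 6, [0.0, 0.0, 1.0], 6))  # [1, 0, 1, 0, 1, 0]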

September 24, 2008 | Analysis, Calculus, Power Series

Products of Power Series

Formally, we defined the product of two power series to be the series you get when you multiply out all the terms and collect terms of the same degree. Specifically, consider the series \sum\limits_{n=0}^\infty a_nz^n and \sum\limits_{n=0}^\infty b_nz^n. Their product will be the series \sum\limits_{n=0}^\infty c_nz^n, where the coefficients are defined by

\displaystyle c_n=\sum\limits_{k+l=n}a_kb_l=\sum\limits_{k=0}^na_kb_{n-k}

Now if the series converge within radii R_a and R_b, respectively, it wouldn’t make sense for the product of the functions to be anything but whatever this converges to. But in what sense is this the case?

Like when we translated power series, I’m going to sort of wave my hands here, motivating it by the fact that absolute convergence makes things nice.

Let’s take a point z_1 inside both of the radii of convergence. Then we know that the series \sum\limits_{n=0}^\infty a_nz_1^n and \sum\limits_{n=0}^\infty b_nz_1^n both converge absolutely. We want to consider the product of these limits

\displaystyle\left(\sum\limits_{k=0}^\infty a_kz_1^k\right)\left(\sum\limits_{l=0}^\infty b_lz_1^l\right)

Since the limit of the first sequence converges, we’ll just take it as a constant and distribute it over the other:

\displaystyle\sum\limits_{l=0}^\infty\left(\sum\limits_{k=0}^\infty a_kz_1^k\right)b_lz_1^l

And now we’ll just distribute each b_lz_1^l across the sum it appears with:

\displaystyle\sum\limits_{l=0}^\infty\left(\sum\limits_{k=0}^\infty a_kb_lz_1^{k+l}\right)

And now we’ll use the fact that all the series in sight converge absolutely to rearrange this sum, adding up all the terms of the same total degree at once, and pull out factors of z_1^n:

\displaystyle\sum\limits_{n=0}^\infty\left(\sum\limits_{k+l=n}a_kb_lz_1^n\right)=\sum\limits_{n=0}^\infty\left(\sum\limits_{k+l=n}a_kb_l\right)z_1^n

As a special case, we can work out powers of power series. Say that f(z)=\sum\limits_{n=0}^\infty a_n(z-z_0)^n within a radius of R. Then within the same radius of R we have

\displaystyle f(z)^p=\sum\limits_{n=0}^\infty c_n(p)(z-z_0)^n

where the coefficients are defined by

\displaystyle c_n(p)=\sum\limits_{m_1+m_2+...+m_p=n}a_{m_1}a_{m_2}...a_{m_p}
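As a sanity check on the convolution formula (a small sketch, not from the post), squaring the geometric series \frac{1}{1-z}=\sum z^n should produce the expansion of \frac{1}{(1-z)^2}, whose coefficients are n+1:

def cauchy_product(a, b):
    # c_n = sum_{k=0}^{n} a_k b_{n-k}, truncated to the shorter list
    terms = min(len(a), len(b))
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(terms)]

geom = [1.0] * 8
print(cauchy_product(geom, geom))  # [1, 2, 3, 4, 5, 6, 7, 8]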

September 22, 2008 | Analysis, Calculus, Power Series

Uniqueness of Power Series Expansions

Sorry for the delay. Grading.

Now we have power series expansions of functions around various points, and within various radii of convergence. We even have formulas to relate expansions about nearby points. But when we move from one point to a nearby point, the resulting series is only guaranteed to converge in a disk contained within the original disk. But then moving back to the original point we are only guaranteed convergence in an even smaller disk. Something seems amiss here.

Let’s look closely at the power series expansion about a given point z_0:

\displaystyle f(z)=\sum\limits_{n=0}^\infty a_n(z-z_0)^n

converging for |z-z_0|<r. We know that this function has a derivative, which again has a power series expansion about z_0:

\displaystyle f'(z)=\sum\limits_{n=1}^\infty na_n(z-z_0)^{n-1}

converging in the same radius. And so on, we find arbitrarily high derivatives

\displaystyle f^{(k)}(z)=\sum\limits_{n=k}^\infty\frac{n!}{(n-k)!}a_n(z-z_0)^{n-k}

Now, we can specialize this by evaluating at the central point to find f^{(k)}(z_0)=k!a_k. That is, we have the formula for the power series coefficients:

\displaystyle a_k=\frac{f^{(k)}(z_0)}{k!}

This formula specifies the sequence of coefficients of a power series expansion of a function about a point uniquely in terms of the derivatives of the function at that point. That is, there is at most one power series expansion of any function about a given point.
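As a tiny symbolic illustration (a sketch assuming the sympy library), the formula recovers the geometric series coefficients from the derivatives of \frac{1}{1-z} at {0}:

import sympy as sp

z = sp.symbols('z')
f = 1 / (1 - z)
print([sp.diff(f, z, k).subs(z, 0) / sp.factorial(k) for k in range(5)])
# [1, 1, 1, 1, 1], matching 1/(1-z) = sum z^n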

So in our first example, of moving away from a point and back, the resulting series has the same coefficients we started with. Thus even though we were only assured that the series would converge in a much smaller disk, it actually converges in a larger disk than our formula guaranteed. In fact, this happens a lot: moving from one point to another we actually break new ground and “analytically continue” the function to a larger domain.

That is, we now have two overlapping disks, and each one contains points the other misses. Each disk has a power series expansion of a function. These expansions agree on the parts of the disks that overlap, so it doesn’t matter which rule we use to compute the function in that region. We thus have expanded the domain of our function by choosing different points about which to expand a power series.

September 18, 2008 | Analysis, Calculus, Power Series

Derivatives of Power Series

The uniform convergence of a power series establishes that the function it represents must be continuous. Not only that, but it turns out that the limiting function must be differentiable.

A side note here: we define the derivative of a complex function by exactly the same limit of a difference quotient as before. There’s a lot to be said about derivatives of complex functions, but we’ll set the rest aside until later.

Now, to be specific: if the power series \sum\limits_{n=0}^\infty a_n(z-z_0)^n converges for |z-z_0|<r to a function f(z), then f has a derivative f', which itself has a power series expansion

\displaystyle f'(z)=\sum\limits_{n=1}^\infty na_n(z-z_0)^{n-1}

which converges within the same radius r.

Given a point z_1 within r of z_0, we can expand f as a power series about z_1:

\displaystyle f(z)=\sum\limits_{k=0}^\infty b_k(z-z_1)^k

convergent within some radius R of z_1. Then for z in this smaller disk of convergence we have

\displaystyle\frac{f(z)-f(z_1)}{z-z_1}=b_1+\sum\limits_{k=1}^\infty b_{k+1}(z-z_1)^k

by manipulations we know to work for series. The series on the right converges to a continuous function whose value at z_1 is {0}, since every term vanishes there. So the difference quotient approaches b_1 as z approaches z_1; thus f'(z_1) exists and equals b_1. But our formula for b_1 tells us

\displaystyle f'(z_1)=b_1=\sum\limits_{n=1}^\infty\binom{n}{1}a_n(z_1-z_0)^{n-1}=\sum\limits_{n=1}^\infty na_n(z_1-z_0)^{n-1}

Finally, we can apply the root test again. The nth roots we must consider are now \sqrt[n]{n}\sqrt[n]{|a_n|}. Since the first factor goes to {1}, the limit superior is the same as for the original series for f: \frac{1}{r}. Thus the derived series has the same radius of convergence.

Notice now that we can apply the exact same reasoning to f'(z), and find that it has a derivative f''(z), which has a power series expansion

\displaystyle f''(z)=\sum\limits_{n=2}^\infty n(n-1)a_n(z-z_0)^{n-2}

which again converges within the same radius. And so on, we determine that the limiting function of the power series has derivatives of arbitrarily large orders.
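A one-line numeric spot check of term-by-term differentiation (a sketch, not from the post): differentiating the geometric series should give \frac{1}{(1-z)^2} inside the unit disk.

z = 0.3
print(sum(n * z**(n - 1) for n in range(1, 200)), 1 / (1 - z)**2)
# both ~2.0408...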

September 17, 2008 | Analysis, Calculus, Power Series

Translating Power Series

So we know that we can have two power series expansions of the same function about different points. How are they related? An important step in this direction is given by the following theorem.

Suppose that the power series \sum\limits_{n=0}^\infty a_n(z-z_0)^n converges for |z-z_0|<r, and that it represents the function f(z) in some open subset S of this disk. Then for every point z_1\in S there is some open disk around z_1 of radius R contained in S, in which f has a power series expansion

\displaystyle f(z)=\sum\limits_{k=0}^\infty b_k(z-z_1)^k

where

\displaystyle b_k=\sum\limits_{n=k}^\infty\binom{n}{k}a_n(z_1-z_0)^{n-k}

The proof is almost straightforward. We expand

\displaystyle\begin{aligned}f(z)=\sum\limits_{n=0}^\infty a_n(z-z_0)^n=\sum\limits_{n=0}^\infty a_n(z-z_1+z_1-z_0)^n\\=\sum\limits_{n=0}^\infty a_n\sum_{k=0}^n\binom{n}{k}(z-z_1)^k(z_1-z_0)^{n-k}\end{aligned}

Now we need to interchange the order of summation. Strictly speaking, we haven’t established a condition that will allow us to make this move. However, I hope you’ll find it plausible that if this double series converges absolutely, we can adjust the order of summations freely. Indeed, we’ve seen examples of other rearrangements that all go through as soon as the convergence is absolute.

Now we consider the absolute values

\displaystyle\begin{aligned}\sum_{n=0}^\infty|a_n|\sum\limits_{k=0}^n\binom{n}{k}|z-z_1|^k|z_1-z_0|^{n-k}=\sum_{n=0}^\infty|a_n|(|z-z_1|+|z_1-z_0|)^n\\=\sum_{n=0}^\infty|a_n|(z_2-z_0)^n\end{aligned}

Where we set z_2=z_0+|z-z_1|+|z_1-z_0|. But then |z_2-z_0|<R+|z_1-z_0|\leq r, where the last inequality holds because the disk around z_1 of radius R fits within S, which fits within the disk of radius r around z_0. And so this series of absolute values must converge, and we’ll take it on faith for the moment (to be shored up when we attack double series more thoroughly) that we can now interchange the order of summations.

\displaystyle f(z)=\sum\limits_{k=0}^\infty\left(\sum_{n=k}^\infty a_n\binom{n}{k}(z_1-z_0)^{n-k}\right)(z-z_1)^k

This result allows us to recenter our power series expansions, but it only assures that the resulting series will converge in a disk which is contained within the original disk of convergence, so we haven’t necessarily gotten anything new. Yet.
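Here’s the recentering formula in action (a numeric sketch, not from the post): recentering the geometric series \sum z^n about z_1=-\frac{1}{2} should reproduce the coefficients b_k=\left(\frac{2}{3}\right)^{k+1} that show up in the “Power Series Expansions” post below.

from math import comb

z0, z1 = 0.0, -0.5
# a_n = 1 for the geometric series; truncate the (convergent) n-sum at 400
b = [sum(comb(n, k) * (z1 - z0)**(n - k) for n in range(k, 400))
     for k in range(5)]
print(b)  # ~[0.6667, 0.4444, 0.2963, 0.1975, 0.1317] = (2/3)^{k+1}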

September 16, 2008 | Analysis, Calculus, Power Series

Power Series Expansions

Up to this point we’ve been talking about power series like \sum\limits_{n=0}^\infty c_nz^n, where “power” refers to powers of z. This led us to show that when we evaluate a power series, the result converges in a disk centered at {0}. But what’s so special about zero?

Indeed, we could just as well write a series like \sum\limits_{n=0}^\infty c_n(z-z_0)^n for any point z_0. The result is just like picking up our original power series and carrying it over a bit. In particular, it still converges — and within the same radius — but now in a disk centered at z_0.

So when we have an equation like f(z)=\sum\limits_{n=0}^\infty c_n(z-z_0)^n, where the given series converges within the radius R, we say that the series “represents” f in the disk of convergence. Alternately, we call the series itself a “power series expansion” of f about z_0.

For example, consider the series \sum\limits_{n=0}^\infty\left(\frac{2}{3}\right)^{n+1}\left(z+\frac{1}{2}\right)^n. A simple application of the root test tells us that this series converges in the disk \left|z+\frac{1}{2}\right|<\frac{3}{2}, of radius \frac{3}{2} about the point z_0=-\frac{1}{2}. Some algebra shows us that if we multiply this series by 1-z=\frac{3}{2}-\left(z+\frac{1}{2}\right) we get {1}. Thus the series is a power series expansion of \frac{1}{1-z} about z_0=-\frac{1}{2}.
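A quick numeric spot check of that claim (a sketch) at a point inside both disks:

z = 0.4  # within 1 of 0 and within 3/2 of -1/2
print(sum((2 / 3)**(n + 1) * (z + 0.5)**n for n in range(200)), 1 / (1 - z))
# both 1.6666...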

This new power series expansion actually subsumes the old one, since every point within {1} of {0} is also within \frac{3}{2} of -\frac{1}{2}. But sometimes disks overlap only partly. Then each expansion describes the behavior of the function at values of z that the other one cannot. And of course no power series expansion can describe what happens at a discontinuity.

September 15, 2008 | Analysis, Calculus, Power Series
