# The Unapologetic Mathematician

## Some Continuous Duals

I really wish I could just say $L^p$ in post titles.

Anyway, I want to investigate the continuous dual of $L^p(\mu)$ for $1\leq p<\infty$. That is, we exclude the case $p=\infty$, though the Hölder conjugate $q$ may be infinite (as it is when $p=1$). And I say that when $(X,\mathcal{S},\mu)$ is $\sigma$-finite, the space $L^p(\mu)'$ of bounded linear functionals on $L^p(\mu)$ is isometrically isomorphic to $L^q(\mu)$.

First, I’m going to define a linear map $\kappa_p:L^q\to\left(L^p\right)'$. Given a function $g\in L^q$, let $\kappa_p(g)$ be the linear functional defined for any $f\in L^p$ by

$\displaystyle\left[\kappa_p(g)\right](f)=\int fg\,d\mu$

It’s clear, from the linearity of multiplication and of the integral itself, that this is a linear functional on $L^p$. Hölder’s inequality shows us that not only does the integral on the right exist, but

$\displaystyle\lvert\left[\kappa_p(g)\right](f)\rvert\leq\lVert fg\rVert_1\leq\lVert f\rVert_p\lVert g\rVert_q$

That is, $\kappa_p(g)$ is a bounded linear functional, and the operator norm $\lVert\kappa_p(g)\rVert_\text{op}$ is at most the $L^q$ norm of $g$. The extremal case of Hölder’s inequality shows that there is some $f$ for which this is an equality, and thus we conclude that $\lVert\kappa_p(g)\rVert_\text{op}=\lVert g\rVert_q$. That is, $\kappa_p:L^q\to\left(L^p\right)'$ is an isometry of normed vector spaces. Such a mapping has to be an injection, because if $\kappa_p(g)=0$ then $0=\lVert\kappa_p(g)\rVert_\text{op}=\lVert g\rVert_q$, which implies that $g=0$.
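As a quick numeric sanity check (not part of the original argument), here is a sketch of Hölder's inequality on a finite measure space — counting measure on $n$ points — where the integrals become plain sums. All names are illustrative, not from any library:

```python
import numpy as np

# Numeric sanity check of Hölder's inequality on a finite measure space
# (counting measure on n points), where integrals are just sums.
rng = np.random.default_rng(0)
f = rng.standard_normal(1000)
g = rng.standard_normal(1000)

p, q = 3.0, 1.5  # Hölder conjugates: 1/p + 1/q = 1
assert abs(1/p + 1/q - 1.0) < 1e-12

norm_fg_1 = np.sum(np.abs(f * g))
norm_f_p = np.sum(np.abs(f)**p)**(1/p)
norm_g_q = np.sum(np.abs(g)**q)**(1/q)

# |∫ fg dμ| ≤ ‖fg‖₁ ≤ ‖f‖_p ‖g‖_q
assert abs(np.sum(f * g)) <= norm_fg_1 <= norm_f_p * norm_g_q + 1e-9
```

Running this with other random draws or other conjugate pairs $(p,q)$ leaves the chain of inequalities intact, as it must.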

Now I say that $\kappa_p$ is also a surjection. That is, any bounded linear functional $\Lambda:L^p\to\mathbb{R}$ is of the form $\kappa_p(g)$ for some $g\in L^q$. Indeed, if $\Lambda=0$ then we can just pick $g=0$ as a preimage. Thus we may assume that $\Lambda$ is a nonzero bounded linear functional on $L^p$, and $\lVert\Lambda\rVert_\text{op}>0$. We first deal with the case of a totally finite measure space.

In this case, we define a set function on measurable sets by $\lambda(E)=\Lambda(\chi_E)$. It’s straightforward to see that $\lambda$ is additive. To prove countable additivity, suppose that $E$ is the countable disjoint union of a sequence $\{E_n\}$. If we write $A_k$ for the union of $E_1$ through $E_k$, we find that

$\displaystyle\lVert\chi_E-\chi_{A_k}\rVert_p=\left(\mu(E\setminus A_k)\right)^\frac{1}{p}\to0$

Since $\Lambda$ is continuous, we conclude that $\lambda(A_k)\to\lambda(E)$, and thus that $\lambda$ is a (signed) measure. It should also be clear that $\mu(E)=0$ implies $\lambda(E)=0$, and so $\lambda\ll\mu$. The Radon-Nikodym theorem now tells us that there exists an integrable function $g$ so that

$\displaystyle\Lambda(\chi_E)=\lambda(E)=\int\limits_Eg\,d\mu=\int\chi_Eg\,d\mu$

Linearity tells us that

$\displaystyle\Lambda(f)=\int fg\,d\mu$

for simple functions $f$, and also for every $f\in L^\infty(\mu)$, since each such function is the uniform limit of simple functions. We want to show that $g\in L^q$.

If $p=1$, then we must show that $g$ is essentially bounded. In this case, we find

$\displaystyle\left\lvert\int\limits_Eg\,d\mu\right\rvert\leq\lVert\Lambda\rVert_\text{op}\,\lVert\chi_E\rVert_1=\lVert\Lambda\rVert_\text{op}\mu(E)$

for every measurable $E$, from which we conclude that $\lvert g(x)\rvert\leq\lVert\Lambda\rVert_\text{op}$ a.e., or else we could find some set on which this inequality was violated. Thus $\lVert g\rVert_\infty\leq\lVert\Lambda\rVert_\text{op}$.

For other $p$, we can find a measurable $\alpha$ with $\lvert\alpha\rvert=1$ so that $\alpha g=\lvert g\rvert$. Setting $E_n=\{x\in X\vert n\geq\lvert g(x)\rvert\}$ and defining $f=\chi_{E_n}\lvert g\rvert^{q-1}\alpha$, we find that $\lvert f\rvert^p=\lvert g\rvert^q$ on $E_n$, $f\in L^\infty$, and so

$\displaystyle\int\limits_{E_n}\lvert g\rvert^q\,d\mu=\int\limits_Xfg\,d\mu=\Lambda(f)\leq\lVert\Lambda\rVert_\text{op}\lVert f\rVert_p=\lVert\Lambda\rVert_\text{op}\left(\int\limits_{E_n}\lvert g\rvert^q\,d\mu\right)^\frac{1}{p}$

We thus find

$\displaystyle\left(\int\limits_{E_n}\lvert g\rvert^q\,d\mu\right)^\frac{1}{q}=\left(\int\limits_{E_n}\lvert g\rvert^q\,d\mu\right)^{1-\frac{1}{p}}\leq\lVert\Lambda\rVert_\text{op}$

and thus

$\displaystyle\int\limits_X\chi_{E_n}\lvert g\rvert^q\,d\mu\leq\lVert\Lambda\rVert_\text{op}^q$

Applying the monotone convergence theorem as $n\to\infty$ we find that $\lVert g\rVert_q\leq\lVert\Lambda\rVert_\text{op}$.

Thus in either case we’ve found a $g\in L^q$ so that $\Lambda=\kappa_p(g)$.
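On a finite measure space with counting measure, the extremal function in this argument is easy to exhibit numerically: taking $f=\operatorname{sgn}(g)\lvert g\rvert^{q-1}$ makes the ratio $\Lambda(f)/\lVert f\rVert_p$ equal $\lVert g\rVert_q$ exactly. A small illustrative sketch (the names are my own, not from the post):

```python
import numpy as np

# Finite-dimensional analogue: Λ(f) = Σ f·g on n points has operator
# norm ‖g‖_q, achieved by the extremal f = sign(g)·|g|^(q-1).
rng = np.random.default_rng(1)
g = rng.standard_normal(50)
p, q = 4.0, 4.0 / 3.0                      # conjugate exponents

f = np.sign(g) * np.abs(g)**(q - 1)        # extremal test function
Lam_f = np.sum(f * g)                      # Λ(f) = ∫ f g dμ
norm_f_p = np.sum(np.abs(f)**p)**(1/p)
norm_g_q = np.sum(np.abs(g)**q)**(1/q)

# the ratio Λ(f)/‖f‖_p equals ‖g‖_q, so ‖κ_p(g)‖_op = ‖g‖_q
assert abs(Lam_f / norm_f_p - norm_g_q) < 1e-9
```

Note that $\lvert f\rvert^p=\lvert g\rvert^{p(q-1)}=\lvert g\rvert^q$, which is exactly the identity used with the truncating sets $E_n$ above.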

In the $\sigma$-finite case, we can write $X$ as the countable disjoint union of sets $X_i$ with $\mu(X_i)<\infty$. We let $Y_k$ be the union of the first $k$ of these sets. We note that $\lVert\chi_E f\rVert_p\leq\lVert f\rVert_p$ for every measurable set $E$, so $f\mapsto\Lambda(\chi_Ef)$ is a linear functional on $L^p$ of norm at most $\lVert\Lambda\rVert_\text{op}$. The finite case above shows us that there are functions $g_i$ on $X_i$ so that

$\displaystyle\Lambda(\chi_{X_i}f)=\int\limits_{X_i}fg_i\,d\mu$.

We can define $g_i(x)=0$ if $x\notin X_i$, and let $g$ be the sum of all these $g_i$. We see that

$\displaystyle\Lambda(\chi_{Y_k}f)=\int\limits_{Y_k}f(g_1+\dots+g_k)\,d\mu$

for every $f\in L^p$, and since $\mu(Y_k)<\infty$ we find that $\lVert g_1+\dots+g_k\rVert_q\leq\lVert\Lambda\rVert_\text{op}$. Then Fatou’s lemma shows us that $\lVert g\rVert_q\leq\lVert\Lambda\rVert_\text{op}$. Thus the $\sigma$-finite case is true as well.

One case in particular is especially worthy of note: since $2$ is Hölder-conjugate to itself, we find that $L^2$ is isomorphic to its own continuous dual space in the same way that a finite-dimensional inner-product space is isomorphic to its own dual space.

September 3, 2010

## Bounded Linear Transformations

In the context of normed vector spaces we have a topology on our spaces and so it makes sense to ask that maps $f:V\to W$ between them be continuous. In the finite-dimensional case, all linear functions are continuous, so this hasn’t really come up before in our study of linear algebra. But for functional analysis, it becomes much more important.

Now, really we only need to require continuity at one point — the origin, to be specific — because if it’s continuous there then it’ll be continuous everywhere. Indeed, continuity at $v'$ means that for any $\epsilon>0$ there is a $\delta>0$ so that $\lVert v-v'\rVert_V<\delta$ implies $\lVert f(v)-f(v')\rVert_W=\lVert f(v-v')\rVert_W<\epsilon$. In particular, if $v'=0$, then this means $\lVert v\rVert_V<\delta$ implies $\lVert f(v)\rVert_W<\epsilon$. Clearly if this holds, then the general version also holds.

But it turns out that there’s another equivalent condition. We say that a linear transformation $f:V\to W$ is “bounded” if there is some $M>0$ such that $\lVert f(v)\rVert_W\leq M\lVert v\rVert_V$ for all $v\in V$. That is, the factor by which $f$ stretches the length of a vector is bounded. By linearity, we only really need to check this on the unit sphere $\lVert v\rVert_V=1$, but it’s often just as easy to test it everywhere.

Anyway, I say that a linear transformation is continuous if and only if it’s bounded. Indeed, if $f:V\to W$ is bounded, then we find

$\displaystyle M\lVert h \rVert_V\geq\lVert f(h)\rVert_W=\lVert f(v+h)-f(v)\rVert_W$

so as we let $h$ approach $0$ — as $v+h$ approaches $v$ — the difference between $f(v+h)$ and $f(v)$ approaches zero as well. And so $f$ is continuous.

Conversely, if $f$ is continuous, then it is bounded. Since it’s continuous at $0$, we let $\epsilon=1$ and find a $\delta$ so that $\lVert f(h)\rVert_W\leq1$ for all vectors $h$ with $\lVert h\rVert_V\leq\delta$ (halving the $\delta$ that continuity provides, if need be, so that the closed ball works). Thus for all nonzero $v\in V$ we find

\displaystyle\begin{aligned}\lVert f(v)\rVert_W&=\left\lVert\frac{\lVert v\rVert_V}{\delta}f\left(\delta\frac{v}{\lVert v\rVert_V}\right)\right\rVert_W\\&=\frac{\lVert v\rVert_V}{\delta}\left\lVert f\left(\delta\frac{v}{\lVert v\rVert_V}\right)\right\rVert_W\\&\leq\frac{\lVert v\rVert_V}{\delta}\cdot1\\&=\frac{1}{\delta}\lVert v\rVert_V\end{aligned}

Thus we can use $M=\frac{1}{\delta}$ and conclude that $f$ is bounded.
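In finite dimensions every linear map is bounded, and with Euclidean norms the best constant $M$ for a matrix is its spectral norm. A rough numeric check (illustrative only, not from the post):

```python
import numpy as np

# A matrix A as a linear map between Euclidean spaces: the spectral
# norm is the least M with ‖Av‖ ≤ M‖v‖ for all v.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 7))
M = np.linalg.norm(A, 2)          # operator (spectral) norm of A

# boundedness: no vector gets stretched by more than a factor of M
for _ in range(100):
    v = rng.standard_normal(7)
    assert np.linalg.norm(A @ v) <= M * np.linalg.norm(v) + 1e-9
```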

The least such $M$ that works in the condition for $f$ to be bounded is called the “operator norm” of $f$, which we write as $\lVert f\rVert_\text{op}$. It’s straightforward to verify that $\lVert cf\rVert_\text{op}=\lvert c\rvert\lVert f\rVert_\text{op}$, and that $\lVert f\rVert_\text{op}=0$ if and only if $f$ is the zero operator. It remains to verify the triangle inequality.

Let’s say that we have bounded linear transformations $f:V\to W$ and $g:V\to W$ with operator norms $M=\lVert f\rVert_\text{op}$ and $N=\lVert g\rVert_\text{op}$, respectively. We will show that $M+N$ works as a bound for $f+g$, and thus conclude that $\lVert f+g\rVert_\text{op}\leq\lVert f\rVert_\text{op}+\lVert g\rVert_\text{op}$. Indeed, we check that

\displaystyle\begin{aligned}\lVert[f+g](v)\rVert_W&=\lVert f(v)+g(v)\rVert_W\\&\leq\lVert f(v)\rVert_W+\lVert g(v)\rVert_W\\&\leq M\lVert v\rVert_V+N\lVert v\rVert_V\\&=(M+N)\lVert v\rVert_V\end{aligned}

and our assertion follows. In particular, when our base field is itself a normed linear space (like $\mathbb{C}$ or $\mathbb{R}$ itself) we can conclude that the “continuous dual space” $V'$ consisting of bounded linear functionals $\Lambda:V\to\mathbb{F}$ is a normed linear space using the operator norm on $V'$.
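The subadditivity argument above can be checked numerically with matrices standing in for bounded operators (an illustrative finite-dimensional sketch, not the general proof):

```python
import numpy as np

# ‖f+g‖_op ≤ ‖f‖_op + ‖g‖_op, with two matrices under the spectral norm
rng = np.random.default_rng(3)
F = rng.standard_normal((4, 4))
G = rng.standard_normal((4, 4))

op = lambda A: np.linalg.norm(A, 2)   # operator norm of a matrix
assert op(F + G) <= op(F) + op(G) + 1e-9
```

The inequality is typically strict: the worst-case input directions for $F$ and $G$ usually differ.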

September 2, 2010

## Some Banach Spaces

To complete what we were saying about the $L^p$ spaces, we need to show that they’re complete. As it turns out, we can adapt the proof that mean convergence is complete, but we will take a somewhat different approach, using the fact that a normed vector space is complete if and only if every absolutely summable series in it is summable. It suffices, then, to show that for any sequence of functions $\{u_n\}$ in $L^p$ so that the series of $L^p$-norms converges

$\displaystyle\sum\limits_{n=1}^\infty\lVert u_n\rVert_p<\infty$

the series of functions converges to some function $f\in L^p$.

For finite $p$, Minkowski’s inequality allows us to conclude that

$\displaystyle\int\left(\sum\limits_{n=1}^\infty\lvert u_n\rvert\right)^p\,d\mu\leq\left(\sum\limits_{n=1}^\infty\lVert u_n\rVert_p\right)^p<\infty$

The monotone convergence theorem now tells us that the limiting function

$\displaystyle f=\sum\limits_{n=1}^\infty u_n$

is defined a.e., and that $f\in L^p$. The dominated convergence theorem can now verify that the partial sums of the series are $L^p$-convergent to $f$:

$\displaystyle\int\left\lvert f-\sum\limits_{k=1}^n u_k\right\rvert^p\,d\mu\leq\int\left(\sum\limits_{l=n+1}^\infty\lvert u_l\rvert\right)^p\,d\mu\to0$

In the case $p=\infty$, we can write $c_n=\lVert u_n\rVert_\infty$. Then $\lvert u_n(x)\rvert\leq c_n$ except on some set $E_n$ of measure zero. The union of all the $E_n$ must also be negligible, and so we can throw it all out and just have $\lvert u_n(x)\rvert\leq c_n$. Now the series of the $c_n$ converges by assumption, and thus the series of the $u_n$ must converge to some function bounded by the sum of the $c_n$ (except on the union of the $E_n$).

August 31, 2010

## The L¹ Norm

We can now introduce a norm to our space of integrable simple functions, making it into a normed vector space. We define

$\displaystyle\lVert f\rVert_1=\int\lvert f\rvert\,d\mu$

Don’t worry about that little $1$ dangling off of the norm, or why we call this the “$L^1$ norm”. That will become clear later when we generalize.

We can easily verify that $\lVert cf\rVert_1=\lvert c\rvert\lVert f\rVert_1$ and that $\lVert f+g\rVert_1\leq\lVert f\rVert_1+\lVert g\rVert_1$, using our properties of integrals. The catch is that $\lVert f\rVert_1=0$ doesn’t imply that $f$ is identically zero, but only that $f=0$ almost everywhere. But really throughout our treatment of integration we’re considering two functions that are equal a.e. to be equivalent, and so this isn’t really a problem — $\lVert f\rVert_1=0$ implies that $f$ is equivalent to the constant zero function for our purposes.

Of course, a norm gives rise to a metric:

$\displaystyle\rho(f,g)=\lVert g-f\rVert_1=\int\lvert g-f\rvert\,d\mu$

and this gives us a topology on the space of integrable simple functions. And with a topology comes a notion of convergence!

We say that a sequence $\{f_n\}$ of integrable functions is “Cauchy in the mean” or is “mean Cauchy” if $\lVert f_n-f_m\rVert_1\to0$ as $m$ and $n$ get arbitrarily large. We won’t talk quite yet about convergence because our situation is sort of like the one with rational numbers; we have a sense of when functions are getting close to each other, but most of these mean Cauchy sequences actually don’t converge within our space. That is, the normed vector space is not a Banach space.

However we can say some things about this notion of convergence. For one, a sequence $\{f_n\}$ that is Cauchy in the mean is Cauchy in measure as well. Indeed, for any $\epsilon>0$ we can define the sets

$\displaystyle E_{mn}=\left\{x\in X\big\vert\lvert f_n(x)-f_m(x)\rvert\geq\epsilon\right\}$

And then we find that

$\displaystyle\lVert f_n-f_m\rVert_1=\int\lvert f_n-f_m\rvert\,d\mu\geq\int\limits_{E_{mn}}\lvert f_n-f_m\rvert\,d\mu\geq\epsilon\mu(E_{mn})$

As $m$ and $n$ get arbitrarily large, the fact that the sequence is mean Cauchy tells us that the left hand side of this inequality gets pushed down to zero, and so the right hand side must as well.
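This Chebyshev-style bound $\epsilon\,\mu(E_{mn})\leq\lVert f_n-f_m\rVert_1$ is easy to watch in action on a finite measure space where each of $N$ points carries measure $1/N$ (a numeric illustration with made-up names, not from the post):

```python
import numpy as np

# Finite measure space: points 0..N-1, each of measure 1/N.
# Check ε·μ{|f_n − f_m| ≥ ε} ≤ ‖f_n − f_m‖₁.
rng = np.random.default_rng(4)
N = 10_000
fn = rng.standard_normal(N)
fm = fn + 0.01 * rng.standard_normal(N)   # a nearby function

eps = 0.02
diff = np.abs(fn - fm)
mu_E = np.mean(diff >= eps)               # μ{x : |f_n − f_m| ≥ ε}
l1 = np.mean(diff)                        # ∫ |f_n − f_m| dμ

assert eps * mu_E <= l1 + 1e-12
```

Pushing the $L^1$ distance down forces the measure of the bad set down, which is exactly mean Cauchy implying Cauchy in measure.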

This notion of convergence will play a major role in our study of integration.

May 28, 2010

## Convergence Almost Everywhere

Okay, so let’s take our idea of almost everywhere and apply it to convergence of sequences of measurable functions.

Given a sequence $\{f_n\}_{n=1}^\infty$ of extended real-valued functions on a measure space $X$, we say that $f_n$ converges a.e. to the function $f$ if there is a set $E_0\subseteq X$ with $\mu(E_0)=0$ so that $\lim\limits_{n\to\infty}f_n(x)=f(x)$ for all $x\in{E_0}^c$. Similarly, we say that the sequence $f_n$ is Cauchy a.e. if there exists a set $E_0$ of measure zero so that $\{f_n(x)\}$ is a Cauchy sequence of real numbers for all $x\in{E_0}^c$. That is, given $x\notin E_0$ and $\epsilon>0$ there is some natural number $N$ depending on $x$ and $\epsilon$ so that whenever $m,n\geq N$ we have $\lvert f_m(x)-f_n(x)\rvert<\epsilon$.

Because the real numbers $\mathbb{R}$ form a complete metric space, being Cauchy and being convergent are equivalent — a sequence of finite real numbers is convergent if and only if it is Cauchy, and a similar thing happens here. If a sequence of finite-valued functions is convergent a.e., then $\{f_n(x)\}$ converges to $f(x)$ away from a set of measure zero. Each of these sequences $\{f_n(x)\}$ is thus Cauchy, and so $\{f_n\}$ is Cauchy almost everywhere. On the other hand, if $\{f_n\}$ is Cauchy a.e. then the sequences $\{f_n(x)\}$ are Cauchy away from a set of measure zero, and these sequences then converge.

We can also define what it means for a sequence of functions to converge uniformly almost everywhere. That is, there is some set $E_0$ of measure zero so that for every $\epsilon>0$ we can find a natural number $N$ so that for all $n\geq N$ and $x\notin E_0$ we have $\lvert f_n(x)-f(x)\rvert<\epsilon$. The uniformity means that $N$ is independent of $x\in{E_0}^c$, but if we choose a different negligible $E_0$ we may have to choose different values of $N$ to get the desired control on the sequence.

As it happens, the topology defined by uniform a.e. convergence comes from a norm: the essential supremum; using this notion of convergence makes the algebra of essentially bounded measurable functions on a measure space $X$ into a normed vector space. Indeed, we can check what it means for a sequence of functions $\{f_n\}$ to converge to $f$ under the essential supremum norm — for any $\epsilon>0$ there is some $N$ so that for all $n\geq N$ we have $\text{ess sup}\lvert f_n-f\rvert<\epsilon$. Unpacking the definition of the essential supremum, this means that there is some measurable set $E_0$ with measure zero so that $\lvert f_n(x)-f(x)\rvert<\epsilon$ for all $x\notin E_0$, which is exactly what we said for uniform a.e. convergence above.

We can also turn around and define what it means for a sequence to be uniformly Cauchy almost everywhere — for any $\epsilon>0$ there is some $N$ so that for all $m,n\geq N$ we have $\text{ess sup}\lvert f_m-f_n\rvert<\epsilon$. Unpacking again, there is some measurable set $E_0$ of measure zero so that $\lvert f_m(x)-f_n(x)\rvert<\epsilon$ for all $x\notin E_0$. It’s straightforward to check that a sequence that converges uniformly a.e. is uniformly Cauchy a.e., and vice versa. That is, the topology defined by the essential supremum norm is complete, and the algebra of essentially bounded measurable functions on a measure space $X$ is a Banach space.

May 14, 2010

## Topological Vector Spaces, Normed Vector Spaces, and Banach Spaces

Before we move on, we want to define some structures that blend algebraic and topological notions. These are all based on vector spaces. And, particularly, we care about infinite-dimensional vector spaces. Finite-dimensional vector spaces are actually pretty simple, topologically. For pretty much all purposes you have a topology on your base field $\mathbb{F}$, and the vector space (which is isomorphic to $\mathbb{F}^n$ for some $n$) will get the product topology.

But for infinite-dimensional spaces the product topology is often not going to be particularly useful. For example, the space of functions $f:X\to\mathbb{R}$ is a product; we write $f\in\mathbb{R}^X$ to mean the product of one copy of $\mathbb{R}$ for each point in $X$. Limits in this topology are “pointwise” limits of functions, but this isn’t always the most useful way to think about limits of functions. The sequence

$\displaystyle f_n=n\chi_{\left[0,\frac{1}{n}\right]}$

converges pointwise to a function with $f(x)=0$ for $x\neq0$ and $f(0)=\infty$. But we will find it useful to be able to ignore this behavior at the one isolated point and say that $f_n\to0$. It’s this connection with spaces of functions that brings such infinite-dimensional topological vector spaces into the realm of “functional analysis”.
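The sequence $f_n=n\chi_{[0,1/n]}$ is worth playing with numerically: away from $0$ each $f_n(x)$ is eventually zero, yet $\int f_n\,d\mu=n\cdot\frac{1}{n}=1$ for every $n$. A tiny illustration (my own names):

```python
# f_n = n·χ_[0,1/n]: pointwise limit is 0 at every x > 0,
# but the integral of f_n is 1 for every n.
def f(n, x):
    return n * ((0.0 <= x) & (x <= 1.0 / n))

x = 0.3
vals = [f(n, x) for n in (1, 2, 10, 100)]
assert vals[-1] == 0          # f_n(0.3) = 0 once n > 1/0.3

# ∫ f_n dμ = n · (1/n) = 1 for every n
for n in (1, 2, 10, 100):
    assert abs(n * (1.0 / n) - 1.0) < 1e-12
```

So pointwise limits can lose information the integral sees, which is one reason other topologies on function spaces are worth the trouble.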

Okay, so to get a topological vector space, we take a vector space and put a (surprise!) topology on it. But not just any topology will do: Remember that every point in a vector space looks pretty much like every other one. The transformation $u\mapsto u+v$ has an inverse $u\mapsto u-v$, and it only makes sense that these be homeomorphisms. And to capture this, we put a uniform structure on our space. That is, we specify what the neighborhoods are of $0$, and just translate them around to all the other points.

Now, a common way to come up with such a uniform structure is to define a norm on our vector space. That is, to define a function $v\mapsto\lVert v\rVert$ satisfying the three axioms

• For all vectors $v$ and scalars $c$, we have $\lVert cv\rVert=\lvert c\rvert\lVert v\rVert$.
• For all vectors $v$ and $w$, we have $\lVert v+w\rVert\leq\lVert v\rVert+\lVert w\rVert$.
• The norm $\lVert v\rVert$ is zero if and only if the vector $v$ is the zero vector.

Notice that we need to be working over a field in which we have a notion of absolute value, so we can measure the size of scalars. We might also want to do away with the last condition and use a “seminorm”. In any event, it’s important to note that though our earlier examples of norms all came from inner products we do not need an inner product to have a norm. In fact, there exist norms that come from no inner product at all.

So if we define a norm we get a “normed vector space”. This is a metric space, with a metric function defined by $d(u,v)=\lVert u-v\rVert$. This is nice because metric spaces are first-countable, and thus sequential. That is, we can define the topology of a (semi-)normed vector space by defining exactly what it means for a sequence of vectors to converge, and in particular what it means for them to converge to zero.

Finally, if we’ve got a normed vector space, it’s a natural question to ask whether or not this vector space is complete or not. That is, we have all the pieces in place to define Cauchy sequences in our vector space, and we would like for all of these sequences to converge under our uniform structure. If this happens — if we have a complete normed vector space — we call our structure a “Banach space”. Most of the spaces we’re concerned with in functional analysis are Banach spaces.

Again, for finite-dimensional vector spaces (at least over $\mathbb{R}$ or $\mathbb{C}$) this is all pretty easy; we can always define an inner product, and this gives us a norm. If our underlying topological field is complete, then the vector space will be as well. Even without considering a norm, convergence of sequences is just given component-by-component. But infinite-dimensional vector spaces get hairier. Since our algebraic operations only give us finite sums, we have to take some sorts of limits to even talk about most vectors in the space in the first place, and taking limits of such vectors could just complicate things further. Studying these interesting topologies and seeing how linear algebra — the study of vector spaces and linear transformations — behaves in the infinite-dimensional context is the taproot of functional analysis.

May 12, 2010

## Uniform Convergence of Power Series

So, what’s so great right now about uniform convergence?

As we’ve said before, when we evaluate a power series we get a regular series at each point, which may or may not converge. If we restrict to those points where it converges, we get a function. That is, the series of functions converges pointwise to a limiting function. What’s great is that for any compact set contained within the disk of convergence of the series, this convergence is uniform!

To be specific, take a power series $\sum\limits_{n=0}^\infty c_nz^n$ which converges for $|z|<R$, and let $T$ be a compact subset of the disk of radius $R$. Now the function $|z|$ is a continuous, real-valued function on $T$, and the image of a compact space is compact, so $|z|$ takes some maximum value on $T$.

That is, there is some point $p\in T$ so that for every point $z\in T$ we have $|z|\leq|p|$. And thus we have $|c_nz^n|\leq|c_np^n|$ for all $z\in T$. Setting $M_n=|c_np^n|$, we invoke the Weierstrass M-test — the series $\sum\limits_{n=0}^\infty|c_np^n|$ converges because $p$ is within the disk of convergence, and thus evaluation at $p$ converges absolutely.

Now every point within the disk of convergence is contained in some compact set (closed disks are compact, so take a closed disk around the point of radius less than the distance from the point to the boundary of the disk of convergence), within which the convergence is uniform. Since each term is continuous, the uniform limit will also be continuous at the point in question. Thus inside the radius of convergence a power series evaluates to a continuous function.

This gives us our first hint as to what can block the convergence of a power series. As an explicit example, consider the geometric series $\sum\limits_{n=0}^\infty z^n$, which converges for $|z|<1$ to the function $\frac{1}{1-z}$. This function blows up at $z=1$, and so the power series can’t converge in any disk containing that point, since if it did it would have to be continuous there. And indeed, we can calculate the radius of convergence to be exactly $1$.

It’s important to note something in this example. For $|z|<1$, we have $\frac{1}{1-z}=\sum\limits_{n=0}^\infty z^n$, but these two functions are definitely not equal outside that region. Indeed, at $z=-2$ the function clearly has the value $\frac{1}{3}$, while the geometric series diverges wildly. The equality only holds within the radius of convergence.
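A few lines of arithmetic make the point concrete (illustrative sketch, not from the post): inside the disk the partial sums converge to $\frac{1}{1-z}$; at $z=-2$ the function is a perfectly innocent $\frac{1}{3}$, while the partial sums explode.

```python
# Partial sums of the geometric series Σ zⁿ versus the function 1/(1−z)
def geom_partial(z, N):
    return sum(z**n for n in range(N + 1))

z = 0.5
assert abs(geom_partial(z, 60) - 1.0 / (1.0 - z)) < 1e-12   # agrees inside

z = -2.0
# the function is 1/3 there, but the partial sums blow up instead
assert abs(1.0 / (1.0 - z) - 1.0 / 3.0) < 1e-12
assert abs(geom_partial(z, 60)) > 1e12
```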

September 10, 2008

## Uniform Convergence of Series

Since series of anything are special cases of sequences, we can import our notions to series. We say that a series $\sum\limits_{n=0}^\infty f_n$ converges uniformly to a sum $f$ if the sequence of partial sums $s_n=\sum\limits_{k=0}^nf_k$ converges uniformly to $f$. That is, if for every $\epsilon>0$ there is an $N$ so that $n>N$ implies that $\left|f(x)-\sum\limits_{k=0}^nf_k(x)\right|<\epsilon$ for all $x$ in the domain under consideration.

And we’ve got Cauchy’s condition: a series converges uniformly if and only if for every $\epsilon>0$ there is an $N$ so that $m$ and $n$ both greater than $N$ implies that $\left|\sum\limits_{k=m}^nf_k(x)\right|<\epsilon$ for all $x$ in the domain.

Here’s a great way to put this to good use: the Weierstrass M-test, which is sort of like the comparison test. Say that we have a positive bound for the size of each term in the series: $\left|f_n(x)\right|\leq M_n$ for all $x$ in the domain. And further assume that the series $\sum\limits_{n=0}^\infty M_n$ converges. Then the series $\sum\limits_{n=0}^\infty f_n(x)$ must converge uniformly.

Since the series of the $M_n$ converges, Cauchy’s condition for series of numbers tells us that for every $\epsilon>0$ there is some $N$ so that when $m$ and $n$ are bigger than $N$, $\sum\limits_{k=m}^nM_k<\epsilon$. But now when we consider $\left|\sum\limits_{k=m}^nf_k(x)\right|$ we note that it’s just a finite sum, and so we can use the triangle inequality to write

$\left|\sum\limits_{k=m}^nf_k(x)\right|\leq\sum\limits_{k=m}^n\left|f_k(x)\right|\leq\sum\limits_{k=m}^nM_k<\epsilon$

So Cauchy’s condition tells us that the series $\sum\limits_{n=0}^\infty f_n(x)$ converges uniformly in the domain under consideration.
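Here is the M-test in action on a standard example (my own choice, not from the post): $f_n(x)=\sin(nx)/n^2$ with $M_n=1/n^2$, so $\sum M_n$ converges and the tail of the function series is bounded uniformly in $x$ by the tail of $\sum 1/n^2$.

```python
import math

# Tail of Σ sin(nx)/n² beyond index N, truncated at a large stop index,
# compared against the corresponding tail of Σ 1/n² (the M_n bound).
def tail(x, N, stop=2000):
    return sum(math.sin(n * x) / n**2 for n in range(N + 1, stop))

N = 50
tail_bound = sum(1.0 / n**2 for n in range(N + 1, 2000))
for x in (0.0, 0.7, 1.9, 3.1):
    assert abs(tail(x, N)) <= tail_bound + 1e-12
```

The bound `tail_bound` does not depend on $x$ at all — that independence is exactly the uniformity the M-test delivers.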

September 9, 2008

## Cauchy’s Condition for Uniform Convergence

As I said at the end of the last post, uniform convergence has some things in common with convergence of numbers. And, in particular, Cauchy’s condition comes over.

Specifically, a sequence $f_n$ converges uniformly to a function $f$ if and only if for every $\epsilon>0$ there exists an $N$ so that $m>N$ and $n>N$ imply that $|f_m(x)-f_n(x)|<\epsilon$.

One direction is straightforward. Assume that $f_n$ converges uniformly to $f$. Given $\epsilon$ we can pick $N$ so that $n>N$ implies that $|f_n(x)-f(x)|<\frac{\epsilon}{2}$ for all $x$. Then if $m>N$ and $n>N$ we have

$|f_m(x)-f_n(x)|\leq|f_m(x)-f(x)|+|f(x)-f_n(x)|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$

In the other direction, if the Cauchy condition holds for the sequence of functions, then the Cauchy condition holds for the sequence of numbers we get by evaluating at each point $x$. So at least we know that the sequence of functions must converge pointwise. We set $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$ to be this limit, and we’re left to show that the convergence is uniform.

Given an $\epsilon$ the Cauchy condition tells us that we have an $N$ so that $n>N$ implies that $|f_n(x)-f_{n+k}(x)|<\frac{\epsilon}{2}$ for every natural number $k$. Then taking the limit over $k$ we find

$|f_n(x)-f(x)|=\lim\limits_{k\rightarrow\infty}|f_n(x)-f_{n+k}(x)|\leq\frac{\epsilon}{2}<\epsilon$

Thus the convergence is uniform.

September 8, 2008

## Uniform Convergence

Today we’ll give the answer to the problem of pointwise convergence. It’s analogous to the notion of uniform continuity in a metric space. In that case we noted that things became nicer if we could choose our $\delta$ the same for every point, and something like that will happen here.

To reiterate: we say that a sequence $f_n$ converges pointwise to a function $f$ if for every $x$, and for every $\epsilon$, there is an $N$ so that $n>N$ implies that $|f_n(x)-f(x)|<\epsilon$. Just like we did for uniform continuity we’re going to move around the quantifiers so that $N$ can depend only on $\epsilon$, not on $x$.

We say that a sequence of functions converges uniformly to a function $f$ if for every $\epsilon$ there is an $N$ so that for every $x$, $n>N$ implies that $|f_n(x)-f(x)|<\epsilon$. In pointwise convergence, the value at each point does converge to the value of the limiting function, but the rates can vary widely enough to make it impossible to control convergence at two different parts of the domain simultaneously. But in uniform convergence we have “uniform” control of the convergence over the entire domain.

So let’s see how we can use this to show that the limiting function $f$ is continuous if each function $f_n$ in the sequence is. Uniform convergence tells us that for every $\epsilon$ there is an $N$ so that $n>N$ implies that $|f_n(x)-f(x)|<\frac{\epsilon}{3}$ for every $x$. Fix one such $n$; since $f_n$ is continuous at $x_0$ there is some $\delta$ so that $|x-x_0|<\delta$ implies that $|f_n(x)-f_n(x_0)|<\frac{\epsilon}{3}$.

And now we can use this $\delta$ to show the continuity of $f$. For if $|x-x_0|<\delta$, we find

\begin{aligned}|f(x)-f(x_0)|&\leq|f(x)-f_n(x)|+|f_n(x)-f_n(x_0)|+|f_n(x_0)-f(x_0)|\\&<\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon\end{aligned}

The essential point here is that we were able to keep control of the convergence of the sequence both at the point of interest $x_0$, and at all points $x$ in the $\delta$-wide neighborhood.

Uniform convergence isn’t the only way to be assured of continuity in the limit, but it’s surely one of the most convenient. One thing that’s especially nice about uniform convergence is the way that we can control the separation of sequence terms from the limiting function by a single number $\epsilon$, rather than by a separate bound at each point of the domain.

That is, instead of fixing an $\epsilon$, fix an $N$ and consider how far sequence terms can be from the limit. Take the supremum

$\sup\limits_{n>N}|f_n(x)-f(x)|$

This depends on $x$, but if the convergence is uniform we can keep it down below some constant function. For pointwise convergence that isn’t uniform, no matter how big we pick the $N$ there will still be arbitrarily large differences.

In this way, uniform convergence is more like convergence of numbers than pointwise convergence of functions. Uniform convergence just isn’t as floppy as pointwise convergence can be.
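The classic example of this floppiness is $f_n(x)=x^n$ on $[0,1)$: it converges pointwise to $0$, but the supremum of $|f_n|$ stays near $1$ for every $n$, so the convergence isn’t uniform there; restricted to $[0,\frac{1}{2}]$ it is. A quick numeric sketch (illustrative, grid-based rather than a true supremum):

```python
# f_n(x) = xⁿ: pointwise → 0 on [0,1), but sup|f_n| stays near 1 there;
# on [0, 1/2] the sup is (1/2)ⁿ, which dies off fast.
xs_full = [i / 1000 for i in range(1000)]        # grid on [0, 1)
xs_half = [x for x in xs_full if x <= 0.5]

n = 200
sup_full = max(x**n for x in xs_full)            # grid stand-in for the sup
sup_half = max(x**n for x in xs_half)

assert sup_full > 0.8        # 0.999²⁰⁰ is still large: no uniform control
assert sup_half < 1e-10      # (1/2)²⁰⁰ is astronomically small
```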

September 5, 2008