The Unapologetic Mathematician

Mathematics for the interested outsider

Indefinite Integrals and Convergence I

Let’s see how the notion of an indefinite integral plays with sequences of simple functions in the L^1 norm.

If \{f_n\} is a mean Cauchy sequence of integrable simple functions, and if each f_n has indefinite integral \nu_n, then the limit \nu(E)=\lim_n\nu_n(E) exists for all measurable sets E\subseteq X. Indeed, for each E we have a sequence of real numbers \{\nu_n(E)\}. We compare

\displaystyle\begin{aligned}\lvert\nu_n(E)-\nu_m(E)\rvert&=\left\lvert\int\limits_Ef_n\,d\mu-\int\limits_Ef_m\,d\mu\right\rvert\\&=\left\lvert\int\limits_Ef_n-f_m\,d\mu\right\rvert\\&\leq\int\limits_E\lvert f_n-f_m\rvert\,d\mu\\&\leq\int\lvert f_n-f_m\rvert\,d\mu\end{aligned}

and find that since the sequence of simple functions \{f_n\} is mean Cauchy the sequence of real numbers \{\nu_n(E)\} is Cauchy. And thus it must converge to a limiting value, which we define to be \nu(E). In fact, the convergence is uniform, since the last step of our inequality had nothing to do with the particular set E!

Now, this set function \nu(E) is finite-valued as the uniform limit of a sequence of finite-valued functions. Since limits commute with finite sums, and since each \nu_n is finitely additive, we see that \nu(E) is finitely additive as well; it turns out that it’s actually countably additive.

If \{E_n\} is a disjoint sequence of measurable sets whose (countable) union is E, then for every pair of positive integers n and k the triangle inequality tells us that

\displaystyle\left\lvert\nu(E)-\sum\limits_{i=1}^k\nu(E_i)\right\rvert\leq\left\lvert\nu(E)-\nu_n(E)\right\rvert+\left\lvert\nu_n(E)-\sum\limits_{i=1}^k\nu_n(E_i)\right\rvert+\left\lvert\nu_n\left(\bigcup\limits_{i=1}^kE_i\right)-\nu\left(\bigcup\limits_{i=1}^kE_i\right)\right\rvert

Choosing a large enough n we can make the first and third terms arbitrarily small, and then we can choose a large enough k to make the second term arbitrarily small. And thus we establish that

\displaystyle\nu(E)=\lim\limits_{k\to\infty}\sum\limits_{i=1}^k\nu(E_i)=\sum\limits_{i=1}^\infty\nu(E_i)

We can say something about the sequence of set functions \{\nu_n\}: each of them is — as an indefinite integral — absolutely continuous, but in fact the sequence is uniformly absolutely continuous. That is, for every \epsilon>0 there is a \delta independent of n so that \lvert\nu_n(E)\rvert<\epsilon for every measurable set E with \mu(E)<\delta.

Let N be a sufficiently large integer so that for n,m\geq N we have

\displaystyle\int\lvert f_n-f_m\rvert\,d\mu<\frac{\epsilon}{2}

which exists by the fact that \{f_n\} is mean Cauchy. Then we can pick a \delta so that

\displaystyle\lvert\nu_n(E)\rvert=\left\lvert\int\limits_Ef_n\,d\mu\right\rvert\leq\int\limits_E\lvert f_n\rvert\,d\mu<\frac{\epsilon}{2}

for all n\leq N and E with \mu(E)<\delta. We know that such a \delta exists for each f_n by absolute continuity, and so we just pick the smallest of them for n\leq N.

This \delta will then work for all n\leq N, but what if n>N? Well, then we can write

\displaystyle\begin{aligned}\lvert\nu_n(E)\rvert&=\left\lvert\int\limits_Ef_n\,d\mu\right\rvert\\&\leq\int\limits_E\lvert f_n\rvert\,d\mu\\&\leq\int\limits_E\lvert f_n-f_N\rvert+\lvert f_N\rvert\,d\mu\\&\leq\int\limits_E\lvert f_n-f_N\rvert\,d\mu+\int\limits_E\lvert f_N\rvert\,d\mu\\&<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon\end{aligned}

and so the same \delta works for all n>N as well.

May 31, 2010 | Analysis, Measure Theory

The L¹ Norm

We can now introduce a norm to our space of integrable simple functions, making it into a normed vector space. We define

\displaystyle\lVert f\rVert_1=\int\lvert f\rvert\,d\mu

Don’t worry about that little 1 dangling off of the norm, or why we call this the “L^1 norm”. That will become clear later when we generalize.

We can easily verify that \lVert cf\rVert_1=\lvert c\rvert\lVert f\rVert_1 and that \lVert f+g\rVert_1\leq\lVert f\rVert_1+\lVert g\rVert_1, using our properties of integrals. The catch is that \lVert f\rVert_1=0 doesn’t imply that f is identically zero, but only that f=0 almost everywhere. But really throughout our treatment of integration we’re considering two functions that are equal a.e. to be equivalent, and so this isn’t really a problem — \lVert f\rVert_1=0 implies that f is equivalent to the constant zero function for our purposes.
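As a concrete check, here is a minimal numerical sketch of these two norm properties. It assumes a purely atomic measure space (a handful of points with assigned weights), a toy model chosen only for illustration; on such a space every function is simple and \lVert f\rVert_1 is just a finite weighted sum.

```python
# Toy model (an assumption for illustration): a purely atomic measure
# space with points 0..4, where mu({i}) is the weight mu[i].  Every
# function on it is simple, and ||f||_1 = sum_i |f(i)| * mu({i}).
mu = [0.5, 1.0, 0.25, 2.0, 0.25]

def l1_norm(f, mu):
    """Compute ||f||_1 = integral of |f| d(mu) on the atomic space."""
    return sum(abs(fx) * m for fx, m in zip(f, mu))

f = [1.0, -2.0, 0.0, 3.0, -1.0]
g = [0.5, 1.0, -4.0, 0.0, 2.0]

# Homogeneity: ||c f||_1 = |c| ||f||_1
c = -3.0
assert abs(l1_norm([c * x for x in f], mu) - abs(c) * l1_norm(f, mu)) < 1e-12

# Triangle inequality: ||f + g||_1 <= ||f||_1 + ||g||_1
f_plus_g = [x + y for x, y in zip(f, g)]
assert l1_norm(f_plus_g, mu) <= l1_norm(f, mu) + l1_norm(g, mu) + 1e-12
```

Note that \lVert f\rVert_1=0 in this model forces f(i)=0 at every point of positive weight, which is exactly the "zero almost everywhere" phenomenon in miniature.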

Of course, a norm gives rise to a metric:

\displaystyle\rho(f,g)=\lVert g-f\rVert_1=\int\lvert g-f\rvert\,d\mu

and this gives us a topology on the space of integrable simple functions. And with a topology comes a notion of convergence!

We say that a sequence \{f_n\} of integrable functions is “Cauchy in the mean” or is “mean Cauchy” if \lVert f_n-f_m\rVert_1\to0 as m and n get arbitrarily large. We won’t talk quite yet about convergence because our situation is sort of like the one with rational numbers; we have a sense of when functions are getting close to each other, but most of these mean Cauchy sequences actually don’t converge within our space. That is, the normed vector space is not a Banach space.

However we can say some things about this notion of convergence. For one, a sequence \{f_n\} that is Cauchy in the mean is Cauchy in measure as well. Indeed, for any \epsilon>0 we can define the sets

\displaystyle E_{mn}=\left\{x\in X\big\vert\lvert f_n(x)-f_m(x)\rvert\geq\epsilon\right\}

And then we find that

\displaystyle\lVert f_n-f_m\rVert_1=\int\lvert f_n-f_m\rvert\,d\mu\geq\int\limits_{E_{mn}}\lvert f_n-f_m\rvert\,d\mu\geq\epsilon\mu(E_{mn})

As m and n get arbitrarily large, the fact that the sequence is mean Cauchy tells us that the left hand side of this inequality gets pushed down to zero, and so the right hand side must as well.
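The key step is the Chebyshev-style bound \epsilon\mu(E_{mn})\leq\lVert f_n-f_m\rVert_1. A quick numerical sketch, again on an assumed purely atomic toy measure space:

```python
# Sketch (atomic toy measure space assumed): verify the bound
# eps * mu(E_mn) <= ||f_n - f_m||_1,
# where E_mn = {x : |f_n(x) - f_m(x)| >= eps}.
mu = [0.5, 1.0, 0.25, 2.0]                 # weights mu({i}); an assumption

def l1_norm(f, mu):
    return sum(abs(fx) * m for fx, m in zip(f, mu))

f_n = [1.0, 0.2, -3.0, 0.0]
f_m = [0.9, 0.6, -1.0, 0.05]
diff = [a - b for a, b in zip(f_n, f_m)]

eps = 0.3
mu_E = sum(m for d, m in zip(diff, mu) if abs(d) >= eps)   # mu(E_mn)
assert eps * mu_E <= l1_norm(diff, mu) + 1e-12
```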

This notion of convergence will play a major role in our study of integration.

May 28, 2010 | Analysis, Functional Analysis, Measure Theory

Indefinite Integrals

We can use integrals to make a set function \nu out of any integrable function f.

\displaystyle\nu(E)=\int\limits_E f\,d\mu

We call this set function the “indefinite integral” of f, and it is defined for all measurable subsets E\subseteq X. This isn’t quite the same indefinite integral that we’ve worked with before. In that case we only considered functions f:\mathbb{R}\to\mathbb{R}, picked a base-point a, and defined a new function F on the same domain. In our new language, we’d write F(x)=\nu\left([a,x]\right), so the two concepts are related, but they’re not quite the same.

Anyhow, the indefinite integral \nu is “absolutely continuous”. That is, for every \epsilon>0 there is a \delta so that \lvert\nu(E)\rvert<\epsilon for all measurable E with \mu(E)<\delta. Indeed, if c is an upper bound for \lvert f\rvert, then we can show that

\displaystyle\lvert\nu(E)\rvert=\left\lvert\int\limits_Ef\,d\mu\right\rvert\leq\int\limits_E\lvert f\rvert\,d\mu\leq c\mu(E)

And so if we make \mu(E) small enough we can keep \lvert\nu(E)\rvert small.
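Here is a sketch of the bound \lvert\nu(E)\rvert\leq c\mu(E) on an assumed atomic toy measure space, with \nu(E) computed as a finite sum; the names nu and mu_of are illustrative only.

```python
# Sketch: for bounded f with |f| <= c, the indefinite integral satisfies
# |nu(E)| <= c * mu(E), so making mu(E) small forces |nu(E)| to be small.
mu = [0.5, 1.0, 0.25, 2.0]                 # atomic weights; an assumption
f  = [1.0, -2.0, 3.0, -0.5]
c  = max(abs(x) for x in f)                # an upper bound for |f|

def nu(E, f, mu):
    """Indefinite integral nu(E) = integral over E of f d(mu)."""
    return sum(f[i] * mu[i] for i in E)

def mu_of(E, mu):
    return sum(mu[i] for i in E)

for E in ([0], [1, 2], [0, 2, 3], []):
    assert abs(nu(E, f, mu)) <= c * mu_of(E, mu) + 1e-12
```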

Further, an indefinite integral \nu is countably additive. Indeed, if f=\chi_S is a characteristic function then countable additivity of \nu follows immediately from countable additivity of \mu. And countable additivity for general simple functions f is straightforward by writing each such function as a finite linear combination of characteristic functions.

With the exception of this last step, nothing we’ve said today depends on the function f being simple, and so once we generalize our basic linearity and order properties we will immediately have absolutely continuous indefinite integrals.

May 27, 2010 | Analysis, Measure Theory

More Properties of Integrals

Today we will show more properties of integrals of simple functions. But the neat thing is that they will follow from the last two properties we showed yesterday. And so their proofs really have nothing to do with simple functions. We will be able to point back to this post once we establish the same basic linearity and order properties for the integrals of wider and wider classes of functions.

First up: if f and g are integrable simple functions with f\geq g a.e. then

\displaystyle\int f\,d\mu\geq\int g\,d\mu

Indeed, the function f-g is nonnegative a.e., and so we conclude that

\displaystyle0\leq\int f-g\,d\mu=\int f\,d\mu-\int g\,d\mu

Next, if f and g are integrable simple functions then

\displaystyle\int\lvert f+g\rvert\,d\mu\leq\int\lvert f\rvert\,d\mu+\int\lvert g\rvert\,d\mu

Here we use the triangle inequality \lvert f+g\rvert\leq\lvert f\rvert+\lvert g\rvert and invoke the previous result.

Now, if f is an integrable simple function then

\displaystyle\left\lvert\int f\,d\mu\right\rvert\leq\int\lvert f\rvert\,d\mu

The absolute value \lvert f\rvert is greater than both f and -f, and so we find

\displaystyle\begin{aligned}\int f\,d\mu\leq&\int\lvert f\rvert\,d\mu\\-\int f\,d\mu\leq&\int\lvert f\rvert\,d\mu\end{aligned}

which implies the inequality we asserted.

As a heuristic, this last result is sort of like the triangle inequality to the extent that the integral is like a sum; adding inside the absolute value gives a smaller result than adding outside the absolute value. However, we have to be careful here; the integral we’re working with is not the limit of a sum like the Riemann integral was. In fact, we have no reason yet to believe that this integral and the Riemann integral have all that much to do with each other. But that shouldn’t stop us from using this analogy to remember the result.

Finally, if f is an integrable simple function, E is a measurable set, and \alpha and \beta are real numbers so that \alpha\leq f(x)\leq\beta for almost all x\in E, then

\displaystyle\alpha\mu(E)\leq\int\limits_Ef\,d\mu\leq\beta\mu(E)

Indeed, the assumed inequality is equivalent to the assertion that \alpha\chi_E\leq f\chi_E\leq\beta\chi_E a.e., and so — as long as \mu(E)<\infty — we conclude that

\displaystyle\int\alpha\chi_E\,d\mu\leq\int f\chi_E\,d\mu\leq\int\beta\chi_E\,d\mu

which is equivalent to the above. On the other hand, if \mu(E)=\infty, then f must be zero on all but a portion of E of finite measure or else it wouldn’t be integrable. Thus, in order for the assumed inequalities to hold, we must have \alpha\leq0 and \beta\geq0. The asserted inequalities are then all but tautological.

May 26, 2010 | Analysis, Measure Theory

Basic Properties of Integrable Simple Functions

We want to nail down a few basic properties of integrable simple functions. We define two simple functions f=\sum\alpha_i\chi_{E_i} and g=\sum\beta_j\chi_{F_j} to work with.

First of all, if f is simple then the scalar multiple \alpha f is simple for any real number \alpha. Indeed, \alpha f=\sum\alpha\alpha_i\chi_{E_i}, and so (at least for \alpha\neq0) exactly the same sets E_i must have finite measure for f and for \alpha f to be integrable; for \alpha=0 the scalar multiple is trivially integrable. It’s similarly easy to see that if f and g are both integrable, then f+g is integrable. Thus the integrable simple functions form a linear subspace of the vector space of all simple functions.

Now if f is integrable then the product fg is integrable whether or not g is. We can write

\displaystyle fg=\sum\limits_{i,j}\alpha_i\beta_j\chi_{E_i}\chi_{F_j}=\sum\limits_{i,j}\alpha_i\beta_j\chi_{E_i\cap F_j}

If each E_i has finite measure, then so does each E_i\cap F_j, whether each F_j does or not. Thus we see that the integrable simple functions form an ideal of the algebra of all simple functions.
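This bookkeeping with intersections can be sketched directly, representing a simple function as a list of (coefficient, set) pairs; this representation is an illustrative assumption for the sketch, not a standard data structure.

```python
# Sketch: the product of two simple functions, each stored as a list of
# (coefficient, set) pairs, is again simple, supported on intersections
# E_i & F_j (some intersections may be empty; those terms contribute 0).
f_rep = [(2.0, {0, 1}), (-1.0, {2})]        # f = 2*chi_{0,1} - chi_{2}
g_rep = [(3.0, {1, 2}), (0.5, {3})]         # g = 3*chi_{1,2} + 0.5*chi_{3}

prod_rep = [(a * b, E & F) for a, E in f_rep for b, F in g_rep]

def evaluate(rep, x):
    """Evaluate a simple function sum c_i chi_{E_i} at the point x."""
    return sum(c for c, E in rep if x in E)

# The representation really does compute the pointwise product f*g:
for x in range(5):
    assert evaluate(prod_rep, x) == evaluate(f_rep, x) * evaluate(g_rep, x)
```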

We can use this to define the integral of a function over some range other than all of X. If f is an integrable simple function and E is a measurable set, then f\chi_E is again an integrable simple function. We define the integral of f over E as

\displaystyle\int\limits_Ef\,d\mu=\int f\chi_E\,d\mu

This has the effect of leaving f the same on E and zeroing it out away from E. Thus the integral over the rest of the space E^c contributes nothing.

The next two properties are easy to prove for integrable simple functions, but they’re powerful. Other properties of integration will be proven in terms of these properties, and so when we widen the class of functions under consideration we’ll just have to reprove these two. The ones we will soon consider will immediately have proofs parallel to those for simple functions.

Not only is the function \alpha f+\beta g integrable, but we know its integral:

\displaystyle\int\alpha f+\beta g\,d\mu=\alpha\int f\,d\mu+\beta\int g\,d\mu

Indeed, if you were paying attention yesterday you’d have noticed that we said we wanted integration to be linear, but we never really showed that it was. It’s not really complicated, though: the expression \sum(\alpha\alpha_i)\chi_{E_i}+\sum(\beta\beta_j)\chi_{F_j} represents \alpha f+\beta g as a simple function, and it’s clear that the formula holds as stated.

Almost as obvious is the fact that if f is nonnegative a.e., then \int f\,d\mu\geq0. Indeed in any representation of f as a simple function, any term E_i corresponding to a negative \alpha_i must have \mu(E_i)=0 or else f wouldn’t be nonnegative almost everywhere. But then the term \alpha_i\mu(E_i) contributes nothing to the integral! Every other term has a nonnegative \alpha_i and a nonnegative measure \mu(E_i), and thus every term in the integral is nonnegative. This is the basis for all the nice order properties we will find for the integral.

May 25, 2010 | Analysis, Measure Theory

Integrating Simple Functions

We start our turn from measure in the abstract to applying it to integration, and we start with simple functions. In fact, we start a bit further back than that even; the simple functions are exactly the finite linear combinations of characteristic functions, and so we’ll start there.

Given a measurable set E, there’s an obvious way to define the integral of the characteristic function \chi_E: the measure \mu(E)! In fact, if you go back to the “area under the curve” definition of the Riemann integral, this makes sense: the graph of \chi_E is a “rectangle” (possibly in many pieces) with one side a line of length 1 and the other “side” the set E. Since \mu(E) is our notion of the “size” of E, the “area” will be the product of 1 and \mu(E). And so we define

\displaystyle\int\chi_E\,d\mu=\mu(E)

That is, the integral of the characteristic function \chi_E with respect to the measure \mu is \mu(E). Of course, this only really makes sense if \mu(E)<\infty.

Now, we’re going to want our integral to be linear, and so given a linear combination f=\sum\alpha_i\chi_{E_i} we define the integral

\displaystyle\int f\,d\mu=\int\left(\sum\alpha_i\chi_{E_i}\right)\,d\mu=\sum\alpha_i\int\chi_{E_i}\,d\mu=\sum\alpha_i\mu(E_i)

Again, this only really makes sense if all the E_i associated to nonzero \alpha_i have finite measure. When this happens, we call our function f “integrable”.

Since every simple function f is a finite linear combination of characteristic functions, we can always use this to define the integral of any simple function. But there might be a problem: what if we have two different representations of a simple function as linear combinations of characteristic functions? Do we always get the same integral?

Well, first off we can always choose an expression for f so that the E_i are disjoint. As an example, say that we write f=\alpha\chi_A+\beta\chi_B, where A and B overlap. We can rewrite this as f=\alpha\chi_{A\setminus B}+\beta\chi_{B\setminus A}+(\alpha+\beta)\chi_{A\cap B}. If f is integrable, then A and B both have finite measure, and so \mu is subtractive. Thus we can verify

\displaystyle\begin{aligned}\int\alpha\chi_{A\setminus B}+\beta\chi_{B\setminus A}+(\alpha+\beta)\chi_{A\cap B}\,d\mu&=\alpha\mu(A\setminus B)+\beta\mu(B\setminus A)+(\alpha+\beta)\mu(A\cap B)\\&=\alpha\left(\mu(A)-\mu(A\cap B)\right)+\beta\left(\mu(B)-\mu(A\cap B)\right)+(\alpha+\beta)\mu(A\cap B)\\&=\alpha\mu(A)+\beta\mu(B)\\&=\int\alpha\chi_A+\beta\chi_B\,d\mu\end{aligned}

Thus given any representation the corresponding disjoint representation gives us the same integral.

But what if we have two different disjoint representations f=\sum\alpha_i\chi_{E_i} and f=\sum\beta_j\chi_{F_j}? Our function can only take a finite number of nonzero values \{\gamma_k\}. We can define G_k to be the (measurable) set where f takes the value \gamma_k. For any given k, we can consider all the i so that \alpha_i=\gamma_k. The corresponding sets E_i must be a disjoint partition of G_k, and additivity tells us that the sum of these \mu(E_i) is equal to \mu(G_k). But the same goes for the F_j corresponding to values \beta_j=\gamma_k. And so both our representations give the same integral as f=\sum\gamma_k\chi_{G_k}. Everything in sight is linear, so this is all very straightforward.

At the end of the day, the integral of any simple function f is well-defined so long as the preimage G_k of each nonzero value \gamma_k has finite measure. Again, we call these simple functions “integrable”.
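On an atomic toy space (an assumption made only for illustration) we can check that two different representations of the same simple function, one with overlapping sets and one disjoint, yield the same integral \sum\alpha_i\mu(E_i):

```python
# Sketch: the formula  integral = sum_i alpha_i * mu(E_i)  is independent
# of the chosen representation of the simple function.
mu = {0: 0.5, 1: 1.0, 2: 0.25, 3: 2.0}     # atomic weights; an assumption

def integral(rep, mu):
    """Integral of sum alpha_i chi_{E_i} d(mu) = sum alpha_i mu(E_i)."""
    return sum(a * sum(mu[x] for x in E) for a, E in rep)

# Two representations of the same function, f = chi_{0,1} + 2*chi_{1,2}:
rep1 = [(1.0, {0, 1}), (2.0, {1, 2})]               # overlapping sets
rep2 = [(1.0, {0}), (3.0, {1}), (2.0, {2})]         # disjoint sets
assert abs(integral(rep1, mu) - integral(rep2, mu)) < 1e-12
```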

May 24, 2010 | Analysis, Measure Theory

Convergence in Measure and Algebra

Unlike our other methods of convergence, it’s not necessarily apparent that convergence in measure plays nicely with algebraic operations on the algebra of measurable functions. All our other forms are basically derived from pointwise convergence, and so the limit laws clearly hold; but it takes some work to see that the same is true for convergence in measure. So, for the rest of this post assume that \{f_n\} and \{g_n\} are sequences of finite-valued measurable functions converging in measure to f and g, respectively.

First up: if \alpha and \beta are real constants, then \{\alpha f_n+\beta g_n\} converges in measure to \alpha f+\beta g. Indeed, we find that

\displaystyle\begin{aligned}\lvert(\alpha f_n(x)+\beta g_n(x))-(\alpha f(x)+\beta g(x))\rvert&=\lvert\alpha(f_n(x)-f(x))+\beta(g_n(x)-g(x))\rvert\\&\leq\lvert\alpha\rvert\lvert f_n(x)-f(x)\rvert+\lvert\beta\rvert\lvert g_n(x)-g(x)\rvert\end{aligned}

Thus if \lvert f_n(x)-f(x)\rvert<\frac{\epsilon}{2\lvert\alpha\rvert} and \lvert g_n(x)-g(x)\rvert<\frac{\epsilon}{2\lvert\beta\rvert}, then \lvert(\alpha f_n(x)+\beta g_n(x))-(\alpha f(x)+\beta g(x))\rvert<\epsilon (if \alpha or \beta is zero, its term simply drops out). That is,

\displaystyle\begin{aligned}&\left\{x\in X\big\vert\lvert(\alpha f_n(x)+\beta g_n(x))-(\alpha f(x)+\beta g(x))\rvert\geq\epsilon\right\}\\&\subseteq\left\{x\in X\bigg\vert\lvert f_n(x)-f(x)\rvert\geq\frac{\epsilon}{2\lvert\alpha\rvert}\right\}\cup\left\{x\in X\bigg\vert\lvert g_n(x)-g(x)\rvert\geq\frac{\epsilon}{2\lvert\beta\rvert}\right\}\end{aligned}

Since \{f_n\} and \{g_n\} converge in measure to f and g, we can control the size of each of these sets by choosing a sufficiently large n, and thus \{\alpha f_n+\beta g_n\} converges in measure to \alpha f+\beta g.

Next, if f=0 a.e., the sequence \{f_n^2\} converges in measure to f^2. Indeed, the set \left\{x\in X\big\vert\lvert f_n(x)-f(x)\rvert\geq\sqrt{\epsilon}\right\} differs negligibly from the set \left\{x\in X\big\vert\lvert f_n(x)\rvert\geq\sqrt{\epsilon}\right\}. This, in turn, is exactly the same as the set \left\{x\in X\big\vert\lvert f_n(x)^2\rvert\geq\epsilon\right\}, which differs negligibly from the set \left\{x\in X\big\vert\lvert f_n(x)^2-f(x)^2\rvert\geq\epsilon\right\}. Thus control on the measure of one of these sets is control on all of them.

Now we’ll add the assumption that the whole space X is measurable, and that \mu(X) is finite (that is, the measure space is “totally finite”). This will let us conclude that the sequence \{f_ng\} converges in measure to fg. As the constant c increases, the measurable set E_c=\left\{x\in X\big\vert\lvert g(x)\rvert\leq c\right\} gets larger and larger, while its complement X\setminus E_c gets smaller and smaller; this complement is measurable because X is measurable.

In fact, the measure of the complement must decrease to zero, or else we’d have some set of positive measure on which g(x) is larger than any finite c, and thus g(x)=\infty on a set of positive measure. But then \{g_n\} couldn’t converge to g in measure. Since \mu is totally finite, the measure \mu(X\setminus E_c) must start at some finite value and decrease to zero; if \mu(X) were infinite, these measures might all be infinite. And so for every \delta>0 there is some c so that \mu(X\setminus E_c)<\delta.

In particular, we can pick a c so that \mu(X\setminus E_c)<\frac{\delta}{2}. On E_c, then, we have \lvert f_ng-fg\rvert\leq\lvert f_n-f\rvert c. Convergence in measure tells us that we can pick a large enough n so that

\displaystyle\left\{x\in E_c\bigg\vert\lvert f_n-f\rvert\geq\frac{\epsilon}{c}\right\}

has measure less than \frac{\delta}{2} as well. The set \left\{x\in E_c\big\vert\lvert f_ng-fg\rvert\geq\epsilon\right\} must be contained between these two sets, and thus will have measure less than \delta for sufficiently large n.

Now we can show that \{f_n^2\} converges in measure to f^2 for any f, not just ones that are a.e. zero. We can expand (f_n-f)^2=f_n^2-2f_nf+f^2, and thus rewrite f_n^2=(f_n-f)^2+2f_nf-f^2. Our first result shows that \{f_n-f\} converges to f-f=0, and our second result then shows that \{(f_n-f)^2\} also converges to 0^2=0. Our third result shows that f_nf converges to f^2. We use our first result to put everything together again and conclude that \{(f_n-f)^2+2f_nf-f^2\} converges to f^2 as we asserted.

And finally we can show that \{f_ng_n\} converges in measure to fg. We can use the same polarization trick as we’ve used before. Write f_ng_n=\frac{1}{4}\left((f_n+g_n)^2-(f_n-g_n)^2\right); we’ve just verified that the squares converge to squares, and we know that linear combinations also converge to linear combinations, and so f_ng_n converges in measure to fg.
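The polarization identity fg=\frac{1}{4}\left((f+g)^2-(f-g)^2\right) that drives this last step is a pointwise algebraic fact, easy to check numerically:

```python
# Check the polarization identity f*g = ((f+g)^2 - (f-g)^2) / 4 pointwise,
# which is what reduces convergence of products to convergence of squares
# and linear combinations.
f = [1.0, -2.5, 0.0, 3.0]
g = [0.5, 4.0, -1.0, 2.0]
for x, y in zip(f, g):
    assert abs(x * y - ((x + y) ** 2 - (x - y) ** 2) / 4) < 1e-12
```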

May 21, 2010 | Analysis, Measure Theory

Convergence in Measure II

Sorry, I forgot to post this before I left this morning.

The proposition we started with yesterday shows us that on a set E of finite measure, a.e. convergence is equivalent to convergence in measure, and a sequence is Cauchy a.e. if and only if it’s Cauchy in measure. We can strengthen it slightly by removing the finiteness assumption, but changing from a.e. convergence to almost uniform convergence: almost uniform convergence implies convergence in measure. Indeed, if \{f_n\} converges to f almost uniformly then for any two positive real numbers \epsilon and \delta there is a measurable set F with \mu(F)<\delta and \lvert f_n(x)-f(x)\rvert<\epsilon for all x\in E\setminus F and sufficiently large n. Thus we can make the set F where f_n and f are separated as small as we like, as required by convergence in measure.

We also can show some common-sense facts about sequences converging and Cauchy in measure. First, if \{f_n\} converges in measure to f, then \{f_n\} is Cauchy in measure. We find that

\displaystyle\left\{x\in X\big\vert\lvert f_n(x)-f_m(x)\rvert\geq\epsilon\right\}\subseteq\left\{x\in X\bigg\vert\lvert f_n(x)-f(x)\rvert\geq\frac{\epsilon}{2}\right\}\cup\left\{x\in X\bigg\vert\lvert f_m(x)-f(x)\rvert\geq\frac{\epsilon}{2}\right\}

because if both f_n(x) and f_m(x) are within \frac{\epsilon}{2} of the same number f(x), then they’re surely within \epsilon of each other. And so if we have control on the measures of the sets on the right, we have control of the measure of the set on the left.

Secondly, if \{f_n\} also converges in measure to g, then it only makes sense that f and g should be “the same”. It wouldn’t do for a convergence method to have many limits for a convergent sequence. Of course, this being measure theory, “the same” means a.e. we have f(x)=g(x). But this uses almost the same relation:

\displaystyle\left\{x\in X\big\vert\lvert f(x)-g(x)\rvert\geq\epsilon\right\}\subseteq\left\{x\in X\bigg\vert\lvert f_n(x)-f(x)\rvert\geq\frac{\epsilon}{2}\right\}\cup\left\{x\in X\bigg\vert\lvert f_n(x)-g(x)\rvert\geq\frac{\epsilon}{2}\right\}

Since we can make each of the sets on the right arbitrarily small by choosing a large enough n, we must have \mu\left(\left\{x\in X\big\vert\lvert f(x)-g(x)\rvert\geq\epsilon\right\}\right)=0 for every \epsilon>0; this implies that f=g almost everywhere.

Slightly deeper, if \{f_n\} is a sequence of measurable functions that is Cauchy in measure, then there is some subsequence \{f_{n_k}\} which is almost uniformly Cauchy. For every positive integer k we find some integer \bar{n}(k) so that if n,m\geq\bar{n}(k)

\displaystyle\mu\left(\left\{x\in X\bigg\vert\lvert f_n(x)-f_m(x)\rvert\geq\frac{1}{2^k}\right\}\right)<\frac{1}{2^k}

We define n_1=\bar{n}(1), and n_k to be the larger of n_{k-1}+1 or \bar{n}(k), to make sure that n_k is a strictly increasing sequence of natural numbers. We also define

\displaystyle E_k=\left\{x\in X\bigg\vert\lvert f_{n_k}(x)-f_{n_{k+1}}(x)\rvert\geq\frac{1}{2^k}\right\}

If k\leq i\leq j then for every x not in E_k\cup E_{k+1}\cup E_{k+2}\cup\dots we have

\displaystyle\lvert f_{n_i}(x)-f_{n_j}(x)\rvert\leq\sum\limits_{m=i}^\infty\lvert f_{n_m}(x)-f_{n_{m+1}}(x)\rvert<\frac{1}{2^{i-1}}

That is, the subsequence \{f_{n_k}\} is uniformly Cauchy on the set X\setminus\left(\cup_{i\geq k}E_i\right). But we also know that

\displaystyle\mu\left(\bigcup\limits_{m=k}^\infty E_m\right)\leq\sum\limits_{m=k}^\infty\mu(E_m)<\frac{1}{2^{k-1}}

and so \{f_{n_k}\} is almost uniformly Cauchy, as asserted.

Finally, we can take this subsequence that is almost uniformly Cauchy, and see that it must be a.e. Cauchy. We write f(x)=\lim_kf_{n_k}(x) at all x where this sequence converges. And then for every \epsilon>0,

\displaystyle\left\{x\in X\big\vert\lvert f_n(x)-f(x)\rvert\geq\epsilon\right\}\subseteq\left\{x\in X\bigg\vert\lvert f_n(x)-f_{n_k}(x)\rvert\geq\frac{\epsilon}{2}\right\}\cup\left\{x\in X\bigg\vert\lvert f_{n_k}(x)-f(x)\rvert\geq\frac{\epsilon}{2}\right\}

The measure of the first set on the right is small for sufficiently large n and n_k by the assumption that \{f_n\} is Cauchy in measure. The measure of the second approaches zero because almost uniform convergence implies convergence in measure.

And thus we conclude that if \{f_n\} is Cauchy in measure, then there is some measurable function f(x) to which \{f_n\} converges in measure. The topology of convergence in measure may not come from a norm, but it is still complete.

May 21, 2010 | Analysis, Measure Theory

Convergence in Measure I

Suppose that f and all the f_n (for positive integers n) are real-valued measurable functions on a set E of finite measure. For every \epsilon>0 we define

\displaystyle E_n(\epsilon)=\left\{x\in X\big\vert\lvert f_n(x)-f(x)\rvert\geq\epsilon\right\}

That is, E_n(\epsilon) is the set where the value of f_n is at least \epsilon away from the value of f. I say that \{f_n\} converges a.e. to f if and only if

\displaystyle\lim\limits_{n\to\infty}\mu\left(E\cap\bigcup\limits_{m=n}^\infty E_m(\epsilon)\right)=0

for every \epsilon>0.

Given a point x\in X, the sequence \{f_n(x)\} fails to converge to f(x) if and only if there is some positive number \epsilon so that x\in E_n(\epsilon) for infinitely many values of n. That is, if D is the set of points where f_n doesn’t converge to f, then

\displaystyle D=\bigcup\limits_{\epsilon>0}\limsup\limits_{n\to\infty}E_n(\epsilon)=\bigcup\limits_{k=1}^\infty\limsup\limits_{n\to\infty}E_n\left(\frac{1}{k}\right)

Of course, if \{f_n\} is to converge to f a.e., we need \mu(E\cap D)=0. A necessary and sufficient condition is that \mu(E\cap\limsup\limits_{n\to\infty}E_n(\epsilon))=0 for all \epsilon>0. Then we can calculate

\displaystyle\begin{aligned}\mu(E\cap\limsup\limits_{n\to\infty}E_n(\epsilon))&=\mu\left(E\cap\bigcap\limits_{n=1}^\infty\bigcup_{m=n}^\infty E_m(\epsilon)\right)\\&=\lim\limits_{n\to\infty}\mu\left(E\cap\bigcup\limits_{m=n}^\infty E_m(\epsilon)\right)\end{aligned}

Our necessary and sufficient condition is thus equivalent to the one we stated at the outset.

We’ve shown that over a set of finite measure, a.e. convergence is equivalent to this other condition. Extracting it a bit, we get a new notion of convergence which will (as we just showed) be equivalent to a.e. convergence over sets of finite measure, but may not be in general. We say that a sequence \{f_n\} of a.e. finite-valued measurable functions “converges in measure” to a measurable function f if for every \epsilon>0 we have

\displaystyle\lim\limits_{n\to\infty}\mu\left(\left\{x\in X\big\vert\lvert f_n(x)-f(x)\rvert\geq\epsilon\right\}\right)=0

Now, it turns out that there is no metric which gives this sense of convergence, but we still refer to a sequence as being “Cauchy in measure” if for every \epsilon>0 we have

\displaystyle\mu\left(\left\{x\in X\big\vert\lvert f_m(x)-f_n(x)\rvert\geq\epsilon\right\}\right)\to0\quad\text{as }n,m\to\infty
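For a concrete instance, take f_n(x)=x^n on [0,1] with Lebesgue measure, converging in measure to 0: the bad set \left\{x\big\vert x^n\geq\epsilon\right\} is the interval [\epsilon^{1/n},1], whose measure 1-\epsilon^{1/n} can be computed in closed form.

```python
# Sketch: f_n(x) = x^n on [0, 1] with Lebesgue measure converges in
# measure to 0.  The set {x : |f_n(x)| >= eps} is [eps**(1/n), 1], with
# measure 1 - eps**(1/n), which shrinks to 0 as n grows.
eps = 0.1
measures = [1 - eps ** (1 / n) for n in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(measures, measures[1:]))   # decreasing
assert measures[-1] < 0.01                                  # small for large n
```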

May 19, 2010 | Analysis, Measure Theory

Almost Uniform Convergence

From the conclusion of Egoroff’s Theorem we draw a new kind of convergence. We say that a sequence \{f_n\} of a.e. finite-valued measurable functions converges to the measurable function f “almost uniformly” if for every \epsilon>0 there is a measurable set F with \mu(F)<\epsilon so that the sequence \{f_n\} converges uniformly to f on E\setminus F.

We have to caution ourselves that this is not almost everywhere uniform convergence, which would be a sequence that converges uniformly once we cut out a subset of measure zero. Our example sequence f_n(x)=x^n yesterday converges almost uniformly to the constant zero function, but it doesn’t converge almost everywhere uniformly. Maybe “nearly uniform” would be better, but “almost uniformly” has become standard.
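Numerically, the x^n example on [0,1] looks like this: deleting the interval (1-\delta,1] of measure \delta makes the convergence uniform on what remains, since \sup_{[0,1-\delta]}x^n=(1-\delta)^n\to0, while the supremum over all of [0,1) stays at 1 for every n, so the convergence is not uniform without the cut.

```python
# Sketch of the x^n example: after cutting out (1 - delta, 1], the
# uniform error on what remains is sup over [0, 1 - delta] of x^n,
# which equals (1 - delta)^n and tends to 0; over all of [0, 1) the
# supremum is 1 for every n.
delta = 0.05
sups = [(1 - delta) ** n for n in (1, 50, 200)]
assert all(a > b for a, b in zip(sups, sups[1:]))   # decreasing
assert sups[-1] < 1e-4      # uniform convergence off a set of measure 0.05
```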

Now we can restate Egoroff’s Theorem to say that on a set of finite measure, a.e. convergence implies almost uniform convergence. Conversely, if \{f_n\} is a sequence of functions (on any measure space) that converges to f almost uniformly, then it converges pointwise almost everywhere.

We start by picking a set F_m for every positive integer m so that \mu(F_m)<\frac{1}{m}, and so that \{f_n\} converges uniformly to f on {F_m}^c. We set

\displaystyle F=\bigcap\limits_{m=1}^\infty F_m

and conclude that \mu(F)=0, since \mu(F)\leq\mu(F_m)<\frac{1}{m} for all m. If x\in F^c, then there must be some m for which x\in{F_m}^c. Since \{f_n\} converges to f uniformly on {F_m}^c, we conclude that \{f_n(x)\} converges to f(x). Thus \{f_n\} converges pointwise to f except on the set F of measure zero.

May 18, 2010 | Analysis, Measure Theory
