The Unapologetic Mathematician

Mathematics for the interested outsider

Integration with Respect to a Signed Measure

If \mu is a signed measure then we know that the total variation \lvert\mu\rvert is a measure. It then makes sense to discuss whether or not a measurable function f is integrable with respect to \lvert\mu\rvert. In this case, f will be integrable with respect to both \mu^+ and \mu^-. Indeed, since \mu^\pm(E)\leq\lvert\mu\rvert(E) this is obviously true for simple f, and general integrable functions are limits of simple integrable functions.

This being the case, we can define both integrals

\displaystyle\begin{aligned}&\int f\,d\mu^+\\&\int f\,d\mu^-\end{aligned}

and, since \mu=\mu^+-\mu^-, it makes sense to define

\displaystyle\int f\,d\mu=\int f\,d\mu^+-\int f\,d\mu^-
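
For example, on the two-point space X=\{a,b\} with \mu(\{a\})=2 and \mu(\{b\})=-3 we have \mu^+(\{a\})=2 and \mu^-(\{b\})=3, with both vanishing elsewhere. For the function with f(a)=1 and f(b)=4 this gives

\displaystyle\int f\,d\mu=\int f\,d\mu^+-\int f\,d\mu^-=1\cdot2-4\cdot3=-10

which is exactly what we’d expect from weighting the values of f by the signed measures of the points.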

This integral shares some properties with “positive” integrals. For instance, it’s clearly linear:

\displaystyle\int \alpha f+\beta g\,d\mu=\alpha\int f\,d\mu+\beta\int g\,d\mu

Unfortunately, it doesn’t play well with order. Indeed, if E is a measurable \mu-negative set, then \chi_E\geq0 everywhere, but

\displaystyle\int\chi_E\,d\mu=\mu(E)\leq0=\int0\,d\mu

This throws off most of our basic properties. However, some can be salvaged. It’s no longer necessary that f=0 a.e. for the integral to be zero, but it’s sufficient. And, thus, if f=g a.e. then their integrals are equal, although the converse doesn’t hold.
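
For example, take the signed measure on a two-point space X=\{a,b\} given by \mu(\{a\})=1 and \mu(\{b\})=-1, and let f be the constant function 1. Then

\displaystyle\int f\,d\mu=\int f\,d\mu^+-\int f\,d\mu^-=1-1=0

even though f is nowhere zero.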

One interesting fact is that for every measurable set E we find

\displaystyle\lvert\mu\rvert(E)=\sup\left\lvert\int\limits_Ef\,d\mu\right\rvert

where we take the supremum over all measurable functions f with \lvert f(x)\rvert\leq1 everywhere. Indeed, if we take a Hahn decomposition X=A\uplus B for \mu, then since E is measurable so are E\cap A and E\cap B. If we take f^+=\chi_{E\cap A} and f^-=\chi_{E\cap B}, and set f=f^+-f^- (so that \lvert f(x)\rvert\leq1 everywhere), then we find

\displaystyle\begin{aligned}\int_Ef\,d\mu&=\int_E(f^+-f^-)\,d\mu^+-\int_E(f^+-f^-)\,d\mu^-\\&=\int_Ef^+\,d\mu^+-\int_Ef^-\,d\mu^+-\int_Ef^+\,d\mu^-+\int_Ef^-\,d\mu^-\\&=\int_E\chi_{E\cap A}\,d\mu^+-\int_E\chi_{E\cap B}\,d\mu^+-\int_E\chi_{E\cap A}\,d\mu^-+\int_E\chi_{E\cap B}\,d\mu^-\\&=\int_{E\cap A}\,d\mu^+-\int_{E\cap B}\,d\mu^+-\int_{E\cap A}\,d\mu^-+\int_{E\cap B}\,d\mu^-\\&=\mu^+(E\cap A)-\mu^+(E\cap B)-\mu^-(E\cap A)+\mu^-(E\cap B)\\&=\mu^+(E\cap A)-0-0+\mu^-(E\cap B)\\&=\mu^+(E\cap A)+\mu^+(E\cap B)+\mu^-(E\cap A)+\mu^-(E\cap B)\\&=\mu^+(E)+\mu^-(E)\\&=\lvert\mu\rvert(E)\end{aligned}

Thus we can actually attain this value. Can we get any larger? No. We can’t achieve anything by changing the value of f outside E, since the integral is only taken over E anyway. And within E we could only increase the positive component of the integral by increasing the value of f in E\cap A, or increase the negative component by decreasing the value of f in E\cap B. Either way, we’d make some \lvert f(x)\rvert>1, which isn’t allowed. Thus the total variation over E is indeed this supremum.

June 30, 2010 | Analysis, Measure Theory

The Jordan Decomposition of an Indefinite Integral

So, after all our setup it shouldn’t be surprising that we take an integrable function f and define its indefinite integral:

\displaystyle\nu(E)=\int\limits_E f\,d\mu

Now, as we’ve pointed out, this will be a measure so long as f is a.e. non-negative. But now if f is any integrable function at all, the indefinite integral \nu is a finite signed measure.
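
For a concrete example, let \mu be Lebesgue measure on the interval [-1,1] and let f(x)=x. Then the indefinite integral \nu(E)=\int_Ex\,dx genuinely takes both signs:

\displaystyle\nu([0,1])=\int\limits_0^1x\,dx=\frac{1}{2}\qquad\nu([-1,0])=\int\limits_{-1}^0x\,dx=-\frac{1}{2}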

Let’s give a Hahn decomposition corresponding to \nu. I say that if we set

\displaystyle\begin{aligned}A&=\{x\in X\vert f(x)\geq0\}\\B&=\{x\in X\vert f(x)<0\}\end{aligned}

then A is positive, B is negative, and X=A\uplus B is a Hahn decomposition. Indeed, we know that A and B are measurable. Thus if E is measurable, then E\cap A is measurable, and we find

\displaystyle\nu(E\cap A)=\int\limits_{E\cap A}f\,d\mu=\int\limits_E f\chi_A\,d\mu\geq0

since f\chi_A=f^+\geq0 everywhere. Similarly, we verify that B is negative.

Now we can use this to find the Jordan decomposition of \nu. We define

\displaystyle\begin{aligned}\nu^+(E)=\nu(E\cap A)&=\int\limits_{E\cap A}f\,d\mu=\int\limits_Ef^+\,d\mu\\\nu^-(E)=\nu(E\cap B)&=\int\limits_{E\cap B}f\,d\mu=-\int\limits_Ef^-\,d\mu\end{aligned}

That is, the upper variation of \nu is the indefinite integral of the positive part of f, while the lower variation of \nu is the indefinite integral of the negative part of f. And then we can calculate

\displaystyle\lvert\nu\rvert(E)=\nu^+(E)+\nu^-(E)=\int\limits_Ef^+\,d\mu+\int\limits_Ef^-\,d\mu=\int\limits_Ef^++f^-\,d\mu=\int\limits_E\lvert f\rvert\,d\mu

The total variation of \nu is the indefinite integral of the absolute value of f.
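
In the example above, where \mu is Lebesgue measure on [-1,1] and f(x)=x, the Hahn decomposition is A=[0,1] and B=[-1,0), the variations are \nu^\pm(E)=\int_Ef^\pm\,d\mu, and the total variation of the whole interval is

\displaystyle\lvert\nu\rvert([-1,1])=\int\limits_{-1}^1\lvert x\rvert\,dx=1

even though \nu([-1,1])=0.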

June 29, 2010 | Analysis, Measure Theory

The Banach Space of Totally Finite Signed Measures

Today we consider what happens when we’re working over a \sigma-algebra — so the whole space X is measurable — and we restrict our attention to totally finite signed measures. These form a vector space, since the sum of two finite signed measures is again a finite signed measure, as is any scalar multiple (positive or negative) of a finite signed measure.

Now, it so happens that we can define a norm on this space. Indeed, taking the Jordan decomposition, we must have both \mu^+(X)<\infty and \mu^-(X)<\infty, and thus \lvert\mu\rvert(X)<\infty. We define \lVert\mu\rVert=\lvert\mu\rvert(X), and use this as our norm. It’s straightforward to verify that \lVert c\mu\rVert=\lvert c\rvert\lVert\mu\rVert, and that \lVert\mu\rVert=0 implies that \mu is the zero measure. The triangle inequality takes a bit more work. We take a Hahn decomposition X=A\uplus B for \mu+\nu and write

\displaystyle\begin{aligned}\lVert\mu+\nu\rVert&=\lvert\mu+\nu\rvert(X)\\&=(\mu+\nu)^+(X)+(\mu+\nu)^-(X)\\&=(\mu+\nu)(X\cap A)-(\mu+\nu)(X\cap B)\\&=\mu(A)+\nu(A)-\mu(B)-\nu(B)\\&\leq\lvert\mu\rvert(A)+\lvert\mu\rvert(B)+\lvert\nu\rvert(A)+\lvert\nu\rvert(B)\\&=\lvert\mu\rvert(X)+\lvert\nu\rvert(X)\\&=\lVert\mu\rVert+\lVert\nu\rVert\end{aligned}

So we know that this defines a norm on our space.
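
As a quick example on a two-point space X=\{a,b\}: let \mu be the signed measure with \mu(\{a\})=2 and \mu(\{b\})=-3, and \nu the one with \nu(\{a\})=-1 and \nu(\{b\})=1. Then \lVert\mu\rVert=2+3=5 and \lVert\nu\rVert=1+1=2, while \mu+\nu assigns 1 to \{a\} and -2 to \{b\}, so that

\displaystyle\lVert\mu+\nu\rVert=1+2=3\leq5+2=\lVert\mu\rVert+\lVert\nu\rVert

and the cancellation between the two measures is exactly what makes the inequality strict.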

But is this space, as asserted, a Banach space? Well, let’s say that \{\mu_n\} is a Cauchy sequence of finite signed measures, so that given any \epsilon>0 we have \lVert\mu_n-\mu_m\rVert=\lvert\mu_n-\mu_m\rvert(X)<\epsilon for all sufficiently large m and n. For any measurable set E this dominates \lvert\mu_n-\mu_m\rvert(E), which in turn is at least \lvert(\mu_n-\mu_m)(E)\rvert=\lvert\mu_n(E)-\mu_m(E)\rvert. And so the sequence \{\mu_n(E)\} is always Cauchy, and hence convergent. It’s straightforward to show that the limiting set function \mu will be a finite signed measure, and that \lVert\mu_n-\mu\rVert goes to zero. And so the space of totally finite signed measures is indeed a Banach space.

June 28, 2010 | Analysis, Measure Theory

Jordan Decompositions

It’s not too hard to construct examples showing that Hahn decompositions for a signed measure, though they exist, are not unique. But if we have two of them — X=A_1\uplus B_1=A_2\uplus B_2 — there’s something we can show to be unique. For every measurable set E we have \mu(E\cap A_1)=\mu(E\cap A_2) and \mu(E\cap B_1)=\mu(E\cap B_2).

Indeed, it’s easy to see that E\cap(A_1\setminus A_2)\subseteq E\cap A_1, so (since A_1 is positive) \mu(E\cap(A_1\setminus A_2))\geq0. But we can also see that E\cap(A_1\setminus A_2)\subseteq E\cap B_2, so (since B_2 is negative) \mu(E\cap(A_1\setminus A_2))\leq0. And so \mu(E\cap(A_1\setminus A_2))=0, and similarly \mu(E\cap(A_2\setminus A_1))=0. We then see that

\displaystyle\mu(E\cap A_1)=\mu(E\cap(A_1\cup A_2))=\mu(E\cap A_2)

The proof that \mu(E\cap B_1)=\mu(E\cap B_2) is similar.

We can thus unambiguously define two set functions on the class of all measurable sets

\displaystyle\begin{aligned}\mu^+(E)&=\mu(E\cap A)\\\mu^-(E)&=-\mu(E\cap B)\end{aligned}

for any Hahn decomposition X=A\uplus B. We call these the “upper variation” and “lower variation”, respectively, of \mu. We can also define a set function

\displaystyle\lvert\mu\rvert(E)=\mu^+(E)+\mu^-(E)

called the “total variation” of \mu. It should be noted that \lvert\mu\rvert(E) and \lvert\mu(E)\rvert are very different things: in general we only have \lvert\mu(E)\rvert\leq\lvert\mu\rvert(E), and the two need not be equal.
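
For example, if X=\{a,b\} with \mu(\{a\})=2 and \mu(\{b\})=-3, then A=\{a\}, B=\{b\} is a Hahn decomposition, so \mu^+(\{a\})=2 and \mu^-(\{b\})=3, and

\displaystyle\lvert\mu\rvert(X)=\mu^+(X)+\mu^-(X)=2+3=5\qquad\lvert\mu(X)\rvert=\lvert2-3\rvert=1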

Now, I say that each of these set functions — \mu^+, \mu^-, and \lvert\mu\rvert — is a measure, and that \mu(E)=\mu^+(E)-\mu^-(E). If \mu is (totally) finite or \sigma-finite, then so are \mu^+ and \mu^-, and at least one of them will always be finite.

Each of these variations is clearly non-negative, and countable additivity is also clear from the definitions. For example, given a pairwise-disjoint sequence \{E_n\} we find

\displaystyle\begin{aligned}\mu^+\left(\biguplus\limits_{n=1}^\infty E_n\right)&=\mu\left(\left(\biguplus\limits_{n=1}^\infty E_n\right)\cap A\right)\\&=\mu\left(\biguplus\limits_{n=1}^\infty(E_n\cap A)\right)\\&=\sum\limits_{n=1}^\infty\mu(E_n\cap A)\\&=\sum\limits_{n=1}^\infty\mu^+(E_n)\end{aligned}

and similarly for \mu^- and \lvert\mu\rvert. Thus each one is a measure. The equation \mu=\mu^+-\mu^- is clear from the definitions. The fact that \mu takes at most one of the values \pm\infty implies that at least one of \mu^\pm is finite. Finally, if every measurable set E is a countable union of \mu-finite sets — say E=\biguplus E_n — then E\cap A is the countable union of the E_n\cap A, and since E_n\cap A\subseteq E_n with \lvert\mu(E_n)\rvert<\infty we find \mu^+(E_n)=\mu(E_n\cap A)<\infty as well, and similarly for \mu^-(E_n). Thus \mu^+ and \mu^- are \sigma-finite.

We see that every signed measure \mu can be written as the difference of two measures, one of which is finite. The representation \mu=\mu^+-\mu^- of a signed measure as the difference between its upper and lower variations is called the “Jordan decomposition” of \mu.

June 25, 2010 | Analysis, Measure Theory

Hahn Decompositions

Given a signed measure \mu on a measurable space (X,\mathcal{S}), we can use it to break up the space into two pieces. One of them will contribute positive measure, while the other will contribute negative measure. First: some preliminary definitions.

We call a set E\subseteq X “positive” (with respect to \mu) if for every measurable F\in\mathcal{S} the intersection E\cap F is measurable, and \mu(E\cap F)\geq0. That is, it’s not just that E has positive measure, but every measurable part of E has positive measure. Similarly, we say that E is “negative” if for every measurable F the intersection E\cap F is measurable, and \mu(E\cap F)\leq0. For example, the empty set is both positive and negative. It should be clear from these definitions that the difference of two negative sets is negative, and any disjoint countable union of negative sets is negative, and (thus) any countable union at all of negative sets is negative.
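
For instance, take the signed measure \mu(E)=\int_Ex\,dx (using Lebesgue measure) on the interval [-1,1]. The set [0,1] is positive and [-1,0] is negative, while the whole interval [-1,1] has measure zero but is neither positive nor negative:

\displaystyle\mu([-1,0])=-\frac{1}{2}<0=\mu([-1,1])<\frac{1}{2}=\mu([0,1])

So being a positive set is a strictly stronger condition than merely having non-negative measure.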

Now, for every signed measure \mu there is a “Hahn decomposition” of X. That is, there are two disjoint sets A and B, with A positive and B negative with respect to \mu, and whose union is all of X. We’ll assume that -\infty<\mu(E)\leq\infty for every measurable E, but if \mu takes the value -\infty (and not \infty) the modifications aren’t difficult.

We write \beta=\inf\mu(B), taking the infimum over all measurable negative sets B. We must be able to find a sequence \{B_i\} of measurable negative sets so that the limit of the \mu(B_i) is \beta — just pick B_i so that \beta\leq\mu(B_i)<\beta+\frac{1}{i} — and we can pick the sequence to be monotonic, with B_i\subseteq B_{i+1}. If we define B as the union — the limit — of this sequence, then we must have \mu(B)=\beta. The measurable negative set B has minimal measure \mu(B).

Now we pick A=X\setminus B, and we must show that A is positive. If it wasn’t, there would be a measurable subset E_0\subseteq A with \mu(E_0)<0. This E_0 cannot itself be negative, or else B\uplus E_0 would be negative and we’d have \mu(B\uplus E_0)=\mu(B)+\mu(E_0)<\mu(B), contradicting the minimality of \mu(B).

So E_0 must contain some subsets of positive measure. We let k_1 be the smallest positive integer so that E_0 contains a subset E_1\subseteq E_0 with \mu(E_1)\geq\frac{1}{k_1}. Then observe that

\displaystyle\mu(E_0\setminus E_1)=\mu(E_0)-\mu(E_1)\leq\mu(E_0)-\frac{1}{k_1}<0

So everything we just said about E_0 holds as well for E_0\setminus E_1. We let k_2 be the smallest positive integer so that E_0\setminus E_1 contains a subset E_2\subseteq E_0\setminus E_1 with \mu(E_2)\geq\frac{1}{k_2}. And so on we go until in the limit we’re left with

\displaystyle F_0=E_0\setminus\biguplus\limits_{i=1}^\infty E_i

after taking out all the sets E_i.

Since -\infty<\mu(E_0)<0, the measure of E_0 is finite, and so the measure of any measurable subset of E_0 must be finite as well. Thus the \frac{1}{k_n} must converge to zero, so that the series giving the measure of the countable disjoint union of all the E_n can converge. And so any measurable set F\subseteq F_0 must have \mu(F)\leq0: if we had \mu(F)\geq\frac{1}{k} for some positive integer k, then since F is contained in E_0\setminus(E_1\uplus\dots\uplus E_{n-1}) for every n, we would find k_n\leq k for all n, contradicting the fact that the k_n grow without bound. That is, F_0 must be a measurable negative set disjoint from B. But we must have

\displaystyle\mu(F_0)=\mu(E_0)-\sum\limits_{i=1}^\infty\mu(E_i)\leq\mu(E_0)<0

which contradicts the minimality of \mu(B), just as E_0 would have if it had been a negative set. And thus the assumption that \mu(E_0)<0 is untenable, and so every measurable subset of A has non-negative measure; that is, A is positive, as asserted.

June 24, 2010 | Analysis, Measure Theory

Signed Measures and Sequences

We have a couple results about signed measures and certain sequences of sets.

If \mu is a signed measure and \{E_n\} is a disjoint sequence of measurable sets so that the measure of their disjoint union is finite:

\displaystyle\left\lvert\mu\left(\biguplus\limits_{n=1}^\infty E_n\right)\right\rvert=\left\lvert\sum\limits_{n=1}^\infty\mu(E_n)\right\rvert<\infty

then the series

\displaystyle\sum\limits_{n=1}^\infty\mu(E_n)

is absolutely convergent. We already know it converges since the measure of the union is finite, but absolute convergence will give us all sorts of flexibility to reassociate and rearrange our series.

We want to separate out the positive and the negative terms in this series. We write E_n^+=E_n if \mu(E_n)\geq0 and E_n^+=\emptyset otherwise. Similarly, we write E_n^-=E_n if \mu(E_n)\leq0 and E_n^-=\emptyset otherwise. Then we write the two series

\displaystyle\begin{aligned}\mu\left(\biguplus\limits_{n=1}^\infty E_n^+\right)&=\sum\limits_{n=1}^\infty\mu(E_n^+)\\\mu\left(\biguplus\limits_{n=1}^\infty E_n^-\right)&=\sum\limits_{n=1}^\infty\mu(E_n^-)\end{aligned}

The terms of each series have a constant sign — non-negative for the first and non-positive for the second — and so if they diverge they can only diverge definitely — to \infty in the first case and to -\infty in the second. At least one of them must converge, or else we’d have \mu attaining both infinite values. But the series of the \mu(E_n) converges, and so both series must converge — if, say, the series of the \mu(E_n^+) diverged to \infty while the series of the \mu(E_n^-) converged, then there wouldn’t be enough negative terms in the series of the \mu(E_n) for the whole thing to converge. And since the series of positive terms and the series of negative terms both converge, the whole series is absolutely convergent.

Now we turn to some continuity properties. If \{E_n\} is a monotone sequence — if it’s decreasing we also ask that at least one \lvert\mu(E_n)\rvert<\infty — then

\displaystyle\mu\left(\lim\limits_{n\to\infty}E_n\right)=\lim\limits_{n\to\infty}\mu(E_n)

The proofs of both of these facts are exactly the same as for measures, except we need the monotonicity result from the end of yesterday’s post to be sure that once we hit one finite \mu(E_n), all the later \mu(E_m) will stay finite.

June 23, 2010 | Analysis, Measure Theory

Signed Measures

We continue what we started yesterday by extending the notion of a measure. We want something that captures the indefinite integral of any function whose integral is defined, whether or not that function is non-negative.

And so we introduce a “signed measure”. This is essentially just like a measure, except we allow negative values as well. That is, \mu is an extended real-valued, countably additive set function. But we want to prune the concept slightly.

First off, we insist that \mu(\emptyset)=0. Additivity tells us that \mu(E)=\mu(E\uplus\emptyset)=\mu(E)+\mu(\emptyset); if there is any measurable set E at all with \mu(E) finite, then \mu(\emptyset)=0 follows, so our condition just rules out the degenerate cases where \mu(E)=\infty or \mu(E)=-\infty for all measurable E.

Secondly, we insist that \mu can take only one of the values \pm\infty. That is, we can’t have one measurable E with \mu(E)=\infty and another measurable F with \mu(F)=-\infty. Indeed, if this were the case then we’d have to deal with some indeterminate sums. We can’t quite be sure of this just considering E\cup F since the two might not be disjoint, but we can consider the following three equations that follow from additivity:

\displaystyle\begin{aligned}\mu(E)&=\mu(E\setminus F)+\mu(E\cap F)\\\mu(F)&=\mu(F\setminus E)+\mu(E\cap F)\\\mu(E\Delta F)&=\mu(E\setminus F)+\mu(F\setminus E)\end{aligned}

Either \mu(E\cap F) is finite or it’s not. If it’s finite, then the first two equations tell us that \mu(E\setminus F)=\infty and \mu(F\setminus E)=-\infty, and so the sum in the third equation is indeterminate. On the other hand, if \mu(E\cap F)=\infty then the sum in the second equation must be indeterminate to satisfy \mu(F)=-\infty, and similarly the sum in the first equation would have to be indeterminate if \mu(E\cap F)=-\infty. To avoid these indeterminate sums we make our restriction.

On the other hand, there are certain pathologies we don’t have to worry about. For instance, if \{E_n\} is a pairwise disjoint sequence of measurable sets, then countable additivity tells us that

\displaystyle\sum\limits_{n=1}^\infty\mu(E_n)=\mu\left(\biguplus\limits_{n=1}^\infty E_n\right)

and so the sum either converges (if the measure of the union is finite) or it definitely diverges to \pm\infty. That is, even though we may have negative terms we don’t have to worry about an oscillating sum that fails to converge because the sequence of partial sums jumps around and never settles down. So it always makes sense to write such a sum down, even if its value may be infinite.

All the language about a measure being finite, \sigma-finite, totally finite, or totally \sigma-finite carries over. The only modification is that we have to ask for \lvert\mu(E)\rvert<\infty or (equivalently) -\infty<\mu(E)<\infty instead of \mu(E)<\infty in the definitions.

Of course, just as for measures, signed measures are finitely additive (which we used above) and thus subtractive. A signed measure won’t be monotone in general, though: given a measurable set E and a measurable subset F\subseteq E we can write

\displaystyle\mu(E)=\mu(E\setminus F)+\mu(F)

If \mu(E\setminus F)<0 then \mu(E)<\mu(F), even though F is the subset. However, we can at least say that if \lvert\mu(E)\rvert<\infty then \lvert\mu(F)\rvert<\infty as well. Indeed, using the same equation if either one of the summands on the right is infinite then \mu(E) is as well. If both are, then they’re both infinite in the same direction (since \mu can only assume one of \pm\infty) and so \mu(E) is again infinite. The only possibility for a finite \mu(E) is for both \mu(E\setminus F) and \mu(F) to be finite.
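
To see the failure of monotonicity in the simplest possible case, take X=\{a,b\} with \mu(\{a\})=1 and \mu(\{b\})=-1, with E=X and F=\{a\}. Then

\displaystyle\mu(E)=1-1=0<1=\mu(F)

even though F\subseteq E.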

June 22, 2010 | Analysis, Measure Theory

Extending the Integral

Given an integrable function f, we’ve defined the indefinite integral \nu(E) to be the set function

\displaystyle\nu(E)=\int\limits_Ef\,d\mu=\int f\chi_E\,d\mu

This is clearly real-valued, and we’ve seen that it’s countably additive. If f is a.e. non-negative, then \nu(E) will also be non-negative, and so the indefinite integral is a measure. Since f is integrable we see that

\displaystyle\nu(X)=\int f\,d\mu<\infty

and so \nu is a totally finite measure.

But this situation feels a bit artificially restrictive in a couple ways. First of all, measures can be extended real-valued — why do we never find \nu(E)=\pm\infty? Well, it makes sense to extend the definition of at least the symbol of integration a bit. If f is not integrable, but f\geq0 a.e., there is really only one possibility: there is no upper bound on the integrals of simple functions smaller than f. And so in this situation it makes sense to define

\displaystyle\int f\,d\mu=\infty

Similarly, if f\leq0 a.e. and fails to be integrable, it makes sense to define

\displaystyle\int f\,d\mu=-\infty

In general, we can break a function f into its positive and negative parts f^+ and f^-, and then define

\displaystyle\int f\,d\mu=\int f^+\,d\mu-\int f^-\,d\mu

for all functions for which at most one of f^+ and f^- fails to be integrable. That is, if the positive part f^+ is integrable but the negative part f^- is not, then the integral can be defined to be -\infty. If the negative part is integrable but the positive part isn’t, we can define the integral to be \infty. If both positive and negative parts are integrable then the whole function is integrable, while if neither part is integrable we still leave the integral undefined. We don’t know in general how to deal with the indeterminate form \infty-\infty.
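
For example, take \mu to be Lebesgue measure on the real line. The constant function f=1 has negative part f^-=0, which is certainly integrable, while its positive part is not integrable, and so we now write

\displaystyle\int1\,d\mu=\infty

On the other hand, for f(x)=x neither the positive nor the negative part is integrable, and so that integral is still left undefined.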

And so now we find that any a.e. non-negative function — integrable or not — defines a measure by its indefinite integral. If f isn’t integrable, then we get an extended real-valued set function, but this doesn’t prevent it from being a measure. As a matter of terminology, we should point out that we don’t call a function “integrable” just because its integral is now defined to be \infty or -\infty. That term is still reserved for those functions whose indefinite integrals are totally finite, as above.

June 21, 2010 | Analysis, Measure Theory

An Alternate Approach to Integration

We can wrap up this introduction to the Lebesgue integral by outlining the alternate approach that commenter Cristi was referring to. We’ll do this from the perspective of our current track, and it should be clear how the alternative definitions would lead us to the same place.

Let f be a non-negative integrable function on a measure space (X,\mathcal{S},\mu). For every measurable set E\in\mathcal{S}, we define

\displaystyle a(E)=\inf\limits_{x\in E}f(x)

Also, for every finite, pairwise disjoint collection \mathcal{E}=\{E_1,\dots,E_n\}\subseteq\mathcal{S} of measurable sets we define

\displaystyle s(\mathcal{E})=\sum\limits_{i=1}^na(E_i)\mu(E_i)

We then assert that the supremum of all numbers s(\mathcal{E}) for all finite, pairwise disjoint collections \mathcal{E}\subseteq\mathcal{S} is equal to the integral of f:

\displaystyle\int f\,d\mu=\sup s(\mathcal{E})
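
For example, take Lebesgue measure on [0,1] and f(x)=x, and let \mathcal{E}_n be the partition of [0,1] into n equal subintervals E_i=\left[\frac{i-1}{n},\frac{i}{n}\right). Then a(E_i)=\frac{i-1}{n} and \mu(E_i)=\frac{1}{n}, so

\displaystyle s(\mathcal{E}_n)=\sum\limits_{i=1}^n\frac{i-1}{n}\cdot\frac{1}{n}=\frac{n-1}{2n}

which increases to \frac{1}{2}=\int f\,d\mu as n grows, and indeed the supremum over all such collections is exactly the integral.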

If f is simple, this is obvious. Indeed, if \mathcal{E} is the collection of sets used to write f as a finite linear combination of characteristic functions, then s(\mathcal{E}) is exactly the integral of f by definition. On the other hand, for any finite, pairwise disjoint collection \mathcal{E}=\{E_1,\dots,E_n\} we have a(E_i)\leq f(x) for every x\in E_i, so that a(E_i)\mu(E_i)\leq\int_{E_i}f\,d\mu; summing over i, and using the fact that the E_i are disjoint and f is non-negative, shows that s(\mathcal{E}) can never exceed the integral of f.

On the other hand, for a general integrable function f we consider a non-negative simple g with g\leq f, and we let \mathcal{E} be the sets used to express g as a finite linear combination of characteristic functions:

\displaystyle g=\sum\limits_{i=1}^n\alpha_i\chi_{E_i}

We see that

\displaystyle\int g\,d\mu=\sum\limits_{i=1}^n\alpha_i\mu(E_i)\leq\sum\limits_{i=1}^n a(E_i)\mu(E_i)=s(\mathcal{E})

If \{g_n\} is an increasing sequence of non-negative simple functions converging pointwise a.e. to f, then

\displaystyle\int f\,d\mu=\lim\limits_{n\to\infty}\int g_n\,d\mu\leq\sup s(\mathcal{E})

where we use the definition of integrability, and we take the supremum over finite, pairwise disjoint collections \mathcal{E}. But it’s also clear that for every \mathcal{E} we have

\displaystyle s(\mathcal{E})=\int h\,d\mu\leq\int f\,d\mu

for some non-negative simple h\leq f.

So the alternate approach proceeds by defining the integral of a simple function as before, and defining general integrals of non-negative functions by the supremum above. General integrable functions are then handled by using their positive and negative parts. From there you can prove the monotone convergence theorem, followed by Fatou’s lemma, and then the Fatou-Lebesgue theorem, which leads to the dominated convergence theorem, and we’re pretty much back where we started.

June 18, 2010 | Analysis, Measure Theory

The Fatou-Lebesgue Theorem

Now we turn to the Fatou-Lebesgue theorem. Let \{f_n\} be a sequence of integrable functions (this time we do not assume they are non-negative) and let g be an integrable function which dominates this sequence in absolute value. That is, we have \lvert f_n(x)\rvert\leq g(x) a.e. for all n. We define the functions

\displaystyle\begin{aligned}f_*(x)&=\liminf\limits_{n\to\infty} f_n(x)\\f^*(x)&=\limsup\limits_{n\to\infty} f_n(x)\end{aligned}

These two functions are integrable, and we have the sequence of inequalities

\displaystyle\int f_*\,d\mu\leq\liminf\limits_{n\to\infty}\int f_n\,d\mu\leq\limsup\limits_{n\to\infty}\int f_n\,d\mu\leq\int f^*\,d\mu

Again, this is often stated for a sequence of measurable functions, but the dominated convergence theorem allows us to immediately move to the integrable case. In fact, if the sequence \{f_n\} converges pointwise a.e., then f_*=f^* a.e., the chain of inequalities collapses, and we get exactly the dominated convergence theorem back again.
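
As an example where the outer inequalities are strict, work on [0,1] with Lebesgue measure and let f_n=\chi_{[0,1/2]} for even n and f_n=\chi_{[1/2,1]} for odd n, all dominated by g=\chi_{[0,1]}. Then f_*=0 a.e. and f^*=1 everywhere, while \int f_n\,d\mu=\frac{1}{2} for every n, so

\displaystyle\int f_*\,d\mu=0<\frac{1}{2}=\liminf\limits_{n\to\infty}\int f_n\,d\mu=\limsup\limits_{n\to\infty}\int f_n\,d\mu<1=\int f^*\,d\mu

Now, on to the proof of the asserted inequalities.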

Since g dominates the sequence \{f_n\}, the sequence \{g+f_n\} will be non-negative. Fatou’s lemma then tells us that

\displaystyle\begin{aligned}\int g\,d\mu+\int f_*\,d\mu&=\int g+f_*\,d\mu\\&=\int\liminf\limits_{n\to\infty}(g+f_n)\,d\mu\\&\leq\liminf\limits_{n\to\infty}\int g+f_n\,d\mu\\&=\int g\,d\mu+\liminf\limits_{n\to\infty}\int f_n\,d\mu\end{aligned}

Cancelling the integral of g we find the first asserted inequality. The second one is true by the definition of limits inferior and superior. The third one is essentially the same as the first, only using the non-negative sequence \{g-f_n\}.

June 17, 2010 | Analysis, Measure Theory