# The Unapologetic Mathematician

## Integrals are Additive Over Regions

Before I state today’s proposition, I need to define what I mean by saying that two sets in $\mathbb{R}^n$ are “nonoverlapping”. Intuitively, we might think that this means they have no intersection, but that’s not quite it. We’ll allow some intersection, but only at boundary points. Since the regions we’re interested in for integrals are Jordan measurable, their boundaries have zero Jordan content, and so changing things along these boundaries will make no difference to an integral.

Let $\left\{S_i\right\}_{i=1}^n$ be a collection of bounded regions in $\mathbb{R}^n$, so that any two of these regions are nonoverlapping. We define their union

$\displaystyle S=\bigcup\limits_{i=1}^nS_i$

and let $f:S\rightarrow\mathbb{R}$ be a bounded function defined on this union. Then $f$ is integrable on $S$ if and only if it’s integrable on each $S_i$, and we find

$\displaystyle\int\limits_Sf\,dx=\sum\limits_{i=1}^n\int\limits_{S_i}f\,dx$

Indeed, if $R$ is some $n$-dimensional interval containing $S$, then it will also contain each of the smaller regions $S_i$, and we can define

$\displaystyle\int\limits_Sf\,dx=\int\limits_Rf(x)\chi_S(x)\,dx$

and

$\displaystyle\int\limits_{S_i}f\,dx=\int\limits_Rf(x)\chi_{S_i}(x)\,dx$

We can use Lebesgue’s condition to establish our assertion about integrability. On the one hand, the discontinuities of $f$ in $S_i$ must be contained within those of $f$ in $S$, so if the latter set has measure zero then so must the former. On the other hand, the discontinuities of $f$ consist of those within each of the $S_i$, and maybe some along the boundaries. Since the boundaries have measure zero, and we assume that the discontinuities in each $S_i$ are of measure zero, their (finite) union will also have measure zero. And then the set of discontinuities of $f$ in $S$ has measure zero as well.

The inclusion-exclusion principle tells us that we can rewrite the characteristic function of $S$:

\displaystyle\begin{aligned}\chi_S&=\chi_{S_1\cup\dots\cup S_n}\\&=\sum\limits_{1\leq i\leq n}\chi_{S_i}-\sum\limits_{1\leq i<j\leq n}\chi_{S_i\cap S_j}+\sum\limits_{1\leq i<j<k\leq n}\chi_{S_i\cap S_j\cap S_k}-\dots+(-1)^{n-1}\chi_{S_1\cap\dots\cap S_n}\end{aligned}

We can put this into our integral and use the fact that integrals are additive with respect to finite sums in the integrand

\displaystyle\begin{aligned}\int\limits_Sf\,dx&=\int\limits_Rf(x)\chi_S(x)\,dx\\&=\sum\limits_{1\leq i\leq n}\int\limits_Rf(x)\chi_{S_i}(x)\,dx-\sum\limits_{1\leq i<j\leq n}\int\limits_Rf(x)\chi_{S_i\cap S_j}(x)\,dx+\dots\end{aligned}

But we assumed that all the $S_i$ are nonoverlapping, so any intersection of two or more of them must lie only along their boundaries. And since these boundaries all have Jordan content zero the integrals over them must come out to zero. We’re left with only the sum over each subregion, as we wanted.
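We can sanity-check this additivity numerically. Here is a minimal sketch (the regions, integrand, and grid size are all illustrative choices of mine): two triangles sharing the diagonal of the unit square are nonoverlapping, and midpoint Riemann sums with characteristic-function masks give the same answer whether we integrate over the union at once or over the pieces separately.

```python
import numpy as np

n = 400
h = 1.0 / n
xs = (np.arange(n) + 0.5) * h           # midpoints along each axis
X, Y = np.meshgrid(xs, xs, indexing="ij")

f = X * Y + 1.0                         # any bounded integrand will do

chi_S1 = (Y <= X).astype(float)         # lower triangle of the unit square
chi_S2 = (Y > X).astype(float)          # upper triangle; the regions meet only along the diagonal
chi_S = np.ones_like(X)                 # their union: the whole square

cell = h * h
int_S = (f * chi_S).sum() * cell
int_S1 = (f * chi_S1).sum() * cell
int_S2 = (f * chi_S2).sum() * cell

print(int_S, int_S1 + int_S2)           # the two values agree
```

The two triangle masks sum to the mask of the square at every grid point, which is exactly why the sums agree; the diagonal where the closed regions overlap has zero Jordan content and contributes nothing.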

December 30, 2009 Posted by | Analysis, Calculus | Leave a comment

## The Mean Value Theorem for Multiple Integrals

As in the single variable case, multiple integrals satisfy a mean value property.

First of all, we should note that, like one-dimensional Riemann-Stieltjes integrals with increasing integrators, integration preserves order. That is, if $f$ and $g$ are both integrable over a Jordan-measurable set $S$, and if $f(x)\leq g(x)$ at each point $x\in S$, then we have

$\displaystyle\int\limits_Sf(x)\,dx\leq\int\limits_Sg(x)\,dx$

This is a simple consequence of the definition of a multiple integral as the limit of Riemann sums, since every Riemann sum for $f$ will be no larger than the corresponding sum for $g$.

Now if $f$ and $g$ are integrable on $S$ and $g(x)\geq0$ for every $x\in S$, then we set $m=\inf f(S)$ and $M=\sup f(S)$ — the infimum and supremum of the values attained by $f$ on $S$. I assert that there is some $\lambda$ in the interval $[m,M]$ so that

$\displaystyle\int\limits_Sf(x)g(x)\,dx=\lambda\int\limits_Sg(x)\,dx$

In particular, we can set $g(x)=1$ and find

$\displaystyle mc(S)\leq\int\limits_Sf(x)\,dx\leq Mc(S)$

giving bounds on the integral in terms of the Jordan content of $S$. Incidentally, $g(x)\,dx$ here is serving a similar role to the integrator $d\alpha$ in the integral mean value theorem for Riemann-Stieltjes integrals.

Okay, so since $g(x)\geq0$ we have $mg(x)\leq f(x)g(x)\leq Mg(x)$ for every $x\in S$. Since integration preserves order, this yields

$\displaystyle m\int\limits_Sg(x)\,dx\leq\int\limits_Sf(x)g(x)\,dx\leq M\int\limits_Sg(x)\,dx$

If the integral of $g$ is zero, then our result automatically holds for any value of $\lambda$. Otherwise we can divide through by this integral and set

$\displaystyle\lambda=\frac{\displaystyle\int\limits_Sf(x)g(x)\,dx}{\displaystyle\int\limits_Sg(x)\,dx}$

which will be between $m$ and $M$.
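As a quick numerical illustration (the particular $f$ and weight $g$ here are illustrative; this is a sketch, not part of the proof), we can approximate both integrals with midpoint sums and watch $\lambda$ land between $m$ and $M$:

```python
import numpy as np

n = 500
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs, indexing="ij")

f = np.sin(X + Y)          # a bounded integrand on the unit square
g = X ** 2                 # a nonnegative weight

cell = 1.0 / n ** 2
int_fg = (f * g).sum() * cell
int_g = g.sum() * cell
lam = int_fg / int_g       # the mean value lambda

m, M = f.min(), f.max()
print(m <= lam <= M)       # True
```

Since $\lambda$ is a weighted average of values of $f$ with nonnegative weights, it cannot escape the interval $[m,M]$, which is the content of the theorem.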

One particularly useful case is when $S$ has Jordan content zero. In this case, we find that any integral over $S$ is itself automatically zero.

December 29, 2009 Posted by | Analysis, Calculus | 3 Comments

## Integrals Over More General Sets

To this point we’ve only discussed multiple integrals over $n$-dimensional intervals. But often we’re interested in more general regions, like circles or ellipsoids, or even rectangular solids that are tilted with respect to the coordinate axes. How can we handle integrating over these more general sets?

One attempt might be to fill up the region from inside. We can chop up regions into smaller pieces, each of which is an $n$-dimensional interval. But overall this requires an incredibly involved limiting process, and we’ll never get anything calculated that way.

Instead, we come at it from outside. Given a bounded region $S\subseteq\mathbb{R}^n$, we can put it into an $n$-dimensional interval $R$. Then if $f:S\rightarrow\mathbb{R}$ is a function, we can try to define some sort of integral of $f$ over $S$ in terms of its integral over $R$.

The obvious problem with $\int_Rf\,dx$ is that it includes all the points in $R$ that aren’t in $S$, and we don’t want to include the integral of $f$ over that region. Worse, what if $f$ has a big cluster of discontinuities within $R$ but outside of $S$? Clearly that shouldn’t make $f$ fail to be integrable over $S$. What we need is to mask off $S$, like a stencil or masking tape does when painting.

The mask we’ll use is the characteristic function of $S$, which I’ve mentioned before. I’ll actually go more deeply into them in a bit, but for now we’ll recall that the characteristic function of a set $S$ is written $\chi_S$, and it’s defined as

$\displaystyle\chi_S(x)=\left\{\begin{matrix}1&:x\in S\\{0}&:x\notin S\end{matrix}\right.$

Now look what happens when we multiply our function by this mask:

$\displaystyle f(x)\chi_S(x)=\left\{\begin{matrix}f(x)&:x\in S\\{0}&:x\notin S\end{matrix}\right.$

Now our function has been redefined outside $S$ by setting it equal to zero there. Then we proceed to define

$\displaystyle\int\limits_Sf(x)\,dx=\int\limits_Rf(x)\chi_S(x)\,dx$

The integral over $R$ should be the “integral” over $S$ plus the “integral” over the region $R\setminus S$ outside $S$. I put these in quotes because we haven’t really defined what these integrals mean (that’s what we’re trying to do!), but we can make reasonable assertions about properties that they should have, whatever they are.

Now $f(x)\chi_S(x)$ is zero on the outer region, so the second integral is zero. And $f(x)\chi_S(x)$ is just equal to $f(x)$ inside $S$, so when “integrating” over $S$ itself we may as well just drop the $\chi_S$ factor. This justifies the definition of integrating over $S$.
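As a concrete sketch of the masking idea (the region, integrand, and grid size are illustrative), take $S$ to be the unit disk inside $R=[-1,1]\times[-1,1]$ and $f=1$; the masked midpoint sum approximates the disk’s area:

```python
import numpy as np

n = 1000
h = 2.0 / n
xs = -1.0 + (np.arange(n) + 0.5) * h    # midpoints covering [-1, 1]
X, Y = np.meshgrid(xs, xs, indexing="ij")

chi_S = (X ** 2 + Y ** 2 <= 1.0).astype(float)   # the mask for the disk
f = np.ones_like(X)                               # f = 1, so we compute area

approx_area = (f * chi_S).sum() * h * h
print(approx_area)        # close to pi
```

The mask simply zeroes out every grid cell outside the disk, exactly as $\chi_S$ does in the definition above.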

It should be clear that this definition doesn’t depend on the interval $R$ at all. Indeed, if we have two different intervals $R_1$ and $R_2$ which both contain $S$, then any region contained in one but not the other must fall outside of $S$, where $f(x)\chi_S(x)$ is zero anyway, and so the two integrals agree.

Now, this definition isn’t without its problems, the clearest of which is that we’ve almost certainly introduced new discontinuities. All around the boundary of $S$, if $f$ wasn’t already zero it suddenly and discontinuously becomes zero when we apply our mask. This could cause trouble when trying to integrate the masked function over $R$. We must ask that $S$ be Jordan measurable, which happens if and only if the boundary $\partial S$ has zero Jordan content, and thus zero outer Lebesgue measure. Since the collection of new discontinuities is contained in this boundary, it will also have measure zero.

This leads us to an integrability criterion over a Jordan measurable set $S$: $f$ will be integrable over $S$ if and only if the discontinuities of $f$ in $S$ form a set of measure zero. Indeed, Lebesgue’s condition tells us that the discontinuities of $f(x)\chi_S(x)$ must have measure zero. These discontinuities either come from those of $f$ inside $S$, or from the characteristic function $\chi_S$ at the boundary $\partial S$. By assuming that $S$ is Jordan measurable, we force the second kind to have measure zero, and so the total collection of discontinuities will have measure zero and satisfy Lebesgue’s condition if and only if the discontinuities of $f$ inside $S$ do.

December 22, 2009 Posted by | Analysis, Calculus | 4 Comments

## Iterated Integrals IV

So we’ve established that as long as a double integral exists, we can use an iterated integral to evaluate it. What happens when the dimension of our space is even bigger?

In this case, we’re considering integrating an integrable function $f:R\rightarrow\mathbb{R}$ over some $n$-dimensional interval $R=[a^1,b^1]\times\dots\times[a^n,b^n]$. We want something like iterated integrals to allow us to evaluate this multiple integral. We’ll do this by peeling off a single integral from the outside and leaving an integral over an $n-1$-dimensional interval inside.

Specifically, we can project the interval $R$ onto the coordinate hyperplane defined by $x^k=0$ just by leaving the coordinates $x^i$ of each point $x\in R$ the same if $i\neq k$ and setting $x^k=0$. We’ll call the resulting interval

$\displaystyle R_k=[a^1,b^1]\times\dots\times\widehat{[a^k,b^k]}\times\dots\times[a^n,b^n]$

where the wide hat means that we just leave out that one factor in the product. We’ll also write $\hat{x}$ to mean the remaining coordinates on $R_k$.

Essentially, we want to integrate first over $R_k$, and then let $x^k$ run from $a^k$ to $b^k$. We have a collection of assertions that parallel those from the two-dimensional case

• $\displaystyle{\int\limits_-}_Rf(x)\,dx\leq{\int\limits_-}_{a^k}^{b^k}{\int\limits^-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k\leq{\int\limits^-}_{a^k}^{b^k}{\int\limits^-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k\leq{\int\limits^-}_Rf(x)\,dx$
• $\displaystyle{\int\limits_-}_Rf(x)\,dx\leq{\int\limits_-}_{a^k}^{b^k}{\int\limits_-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k\leq{\int\limits^-}_{a^k}^{b^k}{\int\limits_-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k\leq{\int\limits^-}_Rf(x)\,dx$
• If $\int_Rf(x)\,dx$ exists, then we have
$\displaystyle\int\limits_Rf(x)\,dx=\int\limits_{a^k}^{b^k}{\int\limits_-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k=\int\limits_{a^k}^{b^k}{\int\limits^-}_{R_k}f(\hat{x},x^k)\,d\hat{x}\,dx^k$

with a copy of these three for each index $k$ between ${1}$ and $n$. The proofs of these are pretty much identical to the proofs in the two-dimensional case, and so I’ll just skip them.

Anyhow, once we’ve picked one of the $n$ variables and split it off as the outermost integral, we’re left with an $n-1$-dimensional integral on the inside. We can pick any one of these variables and split it off, leaving an $n-2$-dimensional integral on the inside, and so on. For each of the $n!$ orderings of the original $n$ variables, we get a way of writing the $n$-dimensional integral over $R$ as a sequence of $n$ integrals, each over a one-dimensional interval. Now, we may find some of these iterated integrals easier to evaluate than others, but in principle, if each of the lower-dimensional integrals in the sequence exists it doesn’t matter which of the orderings we use.

So, for example, if we’re considering a bounded function $f$ defined on a three-dimensional interval $R=[a^1,b^1]\times[a^2,b^2]\times[a^3,b^3]$, we can write (up to) six different iterated integrals, assuming that all the integrals in sight exist.

\displaystyle\begin{aligned}\iiint\limits_Rf\,d(x,y,z)&=\int\limits_{a^3}^{b^3}\int\limits_{a^2}^{b^2}\int\limits_{a^1}^{b^1}f(x,y,z)\,dx\,dy\,dz=\int\limits_{a^2}^{b^2}\int\limits_{a^3}^{b^3}\int\limits_{a^1}^{b^1}f(x,y,z)\,dx\,dz\,dy\\&=\int\limits_{a^3}^{b^3}\int\limits_{a^1}^{b^1}\int\limits_{a^2}^{b^2}f(x,y,z)\,dy\,dx\,dz=\int\limits_{a^1}^{b^1}\int\limits_{a^3}^{b^3}\int\limits_{a^2}^{b^2}f(x,y,z)\,dy\,dz\,dx\\&=\int\limits_{a^2}^{b^2}\int\limits_{a^1}^{b^1}\int\limits_{a^3}^{b^3}f(x,y,z)\,dz\,dx\,dy=\int\limits_{a^1}^{b^1}\int\limits_{a^2}^{b^2}\int\limits_{a^3}^{b^3}f(x,y,z)\,dz\,dy\,dx\end{aligned}
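Numerically, peeling the variables off in different orders really does give the same answer. A small sketch with an illustrative integrand on $[0,1]^3$, where each one-dimensional integral is a midpoint sum:

```python
import numpy as np

n = 100
h = 1.0 / n
xs = (np.arange(n) + 0.5) * h
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
F = X * Y + Z                            # illustrative integrand on [0,1]^3

# peel off the variables in two of the six possible orders:
# dz innermost, then dy, then dx
order1 = ((F.sum(axis=2) * h).sum(axis=1) * h).sum() * h
# dx innermost, then dz, then dy
order2 = ((F.sum(axis=0) * h).sum(axis=1) * h).sum() * h

print(order1, order2)                    # both approximate 3/4
```

The exact value of the integral of $xy+z$ over the unit cube is $\frac{1}{4}+\frac{1}{2}=\frac{3}{4}$, and both orderings recover it.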

December 21, 2009 Posted by | Analysis, Calculus | 4 Comments

## Iterated Integrals III

I recently heard a characterization (if someone remembers the source, please let me know) of the situation in analysis as being that there are no theorems — only conjectures that don’t yet have counterexamples. Today’s counterexample is adapted from one in Apostol’s book, but it’s far simpler than his seems to be.

We might guess that we can always evaluate double integrals by iterated integrals as we’ve been discussing. After all, that’s exactly what we do in multivariable calculus courses as soon as we introduce iterated integrals, never looking back to all those messy double and triple Riemann sums again. Unfortunately, the existence of the iterated integrals — even if both of them exist and their values agree — is not enough to guarantee that the double integral exists. Today, we will see a counterexample.

Let $S$ be the set of points $(x,y)$ in the unit square $[0,1]\times[0,1]$ so that $x=\frac{m}{q}$ and $y=\frac{n}{q}$, where $\frac{m}{q}$ and $\frac{n}{q}$ are two fractions with the same denominator, each of which are in lowest terms. That is, it contains the point $\left(\frac{1}{3},\frac{2}{3}\right)$, but not the point $\left(\frac{2}{4},\frac{3}{4}\right)$, since when we write these latter two fractions in lowest terms they are no longer over a common denominator. We will consider the characteristic function $\chi_S$, which is ${1}$ on points in $S$ and ${0}$ elsewhere in the unit square.

First, I assert that both iterated integrals exist and have the same value. That is

$\displaystyle\int\limits_0^1\int\limits_0^1\chi_S(x,y)\,dx\,dy=0=\int\limits_0^1\int\limits_0^1\chi_S(x,y)\,dy\,dx$

Indeed, the set $S$ is symmetric between the two coordinates, so we only need to evaluate one of these iterated integrals and the other one will automatically have the same value.

If $y$ is an irrational number in $[0,1]$, then there is no $x$ at all so that $(x,y)\in S$. Thus we can easily calculate the inner integral

$\displaystyle\int\limits_0^1\chi_S(x,y)\,dx=\int\limits_0^10\,dx=0$

On the other hand, if $y$ is a rational number, we can write it in lowest terms as $y=\frac{n}{q}$. Then there are only finitely many points $x=\frac{m}{q}$ with the same denominator at all. Thus we can break the interval $[0,1]$ into a finite number of pieces, on each of which the characteristic function is zero except possibly at the endpoints; since changing a function at finitely many points doesn’t change its integral, we can calculate the inner integral

$\displaystyle\int\limits_0^1\chi_S(x,y)\,dx=\sum\limits_{i=1}^q\int\limits_\frac{i-1}{q}^\frac{i}{q}\chi_S\left(x,\frac{n}{q}\right)\,dx=\sum\limits_{i=1}^q\int\limits_\frac{i-1}{q}^\frac{i}{q}0\,dx=\sum\limits_{i=1}^q0=0$

And so we see that for any $y$ the inner integral evaluates to ${0}$. Then it’s easy to calculate the outer integral

$\displaystyle\int\limits_0^10\,dy=0$

and, as we said before, the other iterated integral also has the value ${0}$.

On the other hand, the double integral does not exist. Yes, $S$ is countable, and so it has measure zero. However, it’s also dense, which means $\chi_S$ is discontinuous everywhere in the unit square.

Saying that $S$ is dense in the square means that every neighborhood of every point of the square contains some point of $S$. Indeed, consider a point $(x,y)$ in the square and some radius $\epsilon$. Since the real numbers are Archimedean, we can pick some $N>\frac{2}{\epsilon}$, and as many people on Nick‘s Twitter experiment (remember to follow @DrMathochist!) reminded us, there are infinitely many prime numbers. Thus we can pick a prime $p>N>\frac{2}{\epsilon}$. Then we can round $x$ up to the next larger fraction of the form $\frac{m}{p}$, which will be in lowest terms unless $m=p$, in which case we round $x$ down to $\frac{p-1}{p}$. Similarly, we can round $y$ up (or down) to a fraction $\frac{n}{p}$ in lowest terms. This gives us a new point $\left(\frac{m}{p},\frac{n}{p}\right)\in S$, and we can calculate the distance

$\displaystyle\left\lVert\left(x-\frac{m}{p},y-\frac{n}{p}\right)\right\rVert=\sqrt{\left(x-\frac{m}{p}\right)^2+\left(y-\frac{n}{p}\right)^2}\leq\sqrt{\frac{2}{p^2}}<\frac{2}{p}<\epsilon$

So there is a point in $S$ within any radius $\epsilon$ of $(x,y)$.
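The rounding argument above is completely constructive, and we can sketch it in code (the helper names here are mine, not from the post): for every cell of a uniform $k\times k$ partition of the square, we produce a point of $S$ strictly inside it.

```python
from math import gcd

def next_prime(n):
    """Smallest prime strictly greater than n (trial division; fine for small n)."""
    q = n + 1
    while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
        q += 1
    return q

def point_of_S_in_cell(i, j, k):
    """A point (m/p, n/p) of S strictly inside [i/k, (i+1)/k] x [j/k, (j+1)/k]."""
    p = next_prime(2 * k)
    # round each lower edge up to the next multiple of 1/p; since the cell
    # is more than two multiples of 1/p wide, we stay below the upper edge,
    # and 1 <= m, n < p with p prime forces lowest terms automatically
    m = (i * p) // k + 1
    n = (j * p) // k + 1
    return m, n, p

k = 10
for i in range(k):
    for j in range(k):
        m, n, p = point_of_S_in_cell(i, j, k)
        assert gcd(m, p) == gcd(n, p) == 1        # same prime denominator p
        assert i / k < m / p < (i + 1) / k        # strictly inside the cell
        assert j / k < n / p < (j + 1) / k
print("every cell of the 10x10 partition meets S")
```

This is precisely why the upper sums below all come out to ${1}$: no subrectangle of any partition can avoid $S$.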

But now when we try to set up the upper and lower integrals to check Riemann’s condition we find that every subinterval of any partition $P$ must contain some points in $S$ and some points not in $S$. The points within $S$ tell us that the upper sum gets a sample value of ${1}$ for each subinterval, giving a total upper sum of ${1}$. Meanwhile the points outside of $S$ tell us that the lower sum gets a sample value of ${0}$ for each subinterval, giving a total lower sum of ${0}$. Clearly Riemann’s condition fails to hold, and thus the double integral

$\displaystyle\iint\limits_{[0,1]\times[0,1]}\chi_S(x,y)\,d(x,y)$

fails to exist, despite the iterated integrals existing and agreeing.

December 18, 2009 Posted by | Analysis, Calculus | 3 Comments

## Iterated Integrals II

Let’s get to proving the assertions we made last time, starting with

$\displaystyle{\int\limits_-}_Rf(x,y)\,d(x,y)\leq{\int\limits_-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_Rf(x,y)\,d(x,y)$

where $f$ is a bounded function defined on the rectangle $R=[a,b]\times[c,d]$.

We can start by defining

$\displaystyle F(x)={\int\limits^-}_c^df(x,y)\,dy$

And we easily see that $\lvert F(x)\rvert\leq M(d-c)$, where $M$ is the supremum of $\lvert f\rvert$ on the rectangle $R$, so this is a bounded function as well. Thus the upper integral

$\displaystyle\overline{I}={\int\limits^-}_a^bF(x)\,dx={\int\limits^-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx$

and the lower integral

$\displaystyle\underline{I}={\int\limits_-}_a^bF(x)\,dx={\int\limits_-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx$

are both well-defined.

Now if $P_x=\{x_0,\dots,x_m\}$ is a partition of $[a,b]$, and $P_y=\{y_0,\dots,y_n\}$ is a partition of $[c,d]$, then $P=P_x\times P_y$ is a partition of $R$ into $mn$ subrectangles $R_{ij}$. We will define

\displaystyle\begin{aligned}\overline{I}_{ij}&={\int\limits^-}_{x_{i-1}}^{x_i}{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\\\underline{I}_{ij}&={\int\limits_-}_{x_{i-1}}^{x_i}{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\end{aligned}

Clearly, we have

$\displaystyle{\int\limits^-}_c^df(x,y)\,dy=\sum\limits_{j=1}^n{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy$

and so we find

\displaystyle\begin{aligned}{\int\limits^-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx&={\int\limits^-}_a^b\sum\limits_{j=1}^n{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\\&\leq\sum\limits_{j=1}^n{\int\limits^-}_a^b{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\\&=\sum\limits_{j=1}^n\sum\limits_{i=1}^m{\int\limits^-}_{x_{i-1}}^{x_i}{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\end{aligned}

That is

$\displaystyle\overline{I}\leq\sum\limits_{j=1}^n\sum\limits_{i=1}^m\overline{I}_{ij}$

and, similarly

$\displaystyle\underline{I}\geq\sum\limits_{j=1}^n\sum\limits_{i=1}^m\underline{I}_{ij}$

We also define $m_{ij}$ and $M_{ij}$ to be the infimum and supremum of $f$ over the rectangle $R_{ij}$, which gives us the inequalities

$\displaystyle m_{ij}(y_j-y_{j-1})\leq{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\leq M_{ij}(y_j-y_{j-1})$

and from here we find

$\displaystyle m_{ij}\mathrm{vol}(R_{ij})\leq{\int\limits_-}_{x_{i-1}}^{x_i}{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\leq{\int\limits^-}_{x_{i-1}}^{x_i}{\int\limits^-}_{y_{j-1}}^{y_j}f(x,y)\,dy\,dx\leq M_{ij}\mathrm{vol}(R_{ij})$

Summing on both $i$ and $j$, and using the above inequalities, we get

$\displaystyle L_P(f)\leq\underline{I}\leq\overline{I}\leq U_P(f)$

and since this holds for all partitions $P$, the assertion that we’re trying to prove follows.

The second assertion from last time can be proven similarly, just replacing $F(x)$ by the lower integral over $[c,d]$. And then the third and fourth assertions are just the same, but interchanging the roles of $[a,b]$ and $[c,d]$. Finally, the last assertion is a consequence of the first four. Indeed, if the integral over $R$ exists, then the upper and lower integrals are equal, which collapses all of the inequalities into equalities.

December 17, 2009 Posted by | Analysis, Calculus | 1 Comment

## Iterated Integrals I

We may remember from a multivariable calculus class that we can evaluate multiple integrals by using iterated integrals. For example, if $f:R\rightarrow\mathbb{R}^{\geq0}$ is a continuous, nonnegative function on a two-dimensional rectangle $R=[a,b]\times[c,d]$ then the integral

$\displaystyle\iint\limits_Rf(x,y)\,d(x,y)$

measures the volume contained between the graph $z=f(x,y)$ of the function and the $x$-$y$ plane within the rectangle. If we fix some constant $\hat{y}$ between $c$ and $d$ we can calculate the single integral

$\displaystyle\int\limits_a^bf(x,\hat{y})\,dx$

which describes the area that the plane $y=\hat{y}$ cuts out of this volume. It exists because the integrand is continuous as a function of $x$. In such classes, we make the reasonable assumption that as we vary $\hat{y}$ this area varies continuously. This gives us a continuous function on $[c,d]$, which will then be integrable:

$\displaystyle\int\limits_c^d\left(\int\limits_a^bf(x,y)\,dx\right)\,dy$

This is an “iterated integral”, since we perform more than one integral in sequence. We usually leave out the big parens and trust in the notation to tell us when the inner integral is closed. Our handwaving argument then justifies the belief that this iterated integral is the same as the double integral above. And this is true:

$\displaystyle\iint\limits_Rf(x,y)\,d(x,y)=\int\limits_c^d\int\limits_a^bf(x,y)\,dx\,dy$

but we haven’t really proven it.
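Before worrying about the general case, here is a quick numerical sanity check of this equality for a continuous integrand (an illustrative choice of mine): the two-dimensional midpoint sum and the iterated midpoint sums are literally the same terms regrouped, and both approach the exact value.

```python
import numpy as np

n = 300
a, b, c, d = 0.0, 1.0, 0.0, 2.0
xs = a + (np.arange(n) + 0.5) * (b - a) / n
ys = c + (np.arange(n) + 0.5) * (d - c) / n
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = np.exp(-X) * np.cos(Y)              # a continuous integrand on R

cell = (b - a) * (d - c) / n ** 2
double_sum = F.sum() * cell             # the double integral, all at once
inner = F.sum(axis=0) * (b - a) / n     # inner integral in x, for each y
iterated = inner.sum() * (d - c) / n    # then the outer integral in y

print(double_sum, iterated)             # same terms, regrouped: equal
```

Here the exact value is $(1-e^{-1})\sin 2$, which both sums approximate.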

Besides, we’re interested in more general situations. What if, say, $f$ is discontinuous along the whole line $(x,\hat{y})$ for some fixed $c\leq\hat{y}\leq d$? This line can be contained in an arbitrarily thin rectangle, so it has outer Lebesgue measure zero in the rectangle $R$. If these are the only discontinuities, then $f$ is integrable on $R$, but we can’t follow the above prescription anymore, even if it were actually rigorous. We need some method of handling this sort of thing.

To this end, we have five assertions relating the upper and lower single and double integrals involving a function $f$ which is defined and bounded on the rectangle $R$ above. Unfortunately, our notation for upper and lower integrals gets a little cumbersome here, and the $\LaTeX$ support on WordPress isn’t the most elegant. Still, we soldier on and write

$\displaystyle{\int\limits_-}_a^bf(x)\,dx=\underline{I}_{[a,b]}(f)$

and similarly for upper integrals, and for lower and upper double integrals. Now, our assertions:

• $\displaystyle{\int\limits_-}_Rf(x,y)\,d(x,y)\leq{\int\limits_-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}_Rf(x,y)\,d(x,y)\leq{\int\limits_-}_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx\leq{\int\limits^-}_Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}_Rf(x,y)\,d(x,y)\leq{\int\limits_-}_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_Rf(x,y)\,d(x,y)$
• $\displaystyle{\int\limits_-}_Rf(x,y)\,d(x,y)\leq{\int\limits_-}_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy\leq{\int\limits^-}_Rf(x,y)\,d(x,y)$
• If $\int_Rf(x,y)\,d(x,y)$ exists, then we have
\displaystyle\begin{aligned}\int\limits_Rf(x,y)\,d(x,y)&=\int\limits_a^b{\int\limits_-}_c^df(x,y)\,dy\,dx=\int\limits_a^b{\int\limits^-}_c^df(x,y)\,dy\,dx\\&=\int\limits_c^d{\int\limits_-}_a^bf(x,y)\,dx\,dy=\int\limits_c^d{\int\limits^-}_a^bf(x,y)\,dx\,dy\end{aligned}

Okay, as ugly as all those are, they’re what we’ll prove next time.

December 16, 2009 Posted by | Analysis, Calculus | 10 Comments

## Lebesgue’s Condition

At last we come to Lebesgue’s condition for Riemann-integrability in terms of Lebesgue measure. It asserts, simply enough, that a bounded function $f:[a,b]\rightarrow\mathbb{R}$ defined on an $n$-dimensional interval $[a,b]$ is Riemann integrable on that interval if and only if the set $D$ of discontinuities of $f$ has measure zero. Our proof will proceed by way of our condition in terms of Jordan content.

As in our proof of this latter condition, we define

$\displaystyle J_\epsilon=\{x\in[a,b]\vert\omega_f(x)\geq\epsilon\}$

and by our earlier condition we know that $\overline{c}(J_\epsilon)=0$ for all $\epsilon>0$. In particular, it holds for $\epsilon=\frac{1}{n}$ for all natural numbers $n$.

If $x\in D$ is a point where $f$ is discontinuous, then the oscillation $\omega_f(x)$ must be nonzero, and so $\omega_f(x)>\frac{1}{n}$ for some $n$. That is

$\displaystyle D\subseteq\bigcup\limits_{n=1}^\infty J_{\frac{1}{n}}$

Since $\overline{c}(J_{\frac{1}{n}})=0$, we also have $\overline{m}(J_{\frac{1}{n}})=0$. And since a countable union of sets of measure zero has measure zero, we have $\overline{m}(D)=0$ as well.

Conversely, let’s assume that $\overline{m}(D)=0$. Given an $\epsilon>0$, we know that $J_\epsilon$ is a closed set contained in $D$, so $\overline{m}(J_\epsilon)\leq\overline{m}(D)=0$. Since $J_\epsilon$ is also bounded, it’s compact, and for compact sets outer measure zero implies Jordan content zero, so $\overline{c}(J_\epsilon)=0$. Since this is true for all $\epsilon$, the Jordan content condition holds, and $f$ is Riemann integrable.

December 15, 2009 Posted by | Analysis, Calculus | 4 Comments

## Jordan Content Integrability Condition

We wanted a better necessary and sufficient condition for integrability than Riemann’s condition, and now we can give a result halfway to our goal. We let $f:[a,b]\rightarrow\mathbb{R}$ be a bounded function defined on the $n$-dimensional interval $[a,b]$.

The condition hinges on defining a certain collection of sets. For every $\epsilon>0$ we define the set

$\displaystyle J_\epsilon=\left\{x\in[a,b]\vert\omega_f(x)\geq\epsilon\right\}$

of points where the oscillation is at least the threshold value $\epsilon$. The first thing to note about $J_\epsilon$ is that it’s a closed set. That is, it contains all its accumulation points. So let $x$ be such an accumulation point and assume that it isn’t in $J_\epsilon$, so $\omega_f(x)<\epsilon$. Then there must exist a neighborhood $N$ of $x$ so that $\Omega_f(N)<\epsilon$, and this means that $\omega_f(y)<\epsilon$ for any point $y\in N$, so none of these points can be in $J_\epsilon$. But then $x$ can’t be an accumulation point of $J_\epsilon$ after all, and so we must have $x\in J_\epsilon$.

And now for our condition. The function $f$ is integrable if and only if the outer Jordan content $\overline{c}(J_\epsilon)=0$ for every $\epsilon>0$.

We start by assuming that $\overline{c}(J_\epsilon)\neq0$ for some $\epsilon>0$, and we’ll show that Riemann’s condition can’t hold. Given a partition $P$ of $[a,b]$ we calculate the difference between the upper and lower sums

\displaystyle\begin{aligned}U_P(f)-L_P(f)&=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\left(\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\right)\mathrm{vol}(I_{i_1\dots i_n})\\&=A_1+A_2\geq A_1\end{aligned}

where $A_1$ is the part of the sum involving subintervals which contain points of $J_\epsilon$ and $A_2$ is the rest. The subintervals in $A_1$ have total volume $\overline{J}(P,J_\epsilon)\geq\overline{c}(J_\epsilon)$, and on these subintervals we must have

$\displaystyle\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\geq\epsilon$

because if the difference were less then the subinterval would be a neighborhood with oscillation less than $\epsilon$ and thus couldn’t contain any points in $J_\epsilon$. Thus we conclude that $A_1\geq\epsilon\overline{c}(J_\epsilon)$, and the difference between the upper and lower sums is at least as big as that. This happens no matter what partition we pick, and so the upper and lower integrals must also differ by at least this much, violating Riemann’s condition. Thus if the function is integrable, we must have $\overline{c}(J_\epsilon)=0$.

Conversely, take an arbitrary $\epsilon>0$ and assume that $\overline{c}(J_\epsilon)=0$. For this to hold, there must exist a partition $P_0$ so that $\overline{J}(P_0,J_\epsilon)<\epsilon$. In each of the subintervals not containing points of $J_\epsilon$, we have $\omega_f(x)<\epsilon$ for all $x$ in the subinterval. Then we know there exists a $\delta$ so that we can subdivide the subinterval into smaller subintervals each with a diameter less than $\delta$, and the oscillation on each of these subintervals will be less than $\epsilon$. We will call this refined partition $P_\epsilon$.

Now, if $P$ is finer than $P_\epsilon$, we can again write

\displaystyle\begin{aligned}U_P(f)-L_P(f)&=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\left(\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\right)\mathrm{vol}(I_{i_1\dots i_n})\\&=A_1+A_2\end{aligned}

where again $A_1$ contains the terms from subintervals containing points in $J_\epsilon$ and $A_2$ is the remainder. In all of these latter subintervals we know the difference between the maximum and minimum values of $f$ is less than $\epsilon$, and so

$\displaystyle A_2<\epsilon\mathrm{vol}([a,b])$

For $A_1$, on the other hand, we let $M$ and $m$ be the supremum and infimum of $f$ on all of $[a,b]$, and we find

$\displaystyle A_1\leq\overline{J}(P_0,J_\epsilon)(M-m)<\epsilon(M-m)$

Thus we conclude that

$\displaystyle U_P(f)-L_P(f)<\epsilon(M-m+\mathrm{vol}([a,b]))$

Since this inequality is valid for any $\epsilon>0$, we see that Riemann’s condition must hold.
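We can watch this condition at work numerically. A sketch (with an illustrative $f$ of my choosing): a unit jump along the line $x=\frac{1}{2}$ has Jordan content zero in the square, and the gap between the upper and lower sums over a uniform $k\times k$ partition shrinks like $\frac{1}{k}$.

```python
import numpy as np

def upper_lower_gap(k):
    edges = np.linspace(0.0, 1.0, k + 1)
    f = lambda x: np.where(x < 0.5, 0.0, 1.0)     # unit jump along x = 1/2
    gap = 0.0
    for i in range(k):
        xs = np.linspace(edges[i], edges[i + 1], 9)   # sample this x-interval
        osc = float(f(xs).max() - f(xs).min())        # sup - inf on the interval
        # f doesn't depend on y, so the whole column of k cells over this
        # x-interval contributes osc times its total area, which is 1/k
        gap += osc / k
    return gap

print(upper_lower_gap(10), upper_lower_gap(100))   # 0.1 0.01
```

Only the one column of cells meeting the jump contributes, and its total volume shrinks as the partition refines, exactly as in the proof.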

December 9, 2009 Posted by | Analysis, Calculus | 4 Comments

## From Local Oscillation to Neighborhoods

When we defined oscillation, we took a limit to find the oscillation “at a point”. This is how much the function $f$ changes due to its behavior within every neighborhood of a point, no matter how small. If the function has a jump discontinuity at $x$, for instance, it shows up as an oscillation in $\omega_f(x)$. We now want to investigate to what extent such localized oscillations contribute to the oscillation of $f$ over a spread-out neighborhood of a point.
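As a quick illustration of this local quantity (the step function and the sampling radius are illustrative choices, and sampling only estimates the true supremum and infimum), we can approximate $\omega_f(x)$ by computing $\Omega_f$ over a small sampled neighborhood:

```python
import numpy as np

def omega_estimate(f, x, r=1e-3, samples=10001):
    ts = np.linspace(x - r, x + r, samples)   # dense sample of the ball N(x; r)
    vals = f(ts)
    return float(vals.max() - vals.min())     # Omega_f over the sampled neighborhood

step = lambda t: np.where(t < 0.0, 0.0, 1.0)  # jump discontinuity at t = 0
print(omega_estimate(step, 0.0))   # 1.0: the jump shows up in omega_f(0)
print(omega_estimate(step, 0.3))   # 0.0: locally constant away from the jump
```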

To this end, let $f:X\rightarrow\mathbb{R}$ be some bounded function on a compact region $X\subseteq\mathbb{R}^n$. Given an $\epsilon>0$, assume that $\omega_f(x)<\epsilon$ for every point $x\in X$. Then there exists a $\delta>0$ so that for every closed neighborhood $N$ we have $\Omega_f(N)<\epsilon$ whenever the diameter $d(N)$ is less than $\delta$. The diameter, incidentally, is defined by

$\displaystyle d(N)=\sup\limits_{x,y\in N}\left\{d(x,y)\right\}$

in a metric space with distance function $d$. That is, it’s the supremum of the distance between any two points in $N$.
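For a finite set of points the supremum is just a maximum over pairs, and the definition translates directly into code (a small sketch, with names of my own choosing):

```python
import numpy as np
from itertools import combinations

def diameter(points):
    """Maximum pairwise Euclidean distance over a finite set of points."""
    pts = [np.asarray(p, dtype=float) for p in points]
    return max(
        (float(np.linalg.norm(p - q)) for p, q in combinations(pts, 2)),
        default=0.0,   # a single point (or empty set) has diameter 0
    )

corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(diameter(corners))   # the unit square's diameter is its diagonal, sqrt(2)
```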

Anyhow, for each $x$ we have some metric ball $N_x=N(x;\delta_x)$ so that

$\displaystyle\Omega_f(N_x\cap X)<\omega_f(x)+\left(\epsilon-\omega_f(x)\right)=\epsilon$

because by picking a small enough neighborhood of $x$ we can bring the oscillation over the neighborhood within any positive distance of $\omega_f(x)$. This is where we use the assumption that $\epsilon-\omega_f(x)>0$.

The collection of all the half-size balls $N\left(x;\frac{\delta_x}{2}\right)$ forms an open cover of $X$. Thus, since $X$ is compact, we have a finite subcover. That is, the half-size balls at some finite collection of points $x_i$ still covers $X$. We let $\delta$ be the smallest of these radii $\frac{\delta_{x_i}}{2}$.

Now if $N$ is some closed neighborhood with diameter less than $\delta$, it will be partially covered by at least one of these half-size balls, say $N\left(x_p;\frac{\delta_{x_p}}{2}\right)$. The corresponding full-size ball $N_{x_p}$ then fully covers $N$. Further, we chose this ball so that $\Omega_f(N_{x_p}\cap X)<\epsilon$, and so we have

$\displaystyle\Omega_f(N)\leq\Omega_f(N_{x_p}\cap X)<\epsilon$

and we’re done.

December 8, 2009 Posted by | Analysis, Calculus | 7 Comments