The Unapologetic Mathematician

Mathematics for the interested outsider

Lebesgue’s Condition

At last we come to Lebesgue’s condition for Riemann integrability in terms of Lebesgue measure. It asserts, simply enough, that a bounded function f:[a,b]\rightarrow\mathbb{R} defined on an n-dimensional interval [a,b] is Riemann integrable on that interval if and only if the set D of discontinuities of f has measure zero. Our proof will proceed by way of our condition in terms of Jordan content.

As in our proof of this latter condition, we define

\displaystyle J_\epsilon=\{x\in[a,b]\vert\omega_f(x)\geq\epsilon\}

and by our earlier condition we know that \overline{c}(J_\epsilon)=0 for all \epsilon>0. In particular, it holds for \epsilon=\frac{1}{k} for every natural number k (we switch to the index k, since n already denotes the dimension).

If x\in D is a point where f is discontinuous, then the oscillation \omega_f(x) must be nonzero, and so \omega_f(x)\geq\frac{1}{k} for some k. That is,

\displaystyle D\subseteq\bigcup\limits_{k=1}^\infty J_{\frac{1}{k}}

Since \overline{c}(J_{\frac{1}{k}})=0, we also have \overline{m}(J_{\frac{1}{k}})=0, and since a countable union of sets of measure zero again has measure zero, we find \overline{m}(D)=0 as well.

Conversely, let’s assume that \overline{m}(D)=0. Given an \epsilon>0, we know that J_\epsilon is a closed subset of the bounded interval [a,b], and is thus compact; it is also contained in D. From our comparison of content and measure on compact sets, we conclude that \overline{c}(J_\epsilon)\leq\overline{m}(D)=0. Since this is true for all \epsilon, the Jordan content condition holds, and f is Riemann integrable.
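Two classic examples, not from this post but standard in the literature, show how sharp the condition is:

```latex
% Dirichlet's function: discontinuous at every point of [0,1], so
% D = [0,1] and \overline{m}(D) = 1 \neq 0: not Riemann integrable.
f(x)=\begin{cases}1&x\in\mathbb{Q}\\0&x\notin\mathbb{Q}\end{cases}
% Thomae's function: continuous at each irrational, discontinuous at each
% rational, so D = \mathbb{Q}\cap[0,1] is countable and has measure zero:
% g is Riemann integrable (and its integral is 0).
g(x)=\begin{cases}\frac{1}{q}&x=\frac{p}{q}\text{ in lowest terms, }q>0\\0&x\notin\mathbb{Q}\end{cases}
```

Note that g is discontinuous on a dense set, yet still integrable; density of the discontinuities is irrelevant, only their measure matters.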


December 15, 2009 Posted by | Analysis, Calculus | 4 Comments

Some Sets of Measure Zero

Here’s a useful little tool:

Let F=\{F_1,F_2,\dots\} be a countable collection of sets of measure zero in \mathbb{R}^n. That is, \overline{m}(F_k)=0 for all k. We define S to be the union

\displaystyle S=\bigcup\limits_{k=1}^\infty F_k

Then it turns out that \overline{m}(S)=0 as well.

To see this, pick some \epsilon>0. For each set F_k we can pick a Lebesgue covering C_k of F_k so that \mathrm{vol}(C_k)<\frac{\epsilon}{2^k}. We can throw all the intervals in each of the C_k together into one big collection C, which will be a Lebesgue covering of all of S. Indeed, the union of a countable collection of countable sets is still countable. We calculate the volume of this covering:

\displaystyle\mathrm{vol}(C)\leq\sum\limits_{k=1}^\infty\mathrm{vol}(C_k)<\sum\limits_{k=1}^\infty\frac{\epsilon}{2^k}=\epsilon
where the final summation converges because it’s a geometric series with initial term \frac{\epsilon}{2} and ratio \frac{1}{2}. This implies that \overline{m}(S)=0.

As an example, a set consisting of a single point has measure zero because we can put it into an arbitrarily small open box. The result above then says that any countable collection of points in \mathbb{R}^n also has measure zero. For instance, the collection of rational numbers in \mathbb{R}^1 is countable (as Kate mentioned in passing recently), and so it has measure zero. The result is useful because otherwise it might be difficult to imagine how to come up with a Lebesgue covering of all the rationals with arbitrarily small volume.
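The covering trick is easy to make concrete. Here is a small sketch (my own illustration, not from the post) that builds a Lebesgue covering of the first several rationals in (0,1), giving the k-th one an open interval of length \epsilon/2^k; exact rational arithmetic makes the total-volume bound easy to verify:

```python
from fractions import Fraction
from math import gcd

def first_rationals(count):
    """List the first `count` rationals in (0,1), ordered by denominator."""
    out, q = [], 2
    while len(out) < count:
        for p in range(1, q):
            if gcd(p, q) == 1:
                out.append(Fraction(p, q))
                if len(out) == count:
                    break
        q += 1
    return out

def lebesgue_cover(rationals, eps):
    """Open interval of length eps/2^k centered on the k-th rational (k = 1, 2, ...)."""
    return [(r - eps / 2 ** (k + 1), r + eps / 2 ** (k + 1))
            for k, r in enumerate(rationals, start=1)]
```

The total volume is at most \sum_k\epsilon/2^k=\epsilon, no matter how many rationals we cover, which is exactly the geometric-series estimate above.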

December 14, 2009 Posted by | Analysis, Measure Theory | 4 Comments

Outer Lebesgue Measure and Content

Before I begin, I’d like to mention something in passing about Lebesgue measure. It’s pronounced “luh-bayg”. The “e” is a long “a”, the “s” is completely silent, and the “gue” is like in “analogue”. Moving on…

There is, as we might expect, a relationship between outer Lebesgue measure and Jordan content, some aspects of which we will flesh out now.

First off, if S\subseteq\mathbb{R}^n is a bounded subset of n-dimensional Euclidean space, then we have \overline{m}(S)\leq\overline{c}(S). Indeed, if it’s bounded, then we can put it into a box [a,b], and choose a partition P of this box. List out the n-dimensional intervals of P which contain points of \mathrm{Cl}(S) as I_k=[a_k,b_k] for 1\leq k\leq m. Then by definition we have

\displaystyle\overline{J}(P,S)=\sum\limits_{k=1}^m\mathrm{vol}(I_k)
Now given an \epsilon>0, enlarge each I_k slightly to an open n-dimensional interval C_k containing it, with the enlargement small enough that \mathrm{vol}(C_k)<\mathrm{vol}(I_k)+\frac{\epsilon}{m}. These form a Lebesgue covering C of S for which

\displaystyle\mathrm{vol}(C)=\sum\limits_{k=1}^m\mathrm{vol}(C_k)<\sum\limits_{k=1}^m\left(\mathrm{vol}(I_k)+\frac{\epsilon}{m}\right)=\overline{J}(P,S)+\epsilon
Thus \overline{m}(S)\leq\overline{J}(P,S)+\epsilon, and passing to the infimum we find \overline{m}(S)\leq\overline{c}(S)+\epsilon. Since \epsilon is arbitrary, we have \overline{m}(S)\leq\overline{c}(S).

Secondly, if S\subseteq\mathbb{R}^n is bounded, and T\subseteq S is a compact subset, then \overline{c}(T)\leq\overline{m}(S). Start by putting S into a box [a,b], and take some \epsilon>0. We can find a Lebesgue covering C of S so that \mathrm{vol}(C)<\overline{m}(S)+\epsilon, and this will also cover T. Since T is compact, we can pick out a finite collection C' of open intervals which still manage to cover T. Finally, we can choose a partition P of [a,b] so that the corners of each interval in C' are partition points. Given all of these choices, we find

\displaystyle\overline{c}(T)\leq\overline{J}(P,T)\leq\mathrm{vol}(C')\leq\mathrm{vol}(C)<\overline{m}(S)+\epsilon
And since \epsilon is arbitrary we conclude \overline{c}(T)\leq\overline{m}(S).

Finally, putting these two results together we can see that if T\subseteq\mathbb{R}^n is a compact set, then \overline{c}(T)=\overline{m}(T).

December 11, 2009 Posted by | Analysis, Measure Theory | 8 Comments

Outer Lebesgue Measure

We’re still not quite to a very usable condition to determine integrability, so we need to further refine the idea of Jordan content. We want to get away from two restrictions in our definition of Jordan content.

First, we have some explicit box within which we make our calculations. We showed the choice of such a box is irrelevant, but it’s still inelegant to have to make the choice in the first place. It’s also annoying to have to deal with all the subintervals in the box that aren’t involved in the set at all. More importantly, we’re restricted to cutting the box up into a finite number of pieces. We can gain some flexibility if we allow a countably infinite number of pieces, and allow ourselves to take limits. Limits will also allow us to avoid having to take the closure of the set we’re really interested in. We can still deal with boundary points, because we can get them as limits of points in open sets.

Okay, so let S\subseteq\mathbb{R}^n be a set in n-dimensional space. A Lebesgue covering of S is a countable collection of open n-dimensional intervals C=\{I_1,I_2,\dots\} so that each point of S lies in at least one of the intervals. If \mathrm{vol}(I_k) is the n-dimensional volume of the kth interval, then the n-dimensional volume of the cover is defined as

\displaystyle\mathrm{vol}(C)=\sum\limits_{k=1}^\infty\mathrm{vol}(I_k)
as long as this infinite series converges, and define it to be +\infty if the series diverges. We then define the outer Lebesgue measure as the infimum

\displaystyle\overline{m}(S)=\inf\limits_{C}\left\{\mathrm{vol}(C)\right\}
over all Lebesgue covers C of S. This may seem odd, since two intervals in a Lebesgue cover may overlap, and so that volume ends up getting counted twice, but it’s supposed to be an overestimate of the “volume” of S, and in practice more refined Lebesgue covers can shrink such overlaps down arbitrarily small.

We also allow for the possibility that there may be only finitely many intervals in our cover. In this case, the infinite series above is simply a finite sum. For example, if S is bounded, we can put it into some large enough interval [a,b], which is then a Lebesgue cover itself. We then find that 0\leq\overline{m}(S)\leq\mathrm{vol}([a,b]).

We’re most concerned with the case when \overline{m}(S)=0, and we then say that S is a set of measure zero. There is a corresponding notion of inner Lebesgue measure, which we will not describe (yet); the inner measure is always nonnegative and never greater than the outer measure. Thus if the outer measure is zero, then the inner measure must be zero as well.

It should also be clear, by the way, that if S\subseteq T then we have the inequality \overline{m}(S)\leq\overline{m}(T).

December 10, 2009 Posted by | Analysis, Measure Theory | 6 Comments

Jordan Content Integrability Condition

We wanted a better necessary and sufficient condition for integrability than Riemann’s condition, and now we can give a result halfway to our goal. We let f:[a,b]\rightarrow\mathbb{R} be a bounded function defined on the n-dimensional interval [a,b].

The condition hinges on defining a certain collection of sets. For every \epsilon>0 we define the set

\displaystyle J_\epsilon=\left\{x\in[a,b]\vert\omega_f(x)\geq\epsilon\right\}

of points where the oscillation is at least the threshold value \epsilon. The first thing to note about J_\epsilon is that it’s a closed set. That is, it should contain all its accumulation points. So let x be such an accumulation point and suppose that it isn’t in J_\epsilon, so \omega_f(x)<\epsilon. Then there must exist a neighborhood N of x so that \Omega_f(N)<\epsilon, and this means that \omega_f(y)<\epsilon for any point y\in N, so none of these points can be in J_\epsilon. But then x can’t be an accumulation point of J_\epsilon after all, and this contradiction shows that we must have x\in J_\epsilon.

And now for our condition. The function f is integrable if and only if the upper Jordan content \overline{c}(J_\epsilon)=0 for every \epsilon>0.

We start by assuming that \overline{c}(J_\epsilon)\neq0 for some \epsilon>0, and we’ll show that Riemann’s condition can’t hold. Given a partition P of [a,b] we calculate the difference between the upper and lower sums

\displaystyle\begin{aligned}U_P(f)-L_P(f)&=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\left(\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\right)\mathrm{vol}(I_{i_1\dots i_n})\\&=A_1+A_2\geq A_1\end{aligned}

where A_1 is the part of the sum involving subintervals which contain points of J_\epsilon and A_2 is the rest. The intervals in A_1 have total volume \overline{J}(P,J_\epsilon)\geq\overline{c}(J_\epsilon), and in these intervals we must have

\displaystyle\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\geq\epsilon

because if the difference were less, then the subinterval would be a neighborhood with oscillation less than \epsilon, and thus couldn’t contain any points in J_\epsilon. Thus we conclude that A_1\geq\epsilon\overline{c}(J_\epsilon), and the difference between the upper and lower sums is at least as big as that. This happens no matter what partition we pick, and so the upper and lower integrals must also differ by at least this much, violating Riemann’s condition. Thus if the function is integrable, we must have \overline{c}(J_\epsilon)=0.

Conversely, take an arbitrary \epsilon>0 and assume that \overline{c}(J_\epsilon)=0. For this to hold, there must exist a partition P_0 so that \overline{J}(P_0,J_\epsilon)<\epsilon. In each of the subintervals not containing points of J_\epsilon, we have \omega_f(x)<\epsilon for all x in the subinterval. Then we know there exists a \delta so that we can subdivide the subinterval into smaller subintervals each with a diameter less than \delta, and the oscillation on each of these subintervals will be less than \epsilon. We will call this refined partition P_\epsilon.

Now, if P is finer than P_\epsilon, we can again write

\displaystyle\begin{aligned}U_P(f)-L_P(f)&=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\left(\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}-\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\right)\mathrm{vol}(I_{i_1\dots i_n})\\&=A_1+A_2\end{aligned}

where again A_1 contains the terms from subintervals containing points in J_\epsilon and A_2 is the remainder. In all of these latter subintervals we know the difference between the maximum and minimum values of f is less than \epsilon, and so

\displaystyle A_2<\epsilon\mathrm{vol}([a,b])

For A_1, on the other hand, we let M and m be the supremum and infimum of f on all of [a,b], and we find

\displaystyle A_1\leq\overline{J}(P_0,J_\epsilon)(M-m)<\epsilon(M-m)

Thus we conclude that

\displaystyle U_P(f)-L_P(f)=A_1+A_2<\epsilon(M-m)+\epsilon\mathrm{vol}([a,b])=\epsilon\left(M-m+\mathrm{vol}([a,b])\right)
Since this inequality is valid for any \epsilon>0, we see that Riemann’s condition must hold.
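To see the sets J_\epsilon at work numerically, here is a sketch of my own (with maxima and minima estimated from finitely many sample points, so it is only an approximation): for a step function on [0,1], J_\epsilon is the single point \frac{1}{2} for any \epsilon below the jump, and the difference of upper and lower sums is just the total volume of the subintervals meeting that point, which shrinks as the partition refines:

```python
def f(x):
    """A step function: its only discontinuity is at x = 1/2."""
    return 0.0 if x < 0.5 else 1.0

def upper_lower_gap(m, sub=50):
    """Approximate U_P(f) - L_P(f) on [0,1] for the uniform partition into m
    subintervals, estimating max and min on each subinterval from sub+1 samples."""
    gap = 0.0
    for i in range(m):
        samples = [f((i * sub + t) / (sub * m)) for t in range(sub + 1)]
        gap += (max(samples) - min(samples)) / m  # each subinterval has volume 1/m
    return gap
```

With m=10 the gap is 1/10 (one subinterval of width 1/10 meets the jump); with m=1000 it is 1/1000. The gap never reaches zero for a fixed partition, but it can be made arbitrarily small, which is exactly Riemann's condition.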

December 9, 2009 Posted by | Analysis, Calculus | 4 Comments

From Local Oscillation to Neighborhoods

When we defined oscillation, we took a limit to find the oscillation “at a point”. This is how much the function f changes due to its behavior within every neighborhood of a point, no matter how small. If the function has a jump discontinuity at x, for instance, it shows up as an oscillation in \omega_f(x). We now want to investigate to what extent such localized oscillations contribute to the oscillation of f over a spread-out neighborhood of a point.

To this end, let f:X\rightarrow\mathbb{R} be some bounded function on a compact region X\subseteq\mathbb{R}^n. Given an \epsilon>0, assume that \omega_f(x)<\epsilon for every point x\in X. Then there exists a \delta>0 so that for every closed neighborhood N we have \Omega_f(N)<\epsilon whenever the diameter d(N) is less than \delta. The diameter, incidentally, is defined by

\displaystyle d(N)=\sup\limits_{x,y\in N}\left\{d(x,y)\right\}

in a metric space with distance function d. That is, it’s the supremum of the distance between any two points in N.
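For a finite set of points the supremum is just a maximum, so the diameter can be computed directly; a small sketch (illustration only):

```python
from itertools import combinations
from math import dist

def diameter(points):
    """Diameter of a finite point set: the largest pairwise Euclidean distance.
    (For a single point, or the empty set, the diameter is 0.)"""
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)
```

For instance, the four corners of the unit square have diameter \sqrt{2}, realized by either diagonal.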

Anyhow, for each x we have some metric ball N_x=N(x;\delta_x) so that

\displaystyle\Omega_f(N_x\cap X)<\omega_f(x)+\left(\epsilon-\omega_f(x)\right)=\epsilon

because by picking a small enough neighborhood of x we can bring the oscillation over the neighborhood within any positive distance of \omega_f(x). This is where we use the assumption that \epsilon-\omega_f(x)>0.

The collection of all the half-size balls N\left(x;\frac{\delta_x}{2}\right) forms an open cover of X. Thus, since X is compact, we have a finite subcover. That is, the half-size balls at some finite collection of points x_i still cover X. We let \delta be the smallest of these radii \frac{\delta_{x_i}}{2}.

Now if N is some closed neighborhood with diameter less than \delta, it will be partially covered by at least one of these half-size balls, say N\left(x_p;\frac{\delta_{x_p}}{2}\right). The corresponding full-size ball N_{x_p} then fully covers N: any point of N lies within \delta\leq\frac{\delta_{x_p}}{2} of a point of the half-size ball, and thus within \delta_{x_p} of x_p. Further, we chose this ball so that \Omega_f(N_{x_p}\cap X)<\epsilon, and so we have

\displaystyle\Omega_f(N)\leq\Omega_f(N_{x_p}\cap X)<\epsilon

and we’re done.

December 8, 2009 Posted by | Analysis, Calculus | 7 Comments

Oscillation

Oscillation in a function is sort of a local and non-directional version of variation. If f:X\rightarrow\mathbb{R} is a bounded function on some region X\subseteq\mathbb{R}^n, and if T is a nonempty subset of X, then we define the oscillation of f on T by the formula

\displaystyle\Omega_f(T)=\sup\limits_{x,y\in T}\left\{f(y)-f(x)\right\}

measuring the greatest difference in values of f on T.

We also want a version that’s localized to a single point x\in X. To do this, we first note that the collection of all subsets N of X which contain x form a poset as usual by inclusion. But we want to reverse this order and say that M\preceq N if and only if N\subseteq M.

Now for any two subsets x\in N_1\subseteq X and x\in N_2\subseteq X, their intersection N_1\cap N_2 is another such subset containing x. And since it’s contained in both N_1 and N_2, it’s above both of them in our partial order, which makes this poset a directed set, and the oscillation of f is a net.

In fact, it’s easy to see that if N\subseteq M then \Omega_f(N)\leq\Omega_f(M), so this net is monotonically decreasing as the subset gets smaller and smaller. Further, we can see that \Omega_f(N)\geq0, since we can always consider the difference f(t)-f(t)=0, and so the supremum must be at least this big.

Anyhow, now we know that the net has a limit, and we define

\displaystyle\omega_f(x)=\lim\limits_{N}\Omega_f(N)
where N is a subset of X containing x, and we take the limit as N gets smaller and smaller.

In fact, this is slightly overdoing it. Our domain is a topological subspace of \mathbb{R}^n, and is thus a metric space. If we want we can just work with metric balls and define

\displaystyle\omega_f(x)=\lim\limits_{r\rightarrow0^+}\Omega_f(N(x;r)\cap X)

where N(x;r) is the ball of radius r around x. These definitions are exactly equivalent in metric spaces, but the net definition works in more general topological spaces, and it’s extremely useful in its own right later, so it’s worth thinking about now.
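The metric-ball form of the definition also suggests a direct numerical estimate. Here is a sketch of my own (maxima and minima are approximated from finitely many samples over shrinking balls, so it is only an estimate) of \omega_f(x) for a real function on an interval:

```python
def oscillation_at(f, x, lo=0.0, hi=1.0, samples=2000):
    """Estimate omega_f(x) = lim_{r -> 0+} Omega_f(N(x; r) ∩ [lo, hi]) by
    computing Omega_f on balls of radius 2^-k and returning the last (smallest)."""
    omega = None
    for k in range(1, 20):
        r = 2.0 ** -k
        a, b = max(lo, x - r), min(hi, x + r)
        ys = [f(a + (b - a) * t / samples) for t in range(samples + 1)]
        omega = max(ys) - min(ys)
    return omega

step = lambda x: 0.0 if x < 0.5 else 1.0
```

At the jump of the step function the oscillation is the full jump height, no matter how small the ball; at every other point, and at every point of a continuous function, it shrinks to zero.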

Oscillation provides a nice way to restate our condition for continuity, and it works either using the metric space definition or the neighborhood definition of continuity. I’ll work it out in the latter case for generality, but it’s worth writing out the parallel proof for the \epsilon-\delta definition.

Our assertion is that f is continuous at a point x if and only if \omega_f(x)=0. If f is continuous, then for every \epsilon there is some neighborhood N of x so that \lvert f(y)-f(x)\rvert<\frac{\epsilon}{3} for all y\in N. Then we can check that

\displaystyle f(y_2)-f(y_1)\leq\lvert f(y_2)-f(x)\rvert+\lvert f(x)-f(y_1)\rvert<\frac{\epsilon}{3}+\frac{\epsilon}{3}=\frac{2\epsilon}{3}
for all y_1 and y_2 in N, and so \Omega_f(N)<\epsilon. Further, any smaller neighborhood of x will also satisfy this inequality, so the net is eventually within \epsilon of {0}. Since this holds for any \epsilon, we find that the net has limit {0}.

Conversely, let’s assume that the oscillation of f at x is zero. That is, for any \epsilon we have some neighborhood N of x so that \Omega_f(N)<\frac{\epsilon}{2}, and the same will automatically hold for smaller neighborhoods. This tells us that f(y)-f(x)<\epsilon for all y\in N, and also f(x)-f(y)<\epsilon. Together, these tell us that \lvert f(y)-f(x)\rvert<\epsilon, and so f is continuous at x.

December 7, 2009 Posted by | Analysis, Calculus | 4 Comments

Jordan Content and Boundaries

As a first exercise working with Jordan content, let’s consider how it behaves at the boundary of a region.

I’ve used this term a few times when it’s been pretty clear from context, but let me be clear. We know about the interior and closure of a set, and particularly of a subset of \mathbb{R}^n. The boundary of such a set will consist of all the points in the closure of the set, but not in its interior. That is, we have

\mathrm{Cl}(S)=\mathrm{int}(S)\uplus\partial S

That is, a point x is in \partial S if x\in\mathrm{Cl}(S), but any neighborhood of x contains points not in S.

So let’s put S inside a box [a,b] and partition the box with a partition P. When we calculate \overline{J}(P,S), we include all the subintervals that we do when we calculate \underline{J}(P,S), along with some other subintervals which contain both points within \mathrm{int}(S) and points not in \mathrm{int}(S). I say that these are exactly the subintervals which contain points in \partial S. Indeed, if a subinterval contains a point of \partial S it cannot be included when calculating \underline{J}(P,S), but must be included when calculating \overline{J}(P,S). Inversely, if a subinterval contains no point of \partial S then it is either contained completely within \mathrm{int}(S), in which case it is included in both calculations, or it is contained completely within the complement of \mathrm{Cl}(S), in which case it is included in neither. Thus we have the relation

\displaystyle\overline{J}(P,S)=\underline{J}(P,S)+\overline{J}(P,\partial S)

which we rewrite

\displaystyle\overline{J}(P,\partial S)=\overline{J}(P,S)-\underline{J}(P,S)

We can then pass to infima to find

\displaystyle\overline{c}(\partial S)\geq\overline{c}(S)-\underline{c}(S)

Check this carefully to see how the equality is weakened to an inequality.

On the other hand, given any \epsilon>0 we can pick partitions P_1 so that \overline{J}(P_1,S)<\overline{c}(S)+\frac{\epsilon}{2} and P_2 so that \underline{J}(P_2,S)>\underline{c}(S)-\frac{\epsilon}{2}. We let P be a common refinement of P_1 and P_2, which will then satisfy both of these inequalities. We find

\displaystyle\overline{c}(\partial S)\leq\overline{J}(P,\partial S)=\overline{J}(P,S)-\underline{J}(P,S)<\overline{c}(S)-\underline{c}(S)+\epsilon

Since \epsilon is arbitrary, we find \overline{c}(\partial S)\leq\overline{c}(S)-\underline{c}(S).

Together with the previous result, we conclude that \overline{c}(\partial S)=\overline{c}(S)-\underline{c}(S). In particular, we find that S is Jordan measurable if and only if \overline{c}(\partial S)=0, in which case \partial S is itself Jordan measurable with c(\partial S)=0.
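The identity \overline{J}(P,\partial S)=\overline{J}(P,S)-\underline{J}(P,S) can be checked on a concrete region. Here is a sketch of my own for the disk of radius \frac{4}{5} in the box [-1,1]^2 with a uniform n\times n partition: classifying each cell by its nearest and farthest distances from the origin tells us whether it sits in the interior, meets the closure, or meets the boundary circle:

```python
from math import hypot, pi

def jordan_sums(n, r=0.8):
    """(lower J, upper J, boundary upper J) for the disk x^2 + y^2 <= r^2
    under the uniform n-by-n partition of the box [-1, 1]^2."""
    area = (2 / n) ** 2
    inside = closure = boundary = 0
    for i in range(n):
        for j in range(n):
            x0, x1 = -1 + 2 * i / n, -1 + 2 * (i + 1) / n
            y0, y1 = -1 + 2 * j / n, -1 + 2 * (j + 1) / n
            near = hypot(min(max(0.0, x0), x1), min(max(0.0, y0), y1))
            far = hypot(max(abs(x0), abs(x1)), max(abs(y0), abs(y1)))
            if far < r:
                inside += 1    # cell lies in the open disk
            if near <= r:
                closure += 1   # cell meets the closed disk
            if near <= r <= far:
                boundary += 1  # cell meets the circle itself
    return inside * area, closure * area, boundary * area
```

For each fixed partition the identity holds exactly, and the boundary term shrinks as the grid refines, reflecting the fact that the circle has content zero.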

December 4, 2009 Posted by | Analysis, Measure Theory | 6 Comments

Jordan Content

We’ve got Riemann’s condition, which is necessary and sufficient to say that a given function is integrable on a given interval. In one dimension, this really served us well, because it let us say that continuous functions were integrable. We could break a piecewise continuous function up into intervals on which it was continuous, and thus integrable, and this got us almost every function we ever cared about.

But in higher dimensions it’s not quite so nice. The region on which a function is continuous may well be irregular, and it often is for many functions we’re going to be interested in. We need a more robust necessary and sufficient condition than Riemann’s. And towards this end we need to introduce a few concepts from measure theory. I want to say at the outset, though, that this will not be a remotely exhaustive coverage of measure theory (yet). Mostly we need to build up the concept of a set in a Euclidean space having Lebesgue measure zero, and the related notion of Jordan content.

So let’s say we’ve got a bounded set S\subseteq\mathbb{R}^n. Put it inside an n-dimensional box — an interval [a,b]. Then partition this interval with some partition P\in\mathcal{P}[a,b] like we did when we set up higher-dimensional Riemann sums. Then we just count up all the subintervals in the partition P that are completely contained in the interior \mathrm{int}(S) and add their volumes together. This is the volume of some collection of boxes which is completely contained within S, and we call this volume \underline{J}(P,S).

Any reasonable definition of “volume” would have to say that since S contains this collection of boxes, the volume of S must be greater than the volume \underline{J}(P,S). Further, as we refine P any box which was previously contained in S will still have all of its subdivisions contained in S, but boxes which were only partially contained in S may now have subdivisions completely contained in S. And so if P' is a refinement of P, then \underline{J}(P',S)\geq\underline{J}(P,S). That is, we have a monotonically increasing net. We then define the “lower Jordan content” of S by taking the supremum

\displaystyle\underline{c}(S)=\sup\limits_{P\in\mathcal{P}[a,b]}\left\{\underline{J}(P,S)\right\}
sort of like how the lower integral is the supremum of all the lower Riemann sums. In fact, this is the lower integral of the characteristic function \chi_S, which is defined to be {1} on elements of S and {0} elsewhere. If a subinterval contains any points not in S the lowest sample will be {0}, but otherwise the sample must be {1}, and the lower Riemann sum for the subdivision P is L_P(\chi_S)=\underline{J}(P,S).

It should be clear, by the way, that this doesn’t really depend on the interval [a,b] we start with. Indeed, if we started with a different interval [a',b'] that also contains S, then we can immediately partition each one so that one subinterval is the intersection with the other one, and this intersection contains all of S anyway. Then we can throw away all the other subintervals in these initial partitions because they don’t touch S at all, and the rest of the calculations proceed exactly as before, and we get the same answer in either case.

Similarly, given a partition P of an interval [a,b] containing a set S we could count up the volumes of all the subintervals that contain any point of the closure \mathrm{Cl}(S) at all. This gives the volume of a collection of boxes that together contains all of S, and we call this total volume \overline{J}(P,S). Again, since the whole of S is contained in this collection of boxes, the volume of S will be less than \overline{J}(P,S). And this time as we refine the partition we may throw out subdivisions which no longer touch any point of S, so this net is monotonically decreasing. We define the “upper Jordan content” of S by taking the infimum

\displaystyle\overline{c}(S)=\inf\limits_{P\in\mathcal{P}[a,b]}\left\{\overline{J}(P,S)\right\}
Again, this doesn’t depend on the original interval [a,b] containing S, and for the same reason. As before, we find that an upper Riemann sum for the characteristic function of S is U_P(\chi_S)=\overline{J}(P,S), so the upper Jordan content is the upper integral of \chi_S.

If S is well-behaved, we will find \overline{c}(S)=\underline{c}(S). In this case we say that S is “Jordan measurable”, and we define the Jordan content c(S) to be this common value. By Riemann’s condition, we find that

\displaystyle c(S)=\int\limits_{[a,b]}\chi_S(x)\,dx
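Since the lower and upper contents are lower and upper Riemann sums of the characteristic function, we can watch them bracket the "volume" of a concrete region. A sketch of my own for the quarter disk x^2+y^2\leq1 inside the unit square, where each cell of a uniform partition is classified by its corners nearest to and farthest from the origin:

```python
from math import hypot, pi

def content_bounds(n):
    """Lower and upper Jordan sums for the quarter disk x^2 + y^2 <= 1
    inside [0,1]^2, using the uniform n-by-n partition."""
    area = (1 / n) ** 2
    lower = upper = 0
    for i in range(n):
        for j in range(n):
            near = hypot(i / n, j / n)              # corner nearest the origin
            far = hypot((i + 1) / n, (j + 1) / n)   # corner farthest from it
            if far < 1:
                lower += 1   # cell inside the open quarter disk
            if near <= 1:
                upper += 1   # cell meets the closed quarter disk
    return lower * area, upper * area
```

Both sums converge to the common value \frac{\pi}{4} as the partition refines, so the quarter disk is Jordan measurable with content \frac{\pi}{4}.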

December 3, 2009 Posted by | Analysis, Measure Theory | 11 Comments

Upper and Lower Integrals and Riemann’s Condition

Yesterday we defined the Riemann integral of a multivariable function defined on an interval [a,b]\subseteq\mathbb{R}^n. We want to move towards some understanding of when a given function f is integrable on a given interval [a,b].

First off, we remember how we set up Darboux sums. These were given by prescribing specific methods of tagging a given partition P. In one, we always picked the point in the subinterval where f attained its maximum within that subinterval, and in the other we always picked the point where f attained its minimum. We even extended these to the Riemann-Stieltjes case and built up upper and lower integrals. And we can do the same thing again.

Given a partition P of [a,b] and a function defined on [a,b], we define the upper Riemann sum by

\displaystyle U_P(f)=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\max\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\mathrm{vol}(I_{i_1\dots i_n})

In each subinterval we pick a sample point which gives the largest possible sample function value in that subinterval. We similarly define a lower Riemann sum by

\displaystyle L_P(f)=\sum\limits_{i_1=1}^{m_1}\dots\sum\limits_{i_n=1}^{m_n}\min\limits_{t\in I_{i_1\dots i_n}}\{f(t)\}\mathrm{vol}(I_{i_1\dots i_n})

As before, any Riemann sum must fall between these upper and lower sums, since the value of the function on each subinterval is somewhere between its maximum and minimum.

Just like when we did this for single-variable Riemann-Stieltjes integrals, we find that these nets are monotonic. That is, if P' is a refinement of P, then U_{P'}(f)\leq U_P(f) and L_{P'}(f)\geq L_P(f). As we refine the partition, the upper sum can only get smaller and smaller, while the lower sum can only get larger and larger. And so we define

\displaystyle\overline{I}_{[a,b]}(f)=\inf\limits_{P\in\mathcal{P}[a,b]}\left\{U_P(f)\right\}\qquad\underline{I}_{[a,b]}(f)=\sup\limits_{P\in\mathcal{P}[a,b]}\left\{L_P(f)\right\}
The upper integral is the infimum of the upper sums, while the lower integral is the supremum of the lower sums.

Again, as before we find that the upper integral is subadditive in its integrand, while the lower integral is superadditive

\displaystyle\begin{aligned}\overline{I}_{[a,b]}(f+g)&\leq\overline{I}_{[a,b]}(f)+\overline{I}_{[a,b]}(g)\\\underline{I}_{[a,b]}(f+g)&\geq\underline{I}_{[a,b]}(f)+\underline{I}_{[a,b]}(g)\end{aligned}
and if we break up an interval into a collection of nonoverlapping subintervals, the upper and lower integrals over the large interval are the sums of the upper and lower integrals over each of the subintervals, respectively.

And, finally, we have Riemann’s condition. The function f satisfies Riemann’s condition on [a,b] if we can make upper and lower sums arbitrarily close. That is, if for every \epsilon>0 there is some partition P_\epsilon so that U_{P_\epsilon}(f)-L_{P_\epsilon}(f)<\epsilon. In this case, the upper and lower integrals will coincide, and we can show that f is actually integrable over [a,b]. The proof is almost exactly the same one we gave before, and so I’ll just refer you back there.
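Riemann's condition is easy to watch in action for a concrete continuous function. A sketch (my own choice of example): for f(x,y)=xy on [0,1]^2, f is increasing in each variable, so on every subinterval the maximum and minimum sit at opposite corners, and for the uniform partition into m\times m subintervals the gap between upper and lower sums works out to exactly \frac{1}{m}:

```python
def riemann_gap(m):
    """U_P(f) - L_P(f) for f(x, y) = x * y on [0,1]^2 under the uniform
    m-by-m partition. Max and min of f on each cell sit at opposite corners."""
    area = (1 / m) ** 2
    gap = 0.0
    for i in range(m):
        for j in range(m):
            biggest = ((i + 1) / m) * ((j + 1) / m)   # top-right corner
            smallest = (i / m) * (j / m)              # bottom-left corner
            gap += (biggest - smallest) * area
    return gap
```

Summing ((i+1)(j+1)-ij)/m^4=(i+j+1)/m^4 over all cells gives m^3/m^4=1/m, so the gap can be made smaller than any \epsilon by refining, and f satisfies Riemann's condition.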

December 2, 2009 Posted by | Analysis, Calculus | 9 Comments