The Unapologetic Mathematician

Mathematics for the interested outsider

The Integral as a Function of the Interval

Let’s say we take an integrator \alpha of bounded variation on an interval \left[a,b\right] and a function f that’s Riemann-Stieltjes integrable with respect to \alpha over that interval. Then we know that f is also integrable with respect to \alpha over the subinterval \left[a,x\right]\subseteq\left[a,b\right]. Let’s use this to define a function F on \left[a,b\right] by

\displaystyle F(x)=\int\limits_{\left[a,x\right]}fd\alpha

We can immediately say some interesting things about this function. First of all, F is, like \alpha, of bounded variation. Next, wherever \alpha is continuous, so is F. Finally, if \alpha is increasing, then F is differentiable wherever \alpha is differentiable and f is continuous. At such points, we have F'(x)=f(x)\alpha'(x). Notice that, as usual, the first two results will hold if we show that they hold for increasing integrators.

These results are similar to those we get from the Fundamental Theorem of Calculus, and we can use some of the same techniques to prove them. In particular, we call on the Integral Mean-Value Theorem for Riemann-Stieltjes integrals. If we take points {x} and y in \left[a,b\right], this tells us that

\displaystyle F(y)-F(x)=\int\limits_{\left[a,y\right]}fd\alpha-\int\limits_{\left[a,x\right]}fd\alpha=\int\limits_{\left[x,y\right]}fd\alpha=f(x_0)\left(\alpha(y)-\alpha(x)\right)

where x_0 is some point between x and y.

Now we let M be the supremum of |f| on \left[a,b\right]. For any partition of \left[a,b\right], the mean value relation above lets us estimate the variation

\displaystyle\sum\limits_{i=1}^n|F(x_i)-F(x_{i-1})|\leq M\sum\limits_{i=1}^n|\alpha(x_i)-\alpha(x_{i-1})|\leq MV_\alpha(a,b)

thus giving an upper bound to the variation of F.

Similarly, let {x} be a point where \alpha is continuous, and suppose M>0 (if M=0 then f, and with it F, vanishes identically). Then given an \epsilon>0 we can find a \delta so that |y-x|<\delta implies that |\alpha(y)-\alpha(x)|<\frac{\epsilon}{M}. The mean value relation then gives |F(y)-F(x)|\leq M|\alpha(y)-\alpha(x)|<\epsilon, and we see that F is continuous at {x} as well.

Finally, we can divide by y-x to find

\displaystyle\frac{F(y)-F(x)}{y-x}=f(x_0)\frac{\alpha(y)-\alpha(x)}{y-x}
Then as y gets closer to {x}, x_0 gets squeezed in towards {x} as well. If f is continuous at {x} and \alpha is differentiable there, then the limit of this difference quotient exists, and has the value stated.
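We can also check this differentiation result numerically. Here is a quick Python sketch; the rs_sum helper, the particular choices f=\cos and \alpha(x)=x^2, and the tolerance are all illustrative choices of mine, not part of the argument.

```python
import math

def rs_sum(f, alpha, a, b, n=2000):
    """Left-tagged Riemann-Stieltjes sum approximating the integral of f d(alpha) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

f = math.cos
alpha = lambda x: x * x   # increasing and differentiable on [0, 2]

# The difference quotient (F(y) - F(x)) / (y - x) with x = 1 - h, y = 1 + h;
# by additivity over subintervals this is the integral over [1-h, 1+h], divided by 2h.
h = 1e-3
diff_quot = rs_sum(f, alpha, 1.0 - h, 1.0 + h) / (2 * h)
expected = f(1.0) * (2 * 1.0)   # f(x) * alpha'(x) at x = 1
print(abs(diff_quot - expected) < 1e-3)
```

The difference quotient lands within a thousandth of f(1)\alpha'(1), just as the theorem predicts.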

March 31, 2008 Posted by | Analysis, Calculus | 2 Comments

Two Mean Value Theorems

We’ve got two different analogues of the integral mean value theorem for the Riemann-Stieltjes integral.

The first one says that if \alpha is increasing on \left[a,b\right] and f is integrable with respect to \alpha, with supremum M and infimum m in the interval, then there is some “average value” c between m and M. This satisfies

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=c\left(\alpha(b)-\alpha(a)\right)
In particular, we should note that if f is continuous then the intermediate value theorem tells us that there is some x_0 with f(x_0)=c. That is, there is some x_0 such that

\displaystyle f(x_0)=\frac{1}{\alpha(b)-\alpha(a)}\int\limits_{\left[a,b\right]}fd\alpha

When \alpha(x)=x this gives us the old integral mean value theorem back again.

So why does this work? Well, if \alpha(a)=\alpha(b) then both sides are zero and the theorem is trivially true. Now, the lowest lower sum is L_{\alpha,\{a,b\}}(f)=m\left(\alpha(b)-\alpha(a)\right), while the highest upper sum is U_{\alpha,\{a,b\}}(f)=M\left(\alpha(b)-\alpha(a)\right). The integral itself, which we’re assuming to exist, lies between these bounds:

\displaystyle m\left(\alpha(b)-\alpha(a)\right)\leq\int\limits_{\left[a,b\right]}fd\alpha\leq M\left(\alpha(b)-\alpha(a)\right)

So we can divide through by \int_{\left[a,b\right]}d\alpha=\alpha(b)-\alpha(a) to get the result we seek.
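To make the “average value” concrete, here is a small numerical illustration in Python; the rs_sum helper and the particular choices f=\sin, \alpha=\exp are my own, just for demonstration.

```python
import math

def rs_sum(f, alpha, a, b, n=4000):
    """Left-tagged Riemann-Stieltjes sum approximating the integral of f d(alpha)."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

f, alpha, a, b = math.sin, math.exp, 0.0, 2.0   # exp is increasing on [0, 2]

integral = rs_sum(f, alpha, a, b)
c = integral / (alpha(b) - alpha(a))            # the "average value" c
# approximate the infimum m and supremum M of f by sampling the interval
samples = [f(a + k * (b - a) / 1000) for k in range(1001)]
m, M = min(samples), max(samples)
print(m <= c <= M)
```

The computed average value does land between the infimum and supremum of the integrand, as the theorem guarantees.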

We can get a similar result which focuses on the integrator by using integration by parts. Let’s assume \alpha is continuous and f is increasing on \left[a,b\right]. Our sufficient conditions tell us that the integral of f with respect to \alpha exists, and the integration by parts formula says

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=f(b)\alpha(b)-f(a)\alpha(a)-\int\limits_{\left[a,b\right]}\alpha df

But the first integral mean value theorem tells us that the integral on the right is equal to \alpha(x_0)\left(f(b)-f(a)\right) for some x_0. Then we can rearrange the above formula to read

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=f(a)\left(\alpha(x_0)-\alpha(a)\right)+f(b)\left(\alpha(b)-\alpha(x_0)\right)
So there is some point x_0 so that the integral of f is the same as the integral of the step function taking the value f(a) until x_0 and the value f(b) after it.
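Since f is increasing and \alpha is continuous, the right-hand side above is a continuous, monotone function of x_0, so we can even hunt for x_0 by bisection. The following Python sketch does this for example functions of my own choosing (f(x)=x^3 and \alpha=\arctan); nothing here is from the original argument.

```python
import math

def rs_sum(f, alpha, a, b, n=4000):
    """Left-tagged Riemann-Stieltjes sum approximating the integral of f d(alpha)."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

f = lambda x: x ** 3   # increasing on [0, 2]
alpha = math.atan      # continuous on [0, 2]
a, b = 0.0, 2.0

target = rs_sum(f, alpha, a, b)

# g(x0) integrates the two-step function: value f(a) up to x0, value f(b) after it.
g = lambda x0: f(a) * (alpha(x0) - alpha(a)) + f(b) * (alpha(b) - alpha(x0))

# g is continuous and monotone decreasing in x0 (here f(b) > f(a)),
# so bisection finds the point where g(x0) matches the integral.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > target:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
print(abs(g(x0) - target) < 1e-6)
```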

March 28, 2008 Posted by | Analysis, Calculus | 4 Comments

2008 Abel Prize

As Chris Hillman just pointed out in a comment, the 2008 Abel Prize went to John Griggs Thompson and Jacques Tits “for their profound achievements in algebra and in particular for shaping modern group theory”. The comment went on my recent throwaway post about the 7x7x7 Rubik’s Cube, but a more appropriate one might have been this one from over a year ago, in which I discuss the Feit-Thompson theorem in passing.

Incidentally, I think I’ve met both of the winners. Tits I’m sure of, since I tried and failed horribly to take a short course he gave at Yale on “buildings”. Thompson I believe showed up for Walter Feit’s memorial, but I could be wrong about that. I wish I could say I was particularly close to one or the other, but I suppose that will have to wait until Adams and Vogan win the prize.

March 27, 2008 Posted by | Uncategorized | 19 Comments

Step Function Integrators

Now that we know how a Riemann-Stieltjes integral behaves where the integrand has a jump, we can put jumps together into more complicated functions. The ones we’re interested in are called “step functions” because their graphs look like steps: flat stretches between jumps up and down.

More specifically, let’s say we have a sequence of points

\displaystyle a\leq c_1<c_2<\cdots<c_n\leq b

and define a function \alpha to be constant in each open interval \left(c_{i-1},c_i\right). We can have any constant values on these intervals, and any values at the jump points. The difference \alpha(c_k^+)-\alpha(c_k^-) we call the “jump” of \alpha at c_k. We have to be careful here about the endpoints, though: if c_1=a then the jump at c_1 is \alpha(c_1^+)-\alpha(c_1), and if c_n=b then the jump at c_n is \alpha(c_n)-\alpha(c_n^-). We’ll designate the jump of \alpha at c_k by \alpha_k.

So, as before, the function \alpha may or may not be continuous from one side or the other at a jump point c_k. And if we have a function f discontinuous on the same side of the same point, then the integral can’t exist. So let’s consider any function f so that at each c_k, at least one of \alpha or f is continuous from the left, and at least one is continuous from the right. We can chop up the interval \left[a,b\right] into chunks so that each one contains only one jump, and then the result from last time (along with the “linearity” of the integral in the interval) tells us that

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=\sum\limits_{k=1}^nf(c_k)\alpha_k
That is, each jump gives us the function at the jump point times the jump at that point, and we just add them all together. So finite weighted sums of function evaluations are just special cases of Riemann-Stieltjes integrals.
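Here is a quick numerical sanity check in Python. The rs_sum helper and the particular jump points and weights are made up for illustration; the integrand \cos is continuous, so the one-sided conditions at each jump hold automatically.

```python
import math

def rs_sum(f, alpha, a, b, n=20000):
    """Left-tagged Riemann-Stieltjes sum of f d(alpha) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

jumps = [(0.25, 1.0), (0.5, -0.5), (0.8, 2.0)]   # pairs (c_k, jump alpha_k)

def alpha(x):
    # step function: constant between jump points, right-continuous at each jump
    return sum(w for c, w in jumps if x >= c)

f = math.cos
approx = rs_sum(f, alpha, 0.0, 1.0)
exact = sum(f(c) * w for c, w in jumps)   # the finite weighted sum from the post
print(abs(approx - exact) < 1e-3)
```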

Here’s a particularly nice family of examples. Let’s start with any interval \left[a,b\right] and some natural number n. Define a step function \alpha_n by starting with \alpha_n(a)=a and jumping up by \frac{b-a}{n} at a+\frac{1}{2}\frac{b-a}{n}, a+\frac{3}{2}\frac{b-a}{n}, a+\frac{5}{2}\frac{b-a}{n}, and so on. Then the integral of any continuous function on \left[a,b\right] gives

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha_n=\sum\limits_{k=1}^nf\left(a+\frac{2k-1}{2}\frac{b-a}{n}\right)\frac{b-a}{n}
But notice that this is just a Riemann sum for the function f. Since f is continuous, we know that it’s Riemann integrable, and so as n gets larger and larger, these Riemann sums must converge to the Riemann integral. That is

\displaystyle\lim\limits_{n\rightarrow\infty}\int\limits_{\left[a,b\right]}fd\alpha_n=\int\limits_{\left[a,b\right]}f(x)dx
But at the same time we see that \alpha_n(x) converges to {x}. Clearly there is some connection between convergence and integration to be explored here.
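We can watch this convergence happen. By the previous result, the integral against \alpha_n is exactly the midpoint Riemann sum, so the following Python sketch just evaluates that sum for finer and finer step integrators; the test function e^x on \left[0,1\right] is my own choice of example.

```python
import math

def step_integral(f, a, b, n):
    """Riemann-Stieltjes integral of a continuous f against the step integrator
    alpha_n: n jumps of size (b-a)/n at the midpoints of n equal subintervals."""
    w = (b - a) / n
    return sum(f(a + (2 * k - 1) * w / 2) * w for k in range(1, n + 1))

f, a, b = math.exp, 0.0, 1.0
exact = math.exp(1.0) - 1.0   # the Riemann integral of e^x over [0, 1]
errs = [abs(step_integral(f, a, b, n) - exact) for n in (10, 100, 1000)]
print(errs[0] > errs[1] > errs[2] and errs[2] < 1e-6)
```

The errors shrink steadily toward zero as n grows.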

March 27, 2008 Posted by | Analysis, Calculus | 1 Comment

Integrating Across a Jump

In the discussion of necessary conditions for Riemann-Stieltjes integrability we saw that when the integrand and integrator are discontinuous from the same side of the same point, the integral can’t exist. But how close can we come to that situation? It turns out that as long as one of the two functions is continuous from each side, then things generally work out.

Specifically, let’s consider jump discontinuities. These are especially useful to understand, since they’re the only sort a function of bounded variation can have. So let’s say we take an interior point c\in\left(a,b\right) and define as simple a function \alpha as we can with a jump there. We let it be constant on either side, with \alpha(x)=\alpha(a) for x\in\left[a,c\right) and \alpha(x)=\alpha(b) for x\in\left(c,b\right]. We’ll let \alpha(c) be anything at all. Generally it will be discontinuous from both sides at c, but if \alpha(c)=\alpha(a) then we’ll have continuity from the left, and similarly on the right. Of course, we could have \alpha(a)=\alpha(b), with c being the only point with a different value for \alpha.

Now let’s let f be any other function on \left[a,b\right]. We know that we can’t let it be discontinuous from the left at c if \alpha is, or from the right either, so let’s assume it’s continuous from the left or the right at c to satisfy these conditions, but put no other assumptions on it. I assert that f is then integrable with respect to \alpha on \left[a,b\right], and the integral has the value

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=f(c)\left(\alpha(c^+)-\alpha(c^-)\right)

where \alpha(c^+) is the limit of \alpha as we approach c from the right. In our situation this will be \alpha(b), but we’ll telegraph a bit by writing it like this. Similarly, \alpha(c^-) is the limit of \alpha as we approach c from the left, which here is \alpha(a).

To see that this is the case, take a tagged partition {x}, and if it doesn’t already contain c just refine it by throwing the new point in. Now every term in the Riemann-Stieltjes sum

\displaystyle f_{\alpha,x}=\sum\limits_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)

is zero except for the ones on either side of c=x_k. We can then write the difference f_{\alpha,x}-f(c)\left(\alpha(c^+)-\alpha(c^-)\right) as

\displaystyle\left(f(t_k)-f(c)\right)\left(\alpha(c)-\alpha(c^-)\right)+\left(f(t_{k+1})-f(c)\right)\left(\alpha(c^+)-\alpha(c)\right)
Since either f or \alpha is continuous from the left, the first term above must go to zero. Similarly, because at least one is continuous from the right, the second term must also go to zero. Thus the sums f_{\alpha,x} converge to the value asserted.
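Here is a numerical illustration in Python. The specific jump heights, the value \alpha(c)=10, and the integrand are all hypothetical choices of mine; the integrand here is continuous from both sides, which is stronger than the one-sided conditions actually require.

```python
def rs_sum(f, alpha, a, b, n=20001):
    """Left-tagged Riemann-Stieltjes sum of f d(alpha) over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

c = 0.5

def alpha(x):
    # constant on either side of the jump at c; alpha(c) itself is set to an
    # arbitrary value (10.0) to emphasize that it cannot affect the limit
    if x < c:
        return 0.0
    if x > c:
        return 3.0
    return 10.0

f = lambda x: x * x + 1.0   # continuous, so the one-sided conditions hold
val = rs_sum(f, alpha, 0.0, 1.0)
exact = f(c) * (3.0 - 0.0)  # f(c) * (alpha(c+) - alpha(c-))
print(abs(val - exact) < 1e-2)
```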

March 26, 2008 Posted by | Analysis, Calculus | 3 Comments

Necessary Conditions for Integrability

We’ve talked about sufficient conditions for integrability, which will tell us that a given integral does exist. Now we’ll consider the situation where we know that a given integral exists, and see what we can tell about its integrand or integrator.

First, let’s consider an increasing integrator \alpha over an interval \left[a,b\right], and an interior point c\in\left(a,b\right). Let’s assume that both \alpha and the integrand f are discontinuous from the right at c. Then the integral \int_{\left[a,b\right]} f\,d\alpha cannot exist.

Consider what the assumption of discontinuity means: there is some \epsilon so that for every \delta>0, no matter how small, there are points x,y\in\left(c,c+\delta\right) so that |f(x)-f(c)|\geq\epsilon and |\alpha(y)-\alpha(c)|\geq\epsilon. That is, as we approach c from the right we can always find a sample point that differs from the value we want by at least \epsilon, both in the integrand and in the integrator. So let’s see what happens when we throw c in as a partition point.

Suddenly we’re stuck! Riemann’s condition tells us to look at the sum

\displaystyle U_{\alpha,x}(f)-L_{\alpha,x}(f)=\sum\limits_{i=1}^n\left(M_i(f)-m_i(f)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)

where each term is positive. So to make this sum go down to zero in Riemann’s condition we need to make each term go down to zero. But for the subinterval whose left endpoint is c, we can’t make this happen! The change in \alpha will always be at least \epsilon for some smaller subinterval, and the difference in sample values of f must always exceed \epsilon as well. Thus the difference between the upper and lower sum will always be at least \epsilon^2>0, violating Riemann’s condition, and thus integrability.

Of course, we could run through the same argument approaching c from the left. We could also use an integrator \alpha of bounded variation, as usual.

What this tells us is that while we may allow integrands or integrators to be discontinuous, they can’t be discontinuous from the same side at the same point. For example, any discontinuity in an integrator of bounded variation is fine as long as we’re pairing it off with a continuous integrand, as we did last time.
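The obstruction is easy to see numerically. In the following Python sketch (my own illustration, with \epsilon=1) both the integrand and the integrator jump from the right at c=\frac{1}{2}, and the difference between the upper and lower sums refuses to drop below \epsilon^2=1 no matter how fine the partition.

```python
def upper_minus_lower(n):
    """U - L over the uniform n-piece partition of [0, 1], for an integrand f
    and an integrator alpha that are BOTH discontinuous from the right at c."""
    c = 0.5
    f = lambda x: 0.0 if x <= c else 1.0       # jump of 1 from the right at c
    alpha = lambda x: 0.0 if x <= c else 1.0   # same jump for the integrator
    xs = [i / n for i in range(n + 1)]
    total = 0.0
    for lo, hi in zip(xs, xs[1:]):
        # approximate the sup and inf of f on [lo, hi] by sampling
        pts = [lo + j * (hi - lo) / 50 for j in range(51)]
        vals = [f(p) for p in pts]
        total += (max(vals) - min(vals)) * (alpha(hi) - alpha(lo))
    return total

gaps = [upper_minus_lower(n) for n in (10, 100, 1000)]
print(gaps)   # the gap never drops below epsilon^2 = 1
```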

March 25, 2008 Posted by | Analysis, Calculus | 1 Comment

Sufficient Conditions for Integrability

Let’s consider some conditions under which we’ll know that a given Riemann-Stieltjes integral will exist. First off, we have a straightforward adaptation of our old result that continuous functions are Riemann integrable. Now I assert that any continuous function f on an interval \left[a,b\right] is Riemann-Stieltjes integrable over that interval with respect to any function \alpha of bounded variation on the same interval. In particular, the function \alpha(x)=x is clearly of bounded variation, and so we will recover our old result.

In fact, we can even adapt the old proof. The Heine-Cantor theorem says that the function f, being continuous on the compact interval \left[a,b\right], is uniformly continuous. As usual, we can assume that \alpha is increasing on \left[a,b\right]. And now Riemann’s condition tells us to consider the difference

\displaystyle U_{\alpha,x}(f)-L_{\alpha,x}(f)=\sum\limits_{i=1}^n\left(M_i(f)-m_i(f)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)

We want this difference to go to zero as we choose finer and finer partitions x. So take any \epsilon>0.

By uniform continuity we can pick a small enough \delta (depending only on \epsilon) so that when |x-y|<\delta we’ll have |f(x)-f(y)|<\frac{\epsilon}{V}, where V is the total variation \alpha(b)-\alpha(a) (we may assume V>0, since if V=0 then every Riemann-Stieltjes sum vanishes and the integral trivially exists). Then picking a partition whose subintervals are thinner than \delta makes it so that M_i(f)-m_i(f)<\frac{\epsilon}{V}, which we can then pull out of the sum. What remains sums to exactly the total variation V, and so the difference U_{\alpha,x}(f)-L_{\alpha,x}(f) is below \epsilon, and our theorem holds.

Immediately from this result and integration by parts we come up with another set of sufficient conditions. If \alpha is continuous and f is of bounded variation on an interval \left[a,b\right] then f is Riemann-Stieltjes integrable with respect to \alpha over \left[a,b\right]. Since the integrator \alpha(x)=x is also continuous, this tells us that any function of bounded variation is Riemann integrable!

Of course, these conditions are just sufficient. That is, if they hold then we know that the integral exists. However, if an integral exists, we can’t use these to conclude anything about either the integrand or the integrator. For that we need necessary conditions.
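Riemann’s condition itself is easy to watch numerically. This Python sketch (the particular pair \sin and \exp is my own choice) computes the upper-minus-lower gap for finer and finer partitions and watches it shrink.

```python
import math

def upper_minus_lower(n):
    """U - L over a uniform n-piece partition of [0, 1] for the continuous
    integrand sin(x) and the increasing integrator exp(x)."""
    f, alpha = math.sin, math.exp
    xs = [i / n for i in range(n + 1)]
    total = 0.0
    for lo, hi in zip(xs, xs[1:]):
        # sin is increasing on [0, 1], so sup - inf on [lo, hi] is f(hi) - f(lo)
        total += (f(hi) - f(lo)) * (alpha(hi) - alpha(lo))
    return total

gaps = [upper_minus_lower(n) for n in (10, 100, 1000)]
print(gaps[0] > gaps[1] > gaps[2] and gaps[2] < 0.01)
```

Contrast this with the jump example from the necessary-conditions discussion, where the same gap is bounded away from zero.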

March 24, 2008 Posted by | Analysis, Calculus | 2 Comments

Integrability over subintervals

As I noted when I first motivated bounded variation, we’re often trying to hold down Riemann-Stieltjes sums to help them converge. In a sense, we’re sampling both the integrand f and the variation of the integrator \alpha, and together they’re not big enough to make the Riemann-Stieltjes sums blow up as we take more and more samples. And it seems reasonable that if these sums don’t blow up over the whole interval, then they’ll not blow up over a subinterval.

More specifically, I assert that if \alpha is a function of bounded variation, f is integrable with respect to \alpha on \left[a,b\right], and c is a point between a and b, then f is integrable with respect to \alpha on \left[a,c\right].

Then, in the equation expressing “linearity” in the interval

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=\int\limits_{\left[a,c\right]}fd\alpha+\int\limits_{\left[c,b\right]}fd\alpha

we have two of these integrals known to exist. Therefore the third one does as well, and the equation is true. If we have a subinterval \left[c,d\right]\subseteq\left[a,b\right], then this argument shows that f is integrable over \left[c,b\right], and another invocation of the theorem shows that f is integrable over \left[c,d\right]. So being integrable over an interval implies that the function is integrable over any subinterval.

As we said earlier, we can handle all integrators of bounded variation by just considering increasing integrators. And then we can use Riemann’s condition. So, given an \epsilon>0 there is a partition x_\epsilon of \left[a,b\right] so that U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon for any partition x finer than x_\epsilon.

We may as well assume that c is a partition point of x_\epsilon, since we can just throw it in if it isn’t already. Then the partition points up to c form a partition x_\epsilon' of \left[a,c\right]. Further, any refinement x' of x_\epsilon' is similarly part of a refinement x of x_\epsilon. So by assumption we know that U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon, and we get down to x' by throwing away terms in the sum for partition points above c. Each of these terms is nonnegative, and so we see that

\displaystyle U_{\alpha,x'}(f)-L_{\alpha,x'}(f)\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon
That is, f satisfies Riemann’s condition with respect to \alpha on \left[a,c\right], and so it’s integrable.

March 21, 2008 Posted by | Analysis, Calculus | 2 Comments

Products of integrable functions

From the linearity of the Riemann-Stieltjes integral in the integrand, we know that the collection of functions that are integrable with respect to a given integrator over a given interval form a real vector space. That is, we can add and subtract them and multiply by real number scalars. It turns out that if the integrator is of bounded variation, then they actually form a real algebra — we can multiply them too.

First of all, let’s show that we can square a function. Specifically, if \alpha is a function of bounded variation on \left[a,b\right], and f is bounded and integrable with respect to \alpha on this interval, then so is f^2. We know that we can specialize right away to an increasing integrator \alpha. This will work here (unlike for the order properties) because nothing in sight gets broken by subtraction.

Okay, first off we notice that f(x)^2 is the same thing as |f(x)|^2, and so they have the same supremum in any subinterval of a partition. Then the supremum of |f(x)|^2 is the square of the supremum of |f(x)|, because squaring is an increasing operation that preserves suprema (and, incidentally, infima). The upshot is that M_i(f^2)=M_i(|f|)^2. Similarly we can show that m_i(f^2)=m_i(|f|)^2. This lets us write

\displaystyle M_i(f^2)-m_i(f^2)=M_i(|f|)^2-m_i(|f|)^2=\left(M_i(|f|)+m_i(|f|)\right)\left(M_i(|f|)-m_i(|f|)\right)\leq2M\left(M_i(|f|)-m_i(|f|)\right)

where M is an upper bound for |f| on \left[a,b\right]. Since |f| is integrable (we showed this when discussing increasing integrators and order), it satisfies Riemann’s condition, and the bound above carries that condition over to f^2. Riemann’s condition then tells us that f^2 is integrable.

Now let’s take two bounded integrable functions f and g. We’ll write

\displaystyle fg=\frac{1}{2}\left((f+g)^2-f^2-g^2\right)
and then invoke the previous result and the linearity of integration to show that the product fg is integrable.
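The polarization identity behind this is easy to confirm numerically. In this Python sketch (the helper and the particular functions are my own illustration), the Riemann-Stieltjes sums for the product agree with the combination of sums for the three squares, since the identity holds pointwise at every tag.

```python
import math

def rs_sum(f, alpha, a, b, n=4000):
    """Left-tagged Riemann-Stieltjes sum approximating the integral of f d(alpha)."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

f, g, alpha, a, b = math.sin, math.cos, math.exp, 0.0, 1.0

# fg = ((f+g)^2 - f^2 - g^2) / 2, so the integral of the product is a linear
# combination of integrals of squares of integrable functions.
direct = rs_sum(lambda x: f(x) * g(x), alpha, a, b)
via_squares = (rs_sum(lambda x: (f(x) + g(x)) ** 2, alpha, a, b)
               - rs_sum(lambda x: f(x) ** 2, alpha, a, b)
               - rs_sum(lambda x: g(x) ** 2, alpha, a, b)) / 2
print(abs(direct - via_squares) < 1e-9)
```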

March 20, 2008 Posted by | Analysis, Calculus | 8 Comments

Increasing Integrators and Order

For what we’re about to do, I’m going to need a couple results about increasing integrators, and how Riemann-Stieltjes integrals with respect to them play nicely with order properties of the real numbers.

When we consider an increasing integrator we have a certain positivity result: if the integrand is nonnegative and the integral exists, then it is nonnegative as well. That is, for \alpha increasing and f(x)\geq0 on \left[a,b\right] we have \int_{\left[a,b\right]}f\,d\alpha\geq0 as long as it exists. This should be clear, since every Riemann-Stieltjes sum takes the form

\displaystyle f_{\alpha,x}=\sum\limits_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\geq0

where the inequality follows because each value f(t_i) and each difference \alpha(x_i)-\alpha(x_{i-1}) is nonnegative. Thus the limit of the sums must be nonnegative as well. From this and the linearity of the integral we see that if \alpha is increasing and f(x)\geq g(x) on \left[a,b\right], then we have the inequality

\displaystyle\int\limits_{\left[a,b\right]}fd\alpha\geq\int\limits_{\left[a,b\right]}gd\alpha
as long as both integrals exist.

Now, when we talked about absolute values — the metric for the real numbers — we saw that the absolute value of a sum is never more than the sum of the absolute values. That is, |x+y|\leq|x|+|y|. And since an integral is just a limit of sums, it stands to reason that a similar result would hold here. Specifically, if \alpha is increasing and f is integrable with respect to \alpha on \left[a,b\right], then so is the function |f|, and further we have the inequality

\displaystyle\left|\int\limits_{\left[a,b\right]}fd\alpha\right|\leq\int\limits_{\left[a,b\right]}|f|d\alpha
Indeed, given a partition of \left[a,b\right] the difference M_i(f)-m_i(f) between the supremum and infimum of f on the ith subinterval is the supremum of f(x)-f(y), where x and y range across \left[x_{i-1},x_i\right]. Then, adapting the above inequality we see that

\displaystyle\left||f(x)|-|f(y)|\right|\leq|f(x)-f(y)|\leq M_i(f)-m_i(f)
and so we conclude that

\displaystyle M_i(|f|)-m_i(|f|)\leq M_i(f)-m_i(f)

Then we can multiply by \alpha(x_i)-\alpha(x_{i-1}) and sum over a partition to find

\displaystyle U_{\alpha,x}(|f|)-L_{\alpha,x}(|f|)\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)

Riemann’s condition then tells us that |f| is integrable, and the inequality follows by the previous result.
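Here is a quick numerical look at the inequality in Python; the helper and the particular choices (a sign-changing integrand \sin and the increasing integrator x+\frac{\sin x}{2}) are my own illustration.

```python
import math

def rs_sum(f, alpha, a, b, n=4000):
    """Left-tagged Riemann-Stieltjes sum approximating the integral of f d(alpha)."""
    h = (b - a) / n
    return sum(f(a + i * h) * (alpha(a + (i + 1) * h) - alpha(a + i * h))
               for i in range(n))

f = math.sin                             # changes sign on [-1, 2]
alpha = lambda x: x + math.sin(x) / 2    # increasing: alpha'(x) = 1 + cos(x)/2 > 0
a, b = -1.0, 2.0

lhs = abs(rs_sum(f, alpha, a, b))
rhs = rs_sum(lambda x: abs(f(x)), alpha, a, b)
print(lhs <= rhs)
```

Because f changes sign, the cancellation on the left makes the inequality strict here.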

We might hope to extend these results to integrators of bounded variation, but it won’t work right. This is because we go from increasing functions to functions of bounded variation by subtracting, and this operation will break the order properties.

March 18, 2008 Posted by | Analysis, Calculus | 1 Comment