I’ve been going over notes in preparation for tomorrow’s talk at the University of Pennsylvania (scroll down a bit).
For anyone who happens to be there (Isabel, Charles…) I’ll be heading out from a little south of Baltimore early enough to (hopefully) compensate for the fact that I-95 is closed a little north of the exit for UPenn. I should be there in plenty of time for lunch with Jim Stasheff, and dinner later on. Drop an email (if you remember that I teach at Tulane it’s not too hard to find the address) with any contact information you want to pass along.
Now I need to find a follow-on to this paper and start applying to those financial math jobs.
From our study of functions of bounded variation, we know that an integrator $\alpha$ of bounded variation can be written as the difference $\alpha=\alpha_1-\alpha_2$ between two increasing functions. Then the linearity of the integral in the integrator tells us that

$\displaystyle\int\limits_a^bf\,d\alpha=\int\limits_a^bf\,d\alpha_1-\int\limits_a^bf\,d\alpha_2$
Or does it? Remember that we have to understand this equation as saying that if two of the integrals exist then the third one does and the equality holds. But we also know that we’ve got a lot of choice in how to carve up $\alpha$, and it’s easily possible to do it in such a way that the integrals on the right don’t exist.
Luckily, we can show that there’s always at least one splitting of $\alpha$ into two parts like we want, and further so that all the integrals above exist and the equation holds. And it turns out that the first function is just the variation $v$ of $\alpha$! That is, I assert that if $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on the interval $\left[a,b\right]$ and there exists some $M$ with $|f(x)|\leq M$ on $\left[a,b\right]$, then $f$ is Riemann-Stieltjes integrable with respect to $v$ on the same interval. And because $v$ is increasing, we can prove it by showing that $f$ satisfies Riemann’s condition with respect to $v$ on $\left[a,b\right]$.
So, given $\epsilon>0$ we need to find a partition $x_\epsilon$ so that for any finer partition $x$ we have $U_v(x,f)-L_v(x,f)<\epsilon$.
We start by picking a partition $x_\epsilon$ so that for any finer partition $x=(x_0,\dots,x_n)$ and any choices $t_i$ and $s_i$ of tags for $x$ we have

$\displaystyle\left|\sum\limits_{i=1}^n\left(f(t_i)-f(s_i)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{\epsilon}{4}$
Such a partition exists because the sum inside the absolute value is the difference between two Riemann-Stieltjes sums for $\int_a^bf\,d\alpha$, and we’re assuming that these sums converge to the value of the integral.
Now let’s refine it a bit. Any refinement will still satisfy the same condition we already have, but let’s make it also satisfy

$\displaystyle\sum\limits_{i=1}^n\left|\alpha(x_i)-\alpha(x_{i-1})\right|>V-\frac{\epsilon}{4M}$

where $V$ is the total variation of $\alpha$ over $\left[a,b\right]$.
This we can do because the total variation is the supremum of such sums over all partitions, and so we can find a fine enough partition to come within $\frac{\epsilon}{4M}$ of it. This new partition will be the one we need to establish Riemann’s condition. Now we must actually prove that it works.
First, notice that $v(x_i)-v(x_{i-1})\geq\left|\alpha(x_i)-\alpha(x_{i-1})\right|$, since the variation grows at least as fast as the function itself changes. Also, the supremum $M_i$ and infimum $m_i$ of $f$ in the $i$th subinterval are each less than $M$ in absolute value, so $M_i-m_i\leq2M$. Together, these show us that

$\displaystyle\sum\limits_{i=1}^n\left(M_i-m_i\right)\left(v(x_i)-v(x_{i-1})-\left|\alpha(x_i)-\alpha(x_{i-1})\right|\right)\leq2M\left(V-\sum\limits_{i=1}^n\left|\alpha(x_i)-\alpha(x_{i-1})\right|\right)<2M\frac{\epsilon}{4M}=\frac{\epsilon}{2}$

by the second property of $x_\epsilon$ we established when we refined our partition, along with the fact that the differences $v(x_i)-v(x_{i-1})$ telescope to exactly $V$.
Now we set $h=\frac{\epsilon}{4V}$, where $V$ is again the total variation of $\alpha$ (if $V=0$ then $\alpha$ is constant and there is nothing to prove). We want to pick two sets of tags so that $f(t_i)-f(s_i)>M_i-m_i-h$ for subintervals where $\alpha(x_i)-\alpha(x_{i-1})\geq0$, and so that $f(s_i)-f(t_i)>M_i-m_i-h$ for subintervals where $\alpha(x_i)-\alpha(x_{i-1})<0$. Then we find the inequality

$\displaystyle\sum\limits_{i=1}^n\left(M_i-m_i\right)\left|\alpha(x_i)-\alpha(x_{i-1})\right|<\sum\limits_{i=1}^n\left(f(t_i)-f(s_i)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+h\sum\limits_{i=1}^n\left|\alpha(x_i)-\alpha(x_{i-1})\right|<\frac{\epsilon}{4}+hV=\frac{\epsilon}{2}$

by the first property of the partition we chose. Notice that in the subintervals where $\alpha$ decreases I took the absolute value off of the change in $\alpha$ by switching which sample of $f$ I subtracted from which, introducing a new negative sign.
So, adding these two big inequalities together, we find that

$\displaystyle U_v(x,f)-L_v(x,f)=\sum\limits_{i=1}^n\left(M_i-m_i\right)\left(v(x_i)-v(x_{i-1})\right)<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$

which shows that the upper and lower sums for the partition differ by less than $\epsilon$. Therefore $f$ indeed satisfies Riemann’s condition, and is thus integrable with respect to $v$ on $\left[a,b\right]$.
What this means is that integrals with respect to integrators of bounded variation can always be reduced to those with respect to increasing integrators, and thus to situations where Riemann’s condition can be brought to bear.
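As a quick numerical sketch of this splitting (a toy example of my own, with ad hoc function names, not part of the argument above), take the integrator $\alpha(x)=|x|$ on $\left[-1,1\right]$. Its variation function is $v(x)=x+1$, and both $v$ and $v-\alpha$ are increasing, so we can compare a Riemann-Stieltjes sum against $\alpha$ with the difference of the sums against $v$ and $v-\alpha$ on a common tagged partition:

```python
# Numerical sketch: split the integrator alpha(x) = |x| on [-1, 1] into
# its variation v(x) = x + 1 and the increasing remainder v - alpha,
# then compare Riemann-Stieltjes sums.  All names here are ad hoc.

def rs_sum(f, alpha, a, b, n):
    """Riemann-Stieltjes sum of f d(alpha) over n equal subintervals,
    tagging each subinterval at its midpoint."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f((xs[i] + xs[i + 1]) / 2) * (alpha(xs[i + 1]) - alpha(xs[i]))
               for i in range(n))

f = lambda x: x * x
alpha = lambda x: abs(x)        # bounded variation, not monotone
v = lambda x: x + 1             # total variation of alpha from -1 to x
w = lambda x: v(x) - alpha(x)   # also increasing

s_alpha = rs_sum(f, alpha, -1.0, 1.0, 1000)
s_v = rs_sum(f, v, -1.0, 1.0, 1000)
s_w = rs_sum(f, w, -1.0, 1.0, 1000)

# Linearity in the integrator holds term by term on a common partition:
print(abs(s_alpha - (s_v - s_w)))  # essentially zero
```

On a common partition with common tags the equality holds term by term, which is exactly the linearity in the integrator that lets us reduce everything to increasing integrators.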
Actually, I did go to an event today, but despite rather than because of the day. Jeffrey Bub was talking up at UMBC, and it gave me the chance to clothesline him and ask about convex sets and ordered linear spaces, which Howard Barnum had said he (Dr. Bub) knew something about interpreting as state- and measurement-spaces.
If we want our Riemann-Stieltjes sums to converge to some value, we’d better have our upper and lower sums converge to that value in particular. On the other hand, since the upper and lower sums sandwich in all the others, their convergence is enough for the rest. And their convergence is entirely captured by their lower and upper bounds, respectively: the upper and lower Stieltjes integrals. So we want to know when

$\displaystyle\overline{\int\limits_a^b}f\,d\alpha=\underline{\int\limits_a^b}f\,d\alpha$
We’ll prove this equality in general by showing that the difference has to be arbitrarily small. That is, for any partition $x$ of $\left[a,b\right]$ we have the inequalities

$\displaystyle\underline{\int\limits_a^b}f\,d\alpha\geq L_\alpha(x,f)\qquad\overline{\int\limits_a^b}f\,d\alpha\leq U_\alpha(x,f)$

by definition. Subtracting the one from the other we find

$\displaystyle\overline{\int\limits_a^b}f\,d\alpha-\underline{\int\limits_a^b}f\,d\alpha\leq U_\alpha(x,f)-L_\alpha(x,f)$
So if, given an $\epsilon>0$, we can find a partition $x_\epsilon$ for which the upper and lower sums differ by less than $\epsilon$, then the difference between the upper and lower integrals must be even less. If we can do this for any $\epsilon>0$, we say that the function $f$ satisfies Riemann’s condition with respect to $\alpha$ on $\left[a,b\right]$.
The lead-up to the definition of Riemann’s condition shows us that if $f$ satisfies this condition then the lower and upper integrals are equal. Then, just like we saw happen with Darboux sums, we can squeeze any Riemann-Stieltjes sum between an upper and a lower sum. So if the upper and lower integrals are both equal to some value, then the limit of the Riemann-Stieltjes sums over tagged partitions must exist and equal that value, and thus $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $\left[a,b\right]$.
Now what if the function $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $\left[a,b\right]$? We would hope that then $f$ satisfies Riemann’s condition with respect to $\alpha$ on $\left[a,b\right]$, and so these three conditions are equivalent. So given $\epsilon>0$ we need to find an actual partition $x_\epsilon$ of $\left[a,b\right]$ so that $U_\alpha(x_\epsilon,f)-L_\alpha(x_\epsilon,f)<\epsilon$.
Since we’re assuming that $f$ is Riemann-Stieltjes integrable, we’ll call the value of the integral $I$. Then we can find a partition $x_\epsilon=(x_0,\dots,x_n)$ so that for any two choices $t_i$ and $s_i$ of tags we have

$\displaystyle\left|\sum\limits_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)-I\right|<\frac{\epsilon}{3}\qquad\left|\sum\limits_{i=1}^nf(s_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)-I\right|<\frac{\epsilon}{3}$

Combining these we find that

$\displaystyle\left|\sum\limits_{i=1}^n\left(f(t_i)-f(s_i)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{2\epsilon}{3}$
Now as we pick different $t_i$ and $s_i$ we can make the difference in values of $f$ get as large as that between the supremum $M_i$ and the infimum $m_i$. So for any $h>0$ we can choose tags so that $f(t_i)-f(s_i)>M_i-m_i-h$. In particular, we can consider $h=\frac{\epsilon}{3\left(\alpha(b)-\alpha(a)\right)}$, which is positive because $\alpha$ is increasing (if $\alpha(b)=\alpha(a)$ then $\alpha$ is constant and every sum vanishes, so there is nothing to prove).
The difference between the upper and lower sums is

$\displaystyle U_\alpha(x_\epsilon,f)-L_\alpha(x_\epsilon,f)=\sum\limits_{i=1}^n\left(M_i-m_i\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)$

which is then less than

$\displaystyle\sum\limits_{i=1}^n\left(f(t_i)-f(s_i)+h\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)=\sum\limits_{i=1}^n\left(f(t_i)-f(s_i)\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+\frac{\epsilon}{3}$

which is then less than $\frac{2\epsilon}{3}+\frac{\epsilon}{3}=\epsilon$.
Thus we establish the equivalence of Riemann’s condition and Riemann-Stieltjes integrability, as long as the integrator is increasing.
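To see Riemann’s condition in action numerically (an illustrative sketch with example functions of my own choosing, not part of the argument), take $f(x)=x$ and the increasing integrator $\alpha(x)=x^2$ on $\left[0,1\right]$. Since $f$ is increasing, the supremum and infimum on each subinterval sit at its endpoints, and the gap between upper and lower sums shrinks as the partition refines:

```python
# Sketch: for f(x) = x and increasing alpha(x) = x^2 on [0, 1], the upper
# and lower Riemann-Stieltjes sums squeeze together as the partition refines.
# Since f is increasing, sup f = f(right endpoint), inf f = f(left endpoint).

def upper_lower(n):
    xs = [i / n for i in range(n + 1)]
    alpha = lambda x: x * x
    upper = sum(xs[i + 1] * (alpha(xs[i + 1]) - alpha(xs[i])) for i in range(n))
    lower = sum(xs[i] * (alpha(xs[i + 1]) - alpha(xs[i])) for i in range(n))
    return upper, lower

for n in (10, 100, 1000):
    u, l = upper_lower(n)
    print(n, u - l)  # the gap shrinks like 1/n
```

On the uniform partition with $n$ subintervals the gap telescopes to exactly $\frac{1}{n}$, so any $\epsilon$ is eventually beaten, and both sums close in on the integral’s value $\frac{2}{3}$.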
One of the few things that this city gets right is its airport. Free high-speed wireless and plenty of open power taps. What more could a junkie like me want? Well, maybe for the newsstands to carry Scientific American so I could read Scott Aaronson’s article on the plane. It seems everyone at the Clifford Lectures had been reading it, even though most of us are already rather familiar with the ins and outs of quantum computation to begin with. That’s sort of the point, this year.
Yes, despite how great a topic it is, I’ve got to leave on a jet plane this morning for DC, since if I waited until the end of the conference it would be a lot more expensive, and I’m not making that sweet, sweet postdoc money yet, let alone tenure-track. Worst of all is that I have to miss Sam Lomonaco’s talk about his upcoming paper with Lou Kauffman (when are the old guard going to get blaths?). Not to worry, though. He gave me the inside scoop yesterday, and while I can’t go public with the details yet I can say that they’re taking the interactions between knot theory and quantum computation in a completely new direction, and it’ll be interesting to see where that leads.
I spent much of the day shamelessly self-promoting my new paper to the assembled luminaries, especially pushing the introduction where I tie (no pun intended) my tangle program to topological quantum computation. And the group was very much inclined to think in terms of categories of tangles as well. In fact, the talks were kicked off by Phil Scott, whose topic was so close to that of John Baez’ and Mike Stay’s paper that he’s had to tweak his notes in the last couple days.
Incidentally, those of you who have been around for a while may remember him from when I talked about closed categories. I think both of us fell victim to the magic of the intarwobs back then, and overstated things a tad. I admit that there are other categories with closure but without monoidal structure, though I don’t see them arising naturally in what I do; still, to say the definition I gave is “totally wrong” is a bit much. Actually, when he left that comment, he says he was thinking of a certain example he mentioned in his talk, which turns out to have a tensor product — we just don’t know what it looks like! And when he mentioned that example, I thought, “well that’ll show that guy who left that comment…” Ah, what a small, small world academic mathematics is. It’s all good, though.
The main lectures are being given by Samson Abramsky, and they’re straight down the lines of my own thoughts on the structure of quantum (and otherwise non-classical) information and symmetric monoidal closed categories. And they’re very accessible, so the junior I’ve been advising through his reading of the Aharonov-Jones-Landau paper was able to keep up, and probably will through much of the rest of the series. Of course, introductions were made to Sam, and maybe he’ll apply to UMBC’s computer science department next year. Have I sabotaged a poor innocent undergraduate into a life of knots, categories, and quantum computers? Horror!
But the plane boards soon, and then I bum around College Park for the day. I’ll try to get back to the expository line tomorrow.
The draft of my paper “Categorifying Coloring Numbers” is up on the arXiv! Go, download it! Paper the walls of urban buildings like it’s a rock band’s poster!
Or you could just read it, especially if you’re going to be at the University of Pennsylvania on March 19th or the University of California at Riverside on April 2nd.
I left a few things out of last Saturday’s post. Since I’ve spent all morning finishing off that paper (I’ll post the arXiv link tomorrow when it shows up) I’m sort of mathed out for the moment. I’ll just tack up these addenda and take a nap or something to brace for tomorrow’s Clifford Lectures (which is the only day of them I’ll be able to see, due to airline pricing).
Okay, so we said that we can represent any function $f$ of bounded variation as the difference $f=f_1-f_2$ of two increasing functions. But we should notice here that this decomposition is by no means unique. In fact, if $g$ is an increasing function, then we can also write $f=(f_1+g)-(f_2+g)$, and get a different representation of $f$.
Usually nonuniqueness gets messy, but there’s a way this can come in handy. If we pick $g$ to be strictly increasing, then so will $f_1+g$ and $f_2+g$ be. So any function of bounded variation can be written as the difference between two strictly increasing functions. This restriction may be useful in some situations.
It also turns out that wherever the function $f$ is continuous, so is its variation $v$. Thus any continuous function of bounded variation can be written as the difference of two continuous, (strictly) increasing functions. I’ll leave this fact to you, though it’s not really hard.
Now let’s look at bounded increasing functions for a moment. Such a function might jump up at certain points in its domain, like the Heaviside function that sends all negative numbers to $0$ and all nonnegative numbers to $1$. However, a monotone function can’t have any other kinds of discontinuities. Further, each jump increases the function by some positive amount, so for each $n$ only finitely many jumps in a finite interval can have height greater than $\frac{1}{n}$, or else the function would have to take arbitrarily large values and would no longer be bounded! Running over all $n$, there can be only a countable number of jumps.
So increasing functions can only have a countable number of jump discontinuities. But any function of bounded variation is the difference of two increasing functions. Thus any function of bounded variation can only have a countable number of discontinuities, where at worst the function jumps up or down by a finite amount at each one. The only other sort of discontinuity is a point where the function has a limit, but takes a different value. For instance, the function defined by $f(x)=0$ for $x\neq0$ and $f(0)=1$ takes the value $0$ at every positive or negative number, but $\lim\limits_{x\to0}f(x)=0\neq f(0)$.
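As a small computational aside (a toy example of mine, not from the argument above), a step function like the floor shows how jumps feed into variation: once the partition separates the jumps, the variation sum picks up each jump’s full height and nothing else:

```python
import math

# Sketch: approximate the total variation of floor(x) on [0, 3.5] by the
# variation sum over a fine uniform partition.  The floor jumps by 1 at
# x = 1, 2, 3, so the total variation should come out to exactly 3.

def variation_sum(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

tv = variation_sum(math.floor, 0.0, 3.5, 1000)
print(tv)  # 3, one unit for each of the three jumps
```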
Way back when, we talked about Darboux sums, where we used a particular recipe to pick the tags. Specifically, we defined the upper sum by picking a local maximum of $f$ in each subinterval as our tag, and the lower sum similarly. Today, let’s consider how this works with Riemann-Stieltjes sums, and specifically with an increasing integrator $\alpha$.
So given a partition $x=(x_0,\dots,x_n)$ of $\left[a,b\right]$, we define the upper and lower Riemann-Stieltjes sums as follows:

$\displaystyle U_\alpha(x,f)=\sum\limits_{i=1}^nM_i\left(\alpha(x_i)-\alpha(x_{i-1})\right)\qquad L_\alpha(x,f)=\sum\limits_{i=1}^nm_i\left(\alpha(x_i)-\alpha(x_{i-1})\right)$

where $M_i$ and $m_i$ are the supremum and infimum, respectively, of $f$ on the subinterval $\left[x_{i-1},x_i\right]$.
Now since we’ve chosen $\alpha$ to be increasing we can see that $\alpha(x_i)-\alpha(x_{i-1})\geq0$. Therefore we can find the inequalities

$\displaystyle m_i\left(\alpha(x_i)-\alpha(x_{i-1})\right)\leq f(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\leq M_i\left(\alpha(x_i)-\alpha(x_{i-1})\right)$

for any possible tag $t_i\in\left[x_{i-1},x_i\right]$. And so any Riemann-Stieltjes sum for any collection of tags in the partition lies between the lower and upper sums: $L_\alpha(x,f)\leq\sum_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\leq U_\alpha(x,f)$. Notice that we need $\alpha$ to be increasing here; if not, we can construct some pathological function $f$ that makes any combination of these inequalities fail.
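Here’s a quick numerical sketch of the sandwich (with toy functions I’ve chosen, and $f$ monotone so the supremum and infimum sit at subinterval endpoints):

```python
import random

# Sketch: for increasing f and increasing alpha, every tagged
# Riemann-Stieltjes sum lies between the lower and upper sums.

f = lambda x: x ** 3            # increasing on [0, 1]
alpha = lambda x: x + x * x     # increasing integrator

n = 50
xs = [i / n for i in range(n + 1)]
d_alpha = [alpha(xs[i + 1]) - alpha(xs[i]) for i in range(n)]

upper = sum(f(xs[i + 1]) * d_alpha[i] for i in range(n))  # sup at right end
lower = sum(f(xs[i]) * d_alpha[i] for i in range(n))      # inf at left end

random.seed(0)
tags = [random.uniform(xs[i], xs[i + 1]) for i in range(n)]
tagged = sum(f(tags[i]) * d_alpha[i] for i in range(n))

print(lower <= tagged <= upper)  # True
```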
Now the next step in Darboux integration was noting that any refinement of a partition drops the upper sum and raises the lower sum. Just like then, we can simply consider the process of adding a single new partition point, since any further refinement is just a sequence of new partition points. Then since any two partitions have a common refinement, we will see that the upper sum for any partition is greater than the lower sum for any other partition.
As before, adding a new point $\tilde{x}$ between $x_{i-1}$ and $x_i$ replaces the $i$th term in the upper sum with two terms:

$\displaystyle\sup\limits_{x_{i-1}\leq t\leq\tilde{x}}f(t)\left(\alpha(\tilde{x})-\alpha(x_{i-1})\right)+\sup\limits_{\tilde{x}\leq t\leq x_i}f(t)\left(\alpha(x_i)-\alpha(\tilde{x})\right)$

Each of the two suprema is at most the one supremum $M_i$ we had before, so we find

$\displaystyle\sup\limits_{x_{i-1}\leq t\leq\tilde{x}}f(t)\left(\alpha(\tilde{x})-\alpha(x_{i-1})\right)+\sup\limits_{\tilde{x}\leq t\leq x_i}f(t)\left(\alpha(x_i)-\alpha(\tilde{x})\right)\leq M_i\left(\alpha(x_i)-\alpha(x_{i-1})\right)$

which establishes the inequality for upper sums; the one for lower sums is similar. Notice that again we’ve had to multiply by differences between values of $\alpha$, and so as above this inequality hinges on the fact that our integrator is monotonically increasing.
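The refinement step can also be sketched numerically (toy functions of my own choosing again, with $f$ increasing so each supremum sits at a right endpoint): adding a single new point never raises the upper sum.

```python
# Sketch: adding one point to a partition never raises the upper
# Riemann-Stieltjes sum (increasing integrator, f monotone so the
# supremum on each subinterval is at its right endpoint).

f = lambda x: x * x        # increasing on [0, 2]
alpha = lambda x: 3 * x    # increasing integrator

def upper_sum(points):
    return sum(f(points[i + 1]) * (alpha(points[i + 1]) - alpha(points[i]))
               for i in range(len(points) - 1))

coarse = [0.0, 0.5, 1.0, 2.0]
fine = [0.0, 0.5, 1.0, 1.5, 2.0]  # coarse plus the new point 1.5

print(upper_sum(coarse), upper_sum(fine))  # 13.875 11.25
```

Only the term for $\left[1,2\right]$ changes: its one summand $4\cdot3=12$ splits into $2.25\cdot1.5+4\cdot1.5=9.375$, dropping the upper sum.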
Now that we know upper sums are always greater than lower sums (for increasing integrators!) we know that if they meet at all, it will be at the bottom of all the upper sums and the top of all the lower sums. Thus we define the “upper Stieltjes integral” as the greatest lower bound of all the upper sums. Notice that if any lower sum exists then it’s a lower bound for the set of upper sums, and so Dedekind completeness tells us that this upper integral exists. Similarly, we define the lower integral as the least upper bound of all the lower sums, with similar comments on its existence.
Since the upper sums are greater than the lower sums, we can see that the upper integral will be greater than or equal to the lower integral. Indeed, if $\epsilon>0$ is given then there is some partition $x_\epsilon$ so that $U_\alpha(x_\epsilon,f)<\overline{\int_a^b}f\,d\alpha+\epsilon$, since the upper integral is a greatest lower bound. Then $\overline{\int_a^b}f\,d\alpha+\epsilon$ is an upper bound for the lower sums, and so $\underline{\int_a^b}f\,d\alpha\leq\overline{\int_a^b}f\,d\alpha+\epsilon$. Since $\epsilon$ was arbitrary, the lower integral is less than or equal to the upper integral.
Upper and lower integrals are in some ways as nice as Riemann-Stieltjes integrals. For instance, they’re both additive over the region of integration: if $a<b<c$, then

$\displaystyle\overline{\int\limits_a^c}f\,d\alpha=\overline{\int\limits_a^b}f\,d\alpha+\overline{\int\limits_b^c}f\,d\alpha\qquad\underline{\int\limits_a^c}f\,d\alpha=\underline{\int\limits_a^b}f\,d\alpha+\underline{\int\limits_b^c}f\,d\alpha$
However, the upper integral is only convex over its integrand, while the lower integral is concave:

$\displaystyle\overline{\int\limits_a^b}\left(f+g\right)d\alpha\leq\overline{\int\limits_a^b}f\,d\alpha+\overline{\int\limits_a^b}g\,d\alpha\qquad\underline{\int\limits_a^b}\left(f+g\right)d\alpha\geq\underline{\int\limits_a^b}f\,d\alpha+\underline{\int\limits_a^b}g\,d\alpha$
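A numerical sketch of the convexity claim (toy functions of mine): with $f(x)=x$ and $g(x)=-x$ the sum $f+g$ vanishes identically, but the upper sums can’t see the cancellation, so the inequality is strict. The supremum on each subinterval is approximated by dense sampling, which is good enough for an illustration:

```python
# Sketch: upper sums are subadditive in the integrand.  With f(x) = x and
# g(x) = -x, f + g is identically zero, but the upper sums of f and g
# don't see the cancellation, so the inequality is strict.

def upper_sum(f, n, samples=100):
    # Approximate sup f on each subinterval of [0, 1] by dense sampling;
    # alpha(x) = x here, so d(alpha) is just the subinterval width.
    total = 0.0
    for i in range(n):
        a, b = i / n, (i + 1) / n
        sup = max(f(a + (b - a) * j / samples) for j in range(samples + 1))
        total += sup * (b - a)
    return total

n = 20
u_f = upper_sum(lambda x: x, n)
u_g = upper_sum(lambda x: -x, n)
u_fg = upper_sum(lambda x: 0.0, n)  # f + g vanishes identically

print(u_fg, u_f + u_g)  # 0.0 versus a strictly positive number
```

Here $u_f+u_g$ works out to $\frac{1}{n}$, while the upper sum of $f+g$ is exactly zero.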
I’ve been busy the last couple of days, so this post got delayed a bit.
We continue our study of functions of bounded variation by showing that total variation is “additive” over its interval. That is, if $f$ is of bounded variation on $\left[a,b\right]$ and $a<c<b$, then $f$ is of bounded variation on $\left[a,c\right]$ and on $\left[c,b\right]$. Further, we have $V_f(a,b)=V_f(a,c)+V_f(c,b)$.
First, let’s say we’ve got a partition $(a=x_0,\dots,x_m=c)$ of $\left[a,c\right]$ and a partition $(c=x_m,\dots,x_n=b)$ of $\left[c,b\right]$. Then together they form a partition $(a=x_0,\dots,x_n=b)$ of $\left[a,b\right]$. The sum $\sum_{i=1}^n\left|f(x_i)-f(x_{i-1})\right|$ for both partitions together must be bounded by $V_f(a,b)$, and so the sum for each partition separately is also bounded by this total variation. Thus $f$ is of bounded variation on each subinterval. Taking suprema over both partitions, this also establishes the inequality $V_f(a,c)+V_f(c,b)\leq V_f(a,b)$.
On the other hand, given any partition at all of $\left[a,b\right]$ we can add the point $c$ to it. This may split one of the parts of the partition, which by the triangle inequality can only increase the sum for that partition. Then we can break this new partition into a partition for $\left[a,c\right]$ and a partition for $\left[c,b\right]$. The first will have a sum bounded by $V_f(a,c)$, and the second a sum bounded by $V_f(c,b)$. Thus we find that $V_f(a,b)\leq V_f(a,c)+V_f(c,b)$.
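The two inequalities can be checked numerically (a toy check with $f=\sin$, not a proof): splitting $\left[0,2\pi\right]$ at $c=\pi$, the variation sums over the two halves add up to the sum over the whole interval, and all of them approach the true variations $2$, $2$, and $4$:

```python
import math

# Sketch: the variation sum over [0, 2*pi] splits at c = pi into the sums
# over [0, pi] and [pi, 2*pi].  For f = sin these variations are 2, 2, and 4.

def variation_sum(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(n))

whole = variation_sum(math.sin, 0.0, 2 * math.pi, 2000)
left = variation_sum(math.sin, 0.0, math.pi, 1000)
right = variation_sum(math.sin, math.pi, 2 * math.pi, 1000)

print(whole, left + right)  # both approach 4 as the partitions refine
```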
So, with both of these inequalities we have established the equality we wanted. Now we can define the “variation function” $v$ on the interval $\left[a,b\right]$. Just set $v(x)=V_f(a,x)$ (and $v(a)=0$). It turns out that both $v$ and $v-f$ are increasing functions on $\left[a,b\right]$.
Indeed, given points $x<y$ in $\left[a,b\right]$ we can see that $v(y)-v(x)=V_f(x,y)\geq0$, and so $v(y)\geq v(x)$. On the other hand, $\left(v(y)-f(y)\right)-\left(v(x)-f(x)\right)=V_f(x,y)-\left(f(y)-f(x)\right)$. But by definition we must have $f(y)-f(x)\leq V_f(x,y)$! And so $v(y)-f(y)\geq v(x)-f(x)$.
Given a function $f$ of bounded variation, we have constructed two increasing functions $v$ and $v-f$. It is easily seen that $f=v-(v-f)$, so any function of bounded variation is the difference between two increasing functions. On the other hand, we know that increasing functions are of bounded variation. And we also know that the difference of two functions of bounded variation is also of bounded variation. And so the difference between two increasing functions is a function of bounded variation. Thus this condition is both necessary and sufficient!
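To close with one more numerical sketch (again a toy check of my own): approximating the variation function $v$ of $f=\sin$ on a grid, both $v$ and $v-f$ come out nondecreasing along the grid, just as the argument above says:

```python
import math

# Sketch: build a grid approximation to the variation function v of
# f = sin on [0, 2*pi], and check that v and v - f are nondecreasing.

f = math.sin
n = 1000
xs = [2 * math.pi * i / n for i in range(n + 1)]

v = [0.0]
for i in range(n):
    v.append(v[-1] + abs(f(xs[i + 1]) - f(xs[i])))  # running variation sum

vf = [v[i] - f(xs[i]) for i in range(n + 1)]  # the other increasing piece

ok_v = all(v[i] <= v[i + 1] for i in range(n))
ok_vf = all(vf[i] <= vf[i + 1] + 1e-12 for i in range(n))  # float slack
print(ok_v, ok_vf)  # True True
```

Each increment of $v-f$ is $|\Delta f|-\Delta f\geq0$, which is exactly the inequality used in the proof.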
Even better, since many situations behave nicely with respect to differences of functions, it’s often enough to understand how increasing functions behave. Then we can understand the behavior of functions of bounded variation just by taking differences. For example, we started talking about functions of bounded variation to discuss integrators in Riemann-Stieltjes integrals. If we study these integrals when the integrator $\alpha$ is increasing, then we can use the linearity of the integral with respect to the integrator to understand what happens when $\alpha$ is of bounded variation!