The Unapologetic Mathematician

Mathematics for the interested outsider

Plans for tomorrow

I’ve been going over notes in preparation for tomorrow’s talk at the University of Pennsylvania (scroll down a bit).

For anyone who happens to be there (Isabel, Charles…) I’ll be heading out from a little south of Baltimore early enough to (hopefully) compensate for the fact that I-95 is closed a little north of the exit for UPenn. I should be there in plenty of time for lunch with Jim Stasheff, and dinner later on. Drop an email (if you remember that I teach at Tulane it’s not too hard to find the address) with any contact information you want to pass along.


March 18, 2008 Posted by | Uncategorized | 1 Comment

Category Theory Isn’t Useless After All!

Today on the arXiv, we find a posting of an old paper, which uses spans of “reflexive graphs” to give an algebraic framework for describing partita doppia — double-entry bookkeeping.

Now I need to find a follow-on to this paper and start applying to those financial math jobs.

March 18, 2008 Posted by | Algebra, Category theory | Leave a comment

Integrators of Bounded Variation

Today I want to start considering Riemann-Stieltjes integrals where the integrator \alpha is a function of bounded variation.

From our study of functions of bounded variation, we know that \alpha can be written as the difference between two increasing functions \alpha=\alpha_1-\alpha_2. Then the linearity of the integral in the integrator tells us that

\displaystyle\int_{\left[a,b\right]}f\,d\alpha=\int_{\left[a,b\right]}f\,d\alpha_1-\int_{\left[a,b\right]}f\,d\alpha_2

Or does it? Remember that we have to understand this equation as saying that if two of the integrals exist then the third one does and the equality holds. But we also know that we’ve got a lot of choice in how to carve up \alpha, and it’s easily possible to do it in such a way that the integrals on the right don’t exist.

Luckily, we can show that there’s always at least one splitting of \alpha into two parts like we want, and further so that all the integrals above exist and the equation holds. And it turns out that the first function is just the variation V! That is, I assert that if f is Riemann-Stieltjes integrable with respect to \alpha on the interval \left[a,b\right] and there exists some M with |f(x)|\leq M on \left[a,b\right], then f is Riemann-Stieltjes integrable with respect to V on the same interval. And because V is increasing, we can prove it by showing that f satisfies Riemann’s condition with respect to V on \left[a,b\right].

So, given \epsilon>0 we need to find a partition x_\epsilon so that for any finer partition x we have 0\leq U_{V,x}(f)-L_{V,x}(f)<\epsilon.

We start by picking a partition so that for any finer partition x and any choices t_i and t_i' of tags for x we have

\displaystyle\left|\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{\epsilon}{4}

Such a partition exists because the sum is the difference between two Riemann-Stieltjes sums for \int_{\left[a,b\right]}f\,d\alpha, and we’re assuming that these sums converge to the value of the integral.

Now let’s refine it a bit. Any refinement will still satisfy the same condition we already have, but let’s make it also satisfy

\displaystyle V(b)<\sum\limits_{i=1}^n|\alpha(x_i)-\alpha(x_{i-1})|+\frac{\epsilon}{4M}

This we can do because the total variation is the supremum of such sums over all partitions, and so we can find a fine enough partition to come within \frac{\epsilon}{4M} of it. This new partition x_\epsilon will be the one we need to establish Riemann’s condition. Now we must actually prove that it works.
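
To see that supremum property in action, here's a quick numerical sketch (my own, not part of the proof): partition sums of \left|f(x_i)-f(x_{i-1})\right| climb toward the total variation as the partition gets finer. For the sine function on \left[0,2\pi\right] the total variation is 4 (up by 1, down by 2, up by 1).

```python
# A quick numerical sketch (not from the post): the total variation is
# the supremum of partition sums, so finer partitions climb toward it.
# For sin on [0, 2*pi] the total variation is 4 (up 1, down 2, up 1).
import math

def variation_sum(f, a, b, n):
    """Sum of |f(x_i) - f(x_{i-1})| over a uniform n-part partition."""
    h = (b - a) / n
    return sum(abs(f(a + (i + 1) * h) - f(a + i * h)) for i in range(n))

sums = [variation_sum(math.sin, 0.0, 2 * math.pi, n) for n in (3, 33, 3333)]
print(sums)  # climbing toward 4
```

The deficit comes entirely from subintervals containing a turning point of the function, which is why it shrinks as the mesh does.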

First, notice that \left(V(x_i)-V(x_{i-1})\right)-\left|\alpha(x_i)-\alpha(x_{i-1})\right|\geq0, since the variation over a subinterval is at least as large as the net change in the function itself. Also, the supremum M_i(f) and infimum m_i(f) in the ith subinterval are each at most M in absolute value, so M_i(f)-m_i(f)\leq2M. Together, these show us that

\displaystyle\sum\limits_{i=1}^n\left(M_i(f)-m_i(f)\right)\left[\left(V(x_i)-V(x_{i-1})\right)-\left|\alpha(x_i)-\alpha(x_{i-1})\right|\right]\leq2M\left(V(b)-\sum\limits_{i=1}^n\left|\alpha(x_i)-\alpha(x_{i-1})\right|\right)<\frac{\epsilon}{2}

by the second property of x_\epsilon we established when we refined our partition.

Now we set h=\frac{\epsilon}{4V(b)}. We want to pick two sets of tags so that f(t_i)-f(t_i')>M_i(f)-m_i(f)-h for subintervals where \alpha(x_i)\geq\alpha(x_{i-1}), and so that f(t_i')-f(t_i)>M_i(f)-m_i(f)-h for subintervals where \alpha(x_i)<\alpha(x_{i-1}). Then we find the inequality

\displaystyle\sum\limits_{i=1}^n\left(M_i(f)-m_i(f)\right)\left|\alpha(x_i)-\alpha(x_{i-1})\right|\leq\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+h\sum\limits_{i=1}^n\left|\alpha(x_i)-\alpha(x_{i-1})\right|<\frac{\epsilon}{4}+\frac{\epsilon}{4V(b)}V(b)=\frac{\epsilon}{2}

by the first property of the partition we chose. Notice that in some of the subintervals I took the absolute value off of the change in \alpha by switching which sample of f I subtracted from which, introducing a new negative sign.

So, adding these two big inequalities together, we find that

\displaystyle U_{V,x}(f)-L_{V,x}(f)=\sum\limits_{i=1}^n\left(M_i(f)-m_i(f)\right)\left(V(x_i)-V(x_{i-1})\right)<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon

which shows that the upper and lower sums for the partition x differ by less than \epsilon. Therefore f indeed satisfies Riemann’s condition, and is thus integrable with respect to V on \left[a,b\right].

What this means is that integrals with respect to integrators of bounded variation can always be reduced to those with respect to increasing integrators, and thus to situations where Riemann’s condition can be brought to bear.
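
As a concrete sanity check, here’s a numerical sketch (my own invented example, not from the argument above): take f(x)=x^2 and the bounded-variation integrator \alpha(x)=|x-\frac{1}{2}| on \left[0,1\right]. Its variation function is V(x)=x, and D=V-\alpha is increasing, so the integral against \alpha should equal the integral against V minus the integral against D.

```python
# A numerical sanity check (my own sketch, not from the post): with
# f(x) = x^2 and alpha(x) = |x - 1/2| on [0, 1], the variation function
# is V(x) = x and D = V - alpha is increasing, so integrating against
# alpha reduces to integrating against the two increasing functions.

def rs_sum(f, g, a, b, n=100000):
    """Riemann-Stieltjes sum of f dg over n equal subintervals,
    tagged at midpoints."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        total += f((x0 + x1) / 2) * (g(x1) - g(x0))
    return total

f = lambda x: x * x
alpha = lambda x: abs(x - 0.5)
V = lambda x: x                  # total variation of alpha on [0, x]
D = lambda x: V(x) - alpha(x)    # increasing, and alpha = V - D

lhs = rs_sum(f, alpha, 0.0, 1.0)
rhs = rs_sum(f, V, 0.0, 1.0) - rs_sum(f, D, 0.0, 1.0)
print(lhs, rhs)  # both close to 1/4
```

The two sides agree term by term because the differences of V and D telescope back to the differences of \alpha on each subinterval.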

March 17, 2008 Posted by | Analysis, Calculus | 3 Comments

Not today again…

XKCD did it at the top of today’s post. The first reference to the day. I’m not going to get all ranty. I’ll just refer to my rant from last year.

Actually, I did go to an event today, but despite rather than because of the day. Jeffrey Bub was talking up at UMBC, and it gave me the chance to clothesline him and ask about convex sets and ordered linear spaces, which Howard Barnum had said he (Dr. Bub) knew something about the interpretation of as state- and measurement-spaces.

March 14, 2008 Posted by | Uncategorized | 4 Comments

Riemann’s Condition

If we want our Riemann-Stieltjes sums to converge to some value, we’d better have our upper and lower sums converge to that value in particular. On the other hand, since the upper and lower sums sandwich in all the others, their convergence is enough for the rest. And their convergence is entirely captured by their lower and upper bounds, respectively — the upper and lower Stieltjes integrals. So we want to know when \underline{I}_{\alpha,\left[a,b\right]}(f)=\overline{I}_{\alpha,\left[a,b\right]}(f).

We’ll get at this equality by showing that the difference can be made arbitrarily small. That is, for any partition x of \left[a,b\right] we have the inequalities

\overline{I}_{\alpha,\left[a,b\right]}(f)\leq U_{\alpha,x}(f)
\underline{I}_{\alpha,\left[a,b\right]}(f)\geq L_{\alpha,x}(f)

by definition. Subtracting the one from the other we find

\overline{I}_{\alpha,\left[a,b\right]}(f)-\underline{I}_{\alpha,\left[a,b\right]}(f)\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)

So if given an \epsilon>0 we can find a partition x for which the upper and lower sums differ by less than \epsilon then the difference between the upper and lower integrals must be even less. If we can do this for any \epsilon>0, we say that the function f satisfies Riemann’s condition with respect to \alpha on \left[a,b\right].

The lead-up to the definition of Riemann’s condition shows us that if f satisfies this condition then the lower and upper integrals are equal. Then just like we saw happen with Darboux sums we can squeeze any Riemann-Stieltjes sum between an upper and a lower sum. So if the upper and lower integrals are both equal to some value, then the limit of the Riemann-Stieltjes sums over tagged partitions must exist and equal that value, and thus f is Riemann-Stieltjes integrable with respect to \alpha on \left[a,b\right].

Now what if f is Riemann-Stieltjes integrable with respect to \alpha on \left[a,b\right]? We would hope that f then satisfies Riemann’s condition with respect to \alpha on \left[a,b\right], and so these three conditions are equivalent. So given \epsilon>0 we need to find an actual partition x of \left[a,b\right] so that 0\leq U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon.

Since we’re assuming that f is Riemann-Stieltjes integrable, we’ll call the value of the integral A. Then we can find a tagged partition x_\epsilon so that for any finer tagged partitions x=((x_0,...,x_n),(t_1,...,t_n)) and x'=((x_0,...,x_n),(t_1',...,t_n')) we have

\displaystyle\left|\sum\limits_{i=1}^nf(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)-A\right|<\frac{\epsilon}{3}
\displaystyle\left|\sum\limits_{i=1}^nf(t_i')\left(\alpha(x_i)-\alpha(x_{i-1})\right)-A\right|<\frac{\epsilon}{3}

Combining these we find that

\displaystyle\left|\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\right|<\frac{2\epsilon}{3}

Now as we pick different t and t' we can make the difference in values of f get as large as that between M_i=\max\limits_{x_{i-1}\leq t\leq x_i}f(t) and m_i=\min\limits_{x_{i-1}\leq t\leq x_i}f(t). So for any h>0 we can choose tags so that f(t_i)-f(t_i')>M_i-m_i-h. In particular, we can consider h=\frac{\epsilon}{3(\alpha(b)-\alpha(a))}, which is positive because \alpha is increasing (if \alpha(b)=\alpha(a), then an increasing \alpha is constant, every sum vanishes, and there is nothing to prove).

The difference between the upper and lower sums is

\displaystyle U_{\alpha,x}(f)-L_{\alpha,x}(f)=\sum\limits_{i=1}^n\left(M_i-m_i\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)

which is then less than

\displaystyle\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+h\sum\limits_{i=1}^n\left(\alpha(x_i)-\alpha(x_{i-1})\right)=\sum\limits_{i=1}^n\left(f(t_i)-f(t_i')\right)\left(\alpha(x_i)-\alpha(x_{i-1})\right)+\frac{\epsilon}{3}

which is then less than \epsilon.

Thus we establish the equivalence of Riemann’s condition and Riemann-Stieltjes integrability, as long as the integrator \alpha is increasing.
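
Here’s a small numerical check of the equivalence (my own sketch, not from the post): for a continuous f and an increasing \alpha, the gap between upper and lower sums shrinks as the partition is refined, which is Riemann’s condition in action.

```python
# A small check (my own, not from the post): for continuous f and an
# increasing integrator alpha, the gap between upper and lower sums
# shrinks as the partition is refined: Riemann's condition in action.

def gap(f, alpha, a, b, n, samples=50):
    """U - L over n equal subintervals, estimating the max and min of
    f on each subinterval by sampling."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        vals = [f(x0 + j * (x1 - x0) / samples) for j in range(samples + 1)]
        total += (max(vals) - min(vals)) * (alpha(x1) - alpha(x0))
    return total

f = lambda x: x * x
alpha = lambda x: x ** 3    # increasing on [0, 1]
gaps = [gap(f, alpha, 0.0, 1.0, n) for n in (10, 100, 1000)]
print(gaps)  # decreasing toward 0
```

Of course sampling only estimates the suprema and infima, but for this smooth f the estimates are good enough to show the gap decaying like the mesh width.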

March 14, 2008 Posted by | Analysis, Calculus | 1 Comment

Conference and travel

One of the few things that this city gets right is its airport. Free high-speed wireless and plenty of open power taps. What more could a junkie like me want? Well, maybe for the newsstands to carry Scientific American so I could read Scott Aaronson’s article on the plane. It seems everyone at the Clifford Lectures had been reading it, even though most of us are already rather familiar with the ins and outs of quantum computation to begin with. That’s sort of the point, this year.

Yes, despite how great a topic it is, I’ve got to leave on a jet plane this morning for DC, since if I waited until the end of the conference it would be a lot more expensive, and I’m not making that sweet, sweet postdoc money yet, let alone tenure-track. Worst of all is that I have to miss Sam Lomonaco’s talk about his upcoming paper with Lou Kauffman (when are the old guard going to get blaths?). Not to worry, though. He gave me the inside scoop yesterday, and while I can’t go public with it yet I can say that they’re taking the interactions between knot theory and quantum computation in a completely new direction, and it’ll be interesting to see where that leads.

I spent much of the day shamelessly self-promoting my new paper to the assembled luminaries, especially pushing the introduction where I tie (no pun intended) my tangle program to topological quantum computation. And the group was very much inclined to think in terms of categories of tangles as well. In fact, the talks were kicked off by Phil Scott, whose topic was so close to that of John Baez’ and Mike Stay’s paper that he’s had to tweak his notes in the last couple days.

Incidentally, those of you who have been around for a while may remember him from when I talked about closed categories. I think both of us fell victim to the magic of the intarwobs back then, and overstated things a tad. I admit that there are other categories with closure but without monoidal structure; I just don’t see them arising naturally in what I do. Still, to say the definition I gave is “totally wrong” is a bit much. Actually, when he left that comment, he says he was thinking of a certain example he mentioned in his talk, which turns out to have a tensor product — we just don’t know what it looks like! And when he mentioned that example, I thought, “well that’ll show that guy who left that comment…” Ah, what a small, small world academic mathematics is. It’s all good, though.

The main lectures are being given by Samson Abramsky, and they’re straight down the lines of my own thoughts on the structure of quantum (and otherwise non-classical) information and symmetric monoidal closed categories. And they’re very accessible, so the junior I’ve been advising through his reading of the Aharonov-Jones-Landau paper was able to keep up, and probably will through much of the rest of the series. Of course, introductions were made to Sam, and maybe he’ll apply to UMBC’s computer science department next year. Have I sabotaged a poor innocent undergraduate into a life of knots, categories, and quantum computers? Horror!

But the plane boards soon, and then I bum around College Park for the day. I’ll try to get back to the expository line tomorrow.

March 13, 2008 Posted by | Uncategorized | 1 Comment

New Paper

The draft of my paper “Categorifying Coloring Numbers” is up on the arXiv! Go, download it! Paper the walls of urban buildings like it’s a rock band’s poster!

Or you could just read it, especially if you’re going to be at the University of Pennsylvania on March 19th or the University of California at Riverside on April 2nd.

March 12, 2008 Posted by | Uncategorized | 5 Comments

Bounded Variation addenda

I left a few things out of last Saturday’s post. Since I’ve spent all morning finishing off that paper (I’ll post the arXiv link tomorrow when it shows up) I’m sort of mathed out for the moment. I’ll just tack up these addenda and take a nap or something to brace for tomorrow’s Clifford Lectures (which is the only day of them I’ll be able to see, due to airline pricing).

Okay, so we said that we can represent any function \alpha of bounded variation as the difference \alpha_+-\alpha_- of two increasing functions. But we should notice here that this decomposition is by no means unique. In fact, if \beta is an increasing function, then we can also write \alpha=(\alpha_++\beta)-(\alpha_-+\beta), and get a different representation of \alpha.

Usually nonuniqueness gets messy, but there’s a way this can come in handy. If we pick \beta to be strictly increasing, then so will \alpha_++\beta and \alpha_-+\beta be. So any function of bounded variation can be written as the difference between two strictly increasing functions. This restriction may be useful in some situations.

It also turns out that wherever the function \alpha is continuous, then so will its variation V_\alpha be. Thus any continuous function of bounded variation can be written as the difference of two continuous, (strictly) increasing functions. I’ll leave this fact to you, though it’s not really hard.

Now let’s look at bounded increasing functions for a moment. Such a function f might jump up at certain points in its domain, like the Heaviside function H(x) that sends all negative numbers to {0} and all nonnegative numbers to 1. However, a monotone function can’t have any other kinds of discontinuities. Further, for each n there can be only finitely many jumps of height greater than \frac{1}{n} (otherwise the function would take arbitrarily large values and no longer be bounded), so there can be only a countable number of jumps in a finite interval.

So increasing functions can only have a countable number of jump discontinuities. But any function of bounded variation is the difference of two increasing functions. Thus any function of bounded variation can only have a countable number of discontinuities, where at worst the function jumps up or down by a finite amount at each one. The only other sort of discontinuity is a point where the function has a limit, but takes a different value. For instance, H(x)-(-H(-x)) takes the value 1 at every positive or negative number, but the value 2 at {0}.
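
A two-line illustration of that last example (mine, not in the post): the function H(x)-(-H(-x)) is just H(x)+H(-x), which is 1 away from zero but 2 at zero itself.

```python
# A tiny illustration (mine) of the removable-style discontinuity:
# H(x) - (-H(-x)) is just H(x) + H(-x), which takes the value 1 away
# from zero but 2 at zero, where H is the Heaviside step function.
def H(x):
    return 1 if x >= 0 else 0

g = lambda x: H(x) + H(-x)
print(g(-0.5), g(0.0), g(0.5))  # 1 2 1
```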

March 11, 2008 Posted by | Analysis | 8 Comments

Upper and Lower Integrals

Way back when, we talked about Darboux sums, where we used a particular recipe to pick the tags. Specifically, we defined the upper sum by picking a local maximum of f in each subinterval as our tag, and the lower sum similarly. Today, let’s consider how this works with Riemann-Stieltjes sums, and specifically with an increasing integrator \alpha.

So given a partition x, we define the upper and lower Riemann-Stieltjes sums as follows:

\displaystyle U_{\alpha,x}(f)=\sum\limits_{i=1}^n\max\limits_{x_{i-1}\leq t\leq x_i}\{f(t)\}\left(\alpha(x_i)-\alpha(x_{i-1})\right)
\displaystyle L_{\alpha,x}(f)=\sum\limits_{i=1}^n\min\limits_{x_{i-1}\leq t\leq x_i}\{f(t)\}\left(\alpha(x_i)-\alpha(x_{i-1})\right)

Now since we’ve chosen \alpha to be increasing we can see that \alpha(x_i)-\alpha(x_{i-1})\geq0. Therefore we can find the inequalities

\min\limits_{x_{i-1}\leq t\leq x_i}\{f(t)\}\left(\alpha(x_i)-\alpha(x_{i-1})\right)\leq f(t_i)\left(\alpha(x_i)-\alpha(x_{i-1})\right)\leq\max\limits_{x_{i-1}\leq t\leq x_i}\{f(t)\}\left(\alpha(x_i)-\alpha(x_{i-1})\right)

for any possible tag t_i\in\left[x_{i-1},x_i\right]. And so any Riemann-Stieltjes sum for any collection of tags in the partition x lies between the lower and upper sums: L_{\alpha,x}(f)\leq f_{\alpha,x}\leq U_{\alpha,x}(f). Notice that we need \alpha to be increasing here — if not, we can construct some pathological function f that makes any combination of these inequalities fail.
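
Here’s a quick sanity check of the sandwich property (my own sketch, with a made-up f and \alpha): for an increasing integrator, any tagged Riemann-Stieltjes sum lands between the lower and upper sums for the same partition.

```python
# A quick sanity check (mine, not from the post): with an increasing
# integrator, any tagged Riemann-Stieltjes sum lands between the lower
# and upper sums for the same partition.
import math

f = lambda t: math.cos(5 * t)
alpha = lambda t: t + t * t           # increasing on [0, 1]
pts = [i / 20 for i in range(21)]     # uniform partition of [0, 1]

def rs(tags):
    """Riemann-Stieltjes sum of f d(alpha) for the given tags."""
    return sum(f(t) * (alpha(x1) - alpha(x0))
               for x0, x1, t in zip(pts, pts[1:], tags))

def pick(best):
    """Tag each subinterval at a sampled extremum (max or min) of f."""
    return [best((x0 + j * (x1 - x0) / 50 for j in range(51)), key=f)
            for x0, x1 in zip(pts, pts[1:])]

upper = rs(pick(max))
lower = rs(pick(min))
middle = rs([(x0 + x1) / 2 for x0, x1 in zip(pts, pts[1:])])
print(lower <= middle <= upper)  # True
```

The comparison works subinterval by subinterval, exactly as in the inequalities above, because every factor \alpha(x_i)-\alpha(x_{i-1}) is nonnegative.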

Now the next step in Darboux integration was noting that any refinement of a partition drops the upper sum and raises the lower sum. Just like then, we can simply consider the process of adding a single new partition point, since any further refinement is just a sequence of new partition points. Then since any two partitions have a common refinement, we will see that the upper sum for any partition is greater than the lower sum for any other partition.

As before, adding a new point c between x_{i-1} and x_i replaces the ith term in the sum with two terms:

\max\limits_{x_{i-1}\leq t\leq c}f(t)\left(\alpha(c)-\alpha(x_{i-1})\right)+\max\limits_{c\leq t\leq x_i}f(t)\left(\alpha(x_i)-\alpha(c)\right)

Each of the two maxima is at most the one maximum we had before, so we find

\max\limits_{x_{i-1}\leq t\leq c}f(t)\left(\alpha(c)-\alpha(x_{i-1})\right)+\max\limits_{c\leq t\leq x_i}f(t)\left(\alpha(x_i)-\alpha(c)\right)\leq\max\limits_{x_{i-1}\leq t\leq x_i}f(t)\left(\alpha(x_i)-\alpha(c)+\alpha(c)-\alpha(x_{i-1})\right)

which establishes this inequality. Notice that again we’ve had to multiply by differences between values of \alpha, and so as above this inequality hinges on the fact that our integrator is monotonically increasing.

Now that we know upper sums are always greater than lower sums (for increasing integrators!) we know that if they meet at all, it will be at the bottom of all the upper sums and the top of all the lower sums. Thus we define the “upper Stieltjes integral” \overline{I}_{\alpha,\left[a,b\right]}(f) as the greatest lower bound of all the upper sums. Notice that if any lower sum exists then it’s a lower bound for the set of upper sums, and so Dedekind completeness tells us that this upper integral exists. Similarly, we define the lower integral \underline{I}_{\alpha,\left[a,b\right]}(f) as the least upper bound of all the lower sums, with similar comments on its existence.

Since the upper sums are greater than the lower sums, we can see that the upper integral will be greater than the lower integral. Indeed, if \epsilon>0 is given then there is some partition x so that U_{\alpha,x}(f)\leq\overline{I}_{\alpha,\left[a,b\right]}(f)+\epsilon, since the upper integral is a greatest lower bound. Then U_{\alpha,x}(f) is an upper bound for the lower sums, and so \underline{I}_{\alpha,\left[a,b\right]}(f)\leq\overline{I}_{\alpha,\left[a,b\right]}(f)+\epsilon. Since \epsilon was arbitrary, the lower integral is less than or equal to the upper integral.

Upper and lower integrals are in some ways as nice as Riemann-Stieltjes integrals. For instance, they’re both linear over the region of integration:

\overline{I}_{\alpha,\left[a,b\right]}(f)=\overline{I}_{\alpha,\left[a,c\right]}(f)+\overline{I}_{\alpha,\left[c,b\right]}(f)
\underline{I}_{\alpha,\left[a,b\right]}(f)=\underline{I}_{\alpha,\left[a,c\right]}(f)+\underline{I}_{\alpha,\left[c,b\right]}(f)

However, the upper integral is only convex over its integrand, while the lower integral is concave:

\overline{I}_{\alpha,\left[a,b\right]}(f+g)\leq\overline{I}_{\alpha,\left[a,b\right]}(f)+\overline{I}_{\alpha,\left[a,b\right]}(g)
\underline{I}_{\alpha,\left[a,b\right]}(f+g)\geq\underline{I}_{\alpha,\left[a,b\right]}(f)+\underline{I}_{\alpha,\left[a,b\right]}(g)

March 10, 2008 Posted by | Analysis, Calculus | 6 Comments

Functions of Bounded Variation III

I’ve been busy the last couple of days, so this post got delayed a bit.

We continue our study of functions of bounded variation by showing that total variation is “additive” over its interval. That is, if f is of bounded variation on \left[a,b\right] and c\in\left[a,b\right], then f is of bounded variation on \left[a,c\right] and on \left[c,b\right]. Further, we have V_f(a,b)=V_f(a,c)+V_f(c,b).

First, let’s say we’ve got a partition (x_0,...,x_m) of \left[a,c\right] and a partition (x_m,...,x_n) of \left[c,b\right]. Then together they form a partition of \left[a,b\right]. The sum for both partitions together must be bounded by V_f(a,b), and so the sum of each partition is also bounded by this total variation. Thus f is of bounded variation on each subinterval. This also establishes the inequality V_f(a,c)+V_f(c,b)\leq V_f(a,b).

On the other hand, given any partition at all of \left[a,b\right] we can add the point c to it. This may split one of the parts of the partition, and thus increase the sum for that partition. Then we can break this new partition into a partition for \left[a,c\right] and a partition for \left[c,b\right]. The first will have a sum bounded by V_f(a,c), and the second a sum bounded by V_f(c,b). Thus we find that V_f(a,b)\leq V_f(a,c)+V_f(c,b).

So, with both of these inequalities we have established the equality we wanted. Now we can define the “variation function” V on the interval \left[a,b\right]. Just set V(x)=V_f(a,x) (and V(a)=0). It turns out that both V and D=V-f are increasing functions on \left[a,b\right].

Indeed, given points x<y in \left[a,b\right] we can see that V_f(a,y)=V_f(a,x)+V_f(x,y), and so V(x)\leq V(y). On the other hand, D(y)-D(x)=V(y)-V(x)-(f(y)-f(x))=V_f(x,y)-(f(y)-f(x)). But by definition we must have f(y)-f(x)\leq V_f(x,y)! And so D(x)\leq D(y).

Given a function f of bounded variation, we have constructed two increasing functions V and D. It is easily seen that f=V-D, so any function of bounded variation is the difference between two increasing functions. On the other hand, we know that increasing functions are of bounded variation. And we also know that the difference of two functions of bounded variation is also of bounded variation. And so the difference between two increasing functions is a function of bounded variation. Thus this condition is both necessary and sufficient!
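
The construction is easy to watch numerically. Here’s a sketch (my own example, not the post’s): approximate the variation function V(x)=V_f(a,x) on a grid and check that both V and D=V-f increase.

```python
# A numerical sketch (mine, not the post's): build the variation function
# V(x) = V_f(a, x) on a grid and check that both V and D = V - f increase.
import math

def variation(f, a, x, n=2000):
    """Approximate V_f(a, x) using a fine uniform partition of [a, x]."""
    h = (x - a) / n
    return sum(abs(f(a + (i + 1) * h) - f(a + i * h)) for i in range(n))

f = lambda x: math.sin(3 * x)            # of bounded variation on [0, 2]
grid = [i / 10 for i in range(21)]
V = [variation(f, 0.0, x) for x in grid]
D = [v - f(x) for v, x in zip(V, grid)]  # D = V - f, so f = V - D

# both sequences should be (weakly) increasing, up to discretization error
print(all(a <= b + 1e-3 for a, b in zip(V, V[1:])),
      all(a <= b + 1e-3 for a, b in zip(D, D[1:])))  # True True
```

Notice that D stays flat exactly on the stretches where f is increasing, which is the inequality f(y)-f(x)\leq V_f(x,y) holding with equality.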

Even better, since many situations behave nicely with respect to differences of functions, it’s often enough to understand how increasing functions behave. Then we can understand the behavior of functions of bounded variation just by taking differences. For example, we started talking about functions of bounded variation to discuss integrators \alpha in Riemann-Stieltjes integrals. If we study these integrals when \alpha is increasing, then we can use the linearity of the integral with respect to the integrator to understand what happens when \alpha is of bounded variation!

March 9, 2008 Posted by | Analysis | 5 Comments