We’ve seen that commutativity fails for conditionally convergent series. It turns out, though, that things are much nicer for absolutely convergent series. Any rearrangement of an absolutely convergent series is again absolutely convergent, and to the same limit.
Let $\sum_{k=0}^\infty a_k$ be an absolutely convergent series, and let $p:\mathbb{N}\rightarrow\mathbb{N}$ be a bijection. Define the rearrangement $a'_k = a_{p(k)}$.
Now given an $\epsilon > 0$, absolute convergence tells us we can pick an $N$ so that any tail of the series of absolute values past that point is small. That is, for any $n \geq N$ we have

$\displaystyle\sum_{k=n}^\infty |a_k| < \epsilon$
Now for $k < N$, the function $p^{-1}(k)$ takes only a finite number of values (the inverse function exists because $p$ is a bijection). Let $M$ be the largest such value. Thus if $m > M$ we will know that $p(m) \geq N$. Then for any such $m$ we have

$\displaystyle\sum_{k=m}^\infty |a_{p(k)}| \leq \sum_{k=N}^\infty |a_k|$
and we know that the sum on the right is finite by the assumption of absolute convergence. Thus the tail of the series of $|a_{p(k)}|$, and thus the rearranged series itself, must converge. Now a similar argument to the one we used when we talked about associativity for absolutely convergent series shows that the rearranged series has the same sum as the original.
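To make this concrete, here is a quick numerical sketch in Python. The series $a_k = (-1/2)^k$, which converges absolutely to $2/3$, is my own example, as is the particular bijection (block $m$ contributes one even index and two odd indices):

```python
# Sanity check: rearranging an absolutely convergent series leaves its sum
# alone.  Example series (my choice): a_k = (-1/2)^k, which sums to 2/3.
def a(k):
    return (-0.5) ** k

# A genuine bijection p of the naturals: block m contributes the even index
# 2m and the odd indices 4m+1 and 4m+3, giving the order 0, 1, 3, 2, 5, 7, 4, ...
def rearranged_indices(blocks):
    order = []
    for m in range(blocks):
        order.extend([2 * m, 4 * m + 1, 4 * m + 3])
    return order

original = sum(a(k) for k in range(120))                # first 120 terms in order
rearranged = sum(a(k) for k in rearranged_indices(40))  # first 120 rearranged terms

print(original, rearranged)  # both agree with 2/3 to machine precision
```

Any finite truncation of the rearranged series omits only a tail of tiny absolute values, which is exactly why the two partial sums land on the same limit.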
This is well and good, but it still misses something. We can’t handle reorderings that break up the order structure. For example, we might ask to add up all the odd terms, and then all the even terms. There is no bijection that handles this situation. And yet we can still make it work.
Unfortunately, I arrive in Maryland having left my references back in New Orleans. For now, I’ll simply assert that for absolutely convergent series we can perform these more general rearrangements, though I’ll patch this sometime.
We’ve seen that associativity may or may not hold for infinite sums, but it can be improved with extra assumptions. As it happens, commutativity breaks down as well, though the story is a bit clearer here.
First we should be clear about what we’re doing. When we add up a finite list of real numbers, we can reorder the list in many ways. In fact, reorderings of $n$ numbers form the symmetric group $S_n$. If we look back at our group theory, we see that we can write any element in this group as a product of transpositions which swap neighboring entries in the list. Thus since the sum of two numbers is invariant under such a swap ($a + b = b + a$) we can then rearrange any finite list of numbers and get the same sum every time.
Now we’re not concerned about finite sums, but about infinite sums. As such, we consider all possible rearrangements, bijections $p:\mathbb{N}\rightarrow\mathbb{N}$, which make up the “infinite symmetric group” $S_\infty$. Now we might not be able to effect every rearrangement by a finite number of transpositions, and commutativity might break down.
If we have a series with terms $a_k$ and a bijection $p:\mathbb{N}\rightarrow\mathbb{N}$, then we say that the series with terms $a'_k = a_{p(k)}$ is a rearrangement of the first series. If, on the other hand, $p$ is merely injective, then we say that the new series is a subseries of the first one.
Now, if $\sum a_k$ is only conditionally convergent, I say that we can rearrange the series to give any value we want! In fact, given $x \leq y$ (where these could also be $\pm\infty$) there will be a rearrangement whose partial sums $s'_n$ satisfy

$\displaystyle\liminf_{n\to\infty} s'_n = x\qquad\limsup_{n\to\infty} s'_n = y$
First we throw away any zero terms in the series, since those won’t affect questions of convergence, or the value of the series if it does converge. Then let $p_k$ be the $k$th positive term in the sequence $a_n$, and let $-q_k$ be the $k$th negative term (so each $q_k > 0$).
The two series with positive terms $\sum p_k$ and $\sum q_k$ both diverge. Indeed, if one converged but the other did not, then the original series would diverge. On the other hand, if they both converged then the original series would converge absolutely. Conditional convergence happens when the subseries of positive terms and the subseries of negative terms just manage to balance each other out.
Now we take two sequences $x_n$ and $y_n$ converging to $x$ and $y$ respectively. Since the series of positive terms diverges, its partial sums will eventually exceed any positive number. We can take just enough of them (say $m_1$ of them) so that

$\displaystyle p_1 + \dots + p_{m_1} > y_1$
Similarly, we can then take just enough negative terms (say $n_1$ of them) so that

$\displaystyle p_1 + \dots + p_{m_1} - q_1 - \dots - q_{n_1} < x_1$
Now take just enough of the remaining positive terms so that

$\displaystyle p_1 + \dots + p_{m_1} - q_1 - \dots - q_{n_1} + p_{m_1+1} + \dots + p_{m_2} > y_2$
and enough negatives so that

$\displaystyle p_1 + \dots + p_{m_1} - q_1 - \dots - q_{n_1} + p_{m_1+1} + \dots + p_{m_2} - q_{n_1+1} - \dots - q_{n_2} < x_2$
and so on and so forth. This gives us a rearrangement of the terms of the series.
Each time we add positive terms we come within $p_{m_k}$ of $y_k$, and each time we add negative terms we come within $q_{n_k}$ of $x_k$. But since the original sequence of terms must be converging to zero (otherwise the series couldn’t converge), so must the $p_{m_k}$ and $q_{n_k}$ be converging to zero. And the sequences $x_k$ and $y_k$ are converging to $x$ and $y$.
It’s straightforward from here to show that the limits superior and inferior of the partial sums of the rearranged series are as we claim. In particular, we can set them both equal to the same number and get that number as the sum of the rearranged series. So for conditionally convergent series, the commutativity property falls apart most drastically.
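The construction above can be sketched numerically. The conditionally convergent series and the target are my own choices here: the alternating harmonic series $1 - \frac{1}{2} + \frac{1}{3} - \dots$ rearranged greedily toward the single value $x = y = 0.3$:

```python
# Greedy rearrangement of the alternating harmonic series (conditionally
# convergent) so its partial sums converge to a chosen target value.
# Positive terms are 1/(2i+1); negative terms are -1/(2j+2).
def rearrange_to(target, steps):
    total = 0.0
    i = j = 0  # counts of positive/negative terms used so far
    for _ in range(steps):
        if total <= target:
            total += 1.0 / (2 * i + 1)  # next unused positive term
            i += 1
        else:
            total -= 1.0 / (2 * j + 2)  # next unused negative term
            j += 1
    return total, i, j

total, i, j = rearrange_to(0.3, 100000)
print(total)  # within the size of the last term used of 0.3
```

Each crossing of the target overshoots by at most the size of the term just added, and those terms shrink to zero, which is the heart of the proof.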
Today, Sam Lomonaco and Louis Kauffman posted to the arXiv a paper on “Quantum Knots and Mosaics”. I had the pleasure of a sneak preview back in March. Here’s what I said then (I haven’t had a chance to read the paper as posted, so some of this may be addressed):
About half the paper consists of setting up definitions of a mosaic and the Reidemeister moves. This concludes with the conjecture that before you allow superpositions the mosaic framework captures all of knot theory.
The grading by the size of the mosaic leads to an obvious conjecture: there exist mosaic knots which are mosaic equivalent, but which require arbitrarily many expansions. This is analogous to the same fact about crossing numbers.
Obviously, I’d write these combinatorial frameworks as categories with the mosaics as objects and the morphisms generated by the mosaic moves. Superpositions just seem to be the usual passage from a set to the vector space on that basis. See my new paper for how I say this for regular knots and Reidemeister moves.
Then (like I say in the paper) we want to talk about mosaic “covariants”. I think this ends up giving your notion of invariant after we decategorify (identify isomorphic outputs).
The only thing I’m wondering about (stopping shy of saying you two are “wrong”) is the quantum moves. The natural thing would be to go from the “group” (really it’s a groupoid, like I said before) of moves to its linearization. That is, we should allow the “sum” of two moves as a move. This splits a basis mosaic input into a superposition.
In particular, the “surprising” result you state that one quantum mosaic is not quantum equivalent to the other must be altered. There is clearly a move in my view taking the left to the right. “Equivalence” is then the statement that two quantum mosaics are connected by an *invertible* move. I’m not sure that the move from left to right is invertible yet, but I think it is.
I’m leaving for DC soon, and may not have internet access all day. So you get this now!
We’ve seen that associativity doesn’t hold for infinite sums the way it does for finite sums. We can always “add parentheses” to a convergent sequence, but we can’t always remove them.
The first example we mentioned last time. Consider the series with terms $a_k = (-1)^k$:

$\displaystyle\sum_{k=0}^\infty (-1)^k = 1 - 1 + 1 - 1 + \dots$
Now let’s add parentheses using the sequence $d(j) = 2j + 1$. Then $b_j = a_{2j} + a_{2j+1} = 1 - 1 = 0$. That is, we now have the series

$\displaystyle\sum_{j=0}^\infty b_j = 0 + 0 + 0 + \dots$
So the resulting series does converge. However, the original series can’t converge.
The obvious fault is that the terms don’t get smaller. And we know that $\lim\limits_{k\to\infty} a_k$ must be zero, or else we’ll have trouble with Cauchy’s condition. With the parentheses in place the terms go to zero, but when we remove them this condition can fail. And it turns out there’s just one more condition we need so that we can remove parentheses.
So let’s consider the two series with terms $a_k$ and $b_j$, where the first is obtained from the second by removing parentheses using the function $d(j)$. Assume that $\lim\limits_{k\to\infty} a_k = 0$, and also that there is some $N$ so that each of the $b_j$ is a sum of fewer than $N$ of the $a_k$. That is, $d(j) - d(j-1) < N$. Then the series either both diverge or both converge, and if they converge they have the same sum.
We set up the sequences of partial sums

$\displaystyle s_n = \sum_{k=0}^n a_k\qquad t_j = \sum_{i=0}^j b_i$
We know from last time that $t_j = s_{d(j)}$, and so if the first series converges then the second one must as well. We need to show that if $\lim\limits_{j\to\infty} t_j = s$ exists, then we also have $\lim\limits_{n\to\infty} s_n = s$.
To this end, pick an $\epsilon > 0$. Since the sequence of $t_j$ converges to $s$, we can choose some $J$ so that $|t_j - s| < \frac{\epsilon}{2}$ for all $j \geq J$. Since the sequence of terms $a_k$ converges to zero, we can increase $J$ until we also have $|a_k| < \frac{\epsilon}{2N}$ for all $k > d(J)$.
Now take any $n > d(J)$. Then $n$ falls between $d(j)$ and $d(j+1)$ for some $j \geq J$. We can see that $j + 1 > J$, and that $n$ is definitely above $d(J)$. So the partial sum $s_n$ is the sum of all the $a_k$ up through $k = d(j+1)$, minus those terms past $n$. That is

$\displaystyle s_n = t_{j+1} - \sum_{k=n+1}^{d(j+1)} a_k$
But this first term is just the partial sum $t_{j+1}$, while each term of the second sum is bounded in size by our assumptions above. We check

$\displaystyle |s_n - s| \leq |t_{j+1} - s| + \sum_{k=n+1}^{d(j+1)} |a_k|$
But since $n$ is between $d(j)$ and $d(j+1)$, there must be fewer than $N$ terms in this last sum, all of which are bounded by $\frac{\epsilon}{2N}$. So we see

$\displaystyle |s_n - s| < \frac{\epsilon}{2} + N\frac{\epsilon}{2N} = \epsilon$
and thus we have established the limit.
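Here is a numerical sketch of the theorem, with my own example: the alternating harmonic series has terms converging to zero, and grouping it in pairs keeps every block below the bound $N = 3$, so both series should converge to the same sum, $\ln 2$:

```python
import math

# a_k = (-1)^k/(k+1): terms go to zero, and the series sums to ln 2.
def a(k):
    return (-1) ** k / (k + 1)

n = 100000
# Ungrouped partial sum s_{2n-1}, and grouped partial sum t_{n-1} where
# b_j = a_{2j} + a_{2j+1} groups the terms in pairs (block sizes are
# bounded, so the theorem applies).
s = sum(a(k) for k in range(2 * n))
t = sum(a(2 * j) + a(2 * j + 1) for j in range(n))

print(s, t, math.log(2))  # all close to ln 2
```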
As we’ve said before, the real numbers are a topological field. The fact that it’s a field means, among other things, that it comes equipped with an associative notion of addition. That is, for any finite sum we can change the order in which we perform the additions (though not the order of the terms themselves — that’s commutativity).
The topology of the real numbers means we can set up sums of longer and longer sequences of terms and talk sensibly about whether these sums — these series — converge or not. Unfortunately, this topological concept ends up breaking the algebraic structure in some cases. We no longer have the same freedom to change the order of summations.
When we write down a series, we’re implicitly including parentheses all the way to the left. Consider the partial sums:

$\displaystyle s_n = \sum_{k=0}^n a_k = (\dots((a_0 + a_1) + a_2) + \dots) + a_n$
But what if we wanted to add up the terms in a different order? Say we want to write

$\displaystyle (a_0 + a_1) + a_2 + (a_3 + a_4)$
Well this is still a left-parenthesized expression, it’s just that the terms are not the ones we looked at before. If we write $b_0 = a_0 + a_1$, $b_1 = a_2$, and $b_2 = a_3 + a_4$ then we have

$\displaystyle (b_0 + b_1) + b_2$
So this is actually a partial sum of a different (though related) series whose terms are finite sums of terms from the first series.
More specifically, let’s choose a sequence of stopping points: an increasing sequence of natural numbers $d(j)$. In the example above we have $d(0) = 1$, $d(1) = 2$, and $d(2) = 4$. Now we can define a new sequence

$\displaystyle b_j = \sum_{k=d(j-1)+1}^{d(j)} a_k$

where we use the convention $d(-1) = -1$ so that $b_0 = a_0 + \dots + a_{d(0)}$.
Then the sequence of partial sums of this series is a subsequence of the $s_n$. Specifically

$\displaystyle t_j = \sum_{i=0}^j b_i = s_{d(j)}$
We say that the sequence $b_j$ is obtained from the sequence $a_k$ by “adding parentheses” (most clearly notable in the above expression for $b_j$). Alternately, we say that $a_k$ is obtained from $b_j$ by “removing parentheses”.
If the sequence $s_n$ converges, so must the subsequence $t_j = s_{d(j)}$, and moreover to the same limit. That is, if the series $\sum a_k$ converges to $s$, then any series obtained by adding parentheses also converges to $s$.
However, convergence of a subsequence doesn’t imply convergence of the sequence. For example, consider $a_k = (-1)^k$ and use $d(j) = 2j + 1$. Then $s_n$ jumps back and forth between one and zero, but $t_j$ is identically zero. So just because a series converges, another one obtained by removing parentheses may not converge.
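A tiny Python check of this example:

```python
# a_k = (-1)^k with d(j) = 2j+1: the grouped series is identically zero,
# while the ungrouped partial sums oscillate between 1 and 0 forever.
a = [(-1) ** k for k in range(20)]

s = []  # partial sums s_n of the ungrouped series
running = 0
for term in a:
    running += term
    s.append(running)

# grouped partial sums t_J, where b_j = a_{2j} + a_{2j+1}
t = [sum(a[2 * j] + a[2 * j + 1] for j in range(J + 1)) for J in range(10)]

print(s)  # [1, 0, 1, 0, ...] -- no limit
print(t)  # [0, 0, 0, ...] -- converges to zero
```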
Now I want to bring out two tests that will tell us about absolute convergence or (unconditional) divergence of an infinite series $\sum a_k$. As such they’ll tell us nothing about conditionally convergent series.
First is the ratio test. We take the ratio of one term in the series to the previous one and define the limits superior and inferior

$\displaystyle \rho = \limsup_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|\qquad r = \liminf_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|$
Now if $\rho < 1$ then the series converges absolutely. If $r > 1$ then the series diverges. But if $r \leq 1 \leq \rho$ the test fails and we get no result.
In the first case, pick $x$ to be a number so that $\rho < x < 1$. Then there is some $N$ so that $x$ is an upper bound for the sequence of ratios past $N$. For large enough $k$, this means

$\displaystyle |a_k| \leq |a_N|x^{k-N}$

and comparing with the convergent geometric series $\sum x^k$ establishes absolute convergence.
On the other hand, if $r > 1$ then eventually $|a_{k+1}| > |a_k|$, so the terms of the series are getting bigger and bigger and bigger. But this would throw a monkey wrench into Cauchy’s condition for convergence of the series.
As for the root test, we will consider the sequence $\sqrt[k]{|a_k|}$ and define

$\displaystyle \rho = \limsup_{k\to\infty}\sqrt[k]{|a_k|}$
If $\rho < 1$ then the series converges absolutely. If $\rho > 1$ then the series diverges. And if $\rho = 1$ the test is inconclusive.
In the first case, as we did for the ratio test, pick $x$ so that $\rho < x < 1$. Then above some $N$ we have $\sqrt[k]{|a_k|} < x$, so $|a_k| < x^k$ and the comparison test works straight away. On the other hand, if $\rho > 1$ then $|a_k| > 1$ infinitely often, and Cauchy’s criterion falls apart again.
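As a numerical sketch (the series $a_k = k^2/2^k$ is my own example): both the ratios and the $k$th roots tend to $\frac{1}{2} < 1$, and the series indeed converges (to $6$, as it happens):

```python
# Ratio and root tests on a_k = k^2 / 2^k.
def a(k):
    return k ** 2 / 2 ** k

ratios = [a(k + 1) / a(k) for k in range(1, 60)]  # -> 1/2
roots = [a(k) ** (1.0 / k) for k in range(1, 60)]  # -> 1/2 (more slowly)

total = sum(a(k) for k in range(1, 200))  # the full sum is 6

print(ratios[-1], roots[-1])
print(total)
```

The slow convergence of the roots is typical: the root test is stronger in principle, but the ratio often settles down faster in practice.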
As we look at sequences (and nets) of real numbers (and more general ordered spaces) a little more closely, we’ll occasionally need the finer notion of a “limit superior” (“limit inferior”). This is essentially the largest (smallest) value that a sequence takes in its tail.
In general, let $x_\alpha$ be a net (indexed by the directed set $A$) in some ordered space $P$. Then we can consider the “tail” of the index set consisting of all indices above a given index $\alpha$. We then ask what the least upper bound of the net is on this tail: $\sup\limits_{\beta\geq\alpha} x_\beta$. Alternately, we consider the greatest lower bound on the tail: $\inf\limits_{\beta\geq\alpha} x_\beta$.
Now as we move to tails further and further out in the net, the least upper bound (greatest lower bound) may drop (rise) as we pass maxima (minima). That is, the supremum (infimum) of a set bounds the suprema (infima) of its subsets. So? So if we pass such a maximum it clearly doesn’t affect the long-run behavior of the net, and we want to forget it. So we’ll take the lowest of the suprema of tails (the highest of the infima of tails).
Thus we finally come to defining the limit superior

$\displaystyle\limsup_{\alpha\in A} x_\alpha = \inf_{\alpha\in A}\sup_{\beta\geq\alpha} x_\beta$
and the limit inferior

$\displaystyle\liminf_{\alpha\in A} x_\alpha = \sup_{\alpha\in A}\inf_{\beta\geq\alpha} x_\beta$
Now these are related to our usual concept of a limit. First of all,

$\displaystyle\liminf_{\alpha\in A} x_\alpha \leq \limsup_{\alpha\in A} x_\alpha$
and the net converges if and only if these two are both finite and equal. In this case, the limit is this common finite value. If they both go to infinity, the net diverges to infinity, and similarly for negative infinity. If they’re not equal, then the net bounces around between the two values.
If we’re considering a sequence of real numbers, then we’re taking a bunch of infima and suprema, all of which are guaranteed to exist in the extended reals by completeness. Thus the limits superior and inferior of any sequence of real numbers always exist, though they may be infinite.
As an illustrative example, work out the limits superior and inferior of the sequence $a_n = (-1)^n$. Show that this sequence diverges, but does so by oscillating rather than by blowing up.
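To see the tail suprema and infima in action numerically, here is a sketch with a slight variant of that sequence (my own choice), $x_n = (-1)^n\left(1 + \frac{1}{n+1}\right)$, where the tails visibly shrink:

```python
# Tail suprema and infima of x_n = (-1)^n * (1 + 1/(n+1)).
xs = [(-1) ** n * (1 + 1 / (n + 1)) for n in range(1000)]

tails = range(0, 901, 100)
tail_sups = [max(xs[m:]) for m in tails]  # decrease toward limsup = 1
tail_infs = [min(xs[m:]) for m in tails]  # increase toward liminf = -1

print(tail_sups)
print(tail_infs)
```

The suprema only ever drop and the infima only ever rise as we pass to later tails, exactly as described above.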
Finally, note that we can consider a function $f$ defined on a ray $[a,\infty)$ to be a net on that ray, considered as a directed subset of real numbers. Then we get limits superior and inferior of $f(x)$ as $x$ goes to infinity, just as for sequences.
We can now use Abel’s partial summation formula to establish a couple other convergence tests.
If $a_n$ is a sequence whose sequence of partial sums $A_n = \sum_{k=0}^n a_k$ form a bounded sequence, and if $b_n$ is a decreasing sequence converging to zero, then the series $\sum a_nb_n$ converges. Indeed, then the sequence $A_nb_{n+1}$ also converges to zero, so we just need to consider the series $\sum A_n(b_n - b_{n+1})$.
The bound $|A_n| \leq M$ and the fact that $b_n$ is decreasing imply that $|A_n(b_n - b_{n+1})| \leq M(b_n - b_{n+1})$, and the series $\sum M(b_n - b_{n+1})$ clearly converges, since it telescopes to $Mb_0$. Thus by the comparison test, the series $\sum A_n(b_n - b_{n+1})$ converges absolutely, and our result follows. This is called Dirichlet’s test for convergence.
Let’s impose a bit more of a restriction on the $a_n$ and insist that the sequence $A_n$ of partial sums actually converge. Correspondingly, we can weaken our restriction on $b_n$ and require that it be monotonic and convergent, but not specifically decreasing to zero. These two changes balance out and we still find that $\sum a_nb_n$ converges. Indeed, the sequence $A_nb_{n+1}$ converges automatically as the product of two convergent sequences, and the rest is similar to the proof in Dirichlet’s test. We call this Abel’s test for convergence.
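Here is a numerical sketch of Dirichlet’s test, with my own example $\sum \sin(n)/n$: the partial sums of $\sin(n)$ stay bounded (by $1/\sin\frac{1}{2}$, via the closed form for $\sum_{n=1}^N \sin(n)$), while $\frac{1}{n}$ decreases to zero. The test only promises convergence; the value happens to be $\frac{\pi-1}{2}$.

```python
import math

# Dirichlet's test: a_n = sin(n) has bounded partial sums, and b_n = 1/n
# decreases to zero, so the series sum sin(n)/n converges.
A = 0.0          # running partial sum of sin(n)
max_abs_A = 0.0  # largest |A_n| seen so far
s = 0.0          # running partial sum of sin(n)/n
for n in range(1, 200001):
    A += math.sin(n)
    max_abs_A = max(max_abs_A, abs(A))
    s += math.sin(n) / n

print(max_abs_A, 1 / math.sin(0.5))  # the bound is respected
print(s, (math.pi - 1) / 2)          # the series tends to (pi - 1)/2
```

Note that $\sum \sin(n)/n$ is not absolutely convergent, so neither the comparison test alone nor the ratio and root tests could have detected its convergence.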