The Unapologetic Mathematician

Mathematics for the interested outsider

Commutativity in Series II

We’ve seen that commutativity fails for conditionally convergent series. It turns out, though, that things are much nicer for absolutely convergent series. Any rearrangement of an absolutely convergent series is again absolutely convergent, and to the same limit.

Let \sum_{k=0}^\infty a_k be an absolutely convergent series, and let p:\mathbb{N}\rightarrow\mathbb{N} be a bijection. Define the rearrangement b_k=a_{p(k)}.

Now given an \epsilon>0, absolute convergence tells us we can pick an N so that any tail of the series of absolute values past that point is small. That is, for any n\geq N we have

\displaystyle\sum\limits_{k=n}^\infty\left|a_k\right|<\epsilon
Now for 0\leq n\leq N, the function p^{-1} takes only a finite number of values (the inverse function exists because p is a bijection). Let M be the largest such value. Thus if m>M we will know that p(m)>N. Then for any such m we have

\displaystyle\sum\limits_{j=M+1}^m\left|b_j\right|=\sum\limits_{j=M+1}^m\left|a_{p(j)}\right|\leq\sum\limits_{k=N+1}^\infty\left|a_k\right|
and we know that the sum on the right is finite by the assumption of absolute convergence. Thus the tail of the series of b_j — and thus the series itself — must converge. Now a similar argument to the one we used when we talked about associativity for absolutely convergent series shows that the rearranged series has the same sum as the original.
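As a quick numerical check (a sketch, not a proof), here is the claim in Python: the absolutely convergent series with a_k=(-1)^k/2^k, rearranged by the bijection that swaps each even index with the following odd one, gives the same sum.

```python
# Numerical sketch, not a proof: a rearrangement of an absolutely
# convergent series converges to the same sum.  Here a_k = (-1)^k / 2^k,
# and the bijection p swaps each even index with the following odd one.

def a(k):
    return (-1) ** k / 2 ** k

def p(k):
    # bijection N -> N: 0 <-> 1, 2 <-> 3, 4 <-> 5, ...
    return k + 1 if k % 2 == 0 else k - 1

N = 60
original = sum(a(k) for k in range(N))
rearranged = sum(a(p(k)) for k in range(N))

# both partial sums are close to the true value 1/(1 + 1/2) = 2/3
print(original, rearranged)
```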

This is well and good, but it still misses something. We can’t handle reorderings that break up the order structure. For example, we might ask to add up all the odd terms, and then all the even terms. No bijection p:\mathbb{N}\rightarrow\mathbb{N} gives this ordering, since we would have to run through all the odd terms before ever reaching an even one. And yet we can still make it work.

Unfortunately, I arrive in Maryland having left my references back in New Orleans. For now, I’ll simply assert that for absolutely convergent series we can perform these more general rearrangements, though I’ll patch this sometime.

May 9, 2008 Posted by | Analysis, Calculus | 3 Comments

Commutativity in Series I

We’ve seen that associativity may or may not hold for infinite sums, but it can be improved with extra assumptions. As it happens, commutativity breaks down as well, though the story is a bit clearer here.

First we should be clear about what we’re doing. When we add up a finite list of real numbers, we can reorder the list in many ways. In fact, reorderings of n numbers form the symmetric group S_n. If we look back at our group theory, we see that we can write any element in this group as a product of transpositions which swap neighboring entries in the list. Thus since the sum of two numbers is invariant under such a swap — a+b=b+a — we can then rearrange any finite list of numbers and get the same sum every time.

Now we’re not concerned with finite sums, but with infinite sums. As such, we consider all possible rearrangements — bijections p:\mathbb{N}\rightarrow\mathbb{N} — which make up the “infinite symmetric group” S_\infty. Now we might not be able to effect every rearrangement by a finite number of transpositions, and commutativity might break down.

If we have a series with terms a_k and a bijection p, then we say that the series with terms b_k=a_{p(k)} is a rearrangement of the first series. If, on the other hand, p is merely injective, then we say that the new series is a subseries of the first one.

Now, if \sum_{k=0}^\infty a_k is only conditionally convergent, I say that we can rearrange the series to give any value we want! In fact, given x\leq y (where these could also be \pm\infty) there will be a rearrangement b_k=a_{p(k)} so that

\displaystyle\liminf\limits_{n\rightarrow\infty}\sum\limits_{k=0}^nb_k=x\qquad\limsup\limits_{n\rightarrow\infty}\sum\limits_{k=0}^nb_k=y
First we throw away any zero terms in the series, since those won’t affect questions of convergence, or the value of the series if it does converge. Then let p_n be the nth positive term in the sequence a_k, and let -q_n be the nth negative term.

The two series with positive terms \sum_{k=0}^\infty p_k and \sum_{k=0}^\infty q_k both diverge. Indeed, if one converged but the other did not, then the original series \sum_{k=0}^\infty a_k would diverge. On the other hand, if they both converged then the original series would converge absolutely. Conditional convergence happens when the subseries of positive terms and the subseries of negative terms just manage to balance each other out.

Now we take two sequences x_n and y_n converging to x and y respectively. Since the series of positive terms diverges, its partial sums will eventually exceed any positive number. We can take just enough of them (say the first k_1 of them) so that

\displaystyle\sum\limits_{k=0}^{k_1}p_k>y_1
Similarly, we can then take just enough negative terms so that

\displaystyle\sum\limits_{k=0}^{k_1}p_k-\sum\limits_{l=0}^{l_1}q_l<x_1
Now take just enough of the remaining positive terms so that

\displaystyle\sum\limits_{k=0}^{k_2}p_k-\sum\limits_{l=0}^{l_1}q_l>y_2
and enough negatives so that

\displaystyle\sum\limits_{k=0}^{k_2}p_k-\sum\limits_{l=0}^{l_2}q_l<x_2
and so on and so forth. This gives us a rearrangement of the terms of the series.

Each time we add positive terms we come within p_{k_j} of y_j, and each time we add negative terms we come within q_{l_j} of x_j. But since the original sequence a_n must be converging to zero (otherwise the series couldn’t converge), so must the p_{k_j} and q_{l_j} be converging to zero. And the sequences x_j and y_j are converging to x and y.

It’s straightforward from here to show that the limits superior and inferior of the partial sums of the rearranged series are as we claim. In particular, we can set them both equal to the same number and get that number as the sum of the rearranged series. So for conditionally convergent series, the commutativity property falls apart most drastically.
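The construction above can be sketched computationally (an illustration, not part of the proof): greedily rearrange the conditionally convergent alternating harmonic series so its partial sums steer toward an arbitrary target.

```python
# Computational sketch of the rearrangement construction: take positive
# terms of the alternating harmonic series until we pass the target, then
# negative terms until we drop below it, and so on.

def rearranged_partial_sum(target, n_terms=10000):
    positives = iter(1 / k for k in range(1, 10 ** 7, 2))   # 1, 1/3, 1/5, ...
    negatives = iter(-1 / k for k in range(2, 10 ** 7, 2))  # -1/2, -1/4, ...
    s = 0.0
    for _ in range(n_terms):
        # take a positive term until we pass the target, then negatives
        s += next(positives) if s <= target else next(negatives)
    return s

result = rearranged_partial_sum(0.5)
print(result)  # close to the target 0.5
```

Since the terms being swapped in shrink to zero, the partial sums end up trapped near the target, exactly as in the argument above.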

May 8, 2008 Posted by | Analysis, Calculus | 4 Comments

Quantum Knot Mosaics

Today, Sam Lomonaco and Louis Kauffman posted to the arXiv a paper on “Quantum Knots and Mosaics”. I had the pleasure of a sneak preview back in March. Here’s what I said then (I haven’t had a chance to read the paper as posted, so some of this may be addressed):

About half the paper consists of setting up definitions of a mosaic and the Reidemeister moves. This concludes with the conjecture that before you allow superpositions the mosaic framework captures all of knot theory.

The grading by the size of the mosaic leads to an obvious conjecture: there exist mosaic knots which are mosaic equivalent, but which require arbitrarily many expansions. This is analogous to the same fact about crossing numbers.

Obviously, I’d write these combinatorial frameworks as categories with the mosaics as objects and the morphisms generated by the mosaic moves. Superpositions just seem to be the usual passage from a set to the vector space on that basis. See my new paper for how I say this for regular knots and Reidemeister moves.

Then (like I say in the paper) we want to talk about mosaic “covariants”. I think this ends up giving your notion of invariant after we decategorify (identify isomorphic outputs).

The only thing I’m wondering about (stopping shy of saying you two are “wrong”) is the quantum moves. The natural thing would be to go from the “group” (really it’s a groupoid, like I said before) of moves to its linearization. That is, we should allow the “sum” of two moves as a move. This splits a basis mosaic input into a superposition.

In particular, the “surprising” result you state that one quantum mosaic is not quantum equivalent to the other must be altered. There is clearly a move in my view taking the left to the right. “Equivalence” is then the statement that two quantum mosaics are connected by an *invertible* move. I’m not sure that the move from left to right is invertible yet, but I think it is.

May 8, 2008 Posted by | Knot theory | Leave a comment

Associativity in Series II

I’m leaving for DC soon, and may not have internet access all day. So you get this now!

We’ve seen that associativity doesn’t hold for infinite sums the way it does for finite sums. We can always “add parentheses” to a convergent series, but we can’t always remove them.

The first example we mentioned last time. Consider the series with terms a_k=(-1)^k:

\displaystyle\sum\limits_{k=0}^\infty a_k=1+(-1)+1+(-1)+...

Now let’s add parentheses using the sequence d(j)=2j+1. Then b_j=(-1)^{2j}+(-1)^{2j+1}=1+(-1)=0. That is, we now have the series

\displaystyle\sum\limits_{j=0}^\infty b_j=(1+(-1))+(1+(-1))+...=0+0+...=0

So the resulting series does converge. However, the original series can’t converge.

The obvious fault is that the terms a_k don’t get smaller. And we know that \lim\limits_{k\rightarrow\infty}a_k must be zero, or else we’ll have trouble with Cauchy’s condition. With the parentheses in place the terms b_j go to zero, but when we remove them this condition can fail. And it turns out there’s just one more condition we need so that we can remove parentheses.

So let’s consider the two series with terms a_k and b_j, where the first is obtained from the second by removing parentheses using the function d(j). Assume that \lim_{k\rightarrow\infty}a_k=0, and also that there is some M>0 so that each of the b_j is a sum of fewer than M of the a_k. That is, d(j+1)-d(j)<M. Then the series either both diverge or both converge, and if they converge they have the same sum.
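To illustrate the statement (a numerical sketch, not part of the proof): the alternating harmonic series has terms tending to zero, and grouping consecutive pairs — d(j)=2j+1, so M=3 bounds the group sizes — gives a series with the same sum, \ln 2.

```python
import math

# Illustration of the theorem: a_k = (-1)^k/(k+1) tends to zero, and
# grouping consecutive pairs (d(j) = 2j + 1, so each b_j is a sum of
# 2 < M = 3 of the a_k) yields a series with the same sum, ln 2.

def a(k):
    return (-1) ** k / (k + 1)

n = 100000
s_n = sum(a(k) for k in range(n))                           # ungrouped
t_m = sum(a(2 * j) + a(2 * j + 1) for j in range(n // 2))   # grouped in pairs

print(s_n, t_m, math.log(2))  # all three agree to several decimal places
```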

We set up the sequences of partial sums

\displaystyle s_n=\sum\limits_{k=0}^na_k
\displaystyle t_m=\sum\limits_{j=0}^mb_j

We know from last time that t_m=s_{d(m)}, and so if the first series converges then the second one must as well. We need to show that if t=\lim\limits_{m\rightarrow\infty}t_m exists, then we also have \lim\limits_{n\rightarrow\infty}s_n=t.

To this end, pick an \epsilon>0. Since the sequence of t_m converge to t, we can choose some N so that \left|t_m-t\right|<\frac{\epsilon}{2} for all m>N. Since the sequence of terms a_k converges to zero, we can increase N until we also have \left|a_k\right|<\frac{\epsilon}{2M} for all k>N.

Now take any n>d(N). Then n falls between d(m) and d(m+1) for some m. We can see that m\geq N, and that n is definitely above N. So the partial sum s_n is the sum of all the a_k up through k=d(m+1), minus those terms past k=n. That is

\displaystyle s_n=\sum\limits_{k=0}^na_k=\sum\limits_{k=0}^{d(m+1)}a_k-\sum\limits_{k=n+1}^{d(m+1)}a_k

But this first sum is just the partial sum t_{m+1}, while each term of the second sum is bounded in size by our assumptions above. We check

\displaystyle\left|s_n-t\right|=\left|t_{m+1}-t-\sum\limits_{k=n+1}^{d(m+1)}a_k\right|\leq\left|t_{m+1}-t\right|+\sum\limits_{k=n+1}^{d(m+1)}\left|a_k\right|
But since n is between d(m) and d(m+1), there must be fewer than M terms in this last sum, all of which are bounded by \frac{\epsilon}{2M}. So we see

\displaystyle\left|s_n-t\right|<\frac{\epsilon}{2}+M\frac{\epsilon}{2M}=\epsilon
and thus we have established the limit.

May 7, 2008 Posted by | Analysis, Calculus | 4 Comments

Associativity in Series I

As we’ve said before, the real numbers are a topological field. The fact that it’s a field means, among other things, that it comes equipped with an associative notion of addition. That is, for any finite sum we can change the order in which we perform the additions (though not the order of the terms themselves — that’s commutativity).

The topology of the real numbers means we can set up sums of longer and longer sequences of terms and talk sensibly about whether these sums — these series — converge or not. Unfortunately, this topological concept ends up breaking the algebraic structure in some cases. We no longer have the same freedom to change the order of summations.

When we write down a series, we’re implicitly including parentheses all the way to the left. Consider the partial sums:

\displaystyle s_n=\sum\limits_{k=0}^na_k=((...(((a_0+a_1)+a_2)+a_3)...+a_{n-1})+a_n)

But what if we wanted to add up the terms in a different order? Say we want to write

\displaystyle s_6=(((a_0+a_1)+(a_2+a_3))+((a_4+a_5)+a_6))

Well this is still a left-parenthesized expression, it’s just that the terms are not the ones we looked at before. If we write b_0=a_0+a_1, b_1=a_2+a_3, and b_2=a_4+a_5+a_6 then we have

\displaystyle s_6=((b_0+b_1)+b_2)=\sum\limits_{j=0}^2b_j=t_2

So this is actually a partial sum of a different (though related) series whose terms are finite sums of terms from the first series.

More specifically, let’s choose a sequence of stopping points: an increasing sequence of natural numbers d(j). In the example above we have d(0)=1, d(1)=3, and d(2)=6. Now we can define a new sequence

\displaystyle b_0=\sum\limits_{k=0}^{d(0)}a_k
\displaystyle b_j=\sum\limits_{k=d(j-1)+1}^{d(j)}a_k

Then the sequence of partial sums t_m of this series is a subsequence of the s_n. Specifically

\displaystyle t_m=\sum\limits_{j=0}^mb_j=\sum\limits_{k=0}^{d(0)}a_k+\sum\limits_{j=1}^m\left(\sum\limits_{k=d(j-1)+1}^{d(j)}a_k\right)=\sum\limits_{k=0}^{d(m)}a_k=s_{d(m)}

We say that the sequence t_m is obtained from the sequence s_n by “adding parentheses” (most clearly notable in the above expression for t_m). Alternately, we say that s_n is obtained from t_m by “removing parentheses”.

If the sequence s_n converges, so must the subsequence t_m=s_{d(m)}, and moreover to the same limit. That is, if the series \sum_{k=0}^\infty a_k converges to s, then any series \sum_{j=0}^\infty b_j obtained by adding parentheses also converges to s.

However, convergence of a subsequence doesn’t imply convergence of the sequence. For example, consider a_k=(-1)^k and use d(j)=2j+1. Then s_n jumps back and forth between zero and one, but t_m is identically zero. So just because a series converges, another one obtained by removing parentheses may not converge.
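The example above, checked in code: the partial sums s_n of \sum(-1)^k oscillate between one and zero forever, while the grouped partial sums t_m are all zero.

```python
# The example above: partial sums s_n of sum (-1)^k oscillate between
# 1 and 0, while the grouped partial sums t_m are identically zero.

s_values, s = [], 0
for k in range(10):
    s += (-1) ** k
    s_values.append(s)

# grouping with d(j) = 2j + 1: each b_j = a_{2j} + a_{2j+1} = 0
t_values = [sum((-1) ** k for k in range(2 * (m + 1))) for m in range(5)]

print(s_values)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(t_values)  # [0, 0, 0, 0, 0]
```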

May 6, 2008 Posted by | Analysis, Calculus | 2 Comments

The Ratio and Root Tests

Now I want to bring out two tests that will tell us about absolute convergence or (unconditional) divergence of an infinite series \sum_{k=0}^\infty a_k. As such they’ll tell us nothing about conditionally convergent series.

First is the ratio test. We take the ratio of one term in the series to the previous one and define the limits superior and inferior

\displaystyle R=\limsup\limits_{n\rightarrow\infty}\left|\frac{a_{n+1}}{a_n}\right|
\displaystyle r=\liminf\limits_{n\rightarrow\infty}\left|\frac{a_{n+1}}{a_n}\right|

Now if R<1 then the series converges absolutely. If r>1 then the series diverges. But if r\leq1\leq R the test fails and we get no result.

In the first case, pick x to be a number so that R<x<1. Then there is some N so that x is an upper bound for the sequence of ratios past N. For large enough n, this means

\displaystyle\left|a_{n+1}\right|\leq x\left|a_n\right|
and so

\displaystyle\left|a_n\right|\leq\left|a_N\right|x^{n-N}=\frac{\left|a_N\right|}{x^N}x^n
Now if we set c=\frac{\left|a_N\right|}{x^N}, this tells us that |a_n|\leq cx^n. Then the comparison test with the geometric series tells us that \sum_{k=0}^\infty\left|a_k\right| converges.

On the other hand, if r>1 then eventually \left|a_{n+1}\right|>\left|a_n\right|, so the terms of the series are getting bigger and bigger and bigger. But this would throw a monkey wrench into Cauchy’s condition for convergence of the series.

As for the root test, we will consider the sequence \sqrt[n]{\left|a_n\right|} and define

\displaystyle\rho=\limsup\limits_{n\rightarrow\infty}\sqrt[n]{\left|a_n\right|}
If \rho<1 then the series converges absolutely. If \rho>1 then the series diverges. And if \rho=1 the test is inconclusive.

In the first case, as we did for the ratio test, pick x so that \rho<x<1. Then above some N we have \left|a_n\right|<x^n and the comparison test works straight away. On the other hand, if \rho>1 then \left|a_n\right|>1 infinitely often, and Cauchy’s criterion falls apart again.
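As a rough numerical sketch of both tests (the helper names here are ours, purely for illustration), we can estimate the ratio and root statistics by looking far out in a sequence:

```python
# Rough numerical estimates of the ratio and root statistics for two
# sample sequences (helper names are ours, just for illustration).

def ratio_estimate(a, n=1000):
    return abs(a(n + 1) / a(n))

def root_estimate(a, n=1000):
    return abs(a(n)) ** (1.0 / n)

geometric = lambda k: 0.5 ** k       # R = rho = 1/2 < 1: converges absolutely
harmonic = lambda k: 1.0 / (k + 1)   # both statistics tend to 1: inconclusive

print(ratio_estimate(geometric), root_estimate(geometric))  # both near 0.5
print(ratio_estimate(harmonic), root_estimate(harmonic))    # both near 1
```

Note that the harmonic series diverges while the alternating harmonic series converges, yet both have ratio and root statistics equal to one — exactly the inconclusive case.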

May 5, 2008 Posted by | Analysis, Calculus | 5 Comments

Limits Superior and Inferior

As we look at sequences (and nets) of real numbers (and more general ordered spaces) a little more closely, we’ll occasionally need the finer notion of a “limit superior” (“limit inferior”). This is essentially the largest (smallest) value that a sequence takes in its tail.

In general, let x_\alpha be a net (indexed by \alpha\in A) in some ordered space X. Then we can consider the “tail” A_\alpha=\{\beta\in A|\beta\geq\alpha\} of the index set consisting of all indices above a given index \alpha. We then ask what the least upper bound of the net is on this tail: \sup\limits_{\beta\geq\alpha}x_\beta. Alternately, we consider the greatest lower bound on the tail: \inf\limits_{\beta\geq\alpha}x_\beta.

Now as we move to tails further and further out in the net, the least upper bound (greatest lower bound) may drop (rise) as we pass maxima (minima). That is, the supremum (infimum) of a set bounds the suprema (infima) of its subsets. So? So if we pass such a maximum it clearly doesn’t affect the long-run behavior of the net, and we want to forget it. So we’ll take the lowest of the suprema of tails (the highest of the infima of tails).

Thus we finally come to defining the limit superior

\displaystyle\limsup x_\alpha=\inf\limits_{\alpha\in A}\sup\limits_{\beta\geq\alpha}x_\beta

and the limit inferior

\displaystyle\liminf x_\alpha=\sup\limits_{\alpha\in A}\inf\limits_{\beta\geq\alpha}x_\alpha

Now these are related to our usual concept of a limit. First of all,

\displaystyle\liminf x_\alpha\leq\limsup x_\alpha

and the net converges if and only if these two are both finite and equal. In this case, the limit is this common finite value. If they both go to infinity, the net diverges to infinity, and similarly for negative infinity. If they’re not equal, then the net bounces around between the two values without ever settling down.

If we’re considering a sequence of real numbers, then we’re taking a bunch of infima and suprema, all of which are guaranteed to exist as extended real numbers. Thus the limits superior and inferior of any sequence always exist, so long as we allow the values \pm\infty.

As an illustrative example, work out the limits superior and inferior of the sequence (-1)^n(1+\frac{1}{n}). Show that this sequence diverges, but does so by oscillating rather than by blowing up.

Finally, note that we can consider a function f(x) defined on a ray to be a net on that ray, considered as a directed subset of real numbers. Then we get limits superior and inferior as x goes to infinity, just as for sequences.
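The exercise above can be approximated numerically: for x_n=(-1)^n(1+\frac{1}{n}), suprema of far-out tails approach 1 and infima approach -1, suggesting \limsup x_n=1 and \liminf x_n=-1, so the sequence diverges by oscillating.

```python
# Numerical approximation of the exercise: for x_n = (-1)^n (1 + 1/n),
# the supremum of a far-out tail sits just above 1 and the infimum just
# below -1, so limsup = 1 and liminf = -1.

def x(n):
    return (-1) ** n * (1 + 1 / n)

N = 100000
tail = [x(n) for n in range(N, N + 1000)]
sup_tail, inf_tail = max(tail), min(tail)
print(sup_tail, inf_tail)  # just above 1, just below -1
```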

May 2, 2008 Posted by | Analysis, Calculus | 1 Comment

Dirichlet’s and Abel’s Tests

We can now use Abel’s partial summation formula to establish a couple other convergence tests.

If a_n is a sequence whose sequence A_n of partial sums is bounded, and if B_n is a decreasing sequence converging to zero, then the series \sum_{k=0}^\infty a_kB_k converges. Indeed, the sequence A_nB_{n+1} then also converges to zero, so the partial summation formula tells us we just need to consider the series \sum_{k=0}^\infty A_kb_{k+1}, where b_{k+1}=B_{k+1}-B_k.

If M is a bound on the \left|A_k\right|, the fact that B_k is decreasing implies that \left|A_kb_{k+1}\right|\leq M(B_k-B_{k+1}), and the telescoping series \sum_{k=0}^\infty M(B_k-B_{k+1}) clearly converges. Thus by the comparison test, the series \sum_{k=0}^\infty A_kb_{k+1} converges absolutely, and our result follows. This is called Dirichlet’s test for convergence.

Let’s impose a bit more of a restriction on the A_n and insist that this sequence actually converge. Correspondingly, we can weaken our restriction on B_n and require that it be monotonic and convergent, but not specifically decreasing to zero. These two changes balance out and we still find that \sum_{k=0}^\infty a_kB_k converges. Indeed, the sequence A_nB_{n+1} converges automatically as the product of two convergent sequences, and the rest is similar to the proof in Dirichlet’s test. We call this Abel’s test for convergence.
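A classic application of Dirichlet’s test (our example, not from the post itself): a_k=\sin k has bounded partial sums, and B_k=\frac{1}{k} decreases to zero, so \sum_{k=1}^\infty\frac{\sin k}{k} converges. We can check both facts numerically.

```python
import math

# Dirichlet's test in action: partial sums of sin k stay bounded
# (by 1/sin(1/2), about 2.086), and 1/k decreases to zero, so the
# series sum sin(k)/k converges -- its known value is (pi - 1)/2.

A = 0.0          # running partial sum of a_k = sin k
max_abs_A = 0.0
s = 0.0          # running partial sum of sin(k)/k
for k in range(1, 200001):
    A += math.sin(k)
    max_abs_A = max(max_abs_A, abs(A))
    s += math.sin(k) / k

print(max_abs_A)  # bounded: never exceeds 1/sin(1/2), about 2.086
print(s)          # close to (pi - 1)/2, about 1.0708
```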

May 1, 2008 Posted by | Analysis, Calculus | 2 Comments

