I haven’t had much time to work on the promised deconstruction, so I’ll punt to a math post I wrote up earlier.
Okay, let’s look back and see what integration is really calculating. We started in on integration by trying to find the area between the horizontal axis and the graph of a positive function. But what happens as we extend the formalism of integration to handle more general situations?
What if the function we integrate is negative? Then $-f$ is positive, and $\int\limits_a^b-f(x)dx$ is the area between the horizontal axis and the graph of $-f$. But moving from $f$ to $-f$ is just a reflection through the horizontal axis. The horizontal axis stays in the same place, and it seems the area should be the same. But by the basic rules of integration we spun off at the end of yesterday’s post, we see that

$\displaystyle\int\limits_a^bf(x)dx=-\int\limits_a^b-f(x)dx$
That is, we don’t get the same answer; we get its negative. So, integration counts areas below the horizontal axis as negative. We could also see this from the Riemann sums, where we replace all the function evaluations $f(x_i)$ with their negatives, and factor out a $-1$ from the whole sum.
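To see this numerically, here is a minimal sketch of that Riemann-sum argument, using a hypothetical function $f(x)=x^2-1$ (chosen because it is negative on $[0,1)$); the sum for $-f$ is just the sum for $f$ with every evaluation negated:

```python
# Left-endpoint Riemann sum approximating the integral of f over [a, b].
def riemann_sum(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x * x - 1  # negative on [0, 1)

I_f     = riemann_sum(f, 0.0, 1.0)                # integral of f: about -2/3
I_neg_f = riemann_sum(lambda x: -f(x), 0.0, 1.0)  # integral of -f: about +2/3

# Negating every function evaluation factors a -1 out of the whole sum,
# so the two results are negatives of each other.
print(I_f, I_neg_f)
```

The area between the axis and the graph is the same either way; only the sign the formalism attaches to it changes.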
How else could we extend the formalism of integration? What if we ran it “backwards”, from the right endpoint of our interval to the left? That is, let’s take an “interval” $[a,b]$ with $b<a$. Then when we partition the interval we should get a string of partition points decreasing as we go along. Then when we set up the Riemann sum we’ll get negative values for each $x_i-x_{i-1}$. We can factor out all these signs to give an overall negative sign, along with a Riemann sum for the integral over $[b,a]$. The upshot is that we can integrate over an interval from right to left at the cost of introducing an overall negative sign:

$\displaystyle\int\limits_a^bf(x)dx=-\int\limits_b^af(x)dx$
We can handle this by attaching a sign to an interval, just like we did to points yesterday. We write $-[a,b]=[b,a]$. Then when we integrate over a signed interval, we take its sign into account. Notice that if we integrate over both $[a,b]$ and $[b,a]$ the two parts cancel each other out, and we get $0$.
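The orientation sign falls straight out of the Riemann sum if we simply let the partition run from the first endpoint to the second, whichever order they come in. A small sketch (with a hypothetical $f(x)=x^2$):

```python
# Riemann sum that respects orientation: when b < a, each step
# dx = (b - a) / n is negative, which flips the sign of the whole sum.
def oriented_riemann_sum(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x * x

forward  = oriented_riemann_sum(f, 0.0, 1.0)  # over [0, 1]: about 1/3
backward = oriented_riemann_sum(f, 1.0, 0.0)  # same interval, right to left

# Integrating over both orientations cancels, leaving (approximately) zero.
print(forward + backward)
```

This is exactly the cancellation described above: the two oppositely signed copies of the interval together contribute nothing.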
I’m not about to sit down and work up a solution like we did before, but it shouldn’t be impossible to repeat the same sort of analysis. I will point out, however, that the solver in this video is making heavy use of both of our solution techniques: commutators and a tower of nested subgroups.
The nested subgroups are obvious. As the solution progresses, more and more structure becomes apparent, and is preserved as the solution continues. In particular, the solver builds up the centers of faces and then slips into the subgroup of maneuvers which leave such “big centers” fixed in place. Near the end, almost all of the moves are twists of the outer faces, because these are assured not to affect anything but the edge and corner cubies.
The commutators take a quicker eye to spot, but they’re in there. Watch how many times he’ll do a couple twists, a short maneuver, and then undo those couple twists. Just as we used such commutators, these provide easy generalizations of basic cycles, and they form the heart of this solver’s algorithm.
Alexandre asked a question about the asymptotic growth of the “worst assembly time” for the cube. What this is really asking is for the “diameter” of the $n$th Rubik’s group $G_n$. I don’t know offhand what this would be, but here’s a way to get at a rough estimate.
First, find a similar expression for the structure of $G_n$ as we found before for $G_3$. Then what basic twists do we have? For $n=3$ we had all six faces, which could be turned either way, and we let the center slices be fixed. In general we’ll have $\lfloor n/2\rfloor$ slices in each of six directions, each of which can be turned either way, for a total of $6\lfloor n/2\rfloor$ generators (and their inverses). But each generator should (usually) be followed by a different one, and definitely not by its own inverse. Thus we can estimate the number of words of length $l$ as $\left(12\lfloor n/2\rfloor\right)^l$. Then the structure of $G_n$ gives us a total size of the group, and the diameter should be about $\log_{12\lfloor n/2\rfloor}|G_n|$. Notice that for $n=3$ this gives us about $18$, which isn’t far off from the known upper bound of $26$ quarter-turns.
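Here is that arithmetic for $n=3$ as a quick sketch, using the standard known order of the $3\times3\times3$ cube group (about $4.3\times10^{19}$); the estimate just solves $k^d=|G_3|$ for $d$, where $k$ counts the quarter-turn generators:

```python
import math

# Order of the 3x3x3 Rubik's cube group (a standard, known value).
G3_order = 43252003274489856000

n = 3
k = 12 * (n // 2)  # 6*floor(n/2) slices, each turnable in two directions

# Crude diameter estimate: the word length d with k^d = |G_n|.
diameter_estimate = math.log(G3_order) / math.log(k)
print(diameter_estimate)  # about 18.2
```

The estimate ignores the fact that consecutive moves on the same slice partially collapse, so it undercounts slightly; still, it lands in the same ballpark as the true quarter-turn diameter.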