## Geeking out

I seem to be the only one around who thinks this is hilarious, or even gets it. I am the biggest geek in the department of mathematics.

## Ring homomorphisms

There is a special kind of function between rings, just like we have in groups. Given rings $R$ and $S$, a function $f\colon R\to S$ is called a homomorphism if it preserves all the ring structure.

The sort of odd thing here is that we’ve got two different kinds of rings to consider: those with and without identities. If we’re considering rings in general, we require that

$$f(a+b) = f(a) + f(b)\qquad f(ab) = f(a)f(b)$$

but if we’re restricting ourselves to rings with identities, we also require that

$$f(1) = 1$$

where the $1$ on the left is the identity of $R$, and the one on the right is the identity for $S$. If we have two rings with identities but we consider them as general rings, there will be more homomorphisms than if we consider them as rings with identity. It becomes important to pay a bit of attention to what kind of rings we’re really concerned with.

As an exercise, consider an arbitrary ring $R$ and see what ring homomorphisms exist from $\mathbb{Z}$ to $R$. If $R$ has an identity, which of these homomorphisms preserve the identity?
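To make the conditions concrete, here is a minimal sketch (my own, not from the post) that spot-checks the two preservation conditions for the reduction map from the integers to the integers mod 6, with the quotient ring represented by remainders in {0, …, 5}:

```python
# Spot-check the homomorphism conditions for reduction mod 6.
# f goes from Z to Z/6Z, with Z/6Z represented by remainders 0..5.
def f(a):
    return a % 6

def is_ring_hom(samples):
    """Check f(a+b) = f(a)+f(b) and f(ab) = f(a)f(b), computed in Z/6Z."""
    return all(
        f(a + b) == (f(a) + f(b)) % 6 and
        f(a * b) == (f(a) * f(b)) % 6
        for a in samples for b in samples
    )

print(is_ring_hom(range(-10, 11)))  # True
print(f(1) == 1)                    # True: f also preserves the identity
```

Of course a finite check is no proof, but it makes the two conditions easy to play with.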

Oh, and I probably should mention this: all the terminology from groups comes along for the ride. An injective (one-to-one) ring homomorphism is a monomorphism. A surjective (onto) ring homomorphism is an epimorphism. One that’s both is an isomorphism. A homomorphism from a ring to itself is an endomorphism, and an isomorphism from a ring to itself is an automorphism.

*[EDIT: cleaned up LaTeX error and added comments at the end about terminology.]*

## KLV errata

I just got home from a long discussion with Dr. Zuckerman about this whole business. I’m not quite ready to say exactly what’s going on, but I want to correct a couple errors that I’ve made. Let it not be said that I don’t admit when I’m wrong.

Firstly, in my little added remarks about the Monster group in my “Why We Care” post, I was oversimplifying. First of all, the $E_8$ lattice is *not* the Leech lattice. The Leech lattice lives in 24-dimensional space for one thing (doh). Basically, you put together three copies of the $E_8$ lattice and then tweak it a bit.

Putting them together I can explain. The simplest lattice is just the integers sitting inside the real line. If you move to the plane, the points with integer coordinates sit at the corners of the squares in a checkerboard tiling of the plane. This is “adding two copies of the integer lattice”. For three copies of the $E_8$ lattice, we want 24-tuples of numbers so that the first eight, second eight, and third eight are each the coordinates of a point in the $E_8$ lattice.

When you do this, it turns out there’s just enough room to squeeze in some more points to get a new lattice. *That’s* the Leech lattice. The Monster also isn’t quite just a group of symmetries of this lattice, so there’s still a few more steps to go, but it’s definitely related. So the connection isn’t quite as close as I’d implied, but it’s there.

The other thing is about real forms. I’d forgotten that *not every choice of “realification” of the Killing form gives a Lie group*, and further that *not every choice that does work gives a unique Lie group*.

What *is* true is that to every real form $G(\mathbb{R})$ of a complex Lie group $G$, there’s a largest compact subgroup $K(\mathbb{R})$. This means that its ends curve back in on themselves like the circle or the torus, and don’t run off to infinity like the line or the cylinder. Then we can “complexify” this group to get another complex group $K$ that’s really interesting to us. This group $K$ is a subgroup of $G$, which will be important. In particular, if we take the compact real form of $G$, its maximal compact subgroup is just itself, so its complexification is just $G$ back again.

## Coloring knots

Today I’m going to be talking to the graduate students about various topics relating to coloring knots. I think I’ll leave you with a little project to play with.

First, go to Bar-Natan’s table of knots. Notice how all the diagrams seem to be made up of arcs meeting up where one strand of the knot crosses under another. Pick a knot diagram and try to color each arc either red, green, or blue, subject to the following rule: at any crossing, the three arcs that meet (two for the undercrossing strand and one for the overcrossing) must either be all the same color or all different colors.

Which knots can you color using all three colors at least once? If that’s too easy for you, how many ways can you color a given knot? If *that’s* too easy for you, you’ve almost surely seen this before.

To get you started, I’ve tricolored the trefoil knot using all three colors.
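If you’d rather let the computer explore, here’s a small brute-force sketch (my own, with an ad-hoc encoding of the standard trefoil diagram, not anything from Bar-Natan’s table): arcs are numbered, and each crossing records the three arcs that meet there.

```python
from itertools import product

# The standard trefoil diagram has three arcs (0, 1, 2) and three
# crossings; each crossing is the triple of arcs meeting there
# (overcrossing arc, then the two ends of the undercrossing strand).
TREFOIL = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def ok(colors, crossings):
    """The rule: at each crossing, all three colors equal or all distinct."""
    for o, u1, u2 in crossings:
        if len({colors[o], colors[u1], colors[u2]}) == 2:
            return False  # exactly two colors at a crossing is forbidden
    return True

colorings = [c for c in product('RGB', repeat=3) if ok(c, TREFOIL)]
nontrivial = [c for c in colorings if len(set(c)) == 3]
print(len(colorings), len(nontrivial))  # 9 2
```

(The printed counts are 9 valid colorings in all, 6 of which use all three colors; the three monochrome colorings always satisfy the rule.)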

## How to Play by Yourself

Sometime between dragging myself into my bed after the calculus exam and related activities last night and dragging myself back out of it in time to teach this morning, an article claiming to solve triangular peg solitaire went up on the arXiv. I’ve obviously not had time to read it, so I don’t know how good it is, but the subject matter at least should be pretty generally accessible.

## Different kinds of rings

There are a number of different kinds of rings differentiated (sorry) by properties of their multiplications. Most of them lead into their own specialized areas of study. I mentioned that a ring may or may not be commutative, and it may or may not have an identity, but there are a few more that will be useful.

One initially counterintuitive idea is that it’s entirely possible that a ring has “zero divisors”: two nonzero elements that multiply to give zero. Imagine starting with two copies of the integers, $\mathbb{Z}$ and $\overline{\mathbb{Z}}$, writing elements of the second copy as integers with a bar over them. Now consider pairs of elements, one from each copy, $(a,\bar{b})$. Add pairs by adding the two components, but multiply them like this:

$$(a,\bar{b})(c,\bar{d}) = (ac,\overline{ad+bc})$$

Notice that the product of any two elements of $\overline{\mathbb{Z}}$ is zero! Weird. Eerie.

To be explicit: an element of this ring coming from $\overline{\mathbb{Z}}$ is $(0,\bar{b})$. We calculate the product:

$$(0,\bar{b})(0,\bar{d}) = (0\cdot 0,\overline{0\cdot d + b\cdot 0}) = (0,\bar{0})$$

So, any nonzero element $a$ for which there is a nonzero $b$ so that $ab = 0$ is called a left zero divisor. Right zero divisors are defined similarly. If a ring has no zero divisors, so the product of two nonzero elements is always nonzero, we call it an “integral domain”. The integers are just such an integral domain, fittingly enough.
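Here’s a tiny sketch of this ring in code (my own encoding, assuming the multiplication rule above): a Python pair `(a, b)` stands for the element $(a,\bar{b})$.

```python
# Pairs (a, b) represent (a, b-bar); addition is componentwise and
# multiplication follows (a, b-bar)(c, d-bar) = (ac, (ad+bc)-bar).
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

x = (0, 2)           # a nonzero element coming from the barred copy
y = (0, 5)           # another one
print(mul(x, y))     # (0, 0): two nonzero elements with product zero
```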

Now if a ring has a multiplicative identity we can start talking about multiplicative inverses. We say an element $a$ has a left inverse $u$ if $ua = 1$, or a right inverse $v$ if $av = 1$. If an element has both a left and a right inverse, they’re the same, since

$$u = u1 = u(av) = (ua)v = 1v = v$$

In this case we call $a$ a unit and write its inverse as $a^{-1}$. We can also see that an element having a left (right) inverse cannot be a left (right) zero divisor:

$$ab = 0 \implies b = 1b = (ua)b = u(ab) = u0 = 0$$

If every nonzero element of a ring is a unit, we call it a division ring.

In the case of commutative rings, all these distinctions between “left” and “right” (zero divisors, inverses, etc.) disappear, since multiplication doesn’t care about the order of the factors. We actually have a special name for a commutative division ring: we call it a “field”, though everyone else in the world except the Belgians seems to call it a “(dead) body” (Körper, corps, поле, test, lichaam, …).

*[EDIT: added explicit calculation verifying that elements from in the example are zero-divisors.]*

## The ring of integers

As I mentioned before, the primal example of a ring is the integers $\mathbb{Z}$. So far we’ve got an ordered abelian group structure on the set of (equivalence classes of) pairs of natural numbers. Now we need to add a multiplication that distributes over the addition.

First we’ll figure out how to multiply natural numbers. This is pretty much as we expect. Remember that a natural number is either $0$ or $S(n)$ for some number $n$. We define

$$0\times a = 0\qquad S(n)\times a = (n\times a) + a$$

where we’ve already defined addition of natural numbers.

Firstly, this multiplication is commutative. This takes a few inductions. First show by induction that $0$ commutes with everything, then show by another induction that if $n$ commutes with everything then so does $S(n)$. Then by induction, every number commutes with every other. I’ll leave the details to you.

Similarly, we can use a number of inductions to show that this multiplication is associative — $(a\times b)\times c = a\times(b\times c)$ — and distributes over addition of natural numbers — $a\times(b+c) = (a\times b)+(a\times c)$. This is extremely tedious and would vastly increase the length of this post without really adding anything to the exposition, so I’ll again leave you the details. I’m reminded of something Jeff Adams said (honest, I’m not trying to throw these references in gratuitously) in his class on the classical groups. He told us to verify that the commutator in an associative algebra satisfies the Jacobi identity because, “It’s long and tedious and doesn’t add much, but I had to do it when *I* was a grad student, so now you’re grad students and it’s *your* turn.”
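For the curious, here’s a toy sketch of my own (not from the post) of the recursive definition, with naturals encoded as nested successors, plus a finite spot-check of commutativity:

```python
# Peano naturals: either 0 or ('S', n) for a smaller numeral n.
def add(m, n):
    # 0 + n = n;  S(k) + n = S(k + n)
    return n if m == 0 else ('S', add(m[1], n))

def mul(m, n):
    # 0 * n = 0;  S(k) * n = (k * n) + n, matching the definition above
    return 0 if m == 0 else add(mul(m[1], n), n)

def num(k):
    """Build the numeral for a Python int k."""
    return 0 if k == 0 else ('S', num(k - 1))

def val(m):
    """Read a numeral back as a Python int."""
    return 0 if m == 0 else 1 + val(m[1])

# Finite spot-check: agrees with ordinary multiplication, hence commutes
assert all(val(mul(num(a), num(b))) == a * b
           for a in range(5) for b in range(5))
print(val(mul(num(3), num(4))))  # 12
```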

So now these operations — addition and multiplication — of natural numbers make $\mathbb{N}$ into what some call a “semiring”. I prefer (following John Baez) to call it a “rig”, though: a “ri*n*g without *n*egatives”. We use this to build up the ring structure on the integers.

Recall that the integers are (for us) pairs of natural numbers considered as “differences”. We thus define the product

$$(a,b)(c,d) = (ac+bd, ad+bc)$$

Our life now is vastly easier than it was above: since we know addition and multiplication of natural numbers is commutative, the above expression is *manifestly* commutative. No work needs to be done! Associativity is also easy: just set up both triple products and expand out, checking that each term is the same by the rig structure of the natural numbers. Similarly, we can check distributivity, that $(1,0)$ acts as an identity, and that the product of two integers is independent of the representing pair of natural numbers.

Lastly, multiplication by a positive integer preserves order. If $a > b$ and $c > 0$ then $ca > cb$. Together all these properties make the integers as we’ve defined them into a commutative ordered ring with unit. The proofs of all these things have been incredibly dull (I actually did them all today just to be sure how they worked), but it’s going to get a lot easier soon.
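A quick sketch of my own (not the post’s) of this multiplication on pairs, including a check that it respects the equivalence of representing pairs:

```python
# An integer is a pair (a, b) of naturals standing for a - b.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + b * d, a * d + b * c)

def same(x, y):
    """(a,b) ~ (c,d) exactly when a + d == c + b."""
    return x[0] + y[1] == y[0] + x[1]

three, neg_two = (5, 2), (1, 3)      # representatives of 3 and -2
print(mul(three, neg_two))           # (11, 17), which represents -6
assert same(mul(three, neg_two), (0, 6))
# well-definedness: a different representative of 3 gives the same answer
assert same(mul((8, 5), neg_two), (0, 6))
```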

## Rubik’s Cube Wrapup

I want to tie up a few loose ends about Rubik’s group today.

We can fit Rubik’s group into a sequence that more clearly shows all the structure I’m talking about. Specifically, it’s a subgroup of the bigger group I mentioned back at the beginning. We can restate the three restrictions as saying the maneuvers in Rubik’s group are those in the kernel of a certain homomorphism. So, first let’s write down the big group.

The unrestricted edge and corner groups are just wreath products, which I’ll write out as semidirect products: $\mathbb{Z}_2^{12}\rtimes S_{12}$ for the edges and $\mathbb{Z}_3^{8}\rtimes S_8$ for the corners. Without restrictions, these two groups are independent, so we just have a direct product to give the unrestricted Rubik’s group:

$$\left(\mathbb{Z}_2^{12}\rtimes S_{12}\right)\times\left(\mathbb{Z}_3^{8}\rtimes S_8\right)$$

I’ll write $((e,\sigma),(c,\tau))$ for a generic element of this group: $e$ lists the twelve edge flips, $\sigma$ permutes the edges, $c$ lists the eight corner twists, and $\tau$ permutes the corners. Each part of this list corresponds to part of the expression above.

Now we want to add up all the edge flips and make them come out to zero. We can write this sum as a homomorphism:

$$f((e,\sigma),(c,\tau)) = e_1 + e_2 + \cdots + e_{12}$$

where the sum is taken in the group $\mathbb{Z}_2$. You should be able to verify that this actually is a homomorphism. Similarly, we want the sum of the total twists as a homomorphism:

$$t((e,\sigma),(c,\tau)) = c_1 + c_2 + \cdots + c_8$$

where the sum is taken in $\mathbb{Z}_3$.

Finally, the permutation condition uses the “signum” homomorphism from a symmetric group to $\{1,-1\}$. It assigns the value $1$ to even permutations and the value $-1$ to odd ones. We use it to write the last restriction as a homomorphism:

$$s((e,\sigma),(c,\tau)) = \mathrm{sgn}(\sigma)\,\mathrm{sgn}(\tau)$$

Now we assemble our overall restriction homomorphism as the direct product of these three:

$$r = f\times t\times s\colon \left(\mathbb{Z}_2^{12}\rtimes S_{12}\right)\times\left(\mathbb{Z}_3^{8}\rtimes S_8\right)\to \mathbb{Z}_2\times\mathbb{Z}_3\times\{1,-1\}$$

and get the short exact sequence:

$$1\to G\to \left(\mathbb{Z}_2^{12}\rtimes S_{12}\right)\times\left(\mathbb{Z}_3^{8}\rtimes S_8\right)\to \mathbb{Z}_2\times\mathbb{Z}_3\times\{1,-1\}\to 1$$

where $G$ is Rubik’s group.

Commenter Dan Hoey brought up where my fundamental operations come from. To be honest, these four are just ones I remember off the top of my head. He’s right, though, that there are systematic ways of coming up with maneuvers that perform double-flips, double-twists, and $3$-cycles. I’ll leave you to read his comment and work out yourself that you can realize four such basic maneuvers as commutators — products of elements of the form $ghg^{-1}h^{-1}$. This means that the commutator subgroup of Rubik’s group is *almost* all of itself. It just misses a single twist. In fact, $[G,G]$ has index $2$ in $G$ — Rubik’s group is *highly* non-abelian.
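Here’s a rough sketch in code (my own encoding, assuming the wreath-product description of the unrestricted group given above): a maneuver carries its edge flips, edge permutation, corner twists, and corner permutation, and the restriction homomorphism reads off the three invariants. A maneuver is legal exactly when it lands on the kernel element (0, 0, 1).

```python
def sgn(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1,
    computed from its cycle structure."""
    s, seen = 1, set()
    for i in range(len(perm)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            if length % 2 == 0:   # each even-length cycle is odd
                s = -s
    return s

def restriction(flips, edge_perm, twists, corner_perm):
    """(total flip mod 2, total twist mod 3, sgn(sigma)*sgn(tau))."""
    return (sum(flips) % 2, sum(twists) % 3,
            sgn(edge_perm) * sgn(corner_perm))

# A quarter-turn of one face 4-cycles four edges and four corners
# (two odd permutations, so their signs cancel) with no net flip
# or twist, so it lies in the kernel: a legal cube position.
edge_perm = (1, 2, 3, 0) + tuple(range(4, 12))
corner_perm = (1, 2, 3, 0) + tuple(range(4, 8))
print(restriction([0] * 12, edge_perm, [0] * 8, corner_perm))  # (0, 0, 1)

# A single flipped edge, by contrast, is caught by the first invariant.
print(restriction([1] + [0] * 11, tuple(range(12)),
                  [0] * 8, tuple(range(8))))                   # (1, 0, 1)
```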

Incidentally, this approach to the cube is not the first one I worked out, but it’s far more elegant than my pastiche of particular tools. I picked it up back when I was at the University of Maryland from a guy who had worked it out while *he* was at Yale as a graduate student back when the cube first came out: Jeff Adams.

## That’ll learn me

I wanted to see how a book I’d checked out from our library treated a certain topic, hoping that it might have a theorem all ready for me to use. Unfortunately I didn’t remember the authors nor exactly what it was called, but I did remember what it looked like. So I went to the library and tried to find it with no luck. As a fallback, I asked Paul.

Paul Lukasiewicz is our librarian, and has been around forever. People I know who were students here in the early ’80s thought of him as omniscient already. You can give him a title and author of any book in the library and he can tell you what color it is off the top of his head.

Unfortunately it doesn’t work in reverse, so I was driven back to the online directory of the library system here to search through hundreds of books on the topic to find the one I remembered. They really need to make those things searchable on appearance.

## More sketches, and why we care

Dr. Adams just sent me a link to an explanation of the technical details for mathematicians in other fields, but it’s still somewhat readable.

I also have been reading the slides for Dr. Vogan’s talk, *The Character Table for $E_8$, or How We Wrote Down a 453,060 x 453,060 Matrix and Found Happiness*. There’s also an audio recording available (7MB mp3). Incidentally, I’d have gone for *The Split Real Form of $E_8$, or How We Learned to Stop Worrying and Love the Character Table*, but it’s all good. This talk actually manages to be very generally accessible, and includes all sorts of pretty pictures. Those of you who wanted more visuals than I provided in my rough overview might like to check that one out.

Together, these two are my core sources that, together with some input from Dr. Zuckerman, I’ll be trying to break down into smaller chunks. I *highly* advise reading at least Vogan’s slides and preferably also Adams’ notes.

I also want to respond to a comment basically asking, “so why the heck should we care about this?” It’s an excellent question, and yet another one the newspaper reports really glossed over without taking seriously. I’ll admit that I glossed it over at first too, since I think this stuff is just too elegant *not* to love. Still, I’ve mulled this over not just as applies to these calculations, but with regard to a lot of mathematics at this level (thus qualifying the “why we care” as a rant).

This sort of question from a non-mathematician almost always is looking for an engineering response. “What’s it good for?” means, “what can we build with it?” Honestly I have to say “not much”. Representations of Lie groups do have their uses, though, and I can point out a few things they have already been good for.

As indicated in Dr. Vogan’s slides, representations of the one-dimensional Lie groups are concerned with change through time, particularly periodic changes. This means that they’re exceptionally good at talking about periodic phenomena, like waves. Sound waves, light waves, electrical circuits, vibrating strings — they’re all one-dimensional waves. So what? So every time you use the graphic equalizer on your stereo the electronics are taking the signal and performing a fast Fourier transform on it. This turns a function on the line (the Lie group $\mathbb{R}$) into a function on the *space of all representations of the group*; that’s the “unitary dual” that Dr. Adams refers to. Then you can adjust the periodic components and reconstruct a new function with much fatter bass, or whatever your tastes are.
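To make this concrete, here’s a toy equalizer sketch of my own, using a naive O(n²) discrete Fourier transform in place of a real FFT library: move to the frequency domain, double the bass component, and come back.

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform (sign=+1 gives the inverse kernel)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=+1)]

# A signal made of a low-frequency wave plus a high-frequency wave.
n = 32
signal = [cmath.cos(2 * cmath.pi * t / n).real
          + cmath.cos(2 * cmath.pi * 8 * t / n).real for t in range(n)]

spectrum = dft(signal)
# Double the lowest-frequency components (bins 1 and n-1), leave the rest.
boosted = [v * (2 if j in (1, n - 1) else 1) for j, v in enumerate(spectrum)]
fatter_bass = [v.real for v in idft(boosted)]

print(round(fatter_bass[0], 6))  # 3.0: the bass component is now twice as loud
```

At $t=0$ the original signal is $1+1=2$; after boosting, the low component contributes $2$ instead of $1$, so the reconstructed value is $3$.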

The same sorts of things can be done in higher dimensions. Similar techniques revealed that you can’t hear the shape of a drum — there are differently-shaped membranes that have the same vibrational characteristics. What are “orbitals” of electrons around an atomic nucleus (hazy memories of chemistry)? They’re *representations of the Lie group $\mathrm{SO}(3)$!*

So what can we do with $E_8$? Nothing right now, but there’s plenty we can do (and have done) with representation theory in general.

There’s another reason (beyond the intrinsic beauty of the ideas) to work out the Atlas: more data means more patterns, and more patterns means more interrelationships between seemingly-distinct fields. Quite a few of the greatest theorems in recent years have been saying that this field of mathematics over here and that one over there are “really” the same thing. Everyone knows that Andrew Wiles solved Fermat’s Last Theorem, but what he really did was show that some things in algebraic geometry (the study of solution sets of polynomials) called “elliptic curves” are deeply related to functions with a certain sort of periodicity called “modular forms”. If, as David Corfield asserts, mathematics proceeds by “telling stories”, then each field’s stories become allegories for the other. Hard questions in one area might be translated into questions we know how to solve in the other.

So how does having a lot of data like the Atlas around help out? Because we discover a lot of these relationships from similar patterns in the data, and in many cases (though I hate to admit it) through the same numbers showing up over and over. As just one example, I present the Monstrous Moonshine conjecture. The Monster is a finite, simple group — no normal subgroups, so it can’t be broken down into even a semidirect product of smaller groups — of order (brace yourself)

$$2^{46}\cdot 3^{20}\cdot 5^{9}\cdot 7^{6}\cdot 11^{2}\cdot 13^{3}\cdot 17\cdot 19\cdot 23\cdot 29\cdot 31\cdot 41\cdot 47\cdot 59\cdot 71$$

That’s roughly $8\times 10^{53}$ elements being juggled around in an intricate symmetry. People sat down and calculated its character table, very much a similar project to the current one about $E_8$. And then there’s a certain special modular form called $j$ that just happens to be related to it. How so? John McKay happened to see the $j$-function written out like this:

$$j(\tau) = q^{-1} + 744 + 196884q + 21493760q^2 + \cdots$$

So? So he’d also seen the dimensions of representations of the Monster, which start with $1$, $196883$, $21296876$, and continue. *Every single coefficient in the $j$-function came from dimensions of representations of the Monster!* And it was conjectured that the pattern continued. In fact it did. Twenty-some years ago, Frenkel, Lepowsky, and Meurman constructed a representation of the Monster that made it clear, and their results are still echoing. One of my colleagues graduated last year and went on to Harvard by studying exactly the same sorts of connections.
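A trivial check (mine) of the coincidence McKay spotted, using the well-known first few numbers:

```python
# Smallest irreducible representation dimensions of the Monster,
# and the first two q-expansion coefficients of the j-function.
monster_dims = [1, 196883, 21296876]
j_coeffs = [196884, 21493760]

# 196884 = 1 + 196883, and 21493760 = 1 + 196883 + 21296876.
assert j_coeffs[0] == monster_dims[0] + monster_dims[1]
assert j_coeffs[1] == sum(monster_dims)
print("McKay's pattern holds for the first two coefficients")
```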

And how did it start? By recognizing patterns in a mountain of raw data about representations. What unsolved problems might be translatable into representation theory by reflections found through the Atlas data? Maybe the Navier-Stokes equations, which would give a better understanding of fluid flows and aerodynamics. Maybe the Riemann hypothesis, which would lead to a better understanding of the distributions of prime numbers, which would have an impact on modern cryptography. Who knows?

Oh, and one more thing. How did someone find the Monster in the first place? Well it turns out to be a group of symmetries of a certain collection of points tiling eight-dimensional space. What collection of points? The “Leech lattice”. And you’ve already seen it: that picture of the $E_8$ root system in all the news reports is the basic cell, just like a square is the basic cell of a checkerboard tiling of the plane. And it all comes back around again.

How the heck can you *not* care about this stuff?

*[EDIT: I've found out I was wrong about how the Monster relates to $E_8$. More info in the link.]*