The Unapologetic Mathematician

Mathematics for the interested outsider

Geeking out

I seem to be the only one around who thinks this is hilarious, or even gets it. I am the biggest geek in the department of mathematics.

March 31, 2007 Posted by | Uncategorized | 1 Comment

Ring homomorphisms

There is a special kind of function between rings, just as there is for groups. Given rings R and S, a function f:R\rightarrow S is called a homomorphism if it preserves all the ring structure.

The sort of odd thing here is that we’ve got two different kinds of rings to consider: those with and without identities. If we’re considering rings in general, we require that

  • f(r_1+r_2)=f(r_1)+f(r_2)
  • f(r_1r_2)=f(r_1)f(r_2)

but if we’re restricting ourselves to rings with identities, we also require that

  • f(1)=1

where the 1 on the left is the identity of R, and the one on the right is the identity of S. If we have two rings with identities but consider them as general rings, there may be more homomorphisms than if we consider them as rings with identity. It becomes important to pay a bit of attention to what kind of rings we’re really concerned with.

As an exercise, consider an arbitrary ring R and see what ring homomorphisms exist from \mathbb{Z} to R. If R has an identity, which of these homomorphisms preserve the identity?
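To make the distinction concrete (a hint, not the whole exercise): in \mathbb{Z}_6 the element 3 satisfies 3\cdot3=3, so the map n\mapsto 3n preserves addition and multiplication without sending 1 to 1. Here’s a quick Python sanity check — the choice of \mathbb{Z}_6 is just my example, not part of the exercise:

    # Sketch: f(n) = 3n mod 6 is a homomorphism of general rings from the
    # integers to Z_6, but not a homomorphism of rings with identity.
    def f(n):
        return (3 * n) % 6

    for a in range(-20, 20):
        for b in range(-20, 20):
            assert f(a + b) == (f(a) + f(b)) % 6   # preserves addition
            assert f(a * b) == (f(a) * f(b)) % 6   # preserves multiplication

    print(f(1))  # prints 3, not 1: the identity is not preserved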

Oh, and I probably should mention this: all the terminology from groups comes along for the ride. An injective (one-to-one) ring homomorphism is a monomorphism. A surjective (onto) ring homomorphism is an epimorphism. One that’s both is an isomorphism. A homomorphism from a ring to itself is an endomorphism, and an isomorphism from a ring to itself is an automorphism.

[EDIT: cleaned up LaTeX error and added comments at the end about terminology.]

March 31, 2007 Posted by | Ring theory | 1 Comment

KLV errata

I just got home from a long discussion with Dr. Zuckerman about this whole business. I’m not quite ready to say exactly what’s going on, but I want to correct a couple errors that I’ve made. Let it not be said that I don’t admit when I’m wrong.

Firstly, in my little added remarks about the Monster group in my “Why We Care” post, I was oversimplifying. For one thing, the E_8 lattice is not the Leech lattice — the Leech lattice lives in 24-dimensional space (doh). Basically, you put together three copies of the E_8 lattice and then tweak it a bit.

Putting them together I can explain. The simplest lattice is just the integers sitting inside the real line. If you move to the plane, the points with integer coordinates sit at the corners of the squares in a checkerboard tiling of the plane. This is “adding two copies of the integer lattice”. For three copies of E_8, we want 24-tuples of numbers so the first eight, second eight, and third eight are each the coordinates of a point in the E_8 lattice.

When you do this, it turns out there’s just enough room to squeeze in some more points to get a new lattice. That’s the Leech lattice. The Monster also isn’t quite just a group of symmetries of this lattice, so there are still a few more steps to go, but it’s definitely related. So the connection isn’t quite as close as I’d implied, but it’s there.

The other thing is about real forms. I’d forgotten that not every choice of “realification” of the Killing form gives a Lie group, and further that not every choice that does work gives a unique Lie group.

What is true is that to every real form G(\mathbb{R}) of a complex Lie group G, there’s a maximal compact subgroup K(\mathbb{R}). Compactness means, roughly, that the group curves back in on itself like the circle or the torus, and doesn’t run off to infinity like the line or the cylinder. Then we can “complexify” this group to get another complex group K that’s really interesting to us. This group K is a subgroup of G, which will be important. In particular, if we take the compact real form of G, its maximal compact subgroup is just itself, so its complexification K is just G back again.

March 31, 2007 Posted by | Atlas of Lie Groups | 2 Comments

Coloring knots

Today I’m going to be talking to the graduate students about various topics relating to coloring knots. I think I’ll leave you with a little project to play with.

First, go to Bar-Natan’s table of knots. Notice how all the diagrams seem to be made up of arcs meeting up where one strand of the knot crosses under another. Pick a knot diagram and try to color each arc either red, green, or blue, subject to the following rule: at any crossing, the three arcs that meet (two for the undercrossing strand and one for the overcrossing) must either be all the same color or all different colors.

Which knots can you color using all three colors at least once? If that’s too easy for you, how many ways can you color a given knot? If that’s too easy for you, you’ve almost surely seen this before.

To get you started, I’ve tricolored the trefoil knot using all three colors.
Tricolored Trefoil
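If you want to check your answers by brute force, here’s a little Python sketch. I’ve encoded the standard trefoil diagram by hand — each crossing lists the three arcs that meet there — so the crossing data is my own labeling, and you’d have to transcribe it similarly for other knots in the table:

    from itertools import product

    # The standard trefoil diagram has three arcs (0, 1, 2) and three
    # crossings; at each crossing all three arcs happen to meet.
    crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

    def valid(coloring):
        # At each crossing the three arc colors must be all the same
        # or all different -- exactly two colors is forbidden.
        return all(len({coloring[x], coloring[y], coloring[z]}) != 2
                   for x, y, z in crossings)

    colorings = [c for c in product("RGB", repeat=3) if valid(c)]
    all_three = [c for c in colorings if len(set(c)) == 3]
    print(len(colorings), len(all_three))  # 9 colorings, 6 using all three colors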

March 30, 2007 Posted by | Knot theory | 3 Comments

How to Play by Yourself

Sometime between dragging myself into my bed after the calculus exam and related activities last night and dragging myself back out of it in time to teach this morning, an article claiming to solve triangular peg solitaire went up on the arXiv. I’ve obviously not had time to read it, so I don’t know how good it is, but the subject matter at least should be pretty generally accessible.

March 30, 2007 Posted by | Uncategorized | 1 Comment

Different kinds of rings

There are a number of different kinds of rings differentiated (sorry) by properties of their multiplications. Most of them lead into their own specialized areas of study. I mentioned that a ring may or may not be commutative, and it may or may not have an identity, but there are a few more that will be useful.

One initially counterintuitive idea is that it’s entirely possible that a ring has “zero divisors”: two nonzero elements that multiply to give zero. Imagine starting with two copies of the integers, \mathbb{Z} and \bar{\mathbb{Z}}, writing elements of the second copy as integers with a bar over them. Now consider pairs of elements, one from each copy, (a,\bar{b}). Add pairs by adding the two components, but multiply them like this:
(a,\bar{b})(c,\bar{d})=(ac,\bar{ad+bc})
Notice that the product of any two elements of \bar{\mathbb{Z}} is zero! Weird. Eerie.

To be explicit: an element of this ring coming from \bar{\mathbb{Z}} is (0,\bar{a}). We calculate the product:
(0,\bar{a})(0,\bar{b})=(0\cdot 0,\bar{0\cdot b+a\cdot 0})=(0,\bar{0})

So, any nonzero element a for which there is a nonzero b so that ab=0 is called a left zero divisor. Right zero divisors are defined similarly. If a ring has no zero divisors, so the product of two nonzero elements is always nonzero, we call it an “integral domain”. The integers are just such an integral domain, fittingly enough.
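Here’s the example transcribed into a quick Python sketch, with the pair (a,\bar{b}) represented as a plain tuple (a, b):

    # The ring above: addition is componentwise, and multiplication is
    # (a, b)(c, d) = (ac, ad + bc), matching the definition in the post.
    def add(p, q):
        return (p[0] + q[0], p[1] + q[1])

    def mul(p, q):
        (a, b), (c, d) = p, q
        return (a * c, a * d + b * c)

    print(mul((0, 5), (0, 7)))  # (0, 0): two nonzero elements multiplying to zero
    print(mul((1, 0), (3, 4)))  # (3, 4): the pair (1, 0) acts as an identity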

Now if a ring has a multiplicative identity we can start talking about multiplicative inverses. We say an element a has a left inverse b if ba=1, or a right inverse c if ac=1. If an element has both a left and a right inverse, they’re the same, since
b=b1=b(ac)=(ba)c=1c=c
In this case we call a a unit and write its inverse as a^{-1}. We can also see that an element having a left (right) inverse cannot be a left (right) zero divisor:
ax=0\Rightarrow x=1x=(ba)x=b(ax)=b0=0
If every nonzero element of a ring is a unit, we call it a division ring.

In the case of commutative rings, all these distinctions between “left” and “right” (zero divisors, inverses, etc.) disappear, since multiplication doesn’t care about the order of the factors. We actually have a special name for a commutative division ring: we call it a “field”, though everyone else in the world except the Belgians seems to call it a “(dead) body” (Körper, corps, test, lichaam, …).

[EDIT: added explicit calculation verifying that elements from \bar{\mathbb{Z}} in the example are zero-divisors.]

March 29, 2007 Posted by | Ring theory | 3 Comments

The ring of integers

As I mentioned before, the primal example of a ring is the integers \mathbb{Z}. So far we’ve got an ordered abelian group structure on the set of (equivalence classes of) pairs of natural numbers. Now we need to add a multiplication that distributes over the addition.

First we’ll figure out how to multiply natural numbers. This is pretty much as we expect. Remember that a natural number is either 0 or S(b) for some number b. We define
a\cdot0=0
a\cdot S(b)=(a\cdot b)+a
where we’ve already defined addition of natural numbers.
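As an aside, this definition transcribes directly into Python if we model a natural number as a nonnegative int, reading b-1 as “the b' with b=S(b')” — a sketch, not an honest inductive datatype:

    def mult(a, b):
        # a . 0 = 0
        if b == 0:
            return 0
        # a . S(b') = (a . b') + a
        return mult(a, b - 1) + a

    print(mult(3, 4))  # 12, unwinding as (((0 + 3) + 3) + 3) + 3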

Firstly, this is commutative. This takes a few inductions. First show by induction that 0 commutes with everything, then show by another induction that if a commutes with everything then so does S(a). Then by induction, every number commutes with every other. I’ll leave the details to you.

Similarly, we can use a number of inductions to show that this multiplication is associative — (a\cdot b)\cdot c=a\cdot(b\cdot c) — and distributes over addition of natural numbers — a\cdot(b+c)=a\cdot b+a\cdot c. This is extremely tedious and would vastly increase the length of this post without really adding anything to the exposition, so I’ll again leave you the details. I’m reminded of something Jeff Adams said (honest, I’m not trying to throw these references in gratuitously) in his class on the classical groups. He told us to verify that the commutator in an associative algebra satisfies the Jacobi identity because, “It’s long and tedious and doesn’t add much, but I had to do it when I was a grad student, so now you’re grad students and it’s your turn.”

So now these operations — addition and multiplication — of natural numbers make \mathbb{N} into what some call a “semiring”. I prefer (following John Baez) to call it a “rig”, though: a “ring without negatives”. We use this to build up the ring structure on the integers.

Recall that the integers are (for us) pairs of natural numbers considered as “differences”. We thus define the product
(a,b)\cdot(c,d)=(a\cdot c+b\cdot d,a\cdot d+b\cdot c)

Our life now is vastly easier than it was above: since we know addition and multiplication of natural numbers is commutative, the above expression is manifestly commutative. No work needs to be done! Associativity is also easy: just set up both triple products and expand out, checking that each term is the same by the rig structure of the natural numbers. Similarly, we can check distributivity, that (1,0) acts as an identity, and that the product of two integers is independent of the representing pair of natural numbers.
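A Python sketch again, with the pair (a,b) standing for the difference a-b, so that (a,b) and (c,d) represent the same integer exactly when a+d=c+b:

    def mul(p, q):
        (a, b), (c, d) = p, q
        return (a * c + b * d, a * d + b * c)

    def same_integer(p, q):
        # (a, b) and (c, d) are equivalent iff a + d == c + b
        return p[0] + q[1] == q[0] + p[1]

    two, also_two, minus_three = (2, 0), (5, 3), (1, 4)
    # the product doesn't depend on which pair represents 2:
    print(same_integer(mul(two, minus_three), mul(also_two, minus_three)))  # True
    print(mul((1, 0), (7, 2)))  # (7, 2): the pair (1, 0) really is an identity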

Lastly, multiplication by a positive integer preserves order. If a<b and 0<c then ac<bc. Together all these properties make the integers as we’ve defined them into a commutative ordered ring with unit. The proofs of all these things have been incredibly dull (I actually did them all today just to be sure how they worked), but it’s going to get a lot easier soon.

March 29, 2007 Posted by | Fundamentals, Numbers, Ring theory | 1 Comment

Rubik’s Cube Wrapup

I want to tie up a few loose ends about Rubik’s group today.

We can fit Rubik’s group into a sequence that more clearly shows all the structure I’m talking about. Specifically, it’s a subgroup of the bigger group I mentioned back at the beginning. We can restate the three restrictions as saying the maneuvers in Rubik’s group are those in the kernel of a certain homomorphism. So, first let’s write down the big group.

The unrestricted edge and corner groups are just wreath products, which I’ll write out as semidirect products. Without restrictions, these two groups are independent, so we just have a direct product to give the unrestricted Rubik’s group.
\bar{G}=\left(\mathbb{Z}_2^{12}\rtimes S_{12}\right)\times\left(\mathbb{Z}_3^8\rtimes S_8\right)
I’ll write (((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c)) for a generic element of this group. Each part of this list corresponds to part of the expression for \bar{G} above.

Now we want to add up all the edge flips and make them come out to zero. We can write this sum as a homomorphism:
e:\bar{G}\rightarrow\mathbb{Z}_2
e(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))=e_1+e_2+...+e_{12}
where the sum is taken in the group \mathbb{Z}_2. You should be able to verify that this actually is a homomorphism. Similarly, we want the sum of the total twists as a homomorphism:
c:\bar{G}\rightarrow\mathbb{Z}_3
c(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))=c_1+c_2+...+c_8
where the sum is taken in \mathbb{Z}_3.

Finally, the permutation condition uses the “signum” homomorphism from a symmetric group to \mathbb{Z}_2. It assigns the value 0 to even permutations and the value 1 to odd ones. We use it to write the last restriction as a homomorphism:
p:\bar{G}\rightarrow\mathbb{Z}_2
p(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))={\rm sgn}(\sigma_e)+{\rm sgn}(\sigma_c)

Now we assemble our overall restriction homomorphism as the direct product of these three:
f=e\times c\times p:\bar{G}\rightarrow\mathbb{Z}_2\times\mathbb{Z}_3\times\mathbb{Z}_2
and get the short exact sequence:
\mathbf{1}\rightarrow G\rightarrow\bar{G}\xrightarrow{f}\mathbb{Z}_2\times\mathbb{Z}_3\times\mathbb{Z}_2\rightarrow\mathbf{1}
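To see the kernel condition in action, here’s a Python sketch of f. The encoding — a 12-tuple of flips in \mathbb{Z}_2, an 8-tuple of twists in \mathbb{Z}_3, and the two permutations as tuples — is my own bookkeeping, with the signum computed by counting inversions:

    def parity(perm):
        # signum landing in Z_2: 0 for even permutations, 1 for odd
        n = len(perm)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if perm[i] > perm[j]) % 2

    def f(flips, sigma_e, twists, sigma_c):
        # the restriction homomorphism into Z_2 x Z_3 x Z_2
        return (sum(flips) % 2, sum(twists) % 3,
                (parity(sigma_e) + parity(sigma_c)) % 2)

    def in_rubiks_group(flips, sigma_e, twists, sigma_c):
        # Rubik's group is the kernel of f
        return f(flips, sigma_e, twists, sigma_c) == (0, 0, 0)

    # a single flipped edge fails the first condition:
    print(in_rubiks_group((1,) + (0,) * 11, tuple(range(12)),
                          (0,) * 8, tuple(range(8))))  # False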

Commenter Dan Hoey brought up where my fundamental operations come from. To be honest, these four are just ones I remember off the top of my head. He’s right, though, that there are systematic ways of coming up with maneuvers that perform double-flips, double-twists, and 3-cycles. I’ll leave you to read his comment and work out for yourself that you can realize four such basic maneuvers as commutators — products of elements of the form m_1m_2m_1^{-1}m_2^{-1}. This means that the commutator subgroup \left[G,G\right] of Rubik’s group is almost all of G itself. It just misses a single twist. In fact, G/\left[G,G\right]\cong\mathbb{Z}_2 — Rubik’s group is highly non-abelian.

Incidentally, this approach to the cube is not the first one I worked out, but it’s far more elegant than my pastiche of particular tools. I picked it up back when I was at the University of Maryland from a guy who had worked it out while he was at Yale as a graduate student back when the cube first came out: Jeff Adams.

March 28, 2007 Posted by | Group theory, Rubik's Cube | 2 Comments

That’ll learn me

I wanted to see how a book I’d checked out from our library treated a certain topic, hoping that it might have a theorem all ready for me to use. Unfortunately I remembered neither the authors nor exactly what it was called, but I did remember what it looked like. So I went to the library and tried to find it with no luck. As a fallback, I asked Paul.

Paul Lukasiewicz is our librarian, and has been around forever. People I know who were students here in the early ’80s thought of him as omniscient already. You can give him a title and author of any book in the library and he can tell you what color it is off the top of his head.

Unfortunately it doesn’t work in reverse, so I was driven back to the online directory of the library system here to search through hundreds of books on the topic to find the one I remembered. They really need to make those things searchable on appearance.

March 27, 2007 Posted by | Uncategorized | Leave a comment

More sketches, and why we care

Dr. Adams just sent me a link to an explanation of the technical details for mathematicians in other fields, but it’s still somewhat readable.

I also have been reading the slides for Dr. Vogan’s talk, The Character Table for E8, or How We Wrote Down a 453,060 x 453,060 Matrix and Found Happiness. There’s also an audio recording available (7MB mp3). Incidentally, I’d have gone for The Split Real Form of E8, or How We Learned to Stop Worrying and Love the Character Table, but it’s all good. This talk actually manages to be very generally accessible, and includes all sorts of pretty pictures. Those of you who wanted more visuals than I provided in my rough overview might like to check that one out.

Together, these two form the core material that, with some input from Dr. Zuckerman, I’ll be trying to break down into smaller chunks. I highly advise reading at least Vogan’s slides, and preferably Adams’ notes as well.

I also want to respond to a comment basically asking, “so why the heck should we care about this?” It’s an excellent question, and yet another one the newspaper reports really glossed over without taking seriously. I’ll admit that I glossed it over at first too, since I think this stuff is just too elegant not to love. Still, I’ve mulled this over not just as applies to these calculations, but with regard to a lot of mathematics at this level (thus qualifying the “why we care” as a rant).

This sort of question from a non-mathematician almost always is looking for an engineering response. “What’s it good for?” means, “what can we build with it?” Honestly I have to say “not much”. Representations of Lie groups do have their uses, though, and I can point out a few things they have already been good for.

As indicated in Dr. Vogan’s slides, representations of the one-dimensional Lie groups are concerned with change through time, particularly periodic changes. This means that they’re exceptionally good at talking about periodic phenomena, like waves. Sound waves, light waves, electrical circuits, vibrating strings — they’re all one-dimensional waves. So what? So every time you use the graphic equalizer on your stereo the electronics are taking the signal and performing a fast Fourier transform on it. This turns a function on the line (Lie group) into a function on the space of all representations of the group; that’s the “unitary dual” that Dr. Adams refers to. Then you can adjust the periodic components and reconstruct a new function with much fatter bass, or whatever your tastes are.
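Just to make that concrete, here’s a sketch of the equalizer idea using numpy — the signal and the “bass” cutoff are invented for illustration:

    import numpy as np

    rate = 8000                         # samples per second (assumed)
    t = np.arange(rate) / rate          # one second of time samples
    # a made-up signal: a 100 Hz bass tone plus a quieter 1000 Hz tone
    signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

    spectrum = np.fft.rfft(signal)      # function on the line -> function on the dual
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    spectrum[freqs < 300] *= 2          # fatter bass: boost everything under 300 Hz
    boosted = np.fft.irfft(spectrum, n=len(signal))  # back to a signal in time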

The same sorts of things can be done in higher dimensions. Similar techniques revealed that you can’t hear the shape of a drum — there are differently-shaped membranes that have the same vibrational characteristics. What are “orbitals” of electrons around an atomic nucleus (hazy memories of chemistry)? They’re representations of the Lie group SO(3,\mathbb{R})!

So what can we do with E_8? Nothing right now, but there’s plenty we can do (and have done) with representation theory in general.

There’s another reason (beyond the intrinsic beauty of the ideas) to work out the Atlas: more data means more patterns, and more patterns means more interrelationships between seemingly-distinct fields. Quite a few of the greatest theorems in recent years have been saying that this field of mathematics over here and that one over there are “really” the same thing. Everyone knows that Andrew Wiles solved Fermat’s Last Theorem, but what he really did was show that some things in algebraic geometry (the study of solution sets of polynomials) called “elliptic curves” are deeply related to functions with a certain sort of periodicity called “modular forms”. If, as David Corfield asserts, mathematics proceeds by “telling stories”, then each field’s stories become allegories for the other. Hard questions in one area might be translated into questions we know how to solve in the other.

So how does having a lot of data like the Atlas around help out? Because we discover a lot of these relationships from similar patterns in the data, and in many cases (though I hate to admit it) through the same numbers showing up over and over. As just one example, I present the Monstrous Moonshine conjecture. The Monster is a finite, simple group — no nontrivial normal subgroups, so it can’t be broken down into even a semidirect product of smaller groups — of order (brace yourself)

808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000

That’s 8\times10^{53} elements being juggled around in an intricate symmetry. People sat down and calculated its character table, very much a similar project to the current one about E_8. And then there’s a certain special modular function called j that just happens to be related to it. How so? John McKay happened to see the j-function written out like this:

j(\tau) = \frac{1}{q} + 744 + 196884q + 21493760q^2 + ...
where q=e^{2\pi i\tau}.

So? So he’d also seen the dimensions of representations of the Monster, which start with 1, 196883, 21296876, and continue. Every single coefficient in the function is a simple sum of dimensions of representations of the Monster: 196884 = 196883 + 1, and 21493760 = 21296876 + 196883 + 1. And it was conjectured that the pattern continued. In fact it did. Twenty-some years ago, Frenkel, Lepowsky, and Meurman constructed a representation of the Monster that made it clear, and their results are still echoing. One of my colleagues graduated last year and went on to Harvard by studying exactly the same sorts of connections.

And how did it start? By recognizing patterns in a mountain of raw data about representations. What unsolved problems might be translatable into representation theory by reflections found through the Atlas data? Maybe the Navier-Stokes equations, which would give a better understanding of fluid flows and aerodynamics. Maybe the Riemann hypothesis, which would lead to a better understanding of the distributions of prime numbers, which would have an impact on modern cryptography. Who knows?

Oh, and one more thing. How did someone find the Monster in the first place? Well it turns out to be a group of symmetries of a certain collection of points tiling eight-dimensional space. What collection of points? The “Leech lattice”. And you’ve already seen it: that picture of the E_8 root system in all the news reports is the basic cell, just like a square is the basic cell of a checkerboard tiling of the plane. And it all comes back around again.

How the heck can you not care about this stuff?

[EDIT: I’ve found out I was wrong about how the Monster relates to E_8. More info in the link.]

March 26, 2007 Posted by | Atlas of Lie Groups, rants | 5 Comments