Okay, it’s been pointed out to me that what I was thinking of in my update to yesterday’s post was a little more general than group theory. In the case of groups, preserving the composition is all that’s required. If I talk about semigroups later that extra condition will be needed.
So, I’ll leave it as a (relatively straightforward) exercise to show that if a function from one group to another preserves the composition, then it also preserves identities. Oh, and inverses. May as well nail that down while we’re at it.
For now I’ll make this point, which I also make to all my calculus classes: it’s really not that bad to misremember something. That’s one of the nice things about mathematics. If you make a mistake it’s usually not hard to check. Also, when two people have different views on something it’s possible to check which one is right. There are real answers to be found. Despite the accusations of coldness and impersonality, it’s comforting to know that something has a real answer.
At last we come to the notion of a homomorphism of groups. These are really, in my view, the most important parts of the theory. They show up everywhere, and the structure of group theory is intimately bound up with the way homomorphisms work.
So what is a homomorphism? It’s a function $f$ from the set of members of one group $G$ to the set of members of another group $H$ that “preserves the composition”. That is, a homomorphism takes an element $g$ of $G$ and gives back an element $f(g)$ of $H$. It has the further property that $f(g_1g_2) = f(g_1)f(g_2)$. The product of $g_1$ and $g_2$ uses the composition from $G$, while the product of $f(g_1)$ and $f(g_2)$ uses the composition of $H$.
Let’s consider an example very explicitly: a homomorphism $f: S_3 \to \mathbb{Z}_2$. Remember that $S_3$ is the group of rearrangements of 3 objects (I’ll use a, b, and c), while $\mathbb{Z}_2$ is the group of “addition modulo 2”. The homomorphism sends a permutation to 0 if it can be built from an even number of swaps, and to 1 if it takes an odd number.
If we consider the permutations $(a\,b)$ and $(b\,c)$ in $S_3$, each one is sent to 1 in the group $\mathbb{Z}_2$, and $1+1 = 0$ there. On the other hand, their composition is the 3-cycle $(a\,b\,c)$, which is sent to 0. The composition of the images is the image of the composition. We can pick any two permutations and see the same thing.
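Since $S_3$ is finite, the homomorphism property can be checked exhaustively. Here’s a quick Python sketch (the dictionary encoding and the function names are my own, not anything from the post) that composes permutations right-to-left and verifies $f(pq) = f(p) + f(q)$ modulo 2 for every pair:

```python
from itertools import permutations

# A permutation of "abc" as a dict; compose right-to-left (apply q
# first, then p), matching the convention adopted in these posts.
def compose(p, q):
    return {x: p[q[x]] for x in q}

def parity(p):
    # The image in Z_2: count inversions; even permutations go to 0,
    # odd ones (like single swaps) go to 1.
    vals = [p[x] for x in "abc"]
    return sum(1 for i in range(3) for j in range(i + 1, 3)
               if vals[i] > vals[j]) % 2

elems = [dict(zip("abc", perm)) for perm in permutations("abc")]
for p in elems:
    for q in elems:
        # The composition of the images is the image of the composition.
        assert parity(compose(p, q)) == (parity(p) + parity(q)) % 2
```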
Another example: $f: \mathbb{Z} \to \mathbb{Z}$ with $f(n) = 3n$. The homomorphism property says that $f(m+n) = f(m) + f(n)$, and indeed we see that $3(m+n) = 3m + 3n$.
Another: $\exp: \mathbb{R} \to \mathbb{R}^+$. By $\mathbb{R}$ I mean the real numbers with addition as composition, and by $\mathbb{R}^+$ I mean the positive (nonzero) real numbers with multiplication. I define $\exp(x) = e^x$. The laws of exponents tell us that $e^{x+y} = e^x e^y$.
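Both of the last two examples are easy to spot-check numerically. A minimal Python sketch (the function names are mine):

```python
import math
import random

def f(n):
    return 3 * n          # f : Z -> Z, a homomorphism for addition

def g(x):
    return math.exp(x)    # g : (R, +) -> (R_{>0}, *), sums become products

for _ in range(1000):
    m, n = random.randint(-100, 100), random.randint(-100, 100)
    assert f(m + n) == f(m) + f(n)
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # Floating point, so compare up to rounding error.
    assert math.isclose(g(x + y), g(x) * g(y))
```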
As we continue we will see many more examples of homomorphisms. For now, there are a few definitions we will find useful later. Recall from the discussion about functions that a surjection is a function between sets that hits every point in its codomain at least once. A group homomorphism that is also a surjection we call an “epimorphism”. Similarly, an injection is a function that hits every point in its codomain at most once. A group homomorphism that is also an injection we call a “monomorphism”. A homomorphism that is both — the function is a bijection — we call an “isomorphism”. In the above examples, the parity homomorphism $S_3 \to \mathbb{Z}_2$ is an epimorphism, $f(n) = 3n$ is a monomorphism, and $\exp$ is an isomorphism.
If a homomorphism’s domain and codomain groups are the same, as in $f(n) = 3n$ above, we call it an “endomorphism” on the group. If it’s also an isomorphism we call it an “automorphism”. The homomorphism $f(n) = 3n$ is not an automorphism, since it doesn’t hit any point that’s not a multiple of 3.
And finally, a few things to think about.
- Can you construct a homomorphism from $\mathbb{Z}$ to $\mathbb{Z}$ similar to $f(n) = 3n$ above, but for multipliers other than 3?
- What homomorphisms can you construct from $\mathbb{Z}$ to $\mathbb{Z}_2$? to $\mathbb{Z}_n$? to an arbitrary group $G$?
- What homomorphisms can you construct from $\mathbb{Z}_n$ to $\mathbb{Z}$?
UPDATE: I just remembered that I left off another technical requirement. A homomorphism has to send the identity of the first group to the identity of the second. It usually doesn’t cause a problem, but I should include it to be thorough. It isn’t hard to verify that all the homomorphisms I mentioned satisfy this property too.
Discovery Place, a science museum in Charlotte, NC, has a candy exhibit running, sponsored by Jelly Belly. Or “had”, I should say. Part of it has been taken down, and it’s all thanks to an eight-year-old boy.
The feature in question was a big pyramid filled with brand-name jelly beans. The question posed is how to estimate the number of jelly beans inside. Here’s what the plaque said:
A jelly bean has a volume of about 1 cubic cm. This container is half a pyramid.
Its base measures 46 cm by 23 cm and its height is 72 cm.
Here’s the formula to find the volume: 1/3 x base area x height.
Now divide your answer by 2 since this is half a pyramid.
Now multiply your answer by 0.9 to account for spaces between the jelly beans.
The answer should be 22,853.
Do you see the problem here? If “base area” in the formula meant the area of the full square base the pyramid was cut from, everything would be fine. However, since that term refers to the actual base of the half-pyramid rather than “the area the base would have if it were a square”, we’ve already taken the factor of 1/2 into account. Dividing by 2 again is extraneous.
Apparently nobody caught this until Parker Garrison came along. He’s evidently the only one who actually did the calculations and saw that the result of the given steps wasn’t what was printed. He knew something was funny, and instead of just piping down and moving on he spoke up about it.
This sort of critical thinking and willingness to take action is just what we need more of, and the mathematical skill and interest needs to be nurtured. I don’t know if you’re out there, Parker, but if someone can put me in contact with you I’ve got a copy of Martin Gardner’s New Mathematical Diversions: More Puzzles, Problems, Games, and Other Mathematical Diversions (which has a section on the sphere packing problem) with your name on it. That should help explain where that 0.9 comes from.
I’m soon going to need to really use the notion of a function, and I want to make sure that I lay the groundwork properly. This is also a good place to mention a few things about sets.
For most of my purposes, a naïve concept of sets will suffice. A set is a collection of objects, called the elements of the set. In formal treatments of set theory, the elements are themselves sets. In fact, everything in sight is “really” a set. This sort of foundationalist approach, though, tends to obscure the real structure of mathematical theories, so I’ll avoid talking about formal set theory unless it’s absolutely necessary. What we’ll need from set theory are a few operations on sets.
- Specification: If we have a set $X$ and some statement $\phi$ that can be unambiguously determined true or false for each element of $X$, there is a set containing exactly those elements $x$ of $X$ so that $\phi(x)$ is true. We write this set as $\{x \in X : \phi(x)\}$, read “the set of those elements $x$ of $X$ such that $\phi$ is true of $x$”.
- Intersection: This is actually a special case of specification. For our statement we use “$x$ is an element of the set $Y$”. This gives us the set of all elements in both $X$ and $Y$, and we write this set $X \cap Y$. In practice, we will allow intersections not just of two sets, but of any number of sets — even infinitely many.
- Union: For any two sets $X$ and $Y$ there is a set containing any element in either $X$ or $Y$. We write this set as $X \cup Y$. As for intersection, we will allow unions of any collection of sets.
- Cartesian product: For any two sets $X$ and $Y$ there is the set of ordered pairs $(x, y)$, where $x$ is an element of $X$ and $y$ is an element of $Y$. We write this set $X \times Y$. Again, we allow Cartesian products of any number of sets, though only a finite number at a time here.
- Empty set: While not a “construction”, per se, the empty set is something important to pay attention to. This is, predictably enough, the set containing no elements at all. We write it $\varnothing$.
- Subset: If every element of $X$ is also an element of $Y$ we say $X$ is a subset of $Y$, written $X \subseteq Y$. Note that specification gives us a subset of $X$.
- Power set: For any set $X$ we have the set of all subsets of $X$. We write this set $\mathcal{P}(X)$.
I may have forgotten some, but I will mention those (and add them here) if I realize it later.
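For finite sets, every one of these operations can be played with directly in Python. A small sketch (the sets $A$ and $B$ are my own toy examples):

```python
from itertools import chain, combinations, product

A = {1, 2, 3}
B = {2, 3, 4}

evens = {x for x in A if x % 2 == 0}   # specification: {x in A : x even}
both = A & B                           # intersection
either = A | B                         # union
pairs = set(product(A, B))             # Cartesian product: ordered pairs

# Power set: all subsets of A, as frozensets so they can live in a set.
power = {frozenset(s) for s in chain.from_iterable(
    combinations(sorted(A), r) for r in range(len(A) + 1))}

assert evens <= A                      # specification gives a subset
assert len(power) == 2 ** len(A)       # 2^3 = 8 subsets
```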
Anyhow, a function is basically a rule that assigns to each element of one set an element of another. Formally, we need to specify a “domain” set $X$ and a “codomain” set $Y$ (the codomain is often called the “range”). For every element of the domain, there is a uniquely specified element in the codomain. Often there is some sort of calculational method to determine the value of the function, but a simple lookup table will suffice. We write $f: X \to Y$ to specify a function $f$ with domain $X$ and codomain $Y$.
There are a few properties a function may have that are worth mentioning here. Every function assigns a value in its codomain to every element in its domain. If every element in the codomain is the value of the function at some element of the domain we say that the function is “onto” or a “surjection”. Every function assigns only one value to every element in the domain. If no element in the codomain is the value of the function at more than one element of the domain, we say that the function is “one-to-one” or an “injection”. If both of these properties hold, we say the function is a “bijection”.
Let’s consider four sets: $S = \{1, 2, 3\}$, $T = \{1, 2\}$, $U = \{1, 2, 3, 4\}$, and $V = \{a, b, c\}$. We use these to define a number of examples of functions.
- $f: S \to T$, with $f(1) = 1$, $f(2) = 2$, and $f(3) = 2$. This function is surjective, but not injective.
- $g: S \to U$, with $g(1) = 2$, $g(2) = 3$, and $g(3) = 4$. This function is injective, but not surjective.
- $h: S \to U$, with $h(1) = 1$, $h(2) = 1$, and $h(3) = 2$. This function is neither injective nor surjective.
- $k: S \to V$, with $k(1) = a$, $k(2) = b$, and $k(3) = c$. This function is bijective.
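A function given by a lookup table can be tested for these properties mechanically. A Python sketch (the particular sets and values here are my own choices, consistent with the behaviors described above):

```python
# Four small sets and four lookup-table functions on them.
S, T, U, V = {1, 2, 3}, {1, 2}, {1, 2, 3, 4}, {"a", "b", "c"}

def is_surjective(f, codomain):
    # Onto: every point of the codomain is hit at least once.
    return set(f.values()) == codomain

def is_injective(f):
    # One-to-one: no value is hit more than once.
    return len(set(f.values())) == len(f)

f = {1: 1, 2: 2, 3: 2}          # S -> T: surjective, not injective
g = {1: 2, 2: 3, 3: 4}          # S -> U: injective, not surjective
h = {1: 1, 2: 1, 3: 2}          # S -> U: neither
k = {1: "a", 2: "b", 3: "c"}    # S -> V: both, hence a bijection

assert is_surjective(f, T) and not is_injective(f)
assert is_injective(g) and not is_surjective(g, U)
assert not is_injective(h) and not is_surjective(h, U)
assert is_injective(k) and is_surjective(k, V)
```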
I want to throw out a few more examples of groups before I move deeper into the theory.
First up: Abelian groups. These are more a general class of groups than an example like permutation groups were. They are distinguished by the fact that the composition is “commutative” — it doesn’t matter what order the group elements come in. The composition $ab$ is the same as the composition $ba$.
All the groups I’ve mentioned so far, except for permutations and rotations, are Abelian. It’s common when dealing with an Abelian group to write the composition as “+”, the identity as “0”, and the inverse of $a$ as $-a$. Let’s recap the axioms for an Abelian group in this notation.
- For any $a$, $b$, and $c$: $(a + b) + c = a + (b + c)$
- There is an element 0 so that for any $a$: $0 + a = a = a + 0$
- For every $a$ there is an element $-a$ so that: $a + (-a) = 0 = (-a) + a$
- For any $a$ and $b$: $a + b = b + a$
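As a concrete instance, the integers 0 through 5 under addition modulo 6 form an Abelian group, and all four axioms can be verified by brute force. A Python sketch (the choice of $\mathbb{Z}_6$ is mine):

```python
n = 6
G = range(n)
add = lambda a, b: (a + b) % n   # the composition "+"
neg = lambda a: (-a) % n         # the inverse "-a"

for a in G:
    assert add(0, a) == a == add(a, 0)                     # identity
    assert add(a, neg(a)) == 0 == add(neg(a), a)           # inverses
    for b in G:
        assert add(a, b) == add(b, a)                      # commutativity
        for c in G:
            assert add(add(a, b), c) == add(a, add(b, c))  # associativity
```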
Abelian groups are really fantastically important. Many later algebraic structures start with an Abelian group and add structure to it, just as a group starts with a set and adds structure to it. We’ll see many examples of these later.
The other thing I want to mention is a free group. As the name might imply, this is a group with absolutely no restrictions other than the group axioms. We start by picking some basic pieces, sometimes called “generators” or “letters”, and then just start writing out whatever “words” the rules of group theory allow.
Let’s start with the free group on one letter: $F_1$. We definitely have the identity element — written “1” — and we throw in our single letter $x$. We can compose this with itself however many times we like by just writing letters next to each other: $xx$, $xxx$, $xxxx$, and so on. We also need an inverse, $x^{-1}$. We can use $x$ and $x^{-1}$ to build up long words like $xxx^{-1}xxx^{-1}x^{-1}$. But notice that whenever an $x$ and an $x^{-1}$ sit next to each other they cancel. That collapses this long word down to $x$. We see that in $F_1$ all words look like $x^n$, where a positive $n$ means a string of $x$s, a negative $n$ means a string of $x^{-1}$s, and $x^0$ for the identity. We compose just by adding the exponents.
The free group on two letters, $F_2$, gets a lot more complicated. We again start with the identity and throw in letters $x$ and $y$. Now we can build up all sorts of words like $xyx^{-1}xy^{-1}yx$. But now we can’t do anything to pull $x$ and $y$ past each other. Letters only cancel their inverses when they’re right next to each other, so this word only collapses to $xyx$. That’s the best we can do. Free groups on more generators are pretty much the same, but with more basic symbols.
Composition of words $w_1$ and $w_2$ just writes them one after another, cancelling whatever it can in the middle. For example, in $F_2$ let’s say $w_1 = xyx^{-1}$ and $w_2 = xy$. We write them together (in order!) as $xyx^{-1}xy$ and cancel inverses where we can to get $xy^2$.
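The cancellation rule is easy to mechanize with a stack. Here’s a Python sketch for $F_2$, writing $x^{-1}$ as "X" and $y^{-1}$ as "Y" (an ad hoc encoding of my own):

```python
def reduce_word(w):
    # Scan left to right; a letter cancels its inverse whenever they
    # end up adjacent, and the stack handles chains of cancellations.
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def compose(w1, w2):
    # Write the words one after another, cancelling in the middle.
    return reduce_word(w1 + w2)

assert reduce_word("xxXyYx") == "xx"
assert compose("xyX", "xy") == "xyy"   # i.e. x y^2
```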
Free groups seem hideously complicated at first, but they aren’t so bad once you get used to them. They’re also tremendously useful, as we’ll soon see. They’re the primordial groups, with absolutely nothing extra beyond the bare minimum of what’s needed to make a group.
Some points to ponder
- What is the inverse of a word in a free group?
- What should the free Abelian group on $n$ letters look like?
And now, the solution to the Airplane Seat problem.
Obviously there’s a 1% probability that the first passenger chooses his own seat, in which case the last one automatically gets his. Then there’s a 1% chance that the first chooses the second passenger’s seat, who then has a 1/99 chance of choosing her own, or a 100% chance of this all getting way too complicated.
So let’s look for something not quite so obvious. There are various ways of solving the problem, but the one I’m interested in uses the fact that what we’re really looking at is a permutation!
Each way the seating can work out corresponds to a cycle $(1\,a_1\,a_2\,\cdots\,a_k)$: passenger 1 takes seat $a_1$, the displaced passenger $a_1$ takes seat $a_2$, and so on, until passenger $a_k$ takes seat 1, while everyone outside the cycle sits in his own seat. This also has the property that the sequence of the $a_i$ only increases. We need the probability that $a_k$ is not 100.
Let’s say that it is. Then we have the cycle $(1\,a_1\,\cdots\,a_{k-1}\,100)$. We can stick the transposition $(1\;100)$ after it to give $(1\,a_1\,\cdots\,a_{k-1})$. This is a valid cycle that doesn’t end with 100.
What if we have a cycle that doesn’t end with 100? If we write it as $(1\,a_1\,\cdots\,a_k)$ we can again stick the transposition $(1\;100)$ after it. Now this gives $(1\,a_1\,\cdots\,a_k\,100)$, which does end with 100.
Even better, if we start with any cycle, ending with 100 or not, and do both transformations we get back the exact same cycle we started with. We say that the two transformations are inverses of each other, which just means that each undoes what the other does. A fancier word for an invertible function between two sets is “bijection”.
So we’ve split all valid sequences into ones where 100 gets his own seat and ones where he doesn’t, and shown that there’s a bijection between them. That means that the two sets have the same size! So picking a sequence at random we find that in exactly half the cases 100 will get his own seat.
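If you’d like an empirical check of that 1/2, here’s a quick Monte Carlo simulation in Python (entirely my own sketch). The observed frequency hovers right around 0.5:

```python
import random

def last_gets_own_seat(n=100):
    # Simulate one boarding of the plane; passengers and seats are 1..n.
    seats = set(range(1, n + 1))
    seats.remove(random.choice(sorted(seats)))  # passenger 1 picks at random
    for p in range(2, n):
        if p in seats:
            seats.remove(p)                     # own seat is still empty
        else:
            seats.remove(random.choice(sorted(seats)))
    return n in seats   # exactly one seat is left for passenger n

trials = 20000
hits = sum(last_gets_own_seat() for _ in range(trials))
print(hits / trials)   # about 0.5
```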
Like Euclid used to say, Q.E.D.
Tonight a new paper showed up on the arXiv: “A polynomial invariant of finite quandles”. Of course, this leads to an invariant of knots and links. As soon as I get my current plate cleared I’ll be turning to this one to see how it behaves under quandle morphisms. The more invariants I bring into my program, the better.
I thought I’d drop a little problem to think about for a while. If you’ve seen it before, please don’t ruin it for those who haven’t by posting a solution in the comments.
Let’s imagine an airplane with 100 seats. Instead of the cattle-call boarding procedures we know, passengers board the plane one at a time. On this flight, the first passenger has lost his ticket, so he just picks a seat at random to sit in. When the second passenger boards, if her own seat is available she sits in it. If not, she picks a random empty seat. This repeats until the plane fills. What is the probability that the 100th passenger gets to sit in his own seat?
Assume that all the random choices are made with equal probability of picking any currently-empty seat.
UPDATE: Since I noticed later that the convention that makes things simplest in group actions is to compose permutations right-to-left, I’ve gone back and tweaked that here.
Today I want to go into an example of a group in a lot more detail. It looks pretty long, so I’ll just put the whole thing behind the jump.
UPDATE: I originally phrased my closing sentence poorly, and have chosen to revise it. The original should remain, struck through.
I’d been hoping to write about today’s talk in the department’s graduate student seminar, but there wasn’t a speaker lined up. So I guess I’ll have to get my rant on again.
I spoke recently with a young graduate student in a math department who’d come from undergraduate study in physics. I like to draw connections between my interests and whatever else I come across, so I mentioned John Baez’ work on “higher gauge theory”, which takes the very geometric approach to particle physics known as gauge theory and “categorifies” it. It’s a very interesting program, and I hope to talk more about it here.
The student, however, waved his hands and dismissed all discussion. I pointed out that the paper Baez wrote with Urs Schreiber — who run The n-Category Café along with David Corfield — gives a beautiful explanation of a hitherto ad hoc “fake flatness” condition in other approaches to extending gauge theory. I know from conversations with other mathematicians this student had spoken to that he’s interested in just this sort of extension, so why the complete disinterest? “String theory has already done all this.”
String theory, as many popular articles and books have explained, moves from the basic premise that instead of being little points the fundamental particles are all tiny vibrating “strings” of energy. From this hypothesis flows a vast theoretical model that many people think will ultimately be the “Theory of Everything” uniting all of fundamental physics the way Newton’s theory of gravity united terrestrial and celestial mechanics so long ago.
The operative word here, though, is think. String theory provides a sweeping vision of physics and a rich loam of mathematical conjectures, but the vision remains on the horizon and the conjectures stand unproven. “No matter,” say the string theorists, “It’s obvious that the math works this way because this is how it should work.” Except mathematics doesn’t work with “should”.
Here is the metaphor I return to again and again in conversation: a mathematician and a physicist walk along a sidewalk and come upon a crack in the pavement. The physicist notes the crack is only a millimeter wide and steps over it without a second thought. The mathematician (possibly afflicted with ADHD) decides to stop and peer into the crack, finding it to be many kilometers deep. The physicist rushes ahead and makes many grand predictions which turn out to be true with startling accuracy while the mathematician lags behind, carefully plastering up the crack.
This is all well and good, and the accurate predictions of physical theories like quantum electrodynamics (“QED”) have more than justified pushing ahead while the foundations are shored up from behind. String theory, though, suffers from the fact that it makes no new predictions, and has even ventured into an intellectual territory known as “the landscape” which leaves so many choices that no real predictions are possible. It’s like arguing with an astrologer. If one choice is refuted there are literally millions of infinities of alternate choices for them to fall back on. All that’s left is a fertile ground of mathematical conjecture.
So here’s where the problem really begins: string theory has grown like kudzu, and is choking the life out of fundamental physics. The more popular it gets the harder it is to get funding for alternative theories like loop quantum gravity (so much for the government’s commitment to presenting all sides of a scientific debate). The harder it is to get funding, the less likely it is that new physicists will go into those areas. The fewer new physicists entering alternate fields the more popular string theory becomes. And around and around and around in a vicious circle. The feedback loop is even used by some string theorists as a kind of evidence! “See what a large fraction of the papers on the high-energy physics theory arXiv deal with string theory? Obviously it’s correct if so many people believe it!”
And here’s where it really gets sinister: since all that’s left in string theory is a fertile ground of mathematical conjectures, some string theory zealots are turning their eyes on mathematics to invade. “Peer review is so slow and holds up progress. All papers should just be put on the math arXiv and forget the journals.” Who cares about being able to check someone’s work? Who cares about the well-known effect in journalism that few people who read today’s big story ever hear about tomorrow’s retraction?
One of the scariest discussions I ever had with a young, gung-ho string theorist was about the Atiyah-Singer Index Theorem. This is possibly one of the most beautiful results of the 20th century, and one of the more complicated to prove. About 25 years after it was first proved a “string theory proof” emerged, which was much shorter and established most cases physicists care about. I’ve seen such a “proof” of ASIT and while it did fit onto three pages it was riddled with holes from a mathematician’s perspective. In the discussion I was using this contrast as an example of how mathematical results suggested by string theory could be much more difficult to rigorously pin down. His response chilled me: “maybe it’s time mathematics started accepting string theory proofs as valid.”
String theory is a fascinating model and a wellspring of mathematical conjecture, but as it stands it is not physics. Those who rabidly hold the party line that string theory is a — is the — model of fundamental physics use that as an axiom and will not hear dissent. I’ve seen enough electronic blood shed at Peter Woit’s Not Even Wrong and his main vilifier’s site (to which I will not link) to know that there are many string theorists who have completely drained their Kool-Aid and become fanatics in the service of this ideology.
And so string theory ~~has gone~~ risks going from a possible model of fundamental particles to a close-minded fundamentalist physics.