From the module perspective, we’re led back to the concept of a group action. This is like a $G$-module, but “discrete”. Let’s just write down the axioms for easy reference: we’ve got a set $S$ and a function $(g,s)\mapsto g\cdot s$ from $G\times S$ to $S$ such that
- preserves the identity: $e\cdot s = s$.
- preserves the group operation: $g\cdot(h\cdot s) = (gh)\cdot s$.
Notice how this looks almost exactly like the axioms for a $G$-module, except since $S$ is just a set we don’t have any sense of “linearity”.
Now, from a group action on a finite set we can get a finite-dimensional representation. We let $V = \mathbb{C}[S]$ — the vector space defined to have $S$ as a basis. That is, vectors in $V$ are of the form

$$\sum_{s\in S} c_s s$$

for some complex coefficients $c_s$. We get a $G$-module by extending the action to a function $G\times V\to V$ which is linear in the vector variable. We already know how it behaves on the basis vectors $s$ for $s\in S$, and the linear extension is uniquely defined. We call $V$ the “permutation representation” associated to $S$, and the elements $s$ for $s\in S$ we call the “standard basis”.
As an example, the group $S_n$ is defined from the very beginning by the fact that it acts on the set $\{1,\dots,n\}$ by shuffling the numbers around. And so we get a representation $\rho$ from this action, which we call the “defining representation”. By definition, it has dimension $n$, since it has a basis given by $\{1,\dots,n\}$. To be even more explicit, let me write out the defining matrix representation for $S_3$. Technically, going from an abstract representation to a matrix representation requires not just a basis, but an ordered basis, but the order should be pretty clear in this case. And so, with no further ado:

$$\rho(e)=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\qquad\rho\big((1\,2)\big)=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\qquad\rho\big((1\,3)\big)=\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}$$

$$\rho\big((2\,3)\big)=\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\qquad\rho\big((1\,2\,3)\big)=\begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}\qquad\rho\big((1\,3\,2)\big)=\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}$$
To see how this works, note that the permutation $(1\,2)$ sends $1$ to $2$ and $2$ to $1$, while leaving $3$ fixed. Similarly, we find that $(1\,2\,3)$ sends $1$ to $2$, $2$ to $3$, and $3$ to $1$. That is:

$$\rho\big((1\,2)\big)\begin{pmatrix}1\\0\\0\end{pmatrix}=\begin{pmatrix}0\\1\\0\end{pmatrix}\qquad\rho\big((1\,2\,3)\big)\begin{pmatrix}0\\1\\0\end{pmatrix}=\begin{pmatrix}0\\0\\1\end{pmatrix}$$
We also see that composition of permutations turns into matrix multiplication. For example, reading the composition from right to left, $(1\,2)(1\,3) = (1\,3\,2)$. In terms of the matrices we calculate:

$$\rho\big((1\,2)\big)\rho\big((1\,3)\big)=\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}0&0&1\\0&1&0\\1&0&0\end{pmatrix}=\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}=\rho\big((1\,3\,2)\big)$$
You can check for yourself all the other cases that you care to.
Notice that in general the matrices are indexed by two elements of $\{1,\dots,n\}$, and the matrix element $\rho(g)_{ij}$ — the one in the $i$th row and $j$th column — is $\delta_{i,g(j)}$. That is, it’s $1$ if $g(j)=i$ — if the action of $g$ sends $j$ to $i$ — and $0$ otherwise. This guarantees that every entry will be either $0$ or $1$, and that each row and each column will have exactly one $1$. Such a matrix we call a “permutation matrix”, and we see that the matrices that occur in permutation representations are permutation matrices.
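This correspondence is easy to check mechanically. Here’s a small sketch (my own code, not from the post) that builds the matrix with a $1$ in row $g(j)$, column $j$, using $0$-based labels, and verifies both the permutation-matrix property and that composition of permutations becomes matrix multiplication:

```python
from itertools import permutations

# A permutation p is a tuple with p[j] = image of j (0-based labels).
def perm_matrix(p):
    """Matrix of p in the defining representation: 1 in row p[j], column j."""
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(p, q):          # (pq)(j) = p(q(j)): apply q first, then p
    return tuple(p[q[j]] for j in range(len(q)))

for p in permutations(range(3)):
    m = perm_matrix(p)
    # every row and every column has exactly one 1
    assert all(sum(row) == 1 for row in m)
    assert all(sum(m[i][j] for i in range(3)) == 1 for j in range(3))
    for q in permutations(range(3)):
        # composition of permutations turns into matrix multiplication
        assert perm_matrix(compose(p, q)) == mat_mul(perm_matrix(p), perm_matrix(q))
```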
As with so many of the objects we study, root systems form a category. If $\Phi$ is a root system in the inner product space $V$, and $\Phi'$ is a root system in the inner product space $V'$, then a morphism from $\Phi$ to $\Phi'$ will be a linear map $\tau:V\to V'$ so that if $\alpha\in\Phi$ then $\tau(\alpha)\in\Phi'$. Further, we’ll require that $\tau(\sigma_\alpha(\beta)) = \sigma_{\tau(\alpha)}(\tau(\beta))$ for all roots $\alpha,\beta\in\Phi$.
Immediately from this, we find that the Weyl group $W$ of $\Phi$ not only acts on $\Phi$ itself, but on $\Phi'$. Indeed, $\tau$ induces a homomorphism $W\to W'$ that sends the generator $\sigma_\alpha$ to the generator $\sigma_{\tau(\alpha)}$. Even better, $\tau$ actually intertwines these actions! That is, $\sigma_{\tau(\alpha)}\circ\tau = \tau\circ\sigma_\alpha$. Indeed, we can calculate

$$\sigma_{\tau(\alpha)}(\tau(\beta)) = \tau(\sigma_\alpha(\beta))$$

for every root $\beta$, which is exactly the condition we required of a morphism.
In particular, we can say that two root systems are isomorphic if there’s an invertible linear transformation $\tau$ sending $\Phi$ to $\Phi'$, and whose inverse sends $\Phi'$ back onto $\Phi$. In this case, the intertwining property can be written as an isomorphism of Weyl groups sending $\sigma\in W$ to $\tau\circ\sigma\circ\tau^{-1}\in W'$.
Even more particularly, an automorphism of $\Phi$ is an isomorphism from $\Phi$ to itself. That is, it’s an invertible linear transformation from $V$ to itself that leaves $\Phi$ invariant. And so we see that the Weyl group $W$ itself is a subgroup of $\mathrm{Aut}(\Phi)$. In fact, the Weyl group is a normal subgroup of the automorphism group. That is, given an element $\sigma$ of $W$ and an automorphism $\tau$ of $\Phi$, the conjugation $\tau\circ\sigma\circ\tau^{-1}$ is again in the Weyl group. And this is exactly what we proved last time!
We can now revise our goal: we want to classify all possible root systems up to isomorphism.
Let’s take a root system $\Phi$ in the inner product space $V$. Each vector $\alpha$ in $\Phi$ gives rise to a reflection $\sigma_\alpha$ in $O(V)$, the group of transformations preserving the inner product on $V$. So what sorts of transformations can we build up from these reflections? The subgroup of $O(V)$ generated by the reflections $\sigma_\alpha$ for all $\alpha\in\Phi$ is called the Weyl group $W$ of the root system. It’s pronounced “vile”, but we don’t mean that as any sort of value judgement.
Anyway, we can also realize $W$ as a subgroup of the group of permutations on the vectors in $\Phi$. Indeed, by definition each $\sigma_\alpha$ sends each vector in $\Phi$ back to another vector in $\Phi$, and so shuffles them around. So if $\Phi$ has $k$ vectors, the Weyl group can be realized as a subgroup of the symmetric group $S_k$.

In particular, $W$ is a finite group, as a subgroup of another finite group. In fact, we even know that the number of transformations in $W$ divides $k!$. It may well (and usually does) have elements which are not of the form $\sigma_\alpha$, but there are still only a finite number of them.
The first thing we want to take note of is how certain transformations in $O(V)$ act on $W$ by conjugation. Specifically, if $\tau\in O(V)$ leaves $\Phi$ invariant, then it induces an automorphism on $W$ that sends the generator $\sigma_\alpha$ to $\tau\sigma_\alpha\tau^{-1}$ — which (it turns out) is the generator $\sigma_{\tau(\alpha)}$ — for all $\alpha\in\Phi$. Further, it turns out that $\frac{2(\tau(\beta),\tau(\alpha))}{(\tau(\alpha),\tau(\alpha))} = \frac{2(\beta,\alpha)}{(\alpha,\alpha)}$ for all $\alpha,\beta\in\Phi$.
Indeed, we can calculate

$$\left(\tau\sigma_\alpha\tau^{-1}\right)(\tau(\beta)) = \tau(\sigma_\alpha(\beta)) = \tau\!\left(\beta - \frac{2(\beta,\alpha)}{(\alpha,\alpha)}\alpha\right) = \tau(\beta) - \frac{2(\beta,\alpha)}{(\alpha,\alpha)}\tau(\alpha)$$

Now, every vector in $\Phi$ is of the form $\tau(\beta)$ for some $\beta\in\Phi$, and so $\tau\sigma_\alpha\tau^{-1}$ sends it to the vector $\tau(\sigma_\alpha(\beta))$, which is again in $\Phi$, so it leaves $\Phi$ invariant. The transformation also fixes every vector in the hyperplane $\tau(P_\alpha)$, for if $\beta$ is orthogonal to $\alpha$, then the above formula shows that $\tau(\beta)$ is left unchanged by the transformation. Finally, it sends $\tau(\alpha)$ to $-\tau(\alpha)$.
This is all the data we need to invoke our lemma, and conclude that $\tau\sigma_\alpha\tau^{-1}$ is actually equal to $\sigma_{\tau(\alpha)}$. Specifying the action on the generators of $W$ is enough to determine the whole automorphism. Of course, we can also just let $\tau$ act on each element of $W$ by conjugation, but it’s useful to know that the generating reflections are sent to each other exactly as their corresponding vectors are.
Now we can calculate from the definition of a reflection

$$\sigma_{\tau(\alpha)}(\tau(\beta)) = \tau(\beta) - \frac{2(\tau(\beta),\tau(\alpha))}{(\tau(\alpha),\tau(\alpha))}\tau(\alpha)$$

Comparing this with the equation above, we find that $\frac{2(\tau(\beta),\tau(\alpha))}{(\tau(\alpha),\tau(\alpha))} = \frac{2(\beta,\alpha)}{(\alpha,\alpha)}$, as asserted.
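Both claims can be tested numerically on a concrete root system. The sketch below (my coordinates and code, not from the post) uses the six vectors of the $A_2$ system, realizes each reflection as a permutation of the roots, generates the Weyl group, and checks that conjugating $\sigma_\alpha$ by a reflection $\tau$ gives $\sigma_{\tau(\alpha)}$:

```python
# The six roots of A2: unit vectors at 60-degree spacing in the plane.
alpha = (1.0, 0.0)
beta = (-0.5, 3 ** 0.5 / 2)
roots = [alpha, beta, (0.5, 3 ** 0.5 / 2),
         (-1.0, 0.0), (0.5, -(3 ** 0.5) / 2), (-0.5, -(3 ** 0.5) / 2)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def reflect(a, b):
    """sigma_a(b) = b - (2 (b,a) / (a,a)) a"""
    c = 2 * dot(b, a) / dot(a, a)
    return (b[0] - c * a[0], b[1] - c * a[1])

def key(v):                       # round away floating-point fuzz
    return (round(v[0], 9), round(v[1], 9))

idx = {key(v): i for i, v in enumerate(roots)}

def as_perm(a):                   # sigma_a as a permutation of the six roots
    return tuple(idx[key(reflect(a, b))] for b in roots)

def compose(p, q):                # apply q first, then p
    return tuple(p[q[i]] for i in range(6))

# Close the generators under composition: the Weyl group of A2 has 6 elements.
group = {as_perm(a) for a in roots}
while True:
    new = {compose(p, q) for p in group for q in group} - group
    if not new:
        break
    group |= new
assert len(group) == 6

# tau sigma_a tau^{-1} = sigma_{tau(a)}, taking tau = sigma_c (so tau^{-1} = tau):
for c in roots:
    t = as_perm(c)
    for a in roots:
        assert compose(t, compose(as_perm(a), t)) == as_perm(reflect(c, a))
```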
A groupoid is the natural extension of a group, considered as a category: a groupoid is a category in which every morphism is invertible. If a groupoid has only one object, it’s a group.
These structures are the central players in John Baez’ “Tale of Groupoidification”, which started in this issue of This Week’s Finds. They’re also a rather popular subject over at The n-Category Café.
Unfortunately, one of the best examples of a groupoid is still beyond us at this point, but there are others. One nice place groupoids come up is from group actions. In fact, Baez is of the opinion that the groupoid viewpoint is actually more natural than the group action viewpoint, but I think it would be difficult to start teaching from that end.
So how does this work? Let’s say we have a group $G$ and a left action of $G$ on a set $X$. Now let’s build a groupoid from this information. The high-level view is that every single move from one state (element of $X$) to another will be a morphism, and we can compose morphisms if the outgoing state of the first is the incoming state of the second. Now for the details.
We take the objects of our groupoid to be the set $X$, and the morphisms to be the set $G\times X$. The source of a morphism $(g,x)$ will be $x$, and the target will be $gx$. For each object $x$ we have the identity morphism $(e,x)$, where $e$ is the identity of $G$. If we have two morphisms $(g,x)$ and $(h,y)$, and $y = gx$ (so the pair are composable), then we have the composite $(hg,x)$. Note that this still has source $x$, and its target is $(hg)x = h(gx) = hy$, as it should be.

Well this is enough to show we have a category, but now we need to show that each morphism has an inverse. If we have a morphism $(g,x)$ going from $x$ to $gx$, we’ll need the inverse to go from $gx$ to $x$. The clear choice is $(g^{-1},gx)$, and indeed we find that $(g^{-1},gx)\circ(g,x) = (g^{-1}g,x) = (e,x)$, and $(g,x)\circ(g^{-1},gx) = (e,gx)$. Thus we have a groupoid.
This groupoid we call the “action groupoid” or the “weak quotient”, and write $X/\!/G$. Remember that the “real” quotient $X/G$ is the set of orbits of $G$ on $X$ — we consider two elements of $X$ to be “the same” if they are related by an element of $G$. Here we don’t consider them to be the same; we just make them isomorphic. Since we’ve replaced “the same” by “isomorphic”, we call this a “weak” quotient.
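The axioms above are concrete enough to check by brute force. Here’s a sketch (all names mine, not from the post) of the action groupoid for $S_3$ acting on $\{0,1,2\}$, verifying that identities, composition, and inverses behave as a groupoid demands:

```python
from itertools import permutations

# G = S_3 acting on X = {0, 1, 2}; a permutation g is a tuple, g[i] = image of i.
G = list(permutations(range(3)))
X = [0, 1, 2]
e = (0, 1, 2)

def mult(g, h):                    # group operation: apply h first, then g
    return tuple(g[h[i]] for i in range(3))

# Morphisms of the action groupoid are pairs (g, x), from x to g(x).
morphisms = [(g, x) for g in G for x in X]

def source(m):
    return m[1]

def target(m):
    return m[0][m[1]]

def compose(m2, m1):               # m1 first, then m2; needs target(m1) == source(m2)
    assert target(m1) == source(m2)
    return (mult(m2[0], m1[0]), m1[1])

def inverse(g, x):                 # the inverse goes from g(x) back to x
    g_inv = tuple(g.index(i) for i in range(3))
    return (g_inv, g[x])

for m in morphisms:
    g, x = m
    assert compose(m, (e, x)) == m and compose((e, target(m)), m) == m
    assert compose(inverse(g, x), m) == (e, x)
    assert compose(m, inverse(g, x)) == (e, target(m))
```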
Now, an action groupoid is not just any groupoid. For one thing, we’ve got a functor $F:X/\!/G\to G$. Just send every single object of $X/\!/G$ to the unique object of $G$, considered as a category. Then send the morphism $(g,x)$ to $g$. We’re sort of folding up the groupoid $X/\!/G$ into the groupoid (with one object) $G$. But here’s the thing: this functor is faithful! Indeed, we see that $\hom(x,y)$ is the set of $(g,x)$ so that $gx = y$. In particular, it corresponds to a subset of the elements of $G$, and the functor acts on this hom set by the inclusion function of this subset into $G$, which is injective. Even though the functor loses a lot of information (like, which state a given morphism started at) it’s still faithful because “faithful” just pays attention to what happens in each hom set, not what happens to the objects.
It turns out we can find a (somewhat) simpler groupoid equivalent to $X/\!/G$ and compatible with this faithful functor. That is, if the functor above is $F:X/\!/G\to G$, we can find a groupoid $\mathcal{H}$ with functors $E:\mathcal{H}\to X/\!/G$ and $P:\mathcal{H}\to G$. The functor $E$ will be an equivalence, $P$ will be faithful, and we’ll have $P = F\circ E$. The really special thing about $\mathcal{H}$ is that it will be a collection of groups with no morphisms between any distinct objects.
For the objects of $\mathcal{H}$, take the set of $G$-orbits of $X$. There will be no morphisms between distinct objects, so we just need to specify a group of morphisms on each object. So, given an orbit $O$ pick a point $x\in O$ and let $\hom(O,O) = G_x$ — the stabilizer of $x$. Of course, different representative points may give different stabilizers, but the stabilizers of any two points in the same orbit are isomorphic. For the functor $P$, just send each object of $\mathcal{H}$ to the single object of $G$ and include each hom set $G_x$ into $G$ as a subgroup, just like we did for $X/\!/G$.

Now we define our equivalence $E$. Since we’ve already picked a representative point in each orbit, let’s just send the object of $\mathcal{H}$ corresponding to that orbit to the object of $X/\!/G$ corresponding to that point. Then we can just send the morphism $g\in G_x$ to the morphism $(g,x)$. Clearly this is fully faithful, and it’s also essentially surjective because every point of $X$ is in some $G$-orbit. So we have an equivalence. Also it should be clear that $P = F\circ E$.
In fact, we can adapt this to any groupoid with a faithful functor to a group $G$. Just replace “in the same orbit” by “have an arrow between them”. Then chop up the objects into equivalence classes, make each one an object of our new groupoid $\mathcal{H}$, and make the morphisms on an object the “stabilizer” of a representative object from its class. It all goes through to give the same sort of factorization.

And then if you push just a little bit harder you can take any discrete groupoid (“discrete” = no morphisms between distinct objects) with a faithful functor to $G$ and puff it up into an action groupoid. In fact, it suffices to show how to do this for a single object, since all the others go the same way.
In this case we have a subgroup $H$ of $G$ and we need to see it as the stabilizer of some action. We use the set $G/H$ of left cosets of $H$ in $G$. This may not be a group (since $H$ may not be normal), but it’s at least a set, and $G$ certainly acts on it by left-multiplication. What subgroup of $G$ fixes the coset $eH = H$? Exactly $H$. So, we puff out $H$ into $G/H$ and take its weak quotient by $G$ to get an action groupoid $(G/H)/\!/G$ equivalent to the group $H$.
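Here’s a quick sketch of that last computation, with $G = S_3$ and the (arbitrarily chosen, $0$-based) subgroup $H = \{e, (0\,1)\}$: under left multiplication on the cosets, the stabilizer of the coset $eH$ is exactly $H$:

```python
from itertools import permutations

G = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]                 # the subgroup {e, (0 1)}, chosen for the example

def mult(g, h):                    # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def coset(g):                      # the left coset gH, as a frozen set
    return frozenset(mult(g, h) for h in H)

cosets = {coset(g) for g in G}
assert len(cosets) == 3            # [G : H] = 6 / 2 = 3

# G acts on the cosets by left multiplication: g . (kH) = (gk)H.
eH = coset(e)
stabilizer = [g for g in G if frozenset(mult(g, k) for k in eH) == eH]
assert sorted(stabilizer) == sorted(H)
```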
Putting this all together we see that any groupoid with a faithful functor to $G$ is equivalent to the action groupoid $X/\!/G$ for some $G$-set $X$. And any action gives an action groupoid with a faithful functor to $G$. The two concepts are essentially the same.
The theory of group actions looks really nice when we translate it into the language of categories. That’s what I plan to do today.
We know that a monoid is essentially the same thing as a category with one object, and a group is a special kind of monoid. So any group $G$ can be considered as a category with one object in which every morphism has an inverse. So what’s a (left) group action? Well, it’s described by choosing a set $S$ for $G$ to act on, and a homomorphism from $G$ to the group of bijections on $S$. But $S$ is an object in the category $\mathbf{Set}$ of sets, and bijections from $S$ to itself are just the invertible morphisms in $\mathbf{Set}$. What we’re describing here is a functor from $G$ to $\mathbf{Set}$! A (left) action of $G$ is an object of the category $\mathbf{Set}^G$ of (covariant) functors from $G$ to $\mathbf{Set}$. We know it’s a category because $G$ has (for our purposes) just a set of morphisms — it’s small.
Now I was adding these parentheticals above, since there’s also a notion of a right $G$-action. This is pretty much the same, but it satisfies $(s\cdot g)\cdot h = s\cdot(gh)$ instead of $g\cdot(h\cdot s) = (gh)\cdot s$. That is, it switches the order of composition. But we know how to do that! A right $G$-action is a contravariant functor — an object of $\mathbf{Set}^{G^{\mathrm{op}}}$.
So these actions form categories. What are the morphisms? Since these are functor categories, they’re natural transformations. Let’s say we have (left) actions $F_1$ and $F_2$, and that $F_1$ and $F_2$ send the single object of $G$ to the sets $S_1$ and $S_2$, respectively. Now, what’s a natural transformation $\phi:F_1\Rightarrow F_2$?

Well, there’s only one object in $G$, so a transformation only has one component: a function $\phi:S_1\to S_2$. For any morphism (group element) $g$ it must satisfy the naturality condition $\phi(g\cdot s) = g\cdot\phi(s)$ for all $s\in S_1$. Note that the dot on the left denotes the action of $G$ on $S_1$, and the one on the right denotes that on $S_2$. So we see that $\phi$ must be a function which “intertwines” the two actions of $G$. That is, it preserves the $G$-action, just like homomorphisms of monoids preserve the composition and so on — a very natural notion of morphism of $G$-actions indeed! As a special case, a natural transformation from $F_1$ to itself is a permutation of the elements of $S_1$ which commutes with the actions of all the elements of $G$.
Since there’s only one object in $G$ we only get one covariant hom functor and one contravariant hom functor. The covariant one gives us the group acting on its set of elements by left-multiplication, and the contravariant one gives the same set with the action of right-multiplication. Now given any other left $G$-action (covariant functor) on a set $S$, Yoneda’s Lemma tells us that the set $S$ is in bijection with the set of functions $\phi:G\to S$ satisfying $\phi(gh) = g\cdot\phi(h)$. In particular, we must have $\phi(g) = \phi(ge) = g\cdot\phi(e)$, where $e$ is the identity of the group, so the image of the identity element specifies the function completely. That is, if we have a left $G$-action on a set $S$ then every element $s\in S$ gives us an intertwining function $\phi_s(g) = g\cdot s$, and every intertwining function sends $e$ to some element of $S$, giving us our bijection. A similar situation holds for right $G$-actions.
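This bijection can be checked by brute force for a small example. The sketch below (my own, with $G = S_3$ acting on $\{0,1,2\}$ by its defining action) enumerates every function $\phi:G\to S$ and keeps the intertwining ones; exactly one survives for each element of $S$:

```python
from itertools import permutations, product

G = list(permutations(range(3)))
e = (0, 1, 2)
S = [0, 1, 2]

def mult(g, h):                    # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def act(g, s):                     # the defining action of S_3 on {0, 1, 2}
    return g[s]

# Enumerate all 3^6 functions phi: G -> S, keeping those with phi(gh) = g . phi(h).
intertwiners = []
for values in product(S, repeat=len(G)):
    phi = dict(zip(G, values))
    if all(phi[mult(g, h)] == act(g, phi[h]) for g in G for h in G):
        intertwiners.append(phi)

# Exactly one intertwiner per element of S, determined by phi(e).
assert len(intertwiners) == len(S)
assert sorted(phi[e] for phi in intertwiners) == S
```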
What about the Yoneda embedding $G\to\mathbf{Set}^{G^{\mathrm{op}}}$? Well as we noted above, this sends the single object of $G$ to the set of elements of $G$ acting on itself on the right (because the target is the category of contravariant functors). But it is a functor, so it sends morphisms of $G$ (group elements) to morphisms of $\mathbf{Set}^{G^{\mathrm{op}}}$ ($G$-intertwining functions). Specifically, a group element $g$ gets sent to a function $L_g:G\to G$ satisfying $L_g(xh) = L_g(x)h$.
Even better, we can compose these natural transformations! Since the Yoneda embedding is a functor we have $L_g\circ L_h = L_{gh}$. Also, $L_e$ must be the identity function on $G$ — $L_e(x) = x$. Putting this all together, we see that $L_g$ is exactly left-multiplication by $g$: $L_g(x) = L_g(ex) = L_g(e)x = gx$. And indeed multiplying by $g$ on the left commutes with multiplying by another element of $G$ on the right, since group composition is associative! So we have a function sending $G$ to the group of permutations of the elements of $G$, and this function preserves group multiplication and identities. Then it must also preserve inverses, and we have a group homomorphism from $G$ to the group of permutations of its underlying set.
And now for the coup de grâce: the Yoneda embedding is fully faithful! Firstly, this means that the homomorphism $g\mapsto L_g$ is injective — its kernel is trivial — so every group is isomorphic to a subgroup of a permutation group. Secondly, this means that every permutation of the elements of $G$ which commutes with right-multiplication is given by left-multiplication by some element of $G$.
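For a finite group this first consequence is just Cayley’s theorem, and it’s easy to check directly. A sketch with $G = S_3$ (my own code): left multiplication by each $g$ is a permutation of the six group elements, $g\mapsto L_g$ is a homomorphism, and distinct $g$ give distinct permutations:

```python
from itertools import permutations

G = list(permutations(range(3)))

def mult(g, h):                    # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def L(g):
    """Left multiplication by g, as a permutation of the indices of G."""
    return tuple(G.index(mult(g, x)) for x in G)

images = {g: L(g) for g in G}

# Homomorphism: L_{gh} = L_g after L_h.
for g in G:
    for h in G:
        assert images[mult(g, h)] == tuple(images[g][i] for i in images[h])

# Injective: distinct elements give distinct permutations, so S_3 embeds in S_6.
assert len(set(images.values())) == len(G)
```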
As a coda, we’re always more interested in representable functors than in represented functors, since we only care about things up to isomorphism. It turns out that there’s a special name for representable functors in this setup: torsors. I’ll be mentioning these again, to be sure, but a basic definition is that a -torsor is a (left) -action on a set, which is isomorphic to the action of on itself by left-multiplication. That link will take you to an excellent explanation (with great examples) by John Baez.
There’s another thing I should have mentioned before. When a group $G$ acts on a set $S$, there is a bijection between the orbit of a point $x$ and the set of cosets of $G_x$ in $G$. In fact, $g\cdot x = h\cdot x$ if and only if $h^{-1}g\cdot x = x$, if and only if $h^{-1}g$ is in $G_x$, if and only if $gG_x = hG_x$. This is the bijection we need.
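Here’s a sketch checking this bijection for the defining action of $S_3$ on $\{0,1,2\}$ (my example): $g\cdot x = h\cdot x$ exactly when $h^{-1}g$ lies in the stabilizer, and the orbit size times the stabilizer size is the order of the group:

```python
from itertools import permutations

G = list(permutations(range(3)))

def mult(g, h):                    # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    return tuple(g.index(i) for i in range(3))

x = 0
orbit = {g[x] for g in G}          # images of x under the defining action
stab = [g for g in G if g[x] == x] # the stabilizer G_x

for g in G:
    for h in G:
        same_image = (g[x] == h[x])
        same_coset = (mult(inv(h), g) in stab)   # h^{-1} g in G_x
        assert same_image == same_coset

# orbit-stabilizer: |orbit of x| = [G : G_x]
assert len(orbit) * len(stab) == len(G)
```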
This has a few immediate corollaries. Yesterday, I mentioned the normalizer $N_G(H)$ of a subgroup $H$. When a group acts on itself by conjugation we call the isotropy group of an element $x$ of $G$ the “centralizer” $C_G(x)$ of $x$ in $G$. This gives us the following special cases of the above theorem:
- The number of elements in the conjugacy class of $x$ in $G$ is the number of cosets of $C_G(x)$ in $G$.
- The number of subgroups conjugate to $H$ in $G$ is the number of cosets of $N_G(H)$ in $G$.
In fact, since we’re starting to use this “the number of cosets” phrase a lot it’s time to introduce a bit more notation. When $H$ is a subgroup of a group $G$, the number of cosets of $H$ in $G$ is written $[G:H]$. Note that this doesn’t have to be a finite number, but when $G$ (and thus $H$) is finite, it is equal to the number of elements in $G$ divided by the number in $H$. Also notice that if $H$ is normal, there are $[G:H]$ elements in the quotient group $G/H$.

This is why we could calculate the number of permutations with a given cycle type the way we did: we picked a representative $\sigma$ of the conjugacy class and calculated $[S_n:C_{S_n}(\sigma)]$.

One last application: we call a group action “free” if every element other than the identity has no fixed points. In this case, $G_x$ is always the trivial group, so the number of points in the orbit of $x$ is the number of elements of $G$. We saw such a free action of Rubik’s Group, which is why every orbit of the group in the set of states of the cube has the same size.
Let’s work out how symmetric groups act on themselves by conjugation. As I’m writing I notice that what I said before about composition of permutations is sort of backwards. It’s one of those annoying conventions that doesn’t really change anything, but can still be a bit confusing. From here on when we write permutations in cycle notation we compose by reading the cycles from right to left. That is, $(1\,2)(2\,3) = (1\,2\,3)$. Before I was reading them left to right. The new way behaves more like group actions. The exposition comes after the jump.
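A key fact about this conjugation action (which the exposition works toward) is that conjugating a cycle just relabels its entries: $\sigma(a\,b\,c)\sigma^{-1} = (\sigma(a)\,\sigma(b)\,\sigma(c))$. Here’s a sketch checking this in $S_4$, with $0$-based labels and cycles composed right to left as above:

```python
from itertools import permutations

def mult(g, h):                    # right-to-left: apply h first, then g
    return tuple(g[h[i]] for i in range(len(g)))

def inv(g):
    return tuple(g.index(i) for i in range(len(g)))

def from_cycle(cycle, n):
    """Build a permutation tuple from a single cycle on {0, ..., n-1}."""
    images = list(range(n))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        images[a] = b
    return tuple(images)

n = 4
c = from_cycle([0, 1, 2], n)                   # the cycle (0 1 2)
for sigma in permutations(range(n)):
    # sigma (0 1 2) sigma^{-1} = (sigma(0) sigma(1) sigma(2))
    relabeled = from_cycle([sigma[0], sigma[1], sigma[2]], n)
    assert mult(mult(sigma, c), inv(sigma)) == relabeled
```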
One of the most useful examples of a group acting on a set comes directly from group theory itself. Let $G$ be a group and $H$ be a subgroup of $G$. The group $G$ acts on the set of all subgroups of $G$ as follows.

If $H$ is any subgroup of $G$ and $g$ is any element of $G$, then the set $gHg^{-1}$ of elements of $G$ of the form $ghg^{-1}$ with $h$ in $H$ is another subgroup of $G$. Indeed, if we take two elements $gh_1g^{-1}$ and $gh_2g^{-1}$ of this set, their product is $gh_1g^{-1}gh_2g^{-1} = g(h_1h_2)g^{-1}$, which is again of the proper form since $h_1h_2$ is in $H$. We call this subgroup the conjugation of $H$ by $g$.

Given two elements $g_1$ and $g_2$ of $G$ we can check that $(g_1g_2)H(g_1g_2)^{-1} = g_1\left(g_2Hg_2^{-1}\right)g_1^{-1}$, so conjugating by $g_1g_2$ is the same as conjugating by $g_2$, then by $g_1$. That is, this defines an action of $G$ on the set of all subgroups of $G$.
Even better, $gHg^{-1}$ is not just another subgroup of $G$, it is isomorphic to $H$. In proving that $gHg^{-1}$ is a subgroup we showed that the function sending $h$ to $ghg^{-1}$ is a homomorphism. We can undo it by conjugating by $g^{-1}$, so it’s an isomorphism. We say that two subgroups of $G$ related by a conjugation are conjugate.

The subgroup of $G$ sending $H$ to itself — those $g$ in $G$ so that $gHg^{-1} = H$ — is called the normalizer of $H$ in $G$, written $N_G(H)$. We can verify that $H$ is a normal subgroup of $N_G(H)$, and that $H$ is normal in $G$ exactly when $N_G(H) = G$.
One orbit is particularly interesting to consider: $G$ itself is always sent to itself by conjugation. That is, given an element $g$ of $G$ the homomorphism sending $x$ to $gxg^{-1}$ is an automorphism of $G$. In fact, given any group $G$, the automorphisms of $G$ themselves form a group, called $\mathrm{Aut}(G)$. Conjugation gives us a homomorphism $c$ from $G$ to $\mathrm{Aut}(G)$: given an element $g$, $c(g)$ is the automorphism of conjugation by $g$.

We call automorphisms arising in this way “inner automorphisms”. The group $\mathrm{Inn}(G)$ of inner automorphisms on $G$ is the image of $G$ in $\mathrm{Aut}(G)$. If $g$ is an element of $G$ and $\alpha$ is any automorphism of $G$, then $\alpha\circ c(g)\circ\alpha^{-1}$ is the automorphism sending $x$ in $G$ to

$$\alpha\left(g\,\alpha^{-1}(x)\,g^{-1}\right) = \alpha(g)\,x\,\alpha(g)^{-1}$$

which is just conjugation of $x$ by $\alpha(g)$. This proves that $\mathrm{Inn}(G)$ is normal in $\mathrm{Aut}(G)$. The quotient $\mathrm{Aut}(G)/\mathrm{Inn}(G)$ is the group of outer automorphisms $\mathrm{Out}(G)$.

The kernel of $c$ is the set of elements $g$ so that $gxg^{-1} = x$ for all $x$ in $G$. That is, for any $x$ we have $gx = xg$, so the kernel of $c$ is the subgroup of $G$ consisting of elements that commute with every element of $G$. We call this subgroup the center of $G$, written $Z(G)$.
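These definitions are all concretely computable for a small group. A sketch with $G = S_3$ (my code): the center turns out to be trivial, so the conjugation homomorphism $c$ is injective and $\mathrm{Inn}(S_3)$ has six elements:

```python
from itertools import permutations

G = list(permutations(range(3)))

def mult(g, h):                    # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    return tuple(g.index(i) for i in range(3))

def c(g):
    """The inner automorphism x -> g x g^{-1}, tabulated over all of G."""
    return tuple(mult(mult(g, x), inv(g)) for x in G)

# The center: elements commuting with everything. For S_3, only the identity.
center = [g for g in G if all(mult(g, x) == mult(x, g) for x in G)]
assert center == [(0, 1, 2)]

# The image of c: since Z(S_3) is trivial, Inn(S_3) has |G| = 6 elements.
inner = {c(g) for g in G}
assert len(inner) == 6
```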
Now, consider the group $S_n$ of permutations on $n$ letters. Determine how this group acts on itself by conjugation. Write out some conjugations in cycle notation to get an idea of what the answer should be.
Okay, now we’ve got all the setup for one big use of group theory.
Most mathematical structures come with some notion of symmetries. We can rotate a regular $n$-sided polygon by $\frac{1}{n}$ of a turn, or we can flip it over. We can rearrange the elements of a set. We can apply an automorphism of a group. The common thread in all these is a collection of reversible transformations. Performing one transformation and then another is certainly also a transformation, so we can use this as a composition. The symmetries of a given structure form a group!

What if we paint one side of the above $n$-gon black and the other white, so flipping it over definitely changes it? Then we can only rotate. The rotations are the collection of symmetries that preserve the extra structure of which side is which, and they form a subgroup of the group of all symmetries of the $n$-gon. The symmetries of a structure preserving some extra structure form a subgroup!
As far as we’re concerned right now, mathematical structures are all built on sets. So the fundamental notion of symmetry is rearranging the elements of a set. Given a set $S$, the set $\mathrm{Sym}(S)$ of all bijections from $S$ to itself is a group. We’ve actually seen a lot of these before. If $S$ is a finite set with $n$ elements, $\mathrm{Sym}(S)$ is the symmetric group $S_n$ on $n$ letters.

Now, a group action of $G$ on a set $S$ is simply this: a homomorphism $\phi$ from $G$ to $\mathrm{Sym}(S)$. That is, for each element $g$ in $G$ there is a bijection $\phi(g)$ of $S$, and $\phi(g_1g_2) = \phi(g_1)\circ\phi(g_2)$.

It’s important to note what seems like a switch here. It’s really due to the notation, but can easily seem confusing. What’s written on the right is “do the permutation corresponding to $g_2$, then the one corresponding to $g_1$”. So we have to think of the multiplication in $G$ as “first do $g_2$, then do $g_1$”.

In what follows I’ll often write $\phi(g)(s)$ as $g\cdot s$. The homomorphism property then reads

$$(g_1g_2)\cdot s = g_1\cdot(g_2\cdot s)$$
I’ll throw out a few definitions now, and follow up with examples in later posts.
We can slice up $S$ into subsets so that if $s_1$ and $s_2$ are in the same subset, $s_2 = g\cdot s_1$ for some $g\in G$, and not if they’re not in the same subset. In fact, this is rather like how we sliced up a group itself into cosets of a subgroup. We call these slices the “orbits” of the action.
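Computing these slices is a straightforward closure. Here’s a sketch (my own example: the cyclic group of rotations inside $S_4$, acting on ordered pairs) that carves the set into orbits and checks they partition it:

```python
# The cyclic group C4 generated by the 4-cycle i -> i + 1 mod 4.
r = (1, 2, 3, 0)
group = [(0, 1, 2, 3)]
while True:
    nxt = tuple(r[i] for i in group[-1])       # multiply the last element by r
    if nxt == group[0]:
        break
    group.append(nxt)

S = [(i, j) for i in range(4) for j in range(4)]   # the set acted on: ordered pairs

def act(g, pair):
    return (g[pair[0]], g[pair[1]])

orbits = []
seen = set()
for s in S:
    if s not in seen:
        orbit = {s}
        frontier = {s}
        while frontier:                        # close s up under the action
            frontier = {act(g, t) for g in group for t in frontier} - orbit
            orbit |= frontier
        orbits.append(orbit)
        seen |= orbit

# The orbits partition S: every element lands in exactly one slice.
assert sum(len(o) for o in orbits) == len(S)
assert len(orbits) == 4                        # 16 pairs, 4 per orbit
```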
As an important special case of the principle that fixing additional structure induces subgroups, consider the “extra structure” of one identified point. We’re given an element $s$ of $S$, and want to consider those transformations $g$ in $G$ which send $s$ to itself: $g\cdot s = s$. Verify that this forms a subgroup of $G$. We call it the “isotropy group” or the “stabilizer” of $s$, and write it $G_s$.

I’ll leave you to ponder this: if $s_1$ and $s_2$ are in the same $G$-orbit, show that $G_{s_1}$ and $G_{s_2}$ are isomorphic.