As I said before, if we take the free commutative monoid on $n$ generators, then build the semigroup ring from that, the result is the ring of polynomials in $n$ variables. I hinted at a noncommutative analogue, which today I’ll construct from the other side.
Instead of starting with a set of generators and getting a monoid, let’s start by building the free abelian group $\mathbb{Z}^n$. This consists of ordered $n$-tuples of integers, and we add them component by component. We can pick out the generators $e_i$, where the $1$ shows up in slot $i$. Then every element can be written $\sum_{i=1}^na_ie_i$, where $a_i$ is the $i$th entry in the $n$-tuple form of the element.
So how do we build the tensor product $\mathbb{Z}^n\otimes\mathbb{Z}^n$? First we take all pairs $(a,b)$ with $a$ and $b$ in $\mathbb{Z}^n$, and use them to generate a free abelian group. Then we impose the linearity relations $(a+a')\otimes b=a\otimes b+a'\otimes b$ and $a\otimes(b+b')=a\otimes b+a\otimes b'$. What does that mean here? Well, for one thing we can apply it to expand any pair in terms of the generators:

$$\left(\sum\limits_{i=1}^na_ie_i\right)\otimes\left(\sum\limits_{j=1}^nb_je_j\right)=\sum\limits_{i=1}^n\sum\limits_{j=1}^na_ib_j(e_i\otimes e_j)$$

So we could just as well write the tensor product as the group generated by the elements $e_i\otimes e_j$.
This same argument goes through as we tensor in more and more copies of $\mathbb{Z}^n$. The $k$th tensor power $(\mathbb{Z}^n)^{\otimes k}$ is the free abelian group generated by the elements $e_{i_1}\otimes e_{i_2}\otimes\cdots\otimes e_{i_k}$, where each index runs from $1$ to $n$.
Now we take all of these tensor powers and throw them together. We get formal linear combinations

$$\sum\limits_k\sum\limits_{i_1,\dots,i_k}c_{i_1\cdots i_k}\,e_{i_1}\otimes\cdots\otimes e_{i_k}$$

where all but finitely many of the “coefficients” $c_{i_1\cdots i_k}$ are zero. These look an awful lot like polynomials, don’t they? In fact, if we only had a commutative property saying that $e_i\otimes e_j=e_j\otimes e_i$, then these would be exactly (isomorphic to) the polynomials we came up with last time.
To be explicit about the universal properties, any function from the generators $e_i$ to the underlying abelian group of a ring $R$ with unit has a unique extension to a linear function from $\mathbb{Z}^n$ to $R$. Then this has a unique extension to a ring homomorphism from the ring we’ve just constructed to $R$. From the other side, there is a unique extension of the original function to a monoid homomorphism from the free monoid on $n$ generators to the underlying multiplicative monoid of $R$. Then this has a unique extension to a ring homomorphism from the monoid ring to $R$. Since both rings satisfy this same universal property they must be isomorphic. We commonly write this universal ring as $\mathbb{Z}\langle x_1,x_2,\dots,x_n\rangle$, and call it the ring of noncommutative polynomials in $n$ variables.
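Since monomials here are just words in the generators, the whole construction is easy to model in code. Here is a small Python sketch (my own illustration, with made-up names like `nc_mul`): a noncommutative polynomial is a dictionary from words (tuples of generator names) to integer coefficients, and multiplication concatenates the words.

```python
# A noncommutative polynomial: a dict mapping each word (a tuple of
# generator names, i.e. a basis tensor e_{i_1} ⊗ ... ⊗ e_{i_k}) to its
# integer coefficient. The empty word () plays the role of 1.

def nc_mul(p, q):
    """Multiply noncommutative polynomials: concatenate the words,
    multiply the coefficients, and collect like terms."""
    result = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2  # concatenation = tensor product of monomials
            result[w] = result.get(w, 0) + c1 * c2
    return {w: c for w, c in result.items() if c != 0}

x = {("x",): 1}
y = {("y",): 1}

# (x + y)(x - y) = x⊗x - x⊗y + y⊗x - y⊗y; the cross terms do NOT
# cancel, since x⊗y and y⊗x are different basis elements.
p = {("x",): 1, ("y",): 1}
q = {("x",): 1, ("y",): -1}
print(nc_mul(p, q))
```

In particular `nc_mul(x, y)` and `nc_mul(y, x)` come out different, which is exactly the failure of the commutative property noted above.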
Last week I talked about how to make a ring out of a semigroup by adding an additive structure. Now I want to do the other side. Starting with an abelian group $A$, we’ll build a ring by adding a “free” multiplication.
The main tools will be what I was saying about tensor products and abelian groups. Specifically, we have isomorphisms

$$(A\otimes B)\otimes C\cong A\otimes(B\otimes C)$$

$$\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B\cong\bigoplus\limits_{i\in\mathcal{I}}(A_i\otimes B)$$
The first of these lets us unambiguously talk about “the” tensor product of any finite list of abelian groups. The particular case we’re interested in here is when all of them are the same group. We define the $n$th “tensor power” $T^n(A)=A^{\otimes n}$, with $n$ copies of $A$ tensored together on the right. In the case $n=0$ we define $T^0(A)=\mathbb{Z}$. Then we see that for all natural numbers $m$ and $n$ we have $T^m(A)\otimes T^n(A)\cong T^{m+n}(A)$.
Now we can take all these tensor powers of $A$, indexed by the natural numbers, and form the direct sum

$$T(A)=\bigoplus\limits_{n=0}^\infty T^n(A)$$
I claim that this abelian group carries the structure of a ring. Remember that a multiplication on an abelian group $R$ that distributes over addition is equivalent to a linear function $R\otimes R\to R$. So I want to exhibit such a function for $T(A)$.
This is where the second isomorphism comes in. We consider the tensor product

$$T(A)\otimes T(A)=\left(\bigoplus\limits_{m=0}^\infty T^m(A)\right)\otimes\left(\bigoplus\limits_{n=0}^\infty T^n(A)\right)$$

which is isomorphic to

$$\bigoplus\limits_{m=0}^\infty\bigoplus\limits_{n=0}^\infty\left(T^m(A)\otimes T^n(A)\right)$$

which is isomorphic to

$$\bigoplus\limits_{m=0}^\infty\bigoplus\limits_{n=0}^\infty T^{m+n}(A)$$
Let’s change how we index these. Here we’re direct summing over all pairs of natural numbers, but the abelian group we’re summing only depends on the sum $m+n$. So let’s first index by $k=m+n$:

$$\bigoplus\limits_{k=0}^\infty\bigoplus\limits_{m+n=k}T^k(A)$$
Okay, now we’ve got a single infinite direct sum over $k$, and each term is a direct sum of $k+1$ copies of $T^k(A)$, one for each way of writing $k=m+n$. For each of these finite direct sums we can just add up the elements of $T^k(A)$ from each copy. This gives a linear function $\bigoplus_{m+n=k}T^k(A)\to T^k(A)$. We can apply the right one to each direct summand to get a linear function

$$T(A)\otimes T(A)\to T(A)$$
This is our ring structure.
As usual, there’s a universal property floating around. Any linear function $f$ from an abelian group $A$ to a ring $R$ extends uniquely to a ring homomorphism from $T(A)$ to $R$. Just define $f(a_1\otimes a_2\otimes\cdots\otimes a_n)=f(a_1)f(a_2)\cdots f(a_n)$ and extend linearly to find how $f$ acts on a given direct summand of $T(A)$. This justifies calling $T(A)$ the free ring on the abelian group $A$.
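To make the extension step concrete, here is a toy Python sketch of the universal property (under my own encoding, with $A=\mathbb{Z}^2$ and $R=\mathbb{Z}$): an element of $T(A)$ is a dictionary from words of basis labels to integer coefficients, a linear function on $A$ is given by its values on the generators, and it extends to all of $T(A)$ by the rule just displayed.

```python
# Extend a linear map f: A -> R to a ring homomorphism T(A) -> R by
# sending a_1 ⊗ ... ⊗ a_n to f(a_1)···f(a_n) and extending linearly.

def extend(f, element):
    """f gives the value of the linear map on each basis generator;
    `element` maps each word of basis labels to its coefficient."""
    total = 0
    for word, coeff in element.items():
        value = 1
        for basis_label in word:   # f(a1 ⊗ ... ⊗ an) = f(a1)···f(an)
            value *= f[basis_label]
        total += coeff * value     # extend linearly over the sum
    return total

f = {"e1": 2, "e2": 3}             # a linear map Z^2 -> Z on generators

# 5·(e1⊗e2) - (e2⊗e2⊗e1) + 7·1; the empty word is the unit in T^0(A).
elt = {("e1", "e2"): 5, ("e2", "e2", "e1"): -1, (): 7}
print(extend(f, elt))              # 5·2·3 - 3·3·2 + 7 = 19
```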
A few weeks ago I mentioned the knot coloring problem, and left you to play with it. Now I’m going to say what’s going on.
First let’s remember what it means to color a knot. We take a knot and draw a knot diagram to represent it. Then we color each arc of the diagram — from the undercrossing at one end to the undercrossing at the other — either red, green, or blue. At every crossing three arcs come together, and we require that either all three get the same color or all three get different colors. We can always just give every arc the same color, but we’re interested in when we can use all three colors.
Of course you’re now screaming (or you should be) that we had to choose a diagram for the knot, so how do we know that the answer doesn’t depend on which choice we made? Luckily we have a way to tell if two knot diagrams represent the same knot: Reidemeister moves! So how do colorings behave when we do a Reidemeister move?
Let’s go through them one at a time. The first move looks like this:
Of course, it doesn’t have to be red. There’s a similar diagram for green and blue. So, if we have a strand colored red we can twist it, coloring both arcs red. On the other hand, if we have a twisted strand, both arcs have to be the same color. Otherwise the crossing wouldn’t be colored right. We can then untwist the strand.
Here’s the second move:
On the left we have two strands of different colors. After performing the move we can give the new arc the third color. If the strands were both the same color we could give the new arc that same color again. On the other side, any coloring of the right side of the move will give the same color to the top and bottom ends of each strand. We can then undo the move and still have a valid coloring.
Finally the third move:
Any coloring of the ends of the three strands that can be extended to a valid coloring of the middle on the left side can be extended to a valid coloring on the right side, and vice versa. For example, both sides require that the strand running through the middle get the same color at both ends.
Now the first and third moves don’t change the number of colors that appear. The second one seems like it might, though. If we have a coloring on the right using all three colors, maybe the left only has two? If we’re dealing with a knot (rather than a link with more than one loop) then eventually the red and green strands will have to meet up. When they do, it’s at a crossing, and then we’ll need to use blue. So if any diagram of a knot has a coloring with all three colors then all of them do. “Three-colorability” is a property of the knot itself, not just of a diagram.
Actually, we can do even better. Pick a diagram on one side of a Reidemeister move and color the ends of each strand. Either this coloring can be extended to a valid coloring of the interior of the diagram or it can’t, and if it can there’s only one way to do it. The extendible colorings of the ends of one side of a move are exactly the same as the extendible colorings of the other side.
The upshot is that we can ask about how many colorings a knot has, or even a multi-loop link. Some of them may not use all three colors, but every diagram of the same link has the same number of valid colorings. Every knot has at least three (monochromatic) colorings, so a knot is three-colorable (using all three colors) if and only if the number of valid colorings of any diagram is bigger than 3.
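Counting colorings is easy to do by brute force for a given diagram. Here is a quick Python check (the crossing data is my own ad hoc encoding: each crossing just lists the three arcs meeting there):

```python
from itertools import product

def count_colorings(n_arcs, crossings):
    """Count assignments of 3 colors to arcs such that at each
    crossing the three incident arcs are all equal or all distinct."""
    count = 0
    for coloring in product(range(3), repeat=n_arcs):
        ok = all(
            len({coloring[a], coloring[b], coloring[c]}) in (1, 3)
            for a, b, c in crossings
        )
        count += ok
    return count

# Trefoil: arcs 0, 1, 2; each of the three crossings touches all three.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(count_colorings(3, trefoil))   # 9

# Unknot (one arc, no crossings): only the monochromatic colorings.
print(count_colorings(1, []))        # 3
```

The trefoil gets $9>3$ valid colorings, so it is three-colorable; the crossing-free unknot diagram gets only the $3$ monochromatic ones.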
One thing we need is another fact about the tensor product of abelian groups. If we take three abelian groups $A$, $B$, and $C$, we can form the tensor product $A\otimes B$, and then use that to make $(A\otimes B)\otimes C$. On the other hand, we could have started with $B\otimes C$ and then built $A\otimes(B\otimes C)$. If we look at the construction we used to show that tensor products actually exist, we see that these two groups are not the same. However, they are isomorphic.
To see this, let’s make a bilinear function from $(A\otimes B)\times C$ to $A\otimes(B\otimes C)$. By our construction, any element of $A\otimes B$ can be represented as a sum $\sum_ia_i\otimes b_i$, so linearity says we just need to consider elements of the form $(a\otimes b,c)$. Define $f(a\otimes b,c)=a\otimes(b\otimes c)$. This induces a unique linear function from $(A\otimes B)\otimes C$ to $A\otimes(B\otimes C)$, given by $(a\otimes b)\otimes c\mapsto a\otimes(b\otimes c)$ and extending to sums of such elements. Similarly we get a linear function in the other direction, so we have an isomorphism of abelian groups. We can thus (somewhat) unambiguously talk about “the” tensor product $A\otimes B\otimes C$.
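For free abelian groups we can see this very concretely: writing elements in coordinates, the tensor product of two elements is modeled by the Kronecker product of their coordinate vectors, and the associativity isomorphism becomes an honest equality of coordinate vectors. A quick numpy check (my own illustration, not part of the construction above):

```python
import numpy as np

# Elements of Z^3, Z^4, Z^2 in coordinates; np.kron models the tensor
# product of elements, so associativity is an equality of arrays.
rng = np.random.default_rng(0)
a = rng.integers(-5, 5, size=3)
b = rng.integers(-5, 5, size=4)
c = rng.integers(-5, 5, size=2)

left = np.kron(np.kron(a, b), c)    # (a ⊗ b) ⊗ c
right = np.kron(a, np.kron(b, c))   # a ⊗ (b ⊗ c)
print(np.array_equal(left, right))  # True
```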
Now let’s take a collection of abelian groups $A_i$, with $i$ running over an index set $\mathcal{I}$, and let $B$ be any other abelian group. We want to consider the tensor product

$$\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B$$
Since the direct sum is a subgroup of the direct product of the $A_i$ (a proper subgroup when the index set is infinite), it comes with projections $\pi_i$ onto the summands. Since it is also a quotient of the free product, we also have injections $\iota_i$ of the summands, coming from the free product. We can use these to build homomorphisms

$$\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B\to A_i\otimes B$$

applying $\pi_i$ to the direct sum and the identity to $B$. Since any element of the tensor product involves only finitely many of the summands, these homomorphisms combine to give us a homomorphism

$$\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B\to\bigoplus\limits_{i\in\mathcal{I}}(A_i\otimes B)$$

On the other hand, for each $i$ we have a bilinear function sending $(a,b)$ in $A_i\times B$ to $\iota_i(a)\otimes b$ in $\left(\bigoplus_{i\in\mathcal{I}}A_i\right)\otimes B$. By the universal property of tensor products this gives a linear function from $A_i\otimes B$. The universal property of direct sums (the one it gets from free products of groups) gives us a linear function

$$\bigoplus\limits_{i\in\mathcal{I}}(A_i\otimes B)\to\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B$$
Now there’s a lot of juggling of functions and injections and projections here that I really don’t think is very illuminating. The upshot is that the two functions we’ve just built are inverses of each other, giving us an isomorphism of the two abelian groups. There’s nothing really special about the left side of the tensor product either. A similar result holds if the direct sum is the right tensorand. We can even put them together to get the really nice isomorphism:

$$\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes\left(\bigoplus\limits_{j\in\mathcal{J}}B_j\right)\cong\bigoplus\limits_{i\in\mathcal{I}}\bigoplus\limits_{j\in\mathcal{J}}(A_i\otimes B_j)$$
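We can sanity-check this in coordinates too: modeling the direct sum of free abelian groups by concatenation of coordinate vectors and the tensor product by the Kronecker product, the isomorphism with the direct sum on the left becomes a literal identity of vectors. A small numpy illustration (my own encoding):

```python
import numpy as np

# Model the direct sum by concatenation and the tensor product of
# elements by the Kronecker product; then (A ⊕ B) ⊗ C ≅ (A⊗C) ⊕ (B⊗C)
# is a literal equality of coordinate vectors.
a = np.array([1, -2])        # element of A = Z^2
b = np.array([3, 0, 4])      # element of B = Z^3
c = np.array([5, 7])         # element of C = Z^2

lhs = np.kron(np.concatenate([a, b]), c)              # (a ⊕ b) ⊗ c
rhs = np.concatenate([np.kron(a, c), np.kron(b, c)])  # (a⊗c) ⊕ (b⊗c)
print(np.array_equal(lhs, rhs))   # True
```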
Now we come to a really nice example of a semigroup ring. Start with the free commutative monoid on $n$ generators. This is just the product of $n$ copies of the natural numbers: $\mathbb{N}^n$. Now let’s build the semigroup ring on this monoid.
First off, an element of the monoid is an ordered $n$-tuple of natural numbers $(k_1,k_2,\dots,k_n)$. Let’s write it in the following, more suggestive notation: $x_1^{k_1}x_2^{k_2}\cdots x_n^{k_n}$. We multiply such “monomials” just by adding up the corresponding exponents, as we know from the composition rule for the monoid. Now we build the semigroup ring by taking formal linear combinations of these monomials. A generic element looks like

$$\sum c_{k_1k_2\cdots k_n}x_1^{k_1}x_2^{k_2}\cdots x_n^{k_n}$$

where the $c_{k_1k_2\cdots k_n}$ are integers, and all but finitely many of them are zero.
Assuming everyone’s taken high school algebra, we’ve seen these before. They’re just polynomials in $n$ variables with integer coefficients! The addition and multiplication rules are just what we know from high school algebra. The only difference is that here we specifically don’t think of $x_i$ as a “placeholder” for a number, but as an actual element of our ring.
But we can still use it as a placeholder. Let’s consider any other commutative ring $R$ with unit and pick $n$ elements of $R$. Call them $r_1$, $r_2$, and so on up to $r_n$. Since $R$ is a commutative monoid under multiplication, there is a unique homomorphism of monoids from $\mathbb{N}^n$ to $R$ sending $x_i$ to $r_i$. That’s just what it means for $\mathbb{N}^n$ to be a free commutative monoid. Now there’s a unique homomorphism of rings from $\mathbb{Z}[\mathbb{N}^n]$ to $R$ sending $x_i$ to $r_i$, because $\mathbb{Z}[\mathbb{N}^n]$ is the semigroup ring of $\mathbb{N}^n$.
The upshot is that $\mathbb{Z}[\mathbb{N}^n]$ is the free commutative ring with unit on $n$ generators. Because of this, we’ll usually omit the intermediate step of constructing $\mathbb{N}^n$ and just write this ring as $\mathbb{Z}[x_1,x_2,\dots,x_n]$.
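The “placeholder” point of view is exactly how one would compute with these polynomials in code. Here is a small Python sketch (my own encoding, taking $R=\mathbb{Z}$): a polynomial is a dictionary from exponent tuples to integer coefficients, and evaluation at chosen elements is the unique ring homomorphism just described.

```python
# A polynomial in n variables as a dict from exponent tuples to
# integer coefficients -- exactly the formal linear combinations of
# monomials above.

def evaluate(poly, values):
    """Apply the unique ring homomorphism Z[x_1,...,x_n] -> Z sending
    each x_i to values[i]."""
    total = 0
    for exponents, coeff in poly.items():
        term = coeff
        for r, k in zip(values, exponents):
            term *= r ** k     # substitute r_i for x_i in the monomial
        total += term
    return total

# p = 2·x^2·y - 3·y + 5  in Z[x, y]
p = {(2, 1): 2, (0, 1): -3, (0, 0): 5}
print(evaluate(p, (2, 3)))   # 2·4·3 - 3·3 + 5 = 20
```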
There are similar constructions to this one that I’ll leave you to ponder on your own. What if we just constructed the free monoid on $n$ generators (not commutative)? What about the free semigroup? What sort of rings do we get, and what universal properties do they satisfy?
I’ve said a bunch about natural numbers, but I seem to have ignored what we’re most used to doing with them: counting things! The reason is that we actually don’t use natural numbers to count, we use something called cardinal numbers.
So let’s go back and think about sets and functions. In fact, for the moment let’s just think about finite sets. It seems pretty straightforward to say there are three elements in the set $\{a,b,c\}$, and that there are also three elements in the set $\{x,y,z\}$. Step back for a moment, though, and consider why there are the same number of elements in these two sets. Try to do it without counting first. I’ll wait.
The essential thing that says there’s something the same about these two sets is that there is a bijection between them. For example, I could define a function $f$ by $f(a)=x$, $f(b)=y$, and $f(c)=z$. Every element of $\{x,y,z\}$ is hit by exactly one element of $\{a,b,c\}$, so this is a bijection. Of course, it’s not the only one, but we’ll leave that alone for now.
So now let’s move back to all (possibly infinite) sets and define a relation. Say that sets $A$ and $B$ are “in bijection” — and write $A\leftrightarrow B$ — if there is some bijection $f:A\to B$. This is an equivalence relation! Any set is in bijection with itself, using the identity function. If $A$ is in bijection with $B$ then we can use the inverse function $f^{-1}$ to see that $B\leftrightarrow A$. Finally, if $f:A\to B$ and $g:B\to C$ are bijections, then $g\circ f:A\to C$ is a bijection.
Any time we have an equivalence relation we can split things up into equivalence classes. Now I define a cardinal number to be a bijection class of sets — every set in the class is in bijection with every other, and with none outside the class.
So what does this have to do with natural numbers? Well, let’s focus in on finite sets again. There’s only one empty set $\varnothing$, so let’s call its cardinal number $0$. Now given any finite set $S$ with cardinal number — bijection class — $c$, there’s something not in $S$. Pick any such something, call it $*$, and look at the set $S\cup\{*\}$. If I took any other set $S'$ in bijection with $S$, and anything $*'$ not in $S'$, then there is a bijection between $S\cup\{*\}$ and $S'\cup\{*'\}$. Just apply the bijection from $S$ to $S'$ on those elements from $S$, and send $*$ to $*'$. This shows that the bijection class — the cardinal number — of $S\cup\{*\}$ doesn’t depend on what choices we made along the way. Since it’s well-defined, we can call it the successor $s(c)$.
We look at the set of all bijection classes of finite sets. We’ve got an identified element $0$ and a successor function $s$. In fact, this satisfies the universal property for natural numbers. The set of cardinal numbers of finite sets is (isomorphic to) the set of natural numbers!
And that’s how we count things.
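The definition even suggests how a computer “counts”: strip off one element at a time, running the successor story in reverse, until you reach the empty set. A toy Python sketch (the function names are my own):

```python
# The cardinal of a finite set is how many times we can remove an
# element before reaching the empty set -- the chain 0, s(0), s(s(0)),
# ... described above, run backwards.

def cardinal(s):
    s = set(s)
    n = 0
    while s:
        s.pop()    # strip one element: step back from a successor
        n += 1     # ...and count one application of s(-)
    return n

def in_bijection(a, b):
    """Two finite sets are in bijection iff they have the same cardinal."""
    return cardinal(a) == cardinal(b)

print(cardinal({"a", "b", "c"}))                  # 3
print(in_bijection({"a", "b", "c"}, {1, 2, 3}))   # True
```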
Today I’ll give another great way to get rings: from semigroups.
Start with a semigroup $S$. If it helps, think of a finite semigroup or a finitely-generated one, but this construction doesn’t much care. Now take one copy of the integers $\mathbb{Z}$ for each element of $S$ and direct sum them all together. There are two ways to think of an element of the resulting abelian group: as a function $f:S\to\mathbb{Z}$ that sends all but finitely many elements of $S$ to zero, or as a “formal finite sum” $c_1s_1+c_2s_2+\cdots+c_ks_k$, where each $c_i$ is an integer and $s_i$ is “$1$” from the copy of $\mathbb{Z}$ corresponding to the element $s_i$ of $S$.
I’ll try to talk in terms of both pictures, since some people find the one easier to understand and some the other. We can go back and forth by taking a valid function $f$ and using its nonzero values as the coefficients of a formal sum: $\sum_{s\in S}f(s)s$. This sum is finite because most of the values of $f$ are zero. On the other hand, we can use the coefficients of a formal sum to define a valid function.
So we’ve got an abelian group here, but we want a ring. We use the semigroup multiplication to define the ring multiplication. In the formal sum picture, we define $(c_1s_1)(c_2s_2)=(c_1c_2)(s_1s_2)$, and extend to sums the only way we can to make the multiplication satisfy the distributive law. In the function picture we define

$$[fg](s)=\sum\limits_{uv=s}f(u)g(v)$$

where we take the sum over all pairs $(u,v)$ of elements of $S$ whose product is $s$. This takes the product of all nonzero components of $f$ and $g$ and collects the resulting terms whose indices multiply to the same element of the semigroup.
The ring we get is called the “semigroup ring” of $S$, written $\mathbb{Z}[S]$. There are a number of easy variations on the same theme. If $S$ is actually a monoid we sometimes say “monoid ring”, and note that the ring has a unit given by the identity of the monoid. If $S$ is a group we usually say “group ring”. If in any of these cases we start with a commutative semigroup (monoid, group), we get a commutative ring.
So here’s the really important thing about semigroup rings. If we take any ring $R$ and forget its additive structure, we’re left with a semigroup. If we take any semigroup homomorphism from $S$ to this “underlying semigroup” of $R$, we can uniquely extend it to a ring homomorphism from $\mathbb{Z}[S]$ to $R$. This is just like what we saw for free groups, and it’s just as important.
As a side note, I want to mention something about the multiplication in group rings. Since $uv=s$ if and only if $v=u^{-1}s$, we can rewrite the product formula in the function picture:

$$[fg](s)=\sum\limits_{u\in G}f(u)g(u^{-1}s)$$

This way of multiplying two functions on a group $G$ is called “convolution”, and it shows up all over the place.
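Here is the convolution product worked out in a small example, the group ring $\mathbb{Z}[\mathbb{Z}/3]$, sketched in Python (my own encoding: an element is the list of its coefficient values on the group elements $0$, $1$, $2$):

```python
def convolve(f, g, n=3):
    """Multiplication in Z[Z/n] in the function picture:
    [fg](s) = sum over u of f(u)·g(u^{-1}s), where the group operation
    is addition mod n, so u^{-1}s is (s - u) mod n."""
    return [
        sum(f[u] * g[(s - u) % n] for u in range(n))
        for s in range(n)
    ]

# (1 + 2t)(3 + t^2) in Z[Z/3], writing t for the generator; t^3 = 1.
f = [1, 2, 0]   # 1·t^0 + 2·t^1 + 0·t^2
g = [3, 0, 1]   # 3·t^0 + 0·t^1 + 1·t^2
print(convolve(f, g))   # [5, 6, 1], i.e. 5 + 6t + t^2
```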
The organizers want me to give my 30-minute talk on bracket extensions. Nice way to go out, if it turns out I’m going out of the tower.