General Linear Groups — Generally
Monday, we saw that the general linear groups are matrix groups, specifically consisting of those matrices whose columns are linearly independent. But what about more general vector spaces?
Well, we know that every finite-dimensional vector space $V$ over a field $\mathbb{F}$ has a basis, and is thus isomorphic to $\mathbb{F}^n$, where $n$ is the cardinality of the basis. So given a vector space $V$ with a basis $\{f_1,\dots,f_n\}$ of cardinality $n$, we have the isomorphism $S:\mathbb{F}^n\rightarrow V$ defined by $S(e_i)=f_i$ and extended by linearity, where $\{e_i\}$ is the standard basis of $\mathbb{F}^n$.
This isomorphism of vector spaces then induces an isomorphism of their automorphism groups. That is, $\mathrm{GL}(V)\cong\mathrm{GL}_n(\mathbb{F})$. Given an invertible linear transformation $T:V\rightarrow V$, we can conjugate it by $S$ to get $S^{-1}TS:\mathbb{F}^n\rightarrow\mathbb{F}^n$. This has inverse $S^{-1}T^{-1}S$, and so is an element of $\mathrm{GL}_n(\mathbb{F})$. Thus (not unexpectedly) every invertible linear transformation from a vector space $V$ to itself gets an invertible matrix.
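As a concrete illustration (my own sketch, not from the original post): take $V$ to be the space of real polynomials of degree at most $2$, with basis $\{1, x, x^2\}$. The shift operator $T(p)(x)=p(x+1)$ is an invertible linear transformation of $V$, and choosing the basis turns it into an invertible $3\times 3$ matrix. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Basis {1, x, x^2} for real polynomials of degree <= 2.
# The shift operator T(p)(x) = p(x+1) sends basis vectors to:
#   T(1)   = 1
#   T(x)   = 1 + x
#   T(x^2) = 1 + 2x + x^2
# Writing each image in the basis gives one column of the matrix.
T = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

print(np.linalg.det(T))   # nonzero, so T lands in GL_3(R)
print(np.linalg.inv(T))   # the matrix of the inverse shift p(x) |-> p(x-1)
```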
But this assignment depends essentially on the arbitrary choice of the basis for $V$. What if we choose a different basis $\{f'_1,\dots,f'_n\}$? Then we get a new transformation $S':\mathbb{F}^n\rightarrow V$ and a new isomorphism of groups $\mathrm{GL}(V)\cong\mathrm{GL}_n(\mathbb{F})$. But this gives us an inner automorphism of $\mathrm{GL}_n(\mathbb{F})$. Given a transformation $S^{-1}TS$, we get the transformation

$$\left(S'^{-1}S\right)\left(S^{-1}TS\right)\left(S'^{-1}S\right)^{-1}=S'^{-1}TS'$$

This composite sends $\mathbb{F}^n$ to itself, and it has an inverse. Thus changing the basis on $V$ induces an inner automorphism of the matrix group $\mathrm{GL}_n(\mathbb{F})$: conjugation by the matrix $S'^{-1}S$.
Now let’s consider a linear transformation $T:V\rightarrow V$. We have two bases for $V$, and thus two different matrices — two different elements of $\mathrm{GL}_n(\mathbb{F})$ — corresponding to $T$: $S^{-1}TS$ and $S'^{-1}TS'$. We get from one to the other by conjugation with $S'^{-1}S$:

$$S'^{-1}TS'=\left(S'^{-1}S\right)\left(S^{-1}TS\right)\left(S'^{-1}S\right)^{-1}$$

And what is this transformation $S'^{-1}S$? How does it act on a basis vector $e_j$ in $\mathbb{F}^n$? We calculate:

$$\left(S'^{-1}S\right)(e_j)=S'^{-1}(f_j)=\sum\limits_ix^i_je_i$$

where $X=\left(x^i_j\right)$ expresses the vectors in one basis for $V$ in terms of those of the other: $f_j=\sum_ix^i_jf'_i$. That is, the $j$th column of the matrix $X$ consists of the components of $f_j$ written in terms of the $f'_i$. Similarly, the inverse matrix $X^{-1}=S^{-1}S'$, with entries $y^i_j$, writes the $f'_j$ in terms of the $f_i$: $f'_j=\sum\limits_iy^i_jf_i$.
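Here is a small numerical check of this bookkeeping (my own sketch, not part of the original post), assuming NumPy and a pair of bases for $\mathbb{R}^2$:

```python
import numpy as np

# Old basis {f_1, f_2} and new basis {f'_1, f'_2} of R^2, stored as the
# columns of S and S_new (standing in for the isomorphisms S and S').
S     = np.array([[1.0, 1.0],
                  [0.0, 1.0]])   # f_1 = (1,0), f_2 = (1,1)
S_new = np.array([[1.0, 0.0],
                  [1.0, 1.0]])   # f'_1 = (1,1), f'_2 = (0,1)

# Change-of-basis matrix X = S'^{-1} S and its inverse Y = S^{-1} S'.
X = np.linalg.solve(S_new, S)
Y = np.linalg.solve(S, S_new)

# The j-th column of X holds the components of f_j in terms of the f'_i:
# reassembling f_1 from those components recovers it exactly.
assert np.allclose(S_new @ X[:, 0], S[:, 0])
# Likewise the j-th column of Y writes f'_j in terms of the f_i.
assert np.allclose(S @ Y[:, 1], S_new[:, 1])
# And the two matrices really are inverse to each other.
assert np.allclose(X @ Y, np.eye(2))
print("change-of-basis matrices behave as claimed")
```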
It is these “change-of-basis” matrices that effect all of our, well, changes of basis. For example, say we have a vector $v$ with components $v^j$ in the basis $\{f_j\}$, so that $v=\sum_jv^jf_j$. Then we can expand this:

$$v=\sum\limits_jv^jf_j=\sum\limits_jv^j\left(\sum\limits_ix^i_jf'_i\right)=\sum\limits_i\left(\sum\limits_jx^i_jv^j\right)f'_i$$

So our components in the new basis are $v'^i=\sum_jx^i_jv^j$.

As another example, say that we have a linear transformation $T$ with matrix components $t^i_j$ with respect to the basis $\{f_i\}$. That is, $T(f_j)=\sum_it^i_jf_i$. Then we can calculate:

$$T(f'_j)=T\left(\sum\limits_ky^k_jf_k\right)=\sum\limits_ky^k_jT(f_k)=\sum\limits_{k,l}y^k_jt^l_kf_l=\sum\limits_{k,l}y^k_jt^l_k\left(\sum\limits_ix^i_lf'_i\right)=\sum\limits_i\left(\sum\limits_{k,l}x^i_lt^l_ky^k_j\right)f'_i$$

and we have the new matrix components $t'^i_j=\sum_{k,l}x^i_lt^l_ky^k_j$, which in matrix form reads $T'=XTX^{-1}$.
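To see both formulas in action, here is another short sketch of my own (same assumed bases as above) checking $v'^i=\sum_jx^i_jv^j$ and $t'^i_j=\sum_{k,l}x^i_lt^l_ky^k_j$ numerically:

```python
import numpy as np

S     = np.array([[1.0, 1.0], [0.0, 1.0]])   # old basis as columns
S_new = np.array([[1.0, 0.0], [1.0, 1.0]])   # new basis as columns
X = np.linalg.solve(S_new, S)                # entries x^i_j
Y = np.linalg.solve(S, S_new)                # entries y^i_j

# A vector with components v^j in the old basis.
v_old = np.array([3.0, -1.0])
v_new = X @ v_old                            # v'^i = sum_j x^i_j v^j
# Both component vectors describe the same element of R^2.
assert np.allclose(S @ v_old, S_new @ v_new)

# A transformation with components t^i_j in the old basis.
t_old = np.array([[2.0, 1.0], [0.0, 3.0]])
# t'^i_j = sum_{k,l} x^i_l t^l_k y^k_j, i.e. the matrix product X t Y.
t_new = np.einsum('il,lk,kj->ij', X, t_old, Y)
assert np.allclose(t_new, X @ t_old @ Y)
# Both matrices represent the same transformation on R^2.
assert np.allclose(S_new @ t_new @ np.linalg.inv(S_new),
                   S @ t_old @ np.linalg.inv(S))
print("component-transformation formulas verified")
```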
The General Linear Groups
Not just any general linear group $\mathrm{GL}(V)$ for any vector space $V$, but the particular groups $\mathrm{GL}_n(\mathbb{F})$. I can’t put LaTeX, or even HTML subscripts, in post titles, so this will have to do.

The general linear group $\mathrm{GL}_n(\mathbb{F})$ is the automorphism group of the vector space $\mathbb{F}^n$ of $n$-tuples of elements of the field $\mathbb{F}$. That is, it’s the group of all invertible linear transformations sending this vector space to itself. The vector space $\mathbb{F}^n$ comes equipped with a standard basis $\{e_1,\dots,e_n\}$, where $e_i$ has a $1$ in the $i$th place, and $0$ elsewhere. And so we can write any such transformation as an $n\times n$ matrix.
Let’s look at the matrix of some invertible transformation $T$:

$$\begin{pmatrix}t^1_1&t^1_2&\cdots&t^1_n\\t^2_1&t^2_2&\cdots&t^2_n\\\vdots&\vdots&\ddots&\vdots\\t^n_1&t^n_2&\cdots&t^n_n\end{pmatrix}$$

How does it act on a basis element? Well, let’s consider its action on $e_1$:

$$T(e_1)=\begin{pmatrix}t^1_1&\cdots&t^1_n\\\vdots&\ddots&\vdots\\t^n_1&\cdots&t^n_n\end{pmatrix}\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}=\begin{pmatrix}t^1_1\\t^2_1\\\vdots\\t^n_1\end{pmatrix}$$

It just reads off the first column of the matrix of $T$. Similarly, $T(e_i)$ will read off the $i$th column of the matrix of $T$. This works for any linear endomorphism of $\mathbb{F}^n$: its columns are the images of the standard basis vectors. But as we said last time, an invertible transformation must send a basis to another basis. So the columns of the matrix of $T$ must form a basis for $\mathbb{F}^n$.
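Here is a quick NumPy illustration of that column-reading fact (my own sketch):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n))   # the matrix of some linear endomorphism of R^n

for i in range(n):
    # The standard basis vector e_i has a 1 in the i-th place and 0 elsewhere.
    e_i = np.zeros(n)
    e_i[i] = 1.0
    # Applying the matrix to e_i just reads off its i-th column.
    assert np.allclose(T @ e_i, T[:, i])
print("the columns are exactly the images of the standard basis vectors")
```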
Checking that they’re a basis turns out to be made a little easier by the special case we’re in. The vector space $\mathbb{F}^n$ has dimension $n$, and we’ve got $n$ column vectors to consider. If all $n$ are linearly independent, then the column rank of the matrix is $n$. Then the dimension of the image of $T$ is $n$, and thus $T$ is surjective.

On the other hand, any vector $T(v)$ in the image of $T$ is a linear combination of the columns of the matrix of $T$ (use the components of $v$ as coefficients). If these columns are linearly independent, then the only combination adding up to the zero vector has all coefficients equal to $0$. And so $T(v)=0$ implies $v=0$, and $T$ is injective.

Thus we only need to check that the columns of the matrix of $T$ are linearly independent to know that $T$ is invertible.
Conversely, say we’re given a list of $n$ linearly independent vectors $v_1,\dots,v_n$ in $\mathbb{F}^n$. They must be a basis, since any linearly independent set can be completed to a basis, and a basis of $\mathbb{F}^n$ must have exactly $n$ elements, which we already have. Then we can use the $v_i$ as the columns of a matrix. The corresponding transformation $T$ has $T(e_i)=v_i$, and extends from there by linearity. It sends a basis to a basis, and so must be invertible.

The upshot is that we can consider this group as a group of matrices: exactly those whose columns form a linearly independent set.
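As a sketch of my own (not from the post), here is the criterion in NumPy terms, testing linear independence of the columns by computing the rank:

```python
import numpy as np

def is_invertible(M: np.ndarray) -> bool:
    """A square matrix lies in GL_n exactly when its n columns are
    linearly independent, i.e. when its column rank equals n."""
    n = M.shape[0]
    return np.linalg.matrix_rank(M) == n

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])       # columns (1,3) and (2,4) are independent
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # second column is twice the first

print(is_invertible(A))  # True:  A is an element of GL_2(R)
print(is_invertible(B))  # False: B is not invertible
```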
Braid groups
Okay, time for a group I really like.
Imagine you’re playing the shell game. You’re mixing up some shells on the surface of a table, and you can’t lift them up. How can you rearrange them? At first, you might think this is just a permutation group all over again, but not quite. Let’s move two shells around each other, taking a picture of the table each moment, and then stack those pictures like a flip-book movie. I’ve drawn just such a picture.
We read this movie from the bottom to the top. The shell on the right moves behind the shell on the left as they switch places. It could also have moved in front of the left shell, though, and that picture would show the paths crossing the other way. We don’t want to consider those two pictures the same.
So why don’t we want to? Because the paths those shells trace out look a lot like the strands of knots! The two dimensions of the table and one of time make a three-dimensional space we can use to embed knots. Since these pictures just have a bunch of strands running up and down, crossing over and under each other as they go, we call them “braids”. In fact, these movies form a group. We compose movies by running them one after another. We always bring the shells back where they started, so we can always start the next movie with no jumps. We get the identity by just leaving the shells alone. Finally, we can run a movie backwards to invert it.
There’s one such braid group $B_n$ for each number $n$ of shells. The first one, $B_1$, is trivial since there’s nothing to do — there’s no “braiding” going on with one strand. The second one, $B_2$, is just a copy of the integers again, counting how many twists like the one pictured above we’ve done. Count the opposite twist as $-1$. Notice that this is already different from the symmetric groups, where $S_2$ just has the two moves, “swap the letters” or “leave them alone”.

Beyond here the groups $B_n$ and $S_n$ get more and more different, but they’re also pretty tightly related. If we perform a braiding and then forget which direction we made each crossing, we’re just left with a permutation. Clearly every permutation can arise from some braiding, so we have an epimorphism from $B_n$ onto $S_n$. In fact, this shows up when we try to give a presentation of the braid group.
Recall that the symmetric group has presentation:

$$S_n=\left\langle s_1,\dots,s_{n-1}\,\middle|\,s_i^2=1,\quad s_is_j=s_js_i\ (|i-j|>1),\quad s_is_{i+1}s_i=s_{i+1}s_is_{i+1}\right\rangle$$

The generator $s_i$ swaps the contents of places $i$ and $i+1$. The relations mean that swapping twice undoes a swap, widely spaced swaps can be done in either order, and another seemingly more confusing relation that’s at least easily verified. The braid group looks just like this, except now a twist is not its own inverse. So get rid of that first relation:

$$B_n=\left\langle s_1,\dots,s_{n-1}\,\middle|\,s_is_j=s_js_i\ (|i-j|>1),\quad s_is_{i+1}s_i=s_{i+1}s_is_{i+1}\right\rangle$$

The fact that we get from the braid group to the symmetric group by adding relations reflects the fact that $S_n$ is a quotient of $B_n$. It’s interesting to play with this projection and compute its kernel (the so-called “pure braid group”).
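To get a feel for this projection, here is a small Python sketch of my own: a braid word is a list of signed generator indices, and forgetting the sign of each crossing leaves only the underlying permutation. Words that map to the identity permutation represent elements of the kernel.

```python
def underlying_permutation(word, n):
    """Project a braid word in B_n to its permutation in S_n by forgetting
    whether each crossing goes over or under. The word is a list of nonzero
    integers: +i means the i-th generator, -i means its inverse. Returns a
    tuple p where p[k] is the strand that ends up in position k."""
    perm = list(range(n))
    for letter in word:
        i = abs(letter) - 1               # both +i and -i swap strands i, i+1
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return tuple(perm)

# In B_3: a crossing followed by its inverse is the trivial braid,
# and it projects to the identity permutation.
print(underlying_permutation([1, -1], 3))   # (0, 1, 2)

# The full twist (first generator squared) is a nontrivial braid that still
# projects to the identity permutation: an element of the pure braid group.
print(underlying_permutation([1, 1], 3))    # (0, 1, 2)

# A single crossing projects to the transposition of the first two strands.
print(underlying_permutation([1], 3))       # (1, 0, 2)
```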
A difficult exercise
While proctoring the make-up exam for my class, I thought of an exercise related to my group theory posts on direct and free products that should cause even the more experienced mathematicians in the audience a bit of difficulty.
Consider four groups $G_1$, $G_2$, $H_1$, and $H_2$, and four homomorphisms:

$$f_{ij}:G_j\rightarrow H_i\qquad i,j\in\{1,2\}$$

Use these to construct two homomorphisms from the free product $G_1*G_2$ to the direct product $H_1\times H_2$, and show that they’re the same.
The Airplane Seat solution
And now, the solution to the Airplane Seat problem.
Obviously there’s a 1% probability that the first passenger chooses his own seat, in which case the last one automatically gets his. Then there’s a 1% chance that the first chooses the second passenger’s seat, who then has a 1/99 chance of choosing her own, or a 100% chance of this all getting way too complicated.
So let’s look for something not quite so obvious. There are various ways of solving the problem, but the one I’m interested in uses the fact that what we’re really looking at is a permutation!
Specifically, the possible outcomes are exactly the cycles $(1\ a_1\ a_2\ \cdots\ a_k)$: the first passenger sits in passenger $a_1$’s seat, each displaced passenger $a_i$ sits in passenger $a_{i+1}$’s seat, and the last displaced passenger $a_k$ sits in seat $1$. This also has the property that the sequence of $a_i$ only increases. We need the probability that $a_k$ is not 100.

Let’s say that it is. Then we have the cycle $(1\ a_1\ \cdots\ a_{k-1}\ 100)$. We can stick the transposition $(1\ 100)$ after it to give $(1\ a_1\ \cdots\ a_{k-1})$. This is a valid cycle that doesn’t end with 100.

What if we have a cycle that doesn’t end with 100? If we write it as $(1\ a_1\ \cdots\ a_k)$ we can again stick the transposition $(1\ 100)$ after it. Now this gives $(1\ a_1\ \cdots\ a_k\ 100)$, which does end with 100.
Even better, if we start with any cycle, ending with 100 or not, and do both transformations, we get back the exact same cycle we started with. We say that the two transformations are inverses of each other, which just means that each undoes what the other does. A fancier word for an invertible function between two sets is “bijection”.
So we’ve split all valid sequences into ones where 100 gets his own seat and ones where he doesn’t, and shown that there’s a bijection between them. That means that the two sets have the same size! So picking a sequence at random we find that in exactly half the cases 100 will get his own seat.
Like Euclid used to say, Q.E.D.
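For the skeptical, here is a quick Monte Carlo check (a sketch of my own in Python, not part of the original argument); over many simulated boardings the fraction of flights on which passenger 100 gets his own seat should hover around $1/2$:

```python
import random

def last_passenger_gets_own_seat(n=100):
    """Simulate one boarding of an n-seat plane; report whether the
    last passenger ends up in his own seat."""
    empty = set(range(1, n + 1))
    # The first passenger has lost his ticket and sits somewhere at random.
    empty.remove(random.choice(list(empty)))
    # Passengers 2 .. n-1 take their own seat if it's free, else a random empty one.
    for p in range(2, n):
        if p in empty:
            empty.remove(p)
        else:
            empty.remove(random.choice(list(empty)))
    # Exactly one seat is left for the last passenger.
    return empty == {n}

trials = 100_000
hits = sum(last_passenger_gets_own_seat() for _ in range(trials))
print(hits / trials)   # should come out close to 0.5
```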
The Airplane Seat problem
I thought I’d drop a little problem to think about for a while. If you’ve seen it before, please don’t ruin it for those who haven’t by posting a solution in the comments.
Let’s imagine an airplane with 100 seats. Instead of the cattle-call boarding procedures we know, passengers board the plane one at a time. On this flight, the first passenger has lost his ticket, so he just picks a seat at random to sit in. When the second passenger boards, if her own seat is available she sits in it. If not, she picks a random empty seat. This repeats until the plane fills. What is the probability that the 100th passenger gets to sit in his own seat?
Assume that all the random choices are made with equal probability of picking any currently-empty seat.