Isomorphisms of Vector Spaces
Okay, after that long digression into power series and such, I’m coming back to linear algebra. What we want to talk about now is what it means for two vector spaces to be isomorphic. Of course, this means that they are connected by an invertible linear transformation (one which preserves the addition and scalar multiplication operations):

$$T: V \to W$$
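To spell out the conditions (this is just the standard definition written explicitly, nothing beyond what’s stated above): for all vectors $v, v_1, v_2 \in V$ and scalars $c$ we need

$$T(v_1 + v_2) = T(v_1) + T(v_2), \qquad T(cv) = cT(v),$$

and there must be a linear map $T^{-1}: W \to V$ with $T^{-1} \circ T = 1_V$ and $T \circ T^{-1} = 1_W$.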
First off, to be invertible the kernel of $T$ must be trivial. Otherwise we’d have two vectors in $V$ mapping to the same vector in $W$, and we wouldn’t be able to tell which one it came from in order to invert the map. Similarly, the cokernel of $T$ must be trivial, or we’d have missed some vectors in $W$, and we couldn’t tell where in $V$ to send them under the inverse map. This tells us that the index of an isomorphism must be zero, and thus that the vector spaces must have the same dimension. This seems sort of obvious, that isomorphic vector spaces would have to have the same dimension, but you can’t be too careful.
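To make that dimension count explicit (this is just the rank–nullity theorem, assuming $V$ and $W$ are finite-dimensional): the trivial kernel and trivial cokernel give

$$\dim(V) = \dim\ker(T) + \dim\operatorname{im}(T) = 0 + \dim\operatorname{im}(T) = \dim(W),$$

where the last equality holds because a trivial cokernel $W/\operatorname{im}(T)$ means the image is all of $W$. In particular the index $\dim\ker(T) - \dim\operatorname{coker}(T)$ comes out to $0 - 0 = 0$.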
Next we note that an isomorphism sends bases to bases. That is, if $\{e_i\}$ is a basis for $V$, then the collection of images $f_i = T(e_i)$ will form a basis for $W$.
Since $T$ is surjective, given any $w \in W$ there is some $v \in V$ with $T(v) = w$. But $v = v^i e_i$ uniquely (remember the summation convention) because the $e_i$ form a basis. Then $w = T(v) = T(v^i e_i) = v^i T(e_i) = v^i f_i$, and so we have an expression of $w$ as a linear combination of the $f_i$. The collection $\{f_i\}$ thus spans $W$.
On the other hand, if we have a linear combination $c^i f_i = 0$, then we can write $0 = c^i f_i = c^i T(e_i) = T(c^i e_i)$. Since $T$ is injective we find $c^i e_i = 0$, and thus each $c^i = 0$, since the $e_i$ form a basis. So the spanning set $\{f_i\}$ is linearly independent, and therefore forms a basis.
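For a concrete illustration (an example I’m adding here, not part of the argument above): take $V = W = \mathbb{R}^2$ with the standard basis $e_1 = (1,0)$, $e_2 = (0,1)$, and let $T$ be the invertible map $T(x, y) = (x + y, y)$. Then

$$f_1 = T(e_1) = (1, 0), \qquad f_2 = T(e_2) = (1, 1),$$

and $(1,0)$, $(1,1)$ is indeed another basis of $\mathbb{R}^2$.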
The converse, it turns out, is also true. If $\{e_i\}$ is a basis of $V$, and $\{f_i\}$ is a basis of $W$ (indexed by the same set), then the map $T: V \to W$ defined by $T(e_i) = f_i$ (and extending by linearity) is an isomorphism. Indeed, we can define an inverse straight away: $T^{-1}(f_i) = e_i$, and extend by linearity.
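To check that this really is a two-sided inverse, it’s enough to check on basis vectors, since both composites are linear and a linear map is determined by its values on a basis:

$$T^{-1}(T(e_i)) = T^{-1}(f_i) = e_i, \qquad T(T^{-1}(f_i)) = T(e_i) = f_i,$$

so $T^{-1} \circ T = 1_V$ and $T \circ T^{-1} = 1_W$.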
The upshot of these facts is that two vector spaces are isomorphic exactly when they have the same dimension. That is, just the same way that the cardinality of a set determines its isomorphism class in the category of sets, the dimension of a vector space determines its isomorphism class in the category of vector spaces.
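Restating the parallel as formulas: in sets, $X \cong Y$ exactly when $|X| = |Y|$; in vector spaces over a fixed base field,

$$V \cong W \iff \dim(V) = \dim(W).$$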
Now let’s step back and consider what happens when we take any category and throw away all the morphisms that aren’t invertible. We’re left with a groupoid, and like any groupoid it falls apart into a bunch of “connected” pieces: the isomorphism classes. In this case, the isomorphism classes are given by the dimensions of the vector spaces.
Each of these connected pieces, then, is equivalent (as a groupoid) to the automorphism group of any one of its objects, and all of those automorphism groups are isomorphic to each other. In this case, we have a name for these automorphism groups.
Given any vector space $V$, all the interesting information about isomorphisms to or from this space can be summed up in the “general linear group” of $V$, which consists of all invertible linear maps from $V$ to itself. We write this automorphism group as $\mathrm{GL}(V)$.
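As a formula, with composition of maps as the group operation and the identity map as the identity element:

$$\mathrm{GL}(V) = \{\, T: V \to V \mid T \text{ is linear and invertible} \,\}.$$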
We have a special name in the case when $V$ is the vector space $\mathbb{F}^n$ of $n$-tuples of elements of the base field $\mathbb{F}$. In this case we write the general linear group as $\mathrm{GL}(n, \mathbb{F})$ or as $\mathrm{GL}_n(\mathbb{F})$. Since every finite-dimensional vector space over $\mathbb{F}$ is isomorphic to one of these (specifically, the one with $n = \dim(V)$), we have $\mathrm{GL}(V) \cong \mathrm{GL}(n, \mathbb{F})$. These particular general linear groups are thus extremely important for understanding isomorphisms of finite-dimensional vector spaces. We’ll investigate these groups as we move forward.
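To see where that isomorphism of groups comes from (a standard construction, spelled out here): pick any isomorphism $S: \mathbb{F}^n \to V$ with $n = \dim(V)$. Then conjugation by $S$,

$$T \mapsto S^{-1} \circ T \circ S,$$

sends each invertible map $T: V \to V$ to an invertible map from $\mathbb{F}^n$ to itself, and it respects composition, so it defines a group isomorphism $\mathrm{GL}(V) \cong \mathrm{GL}(n, \mathbb{F})$.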