The Unapologetic Mathematician

Mathematics for the interested outsider

Isomorphisms of Vector Spaces

Okay, after that long digression into power series and such, I’m coming back to linear algebra. What we want to talk about now is how two vector spaces can be isomorphic. Of course, this means that they are connected by an invertible linear transformation (one which preserves the addition and scalar multiplication operations):

T:V\rightarrow W

First off, for T to be invertible its kernel must be trivial. Otherwise we’d have two vectors in V mapping to the same vector in W, and we wouldn’t be able to tell which one a vector came from in order to invert the map. Similarly, the cokernel of T must be trivial, or we’d have missed some vectors in W, and we couldn’t tell where in V to send them under the inverse map. This tells us that the index of an isomorphism must be zero, and thus that the vector spaces must have the same dimension. It seems sort of obvious that isomorphic vector spaces would have to have the same dimension, but you can’t be too careful.
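This invertibility criterion can be checked numerically. Here’s a minimal NumPy sketch with an arbitrary example matrix (not from the post): trivial kernel means nullity zero, trivial cokernel means the rank fills the target, and together they let us form the inverse.

```python
import numpy as np

# An arbitrary example map T : F^2 -> F^2, written as a matrix.
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])

n = T.shape[1]
rank = np.linalg.matrix_rank(T)

# Trivial kernel: nullity = dim(V) - rank = 0, so T is injective.
assert n - rank == 0
# Trivial cokernel: dim(W) - rank = 0, so T is surjective.
assert T.shape[0] - rank == 0

# With both conditions met, the inverse map exists.
T_inv = np.linalg.inv(T)
assert np.allclose(T_inv @ T, np.eye(n))
```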

Next we note that an isomorphism sends bases to bases. That is, if \{e_i\} is a basis for V, then the collection of f_i=T(e_i) will form a basis for W.

Since T is surjective, given any w\in W there is some v\in V with T(v)=w. But v=v^ie_i uniquely (remember the summation convention) because the e_i form a basis. Then w=T(v)=T(v^ie_i)=v^iT(e_i)=v^if_i, and so we have an expression of w as a linear combination of the f_i. The collection \{f_i\} thus spans W.

On the other hand, if we have a linear combination 0=x^if_i, then we can write 0=x^iT(e_i)=T(x^ie_i). Since T is injective we find x^ie_i=0, and thus each x^i=0, since the e_i form a basis. So the spanning set \{f_i\} is linearly independent, and therefore forms a basis.
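The two halves of this argument can be sketched numerically. In the following NumPy snippet (the invertible matrix is an arbitrary example), the images f_i = T(e_i) of the standard basis are just the columns of T, and they form a basis exactly when the matrix with those columns has full rank.

```python
import numpy as np

# An arbitrary invertible map T : F^3 -> F^3.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
assert not np.isclose(np.linalg.det(T), 0)  # T is invertible

e = np.eye(3)  # columns are the standard basis vectors e_i
f = np.column_stack([T @ e[:, i] for i in range(3)])  # f_i = T(e_i)

# The f_i span and are linearly independent -- i.e. form a basis --
# precisely when the matrix of their coordinates has full rank.
assert np.linalg.matrix_rank(f) == 3
```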

The converse, it turns out, is also true. If \{e_i\} is a basis of V, and \{f_i\} is a basis of W, then the map T defined by T(e_i)=f_i (and extending by linearity) is an isomorphism. Indeed, we can define an inverse straight away: T^{-1}(f_i)=e_i, and extend by linearity.
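The converse construction can also be made concrete. In this sketch (both example bases are arbitrary), if the columns of E are a basis of V and the columns of F are a basis of W, then the matrix T = FE^{-1} is the map defined by T(e_i)=f_i and extended by linearity.

```python
import numpy as np

# Arbitrary example bases, written as matrix columns.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # basis {e_1, e_2} of V
F = np.array([[2.0, 0.0],
              [1.0, 1.0]])   # basis {f_1, f_2} of W

# The linear extension of e_i |-> f_i is represented by F E^{-1}.
T = np.linalg.solve(E.T, F.T).T  # computes F @ inv(E) stably

# Check that T sends each basis vector where it should,
# and that the resulting map is invertible.
for i in range(2):
    assert np.allclose(T @ E[:, i], F[:, i])
assert not np.isclose(np.linalg.det(T), 0)
```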

The upshot of these facts is that two vector spaces are isomorphic exactly when they have the same dimension. That is, just the same way that the cardinality of a set determines its isomorphism class in the category of sets, the dimension of a vector space determines its isomorphism class in the category of vector spaces.

Now let’s step back and consider what happens in any category and throw away all the morphisms that aren’t invertible. We’re left with a groupoid, and like any groupoid it falls apart into a bunch of “connected” pieces: the isomorphism classes. In this case, the isomorphism classes are given by the dimensions of the vector spaces.

Each of these connected pieces, then, is equivalent (as a groupoid) to the automorphism group of any one of its objects, and all of these groups are isomorphic to each other. In this case, we have a name for these automorphism groups.

Given any vector space V, all the interesting information about isomorphisms to or from this space can be summed up in the “general linear group” of V, which consists of all invertible linear maps from V to itself. We write this automorphism group as \mathrm{GL}(V).

We have a special name in the case when V is the vector space \mathbb{F}^n of n-tuples of elements of the base field \mathbb{F}. In this case we write the general linear group as \mathrm{GL}(n,\mathbb{F}) or as \mathrm{GL}_n(\mathbb{F}). Since every finite-dimensional vector space over \mathbb{F} is isomorphic to one of these (specifically, the one with n=\dim(V)), we have \mathrm{GL}(V)\cong\mathrm{GL}(n,\mathbb{F}). These particular general linear groups are thus extremely important for understanding isomorphisms of finite-dimensional vector spaces. We’ll investigate these groups as we move forward.
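As a small numerical sketch of the group structure of \mathrm{GL}(n,\mathbb{F}) (the two invertible matrices below are arbitrary examples over the reals): products and inverses of invertible maps are again invertible, which is what makes this set a group under composition.

```python
import numpy as np

# Two arbitrary elements of GL(2, R).
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# The product, the inverses, and the identity all stay invertible.
for M in (A, B, A @ B, np.linalg.inv(A), np.eye(2)):
    assert not np.isclose(np.linalg.det(M), 0)

# The determinant is multiplicative, which is one way to see that
# invertibility is preserved under composition.
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```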

October 17, 2008 Posted by | Algebra, Linear Algebra | 11 Comments


