
The Category of Matrices IV

Finally, all the pieces are in place to state the theorem I’ve been driving at for a while now:

The functor that we described from \mathbf{Mat}(\mathbb{F}) to \mathbf{FinVec}(\mathbb{F}) is an equivalence.

To show this, we must show that it is full, faithful, and essentially surjective. Together, the first two conditions say that for any natural numbers m and n, the functor gives a bijection between the n\times m matrices over \mathbb{F} and the linear transformations \mathbb{F}^m\rightarrow\mathbb{F}^n.

But this is just what we’ve been showing! The vector spaces of n-tuples come with their canonical bases, and given these bases every linear transformation gets a uniquely-defined matrix. Conversely, every matrix defines a unique linear transformation when we’ve got the bases to work with. So fullness and faithfulness are straightforward.
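To make the two directions concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument), using numpy over \mathbb{F}=\mathbb{R} with an arbitrarily chosen 2\times 3 matrix: multiplying by the matrix gives a linear transformation, and applying that transformation to the canonical basis vectors recovers the matrix, column by column.

```python
import numpy as np

# Illustration over F = R with m = 3 and n = 2 (arbitrary choices for this sketch).
A = np.array([[1., 2., 0.],
              [0., 1., 3.]])            # a 2x3 matrix, i.e. a morphism 3 -> 2 in Mat(R)

def transform(v):
    """The linear transformation R^3 -> R^2 that the matrix A represents."""
    return A @ v

# Going back: feed the canonical basis e_1, e_2, e_3 through the transformation
# and use the images as columns.  This recovers exactly the matrix we started with.
recovered = np.column_stack([transform(e) for e in np.eye(3)])
assert np.allclose(recovered, A)
```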

Now for essential surjectivity. This says that given any finite-dimensional vector space V we have some n so that V\cong\mathbb{F}^n. But we know that every vector space has a basis, and for V it must be finite; that’s what “finite-dimensional” means! Let’s say that we’ve got a basis \left\{f_i\right\} consisting of n vectors.

Now we just line up the canonical basis \left\{e_i\right\} of \mathbb{F}^n and define linear transformations S:\mathbb{F}^n\rightarrow V and T:V\rightarrow\mathbb{F}^n by S(e_i)=f_i and T(f_i)=e_i. Remember that we can define a linear transformation by specifying its values on a basis (which can all be picked independently) and then extending by linearity, so these really are two well-defined linear transformations. But just as clearly we see that for any v\in V, written as v=v^if_i in terms of our basis, we have

S(T(v))=S(T(v^if_i))=v^iS(T(f_i))=v^iS(e_i)=v^if_i=v

and the same calculation with the roles of S and T exchanged shows that T(S(w))=w for every n-tuple w\in\mathbb{F}^n. Thus S and T are inverses of each other, and they give the isomorphism V\cong\mathbb{F}^n we need.
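Here is a small numerical sketch of the same construction (again my own illustration, using numpy over \mathbb{R}): I take for V the space of 2\times 2 symmetric real matrices, a three-dimensional space that is not literally \mathbb{R}^3, pick the obvious basis \left\{f_i\right\}, and check that the S and T defined as above really are mutually inverse.

```python
import numpy as np

# A concrete stand-in for V (my choice for this sketch): the 3-dimensional space
# of 2x2 symmetric real matrices, with basis f_1, f_2, f_3 as below.
f = [np.array([[1., 0.], [0., 0.]]),
     np.array([[0., 1.], [1., 0.]]),
     np.array([[0., 0.], [0., 1.]])]

def S(w):
    """S: R^3 -> V, defined by S(e_i) = f_i and extended by linearity."""
    return sum(w[i] * f[i] for i in range(3))

def T(v):
    """T: V -> R^3, defined by T(f_i) = e_i; concretely, it reads off
    the coordinates of v with respect to the basis {f_i}."""
    return np.array([v[0, 0], v[0, 1], v[1, 1]])

v = np.array([[2., -1.], [-1., 5.]])    # an arbitrary element of V
w = np.array([3., 0., -4.])             # an arbitrary element of R^3

assert np.allclose(S(T(v)), v)          # S after T is the identity on V
assert np.allclose(T(S(w)), w)          # T after S is the identity on R^3
```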

This tells us that the language of linear transformations between finite-dimensional vector spaces is entirely equivalent to that of matrices. But we gain some conceptual advantages by thinking in terms of finite-dimensional vector spaces. One I can point to right here is how we can tell the difference between a vector space and its dual. Sure, they’ve got the same dimension, and so there’s some isomorphism between them. Still, when we’re dealing with both at the same time they behave differently, and it’s valuable to keep our eye on that difference.

On the other hand, there are benefits to matrices. For one thing, we can actually write them down and calculate with them. A lot of people are — surprise! — interested in using mathematics to solve problems. And the problems that linear algebra is most directly applicable to are naturally stated in terms of matrices.

What the theorem tells us is that none of this matters. We can translate problems from the category of matrices to the category of vector spaces and back, and nothing is lost in the process.

June 24, 2008 - Posted by | Algebra, Category theory, Linear Algebra
