# The Unapologetic Mathematician

## Vector spaces

I know I usually go light on Sundays, but I want to finish off what I started yesterday. Remember that we’re considering free modules over a ring $R$ with unit. A free module has a basis, but there may be different bases to choose from. I’ll start with an example of how widely bases can vary.

Let $M$ be a free $R$-module with one basis element for each natural number: $\{e_0,e_1,\dots,e_n,\dots\}$. Then consider the ring $S=\hom_R(M,M)$ of $R$-module endomorphisms of $M$. I claim that for any natural number $n$ there is a basis of $S$ as a free left $S$-module with $n$ elements. That is, $S\cong S\oplus\dots\oplus S$ as a left $S$-module for any finite number of summands. In fact, here is such a basis: $\{f_0,\dots,f_{n-1}\}$, where $f_r(e_{qn+r})=e_q$ and $f_r$ sends every other basis element of $M$ to ${}0$. I'll leave it to you to check that these elements span $S$ and are linearly independent.
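If you'd like to experiment with these maps, here is a minimal Python sketch with $n=2$, indexing the basis of $M$ from zero and encoding vectors as finitely-supported dictionaries (all names here are ad hoc, chosen just for this illustration). The spanning claim comes down to the identity $g=h_0f_0+h_1f_1$, where $h_r(e_q)=g(e_{qn+r})$:

```python
from fractions import Fraction

n = 2  # number of summands: S ≅ S ⊕ S as a left S-module

def f(r):
    """The basis element f_r: sends e_{qn+r} to e_q, every other e_i to 0."""
    def act(i):
        q, rem = divmod(i, n)
        return {q: Fraction(1)} if rem == r else {}
    return act

def compose(outer, inner):
    """(outer·inner)(e_i) = outer(inner(e_i)): the left S-action."""
    def act(i):
        out = {}
        for j, c in inner(i).items():
            for k, d in outer(j).items():
                out[k] = out.get(k, Fraction(0)) + c * d
        return {k: c for k, c in out.items() if c}
    return act

def add(u, v):
    """Pointwise sum of two endomorphisms of M."""
    def act(i):
        out = dict(u(i))
        for k, c in v(i).items():
            out[k] = out.get(k, Fraction(0)) + c
        return {k: c for k, c in out.items() if c}
    return act

# an arbitrary endomorphism, say g(e_i) = e_{i+1} + 2·e_0
def g(i):
    return {i + 1: Fraction(1), 0: Fraction(2)}

# coefficients h_r ∈ S defined by h_r(e_q) = g(e_{qn+r})
def h(r):
    return lambda q: g(q * n + r)

# g decomposes as h_0·f_0 + h_1·f_1, checked on the first few basis vectors
decomposed = add(compose(h(0), f(0)), compose(h(1), f(1)))
assert all(decomposed(i) == g(i) for i in range(20))
```

For each $i$ exactly one of the two terms is nonzero, depending on $i$ modulo $2$, which is why the decomposition recovers $g$ on every basis vector.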

This example shows that in general we can’t even say how many elements are in a basis. However, in many cases of interest we can. In particular, if our ring is commutative or a division ring (or both: a field) then any two bases of a free module are in bijection. Actually, when we’re working over a division ring the situation is even better: every module is free!

Let’s start by considering a module $V$ over a division ring $D$. I claim that a linearly independent set spans $V$ — and thus is a basis — exactly when it is maximal. That is, if we add any other vector we’ll get a nontrivial linear combination adding up to ${}0$. Indeed, if $\{e_i\}_{i\in\mathcal{I}}$ is maximal, then when we add any new element $e$ we get a relation

$$re+\sum\limits_{i\in\mathcal{I}}r_ie_i=0.$$

Here $r$ has to be nonzero, because otherwise we would already have a linear relation on $\{e_i\}_{i\in\mathcal{I}}$. But since $D$ is a division ring we can multiply on the left by $r^{-1}$ to get

$$e=\sum\limits_{i\in\mathcal{I}}-r^{-1}r_ie_i,$$

so the maximal linearly independent set $\{e_i\}_{i\in\mathcal{I}}$ spans $V$, and thus is a basis.

Now, take any linearly independent subset $X$ of $V$ and consider the collection of all linearly independent subsets of $V$ containing $X$. We can partially order these subsets by inclusion: if a subset $Y$ is contained in another subset $Y'$, then $Y\leq Y'$ in the order.

Now take some family of these subsets $\{C_i\}$ on which the inclusion order restricts to a total order. That is, each $C_i$ either contains or is contained in each other $C_j$. The union of all the subsets in such a family is still linearly independent, since any linear relation involves only finitely many vectors, and those all lie together in some single $C_i$. The union also clearly contains every member of the family, so it is an upper bound which still lies in our collection of linearly independent subsets of $V$. Every such totally ordered family has an upper bound in the collection.

And here we need a seemingly-bizarre statement that I’ll cover more thoroughly in a later post: Zorn’s Lemma. This says that any nonempty partially-ordered set $P$ in which every chain (totally ordered subset of $P$) has an upper bound in $P$ contains a maximal element. That is, an element $a$ so that $a\leq c$ implies $a=c$. There’s nothing “above” $a$.

So here we have just such a partially-ordered set. Zorn’s Lemma tells us that there is some linearly independent subset of $V$ containing $X$ that is contained in no larger linearly independent subset. Thus starting with any linearly independent set we can add some elements to it and get a basis. In particular, we could choose $X$ to be the empty set — no elements means no relations at all means linearly independent — and pull a basis for $V$ out of Zorn’s hat. Weird. Eerie. And another similar argument shows that if we started with a set that spans $V$ we can throw out some elements to get a basis.
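This all may feel nonconstructive, but in finite dimensions the extension step is perfectly concrete. Here is a small sketch over $\mathbb{Q}$ (the helper names `is_independent` and `extend_to_basis` are ad hoc, and exact `Fraction` arithmetic stands in for a general division ring): start from a linearly independent set and greedily try to add standard basis vectors, keeping each one only when independence survives.

```python
from fractions import Fraction

def is_independent(vectors):
    """Row-reduce over Q; independent iff every vector contributes a pivot."""
    rows = [list(map(Fraction, v)) for v in vectors]
    rank, cols = 0, len(rows[0]) if rows else 0
    for c in range(cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                factor = rows[r][c] / rows[rank][c]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

def extend_to_basis(independent, dim):
    """Greedily enlarge an independent set until it is maximal: a basis."""
    basis = list(independent)
    for i in range(dim):  # candidates: the standard basis vectors
        e = [1 if j == i else 0 for j in range(dim)]
        if is_independent(basis + [e]):
            basis.append(e)
    return basis

X = [[1, 1, 0, 0], [0, 1, 1, 0]]
B = extend_to_basis(X, 4)
assert len(B) == 4 and is_independent(B)
```

In infinite dimensions there is no such loop to run through, which is exactly why Zorn's Lemma has to do the work there.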

From here there are a couple of rather technical theorems to get to the fact that any two bases of $V$ have the same cardinality. One handles the infinite case (and applies to all rings with unit) and the other the finite case (and applies just to division rings). The latter is actually not that hard, just dry: take one basis and replace its elements one-by-one with elements of the other basis, showing at each step that you still have a basis if you choose the replacement right. I might go through these if people really want to see them, but I’ve never found the proofs themselves terribly enlightening.
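To give the flavor of that replacement argument, here is a concrete sketch over $\mathbb{Q}$ (with ad hoc helpers `rank` and `steinitz`; this illustrates the idea in a small case, it is not the general proof). We swap elements of one basis in for elements of the other, one at a time, checking at each step that we still have a full-rank set:

```python
from fractions import Fraction

def rank(vectors):
    """Rank over Q via Gaussian elimination with exact Fraction arithmetic."""
    rows = [list(map(Fraction, v)) for v in vectors]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def steinitz(A, B):
    """Swap elements of B into the basis A one at a time. Each swap keeps a
    full-rank set, so A must have at least as many elements as B; running the
    argument both ways gives equal cardinality."""
    current = [list(map(Fraction, v)) for v in A]
    targets = [list(map(Fraction, v)) for v in B]
    for b in targets:
        if b in current:
            continue
        for a in current:
            if a in targets:
                continue  # never throw out an element already swapped in
            candidate = [v for v in current if v != a] + [b]
            if rank(candidate) == len(current):  # still a basis: keep it
                current = candidate
                break
    return current

A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
B = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
final = steinitz(A, B)
assert rank(final) == 3 and len(final) == 3
```

After the loop, every element of `B` has been swapped in and the set still has full rank, which is the step-by-step bookkeeping the finite-case theorem formalizes.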

The upshot is that modules over division rings are exceedingly nice. Every single one of them has a basis, and any two bases of a given module have the same cardinality. We have a few special terms here. We call modules over division rings “vector spaces”, and the cardinality of any basis of a vector space we call its “dimension”. Vector spaces, particularly over fields (commutative division rings), will be extremely useful to us as we move ahead.

One very common construction is to use a vector space over a field $\mathbb{F}$ as the substrate for an algebraic structure, rather than an abelian group (a $\mathbb{Z}$-module). For example, we might want to put an action of some other ring $R$ onto a vector space $V$, commuting with the field action. Our work on modules then tells us that in many ways working over $\mathbb{F}$ is just like working over $\mathbb{Z}$. For instance, we can take tensor products over $\mathbb{F}$ and apply $\hom_{\mathbb{F}}$ and get back vector spaces over $\mathbb{F}$, since $\mathbb{F}$ is commutative and acts on both sides of any vector space. The resulting theory will often be simpler, though, because general vector spaces are so much simpler than general abelian groups, and so they’re less likely to “get in the way” of other structures than abelian groups are.