# The Unapologetic Mathematician

## Free modules

Following yesterday’s examples of module constructions, we consider a ring $R$ with unit. Again, $R$ is a left and a right module over itself by multiplication.

We can form the direct sum of a bunch of copies of $R$ over any (finite or infinite) index set $\mathcal{I}$: $\bigoplus\limits_{i\in\mathcal{I}}R$. Every element of this module is a list of elements of $R$ indexed by $\mathcal{I}$ — written $\left(r_i\right)_{i\in\mathcal{I}}$ — and all but a finite number of its entries are zero. The ring $R$ acts from the left by $r\cdot\left(r_i\right)_{i\in\mathcal{I}}=\left(rr_i\right)_{i\in\mathcal{I}}$.

One special thing about this module is that any element can be written as a sum — $\left(r_i\right)_{i\in\mathcal{I}}=\sum\limits_{i\in\mathcal{I}}r_i\cdot e_i$ — where $e_i$ is the element with a $1$ in the slot indexed by $i$ and ${}0$ in all the other slots. This sum makes sense because there are only a finite number of nonzero terms to consider for any given module element. Since any element can be written as an $R$-linear combination of these $e_i$, we say they “span” the module.
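The construction above can be sketched concretely. Here is a minimal, hypothetical model over $R=\mathbb{Z}$: an element of $\bigoplus_{i\in\mathcal{I}}\mathbb{Z}$ is represented as a dict with finitely many nonzero entries, and we check that any element really does decompose as a finite sum $\sum_i r_i\cdot e_i$. The function names are my own, not anything from the post.

```python
def scalar_mult(r, elem):
    """The left action r·(r_i) = (r·r_i), dropping entries that become zero."""
    return {i: r * ri for i, ri in elem.items() if r * ri != 0}

def add(x, y):
    """Componentwise addition of finitely supported tuples."""
    out = dict(x)
    for i, ri in y.items():
        out[i] = out.get(i, 0) + ri
        if out[i] == 0:
            del out[i]
    return out

def e(i):
    """The element e_i: a 1 in the slot indexed by i, 0 in all other slots."""
    return {i: 1}

# An arbitrary element decomposes as the finite sum of its r_i · e_i:
v = {0: 3, 2: -5}
decomp = {}
for i, ri in v.items():
    decomp = add(decomp, scalar_mult(ri, e(i)))
assert decomp == v
```

The sum is finite for any given element precisely because only finitely many slots are nonzero, which is what the dict representation encodes.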

Even better, there’s no way of writing any of the $e_i$ as an $R$-linear combination of the others. More specifically, if we have some $R$-linear combination $\sum\limits_{i\in\mathcal{I}}r_i\cdot e_i$, the only way for it to be the zero element of the module is for all of the $r_i$ to be zero. Since there are no $R$-linear relations between the $e_i$, we say that they are “linearly independent” (over $R$).

These two conditions — span and linear independence — show up all the time. Whenever we have a linearly independent collection of module elements that span a module, we say that they form a “basis” of the module. By the spanning property, every module element can be written as a linear combination of basis elements. The linear independence tells us that this expression is unique.
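To see the span-plus-independence conditions giving a *unique* expression, here is a small hypothetical check in the $\mathbb{Z}$-module $\mathbb{Z}^2$ with the basis $b_1=(1,0)$, $b_2=(1,1)$ (my choice of example, not the post's): every element has exactly one set of coordinates, which we can solve for directly.

```python
# Basis of Z^2 (as a Z-module): b1 = (1,0), b2 = (1,1).
b1, b2 = (1, 0), (1, 1)

def coords(v):
    """Solve v = r1*b1 + r2*b2 over Z: the second slot forces r2 = v[1],
    and then the first slot forces r1 = v[0] - r2."""
    r2 = v[1]
    r1 = v[0] - r2
    return r1, r2

v = (5, 3)
r1, r2 = coords(v)
# The coordinates reconstruct v, and linear independence says no other pair can:
assert (r1 * b1[0] + r2 * b2[0], r1 * b1[1] + r2 * b2[1]) == v
```

Each step of the solve is forced, which is exactly the uniqueness that linear independence guarantees.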

Now it’s important to note that not every module has a basis. As an example of a module without one, consider the abelian group $\mathbb{Z}_2$ as a $\mathbb{Z}$-module. No element of this module is linearly independent even on its own! Clearly $n\cdot0=0$ even when $n$ is nonzero, so $\{0\}$ is not linearly independent. Likewise $n\cdot1=0$ whenever $n$ is even, so $\{1\}$ can’t be linearly independent either. With no linearly independent sets at all, there can be no basis.
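The argument that no singleton in $\mathbb{Z}_2$ is linearly independent can be checked mechanically — for each of the two elements there is a nonzero integer annihilating it:

```python
# For each element m of Z/2Z, exhibit a nonzero n with n·m ≡ 0 (mod 2),
# witnessing that the singleton {m} is not linearly independent over Z.
for m in (0, 1):
    n = 1 if m == 0 else 2   # nonzero annihilator: any n kills 0; any even n kills 1
    assert n != 0 and (n * m) % 2 == 0
```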

On the other hand, if an $R$-module $M$ does have a basis $\{b_i\}$, I claim that it’s isomorphic to a direct sum of copies of $R$, as above. Just take the index set to index the basis itself and try to find an isomorphism $M\cong\bigoplus\limits_{i\in\mathcal{I}}R$. Construct the function by sending $b_i$ to $e_i$ and extend by $R$-linearity. Since $\{b_i\}$ is a basis we can write an element of $M$ as $\sum\limits_{i\in\mathcal{I}}r_ib_i$, which must be sent to $\sum\limits_{i\in\mathcal{I}}r_ie_i$. Since $\{e_i\}$ is a basis of $\bigoplus\limits_{i\in\mathcal{I}}R$, the only way an element of $M$ gets sent to zero is if all the $r_i$ are zero already, and every element in the target gets hit at least once. Thus the function is an isomorphism.

Now, by the way direct sums interact with $\hom$, we see that for any left module $M$ we have
$\hom\left(\bigoplus\limits_{i\in\mathcal{I}}R,M\right)\cong\prod\limits_{i\in\mathcal{I}}\hom(R,M)\cong\prod\limits_{i\in\mathcal{I}}M$
thus if we pick a list of elements $m_i$ of $M$ indexed by $\mathcal{I}$ — no restriction on how many nonzero elements we pick — we get a unique homomorphism from $\bigoplus\limits_{i\in\mathcal{I}}R$ to $M$ sending $e_i$ to $m_i$. This justifies calling $\bigoplus\limits_{i\in\mathcal{I}}R$ a “free” left $R$-module, analogously to free groups, free rings, and so on.
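This freeness property — an arbitrary choice of targets $m_i$ determines a unique homomorphism — can be sketched in the same dict representation as before, again over $R=\mathbb{Z}$ with made-up targets (nothing here is from the post beyond the math):

```python
def free_hom(targets):
    """Given a choice targets[i] = m_i in M, return the unique homomorphism
    from the free module sending each e_i to m_i, extended by linearity."""
    def phi(elem):
        return sum(ri * targets[i] for i, ri in elem.items())
    return phi

# Pick images for e_0, e_1, e_2 freely — no restrictions at all:
phi = free_hom({0: 7, 1: -2, 2: 4})

assert phi({0: 1}) == 7                      # phi(e_0) = m_0
assert phi({0: 2, 1: 3}) == 2 * 7 + 3 * (-2)  # linearity forces everything else
```

Uniqueness is visible in the code: once `targets` is fixed, linearity leaves no further choices to make.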

The upshot of this property is that when we’re dealing with two free modules $M$ and $N$ and we have a basis in hand for each, then we have a nice way of writing down homomorphisms from $M$ to $N$. Let’s use $\{a_i\}_{i\in\mathcal{I}}$ as our basis for $M$ and $\{b_j\}_{j\in\mathcal{J}}$ as our basis for $N$. Then we can specify any homomorphism $f:M\rightarrow N$ by saying where $f$ sends the basis of $M$. We write $f(a_i)=n_i$. But then since $N$ has a basis we can write the $n_i$ in terms of the $b_j$, getting $f(a_i)=\sum\limits_{j\in\mathcal{J}}f_{i,j}b_j$.

What if we have another homomorphism $g:N\rightarrow P$, where $P$ is free on $\{c_k\}_{k\in\mathcal{K}}$? If we write $g(b_j)=\sum\limits_{k\in\mathcal{K}}g_{j,k}c_k$ then we compose homomorphisms to get
$g(f(a_i))=g\left(\sum\limits_{j\in\mathcal{J}}f_{i,j}b_j\right)=\sum\limits_{j\in\mathcal{J}}f_{i,j}g(b_j)=\sum\limits_{j\in\mathcal{J}}f_{i,j}\sum\limits_{k\in\mathcal{K}}g_{j,k}c_k=\sum\limits_{k\in\mathcal{K}}\left(\sum\limits_{j\in\mathcal{J}}f_{i,j}g_{j,k}\right)c_k$

If this looks familiar, it’s because we’re getting the coefficients of the composite homomorphism on the right by matrix multiplication! That’s right: we’re finally getting to high school algebra II here. One thing I’ll point out that your teacher probably didn’t tell you is that we only wrote down a matrix for a homomorphism after picking a basis for each free module. A free module may have many different bases, and it requires a choice to pick one or another to write down a matrix. This choice may lead to all sorts of artifacts in the matrix that really have nothing to do with the homomorphism itself and everything to do with the basis. Thus we’ll try everywhere to avoid using a specific basis unless one clearly stands out as useful.
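The composition formula above can be verified numerically. With the convention $f(a_i)=\sum_j f_{i,j}b_j$ (rows indexed by the source basis), the composite’s coefficients $\sum_j f_{i,j}g_{j,k}$ are exactly the entries of the matrix product. The specific coefficients below are made up for illustration:

```python
# f: M -> N with bases of sizes 2 and 2; g: N -> P with bases of sizes 2 and 3.
# Entry F[i][j] is f_{i,j} in f(a_i) = sum_j f_{i,j} * b_j, and similarly for G.
F = [[1, 2],
     [0, 3]]
G = [[4, 0, 1],
     [2, 5, 0]]

def matmul(A, B):
    """(A·B)[i][k] = sum_j A[i][j] * B[j][k] — the composition formula."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

# The composite g∘f has coefficients (g∘f)_{i,k} = sum_j f_{i,j} g_{j,k}:
composite = matmul(F, G)
assert composite == [[8, 10, 1], [6, 15, 0]]
```

Note that under this row convention the composite $g\circ f$ is computed as the product $FG$; with the opposite (column) convention the order of the factors flips, which is one more basis-dependent artifact to keep track of.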