## The Sum of Subspaces

We know what the direct sum of two vector spaces is. We define it abstractly, without reference to the internal structure of either space. It’s sort of like the disjoint union of sets, and in fact a basis for a direct sum is given by the disjoint union of bases for the summands.

Let’s use universal properties to prove this! We consider the direct sum $V\oplus W$, and we have a basis $B$ for $V$ and a basis $C$ for $W$. But remember that the whole point of a basis is that vector spaces are free modules.

That is, there is a forgetful functor from $\mathbf{Vect}$ to $\mathbf{Set}$, sending a vector space to its underlying set. This functor has a left adjoint $F$, which assigns to any set $B$ the vector space $F(B)$ of formal linear combinations of elements of $B$. This is the free vector space on the basis $B$, and when we choose a basis $B$ for a vector space $V$ we are actually choosing an isomorphism $F(B)\cong V$.

Okay. So we’re really considering the direct sum $F(B)\oplus F(C)$, and we’re asserting that it is isomorphic to $F(B\sqcup C)$. But we just said that constructing a free vector space is a functor, and this functor has a right adjoint (the forgetful functor). And we know that any functor that has a right adjoint preserves colimits! The disjoint union of sets is a coproduct, and the direct sum of vector spaces is a biproduct, which means it’s also a coproduct. Thus we have our isomorphism $F(B)\oplus F(C)\cong F(B\sqcup C)$. Neat!
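If you want to see this concretely, here’s a quick NumPy sketch (the dimensions 2 and 3 are arbitrary choices of mine, not anything from the argument above): the disjoint union of bases for the summands sits inside the direct sum as a block-diagonal matrix, and its columns form a basis.

```python
import numpy as np

# Bases for V = R^2 and W = R^3, written as the columns of a matrix.
# (Any bases would do; the standard bases keep the example readable.)
B = np.eye(2)
C = np.eye(3)

# In V ⊕ W, vectors from V get zeros in the W-slots and vice versa,
# so the disjoint union of the two bases is a block-diagonal matrix.
basis_sum = np.block([
    [B, np.zeros((2, 3))],
    [np.zeros((3, 2)), C],
])

# 2 + 3 = 5 linearly independent columns: a basis for the direct sum.
print(np.linalg.matrix_rank(basis_sum))  # 5
```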

But not all unions of sets are disjoint. Sometimes the sets share elements, and the easiest way for this to happen is for them to both be subsets of some larger set. Then the union of the two subsets has to take this overlap into account. And since subspaces of a larger vector space may intersect nontrivially, their sum as subspaces has to take this into account.

First, here’s a definition in terms of the vectors themselves: given two subspaces $U$ and $W$ of a larger vector space $V$, the sum $U+W$ will be the subspace consisting of all vectors that can be written in the form $u+w$ for $u\in U$ and $w\in W$. Notice that there’s no uniqueness requirement here, and that’s because if $U$ and $W$ overlap in anything other than the trivial subspace we can add a vector in that overlap to $u$ and subtract it from $w$, getting a different decomposition. This is precisely the situation a direct sum avoids.
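Concretely, if we represent each subspace by a matrix whose columns span it, then the sum is spanned by all the columns together. Here’s a small NumPy sketch (the particular subspaces of $\mathbb{R}^3$ are illustrative choices of mine) showing the overlap that ruins uniqueness:

```python
import numpy as np

# Represent a subspace by a matrix whose columns span it.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # the xy-plane in R^3
W = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # the yz-plane in R^3

# U + W is spanned by the columns of both matrices together.
S = np.hstack([U, W])

dims = (np.linalg.matrix_rank(U), np.linalg.matrix_rank(W), np.linalg.matrix_rank(S))
print(dims)  # (2, 2, 3)

# dim(U + W) = dim U + dim W - dim(U ∩ W), so the overlap is one-dimensional,
# which is exactly why the decompositions v = u + w fail to be unique here.
print(dims[0] + dims[1] - dims[2])  # 1
```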

Alternatively, let’s consider the collection of all subspaces of $V$. This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power set of a set, except that only those subsets of $V$ which are subspaces get included.

Now it turns out that, like the power set, this poset is actually a lattice. The meet is the intersection of subspaces, but the join isn’t their union. Indeed, the union of subspaces usually isn’t a subspace at all! What do we use instead? The sum, of course! It’s easiest to verify this with the algebraic definition of a lattice: sum and intersection are each idempotent, commutative, and associative, and they satisfy the absorption laws $U+(U\cap W)=U$ and $U\cap(U+W)=U$.

The lattice does have a top element (the whole space $V$) and a bottom element (the trivial subspace $\mathbf{0}$). It’s even modular! Indeed, let $U$, $W$, and $X$ be subspaces with $U\subseteq X$. Then on the one hand we consider $U+(W\cap X)$, which is the collection of all vectors $u+w$, where $u\in U$, $w\in W$, and $w\in X$. On the other hand we consider $(U+W)\cap X$, which is the collection of all vectors $u+w$, where $u\in U$, $w\in W$, and $u+w\in X$. I’ll leave it to you to show how these two conditions are equivalent.
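To make the modular law concrete, here’s a NumPy sketch (the subspaces of $\mathbb{R}^3$ are my own illustrative choices, with $U\subseteq X$) that computes both sides and checks they’re the same subspace by comparing dimensions, including the dimension of the two sides stacked together:

```python
import numpy as np

def null_space(m, tol=1e-10):
    # Orthonormal basis for the kernel of m, via the SVD.
    _, s, vt = np.linalg.svd(m)
    rank = int((s > tol).sum())
    return vt[rank:].T

def intersect(a, b):
    # col(a) ∩ col(b): solve a @ x = b @ y by finding the kernel of [a | -b],
    # then mapping the x-part back through a.
    n = null_space(np.hstack([a, -b]))
    return a @ n[: a.shape[1]]

def dim(m):
    return np.linalg.matrix_rank(m) if m.size else 0

# Subspaces of R^3 with U contained in X:
U = np.array([[1.0], [0.0], [0.0]])                 # the x-axis
W = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # the yz-plane
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # the xy-plane

lhs = np.hstack([U, intersect(W, X)])  # U + (W ∩ X)
rhs = intersect(np.hstack([U, W]), X)  # (U + W) ∩ X

# Equal subspaces have equal dimension, and stacking them adds nothing new.
print(dim(lhs), dim(rhs), dim(np.hstack([lhs, rhs])))  # 2 2 2
```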

Unfortunately, the lattice isn’t distributive. I could work this out directly, but it’s easier to just notice that complements aren’t unique. Just consider three subspaces of $\mathbb{F}^2$: $U$ has all vectors of the form $(x,0)$, $W$ has all of the form $(0,y)$, and $X$ has all of the form $(x,x)$. Then $U+W=\mathbb{F}^2$, and so $(U+W)\cap X=X$, but $(U\cap X)+(W\cap X)=\mathbf{0}$.

This is all well and good, but it’s starting to encroach on Todd’s turf, so I’ll back off a bit. The important bit here is that the sum behaves like a least upper bound.

In categorical terms, this means that it’s a coproduct in the lattice of subspaces (considered as a category). Don’t get confused here! Direct sums are coproducts in the category $\mathbf{Vect}$, while sums are coproducts in the category (lattice) of subspaces of a given vector space. These are completely different categories, so don’t go confusing coproducts in one with those in the other.

In this case, all we mean by saying this is a categorical coproduct is that we have a description of the sum of two subspaces which doesn’t refer to the elements of the subspaces at all. The sum $U+W$ is the smallest subspace of $V$ which contains both $U$ and $W$. It is the “smallest” in the sense that any other subspace containing both $U$ and $W$ must contain $U+W$. This description from the outside of the subspaces will be useful when we don’t want to get our hands dirty with actual vectors.

I hope I’m not confusing products in one category with coproducts in the other, but… aren’t joins in lattices coproducts, while meets are products?

Comment by Sridhar Ramesh | July 21, 2008 |

Oops.. I’d turned that upside-down in my head. Fixed now.

Comment by John Armstrong | July 21, 2008 |

(One tiny instance left to fix, at the beginning of the last paragraph.)

Comment by Sridhar Ramesh | July 21, 2008 |

[...] since direct sums add dimensions this [...]

Pingback by The Euler Characteristic of an Exact Sequence Vanishes « The Unapologetic Mathematician | July 23, 2008 |

[...] Let’s start with just two subspaces $U$ and $W$ of some larger vector space. We’ll never really need that space, so we don’t need to give it a name. The thing to remember is that $U$ and $W$ might have a nontrivial intersection — their sum may not be direct. [...]

Pingback by The Inclusion-Exclusion Principle « The Unapologetic Mathematician | July 24, 2008 |

[...] Complements and the Lattice of Subspaces We know that the poset of subspaces of a vector space is a lattice. Now we can define complementary subspaces in a way that [...]

Pingback by Orthogonal Complements and the Lattice of Subspaces « The Unapologetic Mathematician | May 7, 2009 |