The Unapologetic Mathematician

Mathematics for the interested outsider

Roots of Polynomials II

This one might have to suffice for two days. The movers are almost done loading the truck here in New Orleans, and I’ll be taking today and tomorrow to drive to Maryland.

We can actually tease out more information from the factorization we constructed yesterday. But first we need a little definition.

Remember that when we set up the algebra of polynomials we noted that the coefficients have to be all zero after some finite number of them. Thus there must be a last nonzero coefficient c_n. The index n corresponding to this coefficient we call the “degree” of the polynomial. If all the coefficients are zero — giving the zero polynomial — then a common convention is to assign it degree -\infty. This actually isn’t completely arbitrary, but the details won’t concern us until later.

So, armed with this information, look at how we constructed the factorization p=(X-x)q. We replaced each term c_kX^k in p with the term c_k(X^k-x^k). Then we factored out (X-x) from this term, giving c_k(X^{k-1}+X^{k-2}x+...+Xx^{k-2}+x^{k-1}). So the highest power of X that shows up in this term (with a nonzero coefficient) is c_kX^{k-1}. And the highest power coming from all the terms of p will be c_nX^{n-1}. The power X^{n-1} shows up only once in the expression for q, so there’s no way for two such terms to add together and make its coefficient turn out to be zero, and no higher power of X ever shows up at all. Thus the degree of q is one less than that of p.

So what does this gain us? Well, each time we find a root we can factor out a term like (X-x), which reduces the degree by 1. So if p has degree n there can only be at most n roots!

A nonzero constant polynomial p=c_0 has degree 0, but it also has no roots! Perfect.

A linear polynomial p=c_0+c_1X has degree 1, and it has exactly one root: -\frac{c_0}{c_1}.

Now let’s assume that our statement is true for all polynomials of degree n: they have n or fewer roots. Then given a polynomial p of degree n+1 either p has a root x or it doesn’t. If it doesn’t, then we’re already done. If it does, then we can factor p=(X-x)q, where q has degree n. But then q can have at most n roots, and thus p can have at most n+1 roots!
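
If you want to see this bound in action, here’s a quick brute-force check over a small finite field. This is only an illustrative Python sketch, not part of the argument; the choice of \mathbb{Z}_7, the coefficient-list representation, and the helper names are all mine.

import itertools

P = 7  # work in the field Z_7

def evaluate(coeffs, x):
    # c_0 + c_1 x + ... + c_n x^n, reduced mod P
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def degree(coeffs):
    # index of the last nonzero coefficient; None for the zero polynomial
    nonzero = [k for k, c in enumerate(coeffs) if c % P != 0]
    return max(nonzero) if nonzero else None

# Every nonzero polynomial of degree at most 2 over Z_7 has at most (degree) roots.
for coeffs in itertools.product(range(P), repeat=3):
    n = degree(coeffs)
    if n is None:
        continue  # skip the zero polynomial
    roots = [x for x in range(P) if evaluate(coeffs, x) == 0]
    assert len(roots) <= n, (coeffs, roots)
print("checked all polynomials of degree <= 2 over Z_7")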

A nice little corollary of this is that if our base field \mathbb{F} is infinite (like it is for the most familiar examples) then only the zero polynomial can give us the zero function when we evaluate it at various field elements. That is, if p(x)=0 for all x\in\mathbb{F}, then p=0. This must be true because p has an infinite number of roots, and so no finite degree polynomial can possibly have that many roots. The only possibility left is the zero polynomial.

Just to be clear, though, let’s look at this one counterexample. Think about the field \mathbb{Z}_3 we used when we talked about Set. The polynomial p=X^3-X is not the zero polynomial, but p(x) is the zero function. Indeed p(0)=0^3-0=0, p(1)=1^3-1=0, and p(2)=2^3-2=8-2=6, which is divisible by 3, and so is the same as 0 in this field.
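
And here’s that check done by machine, as a little Python sketch; the coefficient-list representation of p=X^3-X is just my own bookkeeping.

P = 3
coeffs = [0, -1, 0, 1]  # c_0 + c_1 X + c_2 X^2 + c_3 X^3 = -X + X^3

def evaluate(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

print([evaluate(coeffs, x) for x in range(P)])   # [0, 0, 0] -- the zero function
print(any(c % P != 0 for c in coeffs))           # True -- but not the zero polynomial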

July 31, 2008 Posted by | Algebra, Polynomials, Ring theory | 3 Comments

Roots of Polynomials I

When we consider a polynomial as a function, we’re particularly interested in those field elements x so that p(x)=0. We call such an x a “zero” or a “root” of the polynomial p.

One easy way to get this to happen is for p to have a factor of X-x. Indeed, in that case if we write p=(X-x)q for some other polynomial q then we evaluate to find

p(x)=(x-x)q(x)=0\cdot q(x)=0

The interesting thing is that this is the only way for a root to occur, other than to have the zero polynomial. Let’s say we have the polynomial

p=c_0+c_1X+c_2X^2+...+c_nX^n

and let’s also say we’ve got a root x so that p(x)=0. But that means

0=c_0+c_1x+c_2x^2+...+c_nx^n

This isn’t just a field element equal to zero: read as a constant polynomial, it’s the zero polynomial! So we can subtract it from p without changing anything, and find

\begin{aligned}p=\left(c_0+c_1X+c_2X^2+...+c_nX^n\right)-\left(c_0+c_1x+c_2x^2+...+c_nx^n\right)\\=c_1(X-x)+c_2(X^2-x^2)+...+c_n(X^n-x^n)\end{aligned}

Now for any k we can use the identity

X^k-x^k=(X-x)(X^{k-1}+X^{k-2}x+...+Xx^{k-2}+x^{k-1})

to factor out (X-x) from each term above. This gives the factorization we were looking for.
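
If you’d like to see the quotient q explicitly, synthetic division computes it straight from the coefficients of p and the root x. Here’s a minimal Python sketch, assuming the coefficient-list representation [c_0, c_1, ..., c_n]; the function name and the worked example are mine.

def divide_linear(coeffs, x):
    # Divide p by (X - x): returns (q, r) with p = (X - x) q + r, and r = p(x).
    n = len(coeffs) - 1            # the degree, assuming the top coefficient is nonzero
    q = [0] * n                    # the quotient has degree n - 1
    acc = coeffs[n]
    for k in range(n - 1, -1, -1):
        q[k] = acc                 # coefficient of X^k in the quotient
        acc = coeffs[k] + x * acc  # Horner-style running value
    return q, acc

p = [6, -5, 1]                     # p = 6 - 5X + X^2 = (X - 2)(X - 3)
print(divide_linear(p, 2))         # ([-3, 1], 0): q = X - 3 and the remainder vanishes at the root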

July 30, 2008 Posted by | Algebra, Polynomials, Ring theory | 2 Comments

Polynomials as Functions

When I set up the algebra of polynomials I was careful to specify that the element X is not a “variable”, as in high school algebra. Why do I have to do that? What is the “variable” thing that we were all taught, then?

We’ve got the algebra of polynomials \mathbb{F}[X] over the base field \mathbb{F}. Now I’m going to define a function \mathrm{ev}:\mathbb{F}[X]\times\mathbb{F}\rightarrow\mathbb{F} called the “evaluation map”. We define \mathrm{ev}(p,x) by first writing out p in terms of the standard basis

p=c_0+c_1X+c_2X^2+...+c_nX^n

Remember here that the sum must terminate after a finite number of basis elements. Then we just stick the field element x in for X to get an expression written out in the field \mathbb{F} itself:

\mathrm{ev}(p,x)=c_0+c_1x+c_2x^2+...+c_nx^n

Now the superscripts on each x must be read as exponents. This defines a particular element of the field. If we keep the polynomial p fixed and let x range over \mathbb{F} we get a function from \mathbb{F} to itself, which we can abuse notation to write as p(x). This is the notion of polynomial-as-function we were taught in high school.
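
As a concrete sketch, here is the evaluation map written out in Python exactly as in the formula above; the coefficient-list representation of a polynomial is just a convenient stand-in for the standard basis.

def ev(p, x):
    # ev(p, x) = c_0 + c_1 x + c_2 x^2 + ... + c_n x^n for p = [c_0, c_1, ..., c_n]
    return sum(c * x**k for k, c in enumerate(p))

p = [1, 0, -2, 1]                    # the polynomial 1 - 2X^2 + X^3
print(ev(p, 3))                      # 1 - 2*9 + 27 = 10
print([ev(p, x) for x in range(4)])  # sampling the polynomial-as-function p(x)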

But it’s actually more interesting to see what happens as we fix x and let p vary over all polynomials. The map p\mapsto p(x) turns out to be a homomorphism of \mathbb{F}-algebras! Indeed, given polynomials

p=c_0+c_1X+c_2X^2+...+c_nX^n
q=d_0+d_1X+d_2X^2+...+d_nX^n

(the top coefficients here may be zero, and all higher coefficients definitely are) and a field element k we find

\begin{aligned}\left[p+q\right](x)=(c_0+d_0)+(c_1+d_1)x+(c_2+d_2)x^2+...+(c_n+d_n)x^n\\=c_0+d_0+c_1x+d_1x+c_2x^2+d_2x^2+...+c_nx^n+d_nx^n\\=c_0+c_1x+c_2x^2+...+c_nx^n+d_0+d_1x+d_2x^2+...+d_nx^n\\=p(x)+q(x)\end{aligned}

\begin{aligned}\left[kp\right](x)=(kc_0)+(kc_1)x+(kc_2)x^2+...+(kc_n)x^n\\=kc_0+kc_1x+kc_2x^2+...+kc_nx^n\\=k(c_0+c_1x+c_2x^2+...+c_nx^n)\\=kp(x)\end{aligned}

I’ll let you write out the verification that it also preserves multiplication.
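
Here’s a numerical spot-check of all three properties, including the multiplication I’m leaving to you. It’s only a sketch with integer coefficients, and the helpers poly_add, poly_scale, and poly_mul (the last by the usual convolution of coefficient lists) are my own names, not anything canonical.

def ev(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0) for k in range(n)]

def poly_scale(k, p):
    return [k * c for c in p]

def poly_mul(p, q):
    # the X^m term of p times the X^n term of q lands in degree m + n
    r = [0] * (len(p) + len(q) - 1)
    for m, a in enumerate(p):
        for n, b in enumerate(q):
            r[m + n] += a * b
    return r

p, q, k, x = [1, 2, 0, 3], [5, -1], 4, 7
assert ev(poly_add(p, q), x) == ev(p, x) + ev(q, x)
assert ev(poly_scale(k, p), x) == k * ev(p, x)
assert ev(poly_mul(p, q), x) == ev(p, x) * ev(q, x)
print("evaluation at x respects addition, scalar multiplication, and multiplication here")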

In practice this “evaluation homomorphism” provides a nice way of extracting information about polynomials. And considering polynomials as functions provides another valuable slice of information. But we must still keep in mind the difference between the abstract polynomial

p=c_0+c_1X+c_2X^2+...+c_nX^n

and the field element

p(x)=c_0+c_1x+c_2x^2+...+c_nx^n

July 29, 2008 Posted by | Algebra, Ring theory | 7 Comments

Polynomials

Okay, we’re going to need some other algebraic tools before we go any further into linear algebra. Specifically, we’ll need to know a few things about the algebra of polynomials. In particular (and diverging from the polynomials discussed earlier) we’re talking about polynomials in one variable, with coefficients in the field we’re already building our vector spaces over.

We’ll write this algebra as \mathbb{F}[X], where X is now not a “variable”, like it was back in high school calculus. It’s a new element of the algebra. We start with the field \mathbb{F} which is trivially an algebra over itself. Then we just throw in this new thing called X. Then, since we want to still be an algebra over \mathbb{F}, we have to be able to multiply elements. Defining a scalar multiple cX for each c\in\mathbb{F} is a good start, but we also have to multiply X by itself to get X^2. There’s no reason this should be anything we’ve seen yet, so it’s new. Going along, we get X^3, X^4, and so on. Each of these is a new element, and we also get scalar multiples cX^k, and even linear combinations:

\displaystyle\sum\limits_{k=0}^\infty c_kX^k

as long as there are only a finite number of nonzero terms in this sum. That is, the coefficients are all zero after some point. We customarily take X^0=1 — the unit of the algebra.

Note here that we’re not using the summation convention for polynomials, though we could in principle. Remember, an algebra is a vector space, and what we’ve said above establishes that the set \{X^k\} constitutes a basis for this vector space!

The algebra structure can be specified by defining it on pairs of basis elements. Remember that the structure is just a bilinear multiplication, which is just a linear map from the tensor square \mathbb{F}[X]\otimes\mathbb{F}[X] to \mathbb{F}[X]. And we know that the basis for a tensor product consists of pairs of basis elements. So we can specify this linear map on a basis and extend by linearity — bilinearity — whatever…

Anyhow, how should we define the multiplication? Simply: \mu(X^m,X^n)=X^{m+n}. Then the whole rest of the algebra structure is defined for us. Now this looks like adding exponents, but remember we can just as well think of these as indices on basis elements that just happen to add when we multiply corresponding basis elements. Thus we wouldn’t be out of place using the summation convention here, though we won’t for the moment.
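
In terms of coefficients, extending \mu bilinearly from the basis says that the coefficient of X^k in a product is the sum of all the products c_md_n with m+n=k. Here’s a minimal Python sketch of that rule, with coefficient lists standing in for linear combinations of the X^k; the representation is my choice, not part of the definition.

def poly_mul(p, q):
    # mu(X^m, X^n) = X^(m+n), extended bilinearly: convolve the coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for m, a in enumerate(p):        # the term a X^m
        for n, b in enumerate(q):    # the term b X^n
            r[m + n] += a * b        # contributes a*b X^(m+n)
    return r

print(poly_mul([1, 1], [2, 3, 1]))   # (1 + X)(2 + 3X + X^2) = 2 + 5X + 4X^2 + X^3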

July 28, 2008 Posted by | Algebra, Polynomials, Ring theory | 5 Comments

Open Gap Thread

I’m preparing for my drive tomorrow. I’m heading down to New Orleans for the final move-out. As such, I think I’ll throw this one out to discuss the (non)existence of the math gender gap. Also, feel free to weigh in about Title IX applying to math and science departments. The latter I heard about through Jesse Johnson, so here’s a tam-tip to him.

Talk amongst yourselves.

July 25, 2008 Posted by | Uncategorized | 17 Comments

The Inclusion-Exclusion Principle

In combinatorics we have a method of determining the cardinality of unions in terms of the sets in question and their intersections: the Inclusion-Exclusion Principle. Predictably enough, this formula is reflected in the subspaces of a vector space. We could argue directly in terms of bases, but it’s much more interesting to use the linear algebra we have at hand.

Let’s start with just two subspaces V and W of some larger vector space. We’ll never really need that space, so we don’t need to give it a name. The thing to remember is that V and W might have a nontrivial intersection — their sum may not be direct.

Now we consider the sum of the subspaces. Every vector in V+W is the sum of a vector in V and a vector in W. This tells us that there’s a linear map f_1:V\oplus W\rightarrow V+W defined by f_1(v,w)=v+w. Note carefully that this is linear as a function of the pair (v,w). It’s not bilinear — linear in each of v and w separately — which would mean bringing in the tensor product.

This function is always surjective, but it may fail to be injective. Specifically, if V and W have a nontrivial intersection then it gives us a nontrivial kernel. If x\in V\cap W then f_1(x,-x)=0. But rather than talk about elements, let’s use the intersection to parametrize the kernel! We’ve got another linear map f_2:V\cap W\rightarrow V\oplus W, defined by f_2(x)=(x,-x).

Now it’s clear that f_2 has a trivial kernel. On the other hand, its image is precisely the set of pairs that f_1 kills off. That means we’ve got an exact sequence:

\mathbf{0}\rightarrow V\cap W\xrightarrow{f_2}V\oplus W\xrightarrow{f_1}V+W\rightarrow\mathbf{0}

Taking the Euler characteristic we find that

\dim\left(V\cap W\right)-\dim\left(V\oplus W\right)+\dim\left(V+W\right)=0

Some simple juggling turns this into

\dim\left(V+W\right)=\dim\left(V\right)+\dim\left(W\right)-\dim\left(V\cap W\right)

which is the inclusion-exclusion principle for two subspaces.
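
Here’s the dimension count done numerically, as a sketch using numpy. The two subspaces of \mathbb{R}^5 below are my own choices (picked to have a two-dimensional intersection); \dim(V+W) is computed as the rank of the two bases stacked together, and \dim(V\cap W) as the nullity of the matrix representing f_1, just as in the exact sequence.

import numpy as np

# Columns are bases: V is spanned by e1, e2, e3; W by e2 + e3, e4, e1 + e2.
V = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [0, 0, 0],
              [0, 0, 0]], dtype=float)
W = np.array([[0, 0, 1],
              [1, 0, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=float)

dim_V, dim_W = np.linalg.matrix_rank(V), np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([V, W]))   # dim(V + W)
# dim of the intersection: the kernel of f_1(v, w) = v + w, i.e. the nullity
# of [V | W] acting on coefficient pairs (the columns above are independent).
dim_int = V.shape[1] + W.shape[1] - dim_sum

print(dim_V, dim_W, dim_int, dim_sum)                # 3 3 2 4
print(dim_sum == dim_V + dim_W - dim_int)            # True: inclusion-exclusion for two subspaces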

What about three subspaces U, V, and W? We start off again with

U\oplus V\oplus W\xrightarrow{f_1}U+V+W\rightarrow\mathbf{0}

defined by f_1(u,v,w)=u+v+w. Then the kernel contains triples like (0,x,-x), (-y,0,y), and (z,-z,0). We use the pairwise intersections to cover the kernel by the map

\left(V\cap W\right)\oplus\left(W\cap U\right)\oplus\left(U\cap V\right)\xrightarrow{f_2}U\oplus V\oplus W

defined by f_2(x,y,z)=(z-y,x-z,y-x). The image of f_2 is exactly the kernel of f_1, but this time it has a nontrivial kernel itself! Now we need another map

\mathbf{0}\rightarrow U\cap V\cap W\xrightarrow{f_3}\left(V\cap W\right)\oplus\left(W\cap U\right)\oplus\left(U\cap V\right)

defined by f_3(t)=(t,t,t). Now f_3 is injective, and its image is exactly the kernel of f_2. We stick these maps all together to have the long exact sequence

\begin{aligned}\mathbf{0}\rightarrow U\cap V\cap W\xrightarrow{f_3}\left(V\cap W\right)\oplus\left(W\cap U\right)\oplus\left(U\cap V\right)\\\xrightarrow{f_2}U\oplus V\oplus W\xrightarrow{f_1}U+V+W\rightarrow\mathbf{0}\end{aligned}

Taking the Euler characteristic and juggling again we find

\begin{aligned}\dim\left(U+V+W\right)=\dim\left(U\right)+\dim\left(V\right)+\dim\left(W\right)\\-\dim\left(U\cap V\right)-\dim\left(U\cap W\right)-\dim\left(V\cap W\right)\\+\dim\left(U\cap V\cap W\right)\end{aligned}

We include the dimensions of the individual subspaces, exclude the dimensions of the pairwise intersections, and include back in the dimension of the triple intersection.

For larger numbers of subspaces we can construct longer and longer exact sequences. The kth term consists of the direct sum of the intersections of the subspaces, taken k at a time. The kth map consists of including each k-fold intersection into each of the (k-1)-fold intersections it lies in, with positive and negative signs judiciously sprinkled throughout to make the sequence exact.

The result is that the dimension of a sum of subspaces is the alternating sum of the dimensions of the k-fold intersections. First add the dimension of each subspace, then subtract the dimension of each pairwise intersection, then add back in the dimension of each triple intersection, and so on.

From here it’s the work of a moment to derive the combinatorial inclusion-exclusion principle. Given a collection of sets, just construct the free vector space on each of them. These free vector spaces will have nontrivial intersections corresponding to the nonempty intersections of the sets, and we’ve got canonical basis elements floating around to work with. Then all these dimensions we’re talking about are cardinalities of various subsets of the sets we started with, and the combinatorial inclusion-exclusion principle follows. Of course, as I said before we could have proved it from the other side, but that would require a lot of messy hands-on work with bases and such.

The upshot is that the combinatorial inclusion-exclusion principle is equivalent to the statement that exact sequences of vector spaces have trivial Euler characteristic! This little formula that we teach in primary or secondary school turns out to be intimately connected with one of the fundamental tools of homological algebra and the abstract approach to linear algebra. Neat!

July 24, 2008 Posted by | Algebra, Linear Algebra | 14 Comments

The Euler Characteristic of an Exact Sequence Vanishes

Naturally, one kind of linear map we’re really interested in is an isomorphism. Such a map has trivial kernel and trivial cokernel, and so its index is definitely zero. If it weren’t clear enough already, this shows that isomorphic vector spaces have the same dimension!

But remember that in abelian categories we’ve got diagrams to chase and exact sequences to play with. And these have something to say about the matters at hand.

First, remember that a linear map whose kernel vanishes looks like this in terms of exact sequences:

\mathbf{0}\rightarrow V\xrightarrow{f}W

And one whose cokernel vanishes looks like this:

V\xrightarrow{f}W\rightarrow\mathbf{0}

So an isomorphism is just an exact sequence like this:

\mathbf{0}\rightarrow V\xrightarrow{f}W\rightarrow\mathbf{0}

And then we have the equation

-\dim(V)+\dim(W)=0

Yes, I’m writing the negative of the index here, but there’s a good reason for it.

Now what if we have a segment of an exact sequence:

...\rightarrow U\xrightarrow{f}V\xrightarrow{g}W\rightarrow...

Considering the map f allows us to break up V as \mathrm{Im}(f)\oplus\mathrm{Cok}(f) (since short exact sequences split). On the other hand, considering the map g allows us to break up V as \mathrm{Ker}(g)\oplus\mathrm{Coim}(g). Exactness tells us that \mathrm{Im}(f)=\mathrm{Ker}(g), which gives us the isomorphism V\cong\mathrm{Im}(f)\oplus\mathrm{Coim}(g).

Now the rank-nullity theorem says that \mathrm{Im}(f)\cong\mathrm{Coim}(f), and similarly for all other linear maps. So we get V\cong\mathrm{Coim}(f)\oplus\mathrm{Im}(g) — which expresses V as the direct sum of one subspace of U and one of W. And each of those vector spaces has another part to hand off to the vector space on its other side, and so on!

What does this mean? It says that if we look at every other term of an exact sequence and take their direct sum, the result is the same whether we look at the odd or the even terms. More explicitly, let’s say we have a long exact sequence

\mathbf{0}\rightarrow V_n\xrightarrow{f_n}V_{n-1}\xrightarrow{f_{n-1}}...\xrightarrow{f_2}V_1\xrightarrow{f_1}V_0\rightarrow\mathbf{0}

Then we can decompose each term as either V_k\cong\mathrm{Im}(f_{k+1})\oplus\mathrm{Coim}(f_k) or V_k\cong\mathrm{Coim}(f_{k+1})\oplus\mathrm{Im}(f_k) — one for the even terms and the other for the odd terms. Then direct-summing everything up we have an isomorphism

\displaystyle\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}V_k\cong\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}V_k

which tells us that

\displaystyle\dim\left(\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}V_k\right)-\dim\left(\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}V_k\right)=0

But since direct sums add dimensions this means

\displaystyle\sum\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}\dim\left(V_k\right)-\sum\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}\dim\left(V_k\right)=0

And now we can just combine these sums:

\displaystyle\sum\limits_{k=0}^n(-1)^k\dim\left(V_k\right)=0

Which generalizes the formula we wrote above in the case of an isomorphism. This alternating sum we call the “Euler characteristic” of a sequence, and we’ll be seeing a lot more of that sort of thing in the future. But here the major result is that for exact sequences we always get the value zero.
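
As a quick illustration, any linear map A gives a four-term exact sequence \mathbf{0}\rightarrow\mathrm{Ker}(A)\rightarrow V\rightarrow W\rightarrow\mathrm{Cok}(A)\rightarrow\mathbf{0}, and the alternating sum of dimensions had better vanish. Here’s a numpy sketch with a matrix I picked arbitrarily:

import numpy as np

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 1., 0.]])   # a map from R^4 to R^3 with rank 2

rank = np.linalg.matrix_rank(A)
dim_ker = A.shape[1] - rank        # dimension of the kernel
dim_cok = A.shape[0] - rank        # dimension of the cokernel

# Euler characteristic of 0 -> Ker(A) -> R^4 -> R^3 -> Cok(A) -> 0:
chi = dim_ker - A.shape[1] + A.shape[0] - dim_cok
print(dim_ker, dim_cok, chi)       # 2 1 0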

In fact, this amounts to a far-reaching generalization of the rank-nullity theorem. And that theorem, of course, is essential to the proof. Yet again we see this pattern of “bootstrapping” our way from a special case to a larger theorem. Despite some mathematicians being enamored of reductio ad absurdum, this induction from special to general has to be one of the most useful tools we keep running across.

July 23, 2008 Posted by | Algebra, Linear Algebra | 2 Comments

The Index of a Linear Map

Today I want to talk about the index of a linear map A:V\rightarrow W, which I’ll motivate in terms of a linear system Ax=b for some linear map A:\mathbb{F}^m\rightarrow\mathbb{F}^n.

One important quantity is the dimension of the kernel of A. In terms of the linear system, this is the dimension of the solution space of the associated homogeneous system Ax=0. If there are any solutions of the system under consideration, they will form an affine space of this dimension.

But the system is inhomogeneous in general, and as such it might not have any solutions. Since every short exact sequence of vector spaces splits we can write \mathbb{F}^n=\mathrm{Im}(A)\oplus U. Then the vector b will have some component in the image of A, and some component in the complementary subspace U. Note that U is not canonical here. I’m just asserting that some such subspace exists so we can make this decomposition.

Now the condition that the system have any solutions is that the component of b in U vanishes. But we can pick some basis for U and think of each component of b with respect to each of those basis elements vanishing. That is, we must apply a number of linear conditions equal to the dimension of U in order to know that the system has any solutions at all. And once it does, its solutions form an affine space of dimension \dim(\mathrm{Ker}(A)).

But what happens if we quotient out \mathbb{F}^n by \mathrm{Im}(A)? We get the cokernel \mathrm{Cok}(A), which must then be isomorphic to U! So the number of conditions we need to apply to b is the dimension of the cokernel.

Now I’ll define the index \mathrm{Ind}(A) of the linear map A to be the difference between the dimension of the kernel and the dimension of the cokernel. In the more general case, if one of these dimensions is infinite we may get an infinite index. But often we’ll restrict to the case of “finite index” operators, where the difference works out to be a well-defined finite number. Of course, when dealing with linear systems it’s guaranteed to be finite, but there are a number of results out there for the finite index case. In fact, this is pretty much what a “Fredholm operator” is, if we ever get to that.

Anyhow, we’ve got this definition:

\mathrm{Ind}(A)=\dim(\mathrm{Ker}(A))-\dim(\mathrm{Cok}(A))

Now let’s add and subtract the dimension of the image of A:

\mathrm{Ind}(A)=\left(\dim(\mathrm{Ker}(A))+\dim(\mathrm{Im}(A))\right)-\left(\dim(\mathrm{Cok}(A))+\dim(\mathrm{Im}(A))\right)

Clearly the dimension of the image and the dimension of the cokernel add up to the dimension of the target space. But notice also that the rank-nullity theorem tells us that the dimension of the kernel and the dimension of the image add up to the dimension of the source space! That is, we have the equality

\mathrm{Ind}(A)=\dim(V)-\dim(W)

What happened here? We started with an analytic definition in terms of describing solutions to a system of equations, and we ended up with a geometric formula in terms of the dimensions of two vector spaces.

What’s more, the index doesn’t really depend much on the particulars of A at all. Any two linear maps between the same pair of vector spaces will have the same index! And this gives us a simple tradeoff: for every dimension we add to the space of solutions, we have to add a linear condition to restrict the vectors b for which there are any solutions at all.
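
To see how little the index depends on the particular map, here’s a small numpy sketch comparing two different maps from \mathbb{R}^4 to \mathbb{R}^3; both matrices are arbitrary choices of mine, one with a bigger kernel than the other.

import numpy as np

def index(A):
    # dim Ker(A) - dim Cok(A), both computed from the rank
    rank = np.linalg.matrix_rank(A)
    return (A.shape[1] - rank) - (A.shape[0] - rank)

A = np.array([[1., 0., 2., -1.],
              [0., 1., 1.,  3.],
              [1., 1., 3.,  2.]])   # rank 2: the third row is the sum of the first two
B = np.array([[2., 0., 0., 0.],
              [0., 5., 0., 0.],
              [0., 0., 7., 0.]])    # rank 3

print(index(A), index(B))           # both equal 4 - 3 = 1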

Alternately, what happens when we add a new equation to a system? With a new equation the dimension of the target space goes up, and so the index goes down. One very common way for this to occur is for the dimension of the solution space to drop. This gives rise to our intuition that each new equation reduces the number of independent solutions by one, until we have exactly as many equations as variables.

July 22, 2008 Posted by | Algebra, Linear Algebra | 3 Comments

The Sum of Subspaces

We know what the direct sum of two vector spaces is. That we define abstractly and without reference to the internal structure of each space. It’s sort of like the disjoint union of sets, and in fact the basis for a direct sum is the disjoint union of bases for the summands.

Let’s use universal properties to prove this! We consider the direct sum V\oplus W, and we have a basis A for V and a basis B for W. But remember that the whole point of a basis is that vector spaces are free modules.

That is, there is a forgetful functor from \mathbf{Vec}(\mathbb{F}) to \mathbf{Set}, sending a vector space to its underlying set. This functor has a left adjoint which assigns to any set S the vector space \mathbb{F}\left[S\right] of formal linear combinations of elements of S. This is the free vector space on the basis S, and when we choose the basis A for a vector space V we are actually choosing an isomorphism V\cong\mathbb{F}\left[A\right].

Okay. So we’re really considering the direct sum \mathbb{F}\left[A\right]\oplus\mathbb{F}\left[B\right], and we’re asserting that it is isomorphic to \mathbb{F}\left[A\uplus B\right]. But we just said that constructing a free vector space is a functor, and this functor has a right adjoint. And we know that any functor that has a right adjoint preserves colimits! The disjoint union of sets is a coproduct, and the direct sum of vector spaces is a biproduct, which means it’s also a coproduct. Thus we have our isomorphism. Neat!

But not all unions of sets are disjoint. Sometimes the sets share elements, and the easiest way for this to happen is for them to both be subsets of some larger set. Then the union of the two subsets has to take this overlap into account. And since subspaces of a larger vector space may intersect nontrivially, their sum as subspaces has to take this into account.

First, here’s a definition in terms of the vectors themselves: given two subspaces V and W of a larger vector space U, the sum V+W will be the subspace consisting of all vectors that can be written in the form v+w for v\in V and w\in W. Notice that there’s no uniqueness requirement here, and that’s because if V and W overlap in anything other than the trivial subspace \left\{0\right\} we can add a vector in that overlap to v and subtract it from w, getting a different decomposition. This is precisely the situation a direct sum avoids.

Alternatively, let’s consider the collection of all subspaces of U. This is a partially-ordered set, where the order is given by containment of the underlying sets. It’s sort of like the power set of a set, except that only those subsets of U which are subspaces get included.

Now it turns out that, like the power set, this poset is actually a lattice. The meet is the intersection of subspaces, but the join isn’t their union. Indeed, the union of subspaces usually isn’t a subspace at all! What do we use instead? The sum, of course! It’s easiest to verify this with the algebraic definition of a lattice.

The lattice does have a top element (the whole space U) and a bottom element (the trivial subspace \left\{0\right\}). It’s even modular! Indeed, let X, Y, and Z be subspaces with X\subseteq Z. Then on the one hand we consider X+(Y\cap Z), which is the collection of all vectors u=x+y, where x\in X, y\in Y, and y\in Z. On the other hand we consider (X+Y)\cap Z, which is the collection of all vectors u=x+y, where x\in X, y\in Y, and u\in Z. I’ll leave it to you to show how these two conditions are equivalent.

Unfortunately, the lattice isn’t distributive. I could work this out directly, but it’s easier to just notice that complements aren’t unique. Just consider three subspaces of \mathbb{F}^2: X has all vectors of the form \begin{pmatrix}x\\{0}\end{pmatrix}, Y has all of the form \begin{pmatrix}y\\y\end{pmatrix}, and Z has all of the form \begin{pmatrix}0\\z\end{pmatrix}. Then X+Y=\mathbb{F}^2=X+Z, and X\cap Y=\left\{0\right\}=X\cap Z, but Y\neq Z.
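
Here’s that counterexample checked numerically, as a numpy sketch; the rank-based helpers are just my shorthand for the dimensions of sums and intersections of column spans.

import numpy as np

def dim_sum(A, B):
    # dimension of the sum of the two column spans
    return np.linalg.matrix_rank(np.hstack([A, B]))

def dim_int(A, B):
    # dimension of the intersection, using the two-subspace formula from before
    return np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B) - dim_sum(A, B)

X = np.array([[1.], [0.]])   # all vectors (x, 0)
Y = np.array([[1.], [1.]])   # all vectors (y, y)
Z = np.array([[0.], [1.]])   # all vectors (0, z)

print(dim_sum(X, Y), dim_sum(X, Z))   # 2 2 -- both sums are all of F^2
print(dim_int(X, Y), dim_int(X, Z))   # 0 0 -- both intersections are trivial
# So Y and Z are two different complements of X: the lattice can't be distributive.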

This is all well and good, but it’s starting to encroach on Todd’s turf, so I’ll back off a bit. The important bit here is that the sum behaves like a least upper bound.

In categorical terms, this means that it’s a coproduct in the lattice of subspaces (considered as a category). Don’t get confused here! Direct sums are coproducts in the category \mathbf{Vec}(\mathbb{F}), while sums are coproducts in the category (lattice) of subspaces of a given vector space. These are completely different categories, so don’t go confusing coproducts in one with those in the other.

In this case, all we mean by saying this is a categorical coproduct is that we have a description of the sum of two subspaces which doesn’t refer to the elements of the subspaces at all. The sum V+W is the smallest subspace of U which contains both V and W. It is the “smallest” in the sense that any other subspace containing both V and W must contain V+W. This description from the outside of the subspaces will be useful when we don’t want to get our hands dirty with actual vectors.

July 21, 2008 Posted by | Algebra, Category theory, Linear Algebra | 6 Comments

Unsolvable Inhomogeneous Systems

We know that when an inhomogeneous system has a solution, it has a whole family of them. Given a particular solution, it defines a coset of the subspace of the solutions to the associated homogeneous system. And that subspace is the kernel of a certain linear map.

But must there always be a particular solution to begin with? Clearly not. When we first talked about linear systems we mentioned the example

3x^1+4x^2=12
x^1-x^2=11
x^1+x^2=10

In our matrix notation, this reads

\displaystyle\begin{pmatrix}3&4\\1&-1\\1&1\end{pmatrix}\begin{pmatrix}x^1\\x^2\end{pmatrix}=\begin{pmatrix}12\\11\\10\end{pmatrix}

or Ax=b in purely algebraic notation.

We saw then that this system has no solutions at all. What’s the problem? Well, we’ve got a linear map A:\mathbb{F}^2\rightarrow\mathbb{F}^3. The rank-nullity theorem tells us that the dimension of the image (the rank) plus the dimension of the kernel (the nullity) must equal the dimension of the source. But here this dimension is 2, and so the rank can be at most 2, which means there must be some vectors b\in\mathbb{F}^3 which can’t be written as b=Ax no matter what vector x we pick. And the vector in the example is just such a vector outside the image of A.
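
One way to make “b lies in the image of A” concrete: b is in the image exactly when adjoining it as an extra column doesn’t raise the rank. Here’s that test for the system above, as a numpy sketch:

import numpy as np

A = np.array([[3.,  4.],
              [1., -1.],
              [1.,  1.]])
b = np.array([[12.], [11.], [10.]])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))

print(rank_A, rank_Ab)    # 2 3: adjoining b raises the rank,
print(rank_A == rank_Ab)  # False -- so b is outside the image and Ax = b has no solutions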

The upshot is that we can only solve the system Ax=b if the vector b lies in the image of the linear map A, and it might be less than obvious what vectors satisfy this requirement. Notice that this is more complicated than the old situation for single equations of single variables. In that case, the target only has one dimension, and the linear transformation “multiply by the number A” only misses this dimension if A=0, which is easy to recognize.

July 18, 2008 Posted by | Algebra, Linear Algebra | 3 Comments