## Properties of Complex Numbers

Today I’ll collect a few basic properties of complex numbers.

First off, they form a vector space over the reals. We constructed them as an algebra — the quotient of the algebra of polynomials by a certain ideal — and every algebra is a vector space. So what can we say about them as a vector space? The easiest fact is that this space is two-dimensional, and it’s got a particularly useful basis.

To see this, remember that we have a basis for the algebra of polynomials, which is given by the powers of the variable. So here, when we throw in the formal element $i$, its powers at least span the ring $\mathbb{R}[i]$. But we have a relation, and that cuts things down a bit. Specifically, the element $i^2$ is the same as the element $-1$.

Given a polynomial in the “variable” $i$, we can write it as

$$c_0 + c_1 i + c_2 i^2 + c_3 i^3 + \cdots + c_n i^n$$

We can peel off the constant and linear terms, and then pull out a factor of $i^2$:

$$c_0 + c_1 i + i^2 \left( c_2 + c_3 i + \cdots + c_n i^{n-2} \right)$$

Now this factor of $i^2$ can be replaced by $-1$, which drops the overall degree. We can continue like this, eventually rewriting any term involving higher powers of $i$ using only constant and linear terms. That is, any complex number can be written as $a + bi$, where $a$ and $b$ are real constants. Further, this representation is unique. This establishes the set $\{1, i\}$ as a basis for $\mathbb{C}$ as a vector space over $\mathbb{R}$.
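If it helps to see this reduction in action, here’s a quick Python sketch (the function name is mine, and the pair $(a, b)$ stands for $a + bi$). It collapses any polynomial in $i$ down to constant and linear terms, using the four-fold cycle of powers $1, i, -1, -i$:

```python
# Reduce a polynomial c_0 + c_1 i + c_2 i^2 + ... to the form a + b i
# by repeatedly replacing i^2 with -1.  The powers of i cycle with
# period four: 1, i, -1, -i, 1, ...

def reduce_to_a_plus_bi(coeffs):
    """coeffs[k] is the real coefficient of i^k; returns the pair (a, b)."""
    a, b = 0.0, 0.0
    for k, c in enumerate(coeffs):
        if k % 4 == 0:      # i^k = 1
            a += c
        elif k % 4 == 1:    # i^k = i
            b += c
        elif k % 4 == 2:    # i^k = -1
            a -= c
        else:               # i^k = -i
            b -= c
    return (a, b)

# 1 + 2i + 3i^2 + 4i^3 = (1 - 3) + (2 - 4)i = -2 - 2i
print(reduce_to_a_plus_bi([1, 2, 3, 4]))  # (-2.0, -2.0)
```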

Now the additive parts of the field structure are clear from the vector space structure here. We can write the sum of two complex numbers $a_1 + b_1 i$ and $a_2 + b_2 i$ simply by adding the components: $(a_1 + a_2) + (b_1 + b_2) i$. We get the negative of a complex number by taking the negatives of the components.

We can also write out products pretty simply, since we know the product of pairs of basis elements. The only one that doesn’t involve the unit of the algebra is $i \cdot i = -1$. So in terms of components we can write out the product of the complex numbers above as $(a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1) i$.
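Here’s a minimal sketch of these component formulas in Python (the names and the pair representation are my own), cross-checked against Python’s built-in complex type:

```python
# Complex arithmetic on component pairs (a, b), standing for a + b i.

def add(z, w):
    (a1, b1), (a2, b2) = z, w
    return (a1 + a2, b1 + b2)

def mul(z, w):
    # (a1 + b1 i)(a2 + b2 i) = (a1 a2 - b1 b2) + (a1 b2 + a2 b1) i,
    # using the single nontrivial basis product i * i = -1.
    (a1, b1), (a2, b2) = z, w
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1)

z, w = (3.0, 4.0), (1.0, -2.0)
print(add(z, w))  # (4.0, 2.0)
print(mul(z, w))  # (3*1 - 4*(-2), 3*(-2) + 1*4) = (11.0, -2.0)

# Cross-check against the built-in complex type.
assert complex(*mul(z, w)) == complex(*z) * complex(*w)
```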

Notice here that the field of real numbers sits inside that of complex numbers, as the scalar multiples of the complex unit. This is characteristic of algebras, but it’s worth pointing out here. Any real number $a$ can be considered as the complex number $a + 0i$. This inclusion preserves all the field structure, but it ignores the order on the real numbers. A small price to pay, but an important one in certain ways.

We also mentioned the symmetry between $i$ and $-i$. Either one is just as valid a square root of $-1$ as the other, so if we go through consistently replacing $i$ with $-i$, and $-i$ with $i$, we can’t tell the difference. This leads to an automorphism of fields called “complex conjugation”. It sends the complex number $a + bi$ to its “conjugate” $a - bi$. This preserves all the field structure — additive and multiplicative — and it fixes the real numbers sitting inside the complex numbers.
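We can check these automorphism properties concretely. A small Python sketch (names and pair representation mine), verifying that conjugation respects sums and products and fixes the reals:

```python
# Complex conjugation on component pairs: (a, b) stands for a + b i,
# and conjugation sends a + b i to a - b i.

def conj(z):
    a, b = z
    return (a, -b)

def add(z, w):
    (a1, b1), (a2, b2) = z, w
    return (a1 + a2, b1 + b2)

def mul(z, w):
    (a1, b1), (a2, b2) = z, w
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1)

z, w = (2.0, 5.0), (-1.0, 3.0)

# Conjugation preserves the additive and multiplicative structure ...
assert conj(add(z, w)) == add(conj(z), conj(w))
assert conj(mul(z, w)) == mul(conj(z), conj(w))

# ... and it fixes the real numbers sitting inside the complex numbers.
assert conj((7.0, 0.0)) == (7.0, 0.0)
print("conjugation is a field automorphism fixing the reals")
```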

Studying this automorphism, and similar structures of other field extensions, forms the core of what algebraists call “Galois theory”. I’m not going there now, but it’s a huge part of modern mathematics, and its study is ultimately the root of all of our abstract algebraic techniques. The first groups were automorphism groups shuffling around the roots of polynomials.

## The Complex Numbers

Yesterday we defined a field to be algebraically closed if every polynomial over the field has exactly as many roots (counting multiplicities) as its degree indicates. But we don’t yet know a single example of an algebraically closed field. Today we’ll (partially) remedy that problem.

First, remember the example we used for a polynomial with no roots over the real numbers. That is $x^2 + 1$. The problem is that we have no field element whose square is $-1$. So let’s just postulate one! This, of course, has the same advantages as those of theft over honest toil. We write our new element as $i$ for “imaginary”, and throw it in with the rest of the real numbers $\mathbb{R}$.

Okay, just like when we threw in the variable $x$ as a new element, we can build up sums and products involving real numbers and this new element $i$. But there’s one big difference here: we have a relation that $i$ must satisfy. When we use the evaluation map we must find $i^2 + 1 = 0$. And, of course, any polynomial which includes $x^2 + 1$ as a factor must evaluate to $0$ as well. But this is telling us that the kernel of the evaluation homomorphism for $i$ contains the principal ideal $(x^2 + 1)$.

Can it contain anything else? If $p$ is a polynomial in the kernel, but is *not* divisible by $x^2 + 1$, then Euclid’s algorithm gives us a greatest common divisor of $p$ and $x^2 + 1$, which is a linear combination of these two, and which must have degree either $0$ or $1$. In the former case, the kernel would contain a nonzero constant, and the evaluation map would have to send everything — even the constant polynomials — to zero. In the latter case, we’d have a linear factor of $x^2 + 1$, which would give it a real root. Clearly neither of these situations can occur, so the kernel of the evaluation homomorphism at $i$ is exactly the principal ideal $(x^2 + 1)$.

Now the first isomorphism theorem for rings tells us that we can impose our relation by taking the quotient ring $\mathbb{R}[x]/(x^2 + 1)$. But what we just discussed above further goes to show that $(x^2 + 1)$ is a maximal ideal, and the quotient of a ring by a maximal ideal is a field! Thus when we take the real numbers and adjoin a square root of $-1$ to get a ring we might call $\mathbb{R}[i]$, the result is a field. This is the field of “complex numbers”, which is more usually written $\mathbb{C}$.
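To see the quotient construction doing its work, here’s a Python sketch (names mine) that multiplies two polynomials and then reduces the product modulo $x^2 + 1$; the answer matches ordinary complex multiplication:

```python
# Model R[x]/(x^2 + 1) concretely: multiply two polynomials and then
# reduce the product modulo x^2 + 1.  Coefficient lists run from the
# constant term up.

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def reduce_mod_x2_plus_1(p):
    """Remainder of p modulo x^2 + 1, i.e. substitute x^2 = -1."""
    p = list(p) + [0.0, 0.0]          # pad so p[0] and p[1] always exist
    for k in range(len(p) - 1, 1, -1):
        p[k - 2] -= p[k]              # x^k = x^2 * x^(k-2) = -x^(k-2)
        p[k] = 0.0
    return (p[0], p[1])

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2, and 8x^2 reduces to -8,
# so in the quotient the product is -5 + 10x.
print(reduce_mod_x2_plus_1(poly_mul([1.0, 2.0], [3.0, 4.0])))  # (-5.0, 10.0)

# That's exactly the complex product (1 + 2i)(3 + 4i) = -5 + 10i.
assert complex(-5.0, 10.0) == (1 + 2j) * (3 + 4j)
```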

Now we’ve gone through a lot of work just to add one little extra element to our field, but it turns out this is all we need. Luckily enough, the complex numbers are already algebraically closed! This is very much not the case if we were to try to algebraically close other fields (like the rational numbers). Unfortunately, the proof really is essentially analytic. It seems to be a completely algebraic statement, but remember all the messy analysis and topology that went into defining the real numbers.

Don’t worry, though. We’ll come back and prove this fact once we’ve got a bit more analysis under our belts. We’ll also talk a lot more about how to think about complex numbers. But for now all we need to know is that they’re the “algebraic closure” of the real numbers, we get them by adding a square root of $-1$ that we call $i$, and we can use them as an example of an algebraically closed field.

One thing we can point out now, though, is the inherent duality of our situation. You see, we didn’t just add one square root of $-1$. Indeed, once we have complex numbers to work with we can factor $x^2 + 1$ as $(x - i)(x + i)$ (test this by multiplying it out and imposing the relation $i^2 = -1$). Then we have another root: $-i$. This is just as much a square root of $-1$ as $i$ was, and anything we can do with $i$ we can do with $-i$. That is, there’s a symmetry in play that exchanges $i$ and $-i$. We can pick one and work with it, but we must keep in mind that whenever we do we’re making a non-canonical choice.
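We can run the suggested test mechanically. A tiny Python sketch (names mine) that multiplies out $(x - i)(x + i)$ using complex coefficients:

```python
# Multiply out (x - i)(x + i) with complex coefficients and check that
# we land back on x^2 + 1.  Coefficient lists run from the constant
# term up.

def poly_mul(p, q):
    out = [0j] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x_minus_i = [-1j, 1]   # the polynomial -i + x
x_plus_i  = [ 1j, 1]   # the polynomial  i + x

# The cross terms i*x and -i*x cancel, and (-i) * i = 1,
# leaving the coefficients of 1 + x^2.
print(poly_mul(x_minus_i, x_plus_i))
```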

## Polynomials with Too Few Roots

Okay, we saw that roots of polynomials exactly correspond to linear factors, and that a polynomial can have at most as many roots as its degree. In fact, there’s an expectation that a polynomial of degree $n$ will have *exactly* $n$ roots. Today I want to talk about two ways this can fail.

First off, let’s work over the real numbers and consider the polynomial $p = x^2 - 2x + 1$. A root of $p$ will be a real number $a$ so that $a^2 - 2a + 1 = 0$, but a little playing around will show us that $a = 1$ is the only possible solution. The degree of the polynomial is two, so we expect two roots, but we only find one. What went wrong?

Well, we know from the fact that $1$ is a root that $(x - 1)$ is a factor of $p$. Our division algorithm shows us that we can write $p = (x - 1)(x - 1)$. The factor that gives us the root shows up twice! But since we’ve already counted the root once, the second occurrence doesn’t do anything.

To remedy this, let’s define the “multiplicity” of a root $a$ to be the number of times we can evenly divide out a factor of $(x - a)$. So in our example the root $1$ has multiplicity $2$. When we count the roots along with their multiplicities, we get back exactly the degree.
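The “divide out a factor as many times as it goes evenly” definition translates directly into code. A Python sketch (names mine) that counts multiplicity by synthetic division by $(x - a)$:

```python
# Count the multiplicity of a root a by repeatedly dividing out (x - a)
# with synthetic division.  Coefficient lists run from the constant
# term up.

def synthetic_div(p, a):
    """Divide p by (x - a); return (quotient, remainder)."""
    q = [0.0] * (len(p) - 1)
    carry = 0.0
    for k in range(len(p) - 1, 0, -1):
        carry = p[k] + a * carry
        q[k - 1] = carry
    remainder = p[0] + a * carry
    return q, remainder

def multiplicity(p, a):
    """Number of times (x - a) evenly divides p."""
    count = 0
    while len(p) > 1:
        q, r = synthetic_div(p, a)
        if r != 0:
            break
        p, count = q, count + 1
    return count

# p = x^2 - 2x + 1 = (x - 1)^2: the root 1 has multiplicity 2.
print(multiplicity([1.0, -2.0, 1.0], 1.0))  # 2
```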

So do multiplicities handle all our problems? Unfortunately, no. An example here over the real numbers is the polynomial $x^2 + 1$. A root of this polynomial would be a real number $a$ with $a^2 = -1$. But since the square of any real number is nonnegative, this can’t be satisfied. So there exist polynomials with fewer roots than their degrees would indicate, some with no roots at all!

Now, some fields are well-behaved. We say that a field is “algebraically closed” if every polynomial of positive degree over that field has a root $a$. In that case we can divide out by $(x - a)$ to get a polynomial of one degree less, which must again have a root by algebraic closure. We can keep going until we write the polynomial as the product of a bunch of linear factors — the number of factors is the same as the degree of the polynomial — with one field element left over once we get down to degree zero. Thus over an algebraically closed field every polynomial has exactly as many roots as its degree indicates, so long as you count them with their multiplicities!

## Euclid’s Algorithm for Polynomials

Consider the division algorithm. It looks a lot like division for integers, as it should. Specifically, it’s got all the features we need for the setup of Euclid’s Algorithm, and we have an analogue here!

Just like we did back then, we start with two polynomials $a$ and $b$, and we divide them to find $a = q_1 b + r_1$. Then if $r_1 \neq 0$ we turn around and divide to find $b = q_2 r_1 + r_2$. We keep going until we end up with a remainder of $0$, at which point we can’t divide anymore, and so we stop. And we must eventually stop, because the degree of the remainders keeps going down.

At the end, the last nonzero remainder $d$ will be the “greatest common divisor” of $a$ and $b$. That is, it’s a polynomial that divides both $a$ and $b$ (leaving no remainder), and any other such common divisor must itself evenly divide $d$.

Euclid’s algorithm also gives us a way of writing out the greatest common divisor as a linear combination $d = pa + qb$ for some polynomials $p$ and $q$. Just like for the integers, this marks the greatest common divisor as the nonzero linear combination of least degree.
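The whole procedure, including the linear combination, can be sketched in Python (all names mine; I use exact rational coefficients to avoid rounding trouble):

```python
# Euclid's algorithm for polynomials, in the extended form that also
# produces the linear combination d = p*a + q*b.  Coefficient lists run
# from the constant term up; exact rationals keep the arithmetic clean.
from fractions import Fraction

def trim(p):
    """Drop zero leading coefficients (but keep at least one entry)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, c in enumerate(p):
        for j, e in enumerate(q):
            out[i + j] += c * e
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return trim([x - y for x, y in zip(p, q)])

def poly_divmod(a, b):
    """Quotient and remainder of a on division by b (b nonzero)."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and a != [Fraction(0)]:
        shift = len(a) - len(b)
        coeff = a[-1] / b[-1]
        q[shift] = coeff
        for k in range(len(b)):
            a[shift + k] -= coeff * b[k]
        a = trim(a)                  # the leading term cancels exactly
    return trim(q), a

def extended_euclid(a, b):
    """Return (d, p, q) with d a gcd of a and b and d = p*a + q*b."""
    r0, p0, q0 = [Fraction(c) for c in a], [Fraction(1)], [Fraction(0)]
    r1, p1, q1 = [Fraction(c) for c in b], [Fraction(0)], [Fraction(1)]
    while trim(r1) != [Fraction(0)]:
        quot, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        p0, p1 = p1, poly_sub(p0, poly_mul(quot, p1))
        q0, q1 = q1, poly_sub(q0, poly_mul(quot, q1))
    return r0, p0, q0

# gcd of x^2 - 1 and x^2 - 3x + 2: both share the factor x - 1, and the
# algorithm returns a constant multiple of it (here 3x - 3).
d, p, q = extended_euclid([-1, 0, 1], [2, -3, 1])
# Bezout check: d - p*a equals q*b.
assert poly_sub(d, poly_mul(p, [-1, 0, 1])) == poly_mul(q, [2, -3, 1])
```

The only difference from the integer version is that “size” is measured by degree instead of absolute value, which is exactly the Euclidean-domain picture below.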

Now how to show all this? It’s easy: just go back to the proof of Euclid’s algorithm for integers! Everything works the same way again. We just have to stick in references to the degree where appropriate to measure the “size” of a polynomial.

In fact, if we can equip any integral domain $R$ with a degree function $d \colon R^\times \to \mathbb{N}$ (where $R^\times$ is the set of nonzero ring elements) so that we have a division algorithm like those for integers and polynomials, then we’ve got the setup for Euclid’s algorithm. In this case, we say we have a “Euclidean domain”. So the point here is that $F[x]$ is Euclidean for any field $F$, and so it has a form of Euclid’s algorithm, and all that follows from it.

Notice that in the case of the integers we had some ambiguity about common divisors, since $d$ and $-d$ would both work equally well. The point here is that they differ by multiplication by a unit, and so each divides the other. This sort of thing happens in the divisibility preorder for any ring. For polynomials, the units are just the nonzero elements of the base field, considered as constant (degree zero) polynomials. Thus the greatest common divisor is only determined up to a constant multiple, but we can easily ignore that little ambiguity. The “greatest” just has to be read as referring to the degree of the polynomial.

## Polynomial Division

Polynomials generally form a ring, and not a field. However, the integers also form a ring, and we have a notion of division with remainder from arithmetic. For polynomials we have a similar form of division, which we’ll discuss now. Instead of using the absolute value of an integer to measure its size, we’ll use the degree of a polynomial to measure its size.

Given two polynomials $a$ and $b$, with $b \neq 0$, we can find two more polynomials — the quotient $q$ and the remainder $r$ — so that $a = qb + r$, and $\deg(r) < \deg(b)$. These polynomials are unique. The fact that we can find them is called the “division algorithm”, even though it’s a theorem. There are various algorithms we can use to establish this fact, but one of the most straightforward is polynomial long division, which we were all taught in high school algebra.

We start by writing $a = 0 \cdot b + a$. If the degree of $a$ is less than that of $b$, we’re done. Otherwise let’s write $m = \deg(a)$ and $n = \deg(b)$. The polynomials lead off with terms $a_m x^m$ and $b_n x^n$. If we multiply $b$ by $\frac{a_m}{b_n} x^{m - n}$ its leading term becomes exactly the same as that of $a$, so when we subtract the top degree terms cancel out. That is, we write $a = \frac{a_m}{b_n} x^{m - n} b + r_1$, and we see that $r_1$ has degree less than that of $a$.

Now if the degree of $r_1$ is still greater than or equal to that of $b$ we can keep going. Multiply $b$ by a term so that when we subtract the product from $r_1$ we’ll kill off the highest remaining term and drop its degree. Since the degree of the remainder keeps going down, it must eventually drop below that of $b$. Even if the degree of $b$ is zero, we can get the remainder down to the zero polynomial $0$, which we decided to give the degree $-\infty$. This gives us our $q$ and $r$.
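The long-division procedure above can be sketched in Python (names mine; exact rational coefficients avoid rounding trouble):

```python
# Polynomial long division: keep cancelling the leading term of the
# dividend against a multiple of the divisor.  Coefficient lists run
# from the constant term up.
from fractions import Fraction

def trim(p):
    """Drop zero leading coefficients (but keep at least one entry)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def poly_divmod(a, b):
    """Return (q, r) with a = q*b + r and deg(r) < deg(b)."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and a != [Fraction(0)]:
        shift = len(a) - len(b)      # m - n, the gap in degrees
        coeff = a[-1] / b[-1]        # a_m / b_n
        q[shift] = coeff             # record the term (a_m/b_n) x^(m-n)
        for k in range(len(b)):      # subtract its product with b ...
            a[shift + k] -= coeff * b[k]
        a = trim(a)                  # ... so the leading term cancels
    return trim(q), a

# Divide x^3 - 2x + 5 by x - 1: quotient x^2 + x - 1, remainder 4.
q, r = poly_divmod([5, -2, 0, 1], [-1, 1])
assert q == [Fraction(-1), Fraction(1), Fraction(1)]
assert r == [Fraction(4)]
```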

Now for uniqueness. Let’s say we have $a = q_1 b + r_1$ and $a = q_2 b + r_2$, where both $r_1$ and $r_2$ have degrees less than that of $b$. We can subtract to find $(q_1 - q_2) b = r_2 - r_1$. Now the degree on the right must be less than that of $b$, since adding and subtracting polynomials can’t increase the degree beyond the larger of the two. On the left, on the other hand, the degree is at *least* that of $b$, with one exception: if $q_1 - q_2 = 0$. In this case the left-hand side vanishes. Since this is the only way to drop the degree of the left-hand side we must have $q_1 = q_2$. And then on the right we find $r_1 = r_2$.

## A Puzzlement

Randall’s got an analogy for Rubik’s cube. Like the cube, there’s a trick to it. Unlike the cube, it doesn’t really illustrate any interesting mathematics. Also unlike the cube, I’m not about to go telling everyone what the trick is out in public.

I mean, sure, it’s not like I’m using it or anything, but it’s the principle of the thing.

## Testing…

While I wait for my food to cool a little bit, I thought I’d try out this WordPress iPhone app. Kick the e-tires and so on.

I’m not going to make a one-day run this time, since I didn’t leave New Orleans until 13:00 CDT. So finally I have the excuse to stop in Dayton, TN tonight. Tomorrow morning I’ll see the Scopes Trial museum at 08:00 and be on my way. Yes, there will be pictures.