## Polynomials

Okay, we’re going to need some other algebraic tools before we go any further into linear algebra. Specifically, we’ll need to know a few things about the algebra of polynomials. To be clear (and diverging from the polynomials discussed earlier), we’re talking about polynomials in one variable, with coefficients in the field $\mathbb{F}$ we’re already building our vector spaces over.

We’ll write this algebra as $\mathbb{F}[X]$, where $X$ is now not a “variable”, like it was back in high school calculus. It’s a new element of the algebra. We start with the field $\mathbb{F}$, which is trivially an algebra over itself. Then we just throw in this new thing called $X$. Then, since we want $\mathbb{F}[X]$ to still be an algebra over $\mathbb{F}$, we have to be able to multiply elements. Defining a scalar multiple $cX$ for each $c \in \mathbb{F}$ is a good start, but we also have to multiply $X$ by itself to get $X^2$. There’s no reason this should be anything we’ve seen yet, so it’s new. Going along, we get $X^3$, $X^4$, and so on. Each of these is a new element, and we also get scalar multiples $c_kX^k$, and even linear combinations:

$$c_0 + c_1X + c_2X^2 + c_3X^3 + \cdots$$

as long as there are only a finite number of nonzero terms in this sum. That is, the coefficients $c_k$ are all zero after some point. We customarily take $X^0 = 1$ — the unit of the algebra.
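To make the finiteness condition concrete, here’s a minimal sketch (not from the post, and purely illustrative): a polynomial is just a finite list of coefficients $[c_0, c_1, c_2, \dots]$, with $c_k$ the coefficient of $X^k$, and trailing zeros trimmed away so that only finitely many terms survive. Addition and scalar multiplication work coefficient-by-coefficient.

```python
def trim(coeffs):
    """Drop trailing zero coefficients so every polynomial is a finite sum."""
    while coeffs and coeffs[-1] == 0:
        coeffs = coeffs[:-1]
    return coeffs

def add(p, q):
    """Add two polynomials coefficient-by-coefficient."""
    n = max(len(p), len(q))
    return trim([(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
                 for k in range(n)])

def scale(c, p):
    """Scalar multiple: multiply every coefficient by c."""
    return trim([c * ck for ck in p])

# (1 + 2X) + (3X - 2X^2) = 1 + 5X - 2X^2:
print(add([1, 2], [0, 3, -2]))  # [1, 5, -2]
```

Note that `trim` is what enforces the “all zero after some point” condition: the zero polynomial is just the empty list.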

Note here that we’re not using the summation convention for polynomials, though we could in principle. Remember, an algebra is a vector space, and what we’ve said above establishes that the set $\{1, X, X^2, X^3, \dots\}$ constitutes a *basis* for this vector space!

The algebra structure can be specified by defining it on pairs of basis elements. Remember that the structure is just a bilinear multiplication, which is just a linear map from the tensor square $\mathbb{F}[X] \otimes \mathbb{F}[X]$ to $\mathbb{F}[X]$. And we know that the basis for a tensor product consists of pairs of basis elements. So we can specify this linear map on a basis and extend by linearity — bilinearity — whatever…

Anyhow, how should we define the multiplication? Simply: $X^mX^n = X^{m+n}$. Then the whole rest of the algebra structure is defined for us. Now this *looks* like adding exponents, but remember we can just as well think of these as indices on basis elements that just *happen* to add when we multiply corresponding basis elements. Thus we wouldn’t be out of place using the summation convention here, though we won’t for the moment.
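Extending the rule on basis elements bilinearly can be sketched in a few lines of code (again an illustrative sketch, not part of the post): with polynomials as coefficient lists, multiplying basis elements by adding their indices and summing over all pairs of terms is exactly the familiar “convolution” of coefficients.

```python
def multiply(p, q):
    """Multiply polynomials given as coefficient lists [c_0, c_1, ...],
    extending X^m * X^n = X^(m+n) bilinearly."""
    if not p or not q:
        return []
    result = [0] * (len(p) + len(q) - 1)
    for m, a in enumerate(p):        # term a * X^m
        for n, b in enumerate(q):    # term b * X^n
            result[m + n] += a * b   # contributes (a*b) * X^(m+n)
    return result

# (1 + X)(1 - X) = 1 - X^2:
print(multiply([1, 1], [1, -1]))  # [1, 0, -1]
```

The double loop *is* the bilinear extension: every product of two polynomials is determined once we’ve said what to do with each pair of basis elements.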
