# The Unapologetic Mathematician

## Local Rings

Sorry for the break last Friday.

As long as we’re in the neighborhood — so to speak — we may as well define the concept of a “local ring”. This is a commutative ring which contains a unique maximal ideal. Equivalently, it’s one in which the sum of any two noninvertible elements is again noninvertible.

Why are these conditions equivalent? Well, suppose we have noninvertible elements $r_1$ and $r_2$ with $r_1+r_2$ invertible. These elements generate principal ideals $(r_1)$ and $(r_2)$, and the sum of these two ideals must be the whole ring, for the sum contains the invertible element $r_1+r_2$, and so contains $1$, and thus the whole ring. Thus $(r_1)$ and $(r_2)$ cannot both be contained within the same maximal ideal. But every noninvertible element is contained in some maximal ideal, and so we would have to have two distinct maximal ideals.

Conversely, if the sum of any two noninvertible elements is itself noninvertible, then the noninvertible elements form an ideal. And this ideal must be maximal, for if we throw in any other (invertible) element, it would suddenly contain the entire ring.

Why do we care? Well, it turns out that for any manifold $M$ and point $p\in M$ the algebra $\mathcal{O}_p$ of germs of functions at $p$ is a local ring. And in fact this is pretty much the reason for the name “local” ring: it is a ring of functions that’s completely localized to a single point.

To see that this is true, let’s consider which germs are invertible. I say that a germ represented by a function $f:U\to\mathbb{R}$ is invertible if and only if $f(p)\neq0$. Indeed, if $f(p)=0$, then $f$ is certainly not invertible, since any product $fg$ also vanishes at $p$ and thus cannot represent the constant germ $1$. On the other hand, if $f(p)\neq0$, then continuity tells us that there is some neighborhood $V$ of $p$ on which $f$ never takes the value zero. Restricting $f$ to this neighborhood if necessary, we have a representative of the germ which never takes the value zero. And thus we can define a function $g(q)=\frac{1}{f(q)}$ for $q\in V$, which represents the multiplicative inverse to the germ of $f$.

With this characterization of the invertible germs in hand, it should be clear that any two noninvertible germs represented by $f_1$ and $f_2$ must have $f_1(p)=f_2(p)=0$. Thus $f_1(p)+f_2(p)=0$, and the germ of $f_1+f_2$ is again noninvertible. Since the sum of any two noninvertible germs is itself noninvertible, the algebra $\mathcal{O}_p$ of germs is local, and its unique maximal ideal $\mathfrak{m}_p$ consists of those germs which vanish at $p$.

Incidentally, we once characterized maximal ideals as those for which the quotient $R/I$ is a field. So which field is it in this case? It’s not hard to see that $\mathcal{O}_p/\mathfrak{m}_p\cong\mathbb{R}$ — any germ is sent to its value at $p$, which is just a real number.
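A toy stand-in for $\mathcal{O}_p$ makes this concrete: truncated power series (finite Taylor expansions at a point), where an element is invertible exactly when its “value at the point” — the constant term — is nonzero. This is a purely illustrative Python sketch, not the germ algebra itself; the truncation order $N=8$ is an arbitrary choice:

```python
from fractions import Fraction

N = 8  # truncation order: work in R[x]/(x^N), an arbitrary illustrative choice

def mul(f, g):
    # multiply two truncated power series, given as coefficient lists of length N
    h = [Fraction(0)] * N
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < N:
                h[i + j] += a * b
    return h

def inverse(f):
    # invert f when its constant term (its "value at the point") is nonzero;
    # if the constant term vanishes, no inverse exists
    if f[0] == 0:
        return None
    g = [Fraction(0)] * N
    g[0] = 1 / Fraction(f[0])
    for n in range(1, N):
        g[n] = -g[0] * sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

one = [Fraction(1)] + [Fraction(0)] * (N - 1)
f = [Fraction(c) for c in [2, 1, 0, 0, 0, 0, 0, 0]]  # 2 + x: nonzero at the point, invertible
m = [Fraction(c) for c in [0, 1, 3, 0, 0, 0, 0, 0]]  # x + 3x^2: vanishes at the point

assert mul(f, inverse(f)) == one       # f really is invertible
assert inverse(m) is None              # m lies in the maximal ideal
assert inverse([a + b for a, b in zip(m, m)]) is None  # sums of noninvertibles stay noninvertible
```

The quotient by the maximal ideal here is just “read off the constant term”, mirroring $\mathcal{O}_p/\mathfrak{m}_p\cong\mathbb{R}$ as evaluation at $p$.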

March 28, 2011

## Specifying Morphisms Between Boolean Rings

It will be useful to have other ways of showing that a given function $f:\mathcal{S}\to\mathcal{T}$ between boolean rings is a morphism. The definition, of course, is that $f(E\Delta F)=f(E)\Delta f(F)$ and $f(E\cap F)=f(E)\cap f(F)$, since $\Delta$ and $\cap$ here denote our addition and multiplication, respectively.

It’s also sufficient to show that $f(E\setminus F)=f(E)\setminus f(F)$ and $f(E\cup F)=f(E)\cup f(F)$. Indeed, we can build both $\Delta$ and $\cap$ from $\setminus$ and $\cup$:

$\displaystyle\begin{aligned}E\Delta F&=(E\setminus F)\cup(F\setminus E)\\E\cap F&=E\setminus(E\setminus F)=F\setminus(F\setminus E)\end{aligned}$

So if $\setminus$ and $\cup$ are preserved, then so are $\Delta$ and $\cap$.
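As a quick, purely illustrative sanity check, we can verify both identities exhaustively on all pairs of subsets of a small set, using Python’s built-in set operations:

```python
from itertools import combinations

def powerset(xs):
    # all subsets of xs, as frozensets
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

for E in powerset(range(4)):
    for F in powerset(range(4)):
        # symmetric difference built from \ and ∪
        assert E ^ F == (E - F) | (F - E)
        # intersection built from \ alone
        assert E & F == E - (E - F) == F - (F - E)
```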

A little more subtle is the fact that for surjections we can preserve order in both directions. That is, if $E\subseteq F$ is equivalent to $f(E)\subseteq f(F)$ for a surjection $f$ from one underlying set onto the other, then $f$ is a homomorphism of Boolean rings. We first show that $f$ preserves $\cup$ by using its characterization as a least upper bound: $E\subseteq E\cup F$, $F\subseteq E\cup F$, and if $E\subseteq G$ and $F\subseteq G$ then $E\cup F\subseteq G$.

So, since $f$ preserves order, we know that $f(E)\subseteq f(E\cup F)$, and that $f(F)\subseteq f(E\cup F)$. We can conclude from this that $f(E)\cup f(F)\subseteq f(E\cup F)$, and we are left to show that $f(E\cup F)\subseteq f(E)\cup f(F)$. But $f(E)\cup f(F)=f(G)$ for some $G\in\mathcal{S}$, since $f$ is surjective. Since $f(E)\subseteq f(G)$, we must have $E\subseteq G$, and similarly $F\subseteq G$. But then $E\cup F\subseteq G$, and so $f(E\cup F)\subseteq f(G)=f(E)\cup f(F)$, as we wanted to show.

We thus know that $f$ preserves the operation $\cup$, and we can similarly show that $f$ preserves $\cap$, using the dual universal property. In order to build $\setminus$ and $\Delta$ from $\cup$ and $\cap$, we need complements. But complements satisfy a universal property of their own: $E\cap E^c=\emptyset$, and if $E\cap F=\emptyset$ then $F\subseteq E^c$.

First, I want to show that $f(\emptyset)=\emptyset$ by showing it is below all elements of $\mathcal{T}$. Indeed, by the surjectivity of $f$, every element of $\mathcal{T}$ is the image of some element $E\in\mathcal{S}$. Thus we want to show that $f(\emptyset)\subseteq f(E)$. But this is true because $\emptyset\subseteq E$ for all $E\in\mathcal{S}$, and so $f(\emptyset)=\emptyset$.

Now given a set $E$ and its complement $E^c=X\setminus E$, we know that $E\cap E^c=\emptyset$. Since $f$ preserves $\cap$, we must have $f(E)\cap f(E^c)=f(\emptyset)=\emptyset$. Let’s say $G\in\mathcal{T}$ is another set so that $f(E)\cap G=\emptyset$. By the surjectivity of $f$, we have $G=f(F)$ for some $F\in\mathcal{S}$. Thus $f(E\cap F)=f(E)\cap f(F)=\emptyset=f(\emptyset)$, and thus $E\cap F=\emptyset$. This tells us that $F\subseteq E^c$, and thus $G=f(F)\subseteq f(E^c)$. And so $f(E^c)=f(E)^c$.

Therefore, since $f$ preserves $\cup$, $\cap$, and complements, it also preserves $\setminus$ and $\Delta$, and thus is a morphism of Boolean rings.

August 11, 2010 | Algebra, Ring theory

## Completeness of Boolean Rings

We have a notion of “completeness” for boolean rings, which is related to the one for uniform spaces (and metric spaces), but which isn’t quite the same thing. We say that a Boolean ring $\mathcal{R}$ is complete if every set of elements of $\mathcal{R}$ has a union.

A complete Boolean ring is clearly a Boolean $\sigma$-algebra. It’s an algebra, because we can get a top element $X$ by taking the union of all the elements of $\mathcal{R}$ together, and it’s a $\sigma$-ring because countable unions are a special case of arbitrary unions. The converse is a little hairier.

First off, every totally finite measure algebra $(\mathcal{S},\mu)$ is complete. That is, if $\mu(X)<\infty$ — and thus $\mu(E)<\infty$ for all $E\in\mathcal{S}$ — then every collection $\mathcal{E}$ of elements of $\mathcal{S}$ will have a union.

We let $\hat{\mathcal{E}}$ be the collection of all finite unions of elements in $\mathcal{E}$, all of which exist since $\mathcal{S}$ is a Boolean ring. We let $\alpha$ be the (finite) supremum of the measures of all elements in $\hat{\mathcal{E}}$. Since this is a finite supremum, there must be some increasing sequence $\{E_n\}$ so that $\mu(E_n)$ increases to $\alpha$. I say that the union $E$ of the sequence $\{E_n\}$ — which exists because $\mathcal{S}$ is a $\sigma$-ring — is the union of all of $\mathcal{E}$.

Indeed, we must have $\mu(E)=\alpha$ by the continuity of the measure $\mu$. Take any set $E_0\in\mathcal{E}$ and consider the difference $D=E_0\setminus E$, which is the amount by which $E_0$ extends past $E$. Our assertion is that this is always $\emptyset$. Define the sets $D_n=E_n\uplus D$, which are disjoint unions since $D$ is disjoint from $E$, while each $E_n$ is contained in $E$. Each $D_n$ is contained in $E_n\cup E_0$, which lies in $\hat{\mathcal{E}}$, so $\mu(D_n)\leq\alpha$, and the limit of the sequence $\{D_n\}$ is $E\uplus D$. This set has measure

$\displaystyle\mu(E\uplus D)=\mu(E)+\mu(D)=\alpha+\mu(D)$

but since each of the $\mu(D_n)$ is bounded above by $\alpha$, so is their limit. Thus $\mu(D)=0$, and thus $D=\emptyset$, since we consider all negligible sets to be the same. So $E$ contains every $E_0\in\mathcal{E}$, and any upper bound of $\mathcal{E}$ contains each $E_n$ and hence contains $E$; thus $E$ is indeed the union of $\mathcal{E}$.

The same is true for totally $\sigma$-finite measure algebras. Indeed, we can break such a measure algebra up into a countable collection of finite measure algebras by breaking $X$ into a countable number of elements $X_n$ so that $\mu(X_n)<\infty$. We define the finite measure algebra $\mathcal{S}_n$ to consist of the intersections of elements of $\mathcal{S}$ with $X_n$. Then given any collection $\mathcal{E}\subseteq\mathcal{S}$ we consider its image under such intersections to get $\mathcal{E}_n\subseteq\mathcal{S}_n$. What we said above shows that each of these collections has a union $E_n\in\mathcal{S}_n$, and their union in $\mathcal{S}$ is the union of the original collection $\mathcal{E}$.
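The supremum argument can be mimicked on a toy finite example — hypothetical point weights on a six-point space, with the “finite unions” collection built explicitly:

```python
from itertools import chain, combinations

# a toy finite measure algebra: subsets of a 6-point space with hypothetical weights
weights = {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.25, 4: 1.5, 5: 0.75}

def mu(E):
    return sum(weights[x] for x in E)

collection = [frozenset({0, 1}), frozenset({1, 2}), frozenset({4})]

# the collection of all finite unions of members (the hat-E of the proof)
finite_unions = [frozenset(chain.from_iterable(c))
                 for r in range(1, len(collection) + 1)
                 for c in combinations(collection, r)]

# alpha: the (finite) supremum of the measures of the finite unions
alpha = max(mu(U) for U in finite_unions)

# a maximizing finite union plays the role of the limit E of the increasing sequence
E = max(finite_unions, key=mu)
assert mu(E) == alpha

# E is an upper bound for the collection, and equals the actual union
union = frozenset().union(*collection)
assert all(F <= E for F in collection)
assert E == union
```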

August 10, 2010 | Algebra, Ring theory

## Functions on Boolean Rings and Measure Rings

We’re not just interested in Boolean rings as algebraic structures; we’re also interested in real-valued functions on them. Given a function $\mu:\mathcal{R}\to\mathbb{R}$ on a Boolean ring $\mathcal{R}$, we say that $\mu$ is additive (a “measure”), $\sigma$-finite (on $\sigma$-rings), and so on, analogously to the same concepts for set functions. We also say that a measure $\mu$ on a Boolean ring is “positive” if it is zero only for the zero element of the Boolean ring.

Now, if $\mathcal{S}$ is the Boolean $\sigma$-ring that comes from a measurable space $(X,\mathcal{S})$, then usually a measure $\mu$ on $\mathcal{S}$ is not positive under this definition, since there exist sets of measure zero. However, remember that in measure theory we usually talk about things that happen almost everywhere. That is, we consider two sets — two elements of $\mathcal{S}$ — to be “the same” if their difference is negligible — if $\mu$ takes the value zero on this difference. If we let $\mathcal{N}=\mathcal{N}(\mu)\subseteq\mathcal{S}$ be the collection of $\mu$-negligible sets, it turns out that $\mathcal{N}$ is an ideal in the Boolean $\sigma$-ring $\mathcal{S}$. Indeed, if $M$ and $N$ are negligible, then so is $M\Delta N$, so $\mathcal{N}$ is an Abelian subgroup. Further, if $N\in\mathcal{N}$ and $E\in\mathcal{S}$, then $E\cap N\in\mathcal{N}$, so $\mathcal{N}$ is an ideal.

So we can form the quotient ring $\mathcal{S}/\mathcal{N}(\mu)$, which consists of the equivalence classes of elements which differ by an element of measure zero. This is equivalent to our old rhetorical trick of only considering properties up to “almost everywhere”. Using this new definition of “equals zero”, any measure $\mu$ on a Boolean $\sigma$-ring $\mathcal{S}$ gives rise to a positive measure on the quotient $\sigma$-ring $\mathcal{S}/\mathcal{N}(\mu)$. In particular, given a measure space $(X,\mathcal{S},\mu)$, we write $\mathcal{S}(\mu)=\mathcal{S}/\mathcal{N}(\mu)$ for the Boolean $\sigma$-ring it gives rise to.
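Here is an illustrative sketch of this quotient, using a made-up five-point measure space where two points carry zero weight; discarding the null points is exactly passing to $\mathcal{S}/\mathcal{N}(\mu)$, and the induced measure is positive:

```python
# a toy measure space on {0,...,4}; points 3 and 4 are null (hypothetical weights)
weights = {0: 1.0, 1: 2.0, 2: 0.5, 3: 0.0, 4: 0.0}
null_points = {x for x, w in weights.items() if w == 0.0}

def mu(E):
    return sum(weights[x] for x in E)

def quotient_class(E):
    # canonical representative of the class of E in S/N(mu): drop the null points
    return frozenset(E) - null_points

E, F = {0, 1, 3}, {0, 1, 4}
assert mu(set(E) ^ set(F)) == 0.0               # E and F differ by a negligible set...
assert quotient_class(E) == quotient_class(F)   # ...so they fall into the same class

# the induced measure is positive: only the zero class has measure zero
assert mu(quotient_class(E)) > 0
assert quotient_class(null_points) == frozenset() and mu(frozenset()) == 0.0
```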

We say that a “measure ring” $(\mathcal{S},\mu)$ is a Boolean $\sigma$-ring $\mathcal{S}$ together with a positive measure $\mu$ on $\mathcal{S}$. For instance, if $(X,\mathcal{S},\mu)$ is a measure space, then $(\mathcal{S}(\mu),\mu)$ is a measure ring. If $\mathcal{S}$ is a Boolean $\sigma$-algebra we say that $(\mathcal{S},\mu)$ is a measure algebra. We say that measure rings and algebras are (totally) finite or $\sigma$-finite just as for measure spaces. Measure rings, of course, form a category; a morphism $f:(\mathcal{S},\mu)\to(\mathcal{T},\nu)$ from one measure ring to another is a morphism of Boolean $\sigma$-rings $f:\mathcal{S}\to\mathcal{T}$ so that $\mu(E)=\nu(f(E))$ for all $E\in\mathcal{S}$.

I say that the mapping which sends a measure space $(X,\mathcal{S},\mu)$ to its associated measure algebra $(\mathcal{S}(\mu),\mu)$ is a contravariant functor. Indeed, let $f:(X,\mathcal{S},\mu)\to(Y,\mathcal{T},\nu)$ be a morphism of measure spaces. That is, $f$ is a measurable function from $X$ to $Y$, so $\mathcal{S}$ contains the pulled-back $\sigma$-algebra $f^{-1}(\mathcal{T})$. This pull-back defines a map $f^{-1}:\mathcal{T}\to\mathcal{S}$. Further, since $f$ is a morphism of measure spaces it must push forward $\mu$ to $\nu$. That is, $\nu=f(\mu)$, or in other words $\nu(E)=\mu(f^{-1}(E))$. And so if $\nu(E)=0$ then $\mu(f^{-1}(E))=0$, thus the ideal $\mathcal{N}(\nu)\subseteq\mathcal{T}$ is sent to the ideal $\mathcal{N}(\mu)\subseteq\mathcal{S}$, and so $f^{-1}$ descends to a homomorphism between the quotient rings: $f^{-1}:\mathcal{T}(\nu)\to\mathcal{S}(\mu)$. As we just said, $\nu(E)=\mu(f^{-1}(E))$, and thus we have a morphism of measure algebras $f^{-1}:(\mathcal{T}(\nu),\nu)\to(\mathcal{S}(\mu),\mu)$. It’s straightforward to confirm that this assignment preserves identities and compositions.

August 5, 2010

## Boolean Rings

A “Boolean ring” is a commutative ring $R$ with the additional property that each and every element is idempotent. That is, for any $r\in R$ we have $r^2=r$. An immediate consequence of this axiom is that $r+r=0$, since we can calculate

$\displaystyle\begin{aligned}0&=(r+r)-(r+r)\\&=(r+r)^2-(r+r)\\&=r^2+r^2+r^2+r^2-(r+r)\\&=r+r+r+r-(r+r)\\&=r+r\end{aligned}$

The typical example we care about in the measure-theoretic context is a ring of subsets of some set $X$, with the operation $E\Delta F$ for addition and $E\cap F$ for multiplication. You should check that these operations satisfy the axioms of a Boolean ring. Since this is our main motivation, we will just consistently use $\Delta$ and $\cap$ to denote addition and multiplication in Boolean rings, whether they arise from a measure theoretic context or not. From here it looks a lot like set theory, but keep in mind that the objects we’re looking at may have nothing to do with sets.

We can use these operations to define the other common set-theoretic operations. Indeed

$\displaystyle E\cup F=(E\Delta F)\Delta(E\cap F)$

and

$\displaystyle E\setminus F=E\Delta(E\cap F)$

and we can then define orders in the usual manner: $E\subseteq F\Leftrightarrow E\cap F=E$.
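Again these identities, and the order, can be verified exhaustively on subsets of a small set — an illustrative check only, with $\Delta$ and $\cap$ played by Python’s `^` and `&`:

```python
from itertools import combinations

def powerset(xs):
    # all subsets of xs, as frozensets
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

for E in powerset(range(4)):
    for F in powerset(range(4)):
        assert E | F == (E ^ F) ^ (E & F)   # union built from Δ and ∩
        assert E - F == E ^ (E & F)         # difference built from Δ and ∩
        assert (E <= F) == (E & F == E)     # the order: E ⊆ F iff E ∩ F = E
```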

As usual, the union of two elements is the “smallest” (with respect to this order) element above both of them, and the intersection of two elements is the “largest” element below both of them. The same goes for any finite number of elements, but if we try to move to an infinite number of elements there is no guarantee that there is any element above or below all of them, much less that such an element is unique. A “Boolean $\sigma$-ring” is a Boolean ring so that every countably infinite set of elements has a union. In this case, it is immediately true that any countably infinite set of elements has an intersection as well. The typical example, of course, is a $\sigma$-ring of subsets of a set $X$.

A “Boolean algebra” is a Boolean ring for which there is some element $X\neq0$ so that $E\subseteq X$ for all elements $E$. A “Boolean $\sigma$-algebra” is both a Boolean $\sigma$-ring and a Boolean algebra.

In the obvious way we have a full subcategory $\mathcal{B}oolean$ of the category of rings. It contains full subcategories of Boolean $\sigma$-rings, Boolean algebras, and Boolean $\sigma$-algebras.

August 4, 2010 | Algebra, Ring theory

## Graded Objects

We’re about to talk about certain kinds of algebras that have the added structure of a “grading”. It’s not horribly important at the moment, but we might as well talk about it now so we don’t forget later.

Given a monoid $G$, a $G$-graded algebra is one that, as a vector space, we can write as a direct sum

$\displaystyle A=\bigoplus\limits_{g\in G}A_g$

so that the product of elements contained in two grades lands in the grade given by their product in the monoid. That is, we can write the algebra multiplication by

$\displaystyle\mu:A_g\otimes A_h\rightarrow A_{gh}$

for each pair of grades $g$ and $h$. As usual, we handle elements that are the sum of two elements with different grades by linearity.

By far the most common grading is by the natural numbers under addition, in which case we often just say “graded”. For example, the algebra of polynomials is graded, where the grading is given by the total degree. That is, if $A=R[X_1,\dots,X_k]$ is the algebra of polynomials in $k$ variables, then the $n$th grade consists of sums of products of $n$ of the variables at a time. This is a grading because the product of two such homogeneous polynomials is itself homogeneous, and the total degree of each term in the product is the sum of the degrees of the factors. For instance, the product of $xy+yz$ in grade $2$ and $x^3+xyz+yz^2$ in grade $3$ is

$\displaystyle (xy+yz)(x^3+xyz+yz^2)=x^4y+x^3yz+x^2y^2z+2xy^2z^2+y^2z^3$

in grade $5=2+3$.
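We can check this example mechanically, representing polynomials as dictionaries from exponent tuples to coefficients — a small illustrative sketch:

```python
from collections import defaultdict

# polynomials in x, y, z as {exponent-tuple: coefficient}
p = {(1, 1, 0): 1, (0, 1, 1): 1}                # xy + yz, homogeneous of degree 2
q = {(3, 0, 0): 1, (1, 1, 1): 1, (0, 1, 2): 1}  # x^3 + xyz + yz^2, degree 3

def multiply(p, q):
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            # exponents add, so total degrees add: grade g times grade h lands in grade g+h
            r[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return dict(r)

def degrees(poly):
    # the set of total degrees appearing; a homogeneous polynomial has exactly one
    return {sum(m) for m in poly}

product = multiply(p, q)
assert degrees(p) == {2} and degrees(q) == {3}
assert degrees(product) == {5}  # the product is homogeneous of degree 2 + 3 = 5
# the expansion matches: x^4 y + x^3 y z + x^2 y^2 z + 2 x y^2 z^2 + y^2 z^3
assert product == {(4, 1, 0): 1, (3, 1, 1): 1, (2, 2, 1): 1, (1, 2, 2): 2, (0, 2, 3): 1}
```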

Other common gradings include $\mathbb{Z}$-grading and $\mathbb{Z}_2$-grading. The latter algebras are often called “superalgebras”, related to their use in studying supersymmetry in physics. “Superalgebra” sounds a lot more big and impressive than “$\mathbb{Z}_2$-graded algebra”, and physicists like that sort of thing.

In the context of graded algebras we also have graded modules. A $G$-graded module $M$ over the $G$-graded algebra $A$ can also be written down as a direct sum

$\displaystyle M=\bigoplus\limits_{g\in G}M_g$

But now it’s the action of $A$ on $M$ that involves the grading:

$\displaystyle\alpha:A_g\otimes M_h\rightarrow M_{gh}$

We can even talk about grading in the absence of a multiplicative structure, like a graded vector space. Now we don’t even really need the grades to form a monoid. Indeed, for any index set $I$ we might have the graded vector space

$\displaystyle V=\bigoplus\limits_{i\in I}V_i$

This doesn’t seem to be very useful, but it can serve to recognize natural direct summands in a vector space and keep track of them. For instance, we may want to consider a linear map $T$ between vector spaces $V$ and $W$ — graded by index sets $I$ and $J$, respectively — that only acts on one grade of $V$ and with an image contained in only one grade of $W$:

$\displaystyle\begin{aligned}T(V_i)&\subseteq W_j\\T(V_k)&=0\qquad k\neq i\end{aligned}$

We’ll say that such a map is graded $(i,j)$. Any linear map from $V$ to $W$ can be decomposed uniquely into such graded components

$\displaystyle\hom(V,W)=\bigoplus\limits_{(i,j)\in I\times J}\hom(V_i,W_j)$

giving a grading on the space of linear maps.
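Concretely — with hypothetical gradings chosen just for illustration — if we pick dimensions for the grades of $V$ and $W$, a matrix for $T$ splits into blocks, one graded component per pair $(i,j)$, and the components sum back to $T$:

```python
import numpy as np

# V = V_0 ⊕ V_1 with dims (2, 1); W = W_0 ⊕ W_1 with dims (1, 2): hypothetical gradings
dims_V, dims_W = [2, 1], [1, 2]
T = np.arange(9, dtype=float).reshape(3, 3)  # an arbitrary linear map V -> W

def graded_component(T, i, j):
    # the (i, j) component: zero except on the block mapping V_i into W_j
    r0, r1 = sum(dims_W[:j]), sum(dims_W[:j + 1])
    c0, c1 = sum(dims_V[:i]), sum(dims_V[:i + 1])
    C = np.zeros_like(T)
    C[r0:r1, c0:c1] = T[r0:r1, c0:c1]
    return C

# T decomposes uniquely as the sum of its graded components
total = sum(graded_component(T, i, j) for i in range(2) for j in range(2))
assert np.array_equal(total, T)
```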

October 23, 2009

## The Category of Representations

Now let’s narrow back in to representations of algebras, and the special case of representations of groups, but with an eye to the categorical interpretation. So, representations are functors. And this immediately leads us to the category of such functors. The objects, recall, are functors, while the morphisms are natural transformations. Now let’s consider what, exactly, a natural transformation consists of in this case.

Let’s say we have representations $\rho:A\rightarrow\hom_\mathbb{F}(V,V)$ and $\sigma:A\rightarrow\hom_\mathbb{F}(W,W)$. That is, we have functors $\rho$ and $\sigma$ with $\rho(*)=V$, $\sigma(*)=W$ — where $*$ is the single object of $A$, when it’s considered as a category — and the given actions on morphisms. We want to consider a natural transformation $\phi:\rho\rightarrow\sigma$.

Such a natural transformation consists of a list of morphisms indexed by the objects of the category $A$. But $A$ has only one object: $*$. Thus we only have one morphism, $\phi_*$, which we will just call $\phi$.

Now we must impose the naturality condition. For each arrow $a:*\rightarrow *$ in $A$ we ask that the diagram

$\displaystyle\begin{matrix}V&\xrightarrow{\phi}&W\\\downarrow^{\rho(a)}&&\downarrow^{\sigma(a)}\\V&\xrightarrow{\phi}&W\end{matrix}$

commute. That is, we want $\phi\circ\rho(a)=\sigma(a)\circ\phi$ for every algebra element $a$. We call such a transformation an “intertwiner” of the representations. These intertwiners are the morphisms in the category $\mathbf{Rep}(A)$ of representations of $A$. If we want to be more particular about the base field, we might also write $\mathbf{Rep}_\mathbb{F}(A)$.

Here’s another way of putting it. Think of $\phi$ as a “translation” from $V$ to $W$. If $\phi$ is an isomorphism of vector spaces, for instance, it could be a change of basis. We want to take a transformation from the algebra $A$ and apply it, and we also want to translate. We could first apply the transformation in $V$, using the representation $\rho$, and then translate to $W$. Or we could first translate from $V$ to $W$ and then apply the transformation, now using the representation $\sigma$. Our condition is that either order gives the same result, no matter which element of $A$ we’re considering.
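A minimal numerical sketch of this square (my own choice of example, not one from the discussion above): the nontrivial element of $\mathbb{Z}/2$ acts by a swap on $V=\mathbb{R}^2$ and diagonally on $W=\mathbb{R}^2$; the change of basis into the swap’s eigenbasis is an intertwiner:

```python
import numpy as np

# rho(a): the nontrivial element of Z/2 swapping coordinates of V
rho_a = np.array([[0., 1.], [1., 0.]])
# sigma(a): the same element acting diagonally on W (the eigenbasis picture)
sigma_a = np.diag([1., -1.])

# phi: V -> W, the change of basis into the eigenbasis of the swap
phi = np.array([[1., 1.], [1., -1.]])

# the naturality square commutes: phi . rho(a) == sigma(a) . phi
assert np.array_equal(phi @ rho_a, sigma_a @ phi)
```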

October 28, 2008

## Algebra Representations

We’ve defined a representation of the group $G$ as a homomorphism $\rho:G\rightarrow\mathrm{GL}(V)$ for some vector space $V$. But where did we really use the fact that $G$ is a group?

This leads us to the more general idea of representing a monoid $M$. Of course, now we don’t need the image of a monoid element to be invertible, so we may as well just consider a homomorphism of monoids $\rho:M\rightarrow\hom_\mathbb{F}(V,V)$, where we consider this endomorphism algebra as a monoid under composition.

And, of course, once we’ve got monoids and $\mathbb{F}$-linearity floating around, we’re inexorably drawn — Serge would say we have an irresistible compulsion — to consider monoid objects in the category of $\mathbb{F}$-modules. That is: $\mathbb{F}$-algebras.

And, indeed, things work nicely for $\mathbb{F}$-algebras. We say a representation of an $\mathbb{F}$-algebra $A$ is a homomorphism $\rho:A\rightarrow\hom_\mathbb{F}(V,V)$ for some vector space $V$ over $\mathbb{F}$. How else can we view such a homomorphism?

Well, it turns an algebra element into an endomorphism. And the most important thing about an endomorphism is that it does something to vectors. So given an algebra element $a\in A$, and a vector $v\in V$, we get a new vector $\left[\rho(a)\right](v)$. And this operation is $\mathbb{F}$-linear in both of its variables. So we have a linear map $\mathrm{ev}\circ(\rho\otimes1_V):A\otimes V\rightarrow V$, built from the representation $\rho$ and the evaluation map $\mathrm{ev}$. But this is just a left $A$-module!

In fact, the evaluation above is the counit of the adjunction between $\underline{\hphantom{X}}\otimes V$ and the internal $\hom$ functor $\hom_\mathbb{F}(V,\underline{\hphantom{X}})$. This adjunction is a natural isomorphism of $\hom$ sets: $\hom_\mathbb{F}(A\otimes V,V)\cong\hom_\mathbb{F}(A,\hom_\mathbb{F}(V,V))$. That is, left $A$-modules are in natural bijection with representations of $A$. In practice, we just consider the two structures to be the same, and we talk interchangeably about modules and representations.

As it would happen, the notion of an algebra representation properly extends that of a group representation. Given any group $G$ we can build the group algebra $\mathbb{F}[G]$. As a vector space, this has a basis vector $e_g$ for each group element $g\in G$. We then define a multiplication on pairs of basis elements by $e_{g_1}e_{g_2}=e_{g_1g_2}$, and extend by bilinearity.

Now it turns out that representations of the group $G$ and representations of the group algebra $\mathbb{F}[G]$ are in bijection. Indeed, the basis vectors $e_g$ are invertible in the algebra $\mathbb{F}[G]$. Thus, given a homomorphism $\alpha:\mathbb{F}[G]\rightarrow\hom_\mathbb{F}(V,V)$, the linear maps $\rho(g)=\alpha(e_g)$ must be invertible. And so we have a group representation $\rho:G\rightarrow\mathrm{GL}(V)$. Conversely, if $\rho:G\rightarrow\mathrm{GL}(V)$ is a representation of the group $G$, then we can define $\alpha(e_g)=\rho(g)\in\mathrm{GL}(V)\subset\hom_\mathbb{F}(V,V)$ and extend by linearity to get an algebra representation $\alpha:\mathbb{F}[G]\rightarrow\hom_\mathbb{F}(V,V)$.
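As an illustrative sketch of this correspondence (an example of my own choosing), take $G=\mathbb{Z}/3$ acting by rotations of the plane. Extending $\rho$ linearly to $\mathbb{R}[G]$ — with multiplication in the group algebra given by convolution of coefficient lists — produces an algebra homomorphism, and each basis element maps to an invertible matrix:

```python
import numpy as np

def rho(g):
    # the cyclic group Z/3 = {0, 1, 2} represented by rotations by 120 degrees
    t = 2 * np.pi * g / 3
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def alpha(coeffs):
    # extend linearly to R[Z/3]: send sum of c_g e_g to sum of c_g rho(g)
    return sum(c * rho(g) for g, c in enumerate(coeffs))

def convolve(u, v):
    # multiplication in R[Z/3]: e_g e_h = e_{g+h mod 3}, extended bilinearly
    w = [0.0, 0.0, 0.0]
    for g, a in enumerate(u):
        for h, b in enumerate(v):
            w[(g + h) % 3] += a * b
    return w

u, v = [1.0, 2.0, 0.0], [0.5, 0.0, -1.0]
# alpha is an algebra homomorphism: alpha(uv) == alpha(u) alpha(v)
assert np.allclose(alpha(convolve(u, v)), alpha(u) @ alpha(v))
# each basis element e_g maps to an invertible matrix (a rotation)
assert all(abs(np.linalg.det(rho(g))) > 0.5 for g in range(3))
```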

So we have representations of algebras. Within that we have the special cases of representations of groups. These allow us to cast abstract algebraic structures into concrete forms, acting as transformations of vector spaces.

October 24, 2008

## The Exponential Series

What is it that makes the exponential what it is? We defined it as the inverse of the logarithm, and this is defined by integrating $\frac{1}{x}$. But the important thing we immediately showed is that it satisfies the exponential property.

But now we know the Taylor series of the exponential function at ${0}$:

$\displaystyle\exp(x)=\sum\limits_{k=0}^\infty\frac{x^k}{k!}$

In fact, we can work out the series around any other point the same way. Since all the derivatives are the exponential function back again, we find

$\displaystyle\exp(x)=\sum\limits_{k=0}^\infty\frac{\exp(x_0)}{k!}(x-x_0)^k$

Or we could also write this by expanding around $a$ and writing the relation as a series in the displacement $b=x-a$:

$\displaystyle\exp(a+b)=\sum\limits_{l=0}^\infty\frac{\exp(a)}{l!}b^l$

Then we can expand out the $\exp(a)$ part as a series itself:

$\displaystyle\exp(a+b)=\sum\limits_{l=0}^\infty\left(\sum\limits_{k=0}^\infty\frac{a^k}{k!}\right)\frac{b^l}{l!}$

But then (with our usual handwaving about rearranging series) we can pull out the inner series since it doesn’t depend on the outer summation variable at all:

$\displaystyle\exp(a+b)=\left(\sum\limits_{k=0}^\infty\frac{a^k}{k!}\right)\left(\sum\limits_{l=0}^\infty\frac{b^l}{l!}\right)$

And these series are just the series defining $\exp(a)$ and $\exp(b)$, respectively. That is, we have shown the exponential property $\exp(a+b)=\exp(a)\exp(b)$ directly from the series expansion.
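This is easy to check numerically from partial sums of the series — an illustrative sketch with arbitrarily chosen inputs:

```python
from math import isclose

def exp_series(x, terms=40):
    # partial sum of sum_{k} x^k / k!, accumulating each term from the last
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)
    return total

a, b = 0.7, -1.3  # arbitrary sample inputs
# the exponential property, straight from the series
assert isclose(exp_series(a + b), exp_series(a) * exp_series(b), rel_tol=1e-12)
```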

That is, whatever function the power series $\sum\limits_{k=0}^\infty\frac{x^k}{k!}$ defines, it satisfies the exponential property. In a sense, the fact that the inverse of this function turns out to be the logarithm is a big coincidence. But it’s a coincidence we’ll tease out tomorrow.

For now I’ll note that this important exponential property follows directly from the series. And we can write down the series anywhere we can add, subtract, multiply, divide (at least by integers), and talk about convergence. That is, the exponential series makes sense in any topological ring of characteristic zero. For example, we can define the exponential of complex numbers by the series

$\displaystyle\exp(z)=\sum\limits_{k=0}^\infty\frac{z^k}{k!}$

Finally, this series will have the exponential property as above, so long as the ring is commutative (like it is for the complex numbers). In more general rings there’s a generalized version of the exponential property, but I’ll leave that until we eventually need to use it.
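For instance, evaluating the very same series at complex arguments recovers the familiar complex exponential — here checked against Python’s `cmath` purely for illustration:

```python
import cmath

def cexp(z, terms=60):
    # the exponential series, evaluated in the topological ring C
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)
    return total

# Euler's identity: exp(i pi) = -1
assert abs(cexp(1j * cmath.pi) - (-1)) < 1e-12
# the exponential property holds, since C is commutative
w1, w2 = 0.3 + 0.4j, -1.1 + 0.2j
assert abs(cexp(w1 + w2) - cexp(w1) * cexp(w2)) < 1e-12
```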

October 8, 2008 | Analysis, Calculus, Power Series

## The Taylor Series of the Exponential Function

Sorry for the lack of a post yesterday, but I was really tired after this weekend.

So what functions might we try finding a power series expansion for? Polynomials would be boring, because they already are power series that cut off after a finite number of terms. What other interesting functions do we have?

Well, one that’s particularly nice is the exponential function $\exp$. We know that this function is its own derivative, and so it has infinitely many derivatives. In particular, $\exp(0)=1$, $\exp'(0)=1$, $\exp''(0)=1$, …, $\exp^{(n)}(0)=1$, and so on.

So we can construct the Taylor series at ${0}$. The coefficient formula tells us

$\displaystyle a_k=\frac{\exp^{(k)}(0)}{k!}=\frac{1}{k!}$

which gives us the series

$\displaystyle\sum\limits_{k=0}^\infty\frac{x^k}{k!}$

We use the ratio test to calculate the radius of convergence. We calculate

$\displaystyle\limsup\limits_{k\rightarrow\infty}\left|\frac{\frac{x^{k+1}}{(k+1)!}}{\frac{x^k}{k!}}\right|=\limsup\limits_{k\rightarrow\infty}\left|\frac{x^{k+1}k!}{x^k(k+1)!}\right|=\limsup\limits_{k\rightarrow\infty}\left|\frac{x}{(k+1)}\right|=0$

Thus the series converges absolutely no matter what value we pick for $x$. The radius of convergence is thus infinite, and the series converges everywhere.

But does this series converge back to the exponential function? Taylor’s Theorem tells us that

$\displaystyle\exp(x)=\left(\sum\limits_{k=0}^n\frac{x^k}{k!}\right)+R_n(x)$

where there is some $\xi_n$ between ${0}$ and $x$ so that $R_n(x)=\frac{\exp(\xi_n)}{(n+1)!}x^{n+1}$.

Now the derivative of $\exp$ is $\exp$ again, and $\exp$ takes only positive values. And so we know that $\exp$ is everywhere increasing. What does this mean? Well, if $x\leq0$ then $\xi_n\leq0$, and so $\exp(\xi_n)\leq\exp(0)=1$. On the other hand if $x\geq0$ then $\xi_n\leq x$, and so $\exp(\xi_n)\leq\exp(x)$. Either way, we have some uniform bound $M$ on $\exp(\xi_n)$ no matter what the $\xi_n$ are.

So now we know $|R_n(x)|\leq\frac{M|x|^{n+1}}{(n+1)!}$. And it’s not too hard to see (though I can’t seem to find it in my archives) that $n!$ grows much faster than $x^n$ for any fixed $x$. Basically, the idea is that each time you’re multiplying by $\frac{x}{n+1}$, which eventually gets — and stays — less than one in absolute value. The upshot is that the remainder term $R_n(x)$ must converge to ${0}$ for any fixed $x$, and so the series indeed converges to the function $\exp(x)$.
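We can watch the bound collapse numerically — a small illustration with the arbitrary choice $x=3$:

```python
from math import exp, factorial

x = 3.0
M = exp(x)  # for x >= 0 the proof bounds exp(xi_n) above by exp(x)

def remainder_bound(n):
    # the bound M |x|^{n+1} / (n+1)! on the Taylor remainder R_n(x)
    return M * abs(x) ** (n + 1) / factorial(n + 1)

# the bounds shrink toward zero as n grows...
assert remainder_bound(10) > remainder_bound(20) > remainder_bound(40)
assert remainder_bound(40) < 1e-20

# ...so the partial sums converge to exp(x), within the bound plus float rounding
partial = sum(x ** k / factorial(k) for k in range(41))
assert abs(partial - exp(x)) <= remainder_bound(40) + 1e-10
```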

October 7, 2008 | Analysis, Calculus, Power Series