# The Unapologetic Mathematician

## Integrating Simple Functions

We now turn from measure in the abstract to its application to integration, and we start with simple functions. In fact, we start a bit further back even than that: the simple functions are exactly the finite linear combinations of characteristic functions, and so we’ll start there.

Given a measurable set $E$, there’s an obvious way to define the integral of the characteristic function $\chi_E$: the measure $\mu(E)$! In fact, if you go back to the “area under the curve” definition of the Riemann integral, this makes sense: the graph of $\chi_E$ is a “rectangle” (possibly in many pieces) with one side a line of length $1$ and the other “side” the set $E$. Since $\mu(E)$ is our notion of the “size” of $E$, the “area” will be the product of $1$ and $\mu(E)$. And so we define $\displaystyle\int\chi_E\,d\mu=\mu(E)$

That is, the integral of the characteristic function $\chi_E$ with respect to the measure $\mu$ is $\mu(E)$. Of course, this only really makes sense if $\mu(E)<\infty$.
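To make this concrete, here is a minimal sketch in a toy *finite* measure space, where a measure is determined by nonnegative point weights. All the names here (`weights`, `mu`, the sample points) are illustrative assumptions, not anything from the post; the point is just that the integral of a characteristic function is, by definition, the measure of the set.

```python
# Toy finite measure space: mu is determined by nonnegative point weights.
# (Illustrative assumption -- the post works with a general measure space.)
weights = {"a": 2.0, "b": 0.5, "c": 1.5, "d": 3.0}

def mu(E):
    """Measure of a measurable set E: the total weight of its points."""
    return sum(weights[x] for x in E)

def integrate_char(E):
    """Integral of the characteristic function chi_E -- by definition, mu(E)."""
    return mu(E)

print(integrate_char({"a", "c"}))  # mu({a, c}) = 2.0 + 1.5 = 3.5
```

In a finite space every set has finite measure, so the finiteness caveat never bites; it only matters in spaces with sets of infinite measure.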

Now, we’re going to want our integral to be linear, and so given a linear combination $f=\sum\alpha_i\chi_{E_i}$ we define the integral $\displaystyle\int f\,d\mu=\int\left(\sum\alpha_i\chi_{E_i}\right)\,d\mu=\sum\alpha_i\int\chi_{E_i}\,d\mu=\sum\alpha_i\mu(E_i)$

Again, this only really makes sense if all the $E_i$ associated to nonzero $\alpha_i$ have finite measure. When this happens, we call our function $f$ “integrable”.
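In the same toy setting (a finite measure space with illustrative point weights; every name here is an assumption for the example), the definition by linearity is a one-line weighted sum:

```python
weights = {"a": 2.0, "b": 0.5, "c": 1.5}  # illustrative point weights

def mu(E):
    """Measure of a set E: the total weight of its points."""
    return sum(weights[x] for x in E)

def integrate_simple(terms):
    """Integral of f = sum of alpha_i * chi_{E_i}, where terms is a list of
    (alpha_i, E_i) pairs: by linearity, sum of alpha_i * mu(E_i)."""
    return sum(alpha * mu(E) for alpha, E in terms)

# f = 2*chi_{{a,b}} - chi_{{c}}
print(integrate_simple([(2.0, {"a", "b"}), (-1.0, {"c"})]))  # 2*2.5 - 1.5 = 3.5
```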

Since every simple function $f$ is a finite linear combination of characteristic functions, we can always use this to define the integral of any simple function. But there might be a problem: what if we have two different representations of a simple function as linear combinations of characteristic functions? Do we always get the same integral?

Well, first off we can always choose an expression for $f$ so that the $E_i$ are disjoint. As an example, say that we write $f=\alpha\chi_A+\beta\chi_B$, where $A$ and $B$ overlap. We can rewrite this as $f=\alpha\chi_{A\setminus B}+\beta\chi_{B\setminus A}+(\alpha+\beta)\chi_{A\cap B}$. If $f$ is integrable, then $A$ and $B$ both have finite measure, and so $\mu$ is subtractive. Thus we can verify $\displaystyle\begin{aligned}\int\left(\alpha\chi_{A\setminus B}+\beta\chi_{B\setminus A}+(\alpha+\beta)\chi_{A\cap B}\right)\,d\mu&=\alpha\mu(A\setminus B)+\beta\mu(B\setminus A)+(\alpha+\beta)\mu(A\cap B)\\&=\alpha\left(\mu(A)-\mu(A\cap B)\right)+\beta\left(\mu(B)-\mu(A\cap B)\right)+(\alpha+\beta)\mu(A\cap B)\\&=\alpha\mu(A)+\beta\mu(B)\\&=\int\left(\alpha\chi_A+\beta\chi_B\right)\,d\mu\end{aligned}$

Thus given any representation the corresponding disjoint representation gives us the same integral.
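We can check this numerically in a toy finite measure space (the weights, sets, and coefficients below are all illustrative assumptions): the overlapping representation and its disjoint rewriting integrate to the same number.

```python
weights = {"a": 2.0, "b": 0.5, "c": 1.5}  # illustrative point weights

def mu(E):
    """Measure of a set E: the total weight of its points."""
    return sum(weights[x] for x in E)

def integral(terms):
    """Integral of sum of alpha_i * chi_{E_i}, given as (alpha_i, E_i) pairs."""
    return sum(alpha * mu(E) for alpha, E in terms)

A, B = {"a", "b"}, {"b", "c"}   # overlapping sets
alpha, beta = 3.0, -1.0

# f = alpha*chi_A + beta*chi_B, written two ways:
overlapping = [(alpha, A), (beta, B)]
disjoint = [(alpha, A - B), (beta, B - A), (alpha + beta, A & B)]

print(integral(overlapping), integral(disjoint))  # both 5.5
```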

But what if we have two different disjoint representations $f=\sum\alpha_i\chi_{E_i}$ and $f=\sum\beta_j\chi_{F_j}$? Our function can only take a finite number of nonzero values $\{\gamma_k\}$. We can define $G_k$ to be the (measurable) set where $f$ takes the value $\gamma_k$. For any given $k$, we can consider all the $i$ so that $\alpha_i=\gamma_k$. The corresponding sets $E_i$ must be a disjoint partition of $G_k$, and additivity tells us that the sum of these $\mu(E_i)$ is equal to $\mu(G_k)$. But the same goes for the $F_j$ corresponding to values $\beta_j=\gamma_k$. And so both our representations give the same integral as $f=\sum\gamma_k\chi_{G_k}$. Everything in sight is linear, so this is all very straightforward.

At the end of the day, the integral of any simple function $f$ is well-defined so long as the preimage $G_k$ of each nonzero value $\gamma_k$ has finite measure. Again, we call such simple functions “integrable”.

May 24, 2010 Posted by | Analysis, Measure Theory | 10 Comments