# The Unapologetic Mathematician

## Oscillation

Oscillation in a function is sort of a local and non-directional version of variation. If $f:X\rightarrow\mathbb{R}$ is a bounded function on some region $X\subseteq\mathbb{R}^n$, and if $T$ is a nonempty subset of $X$, then we define the oscillation of $f$ on $T$ by the formula

$\displaystyle\Omega_f(T)=\sup\limits_{x,y\in T}\left\{f(y)-f(x)\right\}$

measuring the greatest difference in values of $f$ on $T$.
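To get a concrete feel for this, here’s a quick numerical sketch (not part of the original definition): we can approximate the supremum by a maximum over a finite sample of $T$, using the fact that $\sup_{x,y\in T}\{f(y)-f(x)\}=\sup_T f-\inf_T f$. The helper name `oscillation` and the sample grid are my own choices.

```python
def oscillation(f, samples):
    """Approximate Omega_f(T) = sup {f(y) - f(x) : x, y in T}
    by (max f) - (min f) over a finite sample of T."""
    values = [f(t) for t in samples]
    return max(values) - min(values)

# Example: f(t) = t^2 on T = [0, 1]; the true oscillation is 1 - 0 = 1.
T = [i / 1000 for i in range(1001)]
print(oscillation(lambda t: t * t, T))  # 1.0
```

Of course, sampling only bounds the true supremum from below, but for reasonably tame functions it’s a fine illustration.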

We also want a version that’s localized to a single point $x\in X$. To do this, we first note that the collection of all subsets $N$ of $X$ which contain $x$ forms a poset, ordered as usual by inclusion. But we want to reverse this order and say that $M\preceq N$ if and only if $N\subseteq M$.

Now for any two subsets $x\in N_1\subseteq X$ and $x\in N_2\subseteq X$, their intersection $N_1\cap N_2$ is another such subset containing $x$. And since it’s contained in both $N_1$ and $N_2$, it’s above both of them in our partial order, which makes this poset a directed set, and the oscillation of $f$ is a net.

In fact, it’s easy to see that if $N\subseteq M$ then $\Omega_f(N)\leq\Omega_f(M)$, so this net is monotonically decreasing as the subset gets smaller and smaller. Further, we can see that $\Omega_f(N)\geq0$, since we can always consider the difference $f(t)-f(t)=0$ for any $t\in N$, and the supremum must be at least this big.

Anyhow, a monotonically decreasing net that’s bounded below must converge, so now we know that the net has a limit, and we define

$\displaystyle\omega_f(x)=\lim\Omega_f(N)$

where $N$ is a subset of $X$ containing $x$, and we take the limit as $N$ gets smaller and smaller.

In fact, this is slightly overdoing it. Our domain is a topological subspace of $\mathbb{R}^n$, and is thus a metric space. If we want we can just work with metric balls and define

$\displaystyle\omega_f(x)=\lim\limits_{r\rightarrow0^+}\Omega_f(N(x;r)\cap X)$

where $N(x;r)$ is the ball of radius $r$ around $x$. These definitions are exactly equivalent in metric spaces, but the net definition works in more general topological spaces, and it’s extremely useful in its own right later, so it’s worth thinking about now.
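Here’s a sketch of the metric-ball version in the same spirit as before (the helper name and parameters are my own): we estimate $\omega_f(x)$ by computing the oscillation over $N(x;r)\cap X$ for a few shrinking radii $r$.

```python
def local_oscillation(f, x, radii, n=2001, lo=-10.0, hi=10.0):
    """For each r, approximate Omega_f(N(x; r) ∩ X), where X = [lo, hi],
    by sampling n points of the intersection."""
    out = []
    for r in radii:
        a, b = max(lo, x - r), min(hi, x + r)
        pts = [a + (b - a) * i / (n - 1) for i in range(n)]
        vals = [f(t) for t in pts]
        out.append(max(vals) - min(vals))
    return out

step = lambda t: 0.0 if t < 0 else 1.0   # jump of size 1 at t = 0

radii = [1.0, 0.1, 0.01]
print(local_oscillation(abs, 0.0, radii))   # shrinks toward 0
print(local_oscillation(step, 0.0, radii))  # stays at 1.0
```

For $\lvert t\rvert$ the oscillations shrink with $r$, suggesting $\omega_f(0)=0$, while for the step function every ball around ${0}$ catches the full jump, suggesting $\omega_f(0)=1$ — which is exactly the dichotomy the continuity criterion below is about.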

Oscillation provides a nice way to restate our condition for continuity, and it works either using the metric space definition or the neighborhood definition of continuity. I’ll work it out in the latter case for generality, but it’s worth writing out the parallel proof for the $\epsilon$-$\delta$ definition.

Our assertion is that $f$ is continuous at a point $x$ if and only if $\omega_f(x)=0$. If $f$ is continuous, then for every $\epsilon$ there is some neighborhood $N$ of $x$ so that $\lvert f(y)-f(x)\rvert<\frac{\epsilon}{3}$ for all $y\in N$. Then we can check that

$f(y_2)-f(y_1)=\left(f(y_2)-f(x)\right)+\left(f(x)-f(y_1)\right)<\frac{\epsilon}{3}+\frac{\epsilon}{3}=\frac{2}{3}\epsilon$

for all $y_1$ and $y_2$ in $N$, and so $\Omega_f(N)\leq\frac{2}{3}\epsilon<\epsilon$. Further, any smaller neighborhood of $x$ will also satisfy this inequality, so the net is eventually within $\epsilon$ of $0$. Since this holds for any $\epsilon$, we find that the net has limit $0$.

Conversely, let’s assume that the oscillation of $f$ at $x$ is zero. That is, for any $\epsilon$ we have some neighborhood $N$ of $x$ so that $\Omega_f(N)<\epsilon$, and the same will automatically hold for smaller neighborhoods. This tells us that $f(y)-f(x)\leq\Omega_f(N)<\epsilon$ for all $y\in N$, and also $f(x)-f(y)<\epsilon$. Together, these tell us that $\lvert f(y)-f(x)\rvert<\epsilon$, and so $f$ is continuous at $x$.
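One cute consequence of this equivalence: estimated oscillations give a crude discontinuity detector. Here’s a sketch (the function name, threshold, and grid are all my own choices, and sampling can only ever approximate the true supremum): a sample point where the oscillation over a small ball stays large gets flagged as a likely discontinuity.

```python
def flagged_points(f, xs, r=1e-4, n=201, threshold=0.5):
    """Return those sample points x where the oscillation of f over
    the ball N(x; r) exceeds the threshold."""
    flagged = []
    for x in xs:
        pts = [x - r + 2 * r * i / (n - 1) for i in range(n)]
        vals = [f(t) for t in pts]
        if max(vals) - min(vals) > threshold:
            flagged.append(x)
    return flagged

floor_like = lambda t: float(int(t))  # for t >= 0, jumps at the integers
xs = [i / 2 for i in range(1, 8)]     # 0.5, 1.0, 1.5, ..., 3.5
print(flagged_points(floor_like, xs))  # [1.0, 2.0, 3.0]
```

At the half-integers the function is locally constant, so the oscillation is $0$ and the criterion certifies continuity; at the integers every ball catches a jump of size $1$, so $\omega_f>0$ there.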

December 7, 2009 | Analysis, Calculus