Real-Valued Functions of a Single Real Variable
At long last we can really start getting into one of the most basic kinds of functions: those which take a real number in and spit a real number out. Quite a lot of mathematics is based on a good understanding of how to take these functions apart and put them together in different ways — to analyze them. And so we have the topic of “real analysis”. At our disposal we have a toolbox with various methods for calculating and dealing with these sorts of functions, which we call “calculus”. Really, all calculus is is a collection of techniques for understanding what makes these functions tick.
Sitting behind everything else, we have the real number system — the unique ordered topological field which is big enough to contain limits of all Cauchy sequences (so it’s a complete uniform space) and least upper bounds for all nonempty subsets which have any upper bounds at all (so the order is Dedekind complete), and yet small enough to exclude infinitesimals and infinites (so it’s Archimedean).
Because the properties that make the real numbers do their thing are all wrapped up in the topology, it’s no surprise that we’re really interested in continuous functions, and we have quite a lot of them. At the most basic, the constant function $f(x) = c$ for all real numbers $x$ is continuous, as is the identity function $f(x) = x$.
We also have ways of combining continuous functions, many of which are essentially inherited from the field structure on $\mathbb{R}$. We can add and multiply functions just by adding and multiplying their values, and we can multiply a function by a real number too.
Since all the nice properties of these algebraic constructions carry over from $\mathbb{R}$, this makes the collection of continuous functions into an algebra over the field of real numbers. We get additive inverses as usual in a module by multiplying by $-1$, so we have an $\mathbb{R}$-module using addition and scalar multiplication. We have a bilinear multiplication because of the distributive law holding in the ring $\mathbb{R}$ where our functions take their values. We also have a unit for multiplication — the constant function $1$ — and a commutative law for multiplication. I’ll leave you to verify that all these operations give back continuous functions when we start with continuous functions.
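These pointwise operations are simple enough to sketch in code. Here is a minimal illustration in Python, with functions modeled as plain callables (the helper names `add`, `mul`, and `scale` are my own, not standard):

```python
# Pointwise operations that make real-valued functions into an R-algebra.

def add(f, g):
    """(f + g)(x) = f(x) + g(x)"""
    return lambda x: f(x) + g(x)

def mul(f, g):
    """(f * g)(x) = f(x) * g(x)"""
    return lambda x: f(x) * g(x)

def scale(c, f):
    """(c * f)(x) = c * f(x), scalar multiplication by the real number c."""
    return lambda x: c * f(x)

identity = lambda x: x   # the identity function
one = lambda x: 1        # the constant function 1, the unit for multiplication

# Additive inverses come from scaling by -1, as in any module:
neg = lambda f: scale(-1, f)

# Building a new function from the generators: h(x) = x^2 + 3x
h = add(mul(identity, identity), scale(3, identity))
```

One can check directly that, say, `h(2)` gives `10`, and that multiplying by `one` changes nothing — the unit law in action.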
What we don’t have is division. Multiplicative inverses are tough because we can’t invert any function which takes the value zero anywhere. Even the reciprocal $\frac{1}{x}$ of the identity function is very much not continuous at $0$. In fact, it’s not even defined there! So how can we deal with this?
Well, the answer is sitting right there. The function $\frac{1}{x}$ is not continuous at that point. We have two definitions (by neighborhood systems and by nets) of what it means for a function between two topological spaces to be continuous at one point or another, and we said a function is continuous if it’s continuous at every point in its domain. So we can throw out some points and restrict our attention to a subspace where the function is continuous. Here, for instance, we can define a function $f: \mathbb{R}\setminus\{0\} \to \mathbb{R}$ by $f(x) = \frac{1}{x}$, and this function is continuous at each point in its domain.
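One way to model this in code is to carry the domain along with the rule, so that a function simply refuses to be evaluated outside the subspace where it lives. A sketch in Python (the `PartialFn` class and its names are my own invention for illustration):

```python
class PartialFn:
    """A real-valued function bundled with an explicit domain predicate."""

    def __init__(self, rule, in_domain):
        self.rule = rule            # how to compute f(x)
        self.in_domain = in_domain  # which x the function is defined at

    def __call__(self, x):
        if not self.in_domain(x):
            raise ValueError(f"{x} is not in the domain")
        return self.rule(x)

# The reciprocal, defined only on the nonzero reals:
reciprocal = PartialFn(lambda x: 1 / x, lambda x: x != 0)
```

Evaluating `reciprocal(2)` gives `0.5`, while `reciprocal(0)` raises an error rather than pretending the function has a value there.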
So what we should really be considering is this: for each subspace $X \subseteq \mathbb{R}$ we have a collection $C(X)$ of those real-valued functions which are continuous on $X$. Each of these is a commutative $\mathbb{R}$-algebra, just like we saw for the collection $C(\mathbb{R})$ of functions continuous on all of $\mathbb{R}$.
But we may come up with two functions over different domains that we want to work with. How do we deal with them together? Well, let’s say we have a function $f \in C(X)$ and another one $g \in C(Y)$, where $Y \subseteq X$. We may not be able to work with $g$ at the points of $X$ that aren’t in $Y$, but we can certainly work with $f$ at just those points of $X$ that happen to be in $Y$. That is, we can restrict the function $f \in C(X)$ to the function $f|_Y \in C(Y)$. It’s the exact same function, except it’s only defined on $Y$ instead of all of $X$. This gives us a homomorphism of $\mathbb{R}$-algebras $C(X) \to C(Y)$. (If you’ve been reading along for a while, how would a category theorist say this?)
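The claim that restriction is an algebra homomorphism — that restricting and then combining agrees with combining and then restricting — can be checked on a toy model. In this sketch a function on a finite sample of a domain is just a Python dict from points to values, which is my own stand-in for honest real domains:

```python
# Toy model: a function on a (finite sample of a) domain X is a dict
# {point: value}; restriction just keeps the keys lying in the subdomain.

def restrict(f, Y):
    """The restriction map C(X) -> C(Y), for Y a subset of X."""
    return {x: v for x, v in f.items() if x in Y}

def add(f, g):
    return {x: f[x] + g[x] for x in f.keys() & g.keys()}

def mul(f, g):
    return {x: f[x] * g[x] for x in f.keys() & g.keys()}

X = {-2, -1, 0, 1, 2}
Y = {-1, 1, 2}
f = {x: x * x for x in X}   # f(x) = x^2 on X
g = {x: x + 1 for x in X}   # g(x) = x + 1 on X

# Homomorphism property: restrict-then-combine equals combine-then-restrict.
sum_then_restrict = restrict(add(f, g), Y)
restrict_then_sum = add(restrict(f, Y), restrict(g, Y))
```

The two results agree point by point on $Y$, and the same check goes through for multiplication.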
As an example, we have the identity function $x \in C(\mathbb{R})$ and the reciprocal function $\frac{1}{x} \in C(\mathbb{R}\setminus\{0\})$. We can restrict the identity function by forgetting that it has a value at $0$ to get another function in $C(\mathbb{R}\setminus\{0\})$, which we will also denote by $x$. Then we can multiply to get the function $x\cdot\frac{1}{x} \in C(\mathbb{R}\setminus\{0\})$. Notice that the resulting function we get is not the constant function $1$ on $\mathbb{R}$ because it’s not defined at $0$.
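To make this bookkeeping concrete, here is a small self-contained Python sketch. Modeling a function as a pair of a rule and a domain predicate is purely my own choice; the point is only that the product $x\cdot\frac{1}{x}$ comes out defined on the nonzero reals, not on all of $\mathbb{R}$:

```python
# A function is a (rule, domain_predicate) pair.

identity = (lambda x: x, lambda x: True)         # defined everywhere
reciprocal = (lambda x: 1 / x, lambda x: x != 0) # defined off of 0

def restrict(fn, pred):
    """Restrict a function to the points of its domain satisfying pred."""
    rule, dom = fn
    return (rule, lambda x: dom(x) and pred(x))

def mul(fn, gn):
    """Pointwise product, defined on the intersection of the two domains."""
    (f, fdom), (g, gdom) = fn, gn
    return (lambda x: f(x) * g(x), lambda x: fdom(x) and gdom(x))

# Forget that the identity has a value at 0, then multiply by 1/x:
x_restricted = restrict(identity, lambda x: x != 0)
prod_rule, prod_dom = mul(x_restricted, reciprocal)
```

At every nonzero point the product evaluates to $1$, but its domain predicate rejects $0$ — so it is not the constant function $1$ on all of $\mathbb{R}$.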
Now as far as language goes, we usually drop all mention of domains and assume by default that the domain is “wherever the function makes sense”. That is, whenever we see $\frac{1}{x}$ we automatically restrict to nonzero real numbers, and whenever we combine two functions on different domains we automatically restrict to the intersection of their domains, all without explicit comment.
We do have to be a bit careful here, though, because when we see $x\cdot\frac{1}{x}$, we also restrict to nonzero real numbers. This is not the constant function $1$ because as it stands it’s not defined for $x = 0$. Clearly, this is a little nutty and pedantic, so tomorrow we’ll come back and see how to cope with it.