The Picard Iteration Converges
Now that we’ve defined the Picard iteration, we have a sequence of functions $v_k$ from a closed neighborhood $J$ of $t_0$ to a closed neighborhood $\overline{B_r(a)}$ of $a$. Recall that we defined $M$ to be an upper bound of $\lvert F\rvert$ on $\overline{B_r(a)}$, $L$ to be a Lipschitz constant for $F$ on $\overline{B_r(a)}$, $c$ less than both $\frac{r}{M}$ and $\frac{1}{L}$, and $J=[t_0-c,t_0+c]$.
Specifically, we’ll show that the sequence converges in the supremum norm on $J$. That is, we’ll show that there is some function $v$ so that the maximum of the difference $\lvert v_k(t)-v(t)\rvert$ for $t\in J$ decreases to zero as $k$ increases. And we’ll do this by showing that the individual functions $v_k$ and $v_{k+1}$ get closer and closer in the supremum norm. Then they’ll form a Cauchy sequence, which we know must converge because the metric space defined by the supremum norm is complete, as are all the $L^p$ spaces.
Anyway, let $d=\lVert v_1-v_0\rVert_\infty$ be exactly the supremum norm of the difference between the first two functions in the sequence. I say that $\lVert v_{k+1}-v_k\rVert_\infty\leq(cL)^k d$. Indeed, we calculate inductively

$$\begin{aligned}
\lvert v_{k+1}(t)-v_k(t)\rvert
&=\left\lvert\int_{t_0}^t F(v_k(s))-F(v_{k-1}(s))\,ds\right\rvert\\
&\leq\lvert t-t_0\rvert\sup_{s\in J}\lvert F(v_k(s))-F(v_{k-1}(s))\rvert\\
&\leq cL\lVert v_k-v_{k-1}\rVert_\infty\\
&\leq cL\,(cL)^{k-1}d=(cL)^k d
\end{aligned}$$
Now we can bound the distance between any two functions in the sequence. If $m>n$ are two indices we calculate:

$$\lVert v_m-v_n\rVert_\infty\leq\sum_{k=n}^{m-1}\lVert v_{k+1}-v_k\rVert_\infty\leq\sum_{k=n}^{m-1}(cL)^k d$$
But this is a chunk of a geometric series; since $cL<1$, the series must converge, and so we can make this sum as small as we please by choosing $m$ and $n$ large enough.
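To make the estimate quantitative, we can sum the tail of the geometric series in closed form (using the same symbols as above: $v_k$ for the iterates, $d$ for the gap between the first two, and ratio $cL<1$):

$$\lVert v_m-v_n\rVert_\infty\leq\sum_{k=n}^{m-1}(cL)^k d\leq d\sum_{k=n}^{\infty}(cL)^k=\frac{(cL)^n}{1-cL}\,d$$

and the right-hand side goes to zero as $n\to\infty$, uniformly in $m$.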
This then tells us that our sequence of functions is $L^\infty$-Cauchy, and thus $L^\infty$-convergent to some function $v$, which implies uniform pointwise convergence. The uniformity is important because it means that we can exchange integration with the limiting process. That is,

$$\lim\limits_{k\to\infty}\int_{t_0}^t F(v_k(s))\,ds=\int_{t_0}^t\lim\limits_{k\to\infty}F(v_k(s))\,ds$$
And so we can start with our definition:

$$v_{k+1}(t)=a+\int_{t_0}^t F(v_k(s))\,ds$$
and take the limit of both sides

$$v(t)=\lim\limits_{k\to\infty}v_{k+1}(t)=a+\lim\limits_{k\to\infty}\int_{t_0}^t F(v_k(s))\,ds=a+\int_{t_0}^t F(v(s))\,ds$$
where we have used the continuity of $F$ to conclude that $F(v_k(s))$ converges to $F(v(s))$. This shows that the limiting function $v$ does indeed satisfy the integral equation, and thus the original initial value problem.
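The whole argument can be watched numerically. Here is a minimal sketch (all names and the concrete example are my own, not from the discussion above): we run the Picard iteration for the initial value problem $v'=v$, $v(0)=1$, whose exact solution is $e^t$, approximating each integral by cumulative trapezoids, and record the successive supremum-norm gaps.

```python
import numpy as np

# Illustrative choices (assumptions, not from the text):
# F(v) = v, initial value a = 1 at t_0 = 0, interval J = [0, 0.5].
F = lambda v: v
a = 1.0
t = np.linspace(0.0, 0.5, 501)

def picard_step(v):
    # One step of the iteration: a plus the integral of F(v_k) from t_0 to t,
    # computed by a cumulative trapezoid rule on the grid.
    f = F(v)
    integral = np.concatenate(
        ([0.0], np.cumsum((f[1:] + f[:-1]) / 2.0 * np.diff(t)))
    )
    return a + integral

v = np.full_like(t, a)  # v_0(t) = a, the constant function
sup_diffs = []          # successive gaps ||v_{k+1} - v_k|| in the sup norm
for _ in range(8):
    v_next = picard_step(v)
    sup_diffs.append(np.max(np.abs(v_next - v)))
    v = v_next

print(sup_diffs)                      # the gaps shrink at least geometrically
print(np.max(np.abs(v - np.exp(t))))  # sup-norm distance to the true solution
```

The printed gaps decrease like the geometric bound promises, and after a handful of steps the iterate is already very close to $e^t$ in the supremum norm.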