Change of Variables in Multiple Integrals I
In the one-variable Riemann and Riemann-Stieltjes integrals, we had a “change of variables” formula. This let us replace our variable of integration by a function of a new variable, and we got the same answer. This was useful because the form of the resulting integrand might have been simpler to work with in terms of using the fundamental theorem of calculus.
In multiple variables we’ll have a similar formula, but it will have an additional use. Not only might it be used to simplify the integrand, but it might simplify the region of integration itself! Of course, there might also be a trade-off between these two considerations, as many students in multivariable calculus classes might remember. A substitution which simplifies the region of integration might make antidifferentiating the integrand (in any of the resulting variables) impractical, while another substitution which simplifies the integrand might make the region a nightmare to work with.
The formula in one variable looked something like this:

$$\int\limits_{g(c)}^{g(d)}f(x)\,dx=\int\limits_c^df(g(t))g'(t)\,dt$$

where $g$ (along with the induced transformation $x=g(t)$) is a continuously differentiable function on $[c,d]$ with $g(c)=a$ and $g(d)=b$. Notice that $g$ could extend out beyond $[a,b]$, but if it went above $b$ it would have to come back down again, covering the same region twice with opposite signs. This is related to the signed volumes we talked about, where (in one dimension) an interval can be traversed (integrated over) from left to right or from right to left.
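To make the sign cancellation concrete, here is a quick numerical sanity check (the particular $f$ and $g$ are illustrative choices, not taken from the text): $g(t)=t+\sin(2\pi t)$ runs from $0$ to $1$ but overshoots $1$ along the way, and the overshoot cancels out of the signed integral.

```python
import math

def simpson(h, a, b, n=10_000):
    """Composite Simpson's rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

# A non-monotonic substitution: g(0) = 0 and g(1) = 1, but g overshoots
# past 1 and comes back, covering the extra region twice with opposite
# signs, so the signed integral is unaffected.
f = lambda x: x * x
g = lambda t: t + math.sin(2 * math.pi * t)
gp = lambda t: 1 + 2 * math.pi * math.cos(2 * math.pi * t)  # g' changes sign

lhs = simpson(f, 0.0, 1.0)                          # integral of x^2 over [0, 1] = 1/3
rhs = simpson(lambda t: f(g(t)) * gp(t), 0.0, 1.0)  # substituted integral
print(lhs, rhs)  # both are approximately 0.33333
```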
The picture gets a little simpler when we assume that $g$ is strictly monotonic. That is, either $g$ is strictly increasing, $g(c)=a$, and $g(d)=b$; or $g$ is strictly decreasing, $g(c)=b$, and $g(d)=a$ (traversing $[a,b]$ in the opposite direction). In the first case, we can write our change of variables relation as

$$\int\limits_{[a,b]}f(x)\,dx=\int\limits_{[c,d]}f(g(t))g'(t)\,dt$$
while in the second case, reversing the direction of integration entails adding a negative sign:

$$\int\limits_{[a,b]}f(x)\,dx=-\int\limits_{[c,d]}f(g(t))g'(t)\,dt$$

But in this case the derivative $g'(t)$ is strictly negative, so we can absorb the new sign into the derivative and rather elegantly write both cases as

$$\int\limits_{[a,b]}f(x)\,dx=\int\limits_{[c,d]}f(g(t))\left\lvert g'(t)\right\rvert\,dt$$
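As a sketch of the absolute-value form (again with illustrative choices of $f$ and $g$): the strictly decreasing substitution $g(t)=\cos t$ maps $[0,\pi/2]$ onto $[0,1]$ in reverse, and taking $\lvert g'(t)\rvert$ handles the orientation reversal automatically.

```python
import math

def simpson(h, a, b, n=10_000):
    """Composite Simpson's rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

# g(t) = cos(t) is strictly decreasing on [0, pi/2], with g'(t) = -sin(t) < 0
# on the interior.  Using |g'(t)| lets us integrate over the plain intervals
# [0, 1] and [0, pi/2] without tracking direction.
f = lambda x: x * x
lhs = simpson(f, 0.0, 1.0)  # integral of x^2 over [0, 1] = 1/3
rhs = simpson(lambda t: f(math.cos(t)) * abs(-math.sin(t)), 0.0, math.pi / 2)
print(lhs, rhs)  # both are approximately 0.33333
```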
In all of these cases, we know that the inverse function $g^{-1}$ exists because of the inverse function theorem. Here the Jacobian determinant is simply the derivative $g'(t)$, which we're assuming is everywhere nonzero.
In essence, the idea was that $\lvert g'(t)\rvert$ measures the factor by which $g$ stretches intervals near $t$. That is, the tiny bit of one-dimensional volume $dt$ gets stretched into the tiny bit of (unsigned) one-dimensional volume $\lvert g'(t)\rvert\,dt$. And this works because at a very small scale, little changes in $t$ transform almost linearly.
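The local stretching claim can be checked directly (with an arbitrary illustrative choice of $g$ and base point): the image of a tiny interval $[t,t+h]$ has length close to $\lvert g'(t)\rvert h$.

```python
# For a differentiable g, the image of the tiny interval [t, t+h] is an
# interval of length roughly |g'(t)| * h.  (g and the base point t are
# arbitrary illustrative choices.)
g = lambda t: t ** 3
gp = lambda t: 3 * t ** 2

t, h = 2.0, 1e-6
stretched = abs(g(t + h) - g(t))  # length of the image interval
ratio = stretched / h             # observed stretching factor
print(ratio, abs(gp(t)))          # both are approximately 12.0
```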
So in higher-dimensional spaces, we will assume that $g:S\to\mathbb{R}^n$ transforms small enough changes in $t$ almost linearly (that is, $g$ is differentiable), and that the Jacobian determinant is everywhere nonzero, so we can invert the transformation. This gives us hope that we can write something like

$$\int\limits_{g(S)}f(x)\,d(x_1,\dots,x_n)=\int\limits_Sf(g(t))\,?\,d(t_1,\dots,t_n)$$
Since $g$ is invertible, integrating as $x$ ranges over $g(S)$ is the same as letting $t$ range over $S$, so the region of integration lines up, as does the integrand. All that's left is to figure out how we should replace $d(x_1,\dots,x_n)$.
Now this $d(x_1,\dots,x_n)$ is not the differential of a single variable $x$. When it shows up in a multiple integral, it's a tiny little bit of $n$-dimensional volume. And we measure the scaling of $n$-dimensional volumes with the Jacobian determinant! The same sign considerations as before tell us that either the Jacobian determinant is always positive or always negative, and in either case we can write

$$\int\limits_{g(S)}f(x)\,d(x_1,\dots,x_n)=\int\limits_Sf(g(t))\left\lvert J_g(t)\right\rvert\,d(t_1,\dots,t_n)$$
or, using our more Leibniz-style notation,

$$\int\limits_{g(S)}f(x)\,d(x_1,\dots,x_n)=\int\limits_Sf(g(t))\left\lvert\frac{\partial(x_1,\dots,x_n)}{\partial(t_1,\dots,t_n)}\right\rvert\,d(t_1,\dots,t_n)$$
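As a concrete instance of the $n$-dimensional formula (polar coordinates are a standard example, not one introduced above): for $g(r,\theta)=(r\cos\theta,r\sin\theta)$ the Jacobian determinant is $r$, so integrating $x^2+y^2$ over the unit disk becomes an iterated integral of $r^2\cdot r$, whose exact value is $\pi/2$.

```python
import math

def simpson(h, a, b, n=200):
    """Composite Simpson's rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

# Change to polar coordinates: x = r cos(theta), y = r sin(theta), with
# Jacobian determinant r.  The integrand x^2 + y^2 becomes r^2, so the
# transformed integrand is r^2 * |r| = r^3 on [0, 1] x [0, 2*pi].
inner = lambda theta: simpson(lambda r: r ** 3, 0.0, 1.0)
polar = simpson(inner, 0.0, 2 * math.pi)
print(polar, math.pi / 2)  # both are approximately 1.5707963
```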
We will start proving this formula next time.