The Unapologetic Mathematician

Mathematics for the interested outsider

Change of Variables in Multiple Integrals I

In the one-variable Riemann and Riemann-Stieltjes integrals, we had a “change of variables” formula. This let us replace our variable of integration by a function of a new variable, and we got the same answer. This was useful because the resulting integrand might be simpler to antidifferentiate, making the fundamental theorem of calculus easier to apply.

In multiple variables we’ll have a similar formula, but it will have an additional use. Not only might it be used to simplify the integrand, but it might simplify the region of integration itself! Of course, there might also be a trade-off between these two considerations, as many students in multivariable calculus classes might remember. A substitution which simplifies the region of integration might make antidifferentiating the integrand (in any of the resulting variables) impractical, while another substitution which simplifies the integrand might make the region a nightmare to work with.

The formula in one variable looked something like this:

\displaystyle\int\limits_a^bf(x)\,dx=\int\limits_c^df(g(u))g'(u)\,du
where x=g(u) (along with the induced transformation dx=g'(u)\,du) is a continuously differentiable function on u\in[c,d] with a=g(c) and b=g(d). Notice that g(u) could extend out beyond [a,b], but if it went above b it would have to come back down again, covering the same region twice with opposite signs. This is related to the signed volumes we talked about, where (in one dimension) an interval can be traversed (integrated over) from left to right or from right to left.

The picture gets a little simpler when we assume that g is strictly monotonic. That is, either g is strictly increasing, g'(u)>0, and g([c,d])=[a,b]; or g is strictly decreasing, g'(u)<0, and g([c,d])=[b,a] (traversing in the opposite direction). In the first case, we can write our change of variables relation as

\displaystyle\int\limits_{[a,b]}f(x)\,dx=\int\limits_{[c,d]}f(g(u))g'(u)\,du
while in the second case, reversing the direction of integration introduces a negative sign:

\displaystyle\int\limits_{[a,b]}f(x)\,dx=-\int\limits_{[c,d]}f(g(u))g'(u)\,du
but in this case, the derivative g'(u) is strictly negative. We can combine it with this new sign, and rather elegantly write both cases as

\displaystyle\int\limits_{[a,b]}f(x)\,dx=\int\limits_{g^{-1}[a,b]}f(g(u))\lvert g'(u)\rvert\,du
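
As a quick sanity check (a numerical sketch of my own, not part of the original post), we can test this combined formula on a strictly decreasing substitution: take f(x)=1/x on [1/2,1] and x=g(u)=e^{-u}, so that g^{-1}[1/2,1]=[0,\ln 2] and \lvert g'(u)\rvert=e^{-u}. A simple midpoint rule approximates both sides:

```python
import math

def midpoint(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

f = lambda x: 1.0 / x
g = lambda u: math.exp(-u)            # strictly decreasing on [0, ln 2]
abs_dg = lambda u: math.exp(-u)       # |g'(u)| = |-e^{-u}| = e^{-u}

lhs = midpoint(f, 0.5, 1.0)                                   # integral of f over [1/2, 1]
rhs = midpoint(lambda u: f(g(u)) * abs_dg(u), 0.0, math.log(2))

# Both sides approximate ln 2 ≈ 0.693147
```

Here f(g(u))\lvert g'(u)\rvert=e^u\cdot e^{-u}=1, so the right-hand integral is just the length of [0,\ln 2], exactly as the formula predicts.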

In all of these cases, we know that the inverse function exists because of the inverse function theorem. Here the Jacobian determinant is simply the derivative g'(u), which we’re assuming is everywhere nonzero.

In essence, the idea was that \lvert g'(u)\rvert measures the factor by which g stretches intervals near u. That is, the tiny bit of one-dimensional volume du gets stretched into the tiny bit of (unsigned) one-dimensional volume dx=\lvert g'(u)\rvert\,du. And this works because at a very small scale, little changes in u transform almost linearly.
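
To make the stretching concrete (an illustrative example of mine, not from the original): take g(u)=u^2 near u=3, where g'(3)=6. A small interval of width \Delta u starting at u=3 is carried to an interval of width

```latex
(3+\Delta u)^2 - 3^2 = 6\,\Delta u + (\Delta u)^2 \approx \lvert g'(3)\rvert\,\Delta u
```

so to first order the interval is stretched by exactly the factor \lvert g'(3)\rvert=6, and the quadratic correction vanishes as \Delta u shrinks — this is the “almost linear” behavior at small scales.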

So in higher-dimensional spaces, we will assume that g transforms small enough changes in u almost linearly — g is differentiable — and that the Jacobian determinant J_g(u) is everywhere nonzero, so we can invert the transformation. This gives us hope that we can write something like

\displaystyle\int\limits_{S}f(x)\,dx=\int\limits_{g^{-1}(S)}f(g(u))\,dx
Since g is invertible, integrating as u ranges over g^{-1}(S) is the same as letting x=g(u) range over S, so the region of integration lines up, as does the integrand. All that’s left is to figure out how we should replace dx.

Now this dx is not the differential of a variable x. When it shows up in a multiple integral, it’s a tiny little bit of n-dimensional volume. And we measure the scaling of n-dimensional volumes with the Jacobian determinant! The same sign considerations as before tell us that either the Jacobian determinant is always positive or always negative, and in either case we can write

\displaystyle\int\limits_{S}f(x)\,dx=\int\limits_{g^{-1}(S)}f(g(u))\lvert J_g(u)\rvert\,du
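
We can test this numerically as well (again a sketch of my own, not from the post), with the standard polar-coordinate map: g(r,\theta)=(r\cos\theta,r\sin\theta) carries the rectangle [0,1]\times[0,2\pi] onto the unit disk S, and \lvert J_g(r,\theta)\rvert=r. Integrating f(x,y)=e^{-(x^2+y^2)} over the disk should give \pi(1-e^{-1}):

```python
import math

def midpoint_2d(h, ax, bx, ay, by, n=400):
    """Midpoint-rule approximation of the double integral of h over a rectangle."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            total += h(x, ay + (j + 0.5) * hy)
    return total * hx * hy

f = lambda x, y: math.exp(-(x * x + y * y))

# Right-hand side of the formula: f(g(u)) |J_g(u)| over g^{-1}(S),
# where g is the polar map and |J_g(r, t)| = r.
integrand = lambda r, t: f(r * math.cos(t), r * math.sin(t)) * r
approx = midpoint_2d(integrand, 0.0, 1.0, 0.0, 2 * math.pi)

exact = math.pi * (1 - math.exp(-1))  # the known value of the integral over the disk
```

Note how the change of variables simplifies both considerations at once here: the region becomes a rectangle, and the integrand e^{-r^2}r even has an elementary antiderivative.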

or, using our more Leibniz-style notation

\displaystyle\int\limits_{S}f(x)\,dx=\int\limits_{g^{-1}(S)}f(g(u))\left\lvert\frac{\partial(x^1,\dots,x^n)}{\partial(u^1,\dots,u^n)}\right\rvert\,du
We will start proving this formula next time.

January 5, 2010 Posted by | Analysis, Calculus | 5 Comments