# The Unapologetic Mathematician

## Change of Variables in Multiple Integrals I

In the one-variable Riemann and Riemann-Stieltjes integrals, we had a “change of variables” formula. This let us replace our variable of integration by a function of a new variable, and we got the same answer. This was useful because the resulting integrand might take a simpler form, one easier to handle with the fundamental theorem of calculus.

In multiple variables we’ll have a similar formula, but it will have an additional use. Not only might it be used to simplify the integrand, but it might simplify the region of integration itself! Of course, there might also be a trade-off between these two considerations, as many students in multivariable calculus classes might remember. A substitution which simplifies the region of integration might make antidifferentiating the integrand (in any of the resulting variables) impractical, while another substitution which simplifies the integrand might make the region a nightmare to work with.

The formula in one variable looked something like this:

$\displaystyle\int\limits_a^bf(x)\,dx=\int\limits_c^df(g(u))g'(u)\,du$

where $x=g(u)$ (along with the induced transformation $dx=g'(u)\,du$) is a continuously differentiable function on $u\in[c,d]$ with $a=g(c)$ and $b=g(d)$. Notice that $g(u)$ could extend out beyond $[a,b]$, but if it went above $b$ it would have to come back down again, covering the same region twice with opposite signs. This is related to the signed volumes we talked about, where (in one dimension) an interval can be traversed (integrated over) from left to right or from right to left.
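As a quick sanity check on this formula, we can compare the two sides numerically for a concrete choice of $f$ and $g$. This is just an illustrative sketch (the names `f`, `g`, `gprime`, and `integrate` are ours, not anything standard), taking $f(x)=\cos x$ and $g(u)=u^3$, which maps $[0,1]$ onto $[0,1]$:

```python
import math

def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

f = math.cos                   # integrand f(x)
g = lambda u: u**3             # substitution x = g(u)
gprime = lambda u: 3 * u**2    # g'(u)

# Left side: integrate f over [a, b] = [g(0), g(1)] = [0, 1]
lhs = integrate(f, g(0.0), g(1.0))

# Right side: integrate f(g(u)) g'(u) over [c, d] = [0, 1]
rhs = integrate(lambda u: f(g(u)) * gprime(u), 0.0, 1.0)
```

Both sides come out to $\sin(1)\approx0.8415$, as the formula predicts.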

The picture gets a little simpler when we assume that $g$ is strictly monotonic. That is, either $g$ is strictly increasing, $g'(u)>0$, and $g([c,d])=[a,b]$; or $g$ is strictly decreasing, $g'(u)<0$, and $g([c,d])=[b,a]$ (traversing in the opposite direction). In the first case, we can write our change of variables relation as

$\displaystyle\int\limits_{[a,b]}f(x)\,dx=\int\limits_{g^{-1}[a,b]}f(g(u))g'(u)\,du$

while in the second case, reversing the direction of integration entails adding a negative sign:

$\displaystyle\int\limits_{[a,b]}f(x)\,dx=-\int\limits_{g^{-1}[a,b]}f(g(u))g'(u)\,du$

But in this case the derivative $g'(u)$ is strictly negative. We can combine it with this new sign and rather elegantly write both cases as

$\displaystyle\int\limits_{[a,b]}f(x)\,dx=\int\limits_{g^{-1}[a,b]}f(g(u))\lvert g'(u)\rvert\,du$

In all of these cases, we know that the inverse function exists because of the inverse function theorem. Here the Jacobian determinant is simply the derivative $g'(u)$, which we’re assuming is everywhere nonzero.
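We can also check the absolute-value form of the formula in the strictly decreasing case. Here is a small numerical sketch (again with illustrative names of our own choosing), using $f(x)=x^2$ and the decreasing substitution $g(u)=1-u$ on $[0,1]$, where $g'(u)=-1$ and so $\lvert g'(u)\rvert=1$:

```python
import math

def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**2
g = lambda u: 1.0 - u        # strictly decreasing, g'(u) = -1
abs_gprime = lambda u: 1.0   # |g'(u)|

# Left side: integrate f over [0, 1]; exact value is 1/3
lhs = integrate(f, 0.0, 1.0)

# Right side: integrate f(g(u)) |g'(u)| over g^{-1}([0, 1]) = [0, 1]
rhs = integrate(lambda u: f(g(u)) * abs_gprime(u), 0.0, 1.0)
```

Both sides agree at $1/3$; the absolute value has absorbed the sign that reversing the direction of integration would otherwise introduce.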

In essence, the idea was that $\lvert g'(u)\rvert$ measures the factor by which $g$ stretches intervals near $u$. That is, the tiny bit of one-dimensional volume $du$ gets stretched into the tiny bit of (unsigned) one-dimensional volume $dx=\lvert g'(u)\rvert\,du$. And this works because at a very small scale, little changes in $u$ transform almost linearly.

So in higher-dimensional spaces, we will assume that $g$ transforms small enough changes in $u$ almost linearly — $g$ is differentiable — and that the Jacobian determinant $J_g(u)$ is everywhere nonzero, so we can invert the transformation. This gives us hope that we can write something like

$\displaystyle\int\limits_{S}f(x)\,dx=\int\limits_{g^{-1}(S)}f(g(u))\,dx$

Since $g$ is invertible, integrating as $u$ ranges over $g^{-1}(S)$ is the same as letting $x=g(u)$ range over $S$, so the region of integration lines up, as does the integrand. All that’s left is to figure out how we should replace $dx$.

Now this $dx$ is not the differential of a variable $x$. When it shows up in a multiple integral, it’s a tiny little bit of $n$-dimensional volume. And we measure the scaling of $n$-dimensional volumes with the Jacobian determinant! The same sign considerations as before tell us that either the Jacobian determinant is always positive or always negative, and in either case we can write

$\displaystyle\int\limits_{S}f(x)\,dx=\int\limits_{g^{-1}(S)}f(g(u))\lvert J_g(u)\rvert\,du$

or, using our more Leibniz-style notation

$\displaystyle\int\limits_{S}f(x^1,\dots,x^n)\,d(x^1,\dots,x^n)=\int\limits_{g^{-1}(S)}f(g(u^1,\dots,u^n))\left\lvert\frac{\partial(x^1,\dots,x^n)}{\partial(u^1,\dots,u^n)}\right\rvert\,d(u^1,\dots,u^n)$

We will start proving this formula next time.
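Before the proof, a two-dimensional sanity check of the formula is easy to run. The classic example is the polar substitution $x=r\cos\theta$, $y=r\sin\theta$, whose Jacobian determinant is $r$. The sketch below (our own illustrative code, with `double_integral` as an ad hoc midpoint rule over the unit square) integrates $f(x,y)=e^{-(x^2+y^2)}$ over the quarter disk $S=\{x^2+y^2\le1,\ x,y\ge0\}$ both directly and via polar coordinates, where $g^{-1}(S)$ is the rectangle $[0,1]\times[0,\pi/2]$:

```python
import math

def double_integral(h, n=400):
    """Midpoint sum of h over the unit square [0,1] x [0,1]."""
    d = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += h((i + 0.5) * d, (j + 0.5) * d)
    return total * d * d

f = lambda x, y: math.exp(-(x**2 + y**2))

# Direct Cartesian integral over the quarter disk, via an indicator function
cart = double_integral(lambda x, y: f(x, y) if x * x + y * y <= 1.0 else 0.0)

# Polar substitution with |J| = r; the theta coordinate is rescaled from
# [0,1] to [0, pi/2], which contributes the extra factor pi/2
polar = double_integral(
    lambda r, t: f(r * math.cos(t * math.pi / 2),
                   r * math.sin(t * math.pi / 2)) * r * (math.pi / 2))
```

Both approximations converge to the exact value $\frac{\pi}{4}\left(1-e^{-1}\right)\approx0.4965$; the Cartesian version is slower to converge because of the jagged indicator-function boundary, which is exactly the sort of awkward region the substitution smooths away.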

January 5, 2010 - Posted by | Analysis, Calculus
