We’re trying to invert a function $f:X\to\mathbb{R}^n$ which is continuously differentiable on some region $X\subseteq\mathbb{R}^n$. That is, we know that if $a$ is a point where the Jacobian determinant $J_f(a)\neq0$, then there is a ball $N$ around $a$ where $f$ is one-to-one onto some neighborhood $f(N)$ around $f(a)$. Then if $y$ is a point in $f(N)$, we’ve got a system of equations

$$f^j(x^1,\dots,x^n)=y^j$$

that we want to solve for all the $x^i$.
We know how to handle this if $f$ is defined by a linear transformation, represented by a matrix $A=\left(a_i^j\right)$:

$$y^j=a_i^jx^i$$

In this case, the Jacobian transformation is just the function $f$ itself, and so the Jacobian determinant is nonzero if and only if the matrix $A$ is invertible. And so our solution depends on finding the inverse $A^{-1}$ and solving

$$x^i=\left(A^{-1}\right)_j^iy^j$$
This is the approach we’d like to generalize. But to do so, we need a more specific method of finding the inverse.
This is where Cramer’s rule comes in, and it starts by analyzing the way we calculate the determinant of a matrix $A$. This formula

$$\det(A)=\sum\limits_{\pi\in S_n}\mathrm{sgn}(\pi)a_1^{\pi(1)}\cdots a_n^{\pi(n)}$$

involves a sum over all the permutations $\pi\in S_n$, and we want to consider the order in which we add up these terms. If we fix an index $i$, we can factor out each matrix entry in the $i$th column:

$$\det(A)=\sum\limits_{j=1}^na_i^j\sum\limits_{\substack{\pi\in S_n\\\pi(i)=j}}\mathrm{sgn}(\pi)a_1^{\pi(1)}\cdots\widehat{a_i^{\pi(i)}}\cdots a_n^{\pi(n)}$$
where the hat indicates that we omit the $i$th term in the product. For a given value of $j$, we can consider the restricted sum

$$A_j^i=\sum\limits_{\substack{\pi\in S_n\\\pi(i)=j}}\mathrm{sgn}(\pi)a_1^{\pi(1)}\cdots\widehat{a_i^{\pi(i)}}\cdots a_n^{\pi(n)}$$

which is $(-1)^{i+j}$ times the determinant of the $(i,j)$ “minor” of the matrix $A$. That is, if we strike out the row and column of $A$ which contain $a_i^j$ and take the determinant of the remaining $(n-1)\times(n-1)$ matrix, we multiply this by $(-1)^{i+j}$ to get $A_j^i$. These are the entries in the “adjugate” matrix $\mathrm{adj}(A)$.
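To make the bookkeeping concrete, here’s a small Python sketch (the matrix entries are just sample numbers I chose): it computes the determinant as the signed sum over permutations, and checks that expanding down a column against the cofactors — $(-1)^{i+j}$ times the minors — reproduces the same value.

```python
from itertools import permutations
from math import prod

def sign(pi):
    """Sign of a permutation: -1 raised to the number of inversions."""
    return (-1) ** sum(pi[i] > pi[j]
                       for i in range(len(pi))
                       for j in range(i + 1, len(pi)))

def det(A):
    """Determinant as the signed sum over all permutations."""
    n = len(A)
    return sum(sign(pi) * prod(A[pi[k]][k] for k in range(n))
               for pi in permutations(range(n)))

def cofactor(A, i, j):
    """(-1)^(i+j) times the minor: strike out row i and column j."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(minor)

A = [[2, 0, 1],
     [1, 3, 2],
     [0, 1, 1]]

# Expanding down column 0 against the cofactors reproduces det(A)
expansion = sum(A[i][0] * cofactor(A, i, 0) for i in range(3))
print(det(A), expansion)
```

The point is that grouping the permutation sum by the value of $\pi(i)$ is exactly a cofactor expansion along the $i$th column.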
What we’ve shown is that

$$a_i^jA_j^i=\det(A)$$

(no summation on $i$). It’s not hard to show, however, that if we use a different row from the adjugate matrix we find

$$a_i^jA_j^k=\delta_i^k\det(A)$$

That is, the adjugate times the original matrix is the determinant of $A$ times the identity matrix. And so if $\det(A)\neq0$ we find

$$\left(A^{-1}\right)_j^k=\frac{1}{\det(A)}A_j^k$$
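As a sanity check, here’s a Python sketch (with a sample matrix of my own choosing, in exact rational arithmetic) that builds the adjugate as the transpose of the cofactor matrix and verifies that multiplying it against the original matrix gives the determinant times the identity:

```python
from fractions import Fraction

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix: adj(A)[i][j] = (-1)^(i+j) M(j,i)."""
    n = len(A)
    def cof(i, j):
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
        return (-1) ** (i + j) * det(minor)
    return [[cof(j, i) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[Fraction(2), Fraction(0), Fraction(1)],
     [Fraction(1), Fraction(3), Fraction(2)],
     [Fraction(0), Fraction(1), Fraction(1)]]

d = det(A)
product = matmul(adjugate(A), A)   # should be det(A) times the identity
inverse = [[entry / d for entry in row] for row in adjugate(A)]
```

Using `Fraction` keeps everything exact, so the identity $\mathrm{adj}(A)A=\det(A)I$ can be checked with straight equality rather than floating-point tolerances.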
So what does this mean for our system of equations? We can write

$$x^k=\left(A^{-1}\right)_j^ky^j=\frac{1}{\det(A)}A_j^ky^j$$

But how does the sum $A_j^ky^j$ differ from the one we used before (without summing on $k$) to calculate the determinant of $A$? We’ve replaced the $k$th column of $A$ by the column vector $y$, and so this is just another determinant, taken after performing this replacement!
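This column-replacement recipe is easy to sketch in Python (the coefficients below are again sample values, not from the text): each unknown $x^k$ is the determinant of $A$ with its $k$th column replaced by $y$, divided by $\det(A)$.

```python
from fractions import Fraction

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer_solve(A, y):
    """Solve A x = y: replace column k of A by y and divide by det(A)."""
    d = det(A)
    return [det([row[:k] + [y[i]] + row[k + 1:]
                 for i, row in enumerate(A)]) / d
            for k in range(len(A))]

A = [[Fraction(2), Fraction(0), Fraction(1)],
     [Fraction(1), Fraction(3), Fraction(2)],
     [Fraction(0), Fraction(1), Fraction(1)]]
y = [Fraction(3), Fraction(6), Fraction(2)]

x = cramer_solve(A, y)
```

Note that this is a statement about the solution, not an efficient algorithm: it computes $n+1$ determinants where one elimination would do.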
Here’s an example. Let’s say we’ve got a system written in matrix form

$$\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}p\\q\end{pmatrix}$$

The entry in the $i$th row and $j$th column of the adjugate matrix is calculated by striking out the $i$th column and $j$th row of our original matrix, taking the determinant of the remaining matrix, and multiplying by $(-1)^{i+j}$. We get

$$\mathrm{adj}\begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}d&-b\\-c&a\end{pmatrix}$$

and thus we find

$$\begin{pmatrix}x\\y\end{pmatrix}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\begin{pmatrix}p\\q\end{pmatrix}$$
where we note that

$$ad-bc=\det\begin{pmatrix}a&b\\c&d\end{pmatrix}$$

In other words, our solution is given by ratios of determinants:

$$x=\frac{\det\begin{pmatrix}p&b\\q&d\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}\qquad y=\frac{\det\begin{pmatrix}a&p\\c&q\end{pmatrix}}{\det\begin{pmatrix}a&b\\c&d\end{pmatrix}}$$
and similar formulae hold for larger systems of equations.
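For the two-by-two case, those ratios of determinants can be evaluated directly. Here’s a tiny Python sketch with made-up numeric coefficients standing in for $a,b,c,d,p,q$:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Sample system:  2x + 1y = 4,  5x + 3y = 11
a, b, c, d = 2, 1, 5, 3
p, q = 4, 11

x = det2(p, b, q, d) / det2(a, b, c, d)   # first column replaced by (p, q)
y = det2(a, p, c, q) / det2(a, b, c, d)   # second column replaced by (p, q)
print(x, y)   # 1.0 2.0
```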
Sorry for the late post. I didn’t get a chance to get it up this morning before my flight.
Brace yourself. Just like last time we’ve got a messy technical lemma about what happens when the Jacobian determinant of a function is nonzero.
This time we’ll assume that $f:X\to\mathbb{R}^n$ is not only continuous, but continuously differentiable on a region $X\subseteq\mathbb{R}^n$. We also assume that the Jacobian determinant $J_f(a)\neq0$ at some point $a\in X$. Then I say that there is some neighborhood $N$ of $a$ so that $f$ is injective on $N$.
First, we take $n$ points $z_1,\dots,z_n$ in $X$ and make a function of them

$$h(z_1,\dots,z_n)=\det\left(\frac{\partial f^i}{\partial x^j}\bigg\vert_{z_i}\right)$$

That is, we take the $j$th partial derivative of the $i$th component function $f^i$ and evaluate it at the $i$th sample point $z_i$ to get the $(i,j)$ entry of a matrix, and then we take the determinant of this matrix. As a particular value, we have

$$h(a,\dots,a)=J_f(a)\neq0$$

Since each partial derivative is continuous, and the determinant is a polynomial in its entries, this function is continuous where it’s defined. And so there’s some ball $N$ around $a$ so that if all the $z_i$ are in $N$ we have $h(z_1,\dots,z_n)\neq0$. We want to show that $f$ is injective on $N$.
So, let’s take two points $x$ and $y$ in $N$ so that $f(x)=f(y)$. Since the ball is convex, the line segment $[x,y]$ is completely contained within $N\subseteq X$, and so we can bring the mean value theorem to bear. For each component function we can write

$$0=f^i(y)-f^i(x)=\frac{\partial f^i}{\partial x^j}\bigg\vert_{\xi_i}\left(y^j-x^j\right)$$

for some $\xi_i$ in $[x,y]$ (no summation here on $i$). But like last time we now have a linear system of equations described by an invertible matrix. Here the matrix has determinant

$$h(\xi_1,\dots,\xi_n)$$

which is nonzero because all the $\xi_i$ are inside the ball $N$. Thus the only possible solution to the system of equations is $y^j-x^j=0$. And so if $f(x)=f(y)$ for points within the ball $N$, we must have $x=y$, and thus $f$ is injective.
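To see the function $h$ in action numerically, here’s a small sketch for a made-up map $f(x,y)=(x+y^2,\sin x+y)$ (my example, not one from the text), whose Jacobian determinant at the origin is $1$: row $i$ of the matrix is the gradient of $f^i$ evaluated at the $i$th sample point, and $h$ stays away from zero when both sample points sit near the origin.

```python
import math

def df(i, j, z):
    """Partial derivative d f^i / d x^j of the sample map
    f(x, y) = (x + y^2, sin(x) + y), evaluated at z = (x, y)."""
    x, y = z
    return [[1.0, 2.0 * y],
            [math.cos(x), 1.0]][i][j]

def h(z1, z2):
    """Determinant of the matrix whose row i is evaluated at z_i."""
    zs = (z1, z2)
    m = [[df(i, j, zs[i]) for j in range(2)] for i in range(2)]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a = (0.0, 0.0)
print(h(a, a))   # recovers the Jacobian determinant at a: 1.0
print(h((0.1, -0.05), (0.02, 0.1)))   # still nonzero near a
```

The lemma’s point is exactly this continuity: since $h(a,\dots,a)=J_f(a)\neq0$ and $h$ is continuous, it cannot vanish when all the sample points stay in a small enough ball.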