The Index of a Linear Map
Today I want to talk about the index of a linear map, which I’ll motivate in terms of a linear system

$Tx = b$

for some linear map $T : V \to W$.
One important quantity is the dimension of the kernel of $T$. In terms of the linear system, this is the dimension of the solution space of the associated homogeneous system $Tx = 0$. If there are any solutions of the system under consideration, they will form an affine space of this dimension.
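To have a concrete example on hand (my own choice, not anything canonical), take $T : \mathbb{R}^2 \to \mathbb{R}^2$ with $T(x, y) = (x - y, 0)$. The homogeneous system $T(x, y) = (0, 0)$ says exactly that $x = y$, so the kernel is the one-dimensional diagonal line, and whenever the inhomogeneous system is solvable its solutions will form a line parallel to that one.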
But the system is inhomogeneous in general, and as such it might not have any solutions. Since every short exact sequence of vector spaces splits, we can write $W = \operatorname{Im}(T) \oplus U$. Then the vector $b$ will have some component in the image of $T$, and some component in the complementary subspace $U$. Note that $U$ is not canonical here. I’m just asserting that some such subspace exists so we can make this decomposition.
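In the running example, $\operatorname{Im}(T)$ is the horizontal axis $\{(w, 0)\}$, and we could take $U$ to be the vertical axis $\{(0, w)\}$. But the diagonal $\{(w, w)\}$ works just as well: either choice gives a decomposition $\mathbb{R}^2 = \operatorname{Im}(T) \oplus U$, which is exactly the sense in which $U$ is not canonical.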
Now the condition that the system have any solutions is that the component of $b$ in $U$ vanishes. But we can pick some basis for $U$ and require the component of $b$ along each of those basis elements to vanish. That is, we must apply a number of linear conditions equal to the dimension of $U$ in order to know that the system has any solutions at all. And once it does, it will have a $\dim(\operatorname{Ker}(T))$-dimensional affine space of them.
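Concretely, the system $T(x, y) = (b_1, b_2)$ from the running example has solutions exactly when $b_2 = 0$: a single linear condition, matching $\dim(U) = 1$. And when that condition holds, the solutions $\{(x, y) : x - y = b_1\}$ form a one-dimensional affine space, matching $\dim(\operatorname{Ker}(T)) = 1$.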
But what happens if we quotient out by $\operatorname{Im}(T)$? We get the cokernel $\operatorname{Cok}(T) = W / \operatorname{Im}(T)$, which must then be isomorphic to $U$! So the number of conditions we need to apply to $b$ is the dimension of the cokernel.
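In the running example, $\operatorname{Cok}(T) = \mathbb{R}^2 / \operatorname{Im}(T)$ is one-dimensional no matter which complement $U$ we happened to pick above, which is why counting conditions by the cokernel is cleaner than counting them by $U$.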
Now I’ll define the index of the linear map $T$
to be the difference between the dimension of the kernel and the dimension of the cokernel. In the more general case, if one of these dimensions is infinite we may get an infinite index. But often we’ll restrict to the case of “finite index” operators, where the difference works out to be a well-defined finite number. Of course, when dealing with linear systems it’s guaranteed to be finite, but there are a number of results out there for the finite index case. In fact, this is pretty much what a “Fredholm operator” is, if we ever get to that.
Anyhow, we’ve got this definition:

$\operatorname{Ind}(T) = \dim(\operatorname{Ker}(T)) - \dim(\operatorname{Cok}(T))$
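If it helps to see the definition computationally, here is a minimal sketch in Python with NumPy (the code and the name index_of are mine, not anything standard): for an $m \times n$ matrix $A$ representing a map $\mathbb{R}^n \to \mathbb{R}^m$, rank-nullity gives $\dim(\operatorname{Ker}(A)) = n - \operatorname{rank}(A)$, and $\dim(\operatorname{Cok}(A)) = m - \operatorname{rank}(A)$.

    import numpy as np

    def index_of(A: np.ndarray) -> int:
        # A is m-by-n, representing a linear map R^n -> R^m.
        m, n = A.shape
        rank = np.linalg.matrix_rank(A)
        dim_ker = n - rank  # rank-nullity: kernel dimension
        dim_cok = m - rank  # target dimension minus image dimension
        return int(dim_ker - dim_cok)

    # Any 2-by-3 matrix has index 3 - 2 = 1, whatever its entries.
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    print(index_of(A))  # prints 1

Notice that the rank cancels in the difference; that cancellation is exactly the add-and-subtract step coming next.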
Now let’s add and subtract the dimension of the image of $T$:

$\operatorname{Ind}(T) = \dim(\operatorname{Ker}(T)) + \dim(\operatorname{Im}(T)) - \dim(\operatorname{Im}(T)) - \dim(\operatorname{Cok}(T))$
Clearly the dimension of the image and the dimension of the cokernel add up to the dimension of the target space. But notice also that the rank-nullity theorem tells us that the dimension of the kernel and the dimension of the image add up to the dimension of the source space! That is, we have the equality

$\operatorname{Ind}(T) = \dim(V) - \dim(W)$
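As a quick numerical check (numbers of my own): any map $T : \mathbb{R}^3 \to \mathbb{R}^2$ has index $3 - 2 = 1$. Even the zero map obeys this, since its kernel is all of $\mathbb{R}^3$ and its cokernel is all of $\mathbb{R}^2$, and $3 - 2 = 1$.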
What happened here? We started with an analytic definition in terms of describing solutions to a system of equations, and we ended up with a geometric formula in terms of the dimensions of two vector spaces.
What’s more, the index doesn’t really depend much on the particulars of $T$ at all. Any two linear maps between the same pair of vector spaces will have the same index! And this gives us a simple tradeoff: for every dimension we add to the space of solutions, we have to add a linear condition to restrict the vectors $b$ for which there are any solutions at all.
Alternatively, what happens when we add a new equation to a system? With a new equation the dimension of the target space goes up, and so the index goes down. One very common way for this to occur is for the dimension of the solution space to drop. This gives rise to our intuition that each new equation reduces the number of independent solutions by one, until we have exactly as many equations as variables.
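For instance (a toy case of mine): the single equation $x - y = b_1$ in two unknowns is a map $\mathbb{R}^2 \to \mathbb{R}^1$ with index $1$ and a line of solutions. Adjoining an independent second equation gives a map $\mathbb{R}^2 \to \mathbb{R}^2$ with index $0$: the target grew by one dimension, the index dropped by one, and generically the solution space dropped from a line to a single point.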