The Unapologetic Mathematician

Mathematics for the interested outsider

The Index of a Linear Map

Today I want to talk about the index of a linear map A:V\rightarrow W, which I’ll motivate in terms of a linear system Ax=b for some linear map A:\mathbb{F}^m\rightarrow\mathbb{F}^n.

One important quantity is the dimension of the kernel of A. In terms of the linear system, this is the dimension of the solution space of the associated homogeneous system Ax=0. If the system under consideration has any solutions at all, they will form an affine space of this dimension.
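
To make this concrete, here's a quick sketch in Python using sympy. The matrix is just an arbitrary example I've made up for illustration; nothing about it is special.

    from sympy import Matrix

    # An arbitrary example map A: F^4 -> F^3 (working over the rationals).
    A = Matrix([
        [1, 2, 0, 1],
        [0, 1, 1, 1],
        [1, 3, 1, 2],
    ])

    # The kernel of A is the solution space of the homogeneous system Ax = 0.
    kernel_basis = A.nullspace()
    print(len(kernel_basis))  # dim Ker(A) = 2 for this particular matrix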

But the system is inhomogeneous in general, and as such it might not have any solutions. Since every short exact sequence of vector spaces splits, we can write \mathbb{F}^n=\mathrm{Im}(A)\oplus U. Then the vector b will have some component in the image of A and some component in the complementary subspace U. Note that U is not canonical here; I'm just asserting that some such subspace exists so we can make this decomposition.
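
If we're working over the rationals or reals with the standard inner product, one convenient (but again not canonical) choice of U is the orthogonal complement of the image, which is exactly the kernel of the transpose. A sketch, reusing the example matrix from above:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 1], [1, 3, 1, 2]])
    b = Matrix([1, 0, 0])

    # One convenient (non-canonical) choice of U: the orthogonal complement
    # of Im(A), computed as the kernel of the transpose of A.
    u = A.T.nullspace()[0]  # here dim(U) = 1, so a single basis vector;
                            # a higher-dimensional U would need orthogonalizing first

    # Split b into its U-component and its Im(A)-component.
    b_U = (u.dot(b) / u.dot(u)) * u
    b_image = b - b_U
    print(b_U, b_image)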

Now the condition that the system have any solutions is that the component of b in U vanishes. But we can pick some basis for U and think of each component of b with respect to each of those basis elements vanishing. That is, we must impose a number of linear conditions equal to the dimension of U in order to know that the system has any solutions at all. And once it does, its solutions will form an affine space of dimension \dim(\mathrm{Ker}(A)).
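
In practice all \dim(U) of those conditions can be checked at once by comparing ranks: b lies in the image of A exactly when adjoining it as an extra column doesn't increase the rank. A sketch:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 1], [1, 3, 1, 2]])

    def is_solvable(A, b):
        # Ax = b has a solution exactly when b lies in the column space,
        # i.e. when rank([A | b]) == rank(A).
        return A.row_join(b).rank() == A.rank()

    print(is_solvable(A, Matrix([1, 1, 2])))  # True: this b is A's fourth column
    print(is_solvable(A, Matrix([1, 0, 0])))  # False: its U-component is nonzero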

But what happens if we quotient out \mathbb{F}^n by \mathrm{Im}(A)? We get the cokernel \mathrm{Cok}(A), which must then be isomorphic to U! So the number of conditions we need to apply to b is the dimension of the cokernel.
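
Computationally we never have to build the quotient space itself: since \mathrm{Cok}(A)=\mathbb{F}^n/\mathrm{Im}(A), its dimension is just n minus the rank. A sketch:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 1], [1, 3, 1, 2]])

    def cokernel_dim(A):
        # dim Cok(A) = dim(F^n) - dim(Im(A)) = n - rank(A)
        return A.rows - A.rank()

    print(cokernel_dim(A))  # 1, matching dim(U) from before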

Now I’ll define the index \mathrm{Ind}(A) of the linear map A to be the difference between the dimension of the kernel and the dimension of the cokernel. In the more general case, if one of these dimensions is infinite we may get an infinite index. But often we’ll restrict to the case of “finite index” operators, where the difference works out to be a well-defined finite number. Of course, when dealing with linear systems it’s guaranteed to be finite, but there are a number of results out there for the finite index case. In fact, this is pretty much what a “Fredholm operator” is, if we ever get to that.

Anyhow, we’ve got this definition:

\mathrm{Ind}(A)=\dim(\mathrm{Ker}(A))-\dim(\mathrm{Cok}(A))
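
In code, reusing the pieces above (the kernel computed directly from the homogeneous system, the cokernel via the quotient):

    from sympy import Matrix

    def index(A):
        dim_ker = len(A.nullspace())  # dimension of the kernel, computed directly
        dim_cok = A.rows - A.rank()   # dim Cok(A) = n - rank(A), as above
        return dim_ker - dim_cok

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 1], [1, 3, 1, 2]])
    print(index(A))  # 2 - 1 = 1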

Now let’s add and subtract the dimension of the image of A:

\mathrm{Ind}(A)=\left(\dim(\mathrm{Ker}(A))+\dim(\mathrm{Im}(A))\right)-\left(\dim(\mathrm{Cok}(A))+\dim(\mathrm{Im}(A))\right)

Clearly the dimension of the image and the dimension of the cokernel add up to the dimension of the target space W. But notice also that the rank-nullity theorem tells us that the dimension of the kernel and the dimension of the image add up to the dimension of the source space V! That is, we have the equality

\mathrm{Ind}(A)=\dim(V)-\dim(W)
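
We can sanity-check this formula, and the coming claim that the index ignores the particulars of A, on a batch of random matrices. A sketch (the sizes and entry ranges are arbitrary choices):

    from sympy import randMatrix

    m, n = 5, 3  # A: F^5 -> F^3, so the index should always be 5 - 3 = 2
    for _ in range(100):
        A = randMatrix(n, m, min=-2, max=2)  # random n-by-m integer matrix
        ind = len(A.nullspace()) - (A.rows - A.rank())
        assert ind == m - n                  # holds whatever the rank turns out to be
    print("index is m - n in every case")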

What happened here? We started with an analytic definition in terms of describing solutions to a system of equations, and we ended up with a geometric formula in terms of the dimensions of two vector spaces.

What’s more, the index doesn’t depend on the particulars of A at all. Any two linear maps between the same pair of vector spaces will have the same index! And this gives us a simple tradeoff: for every dimension we add to the space of solutions, we have to add a linear condition to restrict the vectors b for which there are any solutions at all.

Alternatively, what happens when we add a new equation to a system? With a new equation the dimension of the target space goes up, and so the index goes down. One very common way for this to occur is for the dimension of the solution space to drop. This gives rise to our intuition that each new equation reduces the number of independent solutions by one, until we have exactly as many equations as variables.
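
To watch this happen, append one extra row (a hypothetical new equation) to the example matrix and recompute. A sketch:

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1], [0, 1, 1, 1], [1, 3, 1, 2]])

    def index(A):
        return len(A.nullspace()) - (A.rows - A.rank())

    # Stack a new, independent equation below the existing ones.
    A_bigger = A.col_join(Matrix([[1, 0, 0, 0]]))

    print(index(A), index(A_bigger))  # 1, then 0: the index drops by one
    print(len(A.nullspace()), len(A_bigger.nullspace()))  # 2, then 1: a solution dimension lost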

July 22, 2008 | Algebra, Linear Algebra
