Yesterday we defined the column rank of a matrix to be the maximal number of linearly independent columns. Flipping things over, we can consider the obvious analogue for rows: the “row rank” is the maximal number of linearly independent rows in a matrix. But what’s a row? It’s a vector in a dual space.
Okay, let’s talk a bit more concretely. Consider a linear transformation $T:\mathbb{F}^n\rightarrow\mathbb{F}^m$, which is described by the $m\times n$ matrix $\left(t_i^j\right)$. Then for each index $j$ we can take the $j$th row in this matrix:

$\begin{pmatrix}t_1^j&t_2^j&\cdots&t_n^j\end{pmatrix}$
and use it as a linear functional on the space $\mathbb{F}^n$ of column vectors. Specifically, it sends the vector with components $v^i$ to the number $t_i^jv^i$ (summing over $i$, as usual). Thus we get $m$ linear functionals, elements of the dual space $\left(\mathbb{F}^n\right)^*$.
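To make this concrete, here is a quick sketch in plain Python, with a small made-up $2\times3$ matrix standing in for $\left(t_i^j\right)$: each row becomes a function that eats a column vector and returns a number.

```python
# A hypothetical 2x3 matrix T: R^3 -> R^2, written as a list of rows.
# The entries are arbitrary, chosen just for illustration.
T = [[1, 2, 3],
     [4, 5, 6]]

def row_functional(row):
    """Turn one row into a linear functional on column vectors:
    it sends v to the sum of t_i * v^i over i."""
    return lambda v: sum(t * x for t, x in zip(row, v))

v = [1, 0, -1]  # a column vector in R^3

# One functional per row: m elements of the dual space (F^n)*.
functionals = [row_functional(r) for r in T]
print([f(v) for f in functionals])
```

Each functional is linear in `v` because it is just a sum of scalar multiples of the components of `v`.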
But what linear functionals are these? They must be something special, and indeed they are. Remember that we’ve got our canonical basis $\{e_j\}$ for the target space $\mathbb{F}^m$. We also immediately have the dual basis $\{\epsilon^j\}$ of $\left(\mathbb{F}^m\right)^*$. The linear functionals we got from the rows are then just the pullbacks $\epsilon^j\circ T$ of these basic linear functionals along $T$.
To see this, notice that the linear functional $\epsilon^j$ is given by the $1\times m$ matrix with a $1$ in the $j$th component and a $0$ everywhere else. That is, its components come from the Kronecker delta: $\epsilon^j_k=\delta^j_k$. We get a linear functional on $\mathbb{F}^n$ by first hitting a column vector with the matrix $\left(t_i^k\right)$ and then with this row matrix. We use matrix multiplication to see that this is equivalent to just using the single $1\times n$ matrix (an element of $\left(\mathbb{F}^n\right)^*$) with components $\delta^j_kt_i^k=t_i^j$. And this is just the $j$th row of the matrix!
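We can check this computation on a small example (again with hypothetical matrix entries): composing $T$ with the coordinate functional $\epsilon^j$ gives the same number as applying the $j$th row directly.

```python
# Hypothetical 2x3 matrix for T: R^3 -> R^2; entries are arbitrary.
T = [[1, 2, 3],
     [4, 5, 6]]
m = len(T)

def apply(M, v):
    """Matrix-vector multiplication: hit the column vector v with M."""
    return [sum(t * x for t, x in zip(row, v)) for row in M]

def eps(j):
    """The basic functional epsilon^j: picks out the j-th component."""
    return lambda w: w[j]

v = [2, -1, 5]
for j in range(m):
    pullback = eps(j)(apply(T, v))                   # first T, then epsilon^j
    row_action = sum(t * x for t, x in zip(T[j], v))  # the j-th row directly
    assert pullback == row_action
print("pullbacks agree with rows")
```

The assertion holds for every `j` and every `v`, since both sides compute the same sum $\delta^j_kt_i^kv^i=t_i^jv^i$.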
So the rows span the image of $\left(\mathbb{F}^m\right)^*$ under the dual map $T^*:\left(\mathbb{F}^m\right)^*\rightarrow\left(\mathbb{F}^n\right)^*$. This image is, of course, a subspace of $\left(\mathbb{F}^n\right)^*$, and its dimension is exactly the row rank of $T$. We’ll come back later and show that this must actually be the same as the column rank.
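While the proof comes later, we can at least witness the equality on one example. Here is a small pure-Python rank computation (Gaussian elimination over the rationals, applied to an arbitrary example matrix and its transpose); the number of pivots counts the independent rows.

```python
from fractions import Fraction

def rank(M):
    """Row rank via Gaussian elimination with exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0  # number of pivots found so far
    for col in range(len(A[0]) if A else 0):
        # Find a row at or below position r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        # Clear this column in every other row.
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                factor = A[i][col] / A[r][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# A hypothetical 3x3 example with a repeated-direction row, so the rank is not full.
T = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]

transpose = [list(c) for c in zip(*T)]
print(rank(T), rank(transpose))  # row rank of T vs. row rank of T^T (= column rank of T)
```

Of course a single example proves nothing; it just illustrates the claim we still owe a proof for.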