Matrices IV
Like we saw with the tensor product of vector spaces, the dual space construction turns out to be a functor. In fact, it's a contravariant functor. That is, if we have a linear transformation $T:V\rightarrow W$ we get a linear transformation $T^*:W^*\rightarrow V^*$. As usual, we ask what this looks like for matrices.
First, how do we define the dual transformation? It turns out this is the contravariant functor represented by $\mathbb{F}$. That is, if $\mu:W\rightarrow\mathbb{F}$ is a linear functional, we define $T^*(\mu)=\mu\circ T$. In terms of the action on vectors, $\left[T^*(\mu)\right](v)=\mu\left(T(v)\right)$.
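The definition really is just precomposition, and we can see that concretely. Here's a minimal sketch in Python with numpy, using a hypothetical $2\times3$ matrix for $T$ and a hypothetical row vector for $\mu$ — both chosen purely for illustration:

```python
import numpy as np

# A linear map T: V -> W, given (in some bases) by a hypothetical 2x3 matrix
# acting on column vectors: T(v) = t @ v.
t = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

def T(v):
    return t @ v

# A linear functional mu on W, represented as a row vector.
mu = np.array([7.0, 8.0])

# The dual transformation sends mu to mu ∘ T: precomposition with T.
def T_star(mu):
    return lambda v: mu @ T(v)

v = np.array([1.0, 0.0, -1.0])
# Both sides of [T*(mu)](v) = mu(T(v)) agree.
print(T_star(mu)(v))   # -30.0
print(mu @ t @ v)      # -30.0
```

Note that `T_star(mu)` is itself a function $V\rightarrow\mathbb{F}$ — a linear functional on $V$ — exactly as the definition requires.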
Now let's assume that $V$ and $W$ are finite-dimensional, and pick bases $\left\{e_i\right\}$ and $\left\{f_j\right\}$ for $V$ and $W$, respectively. Then the linear transformation $T$ has matrix coefficients $t_i^j$, defined by $T(e_i)=t_i^jf_j$. We also get the dual bases $\left\{\epsilon^i\right\}$ of $V^*$ and $\left\{\phi^j\right\}$ of $W^*$.
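In matrix terms, the relation $T(e_i)=t_i^jf_j$ says that column $i$ of the matrix holds the coefficients of $T(e_i)$. A quick sketch, reusing the same hypothetical $2\times3$ matrix:

```python
import numpy as np

# Hypothetical matrix of T: V -> W with dim(V) = 3, dim(W) = 2.
# The relation T(e_i) = t_i^j f_j means column i lists the
# coefficients of T(e_i) in the basis {f_j}.
t = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

for i in range(3):
    e_i = np.eye(3)[i]
    # Applying T to the i-th basis vector picks out the i-th column.
    assert np.allclose(t @ e_i, t[:, i])
```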
Given a basic linear functional $\phi^j$ on $W$, we want to write $T^*(\phi^j)$ in terms of the $\epsilon^i$. So let's evaluate it on a generic basis vector $e_i$ and see what we get. The formula above shows us that

$\left[T^*(\phi^j)\right](e_i)=\phi^j\left(T(e_i)\right)=\phi^j\left(t_i^kf_k\right)=t_i^k\phi^j(f_k)=t_i^j$

In other words, we can write $T^*(\phi^j)=t_i^j\epsilon^i$. The same matrix works, but we use its indices differently.
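We can check this computation numerically. A sketch under the same hypothetical setup (standard bases of $\mathbb{R}^3$ and $\mathbb{R}^2$, so the dual basis functionals are the standard row vectors):

```python
import numpy as np

# Hypothetical matrix of T: V -> W, dim(V) = 3, dim(W) = 2;
# entry t[j, i] is the coefficient t_i^j.
t = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

for j in range(2):
    phi_j = np.eye(2)[j]   # the dual basis functional phi^j on W
    for i in range(3):
        e_i = np.eye(3)[i]
        # [T*(phi^j)](e_i) = phi^j(T(e_i)) = t_i^j, the (j, i) entry.
        assert np.isclose(phi_j @ (t @ e_i), t[j, i])
```

Evaluating the pullback $T^*(\phi^j)$ on each $e_i$ reads off row $j$ of the matrix, whereas $T$ itself reads off columns — the same array of numbers, indexed two different ways.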
In general, given a linear functional $\mu=\mu_j\phi^j$ with coefficients $\mu_j$ we find the coefficients of $T^*(\mu)$ as $\mu_jt_i^j$. The value $\mu\left(T(v)\right)$ becomes $\mu_jt_i^jv^i$.
Notice that the summation convention tells us this must be a scalar (as we expect) because there are no unpaired indices. Also notice that because we can use the same matrix for two different transformations we seem to have an ambiguity: is the lower index running over a basis for $V$ or one for $V^*$? Luckily, since every basis gives rise to a dual basis, we don't need to care. Both spaces have the same dimension anyhow.