Another thing vector spaces come with is duals. That is, given a vector space $V$ we have the dual vector space $V^*$ of "linear functionals" on $V$: linear functions from $V$ to the base field $\mathbb{F}$. Again we ask how this looks in terms of bases.
So let's take a finite-dimensional vector space $V$ with basis $\left\{e_i\right\}$, and consider some linear functional $\mu: V \to \mathbb{F}$. Like any linear function, we can write down matrix coefficients $\mu_i = \mu(e_i)$. Notice that since our target space (the base field $\mathbb{F}$) is only one-dimensional, we don't need another index to count its basis.
Now let's consider a specially-crafted linear functional. We can define one however we like on the basis vectors and then let linearity handle the rest. So let's say our functional takes the value $1$ on $e_1$ and the value $0$ on every other basis element. We'll call this linear functional $\epsilon^1$. Notice that on any vector $v = v^i e_i$ we have

$$\epsilon^1(v) = \epsilon^1\left(v^i e_i\right) = v^i \epsilon^1(e_i) = v^1$$

so it returns the coefficient of $e_1$. There's nothing special about $e_1$ here, though. We can define functionals $\epsilon^j$ by setting $\epsilon^j(e_i) = \delta^j_i$. This is the "Kronecker delta", and it has the value $1$ when its two indices match, and $0$ when they don't.
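As a quick computational sketch (my own illustration, not from the post, using $0$-indexed coefficient lists to stand in for vectors in $\mathbb{F}^3$), the dual basis functional $\epsilon^j$ is just "read off the $j$-th coefficient", and the Kronecker delta property falls right out:

```python
# Model a vector by its list of coefficients in the chosen basis,
# and the dual basis functional eps^j by "return the j-th coefficient".
# (Indices are 0-based here, unlike the 1-based indexing in the text.)

def eps(j):
    """Return the dual basis functional eps^j: v -> v^j."""
    return lambda v: v[j]

# The basis vectors e_i themselves, as coefficient lists.
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# eps^j(e_i) is the Kronecker delta: 1 when the indices match, 0 otherwise.
for j in range(3):
    for i in range(3):
        assert eps(j)(e[i]) == (1 if i == j else 0)

# On a generic vector, eps^0 returns the first coefficient.
v = [5, -2, 7]
print(eps(0)(v))  # -> 5
```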
Now given a linear functional $\mu$ with matrix coefficients $\mu_i$, let's write out a new linear functional $\mu_j \epsilon^j$. What does this do to basis elements? We calculate

$$\left(\mu_j \epsilon^j\right)(e_i) = \mu_j \epsilon^j(e_i) = \mu_j \delta^j_i = \mu_i$$

so this new functional has exactly the same matrix coefficients as $\mu$ does. It must be the same functional! So any linear functional can be written uniquely as a linear combination of the $\epsilon^j$, and thus they form a basis for the dual space $V^*$. We call $\left\{\epsilon^j\right\}$ the "dual basis" to $\left\{e_i\right\}$.
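To see this expansion concretely (again my own sketch in $\mathbb{F}^3$ with $0$-indexed lists, and an arbitrarily chosen example functional), we can read the coefficients $\mu_j = \mu(e_j)$ off a functional and check that the combination $\mu_j \epsilon^j$ reproduces it:

```python
# An example linear functional on F^3, chosen arbitrarily for illustration.
def f(v):
    return 3 * v[0] - v[1] + 4 * v[2]

# Basis vectors as coefficient lists.
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Matrix coefficients mu_j = f(e_j).
mu = [f(ej) for ej in e]

# The linear combination mu_j eps^j, where eps^j picks out the j-th coefficient.
def combination(v):
    return sum(mu[j] * v[j] for j in range(3))

# The combination agrees with f on every sample vector -- it is the same functional.
for v in [[1, 0, 0], [0, 1, 0], [0, 0, 1], [2, -1, 5]]:
    assert f(v) == combination(v)

print(mu)  # -> [3, -1, 4]
```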
Now if we take a generic linear functional $\mu = \mu_j \epsilon^j$ and evaluate it on a generic vector $v = v^i e_i$ we find

$$\mu(v) = \mu_j \epsilon^j\left(v^i e_i\right) = \mu_j v^i \epsilon^j(e_i) = \mu_j v^i \delta^j_i = \mu_i v^i$$

Once we pick a basis $\left\{e_i\right\}$ for $V$, we immediately get a basis $\left\{\epsilon^j\right\}$ for $V^*$, and evaluation of a linear functional on a vector looks neat in terms of these bases.
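In coordinates, then, evaluation is nothing but the familiar "dot product" of the two coefficient lists. A tiny sketch with made-up numbers:

```python
# Evaluating a functional on a vector, once both are written in our bases,
# is just the sum mu_i v^i.

mu = [3, -1, 4]   # coefficients of the functional in the dual basis
v = [2, 5, 1]     # coefficients of the vector in the basis e_i

value = sum(m * x for m, x in zip(mu, v))
print(value)  # 3*2 + (-1)*5 + 4*1 = 5
```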