# The Unapologetic Mathematician

## Dual Spaces

Another thing vector spaces come with is duals. That is, given a vector space $V$ we have the dual vector space $V^*=\hom(V,\mathbb{F})$ of “linear functionals” on $V$ — linear functions from $V$ to the base field $\mathbb{F}$. Again we ask how this looks in terms of bases.

So let’s take a finite-dimensional vector space $V$ with basis $\left\{e_i\right\}$, and consider some linear functional $\mu\in V^*$. Like any linear function, we can write down matrix coefficients $\mu_i=\mu(e_i)$. Notice that since our target space (the base field $\mathbb{F}$) is only one-dimensional, we don’t need another index to count its basis.

Now let’s consider a specially-crafted linear functional. We can define one however we like on the basis vectors $e_i$ and then let linearity handle the rest. So let’s say our functional takes the value ${1}$ on $e_1$ and the value ${0}$ on every other basis element. We’ll call this linear functional $\epsilon^1$. Notice that on any vector we have

$\epsilon^1(v)=\epsilon^1(v^ie_i)=v^i\epsilon^1(e_i)=v^1$

so it returns the coefficient of $e_1$. There’s nothing special about $e_1$ here, though. We can define functionals $\epsilon^j$ by setting $\epsilon^j(e_i)=\delta_i^j$. This is the “Kronecker delta”, and it has the value ${1}$ when its two indices match, and ${0}$ when they don’t.
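A quick numerical sketch of this (in Python, identifying $V$ with $\mathbb{F}^n$ via the basis; the zero-based indexing is just an artifact of the language, not of the math):

```python
import numpy as np

n = 3
# Identify V with F^n: column i of the identity matrix plays the role of e_i.
e = np.eye(n)

# The dual basis functional eps^j just reads off the j-th coordinate.
def eps(j):
    return lambda v: v[j]

# eps^j(e_i) is the Kronecker delta: 1 when the indices match, 0 when they don't.
for i in range(n):
    for j in range(n):
        assert eps(j)(e[:, i]) == (1.0 if i == j else 0.0)

# On a general vector, eps^0 returns the coefficient of e_0.
v = np.array([2.0, -5.0, 7.0])
print(eps(0)(v))  # 2.0
```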

Now given a linear functional $\mu$ with matrix coefficients $\mu_j$, let’s write out a new linear functional $\mu_j\epsilon^j$. What does this do to basis elements?

$\mu_j\epsilon^j(e_i)=\mu_j\delta_i^j=\mu_i$

so this new functional has exactly the same matrix as $\mu$ does. It must be the same functional! So any linear functional can be written as a linear combination of the $\epsilon^j$. The combination is unique, too: if $c_j\epsilon^j$ is the zero functional, then evaluating it on $e_i$ forces $c_i=0$. Thus the $\epsilon^j$ form a basis for the dual space. We call $\left\{\epsilon^j\right\}$ the “dual basis” to $\left\{e_i\right\}$.
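We can check this reconstruction numerically (again just a sketch, with $V$ identified with $\mathbb{F}^3$ and an arbitrary functional picked for illustration):

```python
import numpy as np

n = 3
e = np.eye(n)  # column i plays the role of e_i

# Any linear functional on F^n acts as v -> mu @ v for some row of numbers.
mu = np.array([4.0, -1.0, 3.0])

# Its matrix coefficients: mu_i = mu(e_i).
mu_coeffs = np.array([mu @ e[:, i] for i in range(n)])

# The combination mu_j eps^j acts on a vector as sum_j mu_j * v[j].
def combo(v):
    return sum(mu_coeffs[j] * v[j] for j in range(n))

# It agrees with mu on every vector, so it *is* mu.
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(combo(v), mu @ v)
```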

Now if we take a generic linear functional $\mu$ and evaluate it on a generic vector $v$ we find

$\mu(v)=\mu_j\epsilon^j(v^ie_i)=\mu_jv^i\epsilon^j(e_i)=\mu_jv^i\delta_i^j=\mu_iv^i$

Once we pick a basis for $V$ we immediately get a basis for $V^*$, and evaluation of a linear functional on a vector looks neat in terms of these bases.
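In coordinates, this evaluation is nothing but a dot product of the coefficient row with the coordinate column, which we can sketch as:

```python
import numpy as np

# Evaluation in coordinates: mu(v) = mu_i v^i, an ordinary dot product.
mu = np.array([4.0, -1.0, 3.0])   # coefficients mu_i = mu(e_i)
v = np.array([1.0, 2.0, 3.0])     # coordinates v^i

# The summation-convention formula and the dot product agree.
assert np.isclose(mu @ v, sum(mu[i] * v[i] for i in range(3)))
print(mu @ v)  # 11.0
```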

May 27, 2008 - Posted by | Algebra, Linear Algebra

1. Hello, you’re the first weblog I’ve come across that has the same layout as my own.
Interesting post above – you might be interested in our recent conference on the teaching of maths at 3rd level, over at http://coraifeartaigh.wordpress.com/

One question – I see you have the same problem with your blogroll as I do. Is there no way in this particular design to categorize different links? There doesn’t seem to be…Cormac

Comment by cormac | May 28, 2008 | Reply

2. Fáilte, Cormac.

I think you can set categories for the blogroll, same as for posts. My lack of separation more stems from the fact that I haven’t tended my blogroll in forever…

Comment by John Armstrong | May 28, 2008 | Reply

4. Hi John!
No, although you can set the links in categories, they don’t actually appear on the frontpage in the Andreas 04 design, as far as I can make out…pity

Comment by cormac | May 28, 2008 | Reply

10. Nice article. I ask you a question which does not relate to it. Can you give me an example of a C*-left module which is not rational ?

Comment by renaissence | April 17, 2009 | Reply

11. Nope, sorry. I’m not really a C* kind of guy.

Comment by John Armstrong | April 17, 2009 | Reply

12. I found one: C* is a left module of C* which is not rational 🙂
Another one:
describe all the comodules of dimension 3 over the trigonometric coalgebra.

Comment by renaissence | April 18, 2009 | Reply

27. > Like any linear function, we can write down matrix coefficients $\mu_i=\mu(e_i)$.

How?

Comment by isomorphismes | August 17, 2015 | Reply

• I assume you mean a “matrix” that’s only one-dimensional, which is probably what you mean by

> since the target is 1-dimensional we don’t need to count its basis

So why call these “matrix coefficients” for a linear transform? Why not just a list of coefficients along some basis? (But how do you get that basis?)

Comment by isomorphismes | August 17, 2015 | Reply

• That’s exactly what I mean. If $V$ is an $n$-dimensional space with a basis $\{e_i\}$, and $\mu\in V^*$ is a linear functional, then it has a representation as a $1\times n$ matrix whose $i$th entry is $\mu_i=\mu(e_i)$.

In general, any linear transformation from a space $V$ with basis $\{e_i\}$ to a space $W$ with basis $\{f_j\}$ has a matrix representation, and the coefficients are determined by

$\mu(e_i)=\mu_{1,i}f_1+\mu_{2,i}f_2+\dots+\mu_{m,i}f_m$

where the decomposition of $\mu(e_i)$ as a linear combination of the $f_j$ exists and is unique because the latter form a basis of $W$. That’s all a matrix is: a list of the coefficients of this decomposition with respect to the choices of bases for $V$ and $W$.

To your parenthetical question, bases come from all sorts of places, depending on the application. In this case, I’ve taken a particular basis as part of the setup of my example. If you start with a basis $\{e_i\}$ of $V$ you automatically get a “dual basis” $\{\epsilon^i\}$ of the dual space $V^*$. For finite-dimensional spaces (which I’ve also assumed that $V$ is here), a basis is always guaranteed to exist, though for infinite-dimensional spaces the question gets a little trickier.
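Here’s a small sketch of that recipe (my own illustration, not anything canonical): the matrix of a linear map stores the coefficients of the image of $e_i$ in its $i$th column, and a functional is just the case $m=1$, giving a $1\times n$ row.

```python
import numpy as np

def matrix_of(T, n, m):
    """Matrix of a linear map T: F^n -> F^m in the standard bases.

    Column i holds the coefficients of T(e_i) in the basis of the target.
    """
    cols = [T(np.eye(n)[:, i]) for i in range(n)]
    return np.column_stack(cols)  # an m x n matrix

# A linear functional mu: F^3 -> F is the m = 1 case: a 1 x 3 matrix (a row).
mu = lambda v: np.array([4.0 * v[0] - v[1] + 3.0 * v[2]])
M = matrix_of(mu, 3, 1)
print(M)  # [[ 4. -1.  3.]]
```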

Comment by John Armstrong | August 17, 2015 | Reply

• Yeah, I think you must have covered finite & ∞-dimensional (and non-existence of) bases in some of your other posts

Comment by isomorphismes | August 20, 2015 | Reply