The Unapologetic Mathematician

Mathematics for the interested outsider

Matrices and Forms I

Yesterday, we defined a Hermitian matrix to be the matrix-theoretic analogue of a self-adjoint transformation. So why should we separate out the two concepts? Well, it turns out that there are more things we can do with a matrix than represent a linear transformation. In fact, we can use matrices to represent forms, as follows.

Let’s start with either a bilinear or a sesquilinear form B\left(\underline{\hphantom{X}},\underline{\hphantom{X}}\right) on the vector space V. Let’s also pick an arbitrary basis \left\{e_i\right\} of V. I want to emphasize that this basis is arbitrary, since recently we’ve been accustomed to automatically picking orthonormal bases. But notice that I’m not assuming that our form is even an inner product to begin with.

Now we can define a matrix b_{ij}=B(e_i,e_j). This completely specifies the form, by either bilinearity or sesquilinearity: once we expand arbitrary vectors in terms of the basis, the value of the form on them is determined by its values b_{ij} on pairs of basis vectors. And properties of such forms are reflected in their matrices.
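For concreteness, here's a small numerical sketch. The matrix M and the basis E below are made-up examples, not anything from the post: we represent a sesquilinear form on \mathbb{C}^2 in standard coordinates as B(v,w)=\overline{v}^T M w, tabulate b_{ij}=B(e_i,e_j) over an arbitrary (non-orthonormal) basis, and check that these values determine the form on all vectors.

```python
import numpy as np

# A hypothetical sesquilinear form on C^2, given in standard coordinates
# by B(v, w) = conj(v)^T M w for a fixed matrix M (any M defines one).
M = np.array([[2, 1j],
              [3, 0]], dtype=complex)

def B(v, w):
    return np.conj(v) @ M @ w

# An arbitrary (not orthonormal) basis, as the columns of E.
E = np.array([[1, 1],
              [0, 1]], dtype=complex)

# b_ij = B(e_i, e_j) -- the matrix of the form in this basis.
b = np.array([[B(E[:, i], E[:, j]) for j in range(2)] for i in range(2)])

# Sesquilinearity: for v = v^i e_i and w = w^j e_j (coordinates in the
# basis {e_i}), the form expands as B(v, w) = conj(v^i) b_ij w^j.
vc = np.array([2, -1j])   # coordinates of v in the basis {e_i}
wc = np.array([1j, 3])    # coordinates of w
v = E @ vc
w = E @ wc
assert np.isclose(B(v, w), np.conj(vc) @ b @ wc)
```

So the finitely many numbers b_{ij} really do pin down the form everywhere.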

For example, suppose that H is a conjugate-symmetric sesquilinear form. That is, H(v,w)=\overline{H(w,v)}. Then we look at the matrix and find

\displaystyle h_{ij}=H(e_i,e_j)=\overline{H(e_j,e_i)}=\overline{h_{ji}}

so the matrix h_{ij} is Hermitian!
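To see this in action, here's a sketch with a made-up Hermitian matrix A defining the form in standard coordinates, and a made-up non-orthonormal basis: however we choose the basis, the matrix of a conjugate-symmetric form comes out equal to its own conjugate transpose.

```python
import numpy as np

# A conjugate-symmetric form: H(v, w) = conj(v)^T A w, where A equals
# its own conjugate transpose (a hypothetical example matrix).
A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]], dtype=complex)
assert np.allclose(A, A.conj().T)

def H(v, w):
    return np.conj(v) @ A @ w

# Any basis at all -- no orthonormality assumed (columns are e_i).
E = np.array([[1, 2j],
              [1, 1]], dtype=complex)

h = np.array([[H(E[:, i], E[:, j]) for j in range(2)] for i in range(2)])

# h_ij = conj(h_ji): the matrix of the form is Hermitian in every basis.
assert np.allclose(h, h.conj().T)
```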

Now the secret here is that the matrix of a form is really the matrix of a linear transformation in disguise. It's the transformation that takes us from V to V^* by plugging a vector into one slot of the form, written in terms of the basis e_i and its dual. Let me be a little more explicit.

When we feed a basis vector into our form B, we get a linear functional B(e_i,\underline{\hphantom{X}}). We want to write that out in terms of the dual basis \left\{\epsilon^j\right\} as a linear combination

\displaystyle B(e_i,\underline{\hphantom{X}})=b_{ik}\epsilon^k

So how do we read off these coefficients? Stick another basis vector into the form!

\displaystyle B(e_i,e_j)=b_{ik}\epsilon^k(e_j)=b_{ik}\delta^k_j=b_{ij}

which is just the same matrix as we found before.
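Here's a sketch of this viewpoint, using a made-up bilinear form rather than a sesquilinear one, since if the form is conjugate-linear in its first slot then v\mapsto B(v,\underline{\hphantom{X}}) is only conjugate-linear: the dual-basis coefficients of the functional B(v,\underline{\hphantom{X}}) are obtained from the coordinates of v by the matrix b_{ij}.

```python
import numpy as np

# The matrix of a form as the matrix of a map V -> V*.  For a bilinear
# form B(v, w) = v^T M w (hypothetical example M, in standard coords),
# sending v = v^i e_i to the functional B(v, __) = v^i b_ik eps^k means
# the dual-basis coefficient row of B(v, __) is vc @ b.
M = np.array([[2., 1.],
              [3., 0.]])
E = np.array([[1., 1.],
              [0., 1.]])   # arbitrary basis, columns e_i

def B(v, w):
    return v @ M @ w

b = np.array([[B(E[:, i], B_args := E[:, j]) for j in range(2)]
              for i in range(2)])

vc = np.array([2., -1.])   # coordinates of v in the basis {e_i}
v = E @ vc
coeffs = vc @ b            # dual-basis coefficients of B(v, __)

# Since eps^k(e_j) = delta^k_j, evaluating the functional on e_j must
# pick out exactly the j-th coefficient.
for j in range(2):
    assert np.isclose(B(v, E[:, j]), coeffs[j])
```

So feeding v into the first slot and reading the result in the dual basis is exactly multiplication by the matrix b_{ij}, which is the transformation from V to V^* the post describes.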

June 24, 2009 Posted by | Algebra, Linear Algebra | 12 Comments