The Unapologetic Mathematician

Mathematics for the interested outsider

Nondegenerate Forms I

The notion of a positive semidefinite form opens up the possibility that, in a sense, a vector may be “orthogonal to itself”. That is, if we let H be the self-adjoint transformation corresponding to our (conjugate) symmetric form, we might have a nonzero vector v such that \langle v\rvert H\lvert v\rangle=0. However, the vector need not be completely trivial as far as the form is concerned. There may be another vector w so that \langle w\rvert H\lvert v\rangle\neq0.

Let us work out a very concrete example. For our vector space, we take \mathbb{R}^2 with the standard basis, and we’ll write the ket vectors as columns, so:

\displaystyle\begin{aligned}\lvert1\rangle&=\begin{pmatrix}1\\{0}\end{pmatrix}\\\lvert2\rangle&=\begin{pmatrix}0\\1\end{pmatrix}\end{aligned}

Then we will write the bra vectors as rows — the transposes of ket vectors:

\displaystyle\begin{aligned}\langle1\rvert&=\begin{pmatrix}1&0\end{pmatrix}\\\langle2\rvert&=\begin{pmatrix}0&1\end{pmatrix}\end{aligned}

If we were working over a complex vector space we’d take conjugate transposes instead, of course. The connection between bra-ket notation and matrices should now be clear: the bra-ket pairing is just multiplication of the corresponding matrices. For example:

\displaystyle\begin{aligned}\langle1\vert1\rangle&=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}1\end{pmatrix}\\\langle1\vert2\rangle&=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\end{pmatrix}\end{aligned}

The bra-ket pairing is exactly the inner product we get by declaring our basis to be orthonormal.
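As a quick sanity check (an aside, not part of the original post), here is a minimal NumPy sketch of these pairings, writing the kets as column vectors and the bras as their transposes:

```python
import numpy as np

# Standard basis kets as column vectors; bras are their transposes (real case).
ket1 = np.array([[1], [0]])
ket2 = np.array([[0], [1]])
bra1 = ket1.T

# The bra-ket pairing is ordinary matrix multiplication, yielding a 1x1 matrix.
print(bra1 @ ket1)  # [[1]]  i.e. <1|1> = 1
print(bra1 @ ket2)  # [[0]]  i.e. <1|2> = 0
```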

Now let’s insert a transformation between the bra and ket to make a form. Specifically, we’ll use the one with the matrix S=\begin{pmatrix}0&1\\1&0\end{pmatrix}. Then the basis vector \lvert1\rangle is exactly such a vector: it is “orthogonal” to itself with respect to our new bilinear form. Indeed, we can calculate

\displaystyle\langle1\rvert S\lvert1\rangle=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\end{pmatrix}

However, this vector is not totally trivial with respect to the form S. For we can calculate

\displaystyle\langle2\rvert S\lvert1\rangle=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1\\{0}\end{pmatrix}=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}1\end{pmatrix}
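Here is a minimal sketch (again, not from the original post) verifying both computations numerically:

```python
import numpy as np

# The matrix of the new bilinear form and the standard basis kets.
S = np.array([[0, 1],
              [1, 0]])
ket1 = np.array([[1], [0]])
ket2 = np.array([[0], [1]])

print(ket1.T @ S @ ket1)  # [[0]]  <1|S|1> = 0: |1> is "orthogonal" to itself
print(ket2.T @ S @ ket1)  # [[1]]  <2|S|1> = 1: but |1> is not trivial for S
```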

Now, all this is prologue to a definition. We say that a form B (symmetric or not) is “degenerate” if there is some non-zero ket vector \lvert v\rangle so that for every bra vector \langle w\rvert we find

\displaystyle\langle w\rvert B\lvert v\rangle=0

And, conversely, we say that a form is “nondegenerate” if for every nonzero ket vector \lvert v\rangle there exists some bra vector \langle w\rvert so that

\displaystyle\langle w\rvert B\lvert v\rangle\neq0
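Since the plain bra-ket pairing is itself nondegenerate, asking that \langle w\rvert B\lvert v\rangle=0 for every bra \langle w\rvert is the same as asking that B\lvert v\rangle=0. So, in coordinates, a form is degenerate exactly when its matrix has a nontrivial kernel, i.e. is singular. Here is a small sketch of that test (an aside, not part of the original post; the helper is_degenerate is a hypothetical name):

```python
import numpy as np

def is_degenerate(B, tol=1e-12):
    """A form is degenerate exactly when its matrix kills some nonzero ket,
    i.e. when the matrix is singular."""
    return abs(np.linalg.det(B)) < tol

S = np.array([[0, 1],
              [1, 0]])
print(is_degenerate(S))  # False: det S = -1, so S is nondegenerate

P = np.array([[1, 0],
              [0, 0]])
print(is_degenerate(P))  # True: P|2> = 0, so this form is degenerate
```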

July 15, 2009 Posted by | Algebra, Linear Algebra | 3 Comments