The Unapologetic Mathematician

Mathematics for the interested outsider

Normal Transformations

All the transformations in our analogy — self-adjoint and unitary (or orthogonal), and even anti-self-adjoint (antisymmetric and “skew-Hermitian”) transformations satisfying T^*=-T — share one slightly subtle but very interesting property: they all commute with their adjoints. Self-adjoint and anti-self-adjoint transformations do because any transformation commutes with itself, and also with its negative, since negation is just scalar multiplication. Orthogonal and unitary transformations do because every transformation commutes with its own inverse.
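In symbols: if T^*=\pm T, then

\displaystyle T^*T=\pm T^2=TT^*

while if U is unitary (or orthogonal), both products U^*U and UU^* are just the identity.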

Now in general most pairs of transformations do not commute, so there’s no reason to expect this to happen commonly. Still, if we have a transformation N so that N^*N=NN^*, we call it a “normal” transformation.
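If you'd like to test this condition on concrete matrices, here's a minimal numerical sketch (using numpy; the function name is_normal and the tolerance are just illustrative choices, not anything standard):

    import numpy as np

    def is_normal(n, tol=1e-12):
        # Normal means: n commutes with its conjugate transpose (its adjoint).
        n = np.asarray(n, dtype=complex)
        return np.allclose(n.conj().T @ n, n @ n.conj().T, atol=tol)

    print(is_normal([[0, -1], [1, 0]]))  # True: a rotation is orthogonal, hence normal
    print(is_normal([[1, 2], [0, 1]]))   # False: a shear does not commute with its adjoint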

Let’s bang out an equivalent characterization of normal operators while we’re at it, so we can get an idea of what they look like geometrically. Take any vector \lvert v\rangle, hit it with N, and calculate its squared-length (I’m not specifying real or complex, since the notation is the same either way). We get

\displaystyle\lVert\lvert N(v)\rangle\rVert^2=\langle N(v)\vert N(v)\rangle=\langle v\rvert N^*N\lvert v\rangle

On the other hand, we could do the same thing but using N^* instead of N.

\displaystyle\lVert\lvert N^*(v)\rangle\rVert^2=\langle N^*(v)\vert N^*(v)\rangle=\langle v\rvert NN^*\lvert v\rangle

But if N is normal, then N^*N and NN^* are the same, and thus \lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2 for all vectors \lvert v\rangle.

Conversely, if \lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2 for all vectors \lvert v\rangle, then we can use the polarization identities to conclude that N^*N=NN^*.
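To spell out that last step: the norm condition says

\displaystyle\langle v\rvert N^*N\lvert v\rangle=\langle v\rvert NN^*\lvert v\rangle

for every vector \lvert v\rangle, so the self-adjoint transformation M=N^*N-NN^* satisfies \langle v\rvert M\lvert v\rangle=0 for all \lvert v\rangle. The polarization identities rebuild every inner product \langle w\rvert M\lvert v\rangle from these diagonal values, so they all vanish. In particular \langle M(v)\vert M(v)\rangle=0, which forces M(v)=0 for every \lvert v\rangle, and thus N^*N=NN^*.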

So normal transformations are exactly those for which a vector's image has the same length whether we apply the transformation or its adjoint. For self-adjoint and anti-self-adjoint transformations this is pretty obvious, since they're (almost) the same thing anyway. Orthogonal and unitary transformations don't change the lengths of vectors at all, so it makes sense for them too.

Just to be clear, though, there are matrices that are normal, but which aren’t any of the special kinds we’ve talked about so far. For example, the transformation represented by the matrix

\displaystyle\begin{pmatrix}1&1&0\\0&1&1\\1&0&1\end{pmatrix}

has its adjoint represented by

\displaystyle\begin{pmatrix}1&0&1\\1&1&0\\0&1&1\end{pmatrix}

which is neither the original transformation nor its negative, so it’s neither self-adjoint nor anti-self-adjoint. We can calculate their product in either order to get

\displaystyle\begin{pmatrix}2&1&1\\1&2&1\\1&1&2\end{pmatrix}

Since we get the same answer, the transformation is normal. But it's clearly not unitary, because if it were we'd get the identity matrix here.
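As a quick computational sanity check, here's the same calculation done numerically (a sketch using numpy; since the matrix is real, the adjoint is just the transpose):

    import numpy as np

    N = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]], dtype=float)

    left = N.T @ N   # the product N*N
    right = N @ N.T  # the product NN*

    print(np.allclose(left, right))      # True: N commutes with its adjoint, so it's normal
    print(np.allclose(left, np.eye(3)))  # False: the product isn't the identity, so N isn't orthogonal
    print(left)                          # the matrix with 2's on the diagonal and 1's elsewhere, as above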

August 5, 2009 | Algebra, Linear Algebra
