The Unapologetic Mathematician

Mathematics for the interested outsider

Self-Adjoint Transformations

Let’s now consider a single inner-product space V and a linear transformation T:V\rightarrow V. Its adjoint is another linear transformation T^*:V\rightarrow V. This opens up the possibility that T^* might be the same transformation as T. If this happens, we say that T is “self-adjoint”. It then satisfies the adjoint relation

\displaystyle\langle v,T(w)\rangle=\langle T(v),w\rangle

What does this look like in terms of matrices? Since we only have one vector space, we only need to pick one orthonormal basis \left\{e_i\right\}. Then we get a matrix whose entries satisfy

\displaystyle\begin{aligned}t_i^j&=\langle e_j,T(e_i)\rangle\\&=\langle T(e_j),e_i\rangle\\&=\overline{\langle e_i,T(e_j)\rangle}\\&=\overline{t_j^i}\end{aligned}

That is, the matrix of a self-adjoint transformation is its own conjugate transpose. We have a special name for this sort of matrix — “Hermitian” — even though it’s exactly equivalent to self-adjointness as a linear transformation. If we’re just working over a real vector space we don’t have to bother with conjugation. In that case we just say that the matrix is symmetric.
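We can check all of this numerically. Here's a small NumPy sketch (the particular matrix and vectors are just illustrative choices of mine): it verifies that a Hermitian matrix equals its own conjugate transpose, and that it satisfies the adjoint relation \langle v,T(w)\rangle=\langle T(v),w\rangle. Note that NumPy's vdot conjugates its first argument, matching the inner product's conjugate-linearity in the first slot.

```python
import numpy as np

# An example Hermitian matrix: real diagonal, off-diagonal entries
# that are complex conjugates of each other.
A = np.array([[2.0, 1 - 2j],
              [1 + 2j, -3.0]])

# Hermitian means: A equals its own conjugate transpose.
assert np.allclose(A, A.conj().T)

# Two arbitrary complex vectors.
rng = np.random.default_rng(0)
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# The adjoint relation <v, A w> = <A v, w>.
# np.vdot conjugates its first argument, so vdot(v, w) is <v, w>.
lhs = np.vdot(v, A @ w)
rhs = np.vdot(A @ v, w)
assert np.allclose(lhs, rhs)
```

Any vectors v and w would do here; the relation holds identically, not just for special choices.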

Over a one-dimensional complex vector space, the matrix of a linear transformation T is simply a single complex number t. If T is to be self-adjoint, we must have t=\bar{t}, and so t must be a real number. In this sense, the operation of taking the conjugate transpose of a complex matrix (or the simple transpose of a real matrix) extends the idea of conjugating a complex number. Self-adjoint matrices, then, are analogous to real numbers.
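The one-dimensional case is easy to see directly (again a sketch of my own, with arbitrary sample values): a 1\times 1 matrix is Hermitian exactly when its single entry equals its own conjugate, which happens exactly when the entry is real.

```python
import numpy as np

# A 1x1 matrix with a real entry: t = conj(t), so it is Hermitian.
t = 5.0 + 0j
A = np.array([[t]])
assert np.allclose(A, A.conj().T)

# Give t a nonzero imaginary part and self-adjointness fails.
t2 = 5.0 + 2j
A2 = np.array([[t2]])
assert not np.allclose(A2, A2.conj().T)
```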

June 23, 2009 Posted by | Algebra, Linear Algebra | 9 Comments


