The Unapologetic Mathematician

Mathematics for the interested outsider

Every Self-Adjoint Transformation has an Eigenvector

Okay, in the complex case this tells us nothing new, but for real transformations we have no reason to assume that a given transformation has any eigenvalues at all. If our transformation is self-adjoint, though, it must have one.

When we found this in the complex case we saw that the characteristic polynomial had to have a root, since \mathbb{C} is algebraically closed. It’s the fact that \mathbb{R} isn’t algebraically closed that causes our trouble. But since \mathbb{R} sits inside \mathbb{C} we can consider any real polynomial as a complex polynomial. That is, the characteristic polynomial of our transformation, considered as a complex polynomial (whose coefficients just happen to all be real) must have a complex root.

This really feels like a dirty trick, so let’s try to put it on a bit firmer ground. We’re looking at a transformation S:V\rightarrow V on a vector space V over \mathbb{R}. What we’re going to do is “complexify” our space, so that we can use some things that only work over the complex numbers. To do this, we’ll consider \mathbb{C} itself as a two-dimensional vector space over \mathbb{R} and form the tensor product V^\mathbb{C}=V\otimes_\mathbb{R}\mathbb{C}. The transformation S immediately induces a transformation S^\mathbb{C}:V^\mathbb{C}\rightarrow V^\mathbb{C} by defining S^\mathbb{C}(v\otimes z)=S(v)\otimes z. It’s a complex vector space, since given a complex constant c\in\mathbb{C} we can define the scalar product of v\otimes z by c as v\otimes(cz). Finally, S^\mathbb{C} is complex-linear since it commutes with our complex scalar product.

What have we done? Maybe it’ll be clearer if we pick a basis \left\{e_i\right\}_{i=1}^n for V. That is, any vector in V is a linear combination of the e_i in a unique way. Then every (real) vector in V^\mathbb{C} is a unique linear combination of e_i\otimes1 and e_i\otimes i (this latter i is the complex number, not the index; try to keep them separate). But as complex vectors, we have e_i\otimes i=i(e_i\otimes1), and so every vector is a unique complex linear combination of the e_i\otimes1. It’s like we’ve kept the same basis, but just decided to allow complex coefficients too.

And what about the matrix of S^\mathbb{C} with respect to this (complex) basis of e_i\otimes1? Well it’s just the same as the old matrix of S with respect to the e_i! Just write

\displaystyle S^\mathbb{C}(e_i\otimes1)=S(e_i)\otimes1=(s_i^je_j)\otimes1=s_i^j(e_j\otimes1)

Then if S is self-adjoint its matrix will be symmetric, and so will the matrix of S^\mathbb{C}, which must then be self-adjoint as well. And we can calculate the characteristic polynomial of S from its matrix, so the characteristic polynomial of S^\mathbb{C} will be the same — except it will be a complex polynomial whose coefficients all just happen to be real.
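
To see this concretely, here’s a quick numerical sketch (the particular symmetric matrix is just an illustrative choice, and I’m using numpy for the arithmetic): viewing a real symmetric matrix as a complex one changes nothing about the matrix, the result is Hermitian, and the characteristic polynomial still has all-real coefficients.

    import numpy as np

    # A real symmetric matrix standing in for a self-adjoint S (illustrative choice).
    S = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # "Complexifying" in coordinates: the same entries, now acting on C^3.
    S_C = S.astype(complex)

    # The complexified matrix equals its conjugate transpose, so it's self-adjoint...
    print(np.allclose(S_C, S_C.conj().T))          # True

    # ...and the characteristic polynomials agree, with all-real coefficients.
    print(np.allclose(np.poly(S), np.poly(S_C)))   # True
    print(np.allclose(np.poly(S_C).imag, 0))       # True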

Okay, so back to the point. Since S^\mathbb{C} is a transformation on a complex vector space it must have an eigenvalue \lambda and a corresponding eigenvector v. And I say that since S^\mathbb{C} is self-adjoint, the eigenvalue \lambda must be real. Indeed, we can calculate

\displaystyle\lambda\langle v,v\rangle=\langle v,\lambda v\rangle=\langle v,S^\mathbb{C}(v)\rangle=\langle S^\mathbb{C}(v),v\rangle=\langle\lambda v,v\rangle=\bar{\lambda}\langle v,v\rangle

and thus \lambda=\bar{\lambda}, so \lambda is real.

Therefore, we have found a real number \lambda so that when we plug it into the characteristic polynomial of S^\mathbb{C}, we get zero. But this is the same polynomial as the characteristic polynomial of S, so we also get zero when we plug \lambda into that one, and thus it’s an eigenvalue of S. That is, S-\lambda I_V has a nontrivial kernel, and any nonzero vector in that kernel is an eigenvector of S.
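
If you want to check this numerically, here’s a small sketch with numpy (the random symmetric matrix and the seed are arbitrary): the eigenvalues of a real symmetric matrix, computed over \mathbb{C}, all come out real, and each one really is an eigenvalue of the original real matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    S = A + A.T                              # a random real symmetric matrix

    # Complexify and compute eigenvalues over C.
    eigvals, eigvecs = np.linalg.eig(S.astype(complex))
    print(np.allclose(eigvals.imag, 0))      # True: every eigenvalue is real

    # Any one of them is a real root of the characteristic polynomial of S,
    # so S - lambda*I has a nontrivial kernel and S has a real eigenvector.
    lam, v = eigvals[0].real, eigvecs[:, 0]
    print(np.allclose(S @ v, lam * v))       # True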

And so, finally, every self-adjoint transformation on a real vector space has at least one eigenvector.

August 12, 2009 Posted by | Algebra, Linear Algebra | 1 Comment

Invariant Subspaces of Self-Adjoint Transformations

Okay, today I want to nail down a lemma about the invariant subspaces (and, in particular, eigenspaces) of self-adjoint transformations. Specifically, the fact that the orthogonal complement of an invariant subspace is also invariant.

So let’s say we’ve got a subspace W\subseteq V and its orthogonal complement W^\perp. We also have a self-adjoint transformation S:V\rightarrow V so that S(w)\in W for all w\in W. What we want to show is that for every v\in W^\perp, we also have S(v)\in W^\perp.

Okay, so let’s try to calculate the inner product \langle S(v),w\rangle for an arbitrary w\in W.

\displaystyle\langle S(v),w\rangle=\langle v,S(w)\rangle=0

since S is self-adjoint, S(w) is in W, and v is in W^\perp. Then since this is zero no matter what w\in W we pick, we see that S(v)\in W^\perp. Neat!
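
Here’s a little numerical sanity check of the lemma (numpy, with an arbitrary random symmetric matrix; W is taken to be a span of eigenvectors, which is certainly an invariant subspace):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    S = A + A.T                        # a self-adjoint transformation on R^5

    # An invariant subspace W: the span of two eigenvectors of S.
    _, Q = np.linalg.eigh(S)
    W = Q[:, :2]                       # columns form an orthonormal basis of W

    # A vector in the orthogonal complement of W:
    x = rng.standard_normal(5)
    v = x - W @ (W.T @ x)              # project away the W-component

    print(np.allclose(W.T @ v, 0))          # True: v lies in W-perp
    print(np.allclose(W.T @ (S @ v), 0))    # True: so does S(v), as claimed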

August 11, 2009 Posted by | Algebra, Linear Algebra | 1 Comment

The Complex Spectral Theorem

We’re now ready to characterize those transformations on complex vector spaces which have a diagonal matrix with respect to some orthonormal basis. First of all, such a transformation must be normal. If we have a diagonal matrix we can find the matrix of the adjoint by taking its conjugate transpose, and this will again be diagonal. Since any two diagonal matrices commute, the transformation must commute with its adjoint, and is therefore normal.

On the other hand, let’s start with a normal transformation N and see what happens as we try to diagonalize it. First, since we’re working over \mathbb{C} here, we can pick an orthonormal basis that gives us an upper-triangular matrix and call the basis \left\{e_i\right\}_{i=1}^n. Now, I assert that this matrix already is diagonal when N is normal.

Let’s write out the matrices for N

\displaystyle\begin{pmatrix}a_{1,1}&\cdots&a_{1,n}\\&\ddots&\vdots\\{0}&&a_{n,n}\end{pmatrix}

and N^*

\displaystyle\begin{pmatrix}\overline{a_{1,1}}&&0\\\vdots&\ddots&\\\overline{a_{1,n}}&\cdots&\overline{a_{n,n}}\end{pmatrix}

Now we can see that N(e_1)=a_{1,1}e_1, while N^*(e_1)=\overline{a_{1,1}}e_1+\dots+\overline{a_{1,n}}e_n. Since the basis is orthonormal, it’s easy to calculate the squared-lengths of these two:

\displaystyle\begin{aligned}\lVert N(e_1)\rVert^2&=\lvert a_{1,1}\rvert^2\\\lVert N^*(e_1)\rVert^2&=\lvert a_{1,1}\rvert^2+\dots+\lvert a_{1,n}\rvert^2\end{aligned}

But since N is normal, these two must be the same. And so all the entries other than maybe a_{1,1} in the first row of our matrix must be zero. We can then repeat this reasoning with the basis vector e_2, and reach a similar conclusion about the second row, and so on until we see that all the entries above the diagonal must be zero.

That is, not only is it necessary that a transformation be normal in order to diagonalize it, it’s also sufficient. Any normal transformation on a complex vector space has an orthonormal basis of eigenvectors.
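
If you’d like to watch this happen in practice, here’s a sketch using the Schur decomposition from scipy (assuming scipy is on hand; the matrix is the normal example from the “Normal Transformations” post below). Schur gives an orthonormal basis in which the matrix is upper-triangular, and for a normal matrix the triangular factor comes out diagonal, up to rounding.

    import numpy as np
    from scipy.linalg import schur

    N = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])          # a normal matrix (see the post below)

    # Upper-triangular form with respect to an orthonormal basis: N = Z T Z*.
    T, Z = schur(N, output='complex')

    off_diagonal = T - np.diag(np.diag(T))
    print(np.allclose(off_diagonal, 0))        # True: the triangular form is diagonal
    print(np.allclose(Z @ T @ Z.conj().T, N))  # True: and it reconstructs N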

Now if we have an arbitrary orthonormal basis — say N is a transformation on \mathbb{C}^n with the standard basis already floating around — we may want to work with the matrix of N with respect to this basis. If this were our basis of eigenvectors, N would have the diagonal matrix \Lambda=\Bigl(\lambda_i\delta_{ij}\Bigr). But we may not be so lucky. Still, we can perform a change of basis using the basis of eigenvectors to fill in the columns of the change-of-basis matrix. And since we’re going from one orthonormal basis to another, this will be unitary!

Thus a normal transformation is not only equivalent to a diagonal transformation, it is unitarily equivalent. That is, the matrix of any normal transformation can be written as U\Lambda U^{-1} for a diagonal matrix \Lambda and a unitary matrix U. And any matrix which is unitarily equivalent to a diagonal matrix is normal. That is, if you take the subspace of diagonal matrices within the space of all matrices, then use the unitary group to act by conjugation on this subspace, the result is the subspace of all normal matrices, which represent normal transformations.
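
As a quick illustration (numpy again, with the same normal matrix; since its eigenvalues are distinct, eig hands back unit eigenvectors that are already orthogonal): filling the columns of U with an orthonormal basis of eigenvectors gives a unitary matrix, and conjugating the diagonal matrix of eigenvalues by U recovers N.

    import numpy as np

    N = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])               # normal, with three distinct eigenvalues

    eigvals, U = np.linalg.eig(N.astype(complex))
    Lambda = np.diag(eigvals)

    # The unit eigenvectors are pairwise orthogonal, so U is unitary...
    print(np.allclose(U.conj().T @ U, np.eye(3)))   # True

    # ...and N is unitarily equivalent to the diagonal matrix of eigenvalues.
    print(np.allclose(U @ Lambda @ U.conj().T, N))  # True: N = U Lambda U* = U Lambda U^{-1}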

Often, you’ll see this written as U\Lambda U^*, which is really the same thing of course, but there’s an interesting semantic difference. Writing it using the inverse is a similarity, which is our notion of equivalence for transformations. So if we’re thinking of our matrix as acting on a vector space, this is the “right way” to think of the spectral theorem. On the other hand, using the conjugate transpose is a congruence, which is our notion of equivalence for bilinear forms. So if we’re thinking of our matrix as representing a bilinear form, this is the “right way” to think of the spectral theorem. But of course since we’re using unitary transformations here, it doesn’t matter! Unitary equivalence of endomorphisms and of bilinear forms is exactly the same thing.

August 10, 2009 Posted by | Algebra, Linear Algebra | 9 Comments

Unitary and Orthogonal Matrices and Orthonormal Bases

I almost forgot to throw in this little observation about unitary and orthogonal matrices that will come in handy.

Let’s say we’ve got a unitary transformation U and an orthonormal basis \left\{e_i\right\}_{i=1}^n. We can write down the matrix as before

\displaystyle\begin{pmatrix}u_{1,1}&\cdots&u_{1,n}\\\vdots&\ddots&\vdots\\u_{n,1}&\cdots&u_{n,n}\end{pmatrix}

Now, each column is a vector. In particular, it’s the result of transforming a basis vector e_i by U.

\displaystyle U(e_i)=u_{1,i}e_1+\dots+u_{n,i}e_n

What do these vectors have to do with each other? Well, let’s take their inner products and find out.

\displaystyle\langle U(e_i),U(e_j)\rangle=\langle e_i,e_j\rangle=\delta_{i,j}

since U preserves the inner product. That is, the columns of the matrix of U form another orthonormal basis.
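
A two-line check with numpy, if you like (the particular unitary here just comes from the QR factorization of a random complex matrix): the matrix of all the pairwise inner products of the columns is U^*U, and it comes out as the identity.

    import numpy as np

    rng = np.random.default_rng(2)

    # Some unitary matrix; here, the Q factor of a random complex matrix.
    M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    U, _ = np.linalg.qr(M)

    # Entry (i,j) of U*U is the inner product of columns i and j.
    print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: the columns are orthonormal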

On the other hand, what if we have in mind some other orthonormal basis \left\{f_j\right\}_{j=1}^n? We can write each of these vectors out in terms of the original basis

\displaystyle f_j=a_{1,j}e_1+\dots+a_{n,j}e_n

and even get a change-of-basis transformation (like we did for general linear transformations) A defined by

\displaystyle A(e_j)=f_j=a_{1,j}e_1+\dots+a_{n,j}e_n

so the a_{i,j} are the matrix entries for A with respect to the basis \left\{e_i\right\}. This transformation A will then be unitary.

Indeed, take arbitrary vectors v=v^ie_i and w=w^je_j. Their inner product is

\displaystyle\langle v,w\rangle=\langle v^ie_i,w^je_j\rangle=\overline{v^i}w^j\langle e_i,e_j\rangle=\overline{v^i}w^j\delta_{i,j}

On the other hand, after acting by A we find

\displaystyle\langle A(v),A(w)\rangle=\langle v^iA(e_i),w^jA(e_j)\rangle=\overline{v^i}w^j\langle f_i,f_j\rangle=\overline{v^i}w^j\delta_{i,j}

since the basis \left\{f_j\right\} is orthonormal as well.

To sum up: with respect to an orthonormal basis, the columns of a unitary matrix form another orthonormal basis. Conversely, writing any other orthonormal basis in terms of the original basis and using these coefficients as the columns of a matrix gives a unitary matrix. The same holds true for orthogonal matrices, with similar reasoning all the way through. And both of these are parallel to the situation for general linear transformations: the columns of an invertible matrix with respect to any basis form another basis, and conversely.
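
Here’s the converse direction in the real (orthogonal) case, as a numpy sketch (the orthonormal basis is just an arbitrary one produced by QR): stacking the coordinates of an orthonormal basis into columns gives an orthogonal change-of-basis matrix.

    import numpy as np

    rng = np.random.default_rng(3)

    # An orthonormal basis {f_j} of R^4 (any orthonormal basis would do).
    F, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    f = [F[:, j] for j in range(4)]

    # Use the coordinates of the f_j as the columns of the change-of-basis matrix A.
    A = np.column_stack(f)

    print(np.allclose(A @ np.eye(4)[:, 1], f[1]))   # True: A sends e_2 to f_2
    print(np.allclose(A.T @ A, np.eye(4)))          # True: A is orthogonal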

August 7, 2009 Posted by | Algebra, Linear Algebra | 3 Comments

Eigenvalues and Eigenvectors of Normal Transformations

Let’s say we have a normal transformation N. It turns out we can say some interesting things about its eigenvalues and eigenvectors.

First off, it turns out that the eigenvalues of N^* are exactly the complex conjugates of those of N (or exactly the same, if we’re working over \mathbb{R}). Actually, this isn’t even special to normal operators. Indeed, if T-\lambda I_V has a nontrivial kernel, then we can take the adjoint to find that T^*-\bar{\lambda}I_V must have a nontrivial kernel as well. But if our transformation is normal, it turns out that not only do we have conjugate eigenvalues, they correspond to the same eigenvectors as well!

To see this, we do almost the same thing as before. But we get more than just a nontrivial kernel this time. Given an eigenvector v we know that \left(N-\lambda I_V\right)v=0, and so it must have length zero. But if N is normal then so is N-\lambda I_V:

\displaystyle\begin{aligned}\left(N-\lambda I_V\right)\left(N-\lambda I_V\right)^*&=\left(N-\lambda I_V\right)\left(N^*-\bar{\lambda}I_V\right)\\&=NN^*-\lambda I_VN^*-\bar{\lambda}NI_V+\lambda\bar{\lambda}I_VI_V\\&=N^*N-\lambda N^*I_V-\bar{\lambda}I_VN+\lambda\bar{\lambda}I_VI_V\\&=\left(N^*-\bar{\lambda}I_V\right)\left(N-\lambda I_V\right)\\&=\left(N-\lambda I_V\right)^*\left(N-\lambda I_V\right)\end{aligned}

and so acting by \left(N-\lambda I_V\right)^* gives the same length as acting by \left(N-\lambda I_V\right). That is:

\displaystyle0=\lVert\left(N-\lambda I_V\right)v\rVert=\lVert\left(N-\lambda I_V\right)^*v\rVert=\lVert N^*v-\bar{\lambda}v\rVert

thus by the definiteness of length, we know that N^*v-\bar{\lambda}v=0. That is, v is also an eigenvector of N^*, with eigenvalue \bar{\lambda}.

Then as a corollary we can find that not only are the eigenvectors corresponding to distinct eigenvalues linearly independent, they are actually orthogonal! Indeed, if v and w are eigenvectors of N with distinct eigenvalues \lambda and \mu, respectively, then we find

\displaystyle\begin{aligned}(\lambda-\mu)\langle v,w\rangle&=\langle\bar{\lambda}v,w\rangle-\langle v,\mu w\rangle\\&=\langle N^*v,w\rangle-\langle v,Nw\rangle\\&=\langle v,Nw\rangle-\langle v,Nw\rangle=0\end{aligned}

Since \lambda-\mu\neq0 we must conclude that \langle v,w\rangle=0, and that the two eigenvectors are orthogonal.
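
Both facts are easy to watch happen numerically. Here’s a sketch with numpy, using the normal matrix exhibited in the following post (its three eigenvalues are distinct, so the orthogonality statement applies to every pair):

    import numpy as np

    N = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    eigvals, V = np.linalg.eig(N.astype(complex))

    # Each eigenvector of N is an eigenvector of N*, with the conjugate eigenvalue...
    for lam, v in zip(eigvals, V.T):
        print(np.allclose(N.conj().T @ v, np.conj(lam) * v))   # True (three times)

    # ...and eigenvectors for distinct eigenvalues are orthogonal: V*V is the identity.
    print(np.allclose(V.conj().T @ V, np.eye(3)))              # True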

August 6, 2009 Posted by | Algebra, Linear Algebra | Leave a comment

Normal Transformations

All the transformations in our analogy — self-adjoint and unitary (or orthogonal), and even anti-self-adjoint (antisymmetric and “skew-Hermitian”) transformations satisfying T^*=-T — all satisfy one slightly subtle but very interesting property: they all commute with their adjoints. Self-adjoint and anti-self-adjoint transformations do because any transformation commutes with itself and also with its negative, since negation is just scalar multiplication. Orthogonal and unitary transformations do because every transformation commutes with its own inverse.

Now in general most pairs of transformations do not commute, so there’s no reason to expect this to happen commonly. Still, if we have a transformation N so that N^*N=NN^*, we call it a “normal” transformation.

Let’s bang out an equivalent characterization of normal operators while we’re at it, so we can get an idea of what they look like geometrically. Take any vector \lvert v\rangle, hit it with N, and calculate its squared-length (I’m not specifying real or complex, since the notation is the same either way). We get

\displaystyle\lVert\lvert N(v)\rangle\rVert^2=\langle N(v)\vert N(v)\rangle=\langle v\rvert N^*N\lvert v\rangle

On the other hand, we could do the same thing but using N^* instead of N.

\displaystyle\lVert\lvert N^*(v)\rangle\rVert^2=\langle N^*(v)\vert N^*(v)\rangle=\langle v\rvert NN^*\lvert v\rangle

But if N is normal, then N^*N and NN^* are the same, and thus \lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2 for all vectors \lvert v\rangle.

Conversely, if \lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2 for all vectors \lvert v\rangle, then we can use the polarization identities to conclude that N^*N=NN^*.
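
To make the characterization a bit more tangible, here’s a small numpy sketch (the unitary example comes from a QR factorization and the non-normal example is a shear; both are just illustrative choices):

    import numpy as np

    rng = np.random.default_rng(4)

    # A normal transformation (a unitary one, for instance)...
    M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    U, _ = np.linalg.qr(M)
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    print(np.isclose(np.linalg.norm(U @ v), np.linalg.norm(U.conj().T @ v)))  # True

    # ...versus a non-normal one, where the two lengths differ.
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    w = np.array([1.0, 2.0])
    print(np.isclose(np.linalg.norm(B @ w), np.linalg.norm(B.T @ w)))         # False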

So normal transformations are exactly those for which the length of any vector is the same whether we apply the transformation or its adjoint. For self-adjoint and anti-self-adjoint transformations this is pretty obvious since they’re (almost) the same thing anyway. For orthogonal and unitary transformations, they don’t change the lengths of vectors at all, so this makes sense.

Just to be clear, though, there are matrices that are normal, but which aren’t any of the special kinds we’ve talked about so far. For example, the transformation represented by the matrix

\displaystyle\begin{pmatrix}1&1&0\\{0}&1&1\\1&0&1\end{pmatrix}

has its adjoint represented by

\displaystyle\begin{pmatrix}1&0&1\\1&1&0\\{0}&1&1\end{pmatrix}

which is neither the original transformation nor its negative, so it’s neither self-adjoint nor anti-self-adjoint. We can calculate their product in either order to get

\displaystyle\begin{pmatrix}2&1&1\\1&2&1\\1&1&2\end{pmatrix}

Since we get the same answer either way, the transformation is normal, but it’s clearly not unitary, because if it were we’d get the identity matrix here.
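
For the skeptical, here’s the same computation done by numpy; the checks below just confirm what we found by hand.

    import numpy as np

    N = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    print(np.allclose(N.T @ N, N @ N.T))               # True: N commutes with its adjoint
    print(np.allclose(N.T, N), np.allclose(N.T, -N))   # False, False: not (anti-)self-adjoint
    print(np.allclose(N.T @ N, np.eye(3)))             # False: not orthogonal either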

August 5, 2009 Posted by | Algebra, Linear Algebra | 3 Comments

The Determinant of a Positive-Definite Transformation

Let’s keep pushing the analogy we’ve got going.

First, we know that the determinant of the adjoint of a transformation is the complex conjugate of the determinant of the original transformation (or just the same, for a real transformation). So what about self-adjoint transformations? We’ve said that these are analogous to real numbers, and indeed their determinants are real numbers. If we have a transformation H satisfying H^*=H, then we can take determinants to find

\displaystyle\det(H)=\det(H^*)=\overline{\det(H)}

and so the determinant is real.

What if H is not only self-adjoint, but positive-definite? We would like the determinant to actually be a positive real number.

Well, first let’s consider the eigenvalues of H. If \lvert v\rangle is an eigenvector we have H\lvert v\rangle=\lambda\lvert v\rangle for some scalar \lambda. Then we can calculate

\displaystyle\langle v\rvert H\lvert v\rangle=\langle v\rvert\lambda\lvert v\rangle=\lambda\langle v\vert v\rangle=\lambda\lVert v\rVert^2

If H is to be positive-definite, this must be positive, and so \lambda itself must be positive. Thus the eigenvalues of a positive-definite transformation are all positive.

Now if we’re working with a complex transformation we’re done. We can pick a basis so that the matrix for H is upper-triangular, and then its determinant is the product of its eigenvalues. Since the eigenvalues are all positive, so is the determinant.

But what happens over the real numbers? Now we might not be able to put the transformation into an upper-triangular form. But we can put it into an almost upper-triangular form. The determinant is then the product of the determinants of the blocks along the diagonal. The 1\times1 blocks are just eigenvalues, which still must be positive.

The 2\times2 blocks, on the other hand, correspond to eigenpairs. They have trace \tau and determinant \delta, and these must satisfy \tau^2<4\delta, or else we could decompose the block further. But \tau^2 is certainly nonnegative, and so the determinant \delta has to be strictly positive in any of these blocks. And thus the product of the determinants of the blocks down the diagonal is again positive.

So either way, the determinant of a positive-definite transformation is positive.
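
Here’s a quick numerical illustration (numpy; the positive-definite matrix is built as A^TA+I, which is one easy way to cook up an example):

    import numpy as np

    rng = np.random.default_rng(5)

    # A random positive-definite symmetric matrix: A^T A is positive-semidefinite,
    # and adding the identity pushes every eigenvalue strictly above zero.
    A = rng.standard_normal((5, 5))
    H = A.T @ A + np.eye(5)

    eigvals = np.linalg.eigvalsh(H)
    print(np.all(eigvals > 0))                              # True: all eigenvalues positive
    print(np.linalg.det(H) > 0)                             # True: so the determinant is too
    print(np.isclose(np.linalg.det(H), np.prod(eigvals)))   # True: det = product of eigenvalues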

August 3, 2009 Posted by | Algebra, Linear Algebra | 7 Comments
