The Matrix of the Adjoint
I hear joints popping as I stretch and try to get back into the main line of my posts.
We left off defining what we mean by a matrix element of a linear transformation. Let’s see how this relates to adjoints.
We start with a linear transformation $T:V\to W$ between two inner product spaces. Given any vectors $v\in V$ and $w\in W$ we have the matrix element $\langle w,T(v)\rangle_W$, using the inner product on $W$. We can also write down the adjoint transformation $T^*:W\to V$, and its matrix element $\langle v,T^*(w)\rangle_V$, using the inner product on $V$.
But the inner product on $V$ is (conjugate) symmetric. That is, we know that the matrix element $\langle v,T^*(w)\rangle_V$ can also be written as $\overline{\langle T^*(w),v\rangle_V}$. And we also have the adjoint relation $\langle T^*(w),v\rangle_V=\langle w,T(v)\rangle_W$. Putting these together, we find

$$\langle v,T^*(w)\rangle_V=\overline{\langle w,T(v)\rangle_W}$$
So the matrix elements of and are pretty closely related.
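This relation is easy to check numerically. Here is a minimal sketch in NumPy, taking $V=\mathbb{C}^3$ and $W=\mathbb{C}^2$ with a randomly chosen $T$ (the matrix and vectors here are illustrative assumptions, not anything from the post); `np.vdot(a, b)` computes an inner product that is conjugate-linear in its first slot:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical linear map T : C^3 -> C^2, as a 2x3 complex matrix.
T = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
T_star = T.conj().T  # the adjoint, a 3x2 matrix

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# <v, T*(w)> in V should equal the conjugate of <w, T(v)> in W.
lhs = np.vdot(v, T_star @ w)
rhs = np.conj(np.vdot(w, T @ v))
assert np.isclose(lhs, rhs)
```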
What if we pick whole orthonormal bases $\{e_i\}$ of $V$ and $\{f_j\}$ of $W$? Now we can write out an entire matrix of $T$ as $t_{ji}=\langle f_j,T(e_i)\rangle_W$. Similarly, we can write a matrix of $T^*$ as

$$t^*_{ij}=\langle e_i,T^*(f_j)\rangle_V=\overline{\langle f_j,T(e_i)\rangle_W}=\overline{t_{ji}}$$
That is, we get the matrix for the adjoint transformation by taking the original matrix, swapping the two indices, and taking the complex conjugate of each entry. This “conjugate transpose” operation on matrices reflects adjunction on transformations.
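As a sanity check, we can build the matrix of the adjoint entry by entry from its defining matrix elements, using the standard orthonormal bases of $\mathbb{C}^3$ and $\mathbb{C}^2$, and compare it to the conjugate transpose. The particular matrix below is an assumed example for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# A hypothetical map T : C^3 -> C^2.
T = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# Orthonormal bases: standard basis vectors e_i of C^3 and f_j of C^2.
e = np.eye(3)
f = np.eye(2)

# Matrix of the adjoint, entry by entry:
# (T*)_{ij} = <e_i, T*(f_j)> = conjugate of <f_j, T(e_i)>.
S = np.empty((3, 2), dtype=complex)
for i in range(3):
    for j in range(2):
        S[i, j] = np.conj(np.vdot(f[:, j], T @ e[:, i]))

# S agrees with the conjugate transpose of T's matrix.
assert np.allclose(S, T.conj().T)
```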