The Unapologetic Mathematician

Mathematics for the interested outsider

Orthogonal transformations

Given a form on a vector space V represented by the transformation B and a linear map T:V\rightarrow V, we’ve seen how to transform B by the action of T. That is, the space of all bilinear forms is a vector space which carries a representation of \mathrm{GL}(V). But given a particular form B, what is its stabilizer? That is, which transformations in \mathrm{GL}(V) send B back to itself?

Before we answer this, let’s look at it in a slightly different way. Given a form B we have a way of pairing vectors in V to get scalars. On the other hand, if we have a transformation T we could use it on the vectors before pairing them. We’re looking for those transformations so that for every pair of vectors the result of the pairing by B is the same before and after applying T.

So let’s look at the action we described last time: the form B is sent to T^*BT. So we’re looking for all T so that

\displaystyle T^*BT=B

We say that such a transformation is B-orthogonal, and the subgroup of all such transformations is the “orthogonal group” \mathrm{O}(V,B)\subseteq \mathrm{GL}(V). Sometimes, since the vector space V is sort of implicit in the form B, we abbreviate the group to \mathrm{O}(B).
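To make the condition concrete, here is a minimal numerical sketch, not from the original post: we represent B and T by real matrices (so the adjoint is just the transpose) and check T^*BT=B directly. The particular form of signature (1,1), the hyperbolic rotation T, and the helper name is_B_orthogonal are all illustrative choices.

```python
# A minimal sketch: matrices stand in for the form B and the transformation T,
# and "adjoint" is the ordinary transpose since we work over the reals.
import numpy as np

def is_B_orthogonal(T, B, tol=1e-12):
    """Return True if T preserves the form B, i.e. T^T B T == B."""
    return np.allclose(T.T @ B @ T, B, atol=tol)

# A non-degenerate symmetric form of signature (1,1) on R^2,
# together with a hyperbolic rotation that preserves it.
B = np.array([[1.0,  0.0],
              [0.0, -1.0]])
t = 0.7
T = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])   # an element of O(1,1)

print(is_B_orthogonal(T, B))  # True: T lies in the stabilizer O(B)
```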

Now there’s one orthogonal group that’s particularly useful. If we’ve got an inner-product space V (the setup for having our bra-ket notation) then the inner product itself is a form, and it’s described by the identity transformation. That is, the orthogonality condition in this case is that

\displaystyle T^*T=I_V

A transformation is orthogonal if its adjoint is the same as its inverse. This is the version of orthogonality that we’re most familiar with. Commonly, when we say that a transformation is “orthogonal” with no qualification about what form we’re using, we just mean that this condition holds.
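As a quick sketch of this familiar case, again assuming a real inner-product space so that the adjoint is the transpose: a rotation of the plane satisfies the condition, and its transpose really is its inverse.

```python
# Sketch: a plane rotation Q is orthogonal in the usual sense, Q^* Q = I.
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation of the plane

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^* Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))   # True: adjoint equals inverse
```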

Let’s take a look at this last condition geometrically. We use the inner product to define a notion of (squared) length \langle v\vert v\rangle and, for unit vectors, a notion of (the cosine of the) angle \langle w\vert v\rangle. So let’s transform the space by T and see what happens to our inner product, and thus to lengths and angles.

\displaystyle\langle T(w)\vert T(v)\rangle=\langle w\rvert T^*T\lvert v\rangle

First off, note that no matter which invertible T we use, the transformation T^*T in the middle is self-adjoint and positive-definite, and so the new form is symmetric and positive-definite, and thus defines another inner product. But when is it the same inner product? When T^*T=I_V, of course! For then we have
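Here’s a small illustration of that first observation, under the same real-matrix assumptions as before: for an invertible but non-orthogonal T, the matrix of T^*T is still symmetric and positive-definite, so it defines a perfectly good inner product, just not the original one.

```python
# Sketch: for an invertible, non-orthogonal T, the matrix G = T^T T is symmetric
# and positive-definite (a new inner product), but it is not the identity.
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 1.0]])           # invertible, but not orthogonal
G = T.T @ T                           # Gram matrix of the transformed pairing

print(np.allclose(G, G.T))                    # True: symmetric
print(np.all(np.linalg.eigvalsh(G) > 0))      # True: positive-definite
print(np.allclose(G, np.eye(2)))              # False: not the original inner product
```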

\displaystyle\langle T(w)\vert T(v)\rangle=\langle w\rvert T^*T\lvert v\rangle=\langle w\rvert I_V\lvert v\rangle=\langle w\vert v\rangle

So orthogonal transformations are exactly those which preserve the notions of length and angle defined by the inner product. Geometrically, they correspond to rotations and to reflections (which reverse orientation), but they leave the length of every vector and the angle between any pair of vectors unchanged.
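Finally, a numerical check of this conclusion, with the same illustrative assumptions as the earlier sketches: an orthogonal transformation leaves the inner product of any pair of vectors, and hence their lengths, unchanged.

```python
# Sketch: an orthogonal Q preserves inner products, and therefore lengths and angles.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.1
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal: a rotation

w, v = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(np.dot(Q @ w, Q @ v), np.dot(w, v)))         # inner product preserved
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))   # length preserved
```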

July 27, 2009 Posted by | Algebra, Linear Algebra | 9 Comments

   
