## A Lemma on Reflections

Here’s a fact we’ll find useful soon enough as we talk about reflections. Hopefully it will also help us get back into thinking about linear transformations and inner product spaces. However, if the linear algebra gets a little hairy (or if you’re just joining us) you can take this fact as given. Remember that we’re looking at a real vector space $V$ equipped with an inner product $\langle\cdot,\cdot\rangle$.

Now, let’s say $\Phi$ is some finite collection of vectors which span $V$ (it doesn’t matter if they’re linearly independent or not). Let $\sigma$ be a linear transformation which leaves $\Phi$ invariant. That is, if we pick any vector $\beta \in \Phi$ then the image $\sigma(\beta)$ will be another vector in $\Phi$. Let’s also assume that there is some $(n-1)$-dimensional subspace $P \subseteq V$ which $\sigma$ leaves completely untouched. That is, $\sigma(p) = p$ for every $p \in P$. Finally, say that there’s some $\alpha \in \Phi$ so that $\sigma(\alpha) = -\alpha$ (clearly $\alpha \notin P$) and also that the line $\mathbb{R}\alpha$ is invariant under $\sigma$. Then I say that $\sigma = \sigma_\alpha$ and $P = P_\alpha = (\mathbb{R}\alpha)^\perp$.
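To see the hypotheses in action, here is a minimal numerical sketch (my own toy example, not from the post): take $V = \mathbb{R}^2$ with the standard inner product, $\Phi = \{\pm e_1, \pm e_2\}$, $\alpha = e_1$, and $P$ the $y$-axis.

```python
# Toy illustration (my own example): V = R^2, Phi = {±e1, ±e2},
# alpha = e1, P = the y-axis. sigma negates the first coordinate,
# so it leaves Phi invariant, fixes P pointwise, and sends alpha to -alpha.

def reflect(v, a):
    """The reflection sigma_a(v) = v - 2<v,a>/<a,a> * a."""
    dot_va = v[0] * a[0] + v[1] * a[1]
    dot_aa = a[0] * a[0] + a[1] * a[1]
    c = 2 * dot_va / dot_aa
    return (v[0] - c * a[0], v[1] - c * a[1])

alpha = (1.0, 0.0)
phi = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]

def sigma(v):
    """Defined directly by its action: negate the first coordinate."""
    return (-v[0], v[1])

# sigma leaves Phi invariant...
assert all(sigma(v) in phi for v in phi)
# ...fixes the hyperplane P = {(0, t)} pointwise...
assert sigma((0.0, 3.5)) == (0.0, 3.5)
# ...and sends alpha to -alpha.
assert sigma(alpha) == (-1.0, 0.0)

# The lemma's conclusion holds here: sigma agrees with sigma_alpha.
for v in phi + [(2.0, -7.0)]:
    assert sigma(v) == reflect(v, alpha)
```

Of course the lemma's point is that the last check is forced by the first three, without knowing a formula for $\sigma$ in advance.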

We’ll proceed by actually considering the transformation $\tau = \sigma\sigma_\alpha$, and showing that this is the identity. First off, $\tau$ definitely fixes $\alpha$, since

$$\tau(\alpha) = \sigma(\sigma_\alpha(\alpha)) = \sigma(-\alpha) = \alpha$$
so $\tau$ acts as the identity on the line $\mathbb{R}\alpha$. In fact, I assert that $\tau$ also acts as the identity on the quotient space $V/\mathbb{R}\alpha$. Indeed, $\sigma_\alpha$ acts trivially on $P_\alpha$, and every vector in $V/\mathbb{R}\alpha$ has a unique representative in $P_\alpha$. And then $\sigma$ acts trivially on $P$, and every vector in $V/\mathbb{R}\alpha$ has a unique representative in $P$.

This does *not*, however, mean that $\tau$ acts trivially on any given complement of $\mathbb{R}\alpha$. All we really know at this point is that for every $v \in V$ the difference between $\tau(v)$ and $v$ is some scalar multiple of $\alpha$. On the other hand, remember how we found upper-triangular matrices before. This time we peeled off one vector $\alpha$ and the remaining transformation was the identity on the remaining $(n-1)$-dimensional space. This tells us that all of our eigenvalues are $1$, and the characteristic polynomial is $(T-1)^n$, where $n = \dim(V)$. We can evaluate this on the transformation $\tau$ to find that

$$(\tau - 1_V)^n = 0$$
Now let’s try to use the collection of vectors $\Phi$. We assumed that both $\sigma$ and $\sigma_\alpha$ send vectors in $\Phi$ back to other vectors in $\Phi$, and so the same must be true of $\tau$. But there are only finitely many vectors (say $k$ of them) in $\Phi$ to begin with, so $\tau$ must act as some sort of permutation of the vectors in $\Phi$. But every permutation in $S_k$ has an order that divides $k!$. That is, applying $\tau$ a total of $k!$ times must send every vector in $\Phi$ back to itself. But since $\Phi$ is a spanning set for $V$, this means that $\tau^{k!} = 1_V$, or that

$$\tau^{k!} - 1_V = 0$$
So we have two polynomial relations satisfied by $\tau$, and $\tau$ will clearly satisfy any linear combination of these relations. But Euclid’s algorithm shows us that we can write the greatest common divisor of two polynomials as such a linear combination, and so $\tau$ must satisfy the greatest common divisor of $(T-1)^n$ and $T^{k!}-1$. It’s not hard to show that this greatest common divisor is $T-1$, which means that we must have $\tau - 1_V = 0$, or $\tau = 1_V$. And since $\sigma_\alpha^2 = 1_V$, multiplying $\tau = \sigma\sigma_\alpha$ on the right by $\sigma_\alpha$ gives $\sigma = \sigma_\alpha$; the hyperplane fixed by $\sigma_\alpha$ is $P_\alpha$, so $P = P_\alpha$ as well.
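The Euclidean-algorithm step can be carried out concretely. Here is a hedged sketch (the helpers `poly_rem` and `poly_gcd` are my own, not from the post) computing $\gcd\big((T-1)^2,\, T^6 - 1\big) = T - 1$ over the rationals, i.e. the case $n = 2$, $k! = 6$:

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists, highest degree first.
# My own helpers (not from the post) implementing Euclid's algorithm.

def trim(p):
    """Drop leading zero coefficients, keeping at least one entry."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_rem(a, b):
    """Remainder of a modulo b (b nonzero)."""
    a, b = trim(a), trim(b)
    while len(a) >= len(b) and a != [Fraction(0)]:
        factor = a[0] / b[0]  # kill the leading coefficient of a
        a = [a[i] - factor * b[i] if i < len(b) else a[i]
             for i in range(len(a))]
        a = trim(a)
    return a

def poly_gcd(a, b):
    """Monic gcd of two polynomials via Euclid's algorithm."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while trim(b) != [Fraction(0)]:
        a, b = b, poly_rem(a, b)
    a = trim(a)
    return [c / a[0] for c in a]  # normalize to a monic polynomial

# (T - 1)^2 = T^2 - 2T + 1  and  T^6 - 1:
g = poly_gcd([1, -2, 1], [1, 0, 0, 0, 0, 0, -1])
assert g == [1, -1]  # i.e. the polynomial T - 1
```

The same computation goes through for any $n \geq 1$ and $m \geq 1$: every root of $T^m - 1$ other than $1$ is a root of unity that $(T-1)^n$ misses, so only the factor $T - 1$ survives.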

It’s sort of convoluted, but there are some neat tricks along the way, and we’ll be able to put this result to good use soon.

[...] is all the data we need to invoke our lemma, and conclude that is actually equal to . Specifying the action on the generators of is enough to [...]

Pingback by The Weyl Group of a Root System « The Unapologetic Mathematician | January 21, 2010 |

You’ve a “$lateh_\alpha$” which ought to be fixed.

Comment by Blake Stacey | February 21, 2010 |