The Existence of Bases for Root Systems
We’ve defined what a base for a root system is, but we haven’t provided any evidence yet that they even exist. Today we’ll not only see that every root system has a base, but we’ll show how all possible bases arise. This will be sort of a long and dense one.
First of all, we observe that any hyperplane has measure zero, and so any finite collection of them will too. Thus the collection of all the hyperplanes $P_\alpha$ perpendicular to the vectors $\alpha\in\Phi$ cannot fill up all of $E$. We call vectors in one of these hyperplanes “singular”, and vectors in none of them “regular”.
When $\gamma$ is regular, it divides $\Phi$ into two collections. A vector $\alpha$ is in $\Phi^+(\gamma)$ if $\alpha\in\Phi$ and $\langle\gamma,\alpha\rangle>0$, and we have a similar definition for $\Phi^-(\gamma)$. It should be clear that $\Phi^-(\gamma)=-\Phi^+(\gamma)$, and that every vector $\alpha\in\Phi$ is in one or the other; otherwise $\gamma$ would be in $P_\alpha$. For a regular $\gamma$, we say that $\alpha\in\Phi^+(\gamma)$ is “decomposable” if $\alpha=\beta_1+\beta_2$ for $\beta_1,\beta_2\in\Phi^+(\gamma)$. Otherwise, we say that $\alpha$ is “indecomposable”.
Now we can state our existence theorem. Given a regular $\gamma$, let $\Delta(\gamma)$ be the set of indecomposable roots in $\Phi^+(\gamma)$. Then $\Delta(\gamma)$ is a base of $\Phi$, and every base of $\Phi$ arises in this manner. We will prove this in a number of steps.
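Before diving into the proof, it may help to see the construction in action. The following is a minimal brute-force sketch (not anything the proof depends on), assuming the $A_2$ root system realized concretely in the plane; the coordinates and the particular regular $\gamma$ are illustrative choices of mine.

```python
import itertools
import numpy as np

# Roots of A_2, realized in the plane; alpha and beta are the usual
# simple roots, meeting at an angle of 120 degrees.
alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
roots = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]

# An (arbitrarily chosen) regular vector: perpendicular to no root.
gamma = np.array([0.1, 1.0])
assert all(abs(gamma @ r) > 1e-12 for r in roots)

# Phi^+(gamma): the roots on gamma's side of its hyperplane.
positive = [r for r in roots if gamma @ r > 0]

def decomposable(r, pos):
    """True if r is a sum of two vectors from pos, i.e. from Phi^+(gamma)."""
    return any(np.allclose(r, b1 + b2)
               for b1, b2 in itertools.combinations_with_replacement(pos, 2))

# Delta(gamma): the indecomposable positive roots.
base = [r for r in positive if not decomposable(r, positive)]
print("Phi^+(gamma):", [tuple(np.round(r, 3)) for r in positive])
print("Delta(gamma):", [tuple(np.round(r, 3)) for r in base])
```

Run as written, this should report $\Phi^+(\gamma)=\{\alpha,\beta,\alpha+\beta\}$ and $\Delta(\gamma)=\{\alpha,\beta\}$: the one discarded root decomposes as $\alpha+\beta$, and the two survivors have inner product $-1/2\leq0$, which previews the second step below.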
First off, every vector in $\Phi^+(\gamma)$ is a nonnegative integral linear combination of the vectors in $\Delta(\gamma)$. Otherwise there is some $\alpha\in\Phi^+(\gamma)$ that can’t be written like that, and we can choose $\alpha$ so that $\langle\gamma,\alpha\rangle$ is as small as possible. Now $\alpha$ itself can’t be indecomposable, so we must have $\alpha=\beta_1+\beta_2$ for some two vectors $\beta_1,\beta_2\in\Phi^+(\gamma)$, and so $\langle\gamma,\alpha\rangle=\langle\gamma,\beta_1\rangle+\langle\gamma,\beta_2\rangle$. Each of these two inner products is strictly positive, so each is strictly smaller than $\langle\gamma,\alpha\rangle$, and to avoid contradicting the minimality of $\langle\gamma,\alpha\rangle$ we must be able to write each of $\beta_1$ and $\beta_2$ as a nonnegative integral linear combination of vectors in $\Delta(\gamma)$. But then we can write $\alpha=\beta_1+\beta_2$ in this form after all! The assertion follows.
Second, if $\alpha$ and $\beta$ are distinct vectors in $\Delta(\gamma)$ then $\langle\alpha,\beta\rangle\leq0$. Indeed, by our lemma, if $\langle\alpha,\beta\rangle>0$ then $\alpha-\beta$ is a root, since distinct indecomposables cannot be proportional. And so either $\alpha-\beta$ or $\beta-\alpha$ lies in $\Phi^+(\gamma)$. In the first case, we can write $\alpha=\beta+(\alpha-\beta)$, so $\alpha$ is decomposable. In the second case, we can similarly write $\beta=\alpha+(\beta-\alpha)$, so $\beta$ is decomposable. And thus we have a contradiction and the assertion follows.
Next, $\Delta(\gamma)$ is linearly independent. If we have a linear combination $\sum c_\alpha\alpha=0$ of the vectors in $\Delta(\gamma)$, then we can separate out the vectors $\alpha$ for which the coefficient $c_\alpha>0$ and those $\beta$ for which $c_\beta<0$, and write

$$\sum_\alpha s_\alpha\alpha=\sum_\beta t_\beta\beta$$

with all coefficients positive. Call this common sum $v$ and calculate

$$\langle v,v\rangle=\left\langle\sum_\alpha s_\alpha\alpha,\sum_\beta t_\beta\beta\right\rangle=\sum_{\alpha,\beta}s_\alpha t_\beta\langle\alpha,\beta\rangle$$

Since each $\langle\alpha,\beta\rangle\leq0$, this whole sum must be nonpositive, which can only happen if $v=0$. But then

$$0=\langle\gamma,v\rangle=\sum_\alpha s_\alpha\langle\gamma,\alpha\rangle$$

which forces all the $s_\alpha=0$, since each $\langle\gamma,\alpha\rangle$ is strictly positive. Similarly, all the $t_\beta=0$, and thus the original linear combination must have been trivial. Thus $\Delta(\gamma)$ is linearly independent.
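These last two steps are easy to spot-check numerically. Here’s a hedged sketch along the lines of the one above (the helper is repeated so the snippet stands alone), this time drawing many random regular vectors against a concrete $B_2$ realization of my choosing:

```python
import itertools
import numpy as np

def decomposable(r, pos):
    """True if r is a sum of two vectors from pos."""
    return any(np.allclose(r, b1 + b2)
               for b1, b2 in itertools.combinations_with_replacement(pos, 2))

# B_2 realized in the plane: the roots +-e1, +-e2, +-(e1+e2), +-(e1-e2).
e1, e2 = np.eye(2)
b2 = [e1, e2, e1 + e2, e1 - e2]
b2 += [-v for v in b2]

rng = np.random.default_rng(0)
for _ in range(100):
    gamma = rng.normal(size=2)
    if any(abs(gamma @ r) < 1e-9 for r in b2):
        continue  # singular draws have measure zero; skip them just in case
    pos = [r for r in b2 if gamma @ r > 0]
    delta = [r for r in pos if not decomposable(r, pos)]
    # Step two: distinct indecomposables have nonpositive inner products.
    for i, a in enumerate(delta):
        for b in delta[i + 1:]:
            assert a @ b <= 1e-12
    # Step three: Delta(gamma) is linearly independent.
    assert np.linalg.matrix_rank(np.array(delta)) == len(delta)
print("all checks passed")
```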
Now we can show that $\Delta(\gamma)$ is a base. Every vector in $\Phi^+(\gamma)$ is indeed a nonnegative integral linear combination of the vectors in $\Delta(\gamma)$. Since $\Phi^-(\gamma)=-\Phi^+(\gamma)$, every vector in this set is a nonpositive integral linear combination of the vectors in $\Delta(\gamma)$. And every vector in $\Phi$ is in one or the other of these sets. Also, since $\Phi$ spans $E$ we find that $\Delta(\gamma)$ spans $E$ as well. But since it’s linearly independent, it must be a basis. And so it satisfies both of the criteria to be a base.
Finally, every base $\Delta$ is of the form $\Delta(\gamma)$ for some regular $\gamma$. Indeed, we just have to find some $\gamma$ for which $\langle\gamma,\delta\rangle>0$ for each $\delta\in\Delta$. Then since any root $\beta\in\Phi$ is an integral linear combination of $\Delta$ with all coefficients of the same sign, we can verify that $\langle\gamma,\beta\rangle\neq0$ for all $\beta\in\Phi$, proving that $\gamma$ is regular, and that $\Phi^+=\Phi^+(\gamma)$. Then the vectors of $\Delta$ are clearly indecomposable in $\Phi^+(\gamma)$, showing that $\Delta\subseteq\Delta(\gamma)$. But these sets contain the same number of elements, since they’re both bases of $E$, and so $\Delta=\Delta(\gamma)$.
The only loose end is showing that such a $\gamma$ exists. I’ll actually go one better and show that for any basis $\{\delta_1,\dots,\delta_n\}$ of $E$, the intersection of the “half-spaces” $\{\gamma\in E\mid\langle\gamma,\delta_i\rangle>0\}$ is nonempty. To see this, define

$$\delta_i^*=\delta_i-\sum_{j\neq i}\frac{\langle\delta_i,\delta_j\rangle}{\langle\delta_j,\delta_j\rangle}\delta_j$$

This is what’s left of the basis vector $\delta_i$ after subtracting off its projection onto each of the other basis vectors $\delta_j$, leaving its projection onto the line perpendicular to all of them. Then consider the vector

$$\gamma=\sum_{i=1}^n r_i\delta_i^*$$

where each $r_i>0$. It’s a straightforward computation to show that $\langle\gamma,\delta_i\rangle>0$ for each $i$, and so $\gamma$ is just such a vector as we’re claiming exists.
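For the computationally inclined, here is a sketch of this construction. Rather than the projection formula above, it computes the dual basis directly by solving against the Gram matrix, which is what the Gram-Schmidt correction suggested in the comments below amounts to; the function name and the choice of basis are mine.

```python
import numpy as np

def positive_gamma(basis, weights=None):
    """Return gamma with <gamma, delta_i> = weights[i] > 0 for each i.

    basis: rows are the basis vectors delta_i of E.
    """
    D = np.asarray(basis, dtype=float)
    gram = D @ D.T  # Gram matrix of inner products <delta_i, delta_j>
    w = np.ones(len(D)) if weights is None else np.asarray(weights, float)
    # Writing gamma = sum_i c_i delta_i, we need gram @ c = w.
    c = np.linalg.solve(gram, w)
    return c @ D

# The A_2 simple roots from the earlier sketch, as a basis of the plane.
simple = [[1.0, 0.0], [-0.5, np.sqrt(3) / 2]]
gamma = positive_gamma(simple)
print([float(gamma @ np.array(d)) for d in simple])  # both entries positive
```

For the $A_2$ simple roots this should print two positive numbers, and feeding the resulting $\gamma$ back into the brute-force sketch near the top should recover $\Delta(\gamma)=\{\alpha,\beta\}$, tying back to the final step of the proof.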
Your final summation doesn’t actually give you a vector orthogonal to the remaining basis vectors, unless everything is already orthogonal. I think you want a Gram-Schmidt process here, applied independently to each basis vector.
Also, out of curiosity, do you have any plans to draw out any of the polytopal or crystallographic connections to root systems?
You’re right, Gilbert. But in the end the resulting vector $\gamma$ still has the properties we want.
As for applications, I’m just looking at classification for now. I may return to applications at some future point.