## The Existence of Bases for Root Systems

We’ve defined what a base for a root system is, but we haven’t provided any evidence yet that they even exist. Today we’ll not only see that every root system has a base, but we’ll show how *all* possible bases arise. This will be sort of a long and dense one.

First of all, we observe that any hyperplane has measure zero, and so any finite collection of them will too. Thus the collection of all the hyperplanes perpendicular to the vectors $\alpha\in\Phi$ cannot fill up all of $V$. We call vectors in one of these hyperplanes “singular”, and vectors in none of them “regular”.
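To make this concrete, here is a small numeric sketch using the A2 root system in the plane. The helper `is_regular` is an ad hoc name, not a standard API; it just tests whether a vector avoids every hyperplane perpendicular to a root.

```python
import numpy as np

# The six roots of the A2 root system in the plane
alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
roots = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]

def is_regular(gamma, roots, tol=1e-9):
    """True if gamma has nonzero inner product with every root,
    i.e. it lies on none of the perpendicular hyperplanes."""
    return all(abs(np.dot(gamma, a)) > tol for a in roots)

print(is_regular(np.array([2.0, 1.0]), roots))  # avoids all six hyperplanes
print(is_regular(np.array([0.0, 1.0]), roots))  # perpendicular to alpha: singular
```

The second vector is singular precisely because it pairs to zero with the root $\alpha$.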

When $\gamma\in V$ is regular, it divides $\Phi$ into two collections. A vector $\alpha$ is in $\Phi^+(\gamma)$ if $\alpha\in\Phi$ and $\langle\gamma,\alpha\rangle>0$, and we have a similar definition for $\Phi^-(\gamma)$. It should be clear that $\Phi^-(\gamma)=-\Phi^+(\gamma)$, and that every root is in one or the other; otherwise $\gamma$ would be in the hyperplane perpendicular to that root, and thus singular. For a regular $\gamma$, we say that $\alpha\in\Phi^+(\gamma)$ is “decomposable” if $\alpha=\beta_1+\beta_2$ for $\beta_1,\beta_2\in\Phi^+(\gamma)$. Otherwise, we say that $\alpha$ is “indecomposable”.

Now we can state our existence theorem. Given a regular $\gamma$, let $\Delta(\gamma)$ be the set of indecomposable roots in $\Phi^+(\gamma)$. Then $\Delta(\gamma)$ is a base of $\Phi$, and every base of $\Phi$ arises in this manner. We will prove this in a number of steps.
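Before diving into the proof, here is a sketch of the whole construction at work on the A2 root system; the names `positive`, `is_decomposable`, and `base` are my own, chosen only for this illustration.

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
roots = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]

gamma = np.array([2.0, 1.0])  # regular: nonzero inner product with every root

# Phi^+(gamma): the roots on the positive side of gamma
positive = [a for a in roots if np.dot(gamma, a) > 0]

def is_decomposable(a, positive):
    """a is decomposable if a = b1 + b2 for some b1, b2 in Phi^+(gamma)."""
    return any(np.allclose(a, b1 + b2) for b1 in positive for b2 in positive)

# Delta(gamma): the indecomposable positive roots
base = [a for a in positive if not is_decomposable(a, positive)]
print(len(positive), len(base))  # → 3 2
```

For this $\gamma$ the three positive roots are $\alpha$, $\alpha+\beta$, and $-\beta$, and the two indecomposable ones are $\alpha+\beta$ and $-\beta$ (since $\alpha$ is their sum), matching the theorem's prediction that a base of a rank-two system has two elements.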

First off, every root in $\Phi^+(\gamma)$ is a nonnegative integral linear combination of the vectors in $\Delta(\gamma)$. Otherwise there is some $\alpha\in\Phi^+(\gamma)$ that can’t be written like that, and we can choose $\alpha$ so that $\langle\gamma,\alpha\rangle$ is as small as possible. $\alpha$ itself can’t be indecomposable, so we must have $\alpha=\beta_1+\beta_2$ for some two vectors $\beta_1,\beta_2\in\Phi^+(\gamma)$, and so $\langle\gamma,\alpha\rangle=\langle\gamma,\beta_1\rangle+\langle\gamma,\beta_2\rangle$. Each of these two inner products is strictly positive, so each is strictly smaller than $\langle\gamma,\alpha\rangle$; to avoid contradicting the minimality of $\langle\gamma,\alpha\rangle$ we must be able to write each of $\beta_1$ and $\beta_2$ as a nonnegative integral linear combination of vectors in $\Delta(\gamma)$. But then we can write $\alpha=\beta_1+\beta_2$ in this form after all! The assertion follows.
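We can verify this step numerically in the A2 example (all names here are ad hoc): writing each positive root in the coordinates of the indecomposable roots, the coefficients come out nonnegative and integral.

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
roots = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]
gamma = np.array([2.0, 1.0])

positive = [a for a in roots if np.dot(gamma, a) > 0]
# The indecomposable roots for this gamma; alpha itself decomposes as their sum
base = [alpha + beta, -beta]

M = np.column_stack(base)  # columns are the indecomposable roots
for a in positive:
    coeffs = np.linalg.solve(M, a)
    assert np.all(coeffs > -1e-9), "coefficients should be nonnegative"
    assert np.allclose(coeffs, np.round(coeffs)), "and integral"
print("every positive root is a nonnegative integral combination")
```

For instance $\alpha$ itself gets coefficients $(1,1)$: it is the sum of the two indecomposable roots.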

Second, if $\alpha$ and $\beta$ are distinct vectors in $\Delta(\gamma)$ then $\langle\alpha,\beta\rangle\leq0$. Indeed, by our lemma if $\langle\alpha,\beta\rangle>0$ then $\alpha-\beta$ is a root. And so either $\alpha-\beta$ or $\beta-\alpha$ lies in $\Phi^+(\gamma)$. In the first case, we can write $\alpha=\beta+(\alpha-\beta)$, so $\alpha$ is decomposable. In the second case, we can similarly write $\beta=\alpha+(\beta-\alpha)$, so $\beta$ is decomposable. Either way we have a contradiction, and the assertion follows.
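In the A2 example this is easy to see directly: the two indecomposable roots for $\gamma=(2,1)$ pair to a negative number, so they make an obtuse angle.

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
# The two indecomposable roots for gamma = (2, 1) in the A2 system
base = [alpha + beta, -beta]

pairing = np.dot(base[0], base[1])
print(pairing <= 0)  # → True: the pair makes an obtuse angle
```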

Next, $\Delta(\gamma)$ is linearly independent. If we have a linear combination

$$\sum\limits_{\delta\in\Delta(\gamma)}c_\delta\delta=0$$

then we can separate out the vectors $\alpha$ for which the coefficient $c_\alpha>0$ and those $\beta$ for which $c_\beta<0$, and write

$$\sum\limits_\alpha p_\alpha\alpha=\sum\limits_\beta q_\beta\beta$$

with all coefficients $p_\alpha=c_\alpha$ and $q_\beta=-c_\beta$ positive. Call this common sum $\epsilon$ and calculate

$$\langle\epsilon,\epsilon\rangle=\left\langle\sum\limits_\alpha p_\alpha\alpha,\sum\limits_\beta q_\beta\beta\right\rangle=\sum\limits_{\alpha,\beta}p_\alpha q_\beta\langle\alpha,\beta\rangle$$

Since each $\langle\alpha,\beta\rangle\leq0$, this whole sum must be nonpositive, which can only happen if $\epsilon=0$. But then

$$0=\langle\gamma,\epsilon\rangle=\sum\limits_\alpha p_\alpha\langle\gamma,\alpha\rangle$$

which forces all the $p_\alpha=0$, since each $\langle\gamma,\alpha\rangle>0$. Similarly, all the $q_\beta=0$, and thus the original linear combination must have been trivial. Thus $\Delta(\gamma)$ is linearly independent.
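As a quick sanity check in the A2 example, the indecomposable roots assemble into a matrix of full column rank:

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
base = [alpha + beta, -beta]  # the indecomposable roots for gamma = (2, 1)

M = np.column_stack(base)
print(np.linalg.matrix_rank(M) == len(base))  # → True: linearly independent
```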

Now we can show that $\Delta(\gamma)$ is a base. Every root in $\Phi^+(\gamma)$ is indeed a nonnegative integral linear combination of the vectors in $\Delta(\gamma)$. Since $\Phi^-(\gamma)=-\Phi^+(\gamma)$, every root in this set is a non*positive* integral linear combination of the vectors in $\Delta(\gamma)$. And every root in $\Phi$ is in one or the other of these sets. Also, since $\Phi$ spans $V$ we find that $\Delta(\gamma)$ spans $V$ as well. But since it’s linearly independent, it must be a basis. And so it satisfies both of the criteria to be a base.

Finally, every base $\Delta$ is of the form $\Delta(\gamma)$ for some regular $\gamma$. Indeed, we just have to find some $\gamma$ for which $\langle\gamma,\delta\rangle>0$ for each $\delta\in\Delta$. Then since any $\alpha\in\Phi$ is an integral linear combination of $\Delta$ with coefficients all of the same sign, we can verify that $\langle\gamma,\alpha\rangle\neq0$ for all $\alpha\in\Phi$, proving that $\gamma$ is regular, and that $\Phi^+\subseteq\Phi^+(\gamma)$ and $\Phi^-\subseteq\Phi^-(\gamma)$. Then the vectors in $\Delta$ are clearly indecomposable, showing that $\Delta\subseteq\Delta(\gamma)$. But these sets contain the same number of elements since they’re both bases of $V$, and so $\Delta=\Delta(\gamma)$.
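We can close the loop numerically in the A2 example: start from the standard base, pick a $\gamma$ pairing positively with both of its vectors, recompute the indecomposable positive roots, and check that we recover the base we started with. (All helper names are ad hoc.)

```python
import numpy as np

alpha = np.array([1.0, 0.0])
beta = np.array([-0.5, np.sqrt(3) / 2])
roots = [alpha, beta, alpha + beta, -alpha, -beta, -(alpha + beta)]

delta = [alpha, beta]                # the standard base for A2
gamma = np.array([1.0, np.sqrt(3)])  # pairs positively with alpha and beta

positive = [a for a in roots if np.dot(gamma, a) > 0]
indecomposable = [a for a in positive
                  if not any(np.allclose(a, b1 + b2)
                             for b1 in positive for b2 in positive)]

recovered = {tuple(np.round(a, 6)) for a in indecomposable}
print(recovered == {tuple(np.round(a, 6)) for a in delta})  # → True
```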

The only loose end is showing that such a $\gamma$ exists. I’ll actually go one better and show that for *any* basis $\{\epsilon_1,\dots,\epsilon_n\}$ of $V$ the intersection of the “half-spaces” $\{\gamma\in V\mid\langle\gamma,\epsilon_i\rangle>0\}$ is nonempty. To see this, define

$$\delta_i=\epsilon_i-\pi_i(\epsilon_i)$$

where $\pi_i$ is the orthogonal projection onto the subspace spanned by the other basis vectors. This is what’s left of the basis vector $\epsilon_i$ after subtracting off its projection onto the span of the other basis vectors $\epsilon_j$ with $j\neq i$, leaving its component along the line perpendicular to all of them; note that $\delta_i\neq0$ since the $\epsilon_i$ are linearly independent. Then consider the vector $\gamma=\sum_i r_i\delta_i$, where each $r_i>0$. It’s a straightforward computation to show that $\langle\gamma,\epsilon_i\rangle=r_i\langle\delta_i,\delta_i\rangle>0$, and so $\gamma$ is just such a vector as we’re claiming exists.
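This final construction is easy to sketch in code. Here is a hedged implementation (the function name `positive_vector` is my own) that computes each perpendicular component with a least-squares projection and takes all the positive weights equal to one:

```python
import numpy as np

def positive_vector(basis):
    """Return gamma with <gamma, eps_i> > 0 for every basis vector eps_i."""
    E = np.column_stack(basis)
    gamma = np.zeros(E.shape[0])
    for i in range(len(basis)):
        others = np.delete(E, i, axis=1)
        # Projection of eps_i onto the span of the other basis vectors
        proj = others @ np.linalg.lstsq(others, E[:, i], rcond=None)[0]
        delta_i = E[:, i] - proj  # perpendicular to all the others
        gamma += delta_i          # take each weight r_i = 1
    return gamma

# Try it on a (non-orthogonal) basis of the plane
basis = [np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])]
gamma = positive_vector(basis)
print(all(np.dot(gamma, e) > 0 for e in basis))  # → True
```

The key point mirrors the proof: each `delta_i` pairs to zero with every other basis vector and to a strictly positive number with its own, so any positive combination of them lands in the desired intersection of half-spaces.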