The Unapologetic Mathematician

Mathematics for the interested outsider

The Existence of Bases for Root Systems

We’ve defined what a base for a root system is, but we haven’t provided any evidence yet that they even exist. Today we’ll not only see that every root system has a base, but we’ll show how all possible bases arise. This will be sort of a long and dense one.

First of all, we observe that any hyperplane has measure zero, and so any finite collection of them does too. Since \Phi is finite, the collection of all the hyperplanes P_\alpha perpendicular to vectors \alpha\in\Phi thus cannot fill up all of V. We call vectors lying in one of these hyperplanes “singular”, and vectors lying in none of them “regular”.

When \gamma is regular, it divides \Phi into two collections. A vector \alpha is in \Phi^+(\gamma) if \alpha\in\Phi and \langle\alpha,\gamma\rangle>0, and we have a similar definition for \Phi^-(\gamma). It should be clear that \Phi^-(\gamma)=-\Phi^+(\gamma), and that every vector \alpha\in\Phi is in one or the other; otherwise \gamma would be in P_\alpha. For a regular \gamma, we say that \alpha\in\Phi^+(\gamma) is “decomposable” if \alpha=\beta_1+\beta_2 for \beta_1,\beta_2\in\Phi^+(\gamma). Otherwise, we say that \alpha is “indecomposable”.
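
To make these definitions concrete, here is a quick computational sketch using the B_2 root system in the plane and a regular vector \gamma=(2,1), both chosen purely for illustration; it lists \Phi^+(\gamma) and picks out the indecomposable roots.

# The B_2 root system in the plane, and a regular vector gamma (both chosen
# just for this illustration).
Phi = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
gamma = (2, 1)

def ip(u, v):
    return u[0] * v[0] + u[1] * v[1]

# gamma is regular: it lies on none of the hyperplanes P_alpha
assert all(ip(gamma, a) != 0 for a in Phi)

# Phi^+(gamma): roots making a positive inner product with gamma
Phi_plus = [a for a in Phi if ip(gamma, a) > 0]
plus_set = set(Phi_plus)

def decomposable(a):
    # a is decomposable if a = b1 + b2 with b1, b2 both in Phi^+(gamma)
    return any((a[0] - b[0], a[1] - b[1]) in plus_set for b in Phi_plus)

Delta = [a for a in Phi_plus if not decomposable(a)]
print("Phi^+(gamma):", Phi_plus)   # [(1, 0), (0, 1), (1, 1), (1, -1)]
print("Delta(gamma):", Delta)      # [(0, 1), (1, -1)]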

Now we can state our existence theorem. Given a regular \gamma, let \Delta(\gamma) be the set of indecomposable roots in \Phi^+(\gamma). Then \Delta(\gamma) is a base of \Phi, and every base of \Phi arises in this manner. We will prove this in a number of steps.

First off, every vector in \Phi^+(\gamma) is a nonnegative integral linear combination of the vectors in \Delta(\gamma). Otherwise there is some \alpha\in\Phi^+(\gamma) that can’t be written like that, and we can choose \alpha so that \langle\gamma,\alpha\rangle is as small as possible. \alpha itself can’t be indecomposable, since then it would lie in \Delta(\gamma) and be such a combination trivially. So we must have \alpha=\beta_1+\beta_2 for some two vectors \beta_1,\beta_2\in\Phi^+(\gamma), and so \langle\gamma,\alpha\rangle=\langle\gamma,\beta_1\rangle+\langle\gamma,\beta_2\rangle. Each of these two inner products is strictly positive, so each is strictly smaller than \langle\gamma,\alpha\rangle. By the minimality of \langle\gamma,\alpha\rangle, each of \beta_1 and \beta_2 can be written as a nonnegative integral linear combination of vectors in \Delta(\gamma). But then, adding these expressions, we can write \alpha in this form after all! The assertion follows.

Second, if \alpha and \beta are distinct vectors in \Delta(\gamma) then \langle\alpha,\beta\rangle\leq0. Indeed, note that \alpha\neq\pm\beta, since \alpha and \beta are distinct and both lie in \Phi^+(\gamma). So by our lemma, if \langle\alpha,\beta\rangle>0 then \alpha-\beta\in\Phi, and so either \alpha-\beta or \beta-\alpha lies in \Phi^+(\gamma). In the first case, we can write \alpha=\beta+(\alpha-\beta), so \alpha is decomposable. In the second case, we can similarly write \beta=\alpha+(\beta-\alpha), so \beta is decomposable. Either way we have a contradiction, and the assertion follows.

Next, \Delta(\gamma) is linearly independent. If we have a linear combination

\displaystyle\sum\limits_{\alpha\in\Delta(\gamma)}r_\alpha\alpha=0

then we can separate out the vectors \alpha for which the coefficient r_\alpha>0 and those \beta for which r_\beta<0, and write

\displaystyle\sum\limits_\alpha s_\alpha\alpha=\sum\limits_\beta t_\beta\beta

with all coefficients positive. Call this common sum \epsilon and calculate

\displaystyle\langle\epsilon,\epsilon\rangle=\sum\limits_{\alpha,\beta}s_\alpha t_\beta\langle\alpha,\beta\rangle

Since each \langle\alpha,\beta\rangle\leq0, this whole sum must be nonpositive; since \langle\epsilon,\epsilon\rangle\geq0 as well, this can only happen if \epsilon=0. But then

\displaystyle0=\langle\gamma,\epsilon\rangle=\sum\limits_\alpha s_\alpha\langle\gamma,\alpha\rangle

which forces all the s_\alpha=0. Similarly, all the t_\beta=0, and thus the original linear combination must have been trivial. Thus \Delta(\gamma) is linearly independent.

Now we can show that \Delta(\gamma) is a base. Every vector in \Phi^+(\gamma) is indeed a nonnegative integral linear combination of the vectors in \Delta(\gamma). Since \Phi^-(\gamma)=-\Phi^+(\gamma), every vector in this set is a nonpositive integral linear combination of the vectors in \Delta(\gamma). And every vector in \Phi is in one or the other of these sets. Also, since \Phi spans V we find that \Delta(\gamma) spans V as well. But since it’s linearly independent, it must be a basis. And so it satisfies both of the criteria to be a base.
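
As a sanity check on this, here is a short continuation of the B_2 sketch from earlier (again purely illustrative, with \Delta(\gamma)=\{(0,1),(1,-1)\} hard-coded from that example): it verifies that every root in \Phi is an integral combination of \Delta(\gamma) with coefficients that are all nonnegative or all nonpositive.

# Continuing the B_2 illustration: Delta(gamma) as found above, hard-coded.
from fractions import Fraction

Phi = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
Delta = [(0, 1), (1, -1)]

def coords(v, b1, b2):
    # Solve c1*b1 + c2*b2 = v exactly, by Cramer's rule (rank-2 case).
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

for a in Phi:
    c = coords(a, *Delta)
    assert all(x.denominator == 1 for x in c)                 # integral coefficients
    assert all(x >= 0 for x in c) or all(x <= 0 for x in c)   # all one sign
    print(a, "=", c[0], "* (0,1) +", c[1], "* (1,-1)")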

Finally, every base \Delta is of the form \Delta(\gamma) for some regular \gamma. Indeed, we just have to find some \gamma for which \langle\gamma,\alpha\rangle>0 for each \alpha\in\Delta. Since any \beta\in\Phi is an integral linear combination of the \alpha\in\Delta with all coefficients nonnegative or all nonpositive (and not all zero), we then have \langle\gamma,\beta\rangle\neq0 for all \beta\in\Phi, proving that \gamma is regular, and also that \Phi^+=\Phi^+(\gamma). The vectors \alpha\in\Delta are then indecomposable: if we could write \alpha=\beta_1+\beta_2 with \beta_1,\beta_2\in\Phi^+(\gamma)=\Phi^+, then expanding each \beta_i in terms of \Delta would express \alpha as a nonnegative integral combination of \Delta with coefficients summing to at least 2, while \alpha is also such a combination with coefficients summing to 1, contradicting the linear independence of \Delta. This shows that \Delta\subseteq\Delta(\gamma). But these sets contain the same number of elements, since they’re both bases of V, and so \Delta=\Delta(\gamma).

The only loose end is showing that such a \gamma exists. I’ll actually go one better and show that for any basis \{\eta_i\}_{i=1}^{\dim(V)} of V, the intersection of the “half-spaces” \{\gamma\vert\langle\gamma,\eta_i\rangle>0\} is nonempty. To see this, define

\displaystyle\delta_i=\eta_i-\mathrm{proj}_{W_i}(\eta_i),\qquad W_i=\mathrm{span}\{\eta_j\vert j\neq i\}

This is what’s left of the basis vector \eta_i after subtracting off its orthogonal projection onto the span W_i of all the other basis vectors (this is just the Gram-Schmidt process with \eta_i taken last), leaving its component perpendicular to all of them. Since \eta_i\notin W_i, we have \delta_i\neq0, and by construction \langle\delta_i,\eta_j\rangle=0 for j\neq i, while \langle\delta_i,\eta_i\rangle=\langle\delta_i,\delta_i\rangle>0. Then consider the vector \gamma=r^i\delta_i where each r^i>0. It’s a straightforward computation to show that \langle\gamma,\eta_k\rangle=r^k\langle\delta_k,\eta_k\rangle>0, and so \gamma is just such a vector as we’re claiming exists.
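
If you want to see this construction in action, here is a quick numerical sketch (the basis and the positive coefficients below are arbitrary choices for illustration, and it assumes numpy is available): each \delta_i is obtained by projecting \eta_i off the span of the other basis vectors, and the resulting \gamma pairs strictly positively with every \eta_k.

# A basis of R^3 that is deliberately not orthogonal, and some positive
# coefficients r; both are arbitrary choices for this illustration.
import numpy as np

eta = np.array([[2.0, 1.0, 0.0],
                [1.0, 2.0, 1.0],
                [0.0, 1.0, 2.0]])   # rows are the basis vectors eta_i
r = np.array([1.0, 2.0, 0.5])       # any strictly positive coefficients

def perp_component(i, basis):
    # delta_i: subtract from basis[i] its projection onto the span of the other rows
    others = np.delete(basis, i, axis=0)
    Q, _ = np.linalg.qr(others.T)    # orthonormal basis for that span
    v = basis[i]
    return v - Q @ (Q.T @ v)

delta = np.array([perp_component(i, eta) for i in range(len(eta))])
gamma = r @ delta                    # gamma = sum_i r^i delta_i

print(eta @ gamma)                   # every <eta_k, gamma> is strictly positive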


February 2, 2010 - Posted by | Geometry, Root Systems

5 Comments »

  1. Your final summation doesn’t actually give you a vector orthogonal to the remaining basis vectors, unless everything is already orthogonal. I think you want a Gram-Schmidt process here, applied independently to each basis vector.

    Also, out of curiosity, do you have any plans to draw out any of the polytopal or crystallographic connections to root systems?

    Comment by Gilbert Bernstein | February 3, 2010 | Reply

  2. You’re right, Gilbert. But in the end the resulting vector \gamma still has the properties we want.

    As for applications, I’m just looking at classification for now. I may return to applications at some future point.

    Comment by John Armstrong | February 3, 2010 | Reply

  3. Pingback by Weyl Chambers « The Unapologetic Mathematician | February 3, 2010 | Reply

  4. Pingback by Some Lemmas on Simple Roots « The Unapologetic Mathematician | February 4, 2010 | Reply

  5. Pingback by The Action of the Weyl Group on Weyl Chambers « The Unapologetic Mathematician | February 5, 2010 | Reply

