The Unapologetic Mathematician

Mathematics for the interested outsider

Properties of Irreducible Root Systems III

Today we conclude our series of lemmas on irreducible root systems.

If \Phi is irreducible, then roots in \Phi have at most two different lengths. Here I mean actual geometric lengths, as measured by the inner product, not the “length” of a Weyl group element. Further, any two roots of the same length can be sent to each other by the action of the Weyl group.

Let \alpha and \beta be two roots. We just saw that the \mathcal{W}-orbit of \alpha spans V, and so not all the \sigma(\alpha) can be perpendicular to \beta. From what we discovered about pairs of roots, we know that if \langle\alpha,\beta\rangle\neq0, then the possible ratios of squared lengths \frac{\lVert\beta\rVert^2}{\lVert\alpha\rVert^2} are limited. Indeed, this ratio must be one of \frac{1}{3}, \frac{1}{2}, 1, 2, or 3.

If there are three distinct root-lengths, let \alpha, \beta, and \gamma be samples of each length in increasing order. The ratios \frac{\lVert\beta\rVert^2}{\lVert\alpha\rVert^2} and \frac{\lVert\gamma\rVert^2}{\lVert\alpha\rVert^2} are then distinct, both bigger than 1, and both on our list, so we must have \frac{\lVert\beta\rVert^2}{\lVert\alpha\rVert^2}=2 and \frac{\lVert\gamma\rVert^2}{\lVert\alpha\rVert^2}=3. But then \frac{\lVert\gamma\rVert^2}{\lVert\beta\rVert^2}=\frac{3}{2}, which clearly violates our conditions. Thus there can be at most two root lengths, as asserted. We call those of the smaller length “short roots”, and the others “long roots”. If there is only one length, we call all the roots long, by convention.

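If you like, you can check the arithmetic behind this with a few lines of Python. This is a throwaway verification, not part of the argument:

    from fractions import Fraction as F

    # The possible ratios of squared lengths for non-orthogonal roots.
    allowed = {F(1, 3), F(1, 2), F(1), F(2), F(3)}

    # Three distinct lengths would give two distinct ratios bigger than 1
    # whose quotient is again an allowed ratio. No such pair exists:
    for r1 in allowed:
        for r2 in allowed:
            if 1 < r1 < r2:
                print(r1, r2, "quotient", r2 / r1, "allowed?", r2 / r1 in allowed)
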
Now let \alpha and \beta have the same length. By using the Weyl group as above, we may replace \alpha by some \sigma(\alpha) (which has the same length) and so assume that the two roots are non-orthogonal. We may also assume that they’re not proportional: if \beta=\alpha we’re already done, and if \beta=-\alpha then \sigma_\alpha(\alpha)=-\alpha=\beta does the job. By the same table of possibilities for pairs of roots as before, we conclude that \alpha\rtimes\beta=\beta\rtimes\alpha=\pm1. We can replace one root by its negative, if need be, and assume that \alpha\rtimes\beta=1. Then we may calculate:

\displaystyle[\sigma_\alpha\sigma_\beta\sigma_\alpha](\beta)=[\sigma_\alpha\sigma_\beta](\beta-\alpha)=\sigma_\alpha(-\beta-\alpha+\beta)=\alpha.

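To see this calculation land in coordinates, here’s a quick Python check. It’s just a sketch: I take two equal-length roots at an angle of sixty degrees, so that \alpha\rtimes\beta=1, and use the standard reflection formula \sigma_v(x)=x-2\langle x,v\rangle/\langle v,v\rangle v.

    import numpy as np

    def reflect(v, x):
        # sigma_v(x) = x - 2<x,v>/<v,v> v
        return x - 2 * np.dot(x, v) / np.dot(v, v) * v

    # Two equal-length roots at sixty degrees (a pair from the A2 system),
    # chosen just as a concrete example.
    alpha = np.array([1.0, 0.0])
    beta = np.array([0.5, np.sqrt(3) / 2])

    # [sigma_alpha sigma_beta sigma_alpha](beta), rightmost factor applied first.
    image = reflect(alpha, reflect(beta, reflect(alpha, beta)))
    print(np.allclose(image, alpha))  # True
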
We may note, in passing, that the unique maximal root \beta is long. Indeed, it suffices to show that \langle\beta,\beta\rangle\geq\langle\alpha,\alpha\rangle for all \alpha\in\Phi. We may, without loss of generality, assume \alpha is in the closed fundamental domain \overline{\mathfrak{C}(\Delta)}, and that \alpha\neq\beta (otherwise there is nothing to prove). Since \beta-\alpha\succ0, we must have \langle\gamma,\beta-\alpha\rangle\geq0 for any \gamma\in\overline{\mathfrak{C}(\Delta)}. In particular, we have \langle\beta,\beta-\alpha\rangle\geq0 and \langle\alpha,\beta-\alpha\rangle\geq0. Putting these together, we conclude

\displaystyle\langle\beta,\beta\rangle\geq\langle\alpha,\beta\rangle\geq\langle\alpha,\alpha\rangle

and so \beta must be a long root.

February 12, 2010 | Geometry, Root Systems

Properties of Irreducible Root Systems II

We continue with our series of lemmas on irreducible root systems.

If \Phi is irreducible, then the Weyl group \mathcal{W} acts irreducibly on V. That is, we cannot decompose the representation of \mathcal{W} on V as the direct sum of two other representations. Even more explicitly, we cannot write V=W\oplus W' for two nontrivial subspaces W and W' with each one of these subspaces invariant under \mathcal{W}. If W is an invariant subspace, then the orthogonal complement W' will also be invariant. This is a basic fact about the representation theory of finite groups, which I will simply quote for now, since I haven’t covered that in detail. Thus my assertion is that if W is an invariant subspace under \mathcal{W}, then it is either trivial or the whole of V.

For any root \alpha\in\Phi, either \alpha\in W or W\subseteq P_\alpha. Indeed, since \sigma_\alpha\in\mathcal{W}, we must have \sigma_\alpha(W)=W. As a reflection, \sigma_\alpha breaks V into a one-dimensional eigenspace with eigenvalue -1, spanned by \alpha, and the complementary eigenspace P_\alpha with eigenvalue 1. Writing any w\in W as w=p+c\alpha with p\in P_\alpha, we see that w-\sigma_\alpha(w)=2c\alpha lies in W. So if \alpha\notin W, then every w\in W must have c=0, and in this case W\subseteq P_\alpha.

So then if \alpha isn’t in W then it must be in the orthogonal complement W'. Thus every root is either in W or in W', and this gives us an orthogonal decomposition of the root system. But since \Phi is irreducible, one or the other of these collections must be empty, and thus W must be either trivial or the whole of V.

Even better, the \mathcal{W}-orbit of any root \alpha\in\Phi spans V. Indeed, the subspace spanned by the roots of the form \sigma(\alpha) is invariant under the action of \mathcal{W}, and since the action on V is irreducible this subspace must be either trivial (clearly impossible, since it contains \alpha) or the whole of V.

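Here’s a concrete illustration of that last fact, as a hedged sketch: I take the system with roots \pm(1,0), \pm(0,1), \pm(1,1), \pm(1,-1) (usually called B_2; my choice of example, not part of the argument) and verify that the orbit of a single root spans the plane.

    import numpy as np

    def reflect(v, x):
        return x - 2 * np.dot(x, v) / np.dot(v, v) * v

    # A base of B2, chosen just as a concrete example.
    simples = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]
    alpha = np.array([1.0, -1.0])

    # Build the W-orbit of alpha by repeatedly applying simple reflections.
    orbit = {tuple(alpha)}
    for _ in range(8):  # more than enough iterations for this small group
        orbit |= {tuple(reflect(s, np.array(x))) for s in simples for x in orbit}

    # The orbit {(1,-1), (-1,1), (1,1), (-1,-1)} spans all of V.
    print(np.linalg.matrix_rank(np.array(sorted(orbit))))  # 2 = dim V
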
February 11, 2010 | Geometry, Root Systems

Properties of Irreducible Root Systems I

Now we can turn towards the project of classifying irreducible root systems up to isomorphism. And we start with some properties of irreducible root systems.

First off, remember a root system \Phi is reducible if we can write it as the disjoint union of two collections of roots \Phi=\Psi\uplus\Psi' so that each root in \Psi is perpendicular to each one in \Psi'. I assert that, for any base \Delta\subseteq\Phi, \Phi is reducible if and only if \Delta can itself be broken into two collections \Delta=\Gamma\uplus\Gamma' in just the same way. One direction is easy: if we can decompose \Phi, then the roots in \Delta are either in \Psi or \Psi' and we can define \Gamma=\Delta\cap\Psi and \Gamma'=\Delta\cap\Psi'.

On the other hand, if we write \Delta=\Gamma\uplus\Gamma' with each simple root in \Gamma perpendicular to each one in \Gamma', then we will find a similar decomposition of \Phi. But we know from our study of the Weyl group that every root in \Phi can be sent by the Weyl group to some simple root in \Delta. So we define \Psi to be the collection of roots whose orbit includes a point of \Gamma, and \Psi' to be the collection of roots whose orbit includes a point of \Gamma'. So a vector in \Psi is of the form \sigma(\alpha) for some Weyl group element \sigma\in\mathcal{W} and some simple root \alpha\in\Gamma. The Weyl group element \sigma can be written as a sequence of simple reflections. A simple reflection corresponding to a root in \Gamma adds some multiple of that root to the vector, and thus leaves the subspace spanned by \Gamma invariant; while a simple reflection corresponding to a root in \Gamma' leaves the subspace spanned by \Gamma fixed point-by-point. Thus any root in \Psi must lie in the subspace spanned by \Gamma, and similarly any root in \Psi' must lie in the subspace spanned by \Gamma'. Since these two subspaces are perpendicular to each other, every root in \Psi is perpendicular to every root in \Psi', and so \Phi=\Psi\uplus\Psi' is exactly the sort of decomposition we’re looking for.

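In practice this gives a simple test for irreducibility: check whether the “non-orthogonality graph” on a base is connected. A small Python sketch (the function name and the two sample bases are my own choices, not from the argument above):

    import numpy as np

    def base_is_connected(base):
        # Depth-first search on the graph whose vertices are the simple
        # roots, with an edge wherever two of them are non-orthogonal.
        seen, stack = {0}, [0]
        while stack:
            i = stack.pop()
            for j in range(len(base)):
                if j not in seen and not np.isclose(np.dot(base[i], base[j]), 0):
                    seen.add(j)
                    stack.append(j)
        return len(seen) == len(base)

    # Example bases: B2 (irreducible) versus A1 x A1 (reducible).
    print(base_is_connected([np.array([1.0, -1.0]), np.array([0.0, 1.0])]))  # True
    print(base_is_connected([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))   # False
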
If \Phi is irreducible, with base \Delta, then there is a unique maximal root \beta\in\Phi relative to the partial ordering \prec on roots. In particular, the height of \beta is greater than the height of any other root in \Phi, \langle\beta,\alpha\rangle\geq0 for all simple roots \alpha\in\Delta, and in the unique expression

\displaystyle\beta=\sum\limits_{\alpha\in\Delta}k_\alpha\alpha

all of the coefficients k_\alpha are strictly positive.

First of all, if \beta is maximal then it’s clearly positive, and so each k_\alpha is either positive or zero. Let \Gamma be the collection of simple roots \alpha so that k_\alpha>0, and \Gamma' be the collection of simple roots \alpha so that k_\alpha=0. Then \Delta=\Gamma\uplus\Gamma' is a partition of the base. From here we’ll assume that \Gamma' is nonempty and derive a contradiction.

If \alpha'\in\Gamma', then \langle\alpha,\alpha'\rangle\leq0 for any other simple root \alpha\in\Delta. From this, we can calculate

\displaystyle\langle\beta,\alpha'\rangle=\left\langle\sum\limits_{\substack{\alpha\in\Delta\\\alpha\neq\alpha'}}k_\alpha\alpha,\alpha'\right\rangle=\sum\limits_{\substack{\alpha\in\Delta\\\alpha\neq\alpha'}}k_\alpha\langle\alpha,\alpha'\rangle\leq0

Since \Phi is irreducible, there must be at least one \alpha\in\Gamma and \alpha'\in\Gamma' so that \langle\alpha,\alpha'\rangle<0, and so we must have \langle\beta,\alpha'\rangle<0 for this \alpha'. This proves that \beta+\alpha' must also be a root, which contradicts the maximality of \beta. And so we conclude that \Gamma' is empty, and all the coefficients k_\alpha>0. In passing, we can use the same fact to show that \langle\beta,\alpha\rangle\geq0 for all \alpha\in\Delta, or else \beta wouldn’t be maximal.

This same argument applies to any other maximal root \beta', giving \langle\alpha,\beta'\rangle\geq0 for all \alpha\in\Delta, with the inequality strict for at least one \alpha. We calculate

\displaystyle\langle\beta,\beta'\rangle=\sum\limits_{\alpha\in\Delta}k_\alpha\langle\alpha,\beta'\rangle>0

which tells us that \beta-\beta' is a root unless \beta=\beta'. But if \beta-\beta' is a root, then either \beta\prec\beta' or \beta\succ\beta', contradicting the assumption that both are maximal. Thus \beta must be unique.

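To see all of this in a small example, here are the positive roots of the B_2 system, written in the base \Delta=\{\alpha_1,\alpha_2\} with \alpha_1=(1,-1) and \alpha_2=(0,1). The maximal root is \alpha_1+2\alpha_2, and a few lines of Python confirm the properties above (a sketch, with my own choice of coordinates):

    import numpy as np

    # B2 chosen just as a concrete example.
    base = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]  # alpha1, alpha2
    # Positive roots, keyed by their coefficients (k_1, k_2) in the base.
    positives = {(1, 0): np.array([1.0, -1.0]),
                 (0, 1): np.array([0.0, 1.0]),
                 (1, 1): np.array([1.0, 0.0]),
                 (1, 2): np.array([1.0, 1.0])}

    beta = max(positives, key=sum)  # maximal root, found by height: (1, 2)
    print(beta)                     # both coefficients strictly positive
    # <beta, alpha> >= 0 for every simple root alpha:
    print([float(np.dot(positives[beta], a)) for a in base])  # [0.0, 1.0]
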
February 10, 2010 | Geometry, Root Systems

The Fundamental Weyl Chamber

When we first discussed Weyl chambers, we defined the fundamental Weyl chamber \mathfrak{C}(\Delta) associated to a base \Delta as the collection of all the vectors \lambda\in V satisfying \langle\lambda,\alpha\rangle>0 for all simple roots \alpha\in\Delta. Today, I want to discuss the closure \overline{\mathfrak{C}(\Delta)} of this set — allowing \langle\lambda,\alpha\rangle=0 — and show that it’s a fundamental domain for the action of the Weyl group \mathcal{W}.

To be more explicit, saying that the fundamental Weyl chamber \overline{\mathfrak{C}(\Delta)} is a fundamental domain means that each vector \mu\in V is in the orbit of exactly one vector in \overline{\mathfrak{C}(\Delta)}. That is, there is a unique \lambda\in\overline{\mathfrak{C}(\Delta)} so that \lambda=\sigma(\mu) for some \sigma\in\mathcal{W}.

First, to existence. Given a vector \mu\in V, we consider its orbit — the collection of all the \sigma(\mu) as \sigma runs over all elements of \mathcal{W}. We have to find a vector in this orbit which lies in the fundamental Weyl chamber \overline{\mathfrak{C}(\Delta)}. To do this, we’ll temporarily extend our partial order to all of V by saying that \mu\prec\nu if \nu-\mu is a nonnegative \mathbb{R}-linear combination of simple roots. Relative to this order, pick a maximal vector \lambda=\sigma(\mu); that is, one so that for any \nu=\tau(\mu) we never have \nu\succ\lambda. There may well be more than one such maximal vector, given what we’ve said so far, but there will always be at least one.

I say that this \lambda=\sigma(\mu) is actually in the fundamental Weyl chamber. Indeed, if it weren’t then there would be some simple root \alpha\in\Delta so that \langle\lambda,\alpha\rangle<0. But then we could look at the vector \sigma_\alpha(\lambda)=[\sigma_\alpha\sigma](\mu)\in\mathcal{W}\mu. We calculate

\displaystyle\sigma_\alpha(\lambda)-\lambda=-2\frac{\langle\lambda,\alpha\rangle}{\langle\alpha,\alpha\rangle}\alpha

which is a positive \mathbb{R}-linear combination of simple roots. Thus \sigma_\alpha(\lambda)\succ\lambda, which is impossible by assumption. In fact, this gives us a method for constructing a maximal vector in the orbit. Just start with \mu and form its inner product with all the simple roots. If we find one for which the inner product is negative, reflect the vector through the plane perpendicular to that simple root. Eventually, you’ll end up with a vector in the fundamental Weyl chamber!

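That recipe translates directly into code. Here’s a sketch of the procedure on the B_2 example again (my coordinates; termination is guaranteed by the \langle\mu,\delta\rangle argument above, at least in exact arithmetic):

    import numpy as np

    def reflect(v, x):
        return x - 2 * np.dot(x, v) / np.dot(v, v) * v

    def into_fundamental_chamber(mu, simples):
        # Reflect through the wall of any simple root that mu pairs
        # negatively with, until there is no such root left.
        while True:
            for a in simples:
                if np.dot(mu, a) < 0:
                    mu = reflect(a, mu)
                    break
            else:
                return mu  # now <mu, a> >= 0 for every simple root a

    # A base of B2, chosen just as a concrete example.
    simples = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]
    print(into_fundamental_chamber(np.array([-3.0, 1.0]), simples))  # [3. 1.]
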
Now for uniqueness: if there are two vectors \lambda_1 and \lambda_2 in the orbit \mathcal{W}\mu that lie within the fundamental Weyl chamber, then we must have \lambda_1=\sigma(\lambda_2) for some \sigma\in\mathcal{W}. What I’ll show is that if we have \lambda_1=\sigma(\lambda_2) for two vectors in the fundamental Weyl chamber, then \sigma must be the product of simple reflections which leave \lambda_2 fixed, and thus \lambda_1=\lambda_2.

We’ll prove this by induction on the length of the Weyl group element \sigma. If l(\sigma)=0, then \sigma is the identity and the statement is obvious. If l(\sigma)>0 then (by the result we proved last time) \sigma must send some positive root to a negative root. In particular, \sigma cannot send all simple roots to positive roots. So let’s say that \alpha\in\Delta is a simple root for which \sigma(\alpha)\prec0. Then we observe

\displaystyle0\geq\langle\lambda_1,\sigma(\alpha)\rangle=\langle\sigma^{-1}(\lambda_1),\alpha\rangle=\langle\lambda_2,\alpha\rangle\geq0

since \lambda_1 and \lambda_2 are both in the closed fundamental Weyl chamber. Thus it is forced that \langle\lambda_2,\alpha\rangle=0, that \sigma_\alpha(\lambda_2)=\lambda_2, and then that [\sigma\sigma_\alpha](\lambda_2)=\lambda_1. But \sigma\sigma_\alpha sends fewer positive roots to negative ones than \sigma does, so l(\sigma\sigma_\alpha)<l(\sigma) and we can invoke the inductive hypothesis to finish the job.

The upshot of all this is that we know what the space of orbits of \mathcal{W} looks like! It has one point for each vector \lambda\in\overline{\mathfrak{C}(\Delta)}. If \lambda is in the interior of this fundamental domain, then the orbit looks just like a copy of \mathcal{W}. On the other hand, if \lambda lies on one of the boundary hyperplanes the orbit looks like “half” of the Weyl group. That is, if \langle\lambda,\alpha\rangle=0 then \sigma(\lambda)=[\sigma\sigma_\alpha](\lambda), so both of the corresponding group elements “collapse” into one point in this orbit. As \lambda lies on more and more of the boundary hyperplanes, more and more of the orbit “folds up”, until finally at \lambda=0 we have an orbit consisting of exactly one point.

February 9, 2010 | Geometry, Root Systems

Lengths of Weyl Group Elements

With our theorem from last time about the Weyl group action, and the lemmas from earlier about simple roots and reflections, we can define a few notions that make discussing Weyl groups easier. Any Weyl group element \sigma\in\mathcal{W} can be written as a composition of simple reflections

\displaystyle\sigma=\sigma_{\alpha_1}\dots\sigma_{\alpha_t}

where all \alpha_k\in\Delta are simple roots for some choice of a base \Delta\subseteq\Phi. In general we can do this in many ways, and some will have larger values for t than others. But there must be some minimal number of simple reflections it takes to make \sigma — some smallest possible value of t. This number we call the “length” l(\sigma) of the Weyl group element \sigma relative to \Delta, and an expression that uses this minimal number of reflections is called “reduced”. By definition we set l(1)=0 for the identity element, since we can write it with no reflections at all.

Now, we also have another characterization of the length of a Weyl group element. Let n(\sigma) be the number of positive roots \alpha\in\Phi^+ for which \sigma(\alpha)\prec0 — the number of roots that \sigma moves from \Phi^+ to \Phi^-. I say that l(\sigma)=n(\sigma) for all \sigma\in\mathcal{W}, and I’ll proceed by induction on l(\sigma). Indeed, the base case is obvious, since the only element of \mathcal{W} with length zero is the identity, and it sends no positive roots to negative roots.

If this assertion is true for all \tau\in\mathcal{W} with l(\tau)<l(\sigma), then we write \sigma in a reduced form as \sigma_{\alpha_1}\dots\sigma_{\alpha_t} and set \alpha=\alpha_t. By one of our lemmas, we see that \sigma(\alpha)\prec0. By another of our lemmas we know that \sigma_\alpha merely permutes the positive roots other than \alpha, and so n(\sigma\sigma_\alpha)=n(\sigma)-1. On the other hand, l(\sigma\sigma_\alpha)=l(\sigma)-1<l(\sigma), and our inductive hypothesis allows us to conclude that l(\sigma\sigma_\alpha)=n(\sigma\sigma_\alpha), and thus that l(\sigma)=n(\sigma).

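Both l(\sigma) and n(\sigma) are easy to compute by brute force, so we can watch the equality hold across an entire small Weyl group. Here’s a hedged Python sketch for B_2, whose Weyl group has eight elements; the coordinates and the regular vector \gamma are my own choices:

    import numpy as np

    def refl_matrix(v):
        return np.eye(2) - 2 * np.outer(v, v) / np.dot(v, v)

    # B2 chosen just as a concrete example.
    simples = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]
    positives = [np.array(r) for r in [(1.0, -1.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]]
    gamma = np.array([2.0, 1.0])  # regular, in the fundamental chamber
    gens = [refl_matrix(a) for a in simples]

    def n(w):
        # positive roots that w sends to negative roots
        return sum(np.dot(w @ r, gamma) < 0 for r in positives)

    def key(w):
        return w.astype(int).tobytes()  # entries are exactly 0 and +-1 here

    # Breadth-first search over words in the simple reflections: the level
    # at which an element first appears is its length l(sigma).
    seen, frontier, ell, ok = {key(np.eye(2))}, [np.eye(2)], 0, True
    while frontier:
        ok = ok and all(n(w) == ell for w in frontier)
        nxt = []
        for w in frontier:
            for g in gens:
                m = g @ w
                if key(m) not in seen:
                    seen.add(key(m))
                    nxt.append(m)
        frontier = nxt
        ell += 1

    print(ok, len(seen))  # True 8
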
February 8, 2010 | Geometry, Root Systems

The Action of the Weyl Group on Weyl Chambers

With our latest lemmas in hand, we’re ready to describe the action of the Weyl group \mathcal{W} of a root system \Phi on the set of its Weyl chambers. Specifically, the action is “simply transitive”, and the group itself is generated by the reflections corresponding to the simple roots in any given base \Delta.

To be a bit more explicit, let \Delta be any fixed base of \Phi. Then a number of things happen:

  • If \gamma is any regular vector, then there is some \sigma\in\mathcal{W} so that \langle\sigma(\gamma),\alpha\rangle>0 for all \alpha\in\Delta. That is, \sigma sends the Weyl chamber \mathfrak{C}(\gamma) to the fundamental Weyl chamber \mathfrak{C}(\Delta).
  • If \Delta' is another base, then there is some \sigma\in\mathcal{W} so that \sigma(\Delta')=\Delta. That is, \sigma sends \mathfrak{C}(\Delta') to \mathfrak{C}(\Delta). We say that the action of the Weyl group is “transitive” on bases and their corresponding Weyl chambers.
  • If \alpha\in\Phi is any root, then there is some \sigma\in\mathcal{W} so that \sigma(\alpha)\in\Delta.
  • The Weyl group \mathcal{W} is generated by the \sigma_\alpha for \alpha\in\Delta.
  • If \sigma(\Delta)=\Delta for some \sigma\in\mathcal{W}, then \sigma is the identity transformation. That is, the only transformation in the Weyl group that sends a base back to itself is the trivial one. We say that the action of the Weyl group is “simple” on bases and their corresponding Weyl chambers.

What we’ll do is let \mathcal{W}' be the group generated by the \sigma_\alpha for \alpha\in\Delta, as in the fourth assertion. We’ll show that this group satisfies the first three assertions, and then show that \mathcal{W}'=\mathcal{W}.

Let \gamma be a regular vector and write \delta for the half-sum of the positive roots

\displaystyle\delta=\frac{1}{2}\sum\limits_{\beta\in\Phi^+}\beta

Choose some \sigma\in\mathcal{W}' so that \langle\sigma(\gamma),\delta\rangle is as large as possible. If \sigma_\alpha is simple, then \sigma_\alpha\sigma is in \mathcal{W}' too, so we find

\displaystyle\begin{aligned}\langle\sigma(\gamma),\delta\rangle&\geq\langle\sigma_\alpha\sigma(\gamma),\delta\rangle\\&=\langle\sigma(\gamma),\sigma_\alpha(\delta)\rangle\\&=\langle\sigma(\gamma),\delta-\alpha\rangle\\&=\langle\sigma(\gamma),\delta\rangle-\langle\sigma(\gamma),\alpha\rangle\end{aligned}

which forces \langle\sigma(\gamma),\alpha\rangle\geq0 for all \alpha\in\Delta. None of these inner products can actually equal zero: if one were, then \sigma(\gamma) would lie in P_\alpha, so \gamma would lie in the hyperplane P_{\sigma^{-1}(\alpha)} and wouldn’t be regular. Therefore \sigma(\gamma) lies in the fundamental Weyl chamber, as desired.

For the second assertion, we know that there must be some regular \gamma in the positive half-space for each root \alpha'\in\Delta', and the first assertion then applies to send \Delta' to \Delta.

For the third assertion, we can invoke the second assertion as long as we know that every root \alpha\in\Phi lies in some base \Delta'. We can find some \gamma\in P_\alpha that’s in no other hyperplane perpendicular to another root (noting that P_{-\alpha}=P_\alpha). Then pick some close enough \gamma' so that \langle\gamma',\alpha\rangle=\epsilon>0, but also \lvert\langle\gamma',\beta\rangle\rvert>\epsilon for all \beta\neq\pm\alpha. Then \alpha\in\Phi^+(\gamma'), and \alpha is indecomposable there: if \alpha=\beta_1+\beta_2 with \beta_1,\beta_2\in\Phi^+(\gamma'), then \epsilon=\langle\gamma',\beta_1\rangle+\langle\gamma',\beta_2\rangle, but each term on the right is greater than \epsilon. So the root \alpha must belong to the base \Delta(\gamma').

Okay, now let’s show that \mathcal{W}'=\mathcal{W}. We just need to show that each reflection \sigma_\alpha for \alpha\in\Phi (all of which together generate \mathcal{W}) is an element of \mathcal{W}'. But using our third assertion we can find some \tau\in\mathcal{W}' so that \beta=\tau(\alpha)\in\Delta. Then

\displaystyle\sigma_\beta=\sigma_{\tau(\alpha)}=\tau\sigma_\alpha\tau^{-1}

and so \sigma_\alpha=\tau^{-1}\sigma_\beta\tau\in\mathcal{W}'.

Finally, suppose that \sigma is some non-identity element of \mathcal{W} so that \sigma(\Delta)=\Delta. Thanks to our fourth assertion we can write \sigma as a string \sigma_1\dots\sigma_t of basic reflections, and we can assume that t is as small as possible. Then we must have \sigma(\alpha_t)\prec0 by our final lemma from last time, but we also must have \sigma(\alpha_t)\in\Delta\subseteq\Phi^+, which gives us a contradiction.

February 5, 2010 | Geometry, Root Systems

Some Lemmas on Simple Roots

If \Delta is some fixed base of a root system \Phi, we call the roots \alpha\in\Delta “simple”. Simple roots have a number of nice properties, some of which we’ll run through now.

First off, if \alpha\in\Phi^+ is positive but not simple, then \alpha-\beta is a (positive) root for some simple \beta\in\Delta. If \langle\alpha,\beta\rangle\leq0 for all \beta\in\Delta, then the same argument we used when we showed \Delta(\gamma) is linearly independent would show that \Delta\cup\{\alpha\} is linearly independent. But this is impossible because \Delta is already a basis.

So \langle\alpha,\beta\rangle>0 for some \beta\in\Delta, and thus \alpha-\beta\in\Phi. It must be positive: since \alpha is not simple, its height is at least 2, so at least one coefficient of \alpha-\beta with respect to \Delta must still be positive, and so they all are.

In fact, every \alpha\in\Phi^+ can be written (not uniquely) as the sum \beta_1+\dots+\beta_{\mathrm{ht}(\alpha)} for a bunch of \beta_i\in\Delta, and in such a way that each partial sum \beta_1+\dots+\beta_k is itself a positive root. This follows by induction on \mathrm{ht}(\alpha): if \mathrm{ht}(\alpha)=1 then \alpha is in fact simple itself. If \alpha is not simple, then our argument above gives a \beta_{\mathrm{ht}(\alpha)} so that \alpha=\alpha'+\beta_{\mathrm{ht}(\alpha)} for some \alpha'\in\Phi^+ with \mathrm{ht}(\alpha')=\mathrm{ht}(\alpha)-1. And so on, by induction.

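The induction is really an algorithm: keep peeling off a simple root that pairs positively with what’s left. A Python sketch of it, again in my running B_2 example:

    import numpy as np

    # B2 chosen just as a concrete example.
    simples = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]
    positives = [np.array(r) for r in [(1.0, -1.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]]

    def is_positive_root(v):
        return any(np.allclose(v, r) for r in positives)

    def chain(alpha):
        # Write alpha as beta_1 + ... + beta_ht(alpha) with every partial
        # sum a positive root, mirroring the induction on height.
        # (The lemma above guarantees the loop finds a suitable simple root.)
        if any(np.allclose(alpha, s) for s in simples):
            return [alpha]
        for s in simples:
            if np.dot(alpha, s) > 0 and is_positive_root(alpha - s):
                return chain(alpha - s) + [s]

    # alpha1 + 2*alpha2 = (1, 1) decomposes as alpha2, then alpha1, then alpha2.
    print(chain(np.array([1.0, 1.0])))
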
If \alpha is simple, then the reflection \sigma_\alpha permutes the positive roots other than \alpha. That is, if \alpha\neq\beta\in\Phi^+, then \sigma_\alpha(\beta)\in\Phi^+ as well. Indeed, we write

\displaystyle\beta=\sum\limits_{\gamma\in\Delta}k_\gamma\gamma

with all k_\gamma nonnegative. Clearly k_\gamma\neq0 for some \gamma\neq\alpha (otherwise \beta=\alpha). But the coefficient of \gamma in \sigma_\alpha(\beta)=\beta-(\beta\rtimes\alpha)\alpha must still be k_\gamma. Since this is positive, all the coefficients in the decomposition of \sigma_\alpha(\beta) are positive, and so \sigma_\alpha(\beta)\in\Phi^+. Further, it can’t be \alpha itself, because \alpha is the image \sigma_\alpha(-\alpha).

In fact, this leads to a particularly useful little trick. Let \delta be the half-sum of all the positive roots. That is,

\displaystyle\delta=\frac{1}{2}\sum\limits_{\beta\in\Phi^+}\beta

then \sigma_\alpha(\delta)=\delta-\alpha for all simple roots \alpha. The reflection shuffles around all the positive roots other than \alpha itself, which it sends to -\alpha. This changes the sum by -2\alpha, and the factor of \frac{1}{2} turns that into -\alpha.

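This one is also easy to check numerically. In my B_2 example the half-sum works out to \delta=(3/2,1/2), and reflecting through either simple root’s hyperplane does just subtract that root (a quick sketch, not part of the argument):

    import numpy as np

    def reflect(v, x):
        return x - 2 * np.dot(x, v) / np.dot(v, v) * v

    # B2 chosen just as a concrete example.
    simples = [np.array([1.0, -1.0]), np.array([0.0, 1.0])]
    positives = [np.array(r) for r in [(1.0, -1.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]]

    delta = sum(positives) / 2  # half-sum of the positive roots: (1.5, 0.5)
    for a in simples:
        print(np.allclose(reflect(a, delta), delta - a))  # True, True
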
Now take a bunch of \alpha_1,\dots,\alpha_t\in\Delta (not necessarily distinct) and write \sigma_i=\sigma_{\alpha_i}. If \sigma_1\dots\sigma_{t-1}(\alpha_t)\prec0, then there is some index 1\leq s<t that we can skip. That is,

\displaystyle\sigma_1\dots\sigma_t=\sigma_1\dots\sigma_{s-1}\sigma_{s+1}\dots\sigma_{t-1}

Write \beta_i=\sigma_{i+1}\dots\sigma_{t-1}(\alpha_t) for every i from 0 to t-2, and \beta_{t-1}=\alpha_t. By our assumption, \beta_0\prec0 and \beta_{t-1}\succ0. Thus there is some smallest index s so that \beta_s\succ0. Then \sigma_s(\beta_s)=\beta_{s-1}\prec0, and since a simple reflection sends only one positive root (its own) to a negative root, we must have \beta_s=\alpha_s. But we know that \sigma_{\tau(\alpha)}=\tau\sigma_\alpha\tau^{-1}. In particular,

\displaystyle\sigma_s=\left(\sigma_{s+1}\dots\sigma_{t-1}\right)\sigma_t\left(\sigma_{s+1}\dots\sigma_{t-1}\right)^{-1}

And then we can write

\displaystyle\begin{aligned}\sigma_1\dots\sigma_{s-1}(\sigma_s\sigma_{s+1}\dots\sigma_{t-1})\sigma_t&=\sigma_1\dots\sigma_{s-1}(\sigma_{s+1}\dots\sigma_{t-1}\sigma_t)\sigma_t\\&=\sigma_1\dots\sigma_{s-1}\sigma_{s+1}\dots\sigma_{t-1}\end{aligned}

From this we can conclude that if \sigma=\sigma_1\dots\sigma_t is an expression in terms of the basic reflections with t as small as possible, then \sigma(\alpha_t)\prec0. Indeed, if \sigma(\alpha_t)\succ0, then

\displaystyle\sigma_1\dots\sigma_{t-1}(\alpha_t)=\sigma_1\dots\sigma_{t-1}(\sigma_t(-\alpha_t))=-\sigma(\alpha_t)\prec0

and we’ve just seen that in this case we can leave off \sigma_t as well as some \sigma_s in the expression for \sigma.

February 4, 2010 | Geometry, Root Systems

Weyl Chambers

A very useful concept in our study of root systems will be that of a Weyl chamber. As we showed at the beginning of last time, the hyperplanes P_\alpha for \alpha\in\Phi cannot fill up all of V. What’s left over gets chopped by these hyperplanes into a bunch of connected components, which we call Weyl chambers. Thus every regular vector \gamma belongs to exactly one of these Weyl chambers, denoted \mathfrak{C}(\gamma).

Saying that two vectors share a Weyl chamber — that \mathfrak{C}(\gamma)=\mathfrak{C}(\gamma') — tells us that \gamma and \gamma' lie on the same side of each and every hyperplane P_\alpha for \alpha\in\Phi. That is, \langle\gamma,\alpha\rangle and \langle\gamma',\alpha\rangle are either both positive or both negative. So this means that \Phi^+(\gamma)=\Phi^+(\gamma'), and thus the induced bases are equal: \Delta(\gamma)=\Delta(\gamma'). We see, then, that we have a natural bijection between the Weyl chambers of a root system \Phi and the bases for \Phi.

We write \mathfrak{C}(\Delta)=\mathfrak{C}(\gamma) for \Delta=\Delta(\gamma) and call this the fundamental Weyl chamber relative to \Delta. Geometrically, \mathfrak{C}(\Delta) is the open convex set consisting of the intersection of all the half-spaces \{\gamma\vert\langle\gamma,\alpha\rangle>0\} for \alpha\in\Delta.

The Weyl group \mathcal{W} of \Phi shuffles Weyl chambers around. Specifically, if \sigma\in\mathcal{W} and \gamma is regular, then \sigma(\mathfrak{C}(\gamma))=\mathfrak{C}(\sigma(\gamma)).

On the other hand, the Weyl group also sends bases of \Phi to each other. If \Delta\subseteq\Phi is a base, then \sigma(\Delta) is another base. Indeed, since \sigma is invertible \sigma(\Delta) will still be a basis for V. Further, for any \beta\in\Phi we can write \beta=\sigma(\beta'), and then use the base property of \Delta to write \beta' as a nonnegative or nonpositive integral combination of \Delta. Hitting everything with \sigma makes \beta a nonnegative or nonpositive integral combination of \sigma(\Delta), and so this is indeed a base.

And, just as we’d hope, these two actions of the Weyl group are equivalent by the bijection above. We have \sigma(\Delta(\gamma))=\Delta(\sigma(\gamma)) because \sigma preserves the inner product, and so \langle\sigma(\gamma),\sigma(\alpha)\rangle=\langle\gamma,\alpha\rangle. Thus we write \Delta=\Delta(\gamma) for some regular \gamma and find that

\displaystyle\begin{aligned}\sigma(\mathfrak{C}(\Delta))&=\sigma(\mathfrak{C}(\Delta(\gamma)))\\&=\sigma(\mathfrak{C}(\gamma))\\&=\mathfrak{C}(\sigma(\gamma))\\&=\mathfrak{C}(\Delta(\sigma(\gamma)))\\&=\mathfrak{C}(\sigma(\Delta(\gamma)))\\&=\mathfrak{C}(\sigma(\Delta))\end{aligned}

February 3, 2010 | Geometry, Root Systems

The Existence of Bases for Root Systems

We’ve defined what a base for a root system is, but we haven’t provided any evidence yet that they even exist. Today we’ll not only see that every root system has a base, but we’ll show how all possible bases arise. This will be sort of a long and dense one.

First of all, we observe that any hyperplane has measure zero, and so any finite collection of them will too. Thus the collection of all the hyperplanes P_\alpha perpendicular to vectors \alpha\in\Phi cannot fill up all of V. We call vectors in one of these hyperplanes “singular”, and vectors in none of them “regular”.

When \gamma is regular, it divides \Phi into two collections. A vector \alpha is in \Phi^+(\gamma) if \alpha\in\Phi and \langle\alpha,\gamma\rangle>0, and we have a similar definition for \Phi^-(\gamma). It should be clear that \Phi^-(\gamma)=-\Phi^+(\gamma), and that every vector \alpha\in\Phi is in one or the other; otherwise \gamma would be in P_\alpha. For a regular \gamma, we say that \alpha\in\Phi^+(\gamma) is “decomposable” if \alpha=\beta_1+\beta_2 for \beta_1,\beta_2\in\Phi^+(\gamma). Otherwise, we say that \alpha is “indecomposable”.

Now we can state our existence theorem. Given a regular \gamma, let \Delta(\gamma) be the set of indecomposable roots in \Phi^+(\gamma). Then \Delta(\gamma) is a base of \Phi, and every base of \Phi arises in this manner. We will prove this in a number of steps.

First off, every vector in \Phi^+(\gamma) is a nonnegative integral linear combination of the vectors in \Delta(\gamma). Otherwise there is some \alpha\in\Phi^+(\gamma) that can’t be written like that, and we can choose \alpha so that \langle\gamma,\alpha\rangle is as small as possible. \alpha itself can’t be indecomposable, so we must have \alpha=\beta_1+\beta_2 for some two vectors \beta_1,\beta_2\in\Phi^+(\gamma), and so \langle\gamma,\alpha\rangle=\langle\gamma,\beta_1\rangle+\langle\gamma,\beta_2\rangle. Each of these two inner products is strictly positive, and so each is strictly smaller than \langle\gamma,\alpha\rangle. By the minimality of \langle\gamma,\alpha\rangle, then, we must be able to write each of \beta_1 and \beta_2 as a nonnegative integral linear combination of vectors in \Delta(\gamma). But then we can write \alpha in this form after all! The assertion follows.

Second, if \alpha and \beta are distinct vectors in \Delta(\gamma) then \langle\alpha,\beta\rangle\leq0. Indeed, by our lemma if \langle\alpha,\beta\rangle>0 then \alpha-\beta\in\Phi. And so either \alpha-\beta or \beta-\alpha lies in \Phi^+(\gamma). In the first case, we can write \alpha=\beta+(\alpha-\beta), so \alpha is decomposable. In the second case, we can similarly show that \beta is decomposable. And thus we have a contradiction and the assertion follows.

Next, \Delta(\gamma) is linearly independent. If we have a linear combination

\displaystyle\sum\limits_{\alpha\in\Delta(\gamma)}r_\alpha\alpha=0

then we can separate out the vectors \alpha for which the coefficient r_\alpha>0 and those \beta for which r_\beta<0, and write

\displaystyle\sum\limits_\alpha s_\alpha\alpha=\sum\limits_\beta t_\beta\beta

with all coefficients positive. Call this common sum \epsilon and calculate

\displaystyle\langle\epsilon,\epsilon\rangle=\sum\limits_{\alpha,\beta}s_\alpha t_\beta\langle\alpha,\beta\rangle

Since each \langle\alpha,\beta\rangle\leq0, this whole sum must be nonpositive, which can only happen if \epsilon=0. But then

\displaystyle0=\langle\gamma,\epsilon\rangle=\sum\limits_\alpha s_\alpha\langle\gamma,\alpha\rangle

which forces all the s_\alpha=0. Similarly, all the t_\beta=0, and thus the original linear combination must have been trivial. Thus \Delta(\gamma) is linearly independent.

Now we can show that \Delta(\gamma) is a base. Every vector in \Phi^+(\gamma) is indeed a nonnegative integral linear combination of the vectors in \Delta(\gamma). Since \Phi^-(\gamma)=-\Phi^+(\gamma), every vector in this set is a nonpositive integral linear combination of the vectors in \Delta(\gamma). And every vector in \Phi is in one or the other of these sets. Also, since \Phi spans V we find that \Delta(\gamma) spans V as well. But since it’s linearly independent, it must be a basis. And so it satisfies both of the criteria to be a base.

Finally, every base \Delta is of the form \Delta(\gamma) for some regular \gamma. Indeed, we just have to find some \gamma for which \langle\gamma,\alpha\rangle>0 for each \alpha\in\Delta. Then since any \beta\in\Phi is an integral linear combination of \alpha\in\Delta with coefficients all of the same sign, we can verify that \langle\gamma,\beta\rangle\neq0 for all \beta\in\Phi, proving that \gamma is regular, and that \Phi^+=\Phi^+(\gamma). The vectors \alpha\in\Delta are then indecomposable: a decomposition \alpha=\beta_1+\beta_2 with \beta_1,\beta_2\in\Phi^+(\gamma)=\Phi^+ would give \alpha a height of at least 2. This shows that \Delta\subseteq\Delta(\gamma). But these sets contain the same number of elements since they’re both bases of V, and so \Delta=\Delta(\gamma).

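The whole construction fits in a few lines of code. Here’s a sketch that recovers bases of the B_2 system from regular vectors; the enumeration of the roots and the two sample vectors are my own choices:

    import numpy as np
    from itertools import product

    # All eight roots of B2, chosen just as a concrete example.
    roots = [np.array(r) for r in product((-1.0, 0.0, 1.0), repeat=2) if any(r)]

    def base_from_regular(gamma):
        # Delta(gamma): the indecomposable elements of Phi^+(gamma).
        pos = [a for a in roots if np.dot(a, gamma) > 0]
        return [a for a in pos
                if not any(np.allclose(a, b + c) for b in pos for c in pos)]

    print(base_from_regular(np.array([2.0, 1.0])))  # [(0,1), (1,-1)]
    print(base_from_regular(np.array([1.0, 2.0])))  # a different base: [(-1,1), (1,0)]
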
The only loose end is showing that such a \gamma exists. I’ll actually go one better and show that for any basis \{\eta_i\}_{i=1}^{\dim(V)} the intersection of the “half-spaces” \{\gamma\vert\langle\gamma,\eta_i\rangle>0\} is nonempty. To see this, define

\displaystyle\delta_i=\eta_i-\mathrm{proj}_{V_i}(\eta_i),\qquad V_i=\mathrm{span}\{\eta_j\vert j\neq i\}

This is what’s left of the basis vector \eta_i after subtracting off its projection onto the subspace V_i spanned by all the other basis vectors, leaving a nonzero vector perpendicular to every \eta_j with j\neq i. (Subtracting the projections onto the \eta_j one at a time is only guaranteed to work when the \eta_j are orthogonal, which is why we project onto the whole subspace at once.) Then consider the vector \gamma=r^i\delta_i where each r^i>0. Since \langle\delta_i,\eta_k\rangle=0 for i\neq k while \langle\delta_k,\eta_k\rangle=\langle\delta_k,\delta_k\rangle>0, it’s a straightforward computation to show that \langle\gamma,\eta_k\rangle=r^k\langle\delta_k,\delta_k\rangle>0, and so \gamma is just such a vector as we’re claiming exists.

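Numerically, the cleanest way to get vectors like the \delta_i is from the inverse Gram matrix of the basis, which produces them rescaled so that \langle\delta_i,\eta_j\rangle is 1 when i=j and 0 otherwise. A Python sketch with a deliberately non-orthogonal basis of \mathbb{R}^3 (my example, not from the post):

    import numpy as np

    # Rows are the basis vectors eta_i: deliberately non-orthogonal,
    # chosen just as a concrete example.
    etas = np.array([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

    # Rows of the inverse Gram matrix give each delta_i in terms of the etas,
    # normalized so that <delta_i, eta_j> = 1 if i = j and 0 otherwise.
    deltas = np.linalg.inv(etas @ etas.T) @ etas

    gamma = np.ones(3) @ deltas  # all coefficients r^i = 1
    print(etas @ gamma)          # [1. 1. 1.]: positive pairing with every eta_i
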
February 2, 2010 | Geometry, Root Systems

Bases for Root Systems

We don’t always want to deal with a whole root system \Phi\subseteq V. Indeed, that’s sort of like using a whole group when all the information is contained in some much smaller generating set. For a vector space we call such a small generating set a basis. For a root system, we call it a base. Specifically, a subset \Delta\subseteq\Phi is called a base if first of all \Delta is a basis for V, and if each vector \beta\in\Phi can be written as a linear combination

\displaystyle\beta=\sum\limits_{\alpha\in\Delta}k_\alpha\alpha

where the coefficients k_\alpha are either all nonnegative integers or all nonpositive integers.

Some observations are immediate. Because \Delta is a basis, it contains exactly n=\dim(V) vectors of \Phi. It also tells us that the decomposition of each \beta is unique. In fact, as for any basis, every vector in V can be written uniquely as a linear combination of the vectors in \Delta. What we’re emphasizing here is that for vectors in \Phi, the coefficients are all integers, and they’re either all nonnegative or all nonpositive.

Another thing a choice of base gives us is a partial order \preceq on the root system \Phi. We say that \beta is a “positive root” with respect to \Delta (and write \beta\succeq0) if all of its coefficients are nonnegative integers. Similarly, we say that \beta is a “negative root” with respect to \Delta (and write \beta\preceq0) if all of its coefficients are nonpositive integers. We extend this to a partial order by defining \beta\preceq\alpha if \beta-\alpha\preceq0.

Every root is either positive or negative. We write \Phi^+ for the collection of positive roots with respect to a base \Delta and \Phi^- for the collection of negative roots. It should be clear that \Delta\subseteq\Phi^+, and also that \Phi^-=-\Phi^+ — the negative roots are exactly the negatives of the positive roots.

We can also define a kind of size of a vector \beta\in\Phi. Given the above (unique) decomposition, we define the “height” of \beta relative to \Delta as

\displaystyle\mathrm{ht}(\beta)=\sum\limits_{\alpha\in\Delta}k_\alpha

This will be useful when it comes to proving statements about all vectors in \Phi^+ by induction on their heights.

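As a tiny worked example, in the B_2 system used in the sketches above, the positive roots written as coefficient vectors over a base \Delta=\{\alpha_1,\alpha_2\} have heights 1, 1, 2, and 3:

    # Positive roots of B2 as coefficient vectors (k_alpha1, k_alpha2)
    # over the base Delta = {alpha1, alpha2}; my running example.
    positives = {"alpha1": (1, 0), "alpha2": (0, 1),
                 "alpha1+alpha2": (1, 1), "alpha1+2*alpha2": (1, 2)}

    for name, ks in positives.items():
        print(name, "has height", sum(ks))
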
If \alpha\neq\beta are two vectors in a base \Delta\subseteq\Phi, then we know that \langle\alpha,\beta\rangle\leq0 and \alpha-\beta\notin\Phi. Indeed, our lemma tells us that if \langle\alpha,\beta\rangle>0 then \alpha-\beta would be in \Phi. But this is impossible, because every vector in \Phi can only be written as a linear combination of vectors in \Delta in one way, and that way cannot have some positive signs and some negative signs like \alpha-\beta does.

What this tells us (among other things) is that \beta must be one end of the \alpha root string through \beta. The other end must be \sigma_\alpha(\beta), and the root string must be unbroken between these two ends. Every vector \beta+k\alpha with 0\leq k\leq-\beta\rtimes\alpha must be in \Phi^+.

February 1, 2010 | Geometry, Root Systems
