# The Unapologetic Mathematician

## Proving the Classification Theorem V

Today we conclude the proof of the classification theorem. The first four parts of the proof are here, here, here, and here.

10. The only possible Coxeter graphs with a triple vertex are those of the forms $D_n$, $E_6$, $E_7$, and $E_8$.

From step 8 we have the labelled graph

As we did in step 9, we define

$\displaystyle\epsilon=\sum\limits_{i=1}^{p-1}i\epsilon_i$

$\displaystyle\eta=\sum\limits_{j=1}^{q-1}j\eta_j$

$\displaystyle\zeta=\sum\limits_{k=1}^{r-1}k\zeta_k$

The vectors $\epsilon$, $\eta$, and $\zeta$ are mutually orthogonal and linearly independent, and $\psi$ is not in the subspace that they span.

We look back at our proof of step 4 to determine that $\cos(\theta_1)^2+\cos(\theta_2)^2+\cos(\theta_3)^2<1$, where $\theta_1$, $\theta_2$, and $\theta_3$ are the angles between $\psi$ and $\epsilon$, $\eta$, and $\zeta$, respectively. We look back at our proof of step 9 to determine that $\langle\epsilon,\epsilon\rangle=\frac{p(p-1)}{2}$, $\langle\eta,\eta\rangle=\frac{q(q-1)}{2}$, and $\langle\zeta,\zeta\rangle=\frac{r(r-1)}{2}$. Thus we can calculate the cosine

$\displaystyle\begin{aligned}\cos(\theta_1)^2&=\frac{\langle\epsilon,\psi\rangle^2}{\langle\epsilon,\epsilon\rangle\langle\psi,\psi\rangle}\\&=\frac{(p-1)^2\langle\epsilon_{p-1},\psi\rangle^2}{\langle\epsilon,\epsilon\rangle}\\&=\frac{(p-1)^2\frac{1}{4}}{\frac{p(p-1)}{2}}\\&=\frac{p-1}{2p}\\&=\frac{1}{2}\left(1-\frac{1}{p}\right)\end{aligned}$

And similarly we find $\cos(\theta_2)^2=\frac{1}{2}(1-\frac{1}{q})$, and $\cos(\theta_3)^2=\frac{1}{2}(1-\frac{1}{r})$. Adding up, we find

$\displaystyle\begin{aligned}\frac{1}{2}\left(3-\frac{1}{p}-\frac{1}{q}-\frac{1}{r}\right)&<1\\3-\frac{1}{p}-\frac{1}{q}-\frac{1}{r}&<2\\\frac{1}{p}+\frac{1}{q}+\frac{1}{r}&>1\end{aligned}$

This last inequality, by the way, is hugely important in many areas of mathematics (it governs the classification of the Platonic solids, for instance), and it's really interesting to find it cropping up here.

Anyway, none of $p$, $q$, or $r$ can be $1$, or we wouldn't have a triple vertex at all. We can also choose which leg is which so that

$\displaystyle\frac{1}{p}\leq\frac{1}{q}\leq\frac{1}{r}\leq\frac{1}{2}$

We can determine from here that

$\displaystyle\frac{3}{2}\geq\frac{3}{r}\geq\frac{1}{p}+\frac{1}{q}+\frac{1}{r}>1$

and so we must have $r=2$, and the shortest leg must be one edge long. Now we have $\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$; since $\frac{1}{q}\geq\frac{1}{p}$, this gives $\frac{2}{q}>\frac{1}{2}$, and so $q$ must be either $2$ or $3$.

If $q=3$, then the second shortest leg is two edges long. In this case, $\frac{1}{p}>\frac{1}{6}$ and $p<6$. The possibilities for the triple $(p,q,r)$ are $(3,3,2)$, $(4,3,2)$, and $(5,3,2)$; giving graphs $E_6$, $E_7$, and $E_8$, respectively.

On the other hand, if $q=2$, then the second shortest leg is also one edge long. In this case, there is no more restriction on $p$, and so the remaining leg can be as long as we like. This gives us the $D_n$ family of graphs.
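The whole case analysis can be double-checked by brute force. Here's a quick Python sketch (my own, not from the post; the cutoff is arbitrary, standing in for the infinite $D_n$ family):

```python
from fractions import Fraction

def triple_vertex_solutions(cutoff):
    """All triples p >= q >= r >= 2 with 1/p + 1/q + 1/r > 1, with p up to cutoff."""
    sols = []
    for p in range(2, cutoff + 1):
        for q in range(2, p + 1):
            for r in range(2, q + 1):
                if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1:
                    sols.append((p, q, r))
    return sols

sols = triple_vertex_solutions(10)
# Every solution has r = 2; q = 2 gives the D family, q = 3 forces p < 6 (the E's).
assert all(r == 2 for (p, q, r) in sols)
assert [(p, q, r) for (p, q, r) in sols if q == 3] == [(3, 3, 2), (4, 3, 2), (5, 3, 2)]
```

Using exact rationals via `Fraction` avoids any floating-point fuzz in the strict inequality.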

And we’re done! If we have one triple edge, we must have the graph $G_2$. If we have a double edge or a triple vertex, we can have only one, and we can’t have one of each. Step 9 narrows down graphs with a double edge to $F_4$ and the families $B_n$ and $C_n$, while step 10 narrows down graphs with a triple vertex to $E_6$, $E_7$, and $E_8$, and the family $D_n$. Finally, if there are no triple vertices or double edges, we’re left with a single simple chain of type $A_n$.

February 26, 2010

## Proving the Classification Theorem IV

We continue proving the classification theorem. The first three parts are here, here, and here.

8. Any connected graph $\Gamma$ takes one of the four following forms: a simple chain, the $G_2$ graph, three simple chains joined at a central vertex, or a chain with exactly one double edge.

This step largely consolidates what we’ve done to this point. Here are the four possible graphs:

The labels will help with later steps.

Step 5 told us that there’s only one connected graph that contains a triple edge. Similarly, if we had more than one double edge or triple vertex, then we would be able to find two of them connected by a simple chain. But that would violate step 7, and so we can have at most one of these features in the whole graph.

9. The only possible Coxeter graphs with a double edge are those underlying the Dynkin diagrams $B_n$, $C_n$, and $F_4$.

Here we’ll use the labels on the above graph. We define

$\displaystyle\epsilon=\sum\limits_{i=1}^pi\epsilon_i$

$\displaystyle\eta=\sum\limits_{j=1}^qj\eta_j$

As in step 6, we find that $2\langle\epsilon_i,\epsilon_{i+1}\rangle=-1=2\langle\eta_j,\eta_{j+1}\rangle$ and all other pairs of vectors are orthogonal. And so we calculate

$\displaystyle\begin{aligned}\langle\epsilon,\epsilon\rangle&=\sum\limits_{i=1}^pi^2\langle\epsilon_i,\epsilon_i\rangle+2\sum\limits_{i<j}ij\langle\epsilon_i,\epsilon_j\rangle\\&=\sum\limits_{i=1}^pi^2-\sum\limits_{i=1}^{p-1}i(i+1)\\&=p^2-\sum\limits_{i=1}^{p-1}i\\&=p^2-\frac{p(p-1)}{2}\\&=\frac{p(p+1)}{2}\end{aligned}$

And similarly, $\langle\eta,\eta\rangle=\frac{q(q+1)}{2}$. We also know that $4\langle\epsilon_p,\eta_q\rangle^2=2$, and so we find

$\displaystyle\langle\epsilon,\eta\rangle^2=p^2q^2\langle\epsilon_p,\eta_q\rangle^2=\frac{p^2q^2}{2}$
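These inner products are easy to sanity-check numerically. Here's a sketch using an explicit model of a chain, the unit vectors $(e_i-e_{i+1})/\sqrt{2}$ (my own choice of model, not part of the proof), which satisfy $2\langle\epsilon_i,\epsilon_{i+1}\rangle=-1$:

```python
import math

def chain_vectors(p):
    """p unit vectors in R^{p+1}: eps_i = (e_i - e_{i+1}) / sqrt(2),
    so successive pairs satisfy 2<eps_i, eps_{i+1}> = -1."""
    vecs = []
    for i in range(p):
        v = [0.0] * (p + 1)
        v[i], v[i + 1] = 1 / math.sqrt(2), -1 / math.sqrt(2)
        vecs.append(v)
    return vecs

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

p = 7
eps = chain_vectors(p)
# epsilon = sum of i * eps_i, as in the proof
epsilon = [sum((i + 1) * eps[i][k] for i in range(p)) for k in range(p + 1)]
assert abs(dot(epsilon, epsilon) - p * (p + 1) / 2) < 1e-9
```

The sum telescopes to $(e_1+\dots+e_p-pe_{p+1})/\sqrt{2}$, whose squared norm is $(p+p^2)/2$, matching the formula.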

Now we can use the Cauchy-Schwarz inequality to conclude that

$\frac{p^2q^2}{2}=\langle\epsilon,\eta\rangle^2<\langle\epsilon,\epsilon\rangle\langle\eta,\eta\rangle=\frac{p(p+1)q(q+1)}{4}$

where the inequality is strict, since $\epsilon$ and $\eta$ are linearly independent. And so we find

$\displaystyle\begin{aligned}\frac{p^2q^2}{2}&<\frac{p(p+1)q(q+1)}{4}\\2p^2q^2&<p(p+1)q(q+1)\\2pq&<(p+1)(q+1)\\pq-p-q+1&<2\\(p-1)(q-1)&<2\end{aligned}$

We thus must have either $p=q=2$, which gives us the $F_4$ diagram, or $p=1$ or $q=1$ with the other arbitrary, which give rise to the $B_n$ and $C_n$ Coxeter graphs.
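This conclusion can also be verified by brute force over a small range; a quick sketch (mine, not from the post):

```python
def double_edge_solutions(cutoff):
    """Pairs (p, q) satisfying the strict Cauchy-Schwarz inequality
    2*p^2*q^2 < p*(p+1)*q*(q+1) from the proof."""
    return [(p, q) for p in range(1, cutoff + 1)
                   for q in range(1, cutoff + 1)
                   if 2 * p**2 * q**2 < p * (p + 1) * q * (q + 1)]

sols = double_edge_solutions(20)
# The only solutions: p = 1 or q = 1 (the B/C families), or p = q = 2 (F4).
assert all(p == 1 or q == 1 or (p, q) == (2, 2) for (p, q) in sols)
assert (2, 2) in sols and (2, 3) not in sols
```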

February 25, 2010 Posted by | Geometry, Root Systems | 5 Comments

## Proving the Classification Theorem III

Sorry, but what with errands today and work on some long-term plans, I forgot to put this post up this afternoon.

We continue with the proof of the classification theorem. The first two parts are here and here.

6. If $\{\epsilon_1,\dots,\epsilon_k\}\subseteq\mathfrak{D}$ is a simple chain in the graph $\Gamma$, then we can remove these vectors and replace them by their sum. The resulting set is still admissible, and its graph is obtained from $\Gamma$ by contracting the chain to a single vertex.

A simple chain in $\Gamma$ is a subgraph like

We define

$\displaystyle\epsilon=\sum\limits_{i=1}^k\epsilon_i$

I say that if we replace $\{\epsilon_1,\dots,\epsilon_k\}$ by $\{\epsilon\}$ in $\mathfrak{D}$ then we still have an admissible set $\mathfrak{D}'$. Any edges connecting to any of the vertices $\epsilon_i$ will then connect to the vertex $\epsilon$.

It should be clear that $\mathfrak{D}'$ is still linearly independent. If there’s no way to write zero as a nontrivial linear combination of the vectors in $\mathfrak{D}$, then of course there’s no way to do it with the additional condition that the coefficients of all the $\epsilon_i$ are the same.

Because we have a simple chain, we find $2\langle\epsilon_i,\epsilon_{i+1}\rangle=-1$ for $1\leq i\leq k-1$ and $\langle\epsilon_i,\epsilon_j\rangle=0$ otherwise. Thus we calculate (compare to step 2)

$\displaystyle\begin{aligned}\langle\epsilon,\epsilon\rangle&=k+2\sum\limits_{i<j}\langle\epsilon_i,\epsilon_j\rangle\\&=k-(k-1)\\&=1\end{aligned}$

and so $\epsilon$ is a unit vector.
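This, too, can be checked against an explicit model. The unit vectors $(e_i-e_{i+1})/\sqrt{2}$ (my own choice of model, not from the post) form just such a simple chain, and their sum telescopes to a unit vector:

```python
import math

k = 6
# eps_i = (e_i - e_{i+1}) / sqrt(2): unit vectors with 2<eps_i, eps_{i+1}> = -1
eps = []
for i in range(k):
    v = [0.0] * (k + 1)
    v[i], v[i + 1] = 1 / math.sqrt(2), -1 / math.sqrt(2)
    eps.append(v)

# epsilon = sum of the chain vectors; it telescopes to (e_1 - e_{k+1}) / sqrt(2)
epsilon = [sum(eps[i][j] for i in range(k)) for j in range(k + 1)]
norm_sq = sum(x * x for x in epsilon)
assert abs(norm_sq - 1.0) < 1e-9  # epsilon is a unit vector
```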

Any other vertex $\eta$ of $\Gamma$ can be connected by an edge to at most one of the $\epsilon_i$, or else $\Gamma$ would contain a cycle (violating step 3). Thus we have either $\langle\eta,\epsilon\rangle=0$ if $\eta$ is connected to no vertex of the chain, or $\langle\eta,\epsilon\rangle=\langle\eta,\epsilon_i\rangle$ if $\eta$ is connected to $\epsilon_i$ by some number of edges. In either case we still find that $4\langle\eta,\epsilon\rangle^2$ is $0$, $1$, $2$, or $3$, and so $\mathfrak{D}'$ is admissible.

Further, the previous paragraph shows that the edges connecting $\epsilon$ to any other vertex $\eta$ of the graph $\Gamma$ are exactly the edges connecting the chain to $\eta$. Thus we obtain the graph $\Gamma'$ of $\mathfrak{D}'$ by contracting the chain in $\Gamma$ to a single vertex.

7. The graph $\Gamma$ contains no subgraph of any of the following forms:

Indeed, if one of these graphs occurred in $\Gamma$ it would be (by step 1) the graph of an admissible set itself. However, if this were the case we could contract the central chain to a single vertex by step 6. We would then find one of the graphs

But each of these graphs has four edges incident on the central vertex, which violates step 4!

February 25, 2010

## Proving the Classification Theorem II

We continue with the proof of the classification theorem that we started yesterday.

4. No more than three edges (counting multiplicities) can be incident on any given vertex of $\Gamma$.

Consider some vector $\epsilon\in\mathfrak{D}$, and let $\eta_1,\dots,\eta_k$ be the vectors whose vertices are connected to $\epsilon$ by at least one edge. Since by step 3 the graph $\Gamma$ cannot contain any cycles, no $\eta_i$ and $\eta_j$ can be connected to each other, and so we have $\langle\eta_i,\eta_j\rangle=0$ for $i\neq j$.

Since the collection $\{\epsilon,\eta_1,\dots,\eta_k\}$ is linearly independent, there must be some unit vector $\eta_0$ in the span of these vectors which is perpendicular to each of the $\eta_i$. This vector must satisfy $\langle\epsilon,\eta_0\rangle\neq0$. Then the collection $\{\eta_0,\eta_1,\dots,\eta_k\}$ is an orthonormal basis of this subspace, and we can thus write

$\displaystyle\epsilon=\sum\limits_{i=0}^k\langle\epsilon,\eta_i\rangle\eta_i$

and thus

$\displaystyle\begin{aligned}1=\langle\epsilon,\epsilon\rangle&=\left\langle\sum\limits_{i=0}^k\langle\epsilon,\eta_i\rangle\eta_i,\sum\limits_{j=0}^k\langle\epsilon,\eta_j\rangle\eta_j\right\rangle\\&=\sum\limits_{i=0}^k\sum\limits_{j=0}^k\langle\epsilon,\eta_i\rangle\langle\epsilon,\eta_j\rangle\langle\eta_i,\eta_j\rangle\\&=\sum\limits_{i=0}^k\sum\limits_{j=0}^k\langle\epsilon,\eta_i\rangle\langle\epsilon,\eta_j\rangle\delta_{i,j}\\&=\sum\limits_{i=0}^k\langle\epsilon,\eta_i\rangle^2\end{aligned}$

Now since we can’t have $\langle\epsilon,\eta_0\rangle=0$, we must find

$\displaystyle\sum\limits_{i=1}^k\langle\epsilon,\eta_i\rangle^2<1$

and thus

$\displaystyle\sum\limits_{i=1}^k4\langle\epsilon,\eta_i\rangle^2<4$

But $4\langle\epsilon,\eta_i\rangle^2$ is the number of edges (counting multiplicities) between $\epsilon$ and $\eta_i$, and so this sum is the total number of edges incident on $\epsilon$.
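Concretely, this says the edge multiplicities meeting at any vertex, each of which is $1$, $2$, or $3$, must sum to at most $3$. A quick enumeration of the allowed vertex configurations (a sketch of my own):

```python
from itertools import combinations_with_replacement

# Each neighbor eta_i contributes 4<eps, eta_i>^2 in {1, 2, 3} edges;
# the proof shows these multiplicities must sum to less than 4.
allowed = [c for n in range(1, 4)
             for c in combinations_with_replacement((1, 2, 3), n)
             if sum(c) <= 3]
assert allowed == [(1,), (2,), (3,), (1, 1), (1, 2), (1, 1, 1)]
```

Note that a triple edge $(3,)$ already uses up the whole allowance at both of its endpoints, which is exactly the observation behind the $G_2$ step.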

5. The only connected Coxeter graph $\Gamma$ containing a triple edge is $G_2$.

Indeed, step 4 tells us that no vertex can support more than three incident edges, counting multiplicities. Once we put the triple edge between two vertices, neither of them has any more room for another edge to connect to any other vertices. Thus the connected component cannot grow larger than this.

February 23, 2010

## Proving the Classification Theorem I

This week, we will prove the classification theorem for root systems. The proof consists of a long series of steps, and we’ll split it up over a number of posts.

Our strategy is to determine which Coxeter graphs can arise from actual root systems, and then see what Dynkin diagrams we can get. Since Coxeter graphs ignore relative lengths of roots, we will start by just working with unit vectors whose pairwise angles are described by the Coxeter graph.

So, for the time being we make the following assumptions: $V$ is a real inner-product space (of arbitrary dimension), and $\mathfrak{D}=\{\epsilon_1,\dots,\epsilon_n\}$ is a set of $n$ linearly independent unit vectors satisfying $\langle\epsilon_i,\epsilon_j\rangle\leq0$ and $4\langle\epsilon_i,\epsilon_j\rangle^2$ is one of $0$, $1$, $2$, or $3$ for $i\neq j$. For the duration of this proof, we will call such a set of vectors “admissible”. The elements of a base $\Delta$ for a root system $\Phi$, divided by their lengths, are an admissible set.

We construct a Coxeter graph $\Gamma$, just as before, for every admissible set, and these clearly include the Coxeter graphs that come from root systems. Our task is now to classify all the connected Coxeter graphs that come from admissible sets of vectors.

1. If we remove some of the vectors from an admissible set, then the remaining vectors still form an admissible set. The Coxeter graph of the new set is obtained from that of the old one by removing the vertices corresponding to the removed vectors and all their incident edges.

This should be straightforward. The remaining vectors are still linearly independent, and they still have unit length. The angles between each pair are also unchanged. All we do is remove some vectors (vertices in the graph).

2. The number of pairs of vertices in $\Gamma$ connected by at least one edge is strictly less than $n$, the total number of vertices.

Let us define

$\displaystyle\epsilon=\sum\limits_{i=1}^n\epsilon_i$

Since the $\epsilon_i$ are linearly independent, $\epsilon$ cannot be zero. Thus we find that

$\displaystyle\begin{aligned}0<\langle\epsilon,\epsilon\rangle&=\left\langle\sum\limits_{i=1}^n\epsilon_i,\sum\limits_{j=1}^n\epsilon_j\right\rangle\\&=\sum\limits_{i=1}^n\sum\limits_{j=1}^n\langle\epsilon_i,\epsilon_j\rangle\\&=n+\sum\limits_{i\neq j}\langle\epsilon_i,\epsilon_j\rangle\\&=n+2\sum\limits_{i<j}\langle\epsilon_i,\epsilon_j\rangle\end{aligned}$

Now, for any pair $\epsilon_i$ and $\epsilon_j$ joined by an edge, we must find that $4\langle\epsilon_i,\epsilon_j\rangle^2$ is $1$, $2$, or $3$, and thus that $2\langle\epsilon_i,\epsilon_j\rangle\leq-1$. Clearly, we can have no more than $n-1$ of these edges if we are to satisfy the above inequality.

3. The graph $\Gamma$ contains no cycles.

Indeed, if we had a cycle of length $k$ in the graph, we could (using step 1) discard all the vertices not involved in the cycle and still have the graph of an admissible set. But this graph would have $k$ vertices and at least $k$ edges, namely the ones involved in the cycle. This is impossible by step 2!

February 22, 2010

## The Classification of (Possible) Root Systems

At long last, we can state the classification of irreducible root systems up to isomorphism. We’ve shown that for each such root system we can construct a connected Dynkin diagram, which determines a Cartan matrix, which determines the root system itself, up to isomorphism. So what we have to find now is a list of Dynkin diagrams which can possibly arise from a root system. And so today we state the

CLASSIFICATION THEOREM

If $\Phi$ is an irreducible root system, then its Dynkin diagram is in one of four infinite families, or one of five “exceptional” diagrams. The four infinite families are

The $A$ series. These consist of a single long string of roots with single edges connecting them. The diagrams are indexed by the number of roots, starting at $n=1$:

• $A_1$:
• $A_2$:
• $A_3$:
• $A_4$:
• $A_5$:
• $A_6$:
• $A_n$:

The $B$ series. These look like the $A$ series, except the last two roots are connected by a double edge, with the last root shorter. The diagrams are indexed by the number of roots, starting at $n=2$:

• $B_2$:
• $B_3$:
• $B_4$:
• $B_5$:
• $B_6$:
• $B_n$:

The $C$ series. These look like the $B$ series, except the last root is longer. The diagrams are indexed by the number of roots, starting at $n=3$:

• $C_3$:
• $C_4$:
• $C_5$:
• $C_6$:
• $C_n$:

The $D$ series. These look like the $A$ series, except the end is split into two roots. The diagrams are indexed by the number of roots, starting at $n=4$:

• $D_4$:
• $D_5$:
• $D_6$:
• $D_7$:
• $D_n$:

The exceptional diagrams are

The $E$ series. These have a string of roots connected by single edges. On the third root from the end of the string, a single edge branches off to another root on the side. The diagrams are indexed by the number of roots, for $n=6$, $n=7$, and $n=8$:

• $E_6$:
• $E_7$:
• $E_8$:

The $F_4$ diagram. This consists of four roots in a row. The first two and last two are connected by a single edge, while the middle two are connected by a double edge:

• $F_4$:

The $G_2$ diagram. This consists of two roots, connected by a triple edge:

• $G_2$:

Our proof will take a number of steps, winnowing down the possible diagrams to this collection. We will spread it over the next week. But over the weekend, you can amuse yourself by working out the Cartan matrices for each of these Dynkin diagrams.

February 19, 2010

## Coxeter Graphs and Dynkin Diagrams

We’ve taken our root system and turned it into a Cartan matrix. Now we’re going to take our Cartan matrix and turn it into a pictorial form that we can really get our hands on.

Given a Cartan matrix, we want to construct a combinatorial graph. If the matrix is $n\times n$, then we start with $n$ points. The $n$ rows and columns of the matrix correspond to $n$ simple roots, and so each of these points corresponds to a simple root as well. Noting this, we will call the points “roots”.

Now given any two simple roots $\alpha$ and $\beta$ we can construct the Cartan integers $\alpha\rtimes\beta$ and $\beta\rtimes\alpha$. Our data about pairs of roots tells us that the product $(\alpha\rtimes\beta)(\beta\rtimes\alpha)$ is either ${0}$, ${1}$, ${2}$, or ${3}$. We’ll connect each pair of roots by this many lines, like so:

• $(\alpha\rtimes\beta)(\beta\rtimes\alpha)=0$:
• $(\alpha\rtimes\beta)(\beta\rtimes\alpha)=1$:
• $(\alpha\rtimes\beta)(\beta\rtimes\alpha)=2$:
• $(\alpha\rtimes\beta)(\beta\rtimes\alpha)=3$:

The resulting combinatorial graph is called the “Coxeter graph” of the root system. This is almost enough information to recover the Cartan matrix. Unfortunately, in the two- and three-edge cases, we know that the two roots have different lengths, and we have no way of telling which is which. To fix this, we will draw an arrow (that looks suspiciously like a “greater-than” sign) pointing from the larger of the two roots to the smaller, like so:

In each case we know one root must be longer than the other, and by a particular ratio. And so with this information we call the graph a “Dynkin diagram”, and it is enough to reconstruct the Cartan matrix. For example, consider the Dynkin diagram called $F_4$:

We can use this to reconstruct the Cartan matrix:

$\displaystyle\begin{pmatrix}2&-1&0&0\\-1&2&-2&0\\{0}&-1&2&-1\\{0}&0&-1&2\end{pmatrix}$
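As a sanity check, here's a sketch of the reconstruction in Python. The edge-list encoding of a diagram is my own hypothetical convention, not from the post: each edge records its endpoints, its multiplicity, and which endpoint is the shorter root (if any).

```python
def cartan_matrix(n, edges):
    """Build a Cartan matrix from a Dynkin diagram.

    edges: list of (i, j, mult, shorter), where mult is the number of lines
    and shorter is the index of the shorter root (None for a single edge).
    The entry a_ij = alpha_i >< alpha_j picks up the full multiplicity
    exactly when alpha_j is the shorter root.
    """
    a = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, j, mult, shorter in edges:
        a[i][j] = -mult if shorter == j else -1
        a[j][i] = -mult if shorter == i else -1
    return a

# The F4 diagram: single edge, double edge (arrow toward root 2), single edge.
f4 = cartan_matrix(4, [(0, 1, 1, None), (1, 2, 2, 2), (2, 3, 1, None)])
assert f4 == [[2, -1, 0, 0], [-1, 2, -2, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
```

With this convention the arrow on the multiple edge, pointing from the longer root to the shorter, is precisely what decides which of the two symmetric entries gets the $-2$ or $-3$.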

One nice thing about Dynkin diagrams is that we really don’t care about the order of the roots, like we had to do for the Cartan matrix. They also show graphically a lot of information about the root system. For instance, we may be able to break a Dynkin diagram up into more than one connected component. Any root $\alpha$ in one of these components has no edges connecting it to a root $\beta$ in another. This means that $\alpha\rtimes\beta=0$, and thus that $\alpha$ and $\beta$ are perpendicular. This breaks the base $\Delta$ up into a bunch of mutually-perpendicular subsets, which give rise to irreducible components of the root system $\Phi$.

The upshot of this last fact is that irreducibility of a root system corresponds to connectedness of its Dynkin diagram. Thus our project has come down to this question: which connected Dynkin diagrams arise from root systems? At last, we’re almost ready to answer this question.

February 18, 2010

## From Cartan Matrix to Root System

Yesterday, we showed that a Cartan matrix determines its root system up to isomorphism. That is, in principle if we have a collection of simple roots and the data about how each projects onto each other, that is enough to determine the root system itself. Today, we will show how to carry this procedure out.

But first, we should point out what we don’t know: which Cartan matrices actually come from root systems! We know some information, though. First off, the diagonal entries must all be ${2}$. Why? Well, it’s a simple calculation to see that for any vector $\alpha$

$\displaystyle\alpha\rtimes\alpha=2\frac{\langle\alpha,\alpha\rangle}{\langle\alpha,\alpha\rangle}=2$

The off-diagonal entries, on the other hand, must all be negative or zero. Indeed, our simple roots must be part of a base $\Delta$, and any two vectors $\alpha,\beta\in\Delta$ must satisfy $\langle\alpha,\beta\rangle\leq0$. Even better, we have a lot of information about pairs of roots. If one off-diagonal $\alpha_i\rtimes\alpha_j$ element is zero, so must the corresponding one $\alpha_j\rtimes\alpha_i$ on the other side of the diagonal be zero. And if they’re nonzero, we have a very tightly controlled number of options. One must be $-1$, and the other must be $-1$, $-2$, or $-3$.

But beyond that, we don’t know which Cartan matrices actually arise, and that’s the whole point of our project. For now, though, we will assume that our matrix does in fact arise from a real root system, and see how to use it to construct a root system whose Cartan matrix is the given one. And our method will hinge on considering root strings.

What we really need is to build up all the positive roots $\Phi^+$, and then the negative roots $\Phi^-$ will just be a reflected copy of $\Phi^+$. We also know that since there are only finitely many roots, there can be only finitely many heights, and so there is some largest height. And we know that we can get to any positive root of any height by adding more and more simple roots. So we will proceed by building up all the roots of height ${1}$, then height ${2}$, and so on until we cannot find any higher roots, at which point we will be done.

So let’s start with roots of height ${1}$. These are exactly the simple roots, and we are just given those to begin with. We know all of them, and we know that there is nothing at all below them (among positive roots, at least).

Next we come to the roots of height ${2}$. Every one of these will be a root of height ${1}$ plus another simple root. The problem is that we can’t add just any simple root to a root of height ${1}$ to get another root of height ${2}$. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue. We start with a root $\beta$ with $\mathrm{ht}(\beta)=1$, and a simple root $\alpha$. If the $\alpha$ string through $\beta$ runs from $\beta-r\alpha$ to $\beta+q\alpha$, we know that $r-q=\beta\rtimes\alpha$. But we also know that we can’t step backwards, because $\beta-\alpha$ would be (in this case) a linear combination of simple roots with both positive and negative coefficients! So $r=0$ and $q=-\beta\rtimes\alpha$. If $\beta\rtimes\alpha=0$ then we can’t step forwards either, because we’ve already got the whole root string. But if $\beta\rtimes\alpha<0$ then we have room to take a step in the $\alpha$ direction from $\beta$, giving a root $\beta+\alpha$ with height $\mathrm{ht}(\beta+\alpha)=2$. As we repeat this over all roots $\beta$ of height ${1}$ and all simple roots $\alpha$, we must eventually cover all of the roots of height ${2}$.

Next are the roots of height ${3}$. Every one of these will be a root of height ${2}$ plus another simple root. The problem is that we can’t add just any simple root to a root of height ${2}$ to get another root of height ${3}$. If we step in the wrong direction we’ll fall right off the root system! We need to know which directions are safe, and that’s where root strings come to the rescue… again. We start with a root $\beta$ with $\mathrm{ht}(\beta)=2$, and a simple root $\alpha$. If the $\alpha$ string through $\beta$ runs from $\beta-r\alpha$ to $\beta+q\alpha$, we again know that $r-q=\beta\rtimes\alpha$. But now we may be able to take a step backwards! That is, it may turn out that $\beta-\alpha$ is a root, and that complicates matters. But this is okay, because if $\beta-\alpha$ is a root, then it must be of height ${1}$, and we already know all of those! So, look up $\beta-\alpha$ in our list of roots of height ${1}$ and see if it shows up. If it doesn’t, then the $\alpha$ string through $\beta$ starts at $\beta$, just like before. If it does show up, then the root string must start at $\beta-\alpha$. Indeed, if we took another step backwards, we’d have a root of height ${0}$, which doesn’t exist. Thus we know where the root string starts. We can also tell how far it extends, because we can calculate $\beta\rtimes\alpha$ by adding up the Cartan integers $\gamma\rtimes\alpha$ for each of the simple roots $\gamma$ we’ve used to build $\beta$, and then $q=r-\beta\rtimes\alpha$. And so we can tell whether or not it’s safe to take another step in the direction of $\alpha$ from $\beta$, and in this way we can build up each and every root of height ${3}$.

And so on at each level we start with the roots of height $n$ and look from each one $\beta$ in the direction of each simple root $\alpha$. In each case, we can carefully step backwards to determine where the $\alpha$ string through $\beta$ begins, and we can calculate $\beta\rtimes\alpha$ to determine where it ends, and so we can tell whether or not it’s safe to take another step in the direction of $\alpha$ from $\beta$, and we can build up each and every root of height $n+1$. Of course, it may just happen (and eventually it must happen) that we find no roots of height $n+1$. At this point, there can be no roots of any larger height either, and we’re done. We’ve built up all the positive roots, and the negative roots are just their negatives.
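The procedure described above is concrete enough to implement directly. Here's a sketch in Python (the names and data layout are my own): roots are stored as tuples of coefficients over the simple roots, and $\beta\rtimes\alpha_j$ is computed by summing Cartan integers, since $\rtimes$ is linear in its first argument.

```python
def positive_roots(cartan):
    """Build all positive roots from a Cartan matrix, height by height.

    cartan[i][j] is the Cartan integer alpha_i >< alpha_j. Roots are tuples
    of nonnegative coefficients over the simple roots.
    """
    n = len(cartan)
    simple = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
    roots = set(simple)
    frontier = set(simple)
    while frontier:
        new = set()
        for beta in frontier:
            for j in range(n):
                # beta >< alpha_j, summed from the Cartan integers
                pairing = sum(c * cartan[i][j] for i, c in enumerate(beta))
                # Step backwards to find where the alpha_j string starts
                r = 0
                back = tuple(c - (1 if i == j else 0) for i, c in enumerate(beta))
                while back in roots:
                    r += 1
                    back = tuple(c - (1 if i == j else 0) for i, c in enumerate(back))
                # The string satisfies r - q = pairing; step forward if q > 0
                if r - pairing > 0:
                    new.add(tuple(c + (1 if i == j else 0) for i, c in enumerate(beta)))
        frontier = new - roots
        roots |= new
    return roots

# Spot checks against the rank-two systems: A2, B2, and G2
assert len(positive_roots([[2, -1], [-1, 2]])) == 3
assert len(positive_roots([[2, -1], [-2, 2]])) == 4
assert len(positive_roots([[2, -1], [-3, 2]])) == 6
```

The spot checks count the positive roots of the rank-two systems: three for $A_2$, four for $B_2$, and six for $G_2$.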

February 17, 2010

## Cartan Matrices

As we move towards our goal of classifying root systems, we find new ways of encoding the information contained in a root system $\Phi$. First comes the Cartan matrix.

Pick a base $\Delta\subseteq\Phi$ of the root system. Since $V$ is finite-dimensional and $\Delta$ is a basis of $V$, $\Delta$ must be finite, and so there’s no difficulty in picking some fixed order on the simple roots. That is, we write $\Delta=\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ where $n=\dim(V)$. Now we can define the “Cartan matrix” as the $n\times n$ matrix whose entry in the $i$th row and $j$th column is $\alpha_i\rtimes\alpha_j$. These entries are called “Cartan integers”.

The matrix we get depends on the particular ordering of the base we chose, of course, so the Cartan matrix isn’t quite uniquely determined by the root system. This is relatively unimportant, actually. More to the point is the other direction: the Cartan matrix determines the root system up to isomorphism!

That is, let’s say $\Delta'\subseteq\Phi'\subseteq V'$ is another root system $\Phi'$ in another vector space $V'$ with another identified base $\Delta'=\{\alpha'_1,\dots,\alpha'_n\}$. Further, assume that for all $1\leq i,j\leq n$ we have $\alpha'_i\rtimes\alpha'_j=\alpha_i\rtimes\alpha_j$, so the Cartan matrix determined by $\Delta'$ is equal to the Cartan matrix determined by $\Delta$. I say that the bijection $\alpha_i\mapsto\alpha'_i$ extends to an isomorphism $\phi:V\rightarrow V'$ that sends $\Phi$ onto $\Phi'$ and satisfies $\phi(\alpha)\rtimes\phi(\beta)=\alpha\rtimes\beta$ for all roots $\alpha,\beta\in\Phi$.

The existence of a unique extension $\phi$ is trivial. Indeed, since $\Delta$ is a basis for $V$, all we have to do is specify the images $\phi(\alpha_i)$, and there is a unique linear transformation $\phi:V\rightarrow V'$ extending the mapping on basis vectors. And it’s an isomorphism, since the image of our basis of $V$ is itself a basis of $V'$, so we can turn around and reverse everything.

Now our hypothesis that the bases give rise to the same Cartan matrix allows us to calculate for simple roots $\alpha,\beta\in\Delta$:

$\displaystyle\begin{aligned}\sigma_{\phi(\alpha)}(\phi(\beta))&=\phi(\beta)-(\phi(\beta)\rtimes\phi(\alpha))\phi(\alpha)\\&=\phi(\beta)-(\beta\rtimes\alpha)\phi(\alpha)\\&=\phi(\beta-(\beta\rtimes\alpha)\alpha)\\&=\phi(\sigma_\alpha(\beta))\end{aligned}$

That is, $\phi$ intertwines the actions of each of the simple reflections $\sigma_\alpha$. But we know that the simple reflections with respect to any given base generate the Weyl group!

And so $\phi$ must intertwine the actions of the Weyl groups $\mathcal{W}$ and $\mathcal{W}'$. That is, the mapping $\sigma\mapsto\phi\circ\sigma\circ\phi^{-1}$ is an isomorphism $\mathcal{W}\rightarrow\mathcal{W}'$ which sends $\sigma_\alpha$ to $\sigma_{\phi(\alpha)}$ for all $\alpha\in\Delta$.

We can go further. Each root $\beta$ is in the $\mathcal{W}$-orbit of some simple root $\alpha$. Say $\beta=\sigma(\alpha)$ for $\alpha\in\Delta$. Then we find

$\displaystyle\phi(\beta)=\phi(\sigma(\alpha))=(\phi\circ\sigma\circ\phi^{-1})(\phi(\alpha))\in\Phi'$

And so $\phi$ must send $\Phi$ to $\Phi'$. A straightforward calculation (unwinding the one before) shows that $\phi$ must then preserve the Cartan integers for any roots $\alpha$ and $\beta$.

February 16, 2010

## Some Root Systems and Weyl Orbits

Today, I’d like to show you some examples of two-dimensional root systems, which illustrate a lot of what we talked about last week. I’ve worked them up in a Java application called Geogebra, but I don’t seem to be able to embed the resulting applets into a WordPress post. If someone knows how, I’d be glad to hear it.

Anyhow, in lieu of embedded applets, I’ll post the “.ggb” files, which you can take over to the Geogebra site and load up there. So, with no further ado, I present all four two-dimensional root systems:

Each one of these files illustrates the root system and its Weyl orbit. Each one has two simple roots, labelled $\alpha$ and $\beta$ in blue. The rest of the roots are shown in black, and the fundamental domain is marked out by two rays perpendicular to $\alpha$ and $\beta$, respectively.

An arbitrary blue vector $\mu$ is shown, along with its reflected images making up the entire Weyl orbit. You can grab this vector and drag it around, watching how the orbit changes. No matter where you place $\mu$, notice that there is exactly one image in the fundamental domain, as we showed.

The first root system is reducible, but the other three are irreducible. For each of the irreducible ones, we can see that there is a unique maximal root. However, $A_1\amalg A_1$ doesn’t have one; both $\alpha$ and $\beta$ are maximal.

We also see that the Weyl orbit of a root spans the plane in the irreducible cases. But, again, in $A_1\amalg A_1$ the Weyl orbits of $\alpha$ and $\beta$ only span their lines.

Finally, in each of the irreducible cases we see that there are at most two distinct root lengths. And, in each case, the unique maximal root is the long root within the fundamental domain.

February 15, 2010