The Unapologetic Mathematician

Mathematics for the interested outsider

Construction of E-Series Root Systems

Today we construct the last of our root systems, following our setup. These correspond to the Dynkin diagrams E_6, E_7, and E_8. But there are transformations of Dynkin diagrams that send E_6 into E_7, and E_7 on into E_8. Thus all we really have to construct is E_8, and then cut off the right simple roots in order to give E_7, and then E_6.

We start similarly to our construction of the F_4 root system; take the eight-dimensional space with the integer-coefficient lattice I, and then build up the set of half-integer coefficient vectors

\displaystyle I'=I+\frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8)

Starting from the lattice I\cup I', we can write a generic lattice vector as

\displaystyle v=\frac{c^0}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8)+\sum\limits_{i=1}^8c^i\epsilon_i
and we let J\subseteq I\cup I' be the collection of lattice vectors so that the sum of the coefficients c^i is even. This is well-defined even though the coefficients aren’t unique, because the only redundancy is that we can take {2} from c^0 and add {1} to each of the other eight coefficients, which preserves the total parity of all the coefficients.

Now let \Phi consist of those vectors \alpha\in J with \langle\alpha,\alpha\rangle=2. The explicit description is similar to that from the F_4 root system. From I, we get the vectors \pm(\epsilon_i\pm\epsilon_j), but not the vectors \pm\epsilon_i because these don’t make it into J. From I' we get some vectors of the form

\displaystyle\frac{1}{2}(\pm\epsilon_1\pm\epsilon_2\pm\epsilon_3\pm\epsilon_4\pm\epsilon_5\pm\epsilon_6\pm\epsilon_7\pm\epsilon_8)
Starting with the choice of all minus signs, this vector is not in J because c^0=-1 and all the other coefficients are {0}. To flip a sign, we add \epsilon_i, which flips the total parity of the coefficients. Thus the vectors of this form that make it into \Phi are exactly those with an odd number of minus signs.

We need to verify that \frac{2\langle\beta,\alpha\rangle}{\langle\alpha,\alpha\rangle}=\langle\beta,\alpha\rangle\in\mathbb{Z} for all \alpha and \beta in \Phi (technically we should have done this yesterday for F_4, but here it is). If both \alpha and \beta come from I, this is clear since all their coefficients are integers. If \alpha=\pm(\epsilon_i\pm\epsilon_j)\in I and \beta\in I', then the inner product is the sum of the ith and jth coefficients of \beta, with possibly flipped signs. No matter how we choose \alpha\in I and \beta\in I', the resulting inner product is either -1, {0}, or {1}. Finally, if both \alpha and \beta are chosen from I', then each one is c=-\frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8) plus an odd number of the \epsilon_i, which we write as a and b, respectively. Thus the inner product is

\displaystyle\langle\alpha,\beta\rangle=\langle c+a,c+b\rangle=\langle c,c\rangle+\langle c,b\rangle+\langle a,c\rangle+\langle a,b\rangle

The first term here is 2, and the last term is also an integer because the coefficients of a and b are all integers. The middle two terms are each a sum of an odd number of \pm\frac{1}{2}, and so each of them is a half-integer. The whole inner product then is an integer, as we need.
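Since everything here is finite, we can actually check both the count of roots (112 integer roots and 128 half-integer roots, 240 in all) and the integrality by brute force. This is my own computational aside, not part of the construction; I represent each vector v by the integer vector 2v so that all the arithmetic stays exact:

```python
from itertools import product

# Represent each vector v by w = 2v, so w has integer entries: all even
# when v has integer coefficients, all odd when v is half-integer.

def in_J(w):
    # Membership in J: the c-coefficients sum to an even number.  In raw
    # coordinates this works out to an even coordinate sum for integer
    # vectors and an odd coordinate sum for half-integer vectors.
    if all(x % 2 == 0 for x in w):
        return sum(x // 2 for x in w) % 2 == 0
    if all(x % 2 == 1 for x in w):
        return (sum(w) // 2) % 2 == 1
    return False

# <v,v> = 2 becomes <w,w> = 8, so each entry of w lies between -2 and 2.
roots = [w for w in product(range(-2, 3), repeat=8)
         if sum(x * x for x in w) == 8 and in_J(w)]

integer_roots = [w for w in roots if w[0] % 2 == 0]
half_roots = [w for w in roots if w[0] % 2 == 1]
print(len(integer_roots), len(half_roots), len(roots))  # 112 128 240

# Every Cartan integer 2<b,a>/<a,a> = <b,a> = dot(wb, wa)/4 is an integer.
assert all(sum(x * y for x, y in zip(wa, wb)) % 4 == 0
           for wa in roots for wb in roots)
```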

What explicit base \Delta should we pick? We start out as we did for F_4 with \epsilon_2-\epsilon_3, \epsilon_3-\epsilon_4, and so on up to \epsilon_7-\epsilon_8. These provide six of our eight vertices, and the last two of them are perfect for cutting off later to make the E_7 and E_6 root systems. We also throw in \epsilon_2+\epsilon_3, like we did for the D_n series. This provides us with the triple vertex in the E_8 Dynkin diagram.

We need one more vertex off to the left. It should be orthogonal to every one of the simple roots we’ve chosen so far except for \epsilon_2+\epsilon_3, with which it should have the inner product -1. It should also be a half-integer root, so that we can get access to the rest of them. For this purpose, we choose the root \frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4-\epsilon_5-\epsilon_6-\epsilon_7-\epsilon_8). Establishing that the reflection with respect to this vector preserves the lattice J — and thus the root system \Phi — proceeds as in the F_4 case.

The Weyl group of E_8 is again the group of symmetries of a polytope. In this case, it turns out that the vectors in \Phi are exactly the vertices of a semiregular eight-dimensional polytope inscribed in the sphere of radius \sqrt{2}, and the Weyl group of E_8 is exactly the group of symmetries of this polytope! Notice that this is actually something interesting; in the A_2 case the roots formed the vertices of a hexagon, but the Weyl group wasn’t the whole group of symmetries of the hexagon. This is related to the fact that the A_2 diagram possesses a symmetry that flips it end-over-end, and we will explore this behavior further.

The Weyl groups of E_7 and E_6 are also the symmetries of seven- and six-dimensional polytopes, respectively, but these aren’t quite so nicely apparent from their root systems.

As the most intricate (in a sense) of these root systems, E_8 has inspired quite a lot of study and effort to visualize its structure. I’ll leave you with an animation I found on Garrett Lisi’s notewiki, Deferential Geometry (with the help of Sarah Kavassalis).


March 10, 2010 Posted by | Geometry, Root Systems | 1 Comment

Construction of the F4 Root System

Today we construct the F_4 root system starting from our setup.

As we might guess, this root system lives in four-dimensional space, and so we start with this space and its integer-component lattice I. However, we now take another copy of I and push it off by the vector \frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4). This set I' consists of all vectors each of whose components is half an odd integer (a “half-integer” for short). Together with I, we get a new lattice J=I\cup I' consisting of vectors whose components are either all integers or all half-integers. Within this lattice J, we let \Phi consist of those vectors of squared-length 2 or 1: \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=1; we want to describe these vectors explicitly.

When we constructed the B_n and C_n series, we saw that the vectors of squared-length 1 and 2 in I are those of the form \pm\epsilon_i (squared-length 1) and of the form \pm(\epsilon_i\pm\epsilon_j) for i\neq j (squared-length 2). But what about the vectors in I'? We definitely have \left(\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2}\right) — with squared-length 1 — but can we have any others? The next longest vector in I' will have one component \pm\frac{3}{2} and the rest \pm\frac{1}{2}, but this has squared-length 3 and won’t fit into \Phi! We thus have twenty-four long roots of squared-length 2 and twenty-four short roots of squared-length 1.
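These counts are easy to confirm by brute force. Here is a quick sketch of my own (not part of the construction), representing each vector v by the integer vector 2v so the arithmetic stays exact:

```python
from itertools import product

# w = 2v has all even entries (v in I) or all odd entries (v in I').
def in_lattice(w):
    return all(x % 2 == 0 for x in w) or all(x % 2 == 1 for x in w)

# Squared lengths 2 and 1 for v become 8 and 4 for w = 2v.
vectors = [w for w in product(range(-2, 3), repeat=4) if in_lattice(w)]
long_roots = [w for w in vectors if sum(x * x for x in w) == 8]
short_roots = [w for w in vectors if sum(x * x for x in w) == 4]
print(len(long_roots), len(short_roots))  # 24 24
```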

Now, of course we need an explicit base \Delta, and we can guess from the diagram F_4 that two must be long and two must be short. In fact, in a similar way to the B_3 root system, we start by picking \epsilon_2-\epsilon_3 and \epsilon_3-\epsilon_4 as two long roots, along with \epsilon_4 as one short root. Indeed, we can see a transformation of Dynkin diagrams sending B_3 into F_4, and sending the specified base of B_3 to these three vectors.

But we need another short root which will both give a component in the direction of \epsilon_1 and will give us access to I'. Further, it should be orthogonal to both \epsilon_2-\epsilon_3 and \epsilon_3-\epsilon_4, and should have a Cartan integer of -1 with \epsilon_4 in either order. For this purpose, we pick \frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4), which then gives us the last vertex of the F_4 Dynkin diagram.

Does the reflection with respect to this last vector preserve the root system, though? What is its effect on vectors in J? We calculate

\displaystyle\begin{aligned}\sigma_{\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)}(v)&=v-\frac{2\left\langle v,\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle}{\left\langle\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4),\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle}\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\\&=v-\left\langle v,\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\\&=v-\frac{v^1-v^2-v^3-v^4}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\end{aligned}

Now the sum v^1-v^2-v^3-v^4 is always an integer, whether the components of v are integers or half-integers. If the sum is even, then we are changing each component of v by an integer, which sends I and I' back to themselves. If the sum is odd, then we are changing each component of v by a half-integer, which swaps I and I'. In either case, the lattice J is sent back to itself, and so this reflection fixes \Phi.
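We can also let a computer verify this closure directly. The sketch below is my own check, with each vector v stored as the integer vector 2v; it rebuilds \Phi and reflects every root:

```python
from itertools import product

def in_lattice(w):
    return all(x % 2 == 0 for x in w) or all(x % 2 == 1 for x in w)

# Rebuild Phi in doubled coordinates: squared lengths 8 (long) and 4 (short).
roots = {w for w in product(range(-2, 3), repeat=4)
         if in_lattice(w) and sum(x * x for x in w) in (4, 8)}

# r = (1/2)(e1 - e2 - e3 - e4) doubles to wr = (1, -1, -1, -1); since
# <r,r> = 1, the reflection is v - 2<v,r>r, i.e. w - (dot(w,wr)/2) wr.
wr = (1, -1, -1, -1)

def reflect(w):
    d = sum(x * y for x, y in zip(w, wr))  # d = 4<v,r>, always even here
    return tuple(x - (d // 2) * y for x, y in zip(w, wr))

assert {reflect(w) for w in roots} == roots  # sigma sends Phi to itself
print(len(roots))  # 48
```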

Like we saw for G_2, it’s difficult to understand the Weyl group of F_4 in terms of its action on the components of v. However, also like G_2, we can understand it geometrically. But instead of a hexagon, now the long and short roots each make up a four-dimensional polytope called the “24-cell”. It’s a shape with 24 vertices, 96 edges, 96 equilateral triangular faces, and 24 three-dimensional “cells”, each of which is a regular octahedron; the Weyl group of F_4 is its group of symmetries, just like the Weyl group of G_2 was the group of symmetries of the hexagon.

Also like the G_2 case, the F_4 root system is isomorphic to its own dual. The long roots stay the same length when dualized, while the short roots double in length and become the long roots of the dual root system. Again, a scaling and rotation sends the dual system back to the one we constructed.

March 9, 2010 Posted by | Geometry, Root Systems | 3 Comments

Construction of the G2 Root System

We’ve actually already seen the G_2 root system, back when we saw a bunch of two-dimensional root systems. But let’s examine how we can construct it in line with our setup.

The G_2 root system is, as we can see by looking at it, closely related to the A_2 root system. And so we start again with the 2-dimensional subspace of \mathbb{R}^3 consisting of vectors with coefficients summing to zero, and we use the same lattice J. But now we let \Phi be the vectors \alpha\in J of squared-length 2 or 6: \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=6. Explicitly, we have the six vectors from A_2, namely \pm(\epsilon_1-\epsilon_2), \pm(\epsilon_1-\epsilon_3), and \pm(\epsilon_2-\epsilon_3), along with six new vectors: \pm(2\epsilon_2-\epsilon_1-\epsilon_3), \pm(2\epsilon_1-\epsilon_2-\epsilon_3), and \pm(\epsilon_1+\epsilon_2-2\epsilon_3).

We can pick a base \Delta=\{\epsilon_1-\epsilon_2,2\epsilon_2-\epsilon_1-\epsilon_3\}. These vectors are clearly independent. We can easily write each of the above vectors with a positive sign as a positive sum of the two vectors in \Delta. For example, in accordance with an earlier lemma, we can write the maximal root as

\displaystyle\epsilon_1+\epsilon_2-2\epsilon_3=(2\epsilon_2-\epsilon_1-\epsilon_3)+(\epsilon_1-\epsilon_2)+(\epsilon_1-\epsilon_2)+(\epsilon_1-\epsilon_2)+(2\epsilon_2-\epsilon_1-\epsilon_3)

where after adding each term we have one of the positive roots. In fact, this path hits all but one of the six positive roots on its way to the unique maximal root.

It’s straightforward to calculate the Cartan integers for \Delta:

\displaystyle\frac{2\langle\epsilon_1-\epsilon_2,2\epsilon_2-\epsilon_1-\epsilon_3\rangle}{\langle2\epsilon_2-\epsilon_1-\epsilon_3,2\epsilon_2-\epsilon_1-\epsilon_3\rangle}=\frac{2(-3)}{6}=-1

\displaystyle\frac{2\langle2\epsilon_2-\epsilon_1-\epsilon_3,\epsilon_1-\epsilon_2\rangle}{\langle\epsilon_1-\epsilon_2,\epsilon_1-\epsilon_2\rangle}=\frac{2(-3)}{2}=-3

which shows that we do indeed get the Dynkin diagram G_2.
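If you want to double-check this arithmetic, a few lines of code will do it; the coordinates below are just the two simple roots written out in \mathbb{R}^3:

```python
# Simple roots of G_2 in the coordinates of this post.
alpha1 = (1, -1, 0)    # epsilon_1 - epsilon_2, squared length 2
alpha2 = (-1, 2, -1)   # 2 epsilon_2 - epsilon_1 - epsilon_3, squared length 6

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

# Cartan integers 2<u,v>/<v,v>; the division is exact here.
cartan = [[2 * ip(u, v) // ip(v, v) for v in (alpha1, alpha2)]
          for u in (alpha1, alpha2)]
print(cartan)  # [[2, -1], [-3, 2]]
```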

And, of course, we must consider the reflections with respect to both vectors in \Delta. Unfortunately, computations like those we’ve used before get complicated. However, we can just go back to the picture that we drew before (and that I linked to at the top of this post). It’s a nice, clean, two-dimensional picture, and it’s clear that these reflections send \Phi back to itself, which establishes that \Phi is really a root system.

We can also figure out the Weyl group geometrically from this picture. Draw line segments connecting the tips of either the long or the short roots, and we find a regular hexagon. Then the reflections with respect to the roots generate the symmetry group of this shape. The twelve roots lie along the six axes of symmetry of the polygon, and we can get rotations by first reflecting across one root and then across another. For example, rotating by a sixth of a turn can be effected by reflecting with the basic short root, followed by reflecting with the basic long root.

Finally, we can see that this root system is isomorphic to its own dual. Indeed, if \alpha is a short root, then the dual root is \alpha itself:

\displaystyle\alpha^\vee=\frac{2\alpha}{\langle\alpha,\alpha\rangle}=\frac{2\alpha}{2}=\alpha
On the other hand, if \alpha is a long root, then we find

\displaystyle\alpha^\vee=\frac{2\alpha}{\langle\alpha,\alpha\rangle}=\frac{2\alpha}{6}=\frac{\alpha}{3}
and so the squared-length of \alpha^\vee is \frac{2}{3}. These are now the short roots of the dual system. Scaling the dual system up by a factor of \sqrt{3} and rotating \frac{1}{12} of a turn, we recover the original G_2 root system.

March 8, 2010 Posted by | Geometry, Root Systems | 1 Comment

Transformations of Dynkin Diagrams

Before we continue constructing root systems, we want to stop and observe a couple things about transformations of Dynkin diagrams.

First off, I want to be clear about what kinds of transformations I mean. Given Dynkin diagrams X and Y, I want to consider a mapping \phi that sends every vertex of X to a vertex of Y. Further, if \xi_1 and \xi_2 are vertices of X joined by n edges, then \phi(\xi_1) and \phi(\xi_2) should be joined by n edges in Y as well, and the orientation of double and triple edges should be the same.

But remember that \xi_1 and \xi_2, as vertices, really stand for vectors in some base of a root system, and the number of edges connecting them encodes their Cartan integers. If we slightly abuse notation and write X and Y for these bases, then the mapping \phi defines images of the vectors in X, which is a basis of a vector space. Thus \phi extends uniquely to a linear transformation from the vector space spanned by X to that spanned by Y. And our assumption about the number of edges joining two vertices means that \phi preserves the Cartan integers of the base X.

Now, just like we saw when we showed that the Cartan matrix determines the root system up to isomorphism, we can extend \phi to a map from the root system generated by X to the root system generated by Y. That is, a transformation of Dynkin diagrams gives rise to a morphism of root systems.

Unfortunately, the converse doesn’t necessarily hold. Look back at our two-dimensional examples; specifically, consider the A_2 and G_2 root systems. Even though we haven’t really constructed the latter yet, we can still use what we see. There are linear maps taking the six roots in A_2 to either the six long roots or the six short roots in G_2. These maps are all morphisms of root systems, but none of them can be given by transformations of Dynkin diagrams. Indeed, the image of any base for A_2 would contain either two long roots in G_2 or two short roots, but any base of G_2 would need to contain both a long and a short root.

However, not all is lost. If we have an isomorphism of root systems, then it must send a base to a base, and thus it can be seen as a transformation of the Dynkin diagrams. Indeed, an isomorphism of root systems gives rise to an isomorphism of Dynkin diagrams.

The other observation we want to make is that duality of root systems is easily expressed in terms of Dynkin diagrams: just reverse all the oriented edges! Indeed, we’ve already seen this in the case of B_n and C_n root systems. When we get to constructing G_2 and F_4, we will see that they are self-dual, in keeping with the fact that reversing the directed edge in each case doesn’t really change the diagram.

March 5, 2010 Posted by | Geometry, Root Systems | 3 Comments

Construction of B- and C-Series Root Systems

Starting from our setup, we construct root systems corresponding to the B_n (for n\geq2) and C_n (for n\geq3) Dynkin diagrams. First will be the B_n series.

As we did for the D_n series, we start out with an n dimensional space with the lattice I of integer-coefficient vectors. This time, though, we let \Phi be the collection of vectors \alpha\in I of squared-length {2} or {1}: either \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=1. Explicitly, this is the collection of vectors \pm(\epsilon_i\pm\epsilon_j) for i\neq j (signs chosen independently) from the D_n root system, plus all the vectors \pm\epsilon_i.

Similarly to the A_n series, and exactly as in the D_n series, we define \alpha_i=\epsilon_i-\epsilon_{i+1} for 1\leq i\leq n-1. This time, though, to get vectors whose coefficients don’t sum to zero we can just define \alpha_n=\epsilon_n, which is independent of the other vectors. Since it has n vectors, the independent set \Delta=\{\alpha_i\} is a basis for our vector space.

As in the A_n and D_n cases, any vector \epsilon_i-\epsilon_j with i<j can be written

\displaystyle\epsilon_i-\epsilon_j=\alpha_i+\alpha_{i+1}+\dots+\alpha_{j-1}
This time, any of the \epsilon_i can be written

\displaystyle\epsilon_i=\alpha_i+\alpha_{i+1}+\dots+\alpha_{n-1}+\alpha_n
Thus any vector \epsilon_i+\epsilon_j can be written as the sum of two of these vectors. And so \Delta is a base for \Phi.

We calculate the Cartan integers. For i and j less than n, we again have the same calculation as in the A_n case, which gives a simple chain of n-1 vertices. But when we involve \alpha_n things are a little different.

\displaystyle\frac{2\langle\alpha_i,\alpha_n\rangle}{\langle\alpha_n,\alpha_n\rangle}=2\langle\epsilon_i-\epsilon_{i+1},\epsilon_n\rangle

\displaystyle\frac{2\langle\alpha_n,\alpha_i\rangle}{\langle\alpha_i,\alpha_i\rangle}=\langle\epsilon_n,\epsilon_i-\epsilon_{i+1}\rangle
If 1\leq i<n-1, then both of these are zero. On the other hand, if i=n-1, then the first is -2 and the second is -1. Thus we get a double edge from \alpha_{n-1} to \alpha_n, and \alpha_{n-1} is the longer root. And so we obtain the B_n Dynkin diagram.
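Here is a small computational check of these values; n = 4 is an arbitrary example size, and the code just builds the base and computes every Cartan integer:

```python
n = 4  # example size; any n >= 2 gives the same pattern

def basis(i):
    e = [0] * n
    e[i] = 1
    return e

# alpha_i = e_i - e_{i+1} for i < n, and alpha_n = e_n.
simple = [[a - b for a, b in zip(basis(i), basis(i + 1))]
          for i in range(n - 1)] + [basis(n - 1)]

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

# Cartan integers 2<u,v>/<v,v>; the division is exact for these vectors.
cartan = [[2 * ip(u, v) // ip(v, v) for v in simple] for u in simple]
print(cartan[n - 2][n - 1], cartan[n - 1][n - 2])  # -2 -1
```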

Considering the reflections with respect to the \alpha_i, we find that \sigma_{\alpha_i} swaps the coefficients of \epsilon_i and \epsilon_{i+1} for 1\leq i\leq n-1. But what about \alpha_n? We calculate

\displaystyle\begin{aligned}\sigma_{\alpha_n}(v)&=v-\frac{2\langle v,\alpha_n\rangle}{\langle\alpha_n,\alpha_n\rangle}\alpha_n\\&=v-2\langle v,\alpha_n\rangle\alpha_n\\&=v-2v^n\epsilon_n\end{aligned}

which flips the sign of the last coefficient of v. As we did in the D_n case, we can use this to flip the signs of whichever coefficients we want. Since these transformations send the lattice I back into itself, they send \Phi to itself and we do have a root system.

Finally, since we don’t have any restrictions on how many signs we can flip, the Weyl group for B_n is exactly the wreath product S_n\wr\mathbb{Z}_2.

So, what about C_n? This is just the dual root system to B_n! The roots of squared-length {2} are left unchanged, but the roots of squared-length {1} are doubled. The Weyl group is the same — S_n\wr\mathbb{Z}_2 — but now what was the short root in the base \Delta becomes a long root, and so we flip the direction of the double arrow in the Dynkin diagram, giving the C_n diagram.
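Here is a tiny sketch of this dualization in the n = 2 case, where the whole system is short enough to list; it applies the coroot formula \alpha^\vee=\frac{2\alpha}{\langle\alpha,\alpha\rangle} to each root:

```python
from fractions import Fraction
from itertools import product

# The B_2 roots: +-e_i (short) and +-e_1 +- e_2 (long).
b2 = [w for w in product((-1, 0, 1), repeat=2)
      if sum(x * x for x in w) in (1, 2)]

def dual(w):
    n2 = sum(x * x for x in w)  # squared length
    return tuple(Fraction(2 * x, n2) for x in w)

c2 = {dual(w) for w in b2}
# The short roots double into the long roots of the dual system.
lengths = sorted({int(sum(x * x for x in w)) for w in c2})
print(lengths)  # [2, 4]
```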

March 4, 2010 Posted by | Geometry, Root Systems | 2 Comments

Construction of D-Series Root Systems

Starting from our setup, we construct root systems corresponding to the D_n Dynkin diagrams (for n\geq4).

The construction is similar to that of the A_n series, but instead of starting with a hyperplane in n+1-dimensional space, we just start with n-dimensional space itself with the lattice I of integer-coefficient vectors. We again take \Phi to be the collection of vectors \alpha\in I of squared-length 2: \langle\alpha,\alpha\rangle=2. Explicitly, this is the collection of vectors \pm(\epsilon_i\pm\epsilon_j) for i\neq j, where we can choose the two signs independently.

Similarly to the A_n case, we define \alpha_i=\epsilon_i-\epsilon_{i+1} for 1\leq i\leq n-1, but these can only give vectors whose coefficients sum to {0}. To get other vectors, we throw in \alpha_n=\epsilon_{n-1}+\epsilon_n, which is independent of the others. The linearly independent collection \Delta=\{\alpha_i\} has n vectors, and so must be a basis of the n-dimensional space.

As before, any vector in \Phi of the form \epsilon_i-\epsilon_j for i<j can be written as

\displaystyle\epsilon_i-\epsilon_j=\alpha_i+\alpha_{i+1}+\dots+\alpha_{j-1}
while vectors of the form \epsilon_i+\epsilon_j are a little more complicated. We can start with

\displaystyle\epsilon_j+\epsilon_n=\alpha_j+\alpha_{j+1}+\dots+\alpha_{n-2}+\alpha_n
and from this we can always build 2\epsilon_j=(\epsilon_j-\epsilon_n)+(\epsilon_j+\epsilon_n) for 1\leq j\leq n-1. Then if i<j\leq n-1 we can write \epsilon_i+\epsilon_j=(\epsilon_i-\epsilon_j)+2\epsilon_j. This proves that \Delta is a base for \Phi.

Again, we calculate the Cartan integers. The calculation for i and j both less than n is exactly as before, showing that these vectors form a simple chain of length n-1 in the Dynkin diagram. However, when we involve \alpha_n we find

\displaystyle\frac{2\langle\alpha_i,\alpha_n\rangle}{\langle\alpha_n,\alpha_n\rangle}=\langle\epsilon_i-\epsilon_{i+1},\epsilon_{n-1}+\epsilon_n\rangle
For i<n-2, this is automatically {0}; for i=n-2, we get the value -1; and for i=n-1 we again get {0}. This shows that the Dynkin diagram of \Delta is D_n.

Finally, we consider the reflections with respect to the \alpha_i. As in the A_n case, we find that \sigma_{\alpha_i} swaps the coefficients of \epsilon_i and \epsilon_{i+1} for 1\leq i\leq n-1. But what about \alpha_n?

\displaystyle\begin{aligned}\sigma_{\alpha_n}(v)&=v-\frac{2\langle v,\alpha_n\rangle}{\langle\alpha_n,\alpha_n\rangle}\alpha_n\\&=v-\langle v,\alpha_n\rangle\alpha_n\\&=v-(v^{n-1}+v^n)(\epsilon_{n-1}+\epsilon_n)\\&=v-(v^{n-1}\epsilon_{n-1}+v^n\epsilon_n)-(v^n\epsilon_{n-1}+v^{n-1}\epsilon_n)\end{aligned}

This swaps the last two coefficients of v and flips their sign. Clearly, this sends the lattice I back to itself, showing that \Phi is indeed a root system.

Now we can use \sigma_{\alpha_n}\sigma_{\alpha_{n-1}} to flip the signs of coefficients of v, two at a time. We use whatever of the \sigma_{\alpha_i} we need to get the two coefficients we want into the last two slots, hit it with \sigma_{\alpha_n}\sigma_{\alpha_{n-1}} to flip them, and then invert the first permutation to move everything back where it started from. In fact, this is a lot like what we saw way back with the Rubik’s cube, when dealing with the edge group. We can effect whatever permutation we want on the coefficients, and we can flip any even number of them.

The Weyl group of D_n is then the subgroup of the wreath product S_n\wr\mathbb{Z}_2 consisting of those transformations with an even number of flips coming from the \mathbb{Z}_2 components. Explicitly, we can write \mathbb{Z}_2^{n-1} as the subgroup of \mathbb{Z}_2^n with sum zero. Then we can let S_n act on \mathbb{Z}_2^n by permuting the components, and use this to give an action of S_n on \mathbb{Z}_2^{n-1}, and thus form the semidirect product S_n\ltimes\mathbb{Z}_2^{n-1}.
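We can check this description by generating the group from the simple reflections and counting elements; for the example size n = 4 the order should be 2^{n-1}n!=192:

```python
n = 4  # example size

def basis(i):
    v = [0] * n
    v[i] = 1
    return v

# The D_n simple roots: e_i - e_{i+1} for i < n, plus e_{n-1} + e_n.
simple = [[a - b for a, b in zip(basis(i), basis(i + 1))]
          for i in range(n - 1)]
simple.append([a + b for a, b in zip(basis(n - 2), basis(n - 1))])

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(alpha, v):
    c = 2 * ip(v, alpha) // ip(alpha, alpha)  # exact for these vectors
    return tuple(x - c * a for x, a in zip(v, alpha))

# Represent a group element by the images of the standard basis vectors,
# and close up under left multiplication by the simple reflections.
identity = tuple(tuple(basis(i)) for i in range(n))
seen = {identity}
stack = [identity]
while stack:
    g = stack.pop()
    for alpha in simple:
        h = tuple(reflect(alpha, row) for row in g)
        if h not in seen:
            seen.add(h)
            stack.append(h)
print(len(seen))  # 192
```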

March 3, 2010 Posted by | Geometry, Root Systems | 2 Comments

Construction of A-Series Root Systems

Starting from our setup, we construct root systems corresponding to the A_n Dynkin diagrams.

We start with the n+1-dimensional space \mathbb{R}^{n+1} with orthonormal basis \{\epsilon_0,\dots,\epsilon_n\}, and cut out the n-dimensional subspace E orthogonal to the vector \epsilon_0+\dots+\epsilon_n. This consists of those vectors v=\sum_{k=0}^nv^k\epsilon_k for which the coefficients sum to zero: \sum_{k=0}^nv^k=0. We let J=I\cap E, consisting of the lattice vectors whose (integer) coefficients sum to zero. Finally, we define our root system \Phi to consist of those vectors \alpha\in J such that \langle\alpha,\alpha\rangle=2.

From this construction it should be clear that \Phi consists of the vectors \{\epsilon_i-\epsilon_j\vert i\neq j\}. The n vectors \Delta=\{\alpha_i=\epsilon_{i-1}-\epsilon_i\} are independent, and thus form a basis of the n-dimensional space E. This establishes that \Phi spans E. In particular, if i<j we can write

\displaystyle\epsilon_i-\epsilon_j=\alpha_{i+1}+\alpha_{i+2}+\dots+\alpha_j
showing that \Delta forms a base for \Phi.

We calculate the Cartan integers for this base:

\displaystyle\frac{2\langle\alpha_i,\alpha_j\rangle}{\langle\alpha_j,\alpha_j\rangle}=\langle\epsilon_{i-1}-\epsilon_i,\epsilon_{j-1}-\epsilon_j\rangle

For i=j we get the value {2}; for i=j+1 or i=j-1 we get the value -1; otherwise we get the value {0}. This clearly gives us the Dynkin diagram A_n.

Finally, the reflections with respect to the \alpha_i should generate the entire Weyl group. We must verify that these leave the lattice J invariant to be sure that we have a root system. We calculate

\displaystyle\begin{aligned}\sigma_{\alpha_i}(v)&=v-\frac{2\langle v,\alpha_i\rangle}{\langle\alpha_i,\alpha_i\rangle}\alpha_i\\&=v-\langle v,\alpha_i\rangle\alpha_i\\&=v-(v^{i-1}-v^i)(\epsilon_{i-1}-\epsilon_i)\\&=v-(v^{i-1}\epsilon_{i-1}+v^i\epsilon_i)+(v^i\epsilon_{i-1}+v^{i-1}\epsilon_i)\end{aligned}

That is, it swaps the coefficients of \epsilon_{i-1} and \epsilon_i, and thus sends the lattice J back to itself, as we need.

We can also see from this effect that any combination of the \sigma_{\alpha_i} serves to permute the n+1 coefficients of a given vector. That is, the Weyl group of the A_n system is naturally isomorphic to the symmetric group S_{n+1}.
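Since each \sigma_{\alpha_i} acts by swapping two adjacent coordinates, this generation claim is easy to test in code; here is a sketch for n = 3, where the Weyl group should be all of S_4:

```python
from itertools import permutations

n = 3  # the A_3 system acts on n + 1 = 4 coordinates

def swap(i, p):
    # The action of sigma_{alpha_i}: exchange coordinates i - 1 and i.
    q = list(p)
    q[i - 1], q[i] = q[i], q[i - 1]
    return tuple(q)

identity = tuple(range(n + 1))
seen = {identity}
stack = [identity]
while stack:
    p = stack.pop()
    for i in range(1, n + 1):
        q = swap(i, p)
        if q not in seen:
            seen.add(q)
            stack.append(q)

assert seen == set(permutations(range(n + 1)))  # the full symmetric group
print(len(seen))  # 24
```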

March 2, 2010 Posted by | Geometry, Root Systems | 4 Comments

Construction of Root Systems (setup)

Now that we’ve proven the classification theorem, we know all about root systems, right? No! All we know is which Dynkin diagrams could possibly arise from root systems. We don’t know whether there actually exists a root system for any given one of them. The situation is sort of like what we found way back when we solved Rubik’s magic cube: first we established some restrictions on allowable moves, and then we showed that everything else actually happened.

And so we must construct some actual root systems. For this task, we let E stand for a finite-dimensional real vector space \mathbb{R}^n for various n, equipped with its usual inner product. We pick an orthonormal basis \{\epsilon_1,\dots,\epsilon_n\} and let the integral linear combinations of these basis vectors form the lattice I. Here, I do not mean “lattice” in the order-theory sense. I mean that this is a discrete collection of points in the vector space that is closed under addition.

In every case we’re going to take either the lattice I, or a slightly modified lattice J. We’ll define our root system \Phi to be the collection of vectors in the lattice of either one or two specified lengths (since there can be at most two root lengths). That is, we’re considering the intersection of a discrete collection of points with one or two spheres. These spheres are closed and bounded, and thus compact. The collection \Phi must be finite or else it would have an accumulation point by Bolzano-Weierstrass, and thus wouldn’t be discrete!

Any one of our constructed collections will span E, and in fact an explicit basis will be shown in each case, in case it’s not clear. It should also be clear that none of them can contain the vector {0}, and so the first condition of being a root system will hold. Our choice of lengths will make it clear that there are no possible scalar multiples of a root besides itself and its negative. On the other hand, it should be clear that if \alpha is in a lattice I and on a sphere \lVert\alpha\rVert^2=r^2, then -\alpha is also in both, and thus the second condition holds.

The reflection \sigma_\alpha preserves lengths, and so it sends the spheres back to themselves. We’ll have to check in each case that \sigma_\alpha sends every vector in our collection back into the lattice, which will establish the third condition.

As to the fourth condition, the inner product \langle\alpha,\beta\rangle is automatically going to be in \mathbb{Z} when we pick \alpha and \beta from a lattice, and so picking the squared radii of our spheres to divide 2 should be enough to guarantee that \frac{2\langle\beta,\alpha\rangle}{\langle\alpha,\alpha\rangle}\in\mathbb{Z}.
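These four conditions can also be written down as a mechanical check, which will be handy for keeping the coming constructions honest. This is my own sketch, using exact rational arithmetic, run on the B_2 system as an example:

```python
from fractions import Fraction

def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, b):
    c = Fraction(2 * ip(b, a), ip(a, a))
    return tuple(x - c * y for x, y in zip(b, a))

def is_root_system(candidate):
    roots = {tuple(Fraction(x) for x in r) for r in candidate}
    for a in roots:
        if not any(a):
            return False          # condition 1: 0 is not a root
        parallel = {b for b in roots
                    if all(a[i] * b[j] == a[j] * b[i]
                           for i in range(len(a)) for j in range(len(a)))}
        if parallel != {a, tuple(-x for x in a)}:
            return False          # condition 2: only +-a are multiples of a
        for b in roots:
            if reflect(a, b) not in roots:
                return False      # condition 3: closed under reflections
            if (2 * ip(b, a)) % ip(a, a) != 0:
                return False      # condition 4: integral Cartan numbers
    return True

b2 = [(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (1, -1), (-1, 1), (-1, -1)]
print(is_root_system(b2), is_root_system(b2[:-1]))  # True False
```

Dropping a single root, as in the second call, breaks conditions 2 and 3 at once.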

Tomorrow we start constructing our root systems, towards the theorem: For each Dynkin diagram allowed by the classification theorem, there exists an irreducible root system having that diagram.

March 1, 2010 Posted by | Geometry, Root Systems | 7 Comments

Proving the Classification Theorem V

Today we conclude the proof of the classification theorem. The first four parts of the proof are here, here, here, and here.

  10. The only possible Coxeter graphs with a triple vertex are those of the forms D_n, E_6, E_7, and E_8.

    From step 8 we have the labelled graph

    Like we did in step 9, we define

    \displaystyle\epsilon=\sum\limits_{i=1}^{p-1}i\epsilon_i

    \displaystyle\eta=\sum\limits_{j=1}^{q-1}j\eta_j

    \displaystyle\zeta=\sum\limits_{k=1}^{r-1}k\zeta_k

    These vectors \epsilon, \eta, and \zeta are mutually orthogonal and linearly independent, and \psi is not in the subspace that they span.

    We look back at our proof of step 4 to determine that \cos^2\theta_1+\cos^2\theta_2+\cos^2\theta_3<1, where \theta_1, \theta_2, and \theta_3 are the angles between \psi and \epsilon, \eta, and \zeta, respectively. We look back at our proof of step 9 to determine that \langle\epsilon,\epsilon\rangle=\frac{p(p-1)}{2}, \langle\eta,\eta\rangle=\frac{q(q-1)}{2}, and \langle\zeta,\zeta\rangle=\frac{r(r-1)}{2}. Thus we can calculate the cosine

    \displaystyle\cos^2\theta_1=\frac{\langle\psi,\epsilon\rangle^2}{\langle\psi,\psi\rangle\langle\epsilon,\epsilon\rangle}=\frac{\frac{(p-1)^2}{4}}{\frac{p(p-1)}{2}}=\frac{1}{2}\left(1-\frac{1}{p}\right)
    And similarly we find \cos^2\theta_2=\frac{1}{2}\left(1-\frac{1}{q}\right) and \cos^2\theta_3=\frac{1}{2}\left(1-\frac{1}{r}\right). Adding up, we find

    \displaystyle1>\cos^2\theta_1+\cos^2\theta_2+\cos^2\theta_3=\frac{3}{2}-\frac{1}{2}\left(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}\right)

    which rearranges to \frac{1}{p}+\frac{1}{q}+\frac{1}{r}>1.
    This last inequality, by the way, is hugely important in many areas of mathematics, and it’s really interesting to find it cropping up here.

    Anyway, now none of p, q, or r can be 1 or we don’t have a triple vertex at all. We can also choose which strand is which so that

    \displaystyle p\geq q\geq r
    We can determine from here that

    \displaystyle\frac{3}{r}\geq\frac{1}{p}+\frac{1}{q}+\frac{1}{r}>1
    and so we must have r=2, and the shortest leg must be one edge long. Now we have \frac{1}{p}+\frac{1}{q}>\frac{1}{2}, and so \frac{2}{q}>\frac{1}{2}, and q must be either 2 or 3.

    If q=3, then the second shortest leg is two edges long. In this case, \frac{1}{p}>\frac{1}{6} and p<6. The possibilities for the triple (p,q,r) are (3,3,2), (4,3,2), and (5,3,2); giving graphs E_6, E_7, and E_8, respectively.

    On the other hand, if q=2, then the second shortest leg is also one edge long. In this case, there is no more restriction on p, and so the remaining leg can be as long as we like. This gives us the D_n family of graphs.
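The case analysis above amounts to solving \frac{1}{p}+\frac{1}{q}+\frac{1}{r}>1 in integers, and a short enumeration (over an arbitrary finite range) confirms the list of solutions:

```python
from fractions import Fraction

bound = 30  # arbitrary cutoff; the pattern is the same for any bound
triples = [(p, q, r)
           for p in range(2, bound + 1)
           for q in range(2, p + 1)
           for r in range(2, q + 1)
           if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) > 1]

# Besides the infinite family (p, 2, 2), which gives the D-series,
# only three triples work: the E-series.
exceptional = [t for t in triples if t[1:] != (2, 2)]
print(exceptional)  # [(3, 3, 2), (4, 3, 2), (5, 3, 2)]
```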

And we’re done! If we have one triple edge, we must have the graph G_2. If we have a double edge or a triple vertex, we can have only one, and we can’t have one of each. Step 9 narrows down graphs with a double edge to F_4 and the families B_n and C_n, while step 10 narrows down graphs with a triple vertex to E_6, E_7, and E_8, and the family D_n. Finally, if there are no triple vertices or double edges, we’re left with a single simple chain of type A_n.

February 26, 2010 Posted by | Geometry, Root Systems | Leave a comment

Proving the Classification Theorem IV

We continue proving the classification theorem. The first three parts are here, here, and here.

  8. Any connected graph \Gamma takes one of the four following forms: a simple chain, the G_2 graph, three simple chains joined at a central vertex, or a chain with exactly one double edge.

    This step largely consolidates what we’ve done to this point. Here are the four possible graphs:

    The labels will help with later steps.

    Step 5 told us that there’s only one connected graph that contains a triple edge. Similarly, if we had more than one double edge or triple vertex, then we would be able to find two of them connected by a simple chain. But that would violate step 7, and so we can have at most one of these features, and not one of each.

  9. The only possible Coxeter graphs with a double edge are those underlying the Dynkin diagrams B_n, C_n, and F_4.

    Here we’ll use the labels on the above graph. We define

    \displaystyle\epsilon=\sum\limits_{i=1}^pi\epsilon_i

    \displaystyle\eta=\sum\limits_{j=1}^qj\eta_j
    As in step 6, we find that 2\langle\epsilon_i,\epsilon_{i+1}\rangle=-1=2\langle\eta_j,\eta_{j+1}\rangle and all other pairs of vectors are orthogonal. And so we calculate

    \displaystyle\langle\epsilon,\epsilon\rangle=\sum\limits_{i=1}^pi^2-\sum\limits_{i=1}^{p-1}i(i+1)=\frac{p(p+1)}{2}
    And similarly, \langle\eta,\eta\rangle=\frac{q(q+1)}{2}. We also know that 4\langle\epsilon_p,\eta_q\rangle^2=2, and so we find

    \displaystyle\langle\epsilon,\eta\rangle^2=p^2q^2\langle\epsilon_p,\eta_q\rangle^2=\frac{p^2q^2}{2}
    Now we can use the Cauchy-Schwarz inequality to conclude that

    \displaystyle\frac{p^2q^2}{2}=\langle\epsilon,\eta\rangle^2<\langle\epsilon,\epsilon\rangle\langle\eta,\eta\rangle=\frac{p(p+1)}{2}\frac{q(q+1)}{2}
    where the inequality is strict, since \epsilon and \eta are linearly independent. And so we find

    \displaystyle2pq<(p+1)(q+1)

    which is equivalent to (p-1)(q-1)<2.
    We thus must have either p=q=2, which gives us the F_4 diagram, or p=1 or q=1 with the other arbitrary, which give rise to the B_n and C_n Coxeter graphs.
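This last step is easy to confirm by enumeration; over any finite range, the only solutions of (p-1)(q-1)<2 are the two infinite families and p=q=2:

```python
bound = 20  # arbitrary cutoff
pairs = [(p, q)
         for p in range(1, bound + 1)
         for q in range(1, bound + 1)
         if (p - 1) * (q - 1) < 2]

# p = 1 or q = 1 gives the B/C chains; (2, 2) gives F_4.
assert all(p == 1 or q == 1 or (p, q) == (2, 2) for p, q in pairs)
print((2, 2) in pairs)  # True
```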

February 25, 2010 Posted by | Geometry, Root Systems | 5 Comments