The Unapologetic Mathematician

Mathematics for the interested outsider

Products of Algebras of Sets

As we deal with algebras of sets, we’ll be wanting to take products of these structures. But it’s not as simple as it might seem at first. We won’t focus, yet, on the categorical perspective, and will return to that somewhat later.

Okay, so what’s the problem? Well, say we have sets X_1 and X_2, and algebras of subsets \mathcal{E}_1\subseteq P(X_1) and \mathcal{E}_2\subseteq P(X_2). We want to take the product set X=X_1\times X_2 and come up with an algebra of sets \mathcal{E}\subseteq P(X). It’s sensible to expect that if we have E_1\in\mathcal{E}_1 and E_2\in\mathcal{E}_2, we should have E_1\times E_2\in\mathcal{E}. Unfortunately, the collection of such products is not, itself, an algebra of sets! The complement of a product, for instance, is generally not a product: (E_1\times E_2)^c=(E_1^c\times X_2)\cup(X_1\times E_2^c).

So here’s where our method of generating an algebra of sets comes in. In fact, let’s generalize the setup a bit. Let’s say we’ve got \mathcal{R}_1\subseteq P(X_1) which generates \mathcal{E}_1 as the collection of finite disjoint unions of sets in \mathcal{R}_1, and let \mathcal{R}_2\subseteq P(X_2) be a similar collection. Of course, since the algebras \mathcal{E}_1 and \mathcal{E}_2 are themselves closed under finite disjoint unions, we could just take \mathcal{R}_1=\mathcal{E}_1 and \mathcal{R}_2=\mathcal{E}_2, but we could also have a more general situation.

Now we can define \mathcal{R} to be the collection of products R_1\times R_2 of sets R_1\in\mathcal{R}_1 and R_2\in\mathcal{R}_2, and we define \mathcal{E} as the set of finite disjoint unions of sets in \mathcal{R}. I say that \mathcal{R} satisfies the criteria we set out yesterday, and thus \mathcal{E} is an algebra of subsets of X.

First off, \emptyset is in both \mathcal{R}_1 and \mathcal{R}_2, and so \emptyset\times\emptyset=\emptyset is in \mathcal{R}. On the other hand, X_1\in\mathcal{R}_1 and X_2\in\mathcal{R}_2, so X_1\times X_2=X is in \mathcal{R}. That takes care of the first condition.

Next, is \mathcal{R} closed under pairwise intersections? Let R_1\times R_2 and S_1\times S_2 be sets in \mathcal{R}. A point (x_1,x_2) is in the first of these sets if x_1\in R_1 and x_2\in R_2; it’s in the second if x_1\in S_1 and x_2\in S_2. Thus to be in both, we must have x_1\in R_1\cap S_1 and x_2\in R_2\cap S_2. That is,

\displaystyle(R_1\times R_2)\cap(S_1\times S_2)=(R_1\cap S_1)\times(R_2\cap S_2)

Since \mathcal{R}_1 and \mathcal{R}_2 are themselves closed under intersections, this set is in \mathcal{R}.

Finally, can we write (R_1\times R_2)\setminus(S_1\times S_2) as a finite disjoint union of sets in \mathcal{R}? A point (x_1,x_2) is in this set if it misses S_1 in the first coordinate — x_1\in R_1\setminus S_1 and x_2\in R_2 — or if it does hit S_1 but misses S_2 in the second coordinate — x_1\in R_1\cap S_1 and x_2\in R_2\setminus S_2. That is:

\displaystyle(R_1\times R_2)\setminus(S_1\times S_2)=\left((R_1\setminus S_1)\times R_2\right)\cup\left((R_1\cap S_1)\times(R_2\setminus S_2)\right)

Now R_1\setminus S_1\in\mathcal{E}_1, and so it can be written as a finite disjoint union of sets in \mathcal{R}_1; thus (R_1\setminus S_1)\times R_2 can be written as a finite disjoint union of sets in \mathcal{R}. Similarly, we see that (R_1\cap S_1)\times(R_2\setminus S_2) can be written as a finite disjoint union of sets in \mathcal{R}. And no set from the first collection can overlap any set in the second collection, since they’re separated by the first coordinate being contained in S_1 or not. Thus we’ve written the difference as a finite disjoint union of sets in \mathcal{R}, and so (R_1\times R_2)\setminus(S_1\times S_2)\in\mathcal{E}.
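
To make this concrete, here is a minimal Python sketch (the names and the small example sets are mine, not part of the post) that decomposes the difference of two product sets exactly as above, and checks the answer against a brute-force computation:

    import itertools

    def rectangle_difference(R1, R2, S1, S2):
        # (R1 x R2) \ (S1 x S2) as two disjoint pieces, following the formula above
        piece1 = (R1 - S1, R2)          # misses S1 in the first coordinate
        piece2 = (R1 & S1, R2 - S2)     # hits S1, but misses S2 in the second
        return [piece1, piece2]

    R1, R2 = {1, 2, 3}, {1, 2}
    S1, S2 = {2, 3}, {2}

    pieces = rectangle_difference(R1, R2, S1, S2)
    union = {(x, y) for A, B in pieces for x in A for y in B}
    brute = set(itertools.product(R1, R2)) - set(itertools.product(S1, S2))
    assert union == brute
    # the pieces can't overlap: the first avoids S1, the second lies inside it
    assert pieces[0][0].isdisjoint(S1) and pieces[1][0] <= S1

Of course the two pieces here are products of sets from the algebras, not yet of sets from \mathcal{R}_1 and \mathcal{R}_2; the paragraph above explains how to break them down further.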

Therefore, \mathcal{R} satisfies our conditions, and \mathcal{E} is the algebra of sets it generates.

March 17, 2010 Posted by | Analysis, Measure Theory | 3 Comments

Generating Algebras of Sets

We might not always want to lay out an entire algebra of sets in one go. Sometimes we can get away with a smaller collection that tells us everything we need to know.

Suppose that \mathcal{R} is a subset of P(X) — a collection of subsets of X — and define \mathcal{E}\subseteq P(X) to be the collection of finite disjoint unions of subsets in \mathcal{R}. If we impose the following three conditions on \mathcal{R}:

  • The empty set \emptyset and the whole space X are both in \mathcal{R}.
  • If A and B are in \mathcal{R}, then so is their intersection A\cap B.
  • If A and B are in \mathcal{R}, then their difference A\setminus B is in \mathcal{E}.

then \mathcal{E} is an algebra of sets.

If A\in\mathcal{R}, then A\in\mathcal{E}, and so \mathcal{E} contains \emptyset and X. We can also find A^c\in\mathcal{E}, since A^c=X\setminus A.

Let’s take E_1=\bigcup_{i=1}^m R_i and E_2=\bigcup_{j=1}^nS_j to be two sets in \mathcal{E}, written as finite disjoint unions of sets in \mathcal{R}. Then their intersection is

\displaystyle E_1\cap E_2=\bigcup\limits_{i=1}^m\bigcup\limits_{j=1}^n R_i\cap S_j

Each of the R_i\cap S_j is in \mathcal{R}, as an intersection of two sets in \mathcal{R}, and no two of them can intersect. Thus finite intersections of sets in \mathcal{E} are again in \mathcal{E}.

If E=\bigcup_{i=1}^n R_i, then E^c=\bigcap_{i=1}^n R_i^c. Since each of the R_i^c is in \mathcal{E}, their (finite) intersection E^c must be as well, and \mathcal{E} is closed under complements.

And so we can find that if E_1 and E_2 are in \mathcal{E}, then E_1\setminus E_2=E_1\cap E_2^c and E_1\cup E_2=(E_1\setminus E_2)\cup E_2 are both in \mathcal{E}, and \mathcal{E} is thus an algebra of sets.
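
For a finite X the whole construction can be carried out by brute force, which makes a nice sanity check. Here’s a small Python sketch; the particular X and \mathcal{R} are just an example of mine satisfying the three conditions:

    from itertools import combinations

    X = frozenset(range(4))
    R = {frozenset(), X, frozenset({0}), frozenset({1, 2, 3})}

    # E: all unions of pairwise-disjoint subcollections of R
    E = set()
    for k in range(len(R) + 1):
        for sub in combinations(R, k):
            if all(a & b == frozenset() for a, b in combinations(sub, 2)):
                E.add(frozenset().union(*sub))

    # E should be an algebra: closed under complements, intersections,
    # unions, and differences
    assert all(X - A in E for A in E)
    assert all(A & B in E for A in E for B in E)
    assert all(A | B in E for A in E for B in E)
    assert all(A - B in E for A in E for B in E)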

March 16, 2010 Posted by | Analysis, Measure Theory | 3 Comments

Algebras of Sets

Okay, now that root systems are behind us, I’m going to pick back up with some analysis. Specifically, more measure theory. But it’s not going to look like the real analysis we’ve done before until we get some abstract basics down.

We take some set X, which we want to ultimately consider as a sort of space so that we can measure parts of it. We’ve seen before that the power set P(X) — the set of all the subsets of X — is an orthocomplemented lattice. That is, we can take meets (intersections) U\cap V, joins (unions) U\cup V and complements U^c=X\setminus U of subsets of X, and these satisfy all the usual relations. More generally, we can use these operations to construct differences U\setminus V=U\cap V^c.

Now, an algebra \mathcal{A} of subsets of X will be just a sublattice of P(X) which contains both the bottom and top of P(X): the empty subset \emptyset and the whole set X. The usual definition is that if it contains U and V, then it contains both the union U\cup V and the difference U\setminus V, along with \emptyset and X. But from this we can get complements — U^c=X\setminus U — and DeMorgan’s laws give us intersections — U\cap V=(U^c\cup V^c)^c.

It’s important to note here that these operations let us define finite unions and intersections, just by iteration. But finite operations like this are just algebra. What makes analysis analysis is limits. And so we want to add an “infinite” operation.

Let’s say we have a countably infinite collection of subsets, \{U_i\}_{i=1}^\infty. Then we define the countable union as a limit

\displaystyle\bigcup\limits_{i=1}^\infty U_i=\lim\limits_{n\to\infty}\bigcup\limits_{i=1}^nU_i

We could also just say that the countable union consists of all points in any of the U_i, but it will be useful to explicitly think of this as a process: Starting with U_1 we add in U_2, then U_3, and so on. If x\in U_k for some k, then by the time we reach the kth step we’ve folded x into the growing union. The countable union is the limit of this process.

This viewpoint also brings us into contact with the category-theoretic notion of a colimit (feel free to ignore this if you’re category-phobic). Indeed, if we define V_0=\emptyset and

\displaystyle V_n=\bigcup\limits_{i=1}^nU_i

then clearly we have an inclusion mapping V_i\to V_{i+1} for every natural number i. That is, we have a functor from the natural numbers \mathbb{N} as an order category to the power set P(X) considered as one. And the colimit of this functor is the countable union.

So, let’s say we have an algebra \mathcal{A} of subsets of X and add the assumption that \mathcal{A} is closed under such countable unions. In this case, we say that \mathcal{A} is a “\sigma-algebra”. We can extend DeMorgan’s laws to show that a \sigma-algebra \mathcal{A} will be closed under countable intersections as well as countable unions.
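
Explicitly, for a countable collection \{U_i\}_{i=1}^\infty in \mathcal{A} we can write

\displaystyle\bigcap\limits_{i=1}^\infty U_i=\left(\bigcup\limits_{i=1}^\infty U_i^c\right)^c

and the right-hand side is built entirely out of complements and a countable union, under all of which \mathcal{A} is closed.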

March 15, 2010 Posted by | Analysis, Measure Theory | 9 Comments

Root Systems Recap

Let’s look back over what we’ve done.

After laying down some definitions on reflections, we defined a root system \Phi as a collection of vectors with certain properties. Specifically, each vector is a point in a vector space, and it also gives us a reflection of the same vector space. Essentially, a root system is a finite collection of such vectors and corresponding reflections so that the reflections shuffle the vectors among each other. Our project was to classify these configurations.

The flip side of seeing a root system as a collection of vectors is seeing it as a collection of reflections, and these reflections generate a group of transformations called the Weyl group of the root system. It’s one of the most useful tools we have at our disposal through the rest of the project.

To get a perspective on the classification, we defined the category of root systems. In particular, this leads us to the idea of decomposing a root system into irreducible root systems. If we can classify these pieces, any other root system will be built from them.

Like a basis of a vector space, a base \Delta of a root system \Phi contains enough information to reconstruct the whole root system. Further, any two bases for a given root system look essentially the same, and the Weyl group shuffles them around. So really what we need to classify are the irreducible bases; for each such base there will be exactly one irreducible root system.

To classify these, we defined Cartan matrices and verified that we can use them to reconstruct a root system. Then we turned Cartan matrices into Dynkin diagrams.

Finally, we could start the real work of classification: a list of the Dynkin diagrams that might arise from root systems. And then we could actually construct root systems that gave rise to each of these examples.

As a little followup, we could look back at the category of root systems and use the Dynkin diagrams and Weyl groups to completely describe the automorphism group of any root system.

Root systems come up in a number of interesting contexts. I’ll eventually be talking about them as they relate to Lie algebras, but (as we’ve just seen) they can be introduced and discussed as a self-motivated, standalone topic in geometry.

March 12, 2010 Posted by | Geometry, Root Systems | Leave a comment

The Automorphism Group of a Root System

Finally, we’re able to determine the automorphism group of our root systems. That is, given an object in the category of root systems, the morphisms from that root system back to itself (as usual) form a group, and it’s interesting to study the structure of this group.

First of all, right when we first talked about the category of root systems, we saw that the Weyl group \mathcal{W} of \Phi is a normal subgroup of \mathrm{Aut}(\Phi). This will give us most of the structure we need, but there may be automorphisms of \Phi that don’t come from actions of the Weyl group.

So fix a base \Delta of \Phi, and consider the collection \Gamma of automorphisms which send \Delta back to itself. We’ve shown that the action of \mathcal{W} on bases of \Phi is simply transitive, which means that if \tau\in\Gamma comes from the Weyl group, then \tau can only be the identity transformation. That is, \Gamma\cap\mathcal{W}=\{1\} as subgroups of \mathrm{Aut}(\Phi).

On the other hand, given an arbitrary automorphism \tau\in\mathrm{Aut}(\Phi), it sends \Delta to some other base \Delta'. We can find a \sigma\in\mathcal{W} sending \Delta' back to \Delta. And so \sigma\tau\in\Gamma; it’s an automorphism sending \Delta to itself. That is, \tau\in\mathcal{W}\Gamma; any automorphism can be written (not necessarily uniquely) as the composition of one from \Gamma and one from \mathcal{W}. Therefore we can write the automorphism group as the semidirect product:

\displaystyle\mathrm{Aut}(\Phi)=\Gamma\ltimes\mathcal{W}

All that remains, then, is to determine the structure of \Gamma. But each \tau\in\Gamma shuffles around the roots in \Delta, and these roots correspond to the vertices of the Dynkin diagram of the root system. And for \tau to be an automorphism of \Phi, it must preserve the Cartan integers, and thus the numbers of edges between any pair of vertices in the Dynkin diagram. That is, each \tau\in\Gamma gives a transformation of the Dynkin diagram of \Phi back to itself, and conversely every symmetry of the diagram gives an automorphism in \Gamma.
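
Since everything is encoded in the Cartan integers, we can even compute \Gamma mechanically for any particular diagram. Here is a brute-force Python sketch (my own illustration, not from the post): enumerate the permutations of the vertices that preserve the Cartan matrix.

    from itertools import permutations

    def diagram_automorphisms(cartan):
        # permutations p of the vertices with cartan[p(i)][p(j)] = cartan[i][j]
        n = len(cartan)
        return [p for p in permutations(range(n))
                if all(cartan[p[i]][p[j]] == cartan[i][j]
                       for i in range(n) for j in range(n))]

    # D_4, with vertex 0 as the triple point
    D4 = [[ 2, -1, -1, -1],
          [-1,  2,  0,  0],
          [-1,  0,  2,  0],
          [-1,  0,  0,  2]]
    print(len(diagram_automorphisms(D4)))   # 6: the S_3 shuffling the three tails

    # A_3, a simple chain, which can only be flipped end-over-end
    A3 = [[ 2, -1,  0],
          [-1,  2, -1],
          [ 0, -1,  2]]
    print(len(diagram_automorphisms(A3)))   # 2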

So we can determine \Gamma just by looking at the Dynkin diagram! Let’s see what this looks like for the connected diagrams in the classification theorem, since disconnected diagrams just add transformations that shuffle isomorphic pieces.

Any diagram with a multiple edge — G_2, F_4, and the B_n and C_n series — has only the trivial symmetry. Indeed, the multiple edge has a direction, and it must be sent back to itself with the same direction. It’s easy to see that this specifies where every other part of the diagram must go.

The diagram A_1 is a single vertex, and has no nontrivial symmetries either. But the diagram A_n for n\geq2 can be flipped end-over-end. We thus find that \Gamma=\mathbb{Z}_2 for all these diagrams. The diagram E_6 can also be flipped end-over-end, leaving the one “side” vertex fixed, and we again find \Gamma=\mathbb{Z}_2, but E_7 and E_8 have no nontrivial symmetries.

There is a symmetry of the D_n diagram that swaps the two “tails”, so \Gamma=\mathbb{Z}_2 for n\geq5. For n=4, something entirely more interesting happens. Now the “body” of the diagram also has length 1, and we can shuffle it around just like the “tails”. And so for D_4 we find \Gamma=S_3 — the group of permutations of these three vertices. This “triality” shows up in all sorts of interesting applications that connect back to Dynkin diagrams and root systems.

March 11, 2010 Posted by | Geometry, Root Systems | 1 Comment

Construction of E-Series Root Systems

Today we construct the last of our root systems, following our setup. These correspond to the Dynkin diagrams E_6, E_7, and E_8. But there are transformations of Dynkin diagrams that send E_6 into E_7, and E_7 on into E_8. Thus all we really have to construct is E_8, and then cut off the rightmost simple roots to get E_7, and then E_6.

We start similarly to our construction of the F_4 root system; take the eight-dimensional space with the integer-coefficient lattice I, and then build up the set of half-integer coefficient vectors

\displaystyle I'=I+\frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8)

Starting from the lattice I\cup I', we can write a generic lattice vector as

\displaystyle\frac{c^0}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8)+c^1\epsilon_1+c^2\epsilon_2+c^3\epsilon_3+c^4\epsilon_4+c^5\epsilon_5+c^6\epsilon_6+c^7\epsilon_7+c^8\epsilon_8

and we let J\subseteq I\cup I' be the collection of lattice vectors so that the sum of the coefficients c^i is even. This is well-defined even though the coefficients aren’t unique, because the only redundancy is that we can take 2 from c^0 and add 1 to each of the other eight coefficients, which preserves the total parity of all the coefficients.

Now let \Phi consist of those vectors \alpha\in J with \langle\alpha,\alpha\rangle=2. The explicit description is similar to that from the F_4 root system. From I, we get the vectors \pm(\epsilon_i\pm\epsilon_j), but not the vectors \pm\epsilon_i because these don’t make it into J. From I' we get some vectors of the form

\displaystyle\frac{1}{2}(\pm\epsilon_1\pm\epsilon_2\pm\epsilon_3\pm\epsilon_4\pm\epsilon_5\pm\epsilon_6\pm\epsilon_7\pm\epsilon_8)

Starting with the choice of all minus signs, this vector is not in J because c^0=-1 and all the other coefficients are 0. To flip a sign, we add \epsilon_i, which flips the total parity of the coefficients. Thus the vectors of this form that make it into \Phi are exactly those with an odd number of minus signs.
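
As a quick sanity check on this description (a throwaway Python sketch of my own), we can enumerate the vectors just described and confirm that there are 112 of the first kind and 128 of the second, for the familiar 240 roots of E_8, all of squared-length 2:

    from fractions import Fraction
    from itertools import combinations, product

    roots = []
    for i, j in combinations(range(8), 2):              # +-(e_i +- e_j), from I
        for si, sj in product((1, -1), repeat=2):
            v = [0] * 8
            v[i], v[j] = si, sj
            roots.append(tuple(v))
    half = Fraction(1, 2)
    for signs in product((half, -half), repeat=8):      # from I': odd number of minus signs
        if sum(1 for s in signs if s < 0) % 2 == 1:
            roots.append(signs)

    print(len(roots))                                   # 240 = 112 + 128
    assert all(sum(x * x for x in v) == 2 for v in roots)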

We need to verify that \frac{2\langle\beta,\alpha\rangle}{\langle\alpha,\alpha\rangle}=\langle\beta,\alpha\rangle\in\mathbb{Z} for all \alpha and \beta in \Phi (technically we should have done this yesterday for F_4, but here it is). If both \alpha and \beta come from I, this is clear since all their coefficients are integers. If \alpha=\pm\epsilon_i\pm\epsilon_j\in I and \beta\in I', then the inner product is the sum of the ith and jth coefficients of \beta, but with possibly flipped signs. No matter how we choose \alpha\in I and \beta\in I', the resulting inner product is either -1, 0, or 1. Finally, if both \alpha and \beta are chosen from I', then each one is c=-\frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+\epsilon_5+\epsilon_6+\epsilon_7+\epsilon_8) plus an odd number of the \epsilon_i; write \alpha=c+a and \beta=c+b, where a and b are these sums of basis vectors. Thus the inner product is

\displaystyle\langle\alpha,\beta\rangle=\langle c+a,c+b\rangle=\langle c,c\rangle+\langle c,b\rangle+\langle a,c\rangle+\langle a,b\rangle

The first term here is 2, and the last term is also an integer because the coefficients of a and b are all integers. The middle two terms are each a sum of an odd number of \pm\frac{1}{2}, and so each of them is a half-integer; two half-integers add up to an integer. The whole inner product then is an integer, as we need.

What explicit base \Delta should we pick? We start out as we did for F_4, with \epsilon_2-\epsilon_3, \epsilon_3-\epsilon_4, and so on up to \epsilon_7-\epsilon_8. These provide six of our eight vertices, and the last two of them are perfect for cutting off later to make the E_7 and E_6 root systems. We also throw in \epsilon_2+\epsilon_3, like we did for the D_n series. This provides us with the triple vertex in the E_8 Dynkin diagram.

We need one more vertex off to the left. It should be orthogonal to every one of the simple roots we’ve chosen so far except for \epsilon_2+\epsilon_3, with which it should have the inner product -1. It should also be a half-integer root, so that we can get access to the rest of them. For this purpose, we choose the root \frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4-\epsilon_5-\epsilon_6-\epsilon_7-\epsilon_8). Establishing that the reflection with respect to this vector preserves the lattice J — and thus the root system \Phi — proceeds as in the F_4 case.

The Weyl group of E_8 is again the group of symmetries of a polytope. In this case, it turns out that the vectors in \Phi are exactly the vertices of a semiregular eight-dimensional polytope inscribed in the sphere of radius \sqrt{2}, and the Weyl group of E_8 is exactly the group of symmetries of this polytope! Notice that this is actually something interesting; in the A_2 case the roots formed the vertices of a hexagon, but the Weyl group wasn’t the whole group of symmetries of the hexagon. This is related to the fact that the A_2 diagram possesses a symmetry that flips it end-over-end, and we will explore this behavior further.

The Weyl groups of E_7 and E_6 are also the symmetries of seven- and six-dimensional polytopes, respectively, but these aren’t quite so nicely apparent from their root systems.

As the most intricate (in a sense) of these root systems, E_8 has inspired quite a lot of study and effort to visualize its structure. I’ll leave you with an animation I found on Garrett Lisi’s notewiki, Deferential Geometry (with the help of Sarah Kavassalis).

March 10, 2010 Posted by | Geometry, Root Systems | Leave a comment

Construction of the F4 Root System

Today we construct the F_4 root system starting from our setup.

As we will see, this root system lives in four-dimensional space, and so we start with this space and its integer-component lattice I. However, we now take another copy of I and push it off by the vector \frac{1}{2}(\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4). This set I' consists of all vectors each of whose components is half an odd integer (a “half-integer” for short). Together with I, we get a new lattice J=I\cup I' consisting of vectors whose components are either all integers or all half-integers. Within this lattice J, we let \Phi consist of those vectors of squared-length 2 or 1: \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=1; we want to describe these vectors explicitly.

When we constructed the B_n and C_n series, we saw that the vectors of squared-length 1 and 2 in I are those of the form \pm\epsilon_i (squared-length 1) and of the form \pm(\epsilon_i\pm\epsilon_j) for i\neq j (squared-length 2). But what about the vectors in I'? We definitely have \left(\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2},\pm\frac{1}{2}\right) — with squared-length 1 — but can we have any others? The next longest vector in I' will have one component \pm\frac{3}{2} and the rest \pm\frac{1}{2}, but this has squared-length 3 and won’t fit into \Phi! We thus have twenty-four long roots of squared-length 2 and twenty-four short roots of squared-length 1.

Now, of course we need an explicit base \Delta, and we can guess from the diagram F_4 that two must be long and two must be short. In fact, in a similar way to the B_3 root system, we start by picking \epsilon_2-\epsilon_3 and \epsilon_3-\epsilon_4 as two long roots, along with \epsilon_4 as one short root. Indeed, we can see a transformation of Dynkin diagrams sending B_3 into F_4, and sending the specified base of B_3 to these three vectors.

But we need another short root which will both give a component in the direction of \epsilon_1 and will give us access to I'. Further, it should be orthogonal to both \epsilon_2-\epsilon_3 and \epsilon_3-\epsilon_4, and should have a Cartan integer of -1 with \epsilon_4 in either order. For this purpose, we pick \frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4), which then gives us the last vertex of the F_4 Dynkin diagram.

Does the reflection with respect to this last vector preserve the root system, though? What is its effect on vectors in J? We calculate

\displaystyle\begin{aligned}\sigma_{\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)}(v)&=v-\frac{2\left\langle v,\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle}{\left\langle\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4),\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle}\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\\&=v-\left\langle v,\frac{1}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\right\rangle(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\\&=v-\frac{v^1-v^2-v^3-v^4}{2}(\epsilon_1-\epsilon_2-\epsilon_3-\epsilon_4)\end{aligned}

Now the sum v^1-v^2-v^3-v^4 is always an integer, whether the components of v are integers or half-integers. If the sum is even, then we are changing each component of v by an integer, which sends I and I' back to themselves. If the sum is odd, then we are changing each component of v by a half-integer, which swaps I and I'. In either case, the lattice J is sent back to itself, and so this reflection fixes \Phi.
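
We can also watch this happen concretely. The following Python sketch (illustrative only, with names of my own choosing) builds the forty-eight roots described above and checks that the reflection we just computed permutes them:

    from fractions import Fraction
    from itertools import combinations, product

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def reflect(v, a):
        c = 2 * dot(v, a) / dot(a, a)
        return tuple(x - c * y for x, y in zip(v, a))

    half = Fraction(1, 2)
    roots = set()
    for i, j in combinations(range(4), 2):              # long roots +-(e_i +- e_j)
        for si, sj in product((1, -1), repeat=2):
            w = [Fraction(0)] * 4
            w[i], w[j] = Fraction(si), Fraction(sj)
            roots.add(tuple(w))
    for i in range(4):                                  # short roots +-e_i
        for s in (1, -1):
            w = [Fraction(0)] * 4
            w[i] = Fraction(s)
            roots.add(tuple(w))
    roots |= set(product((half, -half), repeat=4))      # short roots (+-1/2, ..., +-1/2)

    alpha = (half, -half, -half, -half)
    assert len(roots) == 48
    assert {reflect(v, alpha) for v in roots} == roots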

Like we saw for G_2, it’s difficult to understand the Weyl group of F_4 in terms of its action on the components of v. However, also like G_2, we can understand it geometrically. But instead of a hexagon, now the long and short roots each make up a four-dimensional polytope called the “24-cell”. It’s a shape with 24 vertices, 96 edges, 96 equilateral triangular faces, and 24 three-dimensional “cells”, each of which is a regular octahedron; the Weyl group of F_4 is its group of symmetries, just like the Weyl group of G_2 was the group of symmetries of the hexagon.

Also like the G_2 case, the F_4 root system is isomorphic to its own dual. The long roots stay the same length when dualized, while the short roots double in length and become the long roots of the dual root system. Again, a scaling and rotation sends the dual system back to the one we constructed.

March 9, 2010 Posted by | Geometry, Root Systems | 2 Comments

Construction of the G2 Root System

We’ve actually already seen the G_2 root system, back when we saw a bunch of two-dimensional root systems. But let’s examine how we can construct it in line with our setup.

The G_2 root system is, as we can see by looking at it, closely related to the A_2 root system. And so we start again with the 2-dimensional subspace of \mathbb{R}^3 consisting of vectors with coefficients summing to zero, and we use the same lattice J. But now we let \Phi be the vectors \alpha\in J of squared-length 2 or 6: \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=6. Explicitly, we have the six vectors from A_2, namely \pm(\epsilon_1-\epsilon_2), \pm(\epsilon_1-\epsilon_3), and \pm(\epsilon_2-\epsilon_3), and six new vectors: \pm(2\epsilon_2-\epsilon_1-\epsilon_3), \pm(2\epsilon_1-\epsilon_2-\epsilon_3), and \pm(\epsilon_1+\epsilon_2-2\epsilon_3).

We can pick a base \Delta=\{\epsilon_1-\epsilon_2,2\epsilon_2-\epsilon_1-\epsilon_3\}. These vectors are clearly independent. We can easily write each of the above vectors with a positive sign as a positive sum of the two vectors in \Delta. For example, in accordance with an earlier lemma, we can write

\displaystyle\begin{aligned}\epsilon_1+\epsilon_2-2\epsilon_3&=(2\epsilon_2-\epsilon_1-\epsilon_3)\\&+(\epsilon_1-\epsilon_2)\\&+(\epsilon_1-\epsilon_2)\\&+(\epsilon_1-\epsilon_2)\\&+(2\epsilon_2-\epsilon_1-\epsilon_3)\end{aligned}

where after adding each term we have one of the positive roots. In fact, this path hits all but one of the six positive roots on its way to the unique maximal root.

It’s straightforward to calculate the Cartan integers for \Delta.

\displaystyle\frac{2\langle2\epsilon_2-\epsilon_1-\epsilon_3,\epsilon_1-\epsilon_2\rangle}{\langle\epsilon_1-\epsilon_2,\epsilon_1-\epsilon_2\rangle}=\langle2\epsilon_2-\epsilon_1-\epsilon_3,\epsilon_1-\epsilon_2\rangle=-3

\displaystyle\frac{2\langle\epsilon_1-\epsilon_2,2\epsilon_2-\epsilon_1-\epsilon_3\rangle}{\langle2\epsilon_2-\epsilon_1-\epsilon_3,2\epsilon_2-\epsilon_1-\epsilon_3\rangle}=\frac{1}{3}\langle\epsilon_1-\epsilon_2,2\epsilon_2-\epsilon_1-\epsilon_3\rangle=-1

which shows that we do indeed get the Dynkin diagram G_2.
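
If you’d rather not push the inner products around by hand, a tiny Python check (my notation, not the post’s) confirms these two values:

    def cartan(b, a):
        dot = lambda u, v: sum(x * y for x, y in zip(u, v))
        return 2 * dot(b, a) // dot(a, a)   # exact here: the quotient is an integer

    alpha = (1, -1, 0)     # epsilon_1 - epsilon_2
    beta = (-1, 2, -1)     # 2 epsilon_2 - epsilon_1 - epsilon_3
    print(cartan(beta, alpha), cartan(alpha, beta))   # -3 -1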

And, of course, we must consider the reflections with respect to both vectors in \Delta. Unfortunately, computations like those we’ve used before get complicated. However, we can just go back to the picture that we drew before (and that I linked to at the top of this post). It’s a nice, clean, two-dimensional picture, and it’s clear that these reflections send \Phi back to itself, which establishes that \Phi is really a root system.

We can also figure out the Weyl group geometrically from this picture. Draw line segments connecting the tips of either the long or the short roots, and we find a regular hexagon. Then the reflections with respect to the roots generate the symmetry group of this shape. The twelve roots are the twelve axes of symmetry of the polygon, and we can get rotations by first reflecting across one root and then across another. For example, rotating by a sixth of a turn can be effected by reflecting with the basic short root, followed by reflecting with the basic long root.

Finally, we can see that this root system is isomorphic to its own dual. Indeed, if \alpha is a short root, then the dual root is \alpha itself:

\alpha^\vee=\frac{2}{\langle\alpha,\alpha\rangle}\alpha=\alpha

On the other hand, if \alpha is a long root, then we find

\alpha^\vee=\frac{2}{\langle\alpha,\alpha\rangle}\alpha=\frac{1}{3}\alpha

and so the squared-length of \alpha^\vee is \frac{2}{3}. These are now the short roots of the dual system. Scaling the dual system up by a factor of \sqrt{3} and rotating \frac{1}{12} of a turn, we recover the original G_2 root system.

March 8, 2010 Posted by | Geometry, Root Systems | 1 Comment

Transformations of Dynkin Diagrams

Before we continue constructing root systems, we want to stop and observe a couple things about transformations of Dynkin diagrams.

First off, I want to be clear about what kinds of transformations I mean. Given Dynkin diagrams X and Y, I want to consider a mapping \phi that sends every vertex of X to a vertex of Y. Further, if \xi_1 and \xi_2 are vertices of X joined by n edges, then \phi(\xi_1) and \phi(\xi_2) should be joined by n edges in Y as well, and the orientation of double and triple edges should be the same.

But remember that \xi_1 and \xi_2, as vertices, really stand for vectors in some base of a root system, and the number of edges connecting them encodes their Cartan integers. If we slightly abuse notation and write X and Y for these bases, then the mapping \phi defines images of the vectors in X, which is a basis of a vector space. Thus \phi extends uniquely to a linear transformation from the vector space spanned by X to that spanned by Y. And our assumption about the number of edges joining two vertices means that \phi preserves the Cartan integers of the base X.

Now, just like we saw when we showed that the Cartan matrix determines the root system up to isomorphism, we can extend \phi to a map from the root system generated by X to the root system generated by Y. That is, a transformation of Dynkin diagrams gives rise to a morphism of root systems.

Unfortunately, the converse doesn’t necessarily hold. Look back at our two-dimensional examples; specifically, consider the A_2 and G_2 root systems. Even though we haven’t really constructed the latter yet, we can still use what we see. There are linear maps taking the six roots in A_2 to either the six long roots or the six short roots in G_2. These maps are all morphisms of root systems, but none of them can be given by transformations of Dynkin diagrams. Indeed, the image of any base for A_2 would contain either two long roots in G_2 or two short roots, but any base of G_2 would need to contain both a long and a short root.

However, not all is lost. If we have an isomorphism of root systems, then it must send a base to a base, and thus it can be seen as a transformation of the Dynkin diagrams. Indeed, an isomorphism of root systems gives rise to an isomorphism of Dynkin diagrams.

The other observation we want to make is that duality of root systems is easily expressed in terms of Dynkin diagrams: just reverse all the oriented edges! Indeed, we’ve already seen this in the case of B_n and C_n root systems. When we get to constructing G_2 and F_4, we will see that they are self-dual, in keeping with the fact that reversing the directed edge in each case doesn’t really change the diagram.
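
This is easy to check directly from the definition of the dual root \alpha^\vee=\frac{2}{\langle\alpha,\alpha\rangle}\alpha: a short computation gives

\displaystyle\frac{2\langle\beta^\vee,\alpha^\vee\rangle}{\langle\alpha^\vee,\alpha^\vee\rangle}=\frac{2\langle\alpha,\beta\rangle}{\langle\beta,\beta\rangle}

so the Cartan matrix of the dual system is the transpose of the original one, and transposing the Cartan matrix is exactly what reversing the oriented edges does to a Dynkin diagram.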

March 5, 2010 Posted by | Geometry, Root Systems | 3 Comments

Construction of B- and C-Series Root Systems

Starting from our setup, we construct root systems corresponding to the B_n (for n\geq2) and C_n (for n\geq3) Dynkin diagrams. First will be the B_n series.

As we did for the D_n series, we start out with an n-dimensional space with the lattice I of integer-coefficient vectors. This time, though, we let \Phi be the collection of vectors \alpha\in I of squared-length 2 or 1: either \langle\alpha,\alpha\rangle=2 or \langle\alpha,\alpha\rangle=1. Explicitly, this is the collection of vectors \pm(\epsilon_i\pm\epsilon_j) for i\neq j (signs chosen independently) from the D_n root system, plus all the vectors \pm\epsilon_i.

Similarly to the A_n series, and exactly as in the D_n series, we define \alpha_i=\epsilon_i-\epsilon_{i+1} for 1\leq i\leq n-1. This time, though, to get vectors whose coefficients don’t sum to zero we can just define \alpha_n=\epsilon_n, which is independent of the other vectors. Since it has n vectors, the independent set \Delta=\{\alpha_i\} is a basis for our vector space.

As in the A_n and D_n cases, any vector \epsilon_i-\epsilon_j with i<j can be written

\epsilon_i-\epsilon_j=(\epsilon_i-\epsilon_{i+1})+\dots+(\epsilon_{j-1}-\epsilon_j)

This time, any of the \epsilon_i can be written

\epsilon_i=(\epsilon_i-\epsilon_{i+1})+\dots+(\epsilon_{n-1}-\epsilon_n)+\epsilon_n

Thus any vector \epsilon_i+\epsilon_j can be written as the sum of two of these vectors. And so \Delta is a base for \Phi.

We calculate the Cartan integers. For i and j less than n, we again have the same calculation as in the A_n case, which gives a simple chain of n-1 vertices. But when we involve \alpha_n things are a little different.

\displaystyle\frac{2\langle\epsilon_i-\epsilon_{i+1},\epsilon_n\rangle}{\langle\epsilon_n,\epsilon_n\rangle}=2\langle\epsilon_i-\epsilon_{i+1},\epsilon_n\rangle

\displaystyle\frac{2\langle\epsilon_n,\epsilon_i-\epsilon_{i+1}\rangle}{\langle\epsilon_i-\epsilon_{i+1},\epsilon_i-\epsilon_{i+1}\rangle}=\langle\epsilon_n,\epsilon_i-\epsilon_{i+1}\rangle

If 1\leq i<n-1, then both of these are zero. On the other hand, if i=n-1, then the first is -2 and the second is -1. Thus we get a double edge from \alpha_{n-1} to \alpha_n, and \alpha_{n-1} is the longer root. And so we obtain the B_n Dynkin diagram.

Considering the reflections with respect to the \alpha_i, we find that \sigma_{\alpha_i} swaps the coefficients of \epsilon_i and \epsilon_{i+1} for 1\leq i\leq n-1. But what about \alpha_n? We calculate

\displaystyle\begin{aligned}\sigma_{\alpha_n}(v)&=v-\frac{2\langle v,\alpha_n\rangle}{\langle\alpha_n,\alpha_n\rangle}\alpha_n\\&=v-2\langle v,\alpha_n\rangle\alpha_n\\&=v-2v^n\epsilon_n\end{aligned}

which flips the sign of the last coefficient of v. As we did in the D_n case, we can use this to flip the signs of whichever coefficients we want. Since these transformations send the lattice I back into itself, they send \Phi to itself and we do have a root system.

Finally, since we don’t have any restrictions on how many signs we can flip, the Weyl group for B_n is exactly the wreath product S_n\wr\mathbb{Z}_2.
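
As a check on this count (the wreath product has order 2^nn!), here is a quick Python sketch of my own that generates the Weyl group of B_3 by closing the three simple reflections under composition, and confirms that its order is 2^3\cdot3!=48:

    from fractions import Fraction

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def reflection_matrix(a):
        # sigma_a(e_i) = e_i - (2<e_i,a>/<a,a>) a, so the (i,j) entry
        # of the matrix is delta_ij - 2 a_i a_j / <a,a>
        n, norm = len(a), dot(a, a)
        return tuple(tuple(Fraction(int(i == j)) - Fraction(2 * a[i] * a[j], norm)
                           for j in range(n)) for i in range(n))

    def mul(m1, m2):
        n = len(m1)
        return tuple(tuple(sum(m1[i][k] * m2[k][j] for k in range(n))
                           for j in range(n)) for i in range(n))

    # simple roots of B_3: e_1 - e_2, e_2 - e_3, e_3
    gens = [reflection_matrix(a) for a in ((1, -1, 0), (0, 1, -1), (0, 0, 1))]
    group, frontier = set(gens), set(gens)
    while frontier:
        frontier = {mul(g, h) for g in gens for h in frontier} - group
        group |= frontier
    print(len(group))   # 48 = 2^3 * 3!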

So, what about C_n? This is just the dual root system to B_n! The roots of squared-length 2 are left unchanged, but the roots of squared-length 1 are doubled. The Weyl group is the same — S_n\wr\mathbb{Z}_2 — but now the root that was short in the base \Delta becomes the long one, and so we flip the direction of the double arrow in the Dynkin diagram, giving the C_n diagram.

March 4, 2010 Posted by | Geometry, Root Systems | 2 Comments
