The Unapologetic Mathematician

Mathematics for the interested outsider

A Hodge Star Example

I want to start getting into a nice, simple, concrete example of the Hodge star. We need an oriented, Riemannian manifold to work with, and for this example we take \mathbb{R}^3, which we cover with the usual coordinate patch with coordinates we call \{x,y,z\}.

To get a metric, we declare the coordinate covector basis \{dx,dy,dz\} to be orthonormal, which means that we have the matrix

\displaystyle g^{ij}=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}

and also the inner product matrix

\displaystyle g_{ij}=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}

since we know that g_{ij} and g^{ij} are inverse matrices. And so we get the canonical volume form

\displaystyle\omega=\sqrt{\det\left(g_{ij}\right)}dx\wedge dy\wedge dz=dx\wedge dy\wedge dz

We declare our orientation of \mathbb{R}^3 to be the one corresponding to this top form.

Okay, so now we can write down the Hodge star in its entirety. And in fact we’ve basically done this way back when we were talking about the Hodge star on a single vector space:

\displaystyle\begin{aligned}*1&=dx\wedge dy\wedge dz\\ *dx&=dy\wedge dz\\ *dy&=-dx\wedge dz=dz\wedge dx\\ *dz&=dx\wedge dy\\ *(dx\wedge dy)&=dz\\ *(dx\wedge dz)&=-dy\\ *(dy\wedge dz)&=dx\\ *(dx\wedge dy\wedge dz)&=1\end{aligned}

So, what does this buy us? Something else that we’ve seen before in the context of a single vector space. Let’s say that v and w are two vector fields defined on an open subset U\subseteq\mathbb{R}^3. We can write these out in our coordinate basis:

\displaystyle\begin{aligned}v&=v_x\frac{\partial}{\partial x}+v_y\frac{\partial}{\partial y}+v_z\frac{\partial}{\partial z}\\w&=w_x\frac{\partial}{\partial x}+w_y\frac{\partial}{\partial y}+w_z\frac{\partial}{\partial z}\end{aligned}

Now, we can use our metric to convert these vectors to covectors — vector fields to 1-forms. We use the matrix g_{ij} to get

\displaystyle\begin{aligned}g(v,\underline{\hphantom{X}})&=v_xdx+v_ydy+v_zdz\\g(w,\underline{\hphantom{X}})&=w_xdx+w_ydy+w_zdz\end{aligned}
Next we can wedge these together

\displaystyle\begin{aligned}g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}})=&(v_yw_z-v_zw_y)dy\wedge dz\\&+(v_zw_x-v_xw_z)dz\wedge dx\\&+(v_xw_y-v_yw_x)dx\wedge dy\end{aligned}

Now we come to the Hodge star!

\displaystyle\begin{aligned}*(g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}}))=&(v_yw_z-v_zw_y)dx\\&+(v_zw_x-v_xw_z)dy\\&+(v_xw_y-v_yw_x)dz\end{aligned}

and now we’re back to a 1-form, so we can use the metric to flip it back to a vector field:

\displaystyle\begin{aligned}g\left(*(g(v,\underline{\hphantom{X}})\wedge g(w,\underline{\hphantom{X}})),\underline{\hphantom{X}}\right)=&(v_yw_z-v_zw_y)\frac{\partial}{\partial x}\\&+(v_zw_x-v_xw_z)\frac{\partial}{\partial y}\\&+(v_xw_y-v_yw_x)\frac{\partial}{\partial z}\end{aligned}

Here, the outermost g(\underline{\hphantom{X}},\underline{\hphantom{X}}) is the inner product on 1-forms, while the inner ones are the inner product on vector fields. This is exactly the cross product of vector fields on \mathbb{R}^3.
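None of this is hard to check by machine. Here is a small Python sketch (my own illustration, not from the post; the function names are mine) that runs v and w through flat, wedge, star, and sharp with the identity metric and recovers the cross product:

```python
def wedge(a, b):
    # wedge of two 1-forms a = a_x dx + a_y dy + a_z dz, etc.;
    # result written in the 2-form basis (dy^dz, dz^dx, dx^dy)
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cross_via_hodge(v, w):
    # flat: with the identity metric, g(v, -) has the same components as v
    v_flat, w_flat = v, w
    two_form = wedge(v_flat, w_flat)
    # star: *(dy^dz) = dx, *(dz^dx) = dy, *(dx^dy) = dz, so the components
    # carry over unchanged; sharp is again the identity
    return two_form

print(cross_via_hodge((1, 0, 0), (0, 1, 0)))  # → (0, 0, 1)
```

With a non-identity metric the flat, star, and sharp steps would each pick up factors from g_{ij}; here every step is the identity on components, which is exactly why the composite looks like the familiar cross product.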


October 11, 2011 Posted by | Differential Geometry, Geometry | 3 Comments

The Hodge Star in Coordinates

It will be useful to be able to write down the Hodge star in a local coordinate system. So let’s say that we’re in an oriented coordinate patch (U,x) of an oriented Riemannian manifold M, which means that we have a canonical volume form that locally looks like

\displaystyle\omega=\sqrt{\lvert g_{ij}\rvert}dx^1\wedge\dots\wedge dx^n

Now, we know that any k-form on U can be written out as a sum of functions times k-fold wedges:

\displaystyle\eta=\sum\limits_{1\leq i_1<\dots<i_k\leq n}\eta_{i_1\dots i_k}dx^{i_1}\wedge\dots\wedge dx^{i_k}

Since the star operation is linear, we just need to figure out what its value is on the k-fold wedges. And for these the key condition is that for every k-form \zeta we have

\displaystyle\zeta\wedge*(dx^{i_1}\wedge\dots\wedge dx^{i_k})=\langle\zeta,dx^{i_1}\wedge\dots\wedge dx^{i_k}\rangle\omega

Since both sides of this condition are linear in \zeta, we also only need to consider values of \zeta which are k-fold wedges. If \zeta is a different k-fold wedge, then the inner product is zero, while if \zeta=dx^{i_1}\wedge\dots\wedge dx^{i_k} then

\displaystyle\begin{aligned}(dx^{i_1}\wedge\dots\wedge dx^{i_k})\wedge*(dx^{i_1}\wedge\dots\wedge dx^{i_k})&=\langle dx^{i_1}\wedge\dots\wedge dx^{i_k},dx^{i_1}\wedge\dots\wedge dx^{i_k}\rangle\omega\\&=\det\left(\langle dx^{i_j},dx^{i_l}\rangle\right)\omega\\&=\det\left(\delta^{jl}\right)\omega\\&=\sqrt{\lvert g_{ij}\rvert}dx^1\wedge\dots\wedge dx^n\end{aligned}

And so *(dx^{i_1}\wedge\dots\wedge dx^{i_k}) must be \pm\sqrt{\lvert g_{ij}\rvert} times the n-k-fold wedge made up of all the dx^i that do not show up in \eta. The positive or negative sign is decided by which order gives us an even permutation of all the dx^i on the left-hand side of the above equation.
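The complement-and-sign rule in the last paragraph is easy to mechanize. Here is a Python sketch (my own, assuming an orthonormal coordinate basis so that \sqrt{\lvert g_{ij}\rvert}=1) that computes the complementary wedge and the permutation sign, reproducing the \mathbb{R}^3 table from the previous post:

```python
def hodge_star(I, n):
    """*(dx^{i_1} ^ ... ^ dx^{i_k}) = sign * dx^{j_1} ^ ... ^ dx^{j_{n-k}},
    where J is the complement of I in (1, ..., n) and the sign is the parity
    of the concatenated permutation I + J of (1, ..., n)."""
    J = tuple(i for i in range(1, n + 1) if i not in I)
    perm = list(I) + list(J)
    # count inversions to get the sign of the permutation
    inversions = sum(1 for a in range(len(perm))
                       for b in range(a + 1, len(perm))
                       if perm[a] > perm[b])
    return (-1) ** inversions, J

# reproduce part of the R^3 table: *dy = -dx^dz, *(dx^dz) = -dy
print(hodge_star((2,), 3))    # → (-1, (1, 3))
print(hodge_star((1, 2), 3))  # → (1, (3,))
```

In a general (non-orthonormal) coordinate patch the overall factor \sqrt{\lvert g_{ij}\rvert} would multiply the result; the sign bookkeeping is the same.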

October 8, 2011 Posted by | Differential Geometry, Geometry | 5 Comments

The Hodge Star on Differential Forms

Let’s say that M is an orientable Riemannian manifold. We know that this lets us define a (non-degenerate) inner product on differential forms, and of course we have a wedge product of differential forms. We have almost everything we need to define an analogue of the Hodge star on differential forms; we just need a particular top — or “volume” — form at each point.

To this end, pick one or the other orientation, and let (U,x) be a coordinate patch such that the form dx^1\wedge\dots\wedge dx^n is compatible with the chosen orientation. We’d like to use this form as our top form, but it’s heavily dependent on our choice of coordinates, so it’s very much not a geometric object — our ideal choice of a volume form will be independent of particular coordinates.

So let’s see how this form changes; if (V,y) is another coordinate patch, we can assume that U=V by restricting each patch to their common intersection. We’ve already determined that the forms differ by a factor of the Jacobian determinant:

\displaystyle dx^1\wedge\dots\wedge dx^n=\det\left(\frac{\partial x^i}{\partial y^j}\right)dy^1\wedge\dots\wedge dy^n

What we want to do is multiply our form by some function that transforms the other way, so that when we put them together the product will be invariant.

Now, we already have something else floating around in our discussion: the metric tensor g. When we pick coordinates x^i we get a matrix-valued function:

\displaystyle g^x_{ij}=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)

and similarly with respect to the alternative coordinates y^i:

\displaystyle g^y_{ij}=g\left(\frac{\partial}{\partial y^i},\frac{\partial}{\partial y^j}\right)

So, what’s the difference between these two matrix-valued functions? We can calculate two ways:

\displaystyle\begin{aligned}g^x_{ij}&=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)\\&=g\left(\sum\limits_{k=1}^n\frac{\partial y^k}{\partial x^i}\frac{\partial}{\partial y^k},\sum\limits_{l=1}^n\frac{\partial y^l}{\partial x^j}\frac{\partial}{\partial y^l}\right)\\&=\sum\limits_{k,l=1}^n\frac{\partial y^k}{\partial x^i}\frac{\partial y^l}{\partial x^j}g\left(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l}\right)\\&=\sum\limits_{k,l=1}^n\frac{\partial y^k}{\partial x^i}\frac{\partial y^l}{\partial x^j}g^y_{kl}\end{aligned}

That is, we transform the metric tensor with two copies of the inverse Jacobian matrix. Indeed, we could have come up with this on general principles, since g has type (0,2) — a tensor of type (m,n) transforms with m copies of the Jacobian and n copies of the inverse Jacobian.

Anyway, now we can take the determinant of each side:

\displaystyle\lvert g^x_{ij}\rvert=\left\lvert\frac{\partial y^i}{\partial x^j}\right\rvert^2\lvert g^y_{ij}\rvert

and taking square roots we find:

\displaystyle\sqrt{\lvert g^x_{ij}\rvert}=\left\lvert\frac{\partial y^i}{\partial x^j}\right\rvert\sqrt{\lvert g^y_{ij}\rvert}

Thus the square root of the metric determinant is a function that transforms from one coordinate patch to the other by the inverse Jacobian determinant. And so we can define:

\displaystyle\omega_U=\sqrt{\lvert g^x_{ij}\rvert}dx^1\wedge\dots\wedge dx^n\in\Omega^n_M(U)

which does depend on the coordinate system to write down, but which is actually invariant under a change of coordinates! That is, \omega_U=\omega_V on the intersection U\cap V. Since the algebras of differential forms form a sheaf \Omega^n_M, we know that we can patch these \omega_U together into a unique \omega\in\Omega^n_M(M), and this is our volume form.
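As a concrete check (my own sketch, not from the post), compare Cartesian and polar coordinates on the plane: the metric determinant computed in polar coordinates is the square of the Jacobian determinant times the Cartesian one, so the square roots transform by exactly the factor needed to cancel the change in the coordinate wedge:

```python
import math

def jacobian(r, th):
    # Jacobian of the change of coordinates (r, θ) ↦ (r cos θ, r sin θ)
    return [[math.cos(th), -r * math.sin(th)],
            [math.sin(th),  r * math.cos(th)]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def metric_in_polar(r, th):
    # transform the Euclidean metric to polar components:
    # g_ij = Σ_k (∂x^k/∂u^i)(∂x^k/∂u^j), with u = (r, θ)
    J = jacobian(r, th)
    return [[sum(J[k][i] * J[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

r, th = 2.0, 0.7
g = metric_in_polar(r, th)       # works out to diag(1, r²)
sqrt_det_g = math.sqrt(det2(g))  # = r
# in Cartesian coordinates √|g| = 1, so the ratio is the Jacobian determinant
assert abs(sqrt_det_g - abs(det2(jacobian(r, th)))) < 1e-12
```

So \omega written in polar coordinates is r\,dr\wedge d\theta, which is the same 2-form as dx\wedge dy: the factor r exactly absorbs the Jacobian determinant.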

And now we can form the Hodge star, point by point. Given any k-form \eta we define the dual form *\eta to be the unique n-k-form such that

\displaystyle\zeta\wedge*\eta=\langle\zeta,\eta\rangle\omega
for all k-forms \zeta\in\Omega^k(M). Since at every point p\in M we have an inner product and a wedge \omega(p)\in A^n(\mathcal{T}^*_pM), we can find a *\eta(p)\in A^{n-k}(\mathcal{T}^*_pM). Some general handwaving will suffice to show that *\eta varies smoothly from point to point.

October 6, 2011 Posted by | Differential Geometry, Geometry | 12 Comments

Inner Products on Differential Forms

Now that we’ve defined inner products on 1-forms we can define them on k-forms for all other k. In fact, our construction will not depend on the fact that they come from a metric at all.

In fact, we’ve basically seen this already when we were just dealing with vector spaces and we introduced inner products on tensor spaces. Pretty much everything goes just as it did then, so going back and reviewing those constructions will pay dividends now.

Anyway, the upshot: we know that we can write any k-form as a sum of k-fold wedges, so the bilinearity of the inner product means we just need to figure out how to calculate the inner product of such k-fold wedges. And this works out like

\displaystyle\begin{aligned}\langle \alpha_1\wedge\dots\wedge \alpha_k,\beta_1\wedge\dots\wedge \beta_k\rangle&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle \alpha_{\pi(1)}\otimes\dots\otimes \alpha_{\pi(k)},\beta_{\hat{\pi}(1)}\otimes\dots\otimes \beta_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi\hat{\pi})\langle \alpha_{\pi(1)},\beta_{\hat{\pi}(1)}\rangle\dots\langle \alpha_{\pi(k)},\beta_{\hat{\pi}(k)}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\hat{\pi}\in S_k}\mathrm{sgn}(\pi^{-1}\hat{\pi})\langle \alpha_1,\beta_{\pi^{-1}(\hat{\pi}(1))}\rangle\dots\langle \alpha_{k},\beta_{\pi^{-1}(\hat{\pi}(k))}\rangle\\&=\frac{1}{k!}\frac{1}{k!}\sum\limits_{\pi\in S_k}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle \alpha_1,\beta_{\sigma(1)}\rangle\dots\langle \alpha_k,\beta_{\sigma(k)}\rangle\\&=\frac{1}{k!}\sum\limits_{\sigma\in S_k}\mathrm{sgn}(\sigma)\langle \alpha_1,\beta_{\sigma(1)}\rangle\dots\langle \alpha_k,\beta_{\sigma(k)}\rangle\\&=\frac{1}{k!}\det\left(\langle\alpha_i,\beta_j\rangle\right)\end{aligned}

Now let’s say we have an orthonormal basis of 1-forms \{\eta^i\} — a collection of 1-forms such that \langle\eta^i,\eta^j\rangle is the constant function with value 1 if i=j and 0 otherwise. Taking them in order gives us an n-fold wedge \eta^1\wedge\dots\wedge\eta^n. We can calculate its inner product with itself as follows:

\displaystyle\langle\eta^1\wedge\dots\wedge\eta^n,\eta^1\wedge\dots\wedge\eta^n\rangle=\frac{1}{n!}\det\left(\langle\eta^i,\eta^j\rangle\right)=\frac{1}{n!}\det\left(\delta^{ij}\right)=\frac{1}{n!}
We’ve seen this sort of thing before when talking about the volume of a parallelepiped, but it still feels like this wedge should have norm 1. For this reason, many authors rescale the inner products on k-forms to compensate. That is, they define the inner product on \Omega^k(U) to be the determinant above, rather than \frac{1}{k!} times the determinant like we wrote. We’ll stick with our version, but remember that not everyone does it this way.
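To pin down the convention, here is a short Python sketch (my own; the names are hypothetical) that computes the inner product of two k-fold wedges as \frac{1}{k!} times the Gram determinant, using the ordinary dot product on coefficient vectors as the underlying inner product on 1-forms:

```python
import itertools
import math

def perm_sign(p):
    # sign of a permutation, via its inversion count
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inversions

def det(m):
    # Leibniz expansion of the determinant; fine for small matrices
    n = len(m)
    return sum(perm_sign(p) * math.prod(m[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def wedge_inner(alphas, betas, ip):
    # ⟨α_1 ^ ... ^ α_k, β_1 ^ ... ^ β_k⟩ = (1/k!) det(⟨α_i, β_j⟩)
    k = len(alphas)
    gram = [[ip(a, b) for b in betas] for a in alphas]
    return det(gram) / math.factorial(k)

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
# an orthonormal pair in R^2: the 1/n! from above shows up with n = 2
print(wedge_inner([(1, 0), (0, 1)], [(1, 0), (0, 1)], dot))  # → 0.5
```

Dropping the `math.factorial(k)` divisor gives the rescaled convention that many other authors use.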

October 4, 2011 Posted by | Differential Geometry, Geometry | 2 Comments

Inner Products on 1-Forms

Our next step after using a metric to define an inner product on the module \mathfrak{X}(U) of vector fields over the ring \mathcal{O}(U) of smooth functions is to flip it around to the module \Omega^1(U) of 1-forms. The nice thing is that the hard part is already done. All we really need to do is define an inner product on the cotangent space \mathcal{T}^*_p(M); then the extension to 1-forms is exactly like extending from inner products on each tangent space to an inner product on vector fields.

And really this construction is just a special case of a more general one. Let’s say that \langle\underbar{\hphantom{X}},\underbar{\hphantom{X}}\rangle is an inner product on a vector space V. As we mentioned when discussing adjoint transformations, this gives us an isomorphism from V to its dual space V^*. That is, when we have a metric floating around we have a canonical way of identifying tangent vectors in \mathcal{T}_pM with cotangent vectors in \mathcal{T}^*_pM.

Everything is perfectly well-defined at this point, but let’s consider this a bit more explicitly. Say that \{e_i\} is a basis of \mathcal{T}_pM. We automatically have a dual basis \{\eta^i\} defined by \eta^i(e_j)=\delta^i_j, even before defining the metric. So if the inner product g_p defines a mapping \mathcal{T}_pM\to\mathcal{T}^*_pM, what does it look like with respect to these bases? It takes the vector e_i and sends it to a linear functional whose value at e_j is g_p(e_i,e_j). Since we get a number at each point p, we will also write this as a function g_{ij}(p). That is, we can break the image of e_i out as the linear combination

\displaystyle e_i\mapsto\sum\limits_{j=1}^ng_p(e_i,e_j)\eta^j=\sum\limits_{j=1}^ng_{ij}(p)\eta^j

What about a vector with components v^i? We easily calculate

\displaystyle\sum\limits_{i=1}^nv^ie_i\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j
So g_{ij} is the matrix of this transformation. The fact that both indices are on the bottom tells us that we are moving from vectors to covectors.

The same sort of reasoning can be applied to the inner product on the dual space. If we again write it as g_p, then we get another matrix:

\displaystyle g^{ij}(p)=g_p(\eta^i,\eta^j)

which tells us how to send a basis covector \eta^i to a vector:

\displaystyle \eta^i\mapsto\sum\limits_{j=1}^ng^{ij}(p)e_j

and thus we can calculate the image of any covector with components \lambda_i:

\displaystyle\sum\limits_{i=1}^n\lambda_i\eta^i\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^n\lambda_ig^{ij}(p)\right)e_j
But these are supposed to be inverses of each other! Thus we can send a vector v=v^ie_i to a covector and back:

\displaystyle\sum\limits_{i=1}^nv^ie_i\mapsto\sum\limits_{j=1}^n\left(\sum\limits_{i=1}^nv^ig_{ij}(p)\right)\eta^j\mapsto\sum\limits_{k=1}^n\left(\sum\limits_{i,j=1}^nv^ig_{ij}(p)g^{jk}(p)\right)e_k
If this is to be the original vector back, the coefficient of e_k must be v^k, which means the inner sum — the matrix product of g_{ij} and g^{jk} — must be the Kronecker delta. That is, g^{jk} must be the right matrix inverse of g_{ij}.

Similarly, if we start with a covector we will find that g^{ij} must be the left matrix inverse of g_{jk}. Since it’s a left and a right inverse, it must be the inverse; in particular, g_{ij} must be invertible, which is equivalent to the assumption that g_p is nondegenerate! It also means that we can always find the matrix of the inner product on the dual space in terms of the dual basis, assuming we have the matrix of the inner product on the original space.
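Numerically this is just matrix inversion. A quick Python sketch (my own, for a hypothetical 2×2 metric) lowers an index with g_{ij}, raises it again with the inverse matrix g^{ij}, and recovers the original components:

```python
g = [[2.0, 1.0],
     [1.0, 3.0]]  # a hypothetical symmetric, positive-definite g_ij

det_g = g[0][0] * g[1][1] - g[0][1] * g[1][0]
g_inv = [[ g[1][1] / det_g, -g[0][1] / det_g],
         [-g[1][0] / det_g,  g[0][0] / det_g]]  # g^ij, the matrix inverse

def lower(v):
    # v^i e_i ↦ Σ_ij v^i g_ij η^j : vector components to covector components
    return [sum(v[i] * g[i][j] for i in range(2)) for j in range(2)]

def raise_(lam):
    # λ_i η^i ↦ Σ_ij λ_i g^ij e_j : covector components back to vector components
    return [sum(lam[i] * g_inv[i][j] for i in range(2)) for j in range(2)]

v = [1.0, -2.0]
round_trip = raise_(lower(v))
assert all(abs(a - b) < 1e-12 for a, b in zip(v, round_trip))
```

If the matrix g_{ij} were singular the inversion would fail, which is the computational face of the nondegeneracy requirement.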

And to return to differential geometry, let’s say we have a coordinate patch (U,x). We get a basis of coordinate vector fields, which let us define the matrix-valued function

\displaystyle g_{ij}(p)=g_p\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)

This much we calculate from the metric we are given by assumption. But then we can invert the matrix at each point to get another one:

\displaystyle g^{ij}(p)=g_p\left(dx^i,dx^j\right)

where this is how we define the inner product on covectors. Of course, the situation is entirely symmetric, and if we’d started with a symmetric tensor field of type (2,0) that defined an inner product at each point, we could flip it over to get a metric.

October 1, 2011 Posted by | Differential Geometry, Geometry | 5 Comments

Inner Products of Vector Fields

Now that we can define the inner product of two vectors using a metric g, we want to generalize this to apply to vector fields.

This should be pretty straightforward: if v and w are vector fields on an open region U it gives us vectors v_p and w_p at each p\in U. We can hit these pairs with g_p to get g_p(v_p,w_p), which is a real number. Since we get such a number at each point p, this gives us a function g(v,w):U\to\mathbb{R}.

That this g is a bilinear function is clear. In fact we’ve already implied this fact when saying that g is a tensor field. But in what sense is it an inner product? It’s symmetric, since each g_p is, and positive definite as well. To be more explicit: g_p(v_p,v_p)\geq0 with equality if and only if v_p is the zero vector in \mathcal{T}_pM. Thus the function g(v,v) always takes on nonnegative values, is zero exactly where v is, and is the zero function if and only if v is the zero vector field.

What about nondegeneracy? This is a little trickier. Given a nonzero vector field, we can find some point p where v_p is nonzero, and we know that there is some w_p such that g_p(v_p,w_p)\neq0. In fact, we can find some region U around p where v is everywhere nonzero, and for each point q\in U we can find a w_q such that g_q(v_q,w_q)\neq0. The question is: can we do this in such a way that w_q is a smooth vector field?

The trick is to pick some coordinate map x on U, shrinking the region if necessary. Then there must be some i such that

\displaystyle g_p\left(v_p,\frac{\partial}{\partial x^i}\bigg\vert_p\right)\neq0

because otherwise g_p would be degenerate. Now we get a smooth function near p:

\displaystyle g\left(v,\frac{\partial}{\partial x^i}\right)

which is nonzero at p, and so must be nonzero in some neighborhood of p. Letting w be this coordinate vector field gives us a vector field that, when paired with v using g, gives a smooth function that is not identically zero. Thus g is nondegenerate, and is worthy of the title “inner product” on the module of vector fields \mathfrak{X}(U) over the ring of smooth functions \mathcal{O}(U).

Notice that we haven’t used the fact that the g_p are positive-definite except in the proof that g is, which means that if g is merely pseudo-Riemannian then it is still symmetric and nondegenerate, so it’s still sort of like an inner product, in the same way that a symmetric, nondegenerate, but indefinite form is still sort of like an inner product.

September 30, 2011 Posted by | Differential Geometry, Geometry | 2 Comments


Isometries

Sorry for the delay but it’s been sort of hectic with work, my non-math hobbies, and my latest trip up to DC.

Anyway, now that we’ve introduced the idea of a metric on a manifold, it’s natural to talk about mappings that preserve them. We call such maps “isometries”, since they give the same measurements on tangent vectors before and after their application.

Now, normally there’s no canonical way to translate tensor fields from one manifold to another so that we can compare them, but we’ve seen one case where we can do it: pulling back differential forms. This works because differential forms act entirely on contravariant vectors, so we can pull back a form by using the derivative to push forward vectors and then evaluating.

So let’s get explicit: say we have a metric g_N on the manifold N, which gives us an inner product g_{N,q} on each tangent space \mathcal{T}_qN. If we have a smooth map f:M\to N, we want to use it to define a metric f^*g_N on M. That is, we want an inner product (f^*g_N)_p on each tangent space \mathcal{T}_pM.

Now, given vectors v and w in \mathcal{T}_pM, we can use the derivative f_* to push them forward to f_*v and f_*w in \mathcal{T}_{f(p)}N. We can hit these with the inner product g_{N,f(p)}, defining

\displaystyle\left[(f^*g_N)_p\right](v,w)=g_{N,f(p)}\left(f_*v,f_*w\right)
It should be straightforward to check that this is indeed an inner product. To be thorough, we must also check that f^*g_N is actually a tensor field. That is, as we move p continuously around M the inner product (f^*g_N)_p varies smoothly.

To check this, we will use our trick: let (U,x) be a coordinate patch around p, giving us the basic coordinate vector fields \frac{\partial}{\partial x^i} in the patch. If (V,y) is a coordinate patch around f(p), then we know how to calculate the derivative f_* applied to these vectors:

\displaystyle f_*\frac{\partial}{\partial x^i}=\sum\limits_{k=1}^n\frac{\partial(y^k\circ f)}{\partial x^i}\frac{\partial}{\partial y^k}

so we can stick this into the above calculation:

\displaystyle\begin{aligned}\left[(f^*g_N)_p\right]\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)&=g_{N,f(p)}\left(f_*\frac{\partial}{\partial x^i},f_*\frac{\partial}{\partial x^j}\right)\\&=g_{N,f(p)}\left(\sum\limits_{k=1}^n\frac{\partial(y^k\circ f)}{\partial x^i}\frac{\partial}{\partial y^k},\sum\limits_{l=1}^n\frac{\partial(y^l\circ f)}{\partial x^j}\frac{\partial}{\partial y^l}\right)\\&=\sum\limits_{k,l=1}^n\frac{\partial(y^k\circ f)}{\partial x^i}\frac{\partial(y^l\circ f)}{\partial x^j}g_{N,f(p)}\left(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l}\right)\end{aligned}

Since we assumed that g_N is a metric, the evaluation on the right side is a smooth function, and thus the left side is as well. So we conclude that f^*g_N is a smooth tensor field, and thus a metric.
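As a worked example (my own, not from the post): pull back the Euclidean metric on \mathbb{R}^3 along the usual unit-sphere parametrization. Pushing the coordinate vector fields forward with finite differences and pairing them with the ambient dot product recovers the familiar round metric \mathrm{diag}(1,\sin^2\theta):

```python
import math

def f(th, ph):
    # example map: the unit-sphere parametrization into R^3
    return (math.sin(th) * math.cos(ph),
            math.sin(th) * math.sin(ph),
            math.cos(th))

def pushforwards(th, ph, h=1e-6):
    # f_* applied to ∂/∂θ and ∂/∂φ, approximated by central differences
    d_th = [(a - b) / (2 * h) for a, b in zip(f(th + h, ph), f(th - h, ph))]
    d_ph = [(a - b) / (2 * h) for a, b in zip(f(th, ph + h), f(th, ph - h))]
    return d_th, d_ph

def pullback_metric(th, ph):
    # [(f*g)_p](∂_i, ∂_j) = g(f_* ∂_i, f_* ∂_j), with g the Euclidean dot product
    d_th, d_ph = pushforwards(th, ph)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [[dot(d_th, d_th), dot(d_th, d_ph)],
            [dot(d_ph, d_th), dot(d_ph, d_ph)]]

g = pullback_metric(1.0, 0.3)
# the round metric on the sphere: diag(1, sin²θ)
assert abs(g[0][0] - 1.0) < 1e-6
assert abs(g[1][1] - math.sin(1.0) ** 2) < 1e-6
assert abs(g[0][1]) < 1e-6
```

Note that this f is nowhere an isometry of the flat metric on the coordinate patch, since the pulled-back g_{\phi\phi}=\sin^2\theta is not constantly 1.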

Now if M comes with its own metric g_M, we can ask if the pull-back f^*g_N is equal to g_M at each point. If it is, then we call f an isometry. It’s also common to say that f “preserves the metric”, even though the metric gets pulled back not pushed forward.

September 27, 2011 Posted by | Differential Geometry, Geometry | 2 Comments

(Pseudo-)Riemannian Metrics

Ironically, in order to tie what we’ve been doing back to more familiar material, we actually have to introduce more structure. It’s sort of astonishing in retrospect how much structure comes along with the most basic, intuitive cases, or how much we can do before even using that structure.

In particular, we need to introduce something called a “Riemannian metric”, which will move us into the realm of differential geometry instead of just topology. Everything up until this point has been concerned with manifolds as “shapes”, but we haven’t really had any sense of “size” or “angle” or anything else we could measure. Having these notions — and asking that they be preserved — is the difference between geometry and topology.

Anyway, a Riemannian metric on a manifold M is nothing more than a certain kind of tensor field g of type (0,2) on M. At each point p\in M, the field g gives us a tensor:

\displaystyle g_p\in\mathcal{T}_p^*M\otimes\mathcal{T}_p^*M\cong\left(\mathcal{T}_pM\otimes\mathcal{T}_pM\right)^*

We can interpret this as a bilinear function which takes in two vectors v_p,w_p\in\mathcal{T}_pM and spits out a number g_p(v_p,w_p). That is, g_p is a bilinear form on the space \mathcal{T}_pM of tangent vectors at p.

So, what makes g into a Riemannian metric? We now add the assumption that g_p is not just a bilinear form, but that it’s an inner product. That is, g_p is symmetric, nondegenerate, and positive-definite. We can let the last condition slip a bit, in which case we call g a “pseudo-Riemannian metric”. When equipped with a metric, we call M a “(pseudo-)Riemannian manifold”.

It’s common to also say “Riemannian” in the case of negative-definite metrics, since there’s little difference between the cases of signature (n,0) and (0,n). Another common special case is that of a “Lorentzian” metric, which is signature (n-1,1) or (1,n-1).

As we might expect, g is called a metric because it lets us measure things. Specifically, since g_p is an inner product it gives us notions of the length and angle for tangent vectors at p. We must be careful here; we do not yet have a way of measuring distances between points on the manifold M itself. The metric only tells us about the lengths of tangent vectors; it is not a metric in the sense of metric spaces. However, if two curves cross at a point p we can use their tangent vectors to define the angle between the curves, so that’s something.
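Concretely, here is what “measuring with g_p” amounts to, in a small Python sketch of my own (the matrix shown is just an example): given the matrix of g_p in some basis, lengths of and angles between tangent vectors come straight from the usual inner-product formulas:

```python
import math

def length(g, v):
    # |v| = sqrt(g_p(v, v))
    n = len(v)
    return math.sqrt(sum(v[i] * g[i][j] * v[j]
                         for i in range(n) for j in range(n)))

def angle(g, v, w):
    # cos θ = g_p(v, w) / (|v| |w|)
    n = len(v)
    ip = sum(v[i] * g[i][j] * w[j] for i in range(n) for j in range(n))
    return math.acos(ip / (length(g, v) * length(g, w)))

g_p = [[1.0, 0.0],
       [0.0, 1.0]]  # the Euclidean inner product at a point, as an example

print(length(g_p, (3.0, 4.0)))             # → 5.0
print(angle(g_p, (1.0, 0.0), (0.0, 1.0)))  # → 1.5707963267948966 (π/2)
```

For a pseudo-Riemannian g_p the quantity g_p(v,v) can be negative, so `length` as written would fail on such vectors; that is the indefiniteness we allowed ourselves above.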

September 20, 2011 Posted by | Differential Geometry, Differential Topology, Geometry, Topology | 20 Comments

Root Systems Recap

Let’s look back over what we’ve done.

After laying down some definitions on reflections, we defined a root system \Phi as a collection of vectors with certain properties. Specifically, each vector is a point in a vector space, and it also gives us a reflection of the same vector space. Essentially, a root system is a finite collection of such vectors and corresponding reflections so that the reflections shuffle the vectors among each other. Our project was to classify these configurations.

The flip side of seeing a root system as a collection of vectors is seeing it as a collection of reflections, and these reflections generate a group of transformations called the Weyl group of the root system. It’s one of the most useful tools we have at our disposal through the rest of the project.

To get a perspective on the classification, we defined the category of root systems. In particular, this leads us to the idea of decomposing a root system into irreducible root systems. If we can classify these pieces, any other root system will be built from them.

Like a basis of a vector space, a base \Delta of a root system \Phi contains enough information to reconstruct the whole root system. Further, any two bases for a given root system look essentially the same, and the Weyl group shuffles them around. So really what we need to classify are the irreducible bases; for each such base there will be exactly one irreducible root system.

To classify these, we defined Cartan matrices and verified that we can use them to reconstruct a root system. Then we turned Cartan matrices into Dynkin diagrams.

Finally, we could start the real work of classification: a list of the Dynkin diagrams that might arise from root systems. And then we could actually construct root systems that gave rise to each of these examples.

As a little followup, we could look back at the category of root systems and use the Dynkin diagrams and Weyl groups to completely describe the automorphism group of any root system.

Root systems come up in a number of interesting contexts. I’ll eventually be talking about them as they relate to Lie algebras, but (as we’ve just seen) they can be introduced and discussed as a self-motivated, standalone topic in geometry.

March 12, 2010 Posted by | Geometry, Root Systems | 1 Comment

The Automorphism Group of a Root System

Finally, we’re able to determine the automorphism group of our root systems. That is, given an object in the category of root systems, the morphisms from that root system back to itself (as usual) form a group, and it’s interesting to study the structure of this group.

First of all, right when we first talked about the category of root systems, we saw that the Weyl group \mathcal{W} of \Phi is a normal subgroup of \mathrm{Aut}(\Phi). This will give us most of the structure we need, but there may be automorphisms of \Phi that don’t come from actions of the Weyl group.

So fix a base \Delta of \Phi, and consider the collection \Gamma of automorphisms which send \Delta back to itself. We’ve shown that the action of \mathcal{W} on bases of \Phi is simply transitive, which means that if \tau\in\Gamma comes from the Weyl group, then \tau can only be the identity transformation. That is, \Gamma\cap\mathcal{W}=\{1\} as subgroups of \mathrm{Aut}(\Phi).

On the other hand, given an arbitrary automorphism \tau\in\mathrm{Aut}(\Phi), it sends \Delta to some other base \Delta'. We can find a \sigma\in\mathcal{W} sending \Delta' back to \Delta. And so \sigma\tau\in\Gamma; it’s an automorphism sending \Delta to itself. That is, \tau\in\mathcal{W}\Gamma; any automorphism can be written as the composition of one from \Gamma and one from \mathcal{W}, and since \Gamma\cap\mathcal{W}=\{1\} this expression is unique. Therefore we can write the automorphism group as the semidirect product:

\displaystyle\mathrm{Aut}(\Phi)=\mathcal{W}\rtimes\Gamma
All that remains, then, is to determine the structure of \Gamma. But each \tau\in\Gamma shuffles around the roots in \Delta, and these roots correspond to the vertices of the Dynkin diagram of the root system. And for \tau to be an automorphism of \Phi, it must preserve the Cartan integers, and thus the numbers of edges between any pair of vertices in the Dynkin diagram. That is, each element of \Gamma must give a symmetry of the Dynkin diagram of \Phi, and conversely any such diagram symmetry gives an automorphism in \Gamma.

So we can determine \Gamma just by looking at the Dynkin diagram! Let’s see what this looks like for the connected diagrams in the classification theorem, since disconnected diagrams just add transformations that shuffle isomorphic pieces.

Any diagram with a multiple edge — G_2, F_4, and the B_n and C_n series — has only the trivial symmetry. Indeed, the multiple edge has a direction, and it must be sent back to itself with the same direction. It’s easy to see that this specifies where every other part of the diagram must go.

The diagram A_1 is a single vertex, and has no nontrivial symmetries either. But the diagram A_n for n\geq2 can be flipped end-over-end. We thus find that \Gamma=\mathbb{Z}_2 for all these diagrams. The diagram E_6 can also be flipped end-over-end, leaving the one “side” vertex fixed, and we again find \Gamma=\mathbb{Z}_2, but E_7 and E_8 have no nontrivial symmetries.

There is a symmetry of the D_n diagram that swaps the two “tails”, so \Gamma=\mathbb{Z}_2 for n\geq5. For n=4, something entirely more interesting happens. Now the “body” of the diagram also has length 1, and we can shuffle it around just like the “tails”. And so for D_4 we find \Gamma=S_3 — the group of permutations of these three vertices. This “triality” shows up in all sorts of interesting applications that connect back to Dynkin diagrams and root systems.
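The D_4 triality is small enough to find by brute force. This Python sketch (my own illustration) models the D_4 diagram as a star graph with central vertex 0 and legs 1, 2, 3, and counts the vertex permutations preserving the edge set:

```python
import itertools

# the D_4 Dynkin diagram: vertex 0 joined by single edges to 1, 2, 3
edges = {frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})}
vertices = (0, 1, 2, 3)

autos = []
for p in itertools.permutations(vertices):
    # a permutation is a diagram symmetry when it maps the edge set to itself
    image = {frozenset({p[a], p[b]}) for a, b in map(tuple, edges)}
    if image == edges:
        autos.append(p)

# the central vertex is the only one of degree 3, so it stays fixed,
# and the three legs permute freely: |Γ| = |S_3| = 6
print(len(autos))  # → 6
```

Running the same search on the A_n path graph for n\geq2 would find exactly two symmetries, the identity and the end-over-end flip, matching \Gamma=\mathbb{Z}_2 above.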

March 11, 2010 Posted by | Geometry, Root Systems | 1 Comment