Well it’s been quite a while, but I think I can carve out the time to move forwards again. I was all set to start with Lie algebras today, only to find that I’ve already defined them over a year ago. So let’s pick up with a recap: a Lie algebra is a module — usually a vector space over a field F — called L, equipped with a bilinear operation which we write as [x, y]. We often require such operations to be associative, but this time we impose the following two conditions: first, [x, x] = 0 for all x; and second, the "Jacobi identity" [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0.
Now, as long as we’re not working in a field where 1 + 1 = 0 — and usually we’re not — we can use bilinearity to expand the first condition:

0 = [x + y, x + y] = [x, x] + [x, y] + [y, x] + [y, y] = [x, y] + [y, x]

so [x, y] = -[y, x]. This antisymmetry always holds, but we can only go the other way (from antisymmetry back to [x, x] = 0) if the characteristic of F is not 2, as stated above.
The second condition is called the “Jacobi identity”, and antisymmetry allows us to rewrite it as:

[x, [y, z]] = [[x, y], z] + [y, [x, z]]
That is, bilinearity says that we have a linear mapping ad: L → End(L) that sends an element x to the linear endomorphism ad(x) = [x, -]. And the Jacobi identity says that this mapping actually lands in the subspace of “derivations” — those which satisfy something like the Leibniz rule for derivatives. To see what I mean, compare the rewritten Jacobi identity to the product rule:

d(fg)/dt = (df/dt)g + f(dg/dt)

where ad(x) takes the place of d/dt, y takes the place of f, and z takes the place of g. And the operations are changed around (multiplication becomes the bracket), but you should see the similarity.
Lie algebras obviously form a category whose morphisms are called Lie algebra homomorphisms. Just as we might expect, such a homomorphism is a linear map φ that preserves the bracket:

φ([x, y]) = [φ(x), φ(y)]
We can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are closed under the bracket. Quotient algebras are more commonly called “homomorphic images” in the literature, and we’ll talk more about them later.
We will take as a general assumption that our Lie algebras are finite-dimensional, though infinite-dimensional ones absolutely exist and are very interesting.
And I’ll finish the recap by reminding you that we can get Lie algebras from associative algebras; any associative algebra can be given a bracket defined by the commutator:

[a, b] = ab - ba
The above link shows that this satisfies the Jacobi identity, or you can take it as an exercise.
At last we’re ready to explain the Higgs mechanism. We start where we left off last time: a complex scalar field φ with a gauged phase symmetry that brings in a (massless) gauge field. The difference is that now we add a new self-interaction term to the Lagrangian:
where λ is a constant that determines the strength of the self-interaction. We recall the gauged symmetry transformations:
If we write down an expression for the energy of a field configuration we get a bunch of derivative terms — basically like kinetic energy — that all occur with positive signs and then the potential energy term that comes in the brackets above:
Now, the “ground state” of the system should be one that minimizes the total energy, but the usual choice of setting all the fields equal to zero doesn’t do that here. The potential has a “bump” in the center, like the punt in the bottom of a wine bottle, or like a sombrero.
So instead of using that as our ground state, we’ll choose one. It doesn’t matter which, but it will be convenient to pick:
where v is chosen to minimize the potential. We can still use the same field φ as before, but now we will write it in terms of two real fields measuring the displacement from the new ground state:
Since the ground state is a point along the real axis in the complex plane, vibrations in one of these fields measure movement that changes the length of φ, while vibrations in the other measure movement that changes the phase.
We want to consider the case where these vibrations are small — the field basically sticks near its ground state — because when they get big enough we have enough energy flying around in the system that we may as well just work in the more symmetric case anyway. So we are justified in only working out our new Lagrangian in terms up to quadratic order in the fields. This will also make our calculations a lot simpler. Indeed, to quadratic order (and ignoring an irrelevant additive constant) we have
so vibrations of the phase field don’t show up at all in quadratic interactions.
We should also write out our covariant derivative up to linear terms:
so that the quadratic Lagrangian is
Now, the term in parentheses on the right looks like the mass term of a vector field with mass . But what is the kinetic term of this field?
And so we can write down the final form of our quadratic Lagrangian:
In order to deal with the fact that our normal vacuum was not a minimum for the energy, we picked a new ground state that did minimize energy. But the new ground state doesn’t have the same symmetry the old one did — we have broken the symmetry — and when we write down the Lagrangian in terms of excitations around the new ground state, we find it convenient to change variables. The previously massless gauge field “eats” part of the scalar field and gains a mass, leaving behind the Higgs field.
This is essentially what’s going on in the Standard Model. The biggest difference is that instead of the initial symmetry being a simple phase, which just amounts to rotations around a circle, we have a (slightly) more complicated symmetry to deal with. For those who are familiar with some classical groups, we start with an action of SU(2) on a column vector made of two complex scalar fields with a potential of the form:
which is invariant under the obvious action of SU(2) and a phase action of U(1). Since the group SU(2) is three-dimensional there are three gauge fields to introduce for its symmetry and one more for the U(1) symmetry.
When we pick a ground state that breaks the symmetry it doesn’t completely break; a one-dimensional subgroup still leaves the new ground state invariant — though it’s important to notice that this is not just the U(1) factor, but rather a mixture of this factor and a one-parameter subgroup of SU(2). Thus only three of these gauge fields gain mass; they become the W and Z bosons that carry the weak force. The other gauge field remains massless, and becomes the photon.
At high enough energies — when the fields bounce around enough that the bump doesn’t really affect them — then the symmetry comes back and we see that the electromagnetic and weak interactions are really two different aspects of the same, unified phenomenon, just like electricity and magnetism are really two different aspects of electromagnetism.
Now we’re starting to get to the really meaty stuff. We talked about the phase symmetry of the complex scalar field (multiplying φ by a constant phase):
which basically wants to express the idea that the physics of this field only really depends on the length of the complex field values and not on their phases. But another big principle of physics is locality — what happens here doesn’t instantly affect what happens elsewhere — so why should the phase change be global?
To answer this, we “gauge” the symmetry and make it local. The origin of the term is fascinating, but takes us too far afield. The upshot is that we now have the symmetry transformation:
where the phase α(x) is no longer a constant, but a function of the spacetime point x.
And here’s the big problem: since α now varies from point to point, it affects our derivative terms! Before we had
and similarly for the conjugate field. We say that the derivatives are “covariant” under the transformation; they transform in the same way as the underlying fields. And this is what lets us say that
and makes the whole Lagrangian symmetric.
On the other hand, what do we see now?
We pick up this extra term when we differentiate, and it ruins the symmetry.
The way out is to add another field that can “soak up” this extra term. Since the derivative is a vector, we introduce a vector field A and say that it transforms as
Next, we introduce a new derivative operator D built from the ordinary derivative and the new field A. That is:
And we calculate
So this new derivative does vary the same way as the underlying field does! We call D the “covariant derivative”. If we use it in our Lagrangian, we do recover our symmetry, though now we’ve got a new field A to contend with. Just like the electromagnetic potential we use the derivative to write
which is now symmetric under the gauged symmetry transformations.
It may not be apparent, but this Lagrangian does contain interaction terms. We can expand out the second term to find:
Our rules of thumb tell us that if we vary the Lagrangian with respect to the gauge field A we get the field equation
which — if we expand the field out into “electric” and “magnetic” fields as if it’s the Faraday field — gives us Gauss’ and Ampère’s laws in the presence of a charge-current density.
The charge-current, in particular, we can write as
or, in a gauge-invariant manner, as
which is just the conserved current from last time with the regular derivatives replaced by covariant ones. Similarly, varying with respect to the scalar field φ we find the “covariant” Klein-Gordon equation:
and, when this holds, we can show that the charge-current above is indeed conserved.
So we’ve found that if we take the global symmetry of the complex scalar field and “gauge” it, something like electromagnetism naturally pops out, and the particle of the complex scalar field interacts with it like charged particles interact with the real electromagnetic field.
This is part two of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 first.
Okay, now that we’re sold on the Lagrangian formalism you can rest easy: I’m not going to go through the gory details of any more variational calculus. I do want to clear a couple of notational things out of the way, though. They might not all matter for the purposes of our discussion, but better safe than sorry.
First off, I’m going to use a system of units where the speed of light is 1. That is, if my unit of time is seconds, my unit of distance is light-seconds. Mostly this helps keep annoying constants out of the way of the equations; physicists do this basically all the time. The other thing is that I’m going to work in four-dimensional spacetime, meaning we’ve got four coordinates: x_0 (time), x_1, x_2, and x_3. We calculate dot products by writing x·y = -x_0y_0 + x_1y_1 + x_2y_2 + x_3y_3. Yes, that minus sign is weird, but that’s just how spacetime works.
Also instead of writing spacetime vectors, I’m going to write down their components, indexed by a subscript that’s meant to run from 0 to 3. Usually this will be a Greek letter from the middle of the alphabet like μ or ν. Similarly, instead of writing out the vector composed of the four spacetime derivatives of a field I’ll just write down the derivatives ∂_μφ, and I’ll write ∂_μ instead of ∂/∂x_μ.
Along with writing down components instead of vectors I won’t be writing dot products explicitly. Instead I’ll use the common convention that when the same index appears twice we’re supposed to sum over it, remembering that the zero component gets a minus sign. That is, x_μy_μ is the dot product from above. Similarly, we can multiply a matrix with entries M_{μν} by a vector with components v_ν to get the vector M_{μν}v_ν; notice how the summed index ν gets “eaten up” in the process.
Okay, now even without going through the details there’s a fair bit we can infer from general rules of thumb. Any term in the Lagrangian that contains a derivative of the field we’re varying is almost always going to be the squared-length of that derivative, and the resulting term in the variational equations will be the negative of a second derivative of the field. For any term that involves the plain field we basically take its derivative as if the field were a variable. Any term that doesn’t involve the field at all just goes away. And since we prefer positive second-derivative terms to negative ones, we usually flip the sign of the resulting equation; since the other side is zero this doesn’t matter.
So if, for instance, we have the following Lagrangian of a complex scalar field φ:
we get two equations by varying the field φ and its complex conjugate separately:
It may not seem to make sense to vary the field and its complex conjugate separately, but the two equations we get at the end are basically the same anyway, so we’ll let this slide for now. Anyway, what we get is a second derivative of φ set equal to m² times φ itself, which we call the “Klein-Gordon wave equation” for φ. Since the m²φ term in the Lagrangian’s potential gives rise to the m²φ term in the field equations, we call this the “mass term”.
In the case of electromagnetism in a vacuum we just have the electromagnetic fields and no charge or current distribution. We use the Faraday field F to write down the Lagrangian
which gives rise to the field equations
or, equivalently in terms of the potential field A:
The second equation just expresses a choice we can make to always consider divergence-free potentials without affecting the predictions of electromagnetism; the first equation looks like the Klein-Gordon equation again, except there’s no mass term. Indeed, we know that photons — the particles associated to the electromagnetic field — have no rest mass!
Turning back to the complex scalar field, we notice that there’s a certain symmetry to this Lagrangian. Specifically, if we replace φ and its conjugate by
for any constant phase angle, we get the same result. This is important, and it turns out to be a clue that leads us — I won’t go into the details — to consider the quantity
This is interesting because we can calculate
where we’ve used the results of the Klein-Gordon equations. Since this divergence vanishes, it is a suitable vector field to use as a charge-current distribution; the equation just says that charge is conserved! That is, we can write down a Lagrangian involving both electromagnetism — that is, our “massless vector field” A — and our scalar field φ:
where the “coupling constant” tells us how important the “interaction term” involving both the vector and scalar fields is. If it’s zero, then the fields don’t actually interact at all, but if it’s large then they affect each other very strongly.
This is part one of a four-part discussion of the idea behind how the Higgs field does its thing.
Wow, about six months’ hiatus as other parts of my life have taken precedence. But I drag myself slightly out of retirement to try to fill a big gap in the physics blogosphere: how the Higgs mechanism works.
There’s a lot of news about this nowadays, since the Large Hadron Collider has announced evidence of a “Higgs-like” particle. As a quick explanation of that, I use an analogy I made up on Twitter: “If Mirror-Spock exists, he has a goatee. We have found a man with a goatee. We do not yet know if he is Mirror-Spock.”
So, what is the Higgs boson? Well, it’s the particle expression of the Higgs field. That doesn’t explain anything, so we go one step further. What is the Higgs field? It’s the (conjectured) thing that gives some other particles (some of their) mass, in certain situations where normally we wouldn’t expect there to be any mass. And then there’s hand-waving about something like the ether that particles have to push through or shag carpet that they have to rub against that slows them down and hey, mass. Which doesn’t really explain anything, but sort of sounds like it might and so people nod sagely and then either forget about it all or spin their misconceptions into a new wave of Dancing Wu-Li Masters.
I think we can do better, at least for the science geeks out there who are actually interested and not allergic to a little math.
A couple warnings and comments before we begin. First off: I’m not going to go through this in my usual depth because I want to cram it into just four posts, albeit longer ones than usual, even though what I will say touches on all sorts of insanely cool mathematics that disappointingly few people see put together like this. Second: ironically, that seems to include a lot of the physicists, who are generally more concerned with making predictions than with understanding how the underlying theory connects to everything else (and it’s totally fine, honestly, that they’re interested in different aspects than I am). But I’m going to make a relatively superficial pass over describing the theory as physicists talk about it rather than go into those underlying structures. Lastly: I’m not going to describe the actual Higgs particle or field as they exist in the Standard Model; that would require quantum field theory and all sorts of messy stuff like that, when it turns out that the basic idea already shows up in classical field theory, which is a lot easier to explain. Even within classical field theory I’m going to restrict myself to a simpler example of the sort of thing that happens. Because reasons.
That all said, let’s dive in with Lagrangian mechanics. This is a subject that you probably never heard about unless you were a physics major or maybe a math major. Basically, Newtonian mechanics works off of the three laws that were probably drilled into your head by the end of high school science classes:
- Newton’s Laws of Motion
- An object at rest tends to stay at rest; an object in motion tends to stay in that motion.
- Force applied to an object is proportional to the acceleration that object experiences. The constant of proportionality is the object’s mass.
- Every action comes paired with an equal and opposite reaction.
It’s the second one that gets the most use since we can write it down in a formula: F = ma. And for most forces we’re interested in, the force is a conservative vector field, meaning that it’s the (negative) gradient (fancy word for “derivative” that comes up in more than one dimension) of a potential energy function: F = -∇U. What this means is that things like to move in the direction that potential energy decreases, and they “feel a force” pushing them in that direction. Upshot for Newton: ma = -∇U.
Lagrangian mechanics comes at this same formula with a different explanation: objects like to move along paths that (locally) minimize some quantity called “action”. This principle unifies the usual topics of high school Newtonian physics with things like optics where we say that light likes to move along the shortest path between two points. Indeed, the “action” for light rays is just the distance they travel! This also explains things like “the angle of incidence equals the angle of reflection”; if you look at all paths between two points that bounce off of a mirror, the one that satisfies this property has the shortest length, making it a local minimum for the action.
Let’s set this up for a body moving around in some potential field to show you how it works. The action of a suggested path x(t) — the body is at the point x(t) at time t — over a time interval is:
where v = dx/dt is the velocity vector of the particle, v² is the square of its length, and U is a potential function depending only on the position of the particle. Don’t worry: there’s a big scary integral here, but we aren’t going to actually do any integration.
The function on the inside of the integral is called the Lagrangian function, and we calculate the action of the path by integrating the Lagrangian over the time interval we’re concerned with. We write this as S[x] with square brackets to emphasize that this is a “functional” that takes a function and gives a number back. Of course, as mathematicians there’s really nothing inherently special about functions taking functions as arguments, but for beginners it helps keep things straight.
Now, what happens if we “wiggle” the path a bit? What if we calculate the action of x + δx, where δx is some “small” function called the “variation” of x? We calculate:
Taking the derivative is linear, so we see that δ(dx/dt) = d(δx)/dt; “the variation of the derivative is the derivative of the variation”. Plugging this in:
where we’ve thrown away terms involving second and higher powers of δx; the variation is small, so the square (and cube, and …) is negligible. So what’s the difference between this and S[x]? What’s the variation of the action?
where again we throw away negligible terms. Now we can handle the first term here using integration by parts:
“Wait a minute!” those of you paying attention will cry out, “what about the boundary terms!?” Indeed, when we use integration by parts we should pick up a boundary term, but we will assume that we know where the body is at the beginning and the end of our time interval, and we’re just trying to figure out how it gets from one point to the other. That is, the variation δx is zero at both endpoints.
So, now we apply our Lagrangian principle: bodies like to move along action-minimizing paths. We know how action changes if we “wiggle” the path by a little variation δx, and this should remind us about how to find local minima: they happen when no matter how we change the input, the “first derivative” of the output is zero. Here the first derivative is the variation in the action, throwing away the negligible terms. So, what condition will make the variation zero no matter what function we put in for δx? Well, the other term in the integrand will have to vanish:
But this is just Newton’s second law from above, coming back again!
Everything we know from Newtonian mechanics can be written down in Lagrangian mechanics by coming up with a suitable action functional, which usually takes the form of an integral of an appropriate Lagrangian function. But lots more things can be described using the Lagrangian formalism, including field theories like electromagnetism.
In the presence of a charge distribution and a current distribution, we take the scalar and vector potentials as fundamental and start with the action (suppressing the space and time arguments so we can write, say, φ instead of φ(x, t)):
When we vary with respect to the scalar potential and insist that the variation of the action be zero we get Gauss’ law:
The other two of Maxwell’s equations come automatically from taking the potentials as fundamental and coming up with the electric and magnetic fields from them.
A comment just came in on my short rant about electromagnetism texts. Dripping with condescension, it states:
Here’s the fundamental reason for your discomfort: as a mathematician, you don’t realize that scalar and vector potentials have *no physical significance* (or for that matter, do you understand the distinction between objects of physical significance and things that are merely convenient mathematical devices?).
It really doesn’t matter how scalar and vector potentials are defined, found, or justified, so long as they make it convenient for you to work with electric and magnetic fields, which *are* physical (after all, if potentials were physical, gauge freedom would make no sense).
On rare occasions (e.g. Aharonov-Bohm effect), there’s the illusion that (vector) potential has actual physical significance, but when you realize it’s only the *differences* in the potential, it ought to become obvious that, once again, potentials are just mathematically convenient devices to do what you can do with fields alone.
P.S. We physicists are very happy with merely achieving self-consistency, thankyouverymuch. Experiments will provide the remaining justification.
The thing is, none of that changes the fact that you’re flat-out lying to students when you say that the vanishing divergence of the magnetic field, on its own, implies the existence of a vector potential.
I think the commenter is confusing my complaint with a different, more common one: the fact that potentials are not uniquely defined as functions. But I actually don’t have a problem with that, since the same is true of any antiderivative. After all, what is an antiderivative but a potential function in a one-dimensional space? In fact, the concepts of torsors and gauge symmetries are intimately connected with this indefiniteness.
No, my complaint is that physicists are sloppy in their teaching, which they sweep under the carpet of agreement with certain experiments. It’s trivial to cook up magnetic fields in non-simply-connected spaces which satisfy Maxwell’s equations and yet have no globally-defined potential at all. It’s not just that a potential is only defined up to an additive constant; it’s that when you go around certain loops the value of the potential must have changed, and so at no point can the function take any “self-consistent” value.
In being so sloppy, physicists commit the sin of making unstated assumptions, and in doing so in front of kids who are too naïve to know better. A professor may know that this is only true in spaces without holes, but his students probably don’t, and they won’t until they rely on the assumption in a case where it doesn’t hold. That’s really all I’m saying: state your assumptions; unstated assumptions are anathema to science.
As for the physical significance of potentials, I won’t even bother delving into the fact that explaining Aharonov-Bohm with fields alone entails chucking locality right out the window. Rest assured that once you move on from classical electromagnetism to quantum electrodynamics and other quantum field theories, the potential is clearly physically significant.
Before we push ahead with the Faraday field in hand, we need to properly define the Hodge star in our four-dimensional space, and we need a pseudo-Riemannian metric to do this. Before, we were just using the standard inner product on three-dimensional space, but now that we’re lumping in time we need to choose a four-dimensional metric.
And just to screw with you, it will have a different signature. If we have two spacetime vectors — with time here measured in the same units as space by using the speed of light as a conversion factor — then we calculate the metric as:
In particular, if we stick the vector into the metric twice, like we do to calculate a squared-length when working with an inner product, we find:
This looks like the Pythagorean theorem in two or three dimensions, but when we get to the time dimension we subtract its square instead of adding it! Four-dimensional real space equipped with a metric of this form is called “Minkowski space”. More specifically, it’s called 4-dimensional Minkowski space, or “(3+1)-dimensional” Minkowski space — three spatial dimensions and one temporal dimension. Higher-dimensional versions with more “spatial” dimensions (with plusses in the metric) and one “temporal” dimension (with a minus) are also called Minkowski space. And, perversely enough, some physicists write it all backwards with one plus and three minuses; this version is useful if you think of displacements in time as more fundamental — and thus more useful to call “positive” — than displacements in space.
What implications does this have on the coordinate expression of the Hodge star? It’s pretty much the same, except for the determinant part. You can think about it yourself, but the upshot is that we pick up an extra factor of -1 when the basic form going into the star involves dt.
So the rule is that for a basic form α, the dual form ⋆α consists of those component 1-forms not involved in α, ordered so that the wedge α∧⋆α is (up to sign) the volume form, with a negative sign if and only if dt is involved in α. Let’s write it all out for easy reference:
Note that the square of the Hodge star has the opposite sign from the Riemannian case; when k is odd the double Hodge dual of a k-form is the original form back again, but when k is even the double dual is the negative of the original form.
Now that we’ve seen that we can use the speed of light as a conversion factor to put time and space measurements on an equal footing, let’s actually do it to Maxwell’s equations. We start by moving the time derivatives over on the same side as all the space derivatives:
The exterior derivatives here, written as d, comprise the derivatives in all the spatial directions. If we pick coordinates x, y, and z, then we can write the third equation as three component equations that each look something like
This doesn’t look right at all! We’ve got a partial derivative with respect to t floating around, but I see no corresponding dt. So if we’re going to move to a four-dimensional spacetime and still use exterior derivatives, we can pick up dt terms from the time derivative of the magnetic field. But for the electric terms to cancel off against them, they already need to have a dt around in the first place. That is, we don’t actually have an electric 1-form:
In truth we have an electric 2-form:
Now, what does this mean for the exterior derivative of this 2-form?
Nothing has really changed, except now there’s an extra factor of dt at the end of everything.
What happens to the exterior derivative of the magnetic field now that we’re using t as another coordinate? Well, in components we write:
and thus we calculate:
Now the first part of this is just the old, three-dimensional exterior derivative of the magnetic field, corresponding to the divergence. The second of Maxwell’s equations says that it’s zero. And the other part of this is the time derivative of the magnetic field, but with an extra factor of dt.
So let’s take the electric 2-form and the magnetic 2-form and put them together:
The first term vanishes because of the second of Maxwell’s equations, and the rest all vanish because they’re the components of the third of Maxwell’s equations. That is, the second and third of Maxwell’s equations are both subsumed in this one four-dimensional equation.
When we rewrite the electric and magnetic fields as 2-forms like this, their sum is called the “Faraday field” F. The second and third of Maxwell’s equations are equivalent to the single assertion that dF = 0.
Now let’s notice that while the electric field has units of force per unit charge, the magnetic field has units of force per unit charge per unit velocity. Further, from our polarized plane-wave solutions to Maxwell’s equations, we see that for these waves the magnitude of the electric field is c — a velocity — times the magnitude of the magnetic field. So let’s try collecting together factors of c:
Now each of the time derivatives comes along with a factor of 1/c. We can absorb this by introducing a new variable equal to ct, which is measured in units of distance rather than time. Then we can write:
The easy thing here is to just use this variable in place of t, but that hides a deep insight: the speed of light is acting like a conversion factor from units of time to units of distance. That is, we don’t just say that light moves at a speed of c, we say that one second of time is 299,792,458 meters of distance. This is an incredible identity that allows us to treat time and space on an equal footing, and it is borne out in many more or less direct experiments. I don’t want to get into all the consequences of this fact — the name for them as a collection is “special relativity” — but I do want to use it.
This lets us go back and write t for our time coordinate, since the factor of c there is just an artifact of using a coordinate system that treats time and distance separately; we see that the electric and magnetic fields in a propagating electromagnetic plane-wave are “really” the same size, and the factor of c between them is just an artifact of our coordinate system. We can absorb the other stray factors of c for the same reason. Finally, we can collect the charge and current densities together to put them on the exact same footing.
The meanings of these terms are getting further and further from familiarity. The electric 1-form is still made of the same components as the electric field; the magnetic 2-form is built (up to a factor of c) from the Hodge star of the 1-form whose components are those of the magnetic field; the charge density function picks up a factor of c; and the vector field is still the current density.
To this point, we’ve mostly followed a standard approach to classical electromagnetism, and nothing I’ve said should be all that new to a former physics major, although at some points we’ve infused more mathematical rigor than is typical. But now I want to go in a different direction.
Starting again with Maxwell’s equations, we see all these divergences and curls which, though familiar to many, are really heavy-duty equipment. In particular, they rely on the Riemannian structure on . We want to strip this away to find something that works without this assumption, and as a first step we’ll flip things over into differential forms.
So let’s say that the magnetic field corresponds to a 2-form β, while the electric field corresponds to a 1-form ε. To avoid confusion between ε and the electric constant ε₀, let’s also replace some of our constants with the speed of light — c² = 1/(ε₀μ₀). At the same time, we’ll replace the current distribution with a 1-form as well. Now Maxwell’s equations look like:
Now I want to juggle around some of these Hodge stars:
Notice that we’re never just using the 2-form β, but rather its Hodge star, which is a 1-form. Let’s actually go back and use β to represent a 1-form, so that the magnetic field corresponds to the 2-form ⋆β:
In the static case — where time derivatives are zero — we see how symmetric this new formulation is:
For both the electric and magnetic fields, the exterior derivative of the corresponding form vanishes, and an operator built from the Hodge star and the exterior derivative connects the fields to sources of physical charge and current.