A comment just came in on my short rant about electromagnetism texts. Dripping with condescension, it states:
Here’s the fundamental reason for your discomfort: as a mathematician, you don’t realize that scalar and vector potentials have *no physical significance* (or for that matter, do you understand the distinction between objects of physical significance and things that are merely convenient mathematical devices?).
It really doesn’t matter how scalar and vector potentials are defined, found, or justified, so long as they make it convenient for you to work with electric and magnetic fields, which *are* physical (after all, if potentials were physical, gauge freedom would make no sense).
On rare occasions (e.g. Aharonov-Bohm effect), there’s the illusion that (vector) potential has actual physical significance, but when you realize it’s only the *differences* in the potential, it ought to become obvious that, once again, potentials are just mathematically convenient devices to do what you can do with fields alone.
P.S. We physicists are very happy with merely achieving self-consistency, thankyouverymuch. Experiments will provide the remaining justification.
The thing is, none of that changes the fact that you’re flat-out lying to students when you say that the vanishing divergence of the magnetic field, on its own, implies the existence of a vector potential.
I think the commenter is confusing my complaint with a different, more common one: the fact that potentials are not uniquely defined as functions. But I actually don’t have a problem with that, since the same is true of any antiderivative. After all, what is an antiderivative but a potential function in a one-dimensional space? In fact, the concepts of torsors and gauge symmetries are intimately connected with this indefiniteness.
No, my complaint is that physicists are sloppy in their teaching, which they sweep under the carpet of agreement with certain experiments. It’s trivial to cook up magnetic fields in non-simply-connected spaces which satisfy Maxwell’s equations and yet have no globally-defined potential at all. It’s not just that a potential is only defined up to an additive constant; it’s that when you go around certain loops the value of the potential must have changed, and so at no point can the function take any “self-consistent” value.
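For concreteness, here is the standard example I have in mind (the details here are my choice of illustration; the one-form version corresponds to the region outside an idealized infinite solenoid, and the same phenomenon for magnetic two-forms shows up on $\mathbb{R}^3$ minus a point):

```latex
% On the punctured plane \mathbb{R}^2 \setminus \{0\}, consider the 1-form
\alpha = \frac{-y\,dx + x\,dy}{x^2 + y^2}
% A direct check shows d\alpha = 0, so locally \alpha = d\theta for the
% angle function \theta.  But integrating around the unit circle gives
\oint_{S^1} \alpha = \int_0^{2\pi} d\theta = 2\pi \neq 0
% so no globally defined potential exists: if \alpha were df for a global
% function f, Stokes' theorem would force its integral around every loop
% to vanish.  Going once around the origin changes the would-be potential
% by 2\pi, so it can take no self-consistent value anywhere.
```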
In being so sloppy, physicists commit the sin of making unstated assumptions, and of making them in front of kids who are too naïve to know better. A professor may know that the claim only holds in spaces without holes, but his students probably don’t, and they won’t until they rely on the assumption in a case where it fails. That’s really all I’m saying: state your assumptions; unstated assumptions are anathema to science.
As for the physical significance of potentials, I won’t even bother delving into the fact that explaining Aharonov-Bohm with fields alone entails chucking locality right out the window. Rest assured that once you move on from classical electromagnetism to quantum electrodynamics and other quantum field theories, the potential is clearly physically significant.
Before we push ahead with the Faraday field in hand, we need to properly define the Hodge star in our four-dimensional space, and we need a pseudo-Riemannian metric to do this. Before we were just using the standard inner product on $\mathbb{R}^3$, but now that we’re lumping in time we need to choose a four-dimensional metric.
And just to screw with you, it will have a different signature. If we have vectors $v_1=(x_1,y_1,z_1,t_1)$ and $v_2=(x_2,y_2,z_2,t_2)$ (with time here measured in the same units as space by using the speed of light as a conversion factor) then we calculate the metric as:

$g(v_1,v_2)=x_1x_2+y_1y_2+z_1z_2-t_1t_2$
In particular, if we stick the vector $v=(x,y,z,t)$ into the metric twice, like we do to calculate a squared-length when working with an inner product, we find:

$g(v,v)=x^2+y^2+z^2-t^2$
This looks like the Pythagorean theorem in two or three dimensions, but when we get to the time dimension we subtract its square instead of adding it! Four-dimensional real space equipped with a metric of this form is called “Minkowski space”. More specifically, it’s called $4$-dimensional Minkowski space, or “$(3+1)$-dimensional” Minkowski space: three spatial dimensions and one temporal dimension. Higher-dimensional versions with $n$ “spatial” dimensions (with plusses in the metric) and one “temporal” dimension (with a minus) are also called Minkowski space, or $(n+1)$-dimensional Minkowski space. And, perversely enough, some physicists write it all backwards, with one plus and $n$ minuses; this version is useful if you think of displacements in time as more fundamental, and thus more useful to call “positive”, than displacements in space.
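The metric above can be sketched in a few lines of code (a toy illustration of mine, not something from the original post), with the time component last and measured in length units:

```python
# The (3+1)-dimensional Minkowski metric as a bilinear form with
# signature (+, +, +, -); vectors are (x, y, z, t), time in length units.

def minkowski(u, v):
    """g(u, v) = x1*x2 + y1*y2 + z1*z2 - t1*t2."""
    x1, y1, z1, t1 = u
    x2, y2, z2, t2 = v
    return x1 * x2 + y1 * y2 + z1 * z2 - t1 * t2

# A purely spatial displacement has positive squared length...
print(minkowski((3, 4, 0, 0), (3, 4, 0, 0)))   # 25
# ...a displacement along a light ray has squared length exactly zero...
print(minkowski((3, 4, 0, 5), (3, 4, 0, 5)))   # 0
# ...and a purely temporal displacement has negative squared length.
print(minkowski((0, 0, 0, 1), (0, 0, 0, 1)))   # -1
```

The middle example is the point of the signature: vectors of zero squared length are exactly the directions light travels in.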
What implications does this have for the coordinate expression of the Hodge star? It’s pretty much the same, except for the determinant part. You can think about it yourself, but the upshot is that we pick up an extra factor of $-1$ when the basic form going into the star involves $dt$.
So the rule is that for a basic form $\alpha$, the dual form $*\alpha$ consists of those component $1$-forms not involved in $\alpha$, ordered such that $\alpha\wedge *\alpha=dx\wedge dy\wedge dz\wedge dt$, with a negative sign if and only if $dt$ is involved in $\alpha$. Let’s write it all out for easy reference:

$\begin{aligned}*1&=dx\wedge dy\wedge dz\wedge dt\\ *dx&=dy\wedge dz\wedge dt\\ *dy&=dz\wedge dx\wedge dt\\ *dz&=dx\wedge dy\wedge dt\\ *dt&=dx\wedge dy\wedge dz\\ *(dx\wedge dy)&=dz\wedge dt\\ *(dy\wedge dz)&=dx\wedge dt\\ *(dz\wedge dx)&=dy\wedge dt\\ *(dx\wedge dt)&=-dy\wedge dz\\ *(dy\wedge dt)&=-dz\wedge dx\\ *(dz\wedge dt)&=-dx\wedge dy\\ *(dx\wedge dy\wedge dz)&=dt\\ *(dy\wedge dz\wedge dt)&=dx\\ *(dz\wedge dx\wedge dt)&=dy\\ *(dx\wedge dy\wedge dt)&=dz\\ *(dx\wedge dy\wedge dz\wedge dt)&=-1\end{aligned}$
Note that the square of the Hodge star has the opposite sign from the Riemannian case; when $k$ is odd the double Hodge dual of a $k$-form is the original form back again, but when $k$ is even the double dual is the negative of the original form.
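The sign rule is easy to get wrong by hand, so here is a small combinatorial sketch (my own check, not part of the original post) that computes the Hodge dual of every basic form, with basis order $(dx,dy,dz,dt)$ and metric signs $(+1,+1,+1,-1)$, and confirms the double-dual sign pattern:

```python
# Hodge star on basic forms in Minkowski space.  A k-form basis element is
# a sorted tuple of indices into (dx, dy, dz, dt); the dual is the
# complementary tuple with a sign from the permutation and the metric.
from itertools import combinations
from math import prod

EPS = (1, 1, 1, -1)  # metric signs for dx, dy, dz, dt

def perm_sign(seq):
    """Sign of the permutation taking (0, 1, 2, 3) to seq."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def star(indices, coeff=1):
    """Hodge dual of coeff times the wedge of the named basis 1-forms."""
    comp = tuple(i for i in range(4) if i not in indices)
    sign = perm_sign(indices + comp) * prod(EPS[i] for i in indices)
    return comp, coeff * sign

# Spot-check two entries: *dt = dx∧dy∧dz and *(dx∧dt) = -dy∧dz.
print(star((3,)))     # ((0, 1, 2), 1)
print(star((0, 3)))   # ((1, 2), -1)

# Double dual: +1 on k-forms for odd k, -1 for even k.
for k in range(5):
    for s in combinations(range(4), k):
        comp, c = star(s)
        assert star(comp, c) == (s, 1 if k % 2 == 1 else -1)
```

Running the loop over all sixteen basic forms confirms the claim above: the double dual is the identity in odd degrees and minus the identity in even degrees.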
Now that we’ve seen that we can use the speed of light as a conversion factor to put time and space measurements on an equal footing, let’s actually do it to Maxwell’s equations. Writing $\epsilon$ for the electric field $1$-form, $\beta$ for the magnetic field $2$-form, and $t$ for time measured in units of length (in Gaussian units, with charge density $\rho$ and current form $\iota$), we start by moving the time derivatives over on the same side as all the space derivatives:

$\begin{aligned}{*}d{*}\epsilon&=4\pi\rho\\ d\beta&=0\\ d\epsilon+\frac{\partial\beta}{\partial t}&=0\\ {*}d{*}\beta-\frac{\partial\epsilon}{\partial t}&=\frac{4\pi}{c}\iota\end{aligned}$
The exterior derivatives here written as $d$ comprise the derivatives in all the spatial directions. If we pick coordinates $x$, $y$, and $z$, then we can write the third equation as three component equations that each look something like

$\frac{\partial E_y}{\partial x}-\frac{\partial E_x}{\partial y}+\frac{\partial B_z}{\partial t}=0$
This doesn’t look right at all! We’ve got a partial derivative with respect to $t$ floating around, but I see no corresponding $dt$. So if we’re going to move to a four-dimensional spacetime and still use exterior derivatives, we can pick up $dt$ terms from the time derivative of $\beta$. But for the others to cancel off, they already need to have a $dt$ around in the first place. That is, we don’t actually have an electric $1$-form:

$\epsilon=E_x\,dx+E_y\,dy+E_z\,dz$
In truth we have an electric $2$-form:

$\epsilon=E_x\,dx\wedge dt+E_y\,dy\wedge dt+E_z\,dz\wedge dt$
Now, what does this mean for the exterior derivative $d\epsilon$?

$d\epsilon=\left(\frac{\partial E_y}{\partial x}-\frac{\partial E_x}{\partial y}\right)dx\wedge dy\wedge dt+\left(\frac{\partial E_z}{\partial y}-\frac{\partial E_y}{\partial z}\right)dy\wedge dz\wedge dt+\left(\frac{\partial E_x}{\partial z}-\frac{\partial E_z}{\partial x}\right)dz\wedge dx\wedge dt$
Nothing has really changed, except now there’s an extra factor of $dt$ at the end of everything.
What happens to the exterior derivative of $\beta$ now that we’re using $t$ as another coordinate? Well, in components we write:

$\beta=B_x\,dy\wedge dz+B_y\,dz\wedge dx+B_z\,dx\wedge dy$
and thus we calculate:

$d\beta=\left(\frac{\partial B_x}{\partial x}+\frac{\partial B_y}{\partial y}+\frac{\partial B_z}{\partial z}\right)dx\wedge dy\wedge dz+\frac{\partial B_z}{\partial t}\,dx\wedge dy\wedge dt+\frac{\partial B_x}{\partial t}\,dy\wedge dz\wedge dt+\frac{\partial B_y}{\partial t}\,dz\wedge dx\wedge dt$
Now the first part of this is just the old, three-dimensional exterior derivative of $\beta$, corresponding to the divergence. The second of Maxwell’s equations says that it’s zero. And the other part of this is the time derivative of $\beta$, but with an extra factor of $dt$.
So let’s take the $2$-form $\epsilon$ and the $2$-form $\beta$ and put them together:

$d(\beta+\epsilon)=\left(\frac{\partial B_x}{\partial x}+\frac{\partial B_y}{\partial y}+\frac{\partial B_z}{\partial z}\right)dx\wedge dy\wedge dz+\left(\frac{\partial E_y}{\partial x}-\frac{\partial E_x}{\partial y}+\frac{\partial B_z}{\partial t}\right)dx\wedge dy\wedge dt+\left(\frac{\partial E_z}{\partial y}-\frac{\partial E_y}{\partial z}+\frac{\partial B_x}{\partial t}\right)dy\wedge dz\wedge dt+\left(\frac{\partial E_x}{\partial z}-\frac{\partial E_z}{\partial x}+\frac{\partial B_y}{\partial t}\right)dz\wedge dx\wedge dt$
The first term vanishes because of the second of Maxwell’s equations, and the rest all vanish because they’re the components of the third of Maxwell’s equations. That is, the second and third of Maxwell’s equations are both subsumed in the single four-dimensional equation $d(\beta+\epsilon)=0$.
When we rewrite the electric and magnetic fields as $2$-forms like this, their sum is called the “Faraday field” $F=\beta+\epsilon$. The second and third of Maxwell’s equations are equivalent to the single assertion that $dF=0$.
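As a numerical sanity check (mine, not the post’s), we can take the plane wave $E=(0,\cos(x-t),0)$, $B=(0,0,\cos(x-t))$, a standard solution of the homogeneous Maxwell equations with $c=1$, and verify that every component of $d(\beta+\epsilon)$ worked out above vanishes:

```python
# Verify dF = 0 for a plane wave, using central finite differences for
# the partial derivatives.  Points are (x, y, z, t) with c = 1.
import math

def E(x, y, z, t):
    return (0.0, math.cos(x - t), 0.0)

def B(x, y, z, t):
    return (0.0, 0.0, math.cos(x - t))

def d(f, arg, p, h=1e-6):
    """Central-difference partial derivative of f in its arg-th argument at p."""
    q = list(p); q[arg] += h
    r = list(p); r[arg] -= h
    return (f(*q) - f(*r)) / (2 * h)

def dF_components(p):
    """The four components of d(beta + epsilon) at the point p."""
    Ex = lambda *a: E(*a)[0]; Ey = lambda *a: E(*a)[1]; Ez = lambda *a: E(*a)[2]
    Bx = lambda *a: B(*a)[0]; By = lambda *a: B(*a)[1]; Bz = lambda *a: B(*a)[2]
    X, Y, Z, T = 0, 1, 2, 3
    return (
        d(Bx, X, p) + d(By, Y, p) + d(Bz, Z, p),   # dx∧dy∧dz: div B
        d(Ey, X, p) - d(Ex, Y, p) + d(Bz, T, p),   # dx∧dy∧dt
        d(Ez, Y, p) - d(Ey, Z, p) + d(Bx, T, p),   # dy∧dz∧dt
        d(Ex, Z, p) - d(Ez, X, p) + d(By, T, p),   # dz∧dx∧dt
    )

# All four components vanish (up to finite-difference error).
for c in dF_components((0.3, -1.2, 0.5, 0.9)):
    assert abs(c) < 1e-8
```

The second component is where the cancellation happens: $\partial E_y/\partial x$ and $\partial B_z/\partial t$ are each nonzero, but they sum to zero, which is exactly Faraday’s law folded into $dF=0$.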