The Unapologetic Mathematician

Mathematics for the interested outsider

A Continued Rant on Electromagnetism Texts and the Pedagogy of Science

A comment just came in on my short rant about electromagnetism texts. Dripping with condescension, it states:

Here’s the fundamental reason for your discomfort: as a mathematician, you don’t realize that scalar and vector potentials have *no physical significance* (or for that matter, do you understand the distinction between objects of physical significance and things that are merely convenient mathematical devices?).

It really doesn’t matter how scalar and vector potentials are defined, found, or justified, so long as they make it convenient for you to work with electric and magnetic fields, which *are* physical (after all, if potentials were physical, gauge freedom would make no sense).

On rare occasions (e.g. Aharonov-Bohm effect), there’s the illusion that (vector) potential has actual physical significance, but when you realize it’s only the *differences* in the potential, it ought to become obvious that, once again, potentials are just mathematically convenient devices to do what you can do with fields alone.

P.S. We physicists are very happy with merely achieving self-consistency, thankyouverymuch. Experiments will provide the remaining justification.

The thing is, none of that changes the fact that you’re flat-out lying to students when you say that the vanishing divergence of the magnetic field, on its own, implies the existence of a vector potential.

I think the commenter is confusing my complaint with a different, more common one: the fact that potentials are not uniquely defined as functions. But I actually don’t have a problem with that, since the same is true of any antiderivative. After all, what is an antiderivative but a potential function in a one-dimensional space? In fact, the concepts of torsors and gauge symmetries are intimately connected with this indefiniteness.

No, my complaint is that physicists are sloppy in their teaching, which they sweep under the carpet of agreement with certain experiments. It’s trivial to cook up magnetic fields in non-simply-connected spaces which satisfy Maxwell’s equations and yet have no globally-defined potential at all. It’s not just that a potential is only defined up to an additive constant; it’s that when you go around certain loops the value of the potential must have changed, and so at no point can the function take any “self-consistent” value.

In being so sloppy, physicists commit the sin of making unstated assumptions, and in doing so in front of kids who are too naïve to know better. A professor may know that this is only true in spaces without holes, but his students probably don’t, and they won’t until they rely on the assumption in a case where it doesn’t hold. That’s really all I’m saying: state your assumptions; unstated assumptions are anathema to science.

As for the physical significance of potentials, I won’t even bother delving into the fact that explaining Aharonov-Bohm with fields alone entails chucking locality right out the window. Rest assured that once you move on from classical electromagnetism to quantum electrodynamics and other quantum field theories, the potential is clearly physically significant.

March 8, 2012 Posted by | Electromagnetism, Mathematical Physics | 30 Comments

Minkowski Space

Before we push ahead with the Faraday field in hand, we need to properly define the Hodge star in our four-dimensional space, and we need a pseudo-Riemannian metric to do this. Before we were just using the standard \mathbb{R}^3, but now that we’re lumping in time we need to choose a four-dimensional metric.

And just to screw with you, it will have a different signature. If we have vectors v_1=(x_1,y_1,z_1,t_1) and v_2=(x_2,y_2,z_2,t_2) — with time here measured in the same units as space by using the speed of light as a conversion factor — then we calculate the metric as:

\displaystyle g(v_1,v_2)=x_1x_2+y_1y_2+z_1z_2-t_1t_2

In particular, if we stick the vector v=(x,y,z,t) into the metric twice, like we do to calculate a squared-length when working with an inner product, we find:

\displaystyle g(v,v)=x^2+y^2+z^2-t^2

This looks like the Pythagorean theorem in two or three dimensions, but when we get to the time dimension we subtract t^2 instead of adding it! Four-dimensional real space equipped with a metric of this form is called “Minkowski space”. More specifically, it’s called 4-dimensional Minkowski space, or “(3+1)-dimensional” Minkowski space — three spatial dimensions and one temporal dimension. Higher-dimensional versions with n-1 “spatial” dimensions (with plusses in the metric) and one “temporal” dimension (with minuses) are also called Minkowski space. And, perversely enough, some physicists write it all backwards with one plus and n-1 minuses; this version is useful if you think of displacements in time as more fundamental — and thus more useful to call “positive” — than displacements in space.

What implications does this have for the coordinate expression of the Hodge star? It’s pretty much the same, except for the determinant part. You can think about it yourself, but the upshot is that we pick up an extra factor of -1 when the basic form going into the star involves dt.

So the rule is that for a basic form \alpha, the dual form *\alpha consists of those component 1-forms not involved in \alpha, ordered such that \alpha\wedge(*\alpha)=\pm dx\wedge dy\wedge dz\wedge dt, with a negative sign if and only if dt is involved in \alpha. Let’s write it all out for easy reference:

\displaystyle\begin{aligned}*1&=dx\wedge dy\wedge dz\wedge dt\\ *dx&=dy\wedge dz\wedge dt\\ *dy&=dz\wedge dx\wedge dt\\ *dz&=dx\wedge dy\wedge dt\\ *dt&=dx\wedge dy\wedge dz\\ *(dx\wedge dy)&=dz\wedge dt\\ *(dz\wedge dx)&=dy\wedge dt\\ *(dy\wedge dz)&=dx\wedge dt\\ *(dx\wedge dt)&=-dy\wedge dz\\ *(dy\wedge dt)&=-dz\wedge dx\\ *(dz\wedge dt)&=-dx\wedge dy\\ *(dx\wedge dy\wedge dz)&=dt\\ *(dx\wedge dy\wedge dt)&=dz\\ *(dz\wedge dx\wedge dt)&=dy\\ *(dy\wedge dz\wedge dt)&=dx\\ *(dx\wedge dy\wedge dz\wedge dt)&=-1\end{aligned}

Note that the square of the Hodge star has the opposite sign from the Riemannian case; when k is odd the double Hodge dual of a k-form is the original form back again, but when k is even the double dual is the negative of the original form.
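
If you want to check these signs without grinding through the table by hand, here is a small Python sketch (my own illustration, not part of the original post) that builds the star directly from the rule above, taking the coordinates in the order dx, dy, dz, dt with the timelike dt last, and verifies the double-dual signs for every basis form. It prints each dual with its indices in increasing order, so an entry like *(dx\wedge dz)=-dy\wedge dt is the same as the table’s *(dz\wedge dx)=dy\wedge dt.

```python
from itertools import combinations

COORDS = ("dx", "dy", "dz", "dt")   # index 3 is the timelike coordinate

def perm_sign(seq):
    """Sign of the permutation taking sorted(seq) to seq."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def hodge(indices):
    """Hodge star of a basis form, by the rule: alpha ^ (*alpha) = +/- dx^dy^dz^dt,
    with the minus sign exactly when dt appears in alpha."""
    comp = tuple(i for i in range(4) if i not in indices)
    target = -1 if 3 in indices else 1
    coeff = target * perm_sign(tuple(indices) + comp)   # scale comp so the wedge hits target
    return coeff, comp

def pretty(coeff, indices):
    body = "^".join(COORDS[i] for i in indices) if indices else "1"
    return ("-" if coeff < 0 else "") + body

# Reproduce the table and check the double dual: +1 on odd forms, -1 on even ones.
for k in range(5):
    for alpha in combinations(range(4), k):
        c1, beta = hodge(alpha)
        c2, gamma = hodge(beta)
        assert gamma == alpha
        assert c1 * c2 == (1 if k % 2 == 1 else -1)
        print(f"*({pretty(1, alpha)}) = {pretty(c1, beta)}")
```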

March 7, 2012 Posted by | Electromagnetism, Mathematical Physics | 2 Comments

The Faraday Field

Now that we’ve seen that we can use the speed of light as a conversion factor to put time and space measurements on an equal footing, let’s actually do it to Maxwell’s equations. We start by moving the time derivatives over to the same side as all the space derivatives:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c\rho\\d\beta&=0\\d\epsilon+\frac{\partial\beta}{\partial t}&=0\\{}*d*\beta-\frac{\partial\epsilon}{\partial t}&=\mu_0c\iota\end{aligned}

The exterior derivatives here written as d comprise the derivatives in all the spatial directions. If we pick coordinates x, y, and z, then we can write the third equation as three component equations that each look something like

\displaystyle\frac{\partial\epsilon_x}{\partial y}dy\wedge dx+\frac{\partial\epsilon_y}{\partial x}dx\wedge dy+\frac{\partial\beta_z}{\partial t}dx\wedge dy=\left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}+\frac{\partial\beta_z}{\partial t}\right)dx\wedge dy=0

This doesn’t look right at all! We’ve got a partial derivative with respect to t floating around, but no corresponding dt anywhere. If we’re going to move to a four-dimensional spacetime and still use exterior derivatives, then the time derivative of \beta will naturally pick up dt terms; but for the \epsilon terms to cancel against them, they need to carry a dt in the first place. That is, we don’t actually have an electric 1-form:

\displaystyle\epsilon=\epsilon_xdx+\epsilon_ydy+\epsilon_zdz

In truth we have an electric 2-form:

\displaystyle\epsilon=\epsilon_xdx\wedge dt+\epsilon_ydy\wedge dt+\epsilon_zdz\wedge dt

Now, what does this mean for the exterior derivative d\epsilon?

\displaystyle\begin{aligned}d\epsilon=&\frac{\partial\epsilon_x}{\partial y}dy\wedge dx\wedge dt+\frac{\partial\epsilon_x}{\partial z}dz\wedge dx\wedge dt\\&+\frac{\partial\epsilon_y}{\partial x}dx\wedge dy\wedge dt+\frac{\partial\epsilon_y}{\partial z}dz\wedge dy\wedge dt\\&+\frac{\partial\epsilon_z}{\partial x}dx\wedge dz\wedge dt+\frac{\partial\epsilon_z}{\partial y}dy\wedge dz\wedge dt\\=&\left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}\right)dx\wedge dy\wedge dt\\&+\left(\frac{\partial\epsilon_x}{\partial z}-\frac{\partial\epsilon_z}{\partial x}\right)dz\wedge dx\wedge dt\\&+\left(\frac{\partial\epsilon_z}{\partial y}-\frac{\partial\epsilon_y}{\partial z}\right)dy\wedge dz\wedge dt\end{aligned}

Nothing has really changed, except now there’s an extra factor of dt at the end of everything.

What happens to the exterior derivative of \beta now that we’re using t as another coordinate? Well, in components we write:

\displaystyle\beta=\beta_xdy\wedge dz+\beta_ydz\wedge dx+\beta_zdx\wedge dy

and thus we calculate:

\displaystyle\begin{aligned}d\beta=&\frac{\partial\beta_x}{\partial x}dx\wedge dy\wedge dz+\frac{\partial\beta_x}{\partial t}dt\wedge dy\wedge dz\\&+\frac{\partial\beta_y}{\partial y}dy\wedge dz\wedge dx+\frac{\partial\beta_y}{\partial t}dt\wedge dz\wedge dx\\&+\frac{\partial\beta_z}{\partial z}dz\wedge dx\wedge dy+\frac{\partial\beta_z}{\partial t}dt\wedge dx\wedge dy\\=&\left(\frac{\partial\beta_x}{\partial x}+\frac{\partial\beta_y}{\partial y}+\frac{\partial\beta_z}{\partial z}\right)dx\wedge dy\wedge dz\\&+\frac{\partial\beta_z}{\partial t}dx\wedge dy\wedge dt+\frac{\partial\beta_y}{\partial t}dz\wedge dx\wedge dt+\frac{\partial\beta_x}{\partial t}dy\wedge dz\wedge dt\end{aligned}

Now the first part of this is just the old, three-dimensional exterior derivative of \beta, corresponding to the divergence. The second of Maxwell’s equations says that it’s zero. And the other part of this is the time derivative of \beta, but with an extra factor of dt.

So let’s take the 2-form \epsilon and the 2-form \beta and put them together:

\displaystyle\begin{aligned}d(\epsilon+\beta)=&d\epsilon+d\beta\\=&\left(\frac{\partial\beta_x}{\partial x}+\frac{\partial\beta_y}{\partial y}+\frac{\partial\beta_z}{\partial z}\right)dx\wedge dy\wedge dz\\&+\left(\frac{\partial\epsilon_y}{\partial x}-\frac{\partial\epsilon_x}{\partial y}+\frac{\partial\beta_z}{\partial t}\right)dx\wedge dy\wedge dt\\&+\left(\frac{\partial\epsilon_x}{\partial z}-\frac{\partial\epsilon_z}{\partial x}+\frac{\partial\beta_y}{\partial t}\right)dz\wedge dx\wedge dt\\&+\left(\frac{\partial\epsilon_z}{\partial y}-\frac{\partial\epsilon_y}{\partial z}+\frac{\partial\beta_x}{\partial t}\right)dy\wedge dz\wedge dt\end{aligned}

The first term vanishes because of the second of Maxwell’s equations, and the rest all vanish because they’re the components of the third of Maxwell’s equations. That is, the second and third of Maxwell’s equations are both subsumed in this one four-dimensional equation.

When we rewrite the electric and magnetic fields as 2-forms like this, their sum is called the “Faraday field” F. The second and third of Maxwell’s equations are equivalent to the single assertion that dF=0.
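
As a sanity check, here is a short sympy sketch (my own, not from the post) that implements the exterior derivative on forms written in these coordinates and verifies that dF=0 for a simple plane-wave solution, with E along y, propagation along z, and units chosen so that c=1:

```python
import sympy as sp

x, y, z, t = coords = sp.symbols("x y z t")

def d(form):
    """Exterior derivative of a form stored as {sorted tuple of coordinate indices: coefficient}."""
    out = {}
    for idx, coeff in form.items():
        for i, xi in enumerate(coords):
            if i in idx:
                continue                          # dxi ^ dxi = 0
            new = tuple(sorted((i,) + idx))
            sign = (-1) ** new.index(i)           # transpositions moving dxi into sorted position
            out[new] = out.get(new, 0) + sign * sp.diff(coeff, xi)
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

# A plane wave travelling in +z with E along y, in units where c = 1;
# coordinate indices are x=0, y=1, z=2, t=3.
Ey = sp.cos(z - t)
Bx = -sp.cos(z - t)

eps  = {(1, 3): Ey}      # electric 2-form  Ey dy^dt
beta = {(1, 2): Bx}      # magnetic 2-form  Bx dy^dz

F = dict(beta)           # the Faraday field F = eps + beta
for idx, coeff in eps.items():
    F[idx] = F.get(idx, 0) + coeff

print(d(beta))   # the dt term coming from the time derivative of beta ...
print(d(eps))    # ... is cancelled by the corresponding term of d(eps)
print(d(F))      # -> {} : dF = 0 for this solution
```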

March 6, 2012 Posted by | Electromagnetism, Mathematical Physics | 4 Comments

The Meaning of the Speed of Light

Let’s pick up where we left off last time converting Maxwell’s equations into differential forms:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned}

Now let’s notice that while the electric field has units of force per unit charge, the magnetic field has units of force per unit charge per unit velocity. Further, from our polarized plane-wave solutions to Maxwell’s equations, we see that for these waves the magnitude of the electric field is c — a velocity — times the magnitude of the magnetic field. So let’s try collecting c and \beta together into the single combination c\beta:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(c\beta)&=0\\d\epsilon&=-\frac{1}{c}\frac{\partial(c\beta)}{\partial t}\\{}*d*(c\beta)&=\mu_0c\iota+\frac{1}{c}\frac{\partial\epsilon}{\partial t}\end{aligned}

Now each of the time derivatives comes along with a factor of \frac{1}{c}. We can absorb this by introducing a new variable \tau=ct, which is measured in units of distance rather than time. Then we can write:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(c\beta)&=0\\d\epsilon&=-\frac{\partial(c\beta)}{\partial\tau}\\{}*d*(c\beta)&=\mu_0c\iota+\frac{\partial\epsilon}{\partial\tau}\end{aligned}

The easy thing here is to just write t instead of \tau, but this hides a deep insight: the speed of light c is acting like a conversion factor from units of time to units of distance. That is, we don’t just say that light moves at a speed of c=299\,792\,458\frac{\mathrm{m}}{\mathrm{s}}, we say that one second of time is 299,792,458 meters of distance. This is an incredibly useful identity that allows us to treat time and space on an equal footing, and it is borne out in many more or less direct experiments. I don’t want to get into all the consequences of this fact — the name for them as a collection is “special relativity” — but I do want to use it.

This lets us go back and write \beta instead of c\beta, since the factor of c here is just an artifact of using some coordinate system that treats time and distance separately; we see that the electric and magnetic fields in a propagating electromagnetic plane-wave are “really” the same size, and the factor of c is just an artifact of our coordinate system. We can also just write t instead of c t for the same reason. Finally, we can collect c\rho together to put it on the exact same footing as \iota.

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0c\iota+\frac{\partial\epsilon}{\partial t}\end{aligned}

The meanings of these terms are getting further and further from familiarity. The 1-form \epsilon is still made of the same components as the electric field; the 2-form \beta is c times the Hodge star of the 1-form whose components are those of the magnetic field; the function \rho is c times the charge density; and the 1-form \iota is the current density.

February 24, 2012 Posted by | Electromagnetism, Mathematical Physics | 4 Comments

Maxwell’s Equations in Differential Forms

To this point, we’ve mostly followed a standard approach to classical electromagnetism, and nothing I’ve said should be all that new to a former physics major, although at some points we’ve infused more mathematical rigor than is typical. But now I want to go in a different direction.

Starting again with Maxwell’s equations, we see all these divergences and curls which, though familiar to many, are really heavy-duty equipment. In particular, they rely on the Riemannian structure on \mathbb{R}^3. We want to strip this away to find something that works without this assumption, and as a first step we’ll flip things over into differential forms.

So let’s say that the magnetic field B corresponds to a 1-form \beta, while the electric field E corresponds to a 1-form \epsilon. To avoid confusion between \epsilon and the electric constant \epsilon_0, let’s also replace some of our constants with the speed of light — \epsilon_0\mu_0=\frac{1}{c^2}. At the same time, we’ll replace J with a 1-form \iota. Now Maxwell’s equations look like:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\{}*d*\beta&=0\\{}*d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned}

Now I want to juggle around some of these Hodge stars:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d(*\beta)&=0\\d\epsilon&=-\frac{\partial(*\beta)}{\partial t}\\{}*d*(*\beta)&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned}

Notice that we’re never just using the 1-form \beta, but rather the 2-form *\beta. Let’s actually go back and use \beta to represent a 2-form, so that B corresponds to the 1-form *\beta:

\displaystyle\begin{aligned}*d*\epsilon&=\mu_0c^2\rho\\d\beta&=0\\d\epsilon&=-\frac{\partial\beta}{\partial t}\\{}*d*\beta&=\mu_0\iota+\frac{1}{c^2}\frac{\partial\epsilon}{\partial t}\end{aligned}

In the static case — where time derivatives are zero — we see how symmetric this new formulation is:

\displaystyle\begin{aligned}d\epsilon&=0\\d\beta&=0\\{}*d*\epsilon&=\mu_0c^2\rho\\{}*d*\beta&=\mu_0\iota\end{aligned}

For both the 1-form \epsilon and the 2-form \beta, the exterior derivative vanishes, and the operator *d* connects the fields to sources of physical charge and current.
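
To see the static equations in action, here is a small sympy sketch (an illustration of my own, not from the post) that checks them for the Coulomb field of a point charge, away from the origin and with the 1/4\pi\epsilon_0 prefactor dropped. On 1-forms in three dimensions, d\epsilon vanishes exactly when the curl of the corresponding vector field does, and *d*\epsilon is its divergence, so both static equations for \epsilon reduce to vector-calculus computations:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
r = sp.sqrt(x**2 + y**2 + z**2)

# Coulomb field of a unit point charge, with the 1/(4 pi eps_0) prefactor dropped:
E = sp.Matrix([x, y, z]) / r**3

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# Away from the origin there is no charge or current, so both static equations
# d(eps) = 0 and *d*(eps) = 0 should hold; in vector language, curl E = 0 and div E = 0:
print(sp.simplify(curl(E)))   # -> zero vector
print(sp.simplify(div(E)))    # -> 0
```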

February 22, 2012 Posted by | Electromagnetism, Mathematical Physics | 2 Comments

A Short Rant about Electromagnetism Texts

I’d like to step aside from the main line to make one complaint. In refreshing my background in classical electromagnetism for this series I’ve run into something that bugs the hell out of me as a mathematician. I remember it from my own first course, but I’m shocked to see that it survives into every upper-level treatment I’ve seen.

It’s about the existence of potentials, and the argument usually goes like this: as Faraday’s law tells us, for a static electric field we have \nabla\times E=0; therefore E=\nabla\phi for some potential function \phi because the curl of a gradient is zero.

What?

Let’s break this down to simple formal logic that any physics undergrad can follow. Let P be the statement that there exists a \phi such that E=\nabla\phi. Let Q be the statement that \nabla\times E=0. The curl of a gradient being zero is the implication P\implies Q. So here’s the logic:

\displaystyle\begin{aligned}&Q\\&P\implies Q\\&\therefore P\end{aligned}

and that doesn’t make sense at all. It’s a textbook case of “affirming the consequent”.

Saying that E has a potential function is a nice, convenient way of satisfying the condition that its curl should vanish, but this argument gives no rationale for believing it’s the only option.

If we flip over to the language of differential forms, we know that the curl operator on a vector field corresponds to the operator \alpha\mapsto*d\alpha on 1-forms, while the gradient operator corresponds to f\mapsto df. We indeed know that *ddf=0 automatically — the curl of a gradient vanishes — but knowing that d\alpha=0 is not enough to conclude that \alpha=df for some f. In fact, this question is exactly what de Rham cohomology is all about!
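
Here is a concrete example of exactly this failure, mine rather than the post’s. Take the vector field E=\left(\frac{-y}{x^2+y^2},\frac{x}{x^2+y^2},0\right) on \mathbb{R}^3 with the z-axis removed, a space which is not simply-connected. Its curl vanishes identically on that domain, yet its line integral around the unit circle in the xy-plane is 2\pi rather than zero, so it cannot be the gradient of any single-valued function:

```python
import sympy as sp

x, y, z, t = sp.symbols("x y z t")

# A curl-free field on R^3 minus the z-axis (not simply-connected):
r2 = x**2 + y**2
E = sp.Matrix([-y / r2, x / r2, 0])

curl = sp.Matrix([
    sp.diff(E[2], y) - sp.diff(E[1], z),
    sp.diff(E[0], z) - sp.diff(E[2], x),
    sp.diff(E[1], x) - sp.diff(E[0], y),
])
print(sp.simplify(curl))      # -> the zero vector on the whole domain

# Yet the line integral of E around the unit circle in the plane z = 0 is 2*pi, not 0.
# If E were grad(phi) for a single-valued phi, every such loop integral would vanish.
path = {x: sp.cos(t), y: sp.sin(t), z: 0}
velocity = sp.Matrix([-sp.sin(t), sp.cos(t), 0])
integrand = (E.subs(path).T * velocity)[0]
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))   # -> 2*pi
```

In de Rham terms, the corresponding 1-form is closed but not exact; it represents a nontrivial class in the first cohomology of this space, which is exactly the obstruction at issue.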

So what’s missing? Full formality demands that we justify that the first de Rham cohomology of our space vanishes. Now, I’m not suggesting that we make physics undergrads learn about homology — it might not be a terrible idea, though — but we can satisfy this in the context of a course just by admitting (a) that we are being a little sloppy here, and (b) that the justification is that (for our purposes) the electric field E is defined in some simply-connected region of space which has no “holes” one could wrap a path around. In fact, if the students have had a decent course in multivariable calculus they’ve probably seen the explicit construction of a potential function for a vector field whose curl vanishes, subject to the restriction that we’re working over a simply-connected space.

The problem arises again in justifying the existence of a vector potential: as Gauss’ law for magnetism tells us, for a magnetic field we have \nabla\cdot B=0; therefore B=\nabla\times A for some vector potential A because the divergence of a curl is zero.

Again we see the same problem of affirming the consequent. And again the real problem hinges on the unspoken assumption that the second de Rham cohomology of our space vanishes. Yes, this is true for contractible spaces, but we must make mention of the fact that our space is contractible! In fact, I did exactly that when I needed to get ahold of the magnetic potential once.

Again: we don’t need to stop simplifying and sweeping some of these messier details of our arguments under the rug when dealing with undergraduate students, but we do need to be honest that those details were there to be swept in the first place. The alternative that most texts and notes now choose is to include statements which are blatantly false, and to rely on our authority to make students accept them unquestioningly.

February 18, 2012 Posted by | Electromagnetism, Mathematical Physics | 27 Comments

Conservation of Electromagnetic Energy

Let’s start with Ampère’s law, including Maxwell’s correction:

\displaystyle\nabla\times B=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}

Now let’s take the dot product of this with the electric field:

\displaystyle E\cdot(\nabla\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}

On the left, we can run a product rule in reverse:

\displaystyle B\cdot(\nabla\times E)-\nabla\cdot(E\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}

Now, Faraday’s law tells us that

\displaystyle\nabla\times E=-\frac{\partial B}{\partial t}

so we can write:

\displaystyle-B\cdot\frac{\partial B}{\partial t}-\nabla\cdot(E\times B)=\mu_0E\cdot J+\epsilon_0\mu_0E\cdot\frac{\partial E}{\partial t}

Let’s rearrange this a bit:

\displaystyle-\frac{1}{\mu_0}B\cdot\frac{\partial B}{\partial t}-\epsilon_0E\cdot\frac{\partial E}{\partial t}=\nabla\cdot\left(\frac{E\times B}{\mu_0}\right)+E\cdot J

The dot product of a vector field with its own derivative should look familiar; we can rewrite:

\displaystyle-\frac{\partial}{\partial t}\left(\frac{1}{2\mu_0}B\cdot B+\frac{\epsilon_0}{2}E\cdot E\right)=\nabla\cdot\left(\frac{E\times B}{\mu_0}\right)+E\cdot J

But now we should recognize almost all the terms in sight! On the left, we’re taking the derivative of the combined energy densities of the electric and magnetic fields:

\displaystyle U=\frac{\epsilon_0}{2}\lvert E\rvert^2+\frac{1}{2\mu_0}\lvert B\rvert^2

The second term on the right is the energy density lost to Joule heating per unit time. The only thing left is this vector field:

\displaystyle u=\frac{E\times B}{\mu_0}

which we call the “Poynting vector”. It’s really named after British physicist John Henry Poynting, but generations of students remember it because it “points” in the direction electromagnetic energy flows.

To see this, look at the final form of our equation:

\displaystyle-\frac{\partial U}{\partial t}=\nabla\cdot u+E\cdot J

On the left we have the rate at which the electromagnetic energy is going down at any given point. On the right, we have two terms; the second is the rate electromagnetic energy density is being lost to heat energy at the point, while the first is the rate electromagnetic energy is “flowing away from” the point.

Compare this with the conservation of charge:

\displaystyle-\frac{\partial\rho}{\partial t}=\nabla\cdot J

where the rate at which charge density decreases is equal to the rate that charge is “flowing away” through currents. The only difference is that there is no dissipation term for charge like there is for energy.

One other important thing to notice is what this tells us about our plane wave solutions. If we take such an electromagnetic wave propagating in the direction of the unit vector k and with the electric field polarized in some particular direction, then we can determine that

\displaystyle u=\frac{E\times B}{\mu_0}=\frac{\lvert E\rvert^2}{\mu_0c}k=\epsilon_0c\lvert E\rvert^2k

showing that electromagnetic waves carry electromagnetic energy in the direction that they propagate.
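
Here is a quick numerical check of that last identity, with made-up field values of my own (E along y, propagation along z, and a field strength of 100 V/m):

```python
import numpy as np

c = 299_792_458.0                  # speed of light, m/s
mu_0 = 4e-7 * np.pi                # magnetic constant, H/m (to good approximation)
eps_0 = 1.0 / (mu_0 * c**2)        # electric constant, F/m

E0 = 100.0                         # arbitrary field strength, V/m
k_hat = np.array([0.0, 0.0, 1.0])  # unit vector along the propagation direction
E = np.array([0.0, E0, 0.0])       # E polarized along y
B = np.cross(k_hat, E) / c         # |B| = |E|/c, perpendicular to both E and k_hat

poynting = np.cross(E, B) / mu_0
expected = eps_0 * c * E0**2 * k_hat

print(poynting)                          # energy flows along +z
print(np.allclose(poynting, expected))   # -> True
```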

February 17, 2012 Posted by | Electromagnetism, Mathematical Physics | 3 Comments

Ohm’s Law

When calculating the potential energy of the magnetic field, we calculated the power needed to run a certain current around a certain circuit. Let’s look into that a little more deeply.

We start with Ohm’s law, which basically says that — as a first approximation — the electromotive force around a circuit is proportional to the current around it; push harder and you’ll move charge faster. As a formula:

\displaystyle V=IR

The electromotive force — or “voltage” — on the left is equal to the current around the circuit times the “resistance”. What’s the resistance? Well, here it’s basically just a constant of proportionality, which we read as “how hard is it to push charge around this circuit?”

But let’s dig in a bit more. A current doesn’t really flow around an infinitely-thin wire; it flows around a wire with some thickness. The thicker the wire is — the bigger its cross-sectional area — the easier it should be to push charge around, while the longer the circuit is, the harder. We’ll write down our resistance in the form

\displaystyle R=\eta\frac{l}{A}

where l is the length of the wire, A is its cross-sectional area, and \eta is a new proportionality constant we call “resistivity”. Putting this together with the first form of Ohm’s law we find

\displaystyle V=\eta\frac{l}{A}I

But look at this: the current is made up of a current density flowing along the wire, integrated across a cross-section. If the wire is running in the z direction and the current density in that direction is a constant J_z, then I=J_zA. Further — at least to a first approximation — the electromotive force is the z-component of the electric field E_z times the length l traveled in that direction.

Thus we conclude that E_z=\eta J_z. But since there’s nothing really special about the z direction, we actually find that

\displaystyle E=\eta J

which is Ohm’s law again, but now in terms of fields and current distributions.

But what about the power? We’ve got a battery pushing a current around a circuit and using power to do it; where does the energy go? Well, if we think about pushing little bits of charge around the wire, they’re going to hit parts of the wire and lose some energy in the process. The parts they hit get shaken up, and this appears as heat energy; the process is called “Ohmic” or “Joule” heating, the latter from Joule’s own experiments using a resistive wire to heat up a tub of water.

If we have a current I made up of N bits of charge q per unit time, then each bit takes an energy of qV to go around the circuit once. This happens N times per unit time, so the total power expenditure is

\displaystyle P=NqV=IV

just as we said last time. But now we can do the same trick as above and write

\displaystyle P=IV=(J\cdot E)Al

or

\displaystyle\frac{P}{Al}=E\cdot J

which measures the power per unit volume dissipated through Joule heating in the circuit.
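
To put some illustrative numbers on all of this (mine, not the post’s): take a copper wire, whose resistivity is roughly 1.7\times10^{-8} ohm-meters at room temperature, compute its resistance, and check that the total power IV matches the power density E\cdot J integrated over the wire’s volume.

```python
import math

eta = 1.7e-8                     # resistivity of copper, ohm*m (approximate)
l = 10.0                         # wire length, m
A = math.pi * (0.5e-3) ** 2      # cross-sectional area of a 1 mm diameter wire, m^2

R = eta * l / A                  # resistance R = eta * l / A
V = 1.5                          # applied electromotive force, volts
I = V / R                        # Ohm's law: V = I R
P = I * V                        # total power dissipated, watts

E_z = V / l                      # electric field along the wire, V/m
J_z = I / A                      # current density along the wire, A/m^2
print(P, E_z * J_z * A * l)      # the two agree: P = (E . J) * A * l
```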

February 16, 2012 Posted by | Electromagnetism, Mathematical Physics | 1 Comment

Energy and the Magnetic Field

Last time we calculated the energy of the electric field. Now let’s repeat with the magnetic field, and let’s try to be a little more careful about it since magnetic fields can be slippery.

Let’s consider a static magnetic field B generated by a collection of circuits C_i, each carrying a current I_i. Recall that Gauss’ law for magnetism tells us that \nabla\cdot B=0; since space is contractible, we know that its de Rham cohomology is trivial, and thus B must be the curl of some other vector field A, which we call the “magnetic potential” or “vector potential”. Now we can write down the flux of the magnetic field through each circuit:

\displaystyle\Phi_i=\int\limits_{S_i}B\cdot dS_i=\int\limits_{C_i}A\cdot dr_i

Now Faraday’s law tells us about the electromotive force induced on the circuit:

\displaystyle V_i=\frac{d\Phi_i}{dt}

This electromotive force must be counterbalanced by a battery maintaining the current or else the magnetic field wouldn’t be static.

We can determine how much power the battery must expend to maintain the current; a charge q moving around the circuit goes down by qV_i in potential energy, which the battery must replace to send it around again. If n such charges pass around in unit time, this is a work of nqV_i per unit time; since nq=I_i — the current — we find that the power expenditure is P_i=I_iV_i, or:

\displaystyle P_i=I_i\frac{d\Phi_i}{dt}

Thus if we want to ramp the currents — and the field — up from a cold start in a time T it takes a total work of

\displaystyle W=\sum\limits_{i=1}^N\int\limits_0^TI_i\frac{d\Phi_i}{dt}\,dt

which is then the energy stored in the magnetic field.

This expression doesn’t depend on exactly how the field turns on, so let’s say the currents ramp up linearly:

\displaystyle I_i(t)=I_i(T)\frac{t}{T}

and since the fluxes are proportional to the currents they must also ramp up linearly:

\displaystyle\Phi_i(t)=\Phi_i(T)\frac{t}{T}

Plugging these in above, we find:

\displaystyle W=\sum\limits_{i=1}^N\int\limits_0^TI_i(T)\Phi_i(T)\frac{t}{T^2}\,dt=\frac{1}{2}\sum\limits_{i=1}^NI_i(T)\Phi_i(T)

Now we can plug in our original expression for the flux:

\displaystyle W=\frac{1}{2}\sum\limits_{i=1}^NI_i\int\limits_{C_i}A\cdot dr_i

This is great. But to be more general, let’s replace our currents with a current distribution:

\displaystyle W=\frac{1}{2}\int\limits_{\mathbb{R}^3}A\cdot J\,dV

Now we can use Ampère’s law to write

\displaystyle\begin{aligned}W&=\frac{1}{2\mu_0}\int\limits_{\mathbb{R}^3}A\cdot(\nabla\times B)\,dV\\&=\frac{1}{2\mu_0}\int\limits_{\mathbb{R}^3}B\cdot(\nabla\times A)-\nabla\cdot(A\times B)\,dV\\&=\frac{1}{2\mu_0}\int\limits_{\mathbb{R}^3}B\cdot B\,dV-\frac{1}{2\mu_0}\int\limits_{\mathbb{R}^3}\nabla\cdot(A\times B)\,dV\end{aligned}

We can pull the same sort of trick as last time to make the second integral go away; use the divergence theorem to convert to

\displaystyle\frac{1}{2\mu_0}\lim\limits_{R\to\infty}\int\limits_{S_R}(A\times B)\cdot dA

and take the surface far enough away that the integral becomes negligible. We handwave that A\times B falls off roughly as the inverse fifth power of R, while the area of S_R only grows as the second power, and say that the term goes to zero.

So now we have a similar expression as last time for a magnetic energy density:

\displaystyle u_B=\frac{1}{2\mu_0}\lvert B\rvert^2
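
As a sanity check on this density, here is a small sketch with illustrative numbers of my own: it computes the energy stored in an idealized long solenoid two ways, once by integrating u_B over the interior and once from the ramp-up formula W=\frac{1}{2}I\Phi above, using the standard result that the field inside an ideal solenoid with n turns per unit length is B=\mu_0nI.

```python
import math

mu_0 = 4e-7 * math.pi        # magnetic constant, H/m (to good approximation)
n = 1000.0                   # turns per metre
I = 2.0                      # current, A
A = 1e-3                     # cross-sectional area, m^2
l = 0.5                      # solenoid length, m

B = mu_0 * n * I                        # uniform field inside an ideal solenoid
W_density = B**2 / (2 * mu_0) * A * l   # integrate u_B over the interior volume

flux_linkage = (n * l) * B * A          # flux through all n*l turns of the circuit
W_ramp = 0.5 * I * flux_linkage         # W = (1/2) I Phi from the ramp-up argument

print(W_density, W_ramp)                # the two agree
```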

Again, we can check the units; the magnetic field has units of force per unit charge per unit velocity:

\displaystyle\frac{\mathrm{kg}}{\mathrm{C}\cdot\mathrm{s}}

while the magnetic constant has units of henries per meter:

\displaystyle\frac{\mathrm{m}\cdot\mathrm{kg}}{\mathrm{C}^2}

Putting together an inverse factor of the magnetic constant and two factors of the magnetic field, we get:

\displaystyle\frac{\mathrm{C}^2}{\mathrm{m}\cdot\mathrm{kg}}\frac{\mathrm{kg}}{\mathrm{C}\cdot\mathrm{s}}\frac{\mathrm{kg}}{\mathrm{C}\cdot\mathrm{s}}=\frac{\mathrm{kg}\cdot\mathrm{m}^2}{\mathrm{m}^3\cdot\mathrm{s}^2}=\frac{\mathrm{J}}{\mathrm{m}^3}

or, units of energy per unit volume, just like we expect for an energy density.

February 14, 2012 Posted by | Electromagnetism, Mathematical Physics | 4 Comments

Energy and the Electric Field

Okay, now let’s consider the electric field from the perspective of energy. We have an idea that this might be interesting because we know that the field produces a force, and forces and energies interact in interesting ways.

So recall that if we have a “test charge” q at a point p in an electric field E it experiences a force F=qE(p). As we saw when discussing Faraday’s law, for a static electric field we can write E=-\nabla\phi for some “electric potential” function \phi. Thus we can also write F=-\nabla U for the potential energy function U=q\phi.

Now, say the field is generated by a charge distribution \rho; how much potential energy is contained in the force the field exerts on the little bit of charge at p? We count \rho(p)\phi(p) per unit volume, but integrating this over all of space counts too much: each interaction shows up twice, once from the rest of the distribution acting on the bit of charge at p, and once from that bit acting back on the rest. Taking half to correct for the double-counting, we find the total potential energy by integrating

\displaystyle U=\frac{1}{2}\int\limits_{\mathbb{R}^3}\rho(p)\phi(p)\,d^3p

Now, Gauss’ law tells us that \rho=\epsilon_0\nabla\cdot E, so we substitute:

\displaystyle U=\frac{1}{2}\int\limits_{\mathbb{R}^3}\epsilon_0(\nabla\cdot E)\phi\,dV

Next we use a form of the product rule — \nabla\cdot(fV)=(\nabla f)\cdot V+f(\nabla\cdot V) — and run it backwards to write:

\displaystyle\begin{aligned}U&=\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}\nabla\cdot(\phi E)-(\nabla\phi)\cdot E\,dV\\&=\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}\nabla\cdot(\phi E)\,dV+\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}(-\nabla\phi)\cdot E\,dV\\&=\frac{\epsilon_0}{2}\lim\limits_{R\to\infty}\int\limits_{B_R}\nabla\cdot(\phi E)+\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}E\cdot E\,dV\end{aligned}

where we evaluate the first integral over space by evaluating it over the solid ball of radius R and taking the limit as R goes off to infinity. The divergence theorem says we can write:

\displaystyle\begin{aligned}U&=\frac{\epsilon_0}{2}\lim\limits_{R\to\infty}\int\limits_{S_R}\phi E\cdot dA+\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}E\cdot E\,dV\\&=\frac{\epsilon_0}{2}\int\limits_{\mathbb{R}^3}E\cdot E\,dV\end{aligned}

where, as usual, we have taken the charge distribution to be compactly supported, so far away from it the potential \phi falls off like \frac{1}{R} and the field E falls off like \frac{1}{R^2}; the integrand \phi E thus shrinks like \frac{1}{R^3} while the area of the sphere only grows like R^2, and the surface integral vanishes in the limit. Yes, this is very hand-wavy, but this is how the physicists do it.

Anyway, what does this tell us? It means that a static electric field contains energy with a density

\displaystyle u_E=\frac{1}{2}\epsilon_0\lvert E\rvert^2

which we can integrate over any region of space to find the electrostatic potential energy contained in the field.
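
As a concrete check (purely my own illustration, with made-up values for the charge and radius), here is a short numerical computation of the field energy of a uniformly charged ball, comparing the integral of u_E over space with the standard closed-form answer \frac{3Q^2}{20\pi\epsilon_0R}:

```python
import numpy as np

eps_0 = 8.8541878128e-12     # electric constant, F/m
Q = 1e-9                     # total charge, C (one nanocoulomb)
R = 0.01                     # radius of the ball, m
k = 1 / (4 * np.pi * eps_0)

def E(r):
    """Field magnitude of a uniformly charged ball: linear inside, Coulomb outside."""
    return np.where(r < R, k * Q * r / R**3, k * Q / r**2)

# Integrate u_E = (eps_0/2)|E|^2 over space in thin spherical shells, cutting off
# far outside the ball where the remaining contribution is negligible.
dr = R / 5000
r = np.arange(dr / 2, 200 * R, dr)
u_E = 0.5 * eps_0 * E(r)**2
energy = np.sum(u_E * 4 * np.pi * r**2 * dr)

exact = 3 * Q**2 / (20 * np.pi * eps_0 * R)
print(energy, exact)         # agree to within about half a percent (cutoff error)
```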

We can also check the units here; the electric field has units of force per unit charge:

\displaystyle\frac{\mathrm{kg}\cdot\mathrm{m}}{\mathrm{C}\cdot\mathrm{s}^2}

while the electric constant has units of farads per meter:

\displaystyle\frac{\mathrm{s}^2\cdot\mathrm{C}^2}{\mathrm{m}^3\cdot\mathrm{kg}}

Putting these together — two factors of E and one of \epsilon_0 — we find the units:

\displaystyle\frac{\mathrm{s}^2\cdot\mathrm{C}^2}{\mathrm{m}^3\cdot\mathrm{kg}}\frac{\mathrm{kg}\cdot\mathrm{m}}{\mathrm{C}\cdot\mathrm{s}^2}\frac{\mathrm{kg}\cdot\mathrm{m}}{\mathrm{C}\cdot\mathrm{s}^2}=\frac{\mathrm{kg}\cdot\mathrm{m}^2}{\mathrm{m}^3\cdot\mathrm{s}^2}=\frac{\mathrm{J}}{\mathrm{m}^3}

Joules per cubic meter — energy per unit of volume, just as we’d expect for an energy density.

February 14, 2012 Posted by | Electromagnetism, Mathematical Physics | 5 Comments
