The Einstein Summation Convention
Look at the formulas we were using yesterday. There are a lot of summations in there, and a lot of big sigmas. Those get really tiring to write over and over, and they get tiring really quickly. Back when Einstein was writing up his papers, he used a lot of linear transformations, and wrote them all out in matrices. Accordingly, he used a lot of those big sigmas.
When we’re typing nowadays, or when we write on a pad or on the board, this isn’t a problem. But remember that up until very recently, publications had to actually set type. Actual little pieces of metal with characters raised (and reversed!) on them would get slathered with ink and pressed to paper. Incidentally, this is why companies that produce fonts are called “type foundries”. They actually forged those metal bits with letter shapes in different styles, and sold sets of them to printers.
Now Einstein was using a lot of these big sigmas, and there were pages that had so many of them that the printer would run out! Even setting and printing one page at a time, they just didn't have enough little pieces of metal with big sigmas on them to handle it. Clearly something needed to be done to cut down on the demand for them.
Here we note that we’re always summing over some basis. Even if there’s no basis element in a formula — say, the formula for a matrix product — the summation is over the dimension of some vector space. We also notice that when we chose to write some of our indices as subscripts and some as superscripts, we’re always summing over one of each. We now adopt the convention that if we ever see a repeated index — once as a superscript and once as a subscript — we’ll read that as summing over an appropriate basis.
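As a computational aside, this is essentially the convention behind numpy's einsum function, which sums over any index that appears twice (though numpy doesn't distinguish superscripts from subscripts). A minimal sketch, with made-up arrays:

```python
import numpy as np

# The repeated index i is summed over automatically: this computes
# a_i b^i, i.e. the inner product sum(a[i] * b[i] for all i).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.einsum('i,i->', a, b))  # 32.0
```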
For example, when we wanted to write a vector $v$, we had to take the basis $\{e_i\}$ of $V$ and write up the sum

$\displaystyle v = \sum\limits_{i=1}^{\dim(V)} v^i e_i$

but now we just write $v = v^i e_i$. The repeated index $i$ and the fact that we're talking about a vector in $V$ means we sum for $i$ running from $1$ to the dimension of $V$. Similarly we write out the value of a linear transformation $T: V \rightarrow W$ on a basis vector: $T(e_i) = t_i^j f_j$, where $\{f_j\}$ is a basis of $W$. Here we determine from context that $j$ should run from $1$ to the dimension of $W$.
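To see the first example in code, here's a small numpy sketch (the particular basis and coefficients are my own illustration) contracting the components $v^i$ against the basis vectors $e_i$:

```python
import numpy as np

# Basis of V stored as rows: E[i] is the basis vector e_i.
# An arbitrary, non-orthonormal basis chosen for illustration.
E = np.array([[1.0, 0.0],
              [1.0, 1.0]])

coeffs = np.array([3.0, 2.0])  # the components v^i

# v = v^i e_i -- the repeated index i is summed for us.
v = np.einsum('i,ij->j', coeffs, E)
print(v)  # [5. 2.], i.e. 3*e_0 + 2*e_1
```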
What about finding the coefficients of a linear transformation acting on a vector? Before we wrote this as

$\displaystyle \sum\limits_{i=1}^{\dim(V)} t_i^j v^i$

whereas now we write the result as $t_i^j v^i$. Since the $v^i$ are the coefficients of a vector in $V$, $i$ must run from $1$ to the dimension of $V$.
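In code, this contraction is just the usual matrix-vector product. A sketch under the same made-up conventions as above (rows of the array indexed by $j$, columns by $i$):

```python
import numpy as np

# t_i^j with dim(W) = 3 rows (index j) and dim(V) = 2 columns (index i).
T = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
v = np.array([3.0, 2.0])  # components v^i

# w^j = t_i^j v^i : contract over the repeated index i.
w = np.einsum('ji,i->j', T, v)
print(w)      # [7. 2. 9.]
print(T @ v)  # the same thing, as an ordinary matrix-vector product
```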
And similarly given linear transformations $S: U \rightarrow V$ and $T: V \rightarrow W$ represented (given choices of bases) by the matrices with components $s_i^j$ and $t_j^k$, the matrix of their product is then written $t_j^k s_i^j$. Again, we determine from context that we should be summing $j$ over a set indexing a basis for $V$.
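And the composite is ordinary matrix multiplication. A sketch with arbitrary random matrices (the dimensions are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 2))  # s_i^j : maps dim(U) = 2 into dim(V) = 3
T = rng.standard_normal((4, 3))  # t_j^k : maps dim(V) = 3 into dim(W) = 4

# (TS)_i^k = t_j^k s_i^j : the shared index j, running over a basis of V,
# is the one that gets summed.
TS = np.einsum('kj,ji->ki', T, S)
print(np.allclose(TS, T @ S))  # True -- exactly matrix multiplication
```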
One very important thing to note is that it's not going to matter what basis for $V$ we use here! I'm not going to prove this quite yet, but built right into this notation is the fact that the composite of the two transformations is completely independent of the choice of basis of $V$. Of course, the matrix of the composite still depends on the bases of $U$ and $W$ we pick, but the dependence on $V$ vanishes as we take the sum.
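While the proof is deferred, we can at least check this numerically. In the sketch below (again with made-up matrices), changing the basis of $V$ by an invertible matrix $P$ rewrites $S$ as $P^{-1}S$ and $T$ as $TP$, and the composite comes out the same:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((3, 2))  # S : U -> V, written in some basis of V
T = rng.standard_normal((4, 3))  # T : V -> W, written in the same basis of V
P = rng.standard_normal((3, 3))  # change of basis on V (invertible with probability 1)

# In the new basis of V, coordinates transform by P^{-1}, so:
S_new = np.linalg.inv(P) @ S     # S now outputs coordinates in the new basis
T_new = T @ P                    # T now accepts coordinates in the new basis

# The dependence on the basis of V cancels in the composite.
print(np.allclose(T_new @ S_new, T @ S))  # True
```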
Einstein had a slightly easier time of things: he was always dealing with four-dimensional vector spaces, so all his indices had the same range of summation. We've got to pay some attention here and be careful about which vector space a given index refers to, but in the long run the convention saves a lot of time.