The Einstein Summation Convention
Look at the formulas we were using yesterday. There are a lot of summations in there, and a lot of big sigmas, and they get tiring to write over and over very quickly. Back when Einstein was writing up his papers, he used a lot of linear transformations and wrote them all out in matrices. Accordingly, he used a lot of those big sigmas.
When we’re typing nowadays, or when we write on a pad or on the board, this isn’t a problem. But remember that up until very recently, publications had to actually set type. Actual little pieces of metal with characters raised (and reversed!) on them would get slathered with ink and pressed to paper. Incidentally, this is why companies that produce fonts are called “type foundries”. They actually forged those metal bits with letter shapes in different styles, and sold sets of them to printers.
Now Einstein was using a lot of these big sigmas, and there were pages that had so many of them that the printer would run out! Even if they set only one page at a time and printed it off, they just didn't have enough little pieces of metal with big sigmas on them to handle it. Clearly something needed to be done to cut down on demand for them.
Here we note that we’re always summing over some basis. Even if there’s no basis element in a formula — say, the formula for a matrix product — the summation is over the dimension of some vector space. We also notice that when we chose to write some of our indices as subscripts and some as superscripts, we’re always summing over one of each. We now adopt the convention that if we ever see a repeated index — once as a superscript and once as a subscript — we’ll read that as summing over an appropriate basis.
For example, when we wanted to write a vector $v \in V$, we had to take the basis $\{e_i\}$ of $V$ and write up the sum

$\displaystyle v = \sum_{i=1}^{\dim(V)} v^i e_i$

but now we just write $v = v^i e_i$. The repeated index $i$ and the fact that we're talking about a vector in $V$ means we sum for $i$ running from $1$ to the dimension of $V$. Similarly we write out the value of a linear transformation on a basis vector: $T(e_j) = t_j^k f_k$. Here we determine from context that $k$ should run from $1$ to the dimension of $W$.
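If it helps to see the convention executed, here is a minimal numerical sketch in Python; the basis and components are invented for illustration. Fittingly, numpy's einsum function is named for exactly this convention: the index string 'i,ij->j' declares that the repeated index i is the one to be summed.

```python
import numpy as np

# A basis for a 3-dimensional space V (here: the standard basis of R^3,
# stored as rows) and the components v^i of a vector in that basis.
e = np.eye(3)                                # e[i] is the basis vector e_i
v_components = np.array([2.0, -1.0, 5.0])    # the components v^i

# The summation convention: v = v^i e_i means "sum over the repeated index i".
v = np.einsum('i,ij->j', v_components, e)

# The equivalent explicit sum, with the big sigma written out:
v_explicit = sum(v_components[i] * e[i] for i in range(3))
assert np.allclose(v, v_explicit)
print(v)   # [ 2. -1.  5.]
```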
What about finding the coefficients of a linear transformation acting on a vector? Before we wrote this as

$\displaystyle w^k = \sum_{j=1}^{\dim(V)} t_j^k v^j$

where now we write the result as $w^k = t_j^k v^j$. Since the $v^j$ are the coefficients of a vector in $V$, $j$ must run from $1$ to the dimension of $V$.
And similarly, given linear transformations $S: U \rightarrow V$ and $T: V \rightarrow W$ represented (given choices of bases) by the matrices with components $s_i^j$ and $t_j^k$, the matrix of their product is then written $t_j^k s_i^j$. Again, we determine from context that we should be summing $j$ over a set indexing a basis for $V$.
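And once more as a sketch, with dimensions chosen arbitrarily: summing the repeated middle index is exactly what ordinary matrix multiplication does.

```python
import numpy as np

# Matrices for S: U -> V and T: V -> W; the repeated index j runs over
# a basis of the middle space V.
s = np.random.rand(3, 4)   # components s^j_i: dim(V) = 3, dim(U) = 4
t = np.random.rand(2, 3)   # components t^k_j: dim(W) = 2

# (TS)^k_i = t^k_j s^j_i
ts = np.einsum('kj,ji->ki', t, s)
assert np.allclose(ts, t @ s)   # exactly matrix multiplication
```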
One very important thing to note here is that it's not going to matter what basis for $V$ we use here! I'm not going to prove this quite yet, but built right into this notation is the fact that the composite of the two transformations is completely independent of the choice of basis of $V$. Of course, the matrix of the composite still depends on the bases of $U$ and $W$ we pick, but the dependence on $V$ vanishes as we take the sum.
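While the proof waits, we can at least check the claim numerically. In this hypothetical sketch, changing the basis of $V$ by an invertible matrix p multiplies the matrix of $S$ by the inverse of p on one side and the matrix of $T$ by p on the other, and the two cancel in the sum over j.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.random((3, 4))   # S: U -> V, written in some basis of V
t = rng.random((2, 3))   # T: V -> W, written in the same basis of V

# Change the basis of V by an invertible matrix p. The components of S
# (whose outputs live in V) pick up p^{-1} on the left; the components
# of T (whose inputs come from V) pick up p on the right.
p = rng.random((3, 3)) + 3 * np.eye(3)   # nudged toward invertibility
s_new = np.linalg.inv(p) @ s
t_new = t @ p

# The dependence on the basis of V cancels in the sum over j:
assert np.allclose(np.einsum('kj,ji->ki', t_new, s_new),
                   np.einsum('kj,ji->ki', t, s))
```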
Einstein had a slightly easier time of things: he was always dealing with four-dimensional vector spaces, so all his indices had the same range of summation. We’ve got to pay some attention here and be careful about what vector space a given index is talking about, but in the long run it saves a lot of time.
Oh yuck. I’m firmly with Spivak on the matter of the Einstein summation convention – it’s awful, makes the mathematics unreadable and confusing.
I tried thrice to learn differential geometry, and failed each time. And one of the first things that really turned me off, every time, was the summation convention.
That said I don’t want to turn into yet another troll hounding your poor blog; so I’ll shut up about the summation convention after this.
Mikael, I think the problem is not the summation convention so much as working with matrices in the first place. Once you’re committed to using bases, the convention just simplifies notation. Unreadable formulas in terms of the summation convention are even more unreadable without it.
While I talk about linear algebra I'll use matrices, and I'll use the summation convention to simplify the notation. However (and as I'll say more explicitly soon), the point is to tie matrices to the abstract formulations and lift concepts up. Many people have seen matrices and matrix operations, and often a problem first arises in terms of matrices. Part of my coverage of linear algebra is about making the transition from matrices and bases to abstract linear transformations. When speaking abstractly I won't have to use the summation convention.