## Matrix notation

I just spent all day on the road back to NOLA to handle some end-of-month business, clean out my office, and so on. This one will have to do for today and tomorrow.

It gets annoying to write out matrices using the embedded LaTeX here, but I suppose I really should, just for thoroughness’ sake.

In general, a matrix is a collection of field elements with an upper and a lower index. We can write out all these elements in a rectangular array. The upper index should list the rows of our array, while the lower index should list the columns. The matrix $\left(t_j^i\right)$ with entries $t_j^i$ for $i$ running from $1$ to $m$ and $j$ running from $1$ to $n$ is written out in full as

$$\begin{pmatrix}t_1^1&t_2^1&\cdots&t_n^1\\t_1^2&t_2^2&\cdots&t_n^2\\\vdots&\vdots&\ddots&\vdots\\t_1^m&t_2^m&\cdots&t_n^m\end{pmatrix}$$

We call this an $m\times n$ matrix, because the array is $m$ rows high and $n$ columns wide.

There is a natural isomorphism $V\cong\hom(\mathbb{F},V)$. This means that every vector in dimension $m$, written out in components relative to a given basis, can be seen as an $m\times1$ “column vector”:

$$\begin{pmatrix}v^1\\v^2\\\vdots\\v^m\end{pmatrix}$$

Similarly, a linear functional on an $n$-dimensional space can be written as a $1\times n$ “row vector”:

$$\begin{pmatrix}\mu_1&\mu_2&\cdots&\mu_n\end{pmatrix}$$

Notice that evaluation of linear transformations is now just a special case of matrix multiplication! Let’s practice by writing out the composition of a linear functional $\mu\in\left(\mathbb{F}^m\right)^*$, a linear map $T:\mathbb{F}^n\rightarrow\mathbb{F}^m$, and a vector $v\in\mathbb{F}^n$.

$$\begin{pmatrix}\mu_1&\mu_2&\cdots&\mu_m\end{pmatrix}\begin{pmatrix}t_1^1&t_2^1&\cdots&t_n^1\\t_1^2&t_2^2&\cdots&t_n^2\\\vdots&\vdots&\ddots&\vdots\\t_1^m&t_2^m&\cdots&t_n^m\end{pmatrix}\begin{pmatrix}v^1\\v^2\\\vdots\\v^n\end{pmatrix}$$

A matrix product makes sense if and only if the number of columns in the left-hand matrix is the same as the number of rows in the right-hand matrix. That is, an $m\times n$ matrix and an $n\times p$ matrix can be multiplied. The result will be an $m\times p$ matrix. We calculate it by taking a row from the left-hand matrix and a column from the right-hand matrix. Since these are the same length (by assumption) we can multiply corresponding elements and sum up.
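
If it helps to see this bookkeeping done by machine, here is a minimal numpy sketch of the rule above; the shapes and entries are made-up assumptions for illustration, not anything from the original discussion.

```python
import numpy as np

m, n, p = 2, 3, 4

A = np.arange(m * n).reshape(m, n)   # an m-by-n matrix
B = np.arange(n * p).reshape(n, p)   # an n-by-p matrix

# The product is defined because A has n columns and B has n rows.
C = A @ B                            # the result is m-by-p
assert C.shape == (m, p)

# Entry (i, j) comes from row i of A and column j of B:
# multiply corresponding elements and sum up.
i, j = 0, 1
assert C[i, j] == sum(A[i, k] * B[k, j] for k in range(n))
```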

In the example above, the $m\times n$ matrix and the $n\times1$ matrix can be multiplied. There is only one column in the latter to pick, so we simply choose row $i$ out of $m$ on the left: $\begin{pmatrix}t_1^i&t_2^i&\cdots&t_n^i\end{pmatrix}$. Multiplying corresponding elements and summing gives the single field element $t_j^iv^j$ (remember the summation convention). We get $m$ of these elements, one for each row, and we arrange them in a new $m\times1$ matrix:

$$\begin{pmatrix}t_j^1v^j\\t_j^2v^j\\\vdots\\t_j^mv^j\end{pmatrix}$$

Then we can multiply the row vector by this column vector to get the $1\times1$ matrix:

$$\begin{pmatrix}\mu_it_j^iv^j\end{pmatrix}$$

Just like we slip back and forth between vectors and $m\times1$ matrices, we will usually consider a field element and the $1\times1$ matrix with that single entry as being pretty much the same thing.
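
The single entry $\mu_it_j^iv^j$ is a triple contraction under the summation convention. As an illustration (with example values of my own, not from the post), numpy’s einsum lets us write the index expression almost verbatim:

```python
import numpy as np

mu = np.array([1.0, 2.0])            # components mu_i of the functional
T = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])      # entries t^i_j: upper index = row
v = np.array([1.0, 1.0, 2.0])        # components v^j of the vector

# Repeated indices i and j are summed over, as in mu_i t^i_j v^j:
scalar = np.einsum('i,ij,j->', mu, T, v)
assert np.isclose(scalar, mu @ T @ v)
```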

The first multiplication here turned an $n$-dimensional (column) vector into an $m$-dimensional one, reflecting the source and target of the transformation $T$. Then we evaluated the linear functional $\mu$ on the resulting vector. But by the associativity of matrix multiplication we could have first multiplied on the left:

$$\begin{pmatrix}\mu_it_1^i&\mu_it_2^i&\cdots&\mu_it_n^i\end{pmatrix}$$

turning the linear functional on $\mathbb{F}^m$ into one on $\mathbb{F}^n$. But this is just the dual transformation $T^*(\mu)$! Then we can evaluate this on the column vector to get the same result: $\mu_it_j^iv^j$.
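
Here is a small sketch, again with made-up entries, checking that the two orders of evaluation agree, which is just the associativity claim above:

```python
import numpy as np

mu = np.array([[1.0, 2.0]])          # row vector: a functional on F^2
T = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])      # 2-by-3 matrix: a map F^3 -> F^2
v = np.array([[1.0], [1.0], [2.0]])  # column vector in F^3

lhs = mu @ (T @ v)                   # first T(v), then evaluate mu on it
rhs = (mu @ T) @ v                   # first the dual transform of mu, then v
assert np.allclose(lhs, rhs)         # associativity of matrix multiplication
```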

There is one slightly touchy thing we need to be careful about: Kronecker products. When the upper index is a pair $(i,k)$ with $1\leq i\leq m$ and $1\leq k\leq p$ we have to pick an order on the set of such pairs. We’ll always use the “lexicographic” order. That is, we start with $(1,1)$, then $(1,2)$, and so on until $(1,p)$ before starting over with $(2,1)$, $(2,2)$, and so on. Let’s write out a couple examples just to be clear:

$$\begin{pmatrix}a_1^1&a_2^1\\a_1^2&a_2^2\end{pmatrix}\otimes\begin{pmatrix}b_1^1&b_2^1\\b_1^2&b_2^2\end{pmatrix}=\begin{pmatrix}a_1^1b_1^1&a_1^1b_2^1&a_2^1b_1^1&a_2^1b_2^1\\a_1^1b_1^2&a_1^1b_2^2&a_2^1b_1^2&a_2^1b_2^2\\a_1^2b_1^1&a_1^2b_2^1&a_2^2b_1^1&a_2^2b_2^1\\a_1^2b_1^2&a_1^2b_2^2&a_2^2b_1^2&a_2^2b_2^2\end{pmatrix}$$

$$\begin{pmatrix}b_1^1&b_2^1\\b_1^2&b_2^2\end{pmatrix}\otimes\begin{pmatrix}a_1^1&a_2^1\\a_1^2&a_2^2\end{pmatrix}=\begin{pmatrix}b_1^1a_1^1&b_1^1a_2^1&b_2^1a_1^1&b_2^1a_2^1\\b_1^1a_1^2&b_1^1a_2^2&b_2^1a_1^2&b_2^1a_2^2\\b_1^2a_1^1&b_1^2a_2^1&b_2^2a_1^1&b_2^2a_2^1\\b_1^2a_1^2&b_1^2a_2^2&b_2^2a_1^2&b_2^2a_2^2\end{pmatrix}$$

So the Kronecker product depends on the order of multiplication. But this dependence is somewhat illusory. The only real difference is reordering the bases we use for the tensor products of the vector spaces involved, and so a change of basis can turn one into the other. This is an example of how matrices can carry artifacts of our choice of bases.
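
To see concretely that the two orders differ only by a relabeling of basis elements, here is a sketch (with illustrative matrices of my own choosing) exhibiting the “perfect shuffle” permutation relating np.kron(A, B) to np.kron(B, A):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

AB = np.kron(A, B)
BA = np.kron(B, A)
assert not np.array_equal(AB, BA)    # the two orders genuinely differ

# The "perfect shuffle" reorders the pair-indices (i, k) -> (k, i);
# applying it to both rows and columns of A (x) B recovers B (x) A.
m, n = A.shape
p, q = B.shape
rows = [i * p + k for k in range(p) for i in range(m)]
cols = [j * q + l for l in range(q) for j in range(n)]
assert np.array_equal(BA, AB[np.ix_(rows, cols)])
```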

This isn’t exactly relevant to what’s going on right now, but might be entertaining for people interested in the category theory entries from last year and, with the tensor symbols flying around again, maybe relevant here later:

Categorical semantics of linear logic: a survey

http://www.pps.jussieu.fr/~mellies/

A decent workout on a fair range of basic concepts.

Comment by Avery Andrews | May 31, 2008 |

That’s a good point, Avery. I think you already know where I’m going with this next week.

Comment by John Armstrong | May 31, 2008 |

[...] we have to be careful about what we’re saying. In accordance with our convention, the pair of indices $(i,k)$ (with $1\leq i\leq m$ and $1\leq k\leq p$) should be considered as the single index $p(i-1)+k$. It’s clear that [...]

Pingback by The Category of Matrices II « The Unapologetic Mathematician | June 3, 2008 |

[...] articles from Wikipedia. On reading a post on The Unapologetic Mathematician that discusses Kronecker product I went to Wikipedia and I’m now reading four or five articles on [...]

Pingback by Blockbuster yet again « Michael Cassidy Weblog | June 12, 2008 |

[...] Now here’s the really important thing: There’s a functor that assigns the finite-dimensional vector space $\mathbb{F}^n$ of $n$-tuples of elements of $\mathbb{F}$ to each object $n$ of $\mathbf{Mat}(\mathbb{F})$. Such a vector space of $n$-tuples comes with the basis $\left\{e_i\right\}$, where the vector $e_i$ has a $1$ in the $i$th place and a $0$ elsewhere. In matrix notation: [...]

Pingback by The Category of Matrices III « The Unapologetic Mathematician | June 23, 2008 |

[...] introduce one of the most popular applications of linear algebra, at least outside mathematics. Matrices can encode systems of linear equations, and matrix algebra can be used to solve [...]

Pingback by Linear Equations « The Unapologetic Mathematician | July 3, 2008 |

[...] this is all but writing out exactly our matrix notation! We can take the above system and rewrite it [...]

Pingback by The Matrix of a Linear System « The Unapologetic Mathematician | July 11, 2008 |