## Matrices I

Yesterday we talked about the high-level views of linear algebra. That is, we’re discussing the category of vector spaces over a field $\mathbb{F}$ and $\mathbb{F}$-linear transformations between them.

More concretely, now: we know that every vector space over $\mathbb{F}$ is free as a module over $\mathbb{F}$. That is, every vector space has a basis — a set of vectors so that every other vector can be uniquely written as an $\mathbb{F}$-linear combination of them — though a basis is far from unique. Just how nonunique it is will be one of our subjects going forward.

Now if we’ve got a linear transformation $T:V\rightarrow W$ from one finite-dimensional vector space to another, and if we have a basis $\{e_i\}_{i=1}^m$ of $V$ and a basis $\{f_j\}_{j=1}^n$ of $W$, we can use these to write the transformation in a particular form: as a matrix. Take the transformation and apply it to each basis element of $V$ to get vectors $T(e_i)$. These can be written uniquely as linear combinations

$$T(e_i)=\sum\limits_{j=1}^n t_i^j f_j$$

for certain $t_i^j\in\mathbb{F}$. These coefficients, collected together, we call a matrix. They’re enough to calculate the value of the transformation on any vector $v\in V$, because we can write

$$v=\sum\limits_{i=1}^m v^i e_i$$

We’re writing the indices of the components as superscripts here; just go with it. Then we can evaluate using linearity:

$$T(v)=T\left(\sum\limits_{i=1}^m v^i e_i\right)=\sum\limits_{i=1}^m v^i T(e_i)=\sum\limits_{i=1}^m\sum\limits_{j=1}^n v^i t_i^j f_j$$

So the coefficients $v^i$ defining the vector $v\in V$ and the matrix coefficients $t_i^j$ together give us the coefficients $\sum_{i=1}^m v^i t_i^j$ defining the vector $T(v)\in W$.
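As a concrete sketch (not from the original post; the function name and example matrix are my own), here is how the matrix coefficients $t_i^j$ compute the coefficients of $T(v)$ in plain Python. The convention below stores row $j$, column $i$, so column $i$ holds the coefficients of $T(e_i)$:

```python
def apply_matrix(t, v):
    """Coefficients of T(v): the j-th output coefficient is sum_i v^i t_i^j,
    where t[j][i] is the coefficient of f_j in T(e_i)."""
    m = len(v)  # dimension of the source space V
    return [sum(v[i] * row[i] for i in range(m)) for row in t]

# A hypothetical 3x2 example: V is 2-dimensional, W is 3-dimensional.
t = [[1, 2],
     [3, 4],
     [5, 6]]
v = [1, 1]
print(apply_matrix(t, v))  # [3, 7, 11]
```

This is just ordinary matrix-vector multiplication: each output coefficient is a row of the matrix paired against the coefficients of $v$.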

If we have another finite-dimensional vector space $U$ with basis $\{g_k\}_{k=1}^p$ and another transformation $S:W\rightarrow U$, then we have another matrix:

$$S(f_j)=\sum\limits_{k=1}^p s_j^k g_k$$

Now we can compose these two transformations and calculate the result on a basis element:

$$[S\circ T](e_i)=S\left(\sum\limits_{j=1}^n t_i^j f_j\right)=\sum\limits_{j=1}^n t_i^j S(f_j)=\sum\limits_{j=1}^n t_i^j\sum\limits_{k=1}^p s_j^k g_k=\sum\limits_{k=1}^p\left(\sum\limits_{j=1}^n s_j^k t_i^j\right)g_k$$

This last quantity in parens is then the matrix of the composite transformation $S\circ T$. Thus we can represent the operation of composition by this formula for matrix multiplication.
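The claim that matrix multiplication represents composition can be checked directly. Here is a small sketch (again my own names and example matrices, not the post's) that builds the product matrix entry by entry and verifies it agrees with applying the two maps in turn:

```python
def matmul(s, t):
    """Matrix of the composite S∘T: entry [k][i] = sum_j s[k][j] * t[j][i]."""
    p, n = len(s), len(s[0])
    m = len(t[0])
    assert len(t) == n, "columns of s must match rows of t"
    return [[sum(s[k][j] * t[j][i] for j in range(n)) for i in range(m)]
            for k in range(p)]

def apply_matrix(t, v):
    """Coefficients of T(v), as before."""
    return [sum(v[i] * row[i] for i in range(len(v))) for row in t]

# Hypothetical examples: T: V -> W is 3x2, S: W -> U is 2x3.
t = [[1, 2],
     [3, 4],
     [5, 6]]
s = [[1, 0, 1],
     [0, 1, 1]]
v = [1, 1]

# Applying S after T agrees with applying the product matrix once.
print(apply_matrix(s, apply_matrix(t, v)) == apply_matrix(matmul(s, t), v))  # True
```

Note the order: the matrix of $S\circ T$ is the product $st$, with the matrix of the map applied *second* on the left, mirroring the formula above.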
