The Unapologetic Mathematician

Mathematics for the interested outsider

Matrices II

With the summation convention firmly in hand, we continue our discussion of matrices.

We’ve said before that the category of vector spaces is enriched over itself. That is, if we have vector spaces U and V over the field \mathbb{F}, the set of linear transformations \hom(U,V) is itself a vector space over \mathbb{F}. In fact, it inherits this structure from the one on V. We define the sum and the scalar product

\left[S+T\right](u)=S(u)+T(u)
\left[cT\right](u)=cT(u)

for linear transformations S and T from U to V, and for a constant c\in\mathbb{F}. Verifying that these are also linear transformations is straightforward.
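As a concrete sketch of these pointwise definitions (the helper names and sample maps here are my own, not from the post), we can represent linear maps as Python functions on tuples and define the sum and scalar product exactly as in the formulas above, then spot-check that the sum is again additive:

```python
def add_maps(S, T):
    """The sum [S+T](u) = S(u) + T(u), computed pointwise."""
    return lambda u: tuple(s + t for s, t in zip(S(u), T(u)))

def scale_map(c, T):
    """The scalar product [cT](u) = c*T(u), computed pointwise."""
    return lambda u: tuple(c * t for t in T(u))

# Two sample linear maps from F^2 to F^2 (floats standing in for F)
S = lambda u: (u[0] + u[1], u[0])
T = lambda u: (2 * u[0], u[1] - u[0])

ST = add_maps(S, T)
cT = scale_map(3, T)

# Spot-check additivity of the sum map: (S+T)(u+v) == (S+T)(u) + (S+T)(v)
u, v = (1.0, 2.0), (3.0, -1.0)
uv = tuple(a + b for a, b in zip(u, v))
lhs = ST(uv)
rhs = tuple(a + b for a, b in zip(ST(u), ST(v)))
print(lhs == rhs)  # True for this sample
```

A single numerical check is of course no substitute for the (easy) general proof, but it shows how the structure on \hom(U,V) really is borrowed pointwise from the structure on V.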

So what do these structures look like in the language of matrices? If U and V are finite-dimensional, let’s pick bases \left\{e_i\right\} of U and \left\{f_j\right\} of V. Then we get matrix coefficients s_i^j and t_i^j, where i indexes the basis of U and j indexes the basis of V. With these in hand, we can calculate the matrices of the sum and scalar product above.

We do this, as usual, by calculating the value the transformations take at each basis element. First, the sum:

\left[S+T\right](e_i)=S(e_i)+T(e_i)=s_i^jf_j+t_i^jf_j=(s_i^j+t_i^j)f_j

and now the scalar product:

\left[cT\right](e_i)=cT(e_i)=(ct_i^j)f_j

so we calculate the matrix coefficients of the sum of two linear transformations by adding the corresponding matrix coefficients of each transformation, and the matrix coefficients of the scalar product by multiplying each coefficient by the same scalar.
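The same calculation can be run mechanically (again an illustrative sketch, with hypothetical helper names): extract the matrix of a transformation by applying it to each basis vector, and confirm that the matrix of S+T is the entrywise sum, and the matrix of cT the entrywise scalar multiple:

```python
def matrix_of(T, dim_U):
    """Matrix of T: column i holds the coordinates of T(e_i)."""
    cols = []
    for i in range(dim_U):
        e_i = [0.0] * dim_U
        e_i[i] = 1.0
        cols.append(T(e_i))
    # transpose the list of columns into a list of rows
    return [list(row) for row in zip(*cols)]

def apply(M, u):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(m * x for m, x in zip(row, u)) for row in M]

A = [[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]]  # matrix of S : F^2 -> F^3
B = [[0.0, 1.0], [1.0, 1.0], [2.0, 2.0]]   # matrix of T : F^2 -> F^3

S = lambda u: apply(A, u)
T = lambda u: apply(B, u)

# The matrix of S+T is the entrywise sum of the matrices
sum_matrix = matrix_of(lambda u: [s + t for s, t in zip(S(u), T(u))], 2)
entrywise_sum = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(sum_matrix == entrywise_sum)  # True: coefficients simply add

# The matrix of cT is the entrywise scalar multiple
c = 5.0
scaled_matrix = matrix_of(lambda u: [c * t for t in T(u)], 2)
print(scaled_matrix == [[c * b for b in row] for row in B])  # True
```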

May 22, 2008 Posted by | Algebra, Linear Algebra | 1 Comment