## Matrices III

Given two finite-dimensional vector spaces $U$ and $V$, with bases $\{u_i\}$ and $\{v_j\}$ respectively, we know how to build a tensor product $U\otimes V$: use the basis $\{u_i\otimes v_j\}$.
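As a quick numerical sketch of this construction (the variable names and dimensions here are illustrative, with the standard basis of $\mathbb{R}^m$ and $\mathbb{R}^n$ standing in for the chosen bases), NumPy's `kron` builds the coordinate vector of each $u_i\otimes v_j$, and the $mn$ products do form a basis:

```python
import numpy as np

m, n = 2, 3  # dim U = m, dim V = n (illustrative sizes)

# Standard basis vectors of R^m and R^n stand in for the u_i and v_j.
U_basis = [np.eye(m)[i] for i in range(m)]
V_basis = [np.eye(n)[j] for j in range(n)]

# Coordinates of u_i (x) v_j: the Kronecker product of the coordinate vectors.
tensor_basis = [np.kron(u, v) for u in U_basis for v in V_basis]

# Stacked as rows, the m*n products give exactly the standard basis of
# R^(m*n), so the tensor product has dimension m*n.
stacked = np.vstack(tensor_basis)
assert stacked.shape == (m * n, m * n)
assert np.allclose(stacked, np.eye(m * n))
```

Note the ordering convention: $u_i\otimes v_j$ lands in slot $in + j$, which matches how `np.kron` interleaves indices.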

But an important thing about the tensor product is that it’s a *functor*. That is, if we have linear transformations $S:U\rightarrow U'$ and $T:V\rightarrow V'$, then we get a linear transformation $S\otimes T:U\otimes V\rightarrow U'\otimes V'$. So what does *this* operation look like in terms of matrices?
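Functoriality also means the construction respects composition: $(S'\circ S)\otimes(T'\circ T) = (S'\otimes T')\circ(S\otimes T)$. Here is a numerical sanity check of that identity (a sketch, using `np.kron` as the matrix realization of $\otimes$ and random matrices as stand-ins for the transformations):

```python
import numpy as np

rng = np.random.default_rng(0)

# S: U -> U' and S2: U' -> U''; similarly T and T2 on the other side.
S  = rng.standard_normal((4, 3))
S2 = rng.standard_normal((2, 4))
T  = rng.standard_normal((5, 3))
T2 = rng.standard_normal((3, 5))

# Functoriality: (S2 S) (x) (T2 T) == (S2 (x) T2)(S (x) T)
lhs = np.kron(S2 @ S, T2 @ T)
rhs = np.kron(S2, T2) @ np.kron(S, T)
assert np.allclose(lhs, rhs)
```

In matrix language this is the "mixed-product property" of the Kronecker product.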

First we have to remember exactly how we get the tensor product $S\otimes T$. Clearly we can consider the function $S\times T:U\times V\rightarrow U'\times V'$. Then we can compose with the bilinear function $U'\times V'\rightarrow U'\otimes V'$ to get a bilinear function from $U\times V$ to $U'\otimes V'$. By the universal property, this must factor uniquely through a linear function $U\otimes V\rightarrow U'\otimes V'$. It is this map we call $S\otimes T$.

We have to pick bases $\{u'_k\}$ of $U'$ and $\{v'_l\}$ of $V'$. This gives us matrix coefficients $s_i^k$ for $S$ and $t_j^l$ for $T$. To calculate the matrix for $S\otimes T$ we have to evaluate it on the basis elements $u_i\otimes v_j$ of $U\otimes V$. By definition we find:

$$\left[S\otimes T\right](u_i\otimes v_j) = S(u_i)\otimes T(v_j) = \left(s_i^k u'_k\right)\otimes\left(t_j^l v'_l\right) = s_i^k t_j^l\,(u'_k\otimes v'_l)$$

that is, the matrix coefficient between the index pair $(i,j)$ and the index pair $(k,l)$ is $s_i^k t_j^l$.
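We can check this coefficient formula entry by entry (a sketch: the sizes are arbitrary, and the index pairs $(k,l)$ and $(i,j)$ are flattened in the ordering `np.kron` uses, namely $kq+l$ for a $q$-row second factor):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((2, 3))  # the matrix (s_i^k): 3-dim U to 2-dim U'
T = rng.standard_normal((4, 5))  # the matrix (t_j^l): 5-dim V to 4-dim V'

K = np.kron(S, T)  # the matrix of S (x) T in the product bases

# Row pair (k, l) flattens to k*4 + l, column pair (i, j) to i*5 + j;
# the coefficient there should be s_i^k * t_j^l.
for k in range(2):
    for l in range(4):
        for i in range(3):
            for j in range(5):
                assert np.isclose(K[k * 4 + l, i * 5 + j], S[k, i] * T[l, j])
```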

It’s not often taught anymore, but there is a name for this operation: the Kronecker product. If we write the matrices (as opposed to just their coefficients) as $S$ and $T$, then we write the Kronecker product as $S\otimes T$.
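Concretely, the Kronecker product replaces each entry of the first matrix by a scaled copy of the second, giving a block matrix. A small sketch (the particular numbers are just for illustration):

```python
import numpy as np

S = np.array([[1, 2],
              [3, 4]])
T = np.array([[0, 5],
              [6, 7]])

# Each entry s of S is replaced by the block s * T.
K = np.kron(S, T)
expected = np.block([[1 * T, 2 * T],
                     [3 * T, 4 * T]])
assert np.array_equal(K, expected)
```

This block picture is just the coefficient formula again: the $(k,l),(i,j)$ entry of the big matrix is the product of an entry of $S$ with an entry of $T$.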