So, let’s say $V$ and $W$ are inner product spaces with orthonormal bases $\{e_i\}$ and $\{f_j\}$, respectively, and let $T:V\rightarrow W$ be a linear map from one to the other. We know that we can write down the matrix $\left(t_i^j\right)$ for $T$, where the matrix entries are defined as the coefficients in the expansion

$\displaystyle T(e_i)=\sum\limits_j t_i^j f_j$
But now that we’ve got an inner product on $W$, it will be easy to extract these coefficients. Just consider the inner product

$\displaystyle\langle f_j,T(e_i)\rangle=\left\langle f_j,\sum\limits_k t_i^k f_k\right\rangle=\sum\limits_k t_i^k\langle f_j,f_k\rangle=t_i^j$

where the last step uses the orthonormality of the basis: $\langle f_j,f_k\rangle=\delta_{jk}$.
Presto! We have a nice, neat function that takes a linear map $T$ and gives us back the $i$–$j$ entry in its matrix — with respect to the appropriate bases, naturally.
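Here’s a quick numerical sketch of this in NumPy (my own illustration, assuming the standard inner product on $\mathbb{R}^n$; the dimensions and the randomly chosen orthonormal bases are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# V = R^3 and W = R^2 with the standard inner product. QR decomposition
# of a random matrix gives us random orthonormal bases.
E, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # columns e_i: basis of V
F, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # columns f_j: basis of W

T = rng.standard_normal((2, 3))  # an arbitrary linear map V -> W

# Extract the matrix entry t_i^j as the inner product <f_j, T(e_i)>.
t = np.array([[F[:, j] @ (T @ E[:, i]) for i in range(3)]
              for j in range(2)])

# Sanity check: expanding T(e_i) in the basis {f_j} with these
# coefficients reproduces T(e_i) itself.
for i in range(3):
    assert np.allclose(sum(t[j, i] * F[:, j] for j in range(2)),
                       T @ E[:, i])
```

Note that when the bases happen to be the standard ones, this just reads off the usual array entries of $T$.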
But this is also the root of a subtle, but important, shift in understanding what a matrix entry actually is. Up until now, we’ve thought of matrix entries as artifacts which happen to be useful for calculations. But now we’re very explicitly looking at the question “what scalar shows up in this slot of the matrix of a linear map with respect to these particular bases?” as a function. In fact, $\langle f_j,T(e_i)\rangle$ is now not just some scalar value peculiar to the transformation at hand; the assignment $T\mapsto\langle f_j,T(e_i)\rangle$ is a particular linear functional on the space $\hom(V,W)$ of all transformations from $V$ to $W$.
And, really, what do the indices $i$ and $j$ matter? If we rearranged the bases we’d find the same function in a new place in the new array. We could have taken this perspective before, with any vector space, but what we couldn’t have asked before is this more general question: “Given a vector $v\in V$ and a vector $w\in W$, how much of the image $T(v)$ is made up of $w$?” This new question only asks about these two particular vectors, and doesn’t care anything about any of the other basis vectors that may (or may not!) be floating around. But in the context of an inner product space, this question has an answer:

$\displaystyle\langle w,T(v)\rangle$
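To see concretely that each such pairing really is a linear functional on the space of transformations, here’s a small NumPy sketch (the helper name `matrix_element` is my own, purely for illustration):

```python
import numpy as np

def matrix_element(w, v):
    """Return the functional T |-> <w, T(v)>, for linear maps
    represented as NumPy arrays with the standard inner product."""
    return lambda T: w @ (T @ v)

rng = np.random.default_rng(1)
v = rng.standard_normal(3)  # any vector in V; no basis in sight
w = rng.standard_normal(2)  # any vector in W
phi = matrix_element(w, v)

S = rng.standard_normal((2, 3))
T = rng.standard_normal((2, 3))

# Linearity in the transformation: phi(2S + 3T) = 2 phi(S) + 3 phi(T).
assert np.isclose(phi(2 * S + 3 * T), 2 * phi(S) + 3 * phi(T))
```

The point is that `phi` never mentions a basis: it only needs the two vectors and the inner product.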
Any function of the form $T\mapsto\langle w,T(v)\rangle$ we’ll call a “matrix element”. We can use such matrix elements to probe linear transformations even without full bases to work with, sort of like the way we generalized “elements” of an abelian group to “members” of an object in an abelian category. This is especially useful when we move to the infinite-dimensional context and might find it hard to come up with a proper basis to make a matrix with. Instead, we can work with the collection of all matrix elements and use it in arguments in place of some particular collection of matrix elements which happen to come from particular bases.
Now it would be really neat if matrix elements themselves formed a vector space, but the situation’s sort of like when we constructed tensor products. Matrix elements are like the “pure” tensors $v\otimes w$. They (far more than) span the space of all linear functionals on the space $\hom(V,W)$ of linear transformations, just like pure tensors span the whole tensor product space. But almost all linear functionals have to be written as a nontrivial sum of matrix elements — they usually can’t be written with just one. Still, since they span we know that many properties which hold for all matrix elements will immediately hold for all linear functionals on $\hom(V,W)$.
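A standard example of such a functional is the trace on $\hom(V,V)$: for any orthonormal basis $\{e_i\}$ it’s the sum $\sum_i\langle e_i,T(e_i)\rangle$ of $n$ matrix elements, but (in dimension greater than one) it isn’t a single matrix element. Here’s my own quick NumPy check of the sum formula:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n))
e = np.eye(n)  # rows e[i]: the standard orthonormal basis of R^n

# The trace functional written as a sum of n matrix elements
# T |-> <e_i, T(e_i)>.
trace_via_elements = sum(e[i] @ (T @ e[i]) for i in range(n))
assert np.isclose(trace_via_elements, np.trace(T))
```

(One way to see the trace isn’t a single matrix element: the functional $T\mapsto\langle w,T(v)\rangle$ corresponds to a rank-one matrix, while the trace corresponds to the identity.)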