The Unapologetic Mathematician

Mathematics for the interested outsider

The Category of Inner Product Spaces

So far we’ve been considering the category \mathbf{Vect} of vector spaces (over either \mathbb{R} or \mathbb{C}) and adding the structure of an inner product to some selected spaces. But of course there should be a category \mathbf{Inn} of inner product spaces.

Clearly the objects should be inner product spaces, and the morphisms should be linear maps, but what sorts of linear maps? Let’s just follow our noses and say “those that preserve the inner product”. That is, a linear map T:V\rightarrow W is a morphism of inner product spaces if and only if for any two vectors v_1,v_2\in V we have

\displaystyle\langle T(v_1),T(v_2)\rangle_W=\langle v_1,v_2\rangle_V

where the subscripts denote which inner product we’re using at each point.
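To make the condition concrete, here is a quick numeric sketch in Python (all names here are invented for illustration), checking it for the standard dot product on \mathbb{R}^2, with a rotation playing the role of T:

```python
import math

def dot(u, v):
    # the standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def rotate(theta):
    # rotation of R^2 by theta, as a linear map
    c, s = math.cos(theta), math.sin(theta)
    return lambda v: (c * v[0] - s * v[1], s * v[0] + c * v[1])

T = rotate(math.pi / 3)
v1, v2 = (1.0, 2.0), (-3.0, 0.5)

# <T(v1), T(v2)> = <v1, v2>, so T is a morphism of inner product spaces
assert abs(dot(T(v1), T(v2)) - dot(v1, v2)) < 1e-12
```

A map failing this condition, like v \mapsto 2v, would scale the inner product by 4 and so would not count as a morphism here.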

Of course, given any inner product space we can “forget” the inner product and get the underlying vector space. This is a forgetful functor, and the usual abstract nonsense can be used to show that it creates limits. And from there it’s straightforward to check that the category of inner product spaces inherits some nice properties from the category of vector spaces.

Most of the structures we get this way are pretty straightforward — just do the same constructions on the underlying vector spaces. But one in particular that we should take a close look at is the biproduct. What is the direct sum V\oplus W of two inner product spaces? The underlying vector space will be the direct sum of the underlying vector spaces of V and W, but what inner product should we use?

Well, if v_1 and v_2 are vectors in V, then they get included into V\oplus W. But the inclusions have to preserve the inner product between these two vectors, and so we must have

\displaystyle\langle\iota_V(v_1),\iota_V(v_2)\rangle_{V\oplus W}=\langle v_1,v_2\rangle_V

and similarly for any two vectors w_1 and w_2 in W we must have

\displaystyle\langle\iota_W(w_1),\iota_W(w_2)\rangle_{V\oplus W}=\langle w_1,w_2\rangle_W

That leaves only one question: what is the inner product of one vector from each space? Here we use a projection from the biproduct, which, as a morphism in this category, must also preserve the inner product. This lets us calculate

\displaystyle\langle\iota_V(v),\iota_W(w)\rangle_{V\oplus W}=\langle\pi_V(\iota_V(v)),\pi_V(\iota_W(w))\rangle_V=\langle v,0\rangle_V=0

Thus the inner product between vectors coming from different summands must be zero. That is, the images of the two summands in a direct sum must be orthogonal. Incidentally, this shows that the decomposition of a space into a subspace U\subseteq V and its orthogonal complement U^\perp is also a direct sum of inner product spaces.
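The inner product we have been forced into can be sketched numerically. In this Python illustration (the names are hypothetical), the inner product on V\oplus W is the sum of the two inner products, so vectors included from different summands come out orthogonal:

```python
def dot(u, v):
    # the standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def direct_sum_ip(p1, p2):
    # p = (v, w) with v in V, w in W; the biproduct inner product
    # is forced to be <v1, v2>_V + <w1, w2>_W
    (v1, w1), (v2, w2) = p1, p2
    return dot(v1, v2) + dot(w1, w2)

def incl_V(v, dim_W):
    # the inclusion iota_V: V -> V (+) W
    return (v, (0.0,) * dim_W)

def incl_W(w, dim_V):
    # the inclusion iota_W: W -> V (+) W
    return ((0.0,) * dim_V, w)

v, w = (1.0, 2.0), (3.0, -1.0, 4.0)

# vectors included from different summands are orthogonal
assert direct_sum_ip(incl_V(v, 3), incl_W(w, 2)) == 0.0
# and the inclusions preserve the original inner products
assert direct_sum_ip(incl_V(v, 3), incl_V((0.5, 0.5), 3)) == dot(v, (0.5, 0.5))
```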


May 6, 2009 - Posted by | Algebra, Linear Algebra

6 Comments »

  1. Sorry to have to say this, but I think there’s a bit of a problem here. This category is not abelian; for one thing, there’s no zero object. The trouble is that when we impose

    \langle T v, T w \rangle = \langle v, w \rangle

    it is automatic that T is injective, because we have

    |T v|^2 = |v|^2

    so T cannot take a nonzero vector to a zero vector.
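    (A concrete Python illustration of this argument, with made-up names: an inner-product-preserving map preserves squared norms, so only the zero vector can be sent to zero.)

```python
def dot(u, v):
    # the standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

# an inner-product-preserving map R^2 -> R^3: inclusion as a subspace
def T(v):
    return (v[0], v[1], 0.0)

v = (3.0, -4.0)
# ||T v||^2 = ||v||^2, so T v = 0 forces v = 0: T is injective,
# and no nonzero space admits such a map onto a zero object
assert dot(T(v), T(v)) == dot(v, v)
```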

    Comment by Todd Trimble | May 6, 2009 | Reply

Damn. On this one you’re right.

    Comment by John Armstrong | May 6, 2009 | Reply

  3. But another possibility might be to consider the category whose morphisms consist of adjoint pairs of linear maps T: V \to W, T^*: W \to V. I haven’t thought about this thoroughly, but it’s connected with other interesting categories I’ve thought about, e.g., Chu spaces.

    Comment by Todd Trimble | May 6, 2009 | Reply

  4. Please excuse me if I’m speaking out of turn on this, but I’ve had a moment to reflect on my last comment further, and maybe I could share a bit.

    As many readers will be aware, the adjoint of a linear map T: V \to W is the (uniquely determined) linear map T^*: W \to V such that

\langle T^* w, v \rangle_V = \langle w, T v \rangle_W

    Let me call Inner the category of inner product spaces whose morphisms V \to W are such adjoint pairs (T: V \to W, T^*: W \to V). (Throughout this discussion I’ll stick to finite-dimensional spaces.) Now one observation is that this category isn’t all that exciting, because the forgetful functor

    Inner \to Vect,

    taking a morphism (T, T^*) to T, is an equivalence. (The functor is full, faithful, and surjective on objects, which is enough.) But at least this assures that Inner is abelian.

    However, there is no canonical quasi-inverse to the forgetful functor, so there may be some things which we can say canonically with regard to Inner which cannot be canonically expressed in Vect. This applies in particular to the inner product on biproducts that John was alluding to above.

    I’ll spell this out a bit. We have an evident contravariant functor

    (-)^*: Inner \to Inner

    which takes a morphism (T, T^*) to (T^*, T). This is an involution in the sense that (-)^{**} equals the identity functor (on the nose). Being a contravariant involution, it takes coproducts to products and vice-versa.

    It thus takes biproducts to biproducts as well, but there’s a slightly subtle point that unless we choose the inner product structure on the biproduct “correctly”, this transpose functor (-)^* changes the biproduct structure. To be more precise, suppose that

    i_V: V \to V \oplus W \qquad i_W: W \to V \oplus W

    are the coproduct injections of a biproduct, and

    \pi_V: V \oplus W \to V \qquad \pi_W: V \oplus W \to W

    are the product projections, so that

    \pi_V i_V = 1_V \qquad \pi_W i_W = 1_W

    \pi_V i_W = 0 \qquad \pi_W i_V = 0

    (compatibility between coproduct and product structures). Then, a priori, there is no guarantee that we have

    i_V^* = \pi_V \qquad i_W^* = \pi_W

    i.e., the transpose functor (-)^* may take the coproduct data on a biproduct to a different product structure than the one we started with (hence change the biproduct structure).

    But it may be checked that we do get these equations by endowing the biproduct V \oplus W with the orthogonal direct sum that John was describing in his post.
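    For the standard inner products on \mathbb{R}^n this can be sketched numerically (in Python, with names invented for illustration): the adjoint of a matrix is its transpose, and the transpose of the inclusion into an orthogonal direct sum is exactly the matching projection.

```python
def dot(u, v):
    # the standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def apply(M, v):
    # matrix (tuple of rows) times vector
    return tuple(dot(row, v) for row in M)

def transpose(M):
    return tuple(zip(*M))

# With the standard inner products, the adjoint of T is its transpose:
# <T* w, v> = <w, T v>
T = ((1.0, 2.0), (0.0, -1.0), (3.0, 1.0))   # a map R^2 -> R^3
v, w = (1.0, -2.0), (0.5, 2.0, -1.0)
assert abs(dot(apply(transpose(T), w), v) - dot(w, apply(T, v))) < 1e-12

# For the orthogonal direct sum, the inclusion i_V: v |-> (v, 0) has
# the projection pi_V: (v, w) |-> v as its adjoint, i.e. i_V^* = pi_V.
i_V = ((1.0, 0.0), (0.0, 1.0), (0.0, 0.0), (0.0, 0.0))  # R^2 -> R^2 (+) R^2
assert transpose(i_V) == ((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0))
```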

    Comment by Todd Trimble | May 6, 2009 | Reply

  5. [...] vector spaces affect forms on those spaces. We’ve seen a hint when we talked about the category of inner product spaces: if we have a bilinear form on a space and a linear map , then we can “pull back” the [...]

    Pingback by Transformations of Bilinear Forms « The Unapologetic Mathematician | July 24, 2009 | Reply

  6. [...] They both start the same way: given root systems and in inner-product spaces and , we take the direct sum of the vector spaces, which makes vectors from each vector space orthogonal to vectors from the [...]

    Pingback by Coproduct Root Systems « The Unapologetic Mathematician | January 25, 2010 | Reply

