And here’s the post I wrote today:
First, we note that $a\leq a\vee b$ by definition. Since our complementation reverses order, we find $(a\vee b)^\perp\leq a^\perp$. Similarly, $(a\vee b)^\perp\leq b^\perp$. And thus we conclude that $(a\vee b)^\perp\leq a^\perp\wedge b^\perp$.
On the other hand, $a^\perp\wedge b^\perp\leq a^\perp$ by definition. Then we find $a=a^{\perp\perp}\leq\left(a^\perp\wedge b^\perp\right)^\perp$ by invoking order-reversal and the involutive property of our complement. Similarly, $b\leq\left(a^\perp\wedge b^\perp\right)^\perp$, and so $a\vee b\leq\left(a^\perp\wedge b^\perp\right)^\perp$. And thus we conclude $a^\perp\wedge b^\perp\leq(a\vee b)^\perp$. Putting this together with the other inequality, we get the first of DeMorgan’s laws: $(a\vee b)^\perp=a^\perp\wedge b^\perp$.
To get the other, just invoke the first law on the objects $a^\perp$ and $b^\perp$. We find $\left(a^\perp\vee b^\perp\right)^\perp=a^{\perp\perp}\wedge b^{\perp\perp}=a\wedge b$, and taking complements of both sides gives $a^\perp\vee b^\perp=(a\wedge b)^\perp$.
Similarly, the first of DeMorgan’s laws follows from the second.
Interestingly, DeMorgan’s laws aren’t just a consequence of order-reversal. It turns out that they’re equivalent to order-reversal. Now if $a\leq b$ then $a\vee b=b$. So $b^\perp=(a\vee b)^\perp=a^\perp\wedge b^\perp$. And thus $b^\perp\leq a^\perp$.
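To make this concrete, here’s a quick sanity check (a Python sketch of my own, not part of the argument above) on the power-set lattice of a three-element set, where meet is intersection, join is union, and the complement is the ordinary set complement:

```python
from itertools import combinations

# The power set of {0, 1, 2}, ordered by inclusion, is a bounded lattice:
# meet is intersection, join is union, and the set complement is an
# involutive, order-reversing complementation.
universe = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

def comp(a):
    return universe - a

for a in subsets:
    for b in subsets:
        # First DeMorgan law: (a v b)' = a' ^ b'
        assert comp(a | b) == comp(a) & comp(b)
        # Second DeMorgan law: (a ^ b)' = a' v b'
        assert comp(a & b) == comp(a) | comp(b)
        # DeMorgan plus involution gives order-reversal: a <= b forces b' <= a'
        if a <= b:
            assert comp(b) <= comp(a)
```

Running this over all 64 pairs of subsets exercises both laws and the order-reversal they imply.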
I just noticed in my drafts that this post, which I’d written last Friday, never went up.
Let’s say we have a real or complex vector space $V$ of finite dimension with an inner product, and let $T$ be a linear map from $V$ to itself. Further, let $\{v_1,\dots,v_n\}$ be a basis with respect to which the matrix of $T$ is upper-triangular. It turns out that we can then find an orthonormal basis which also gives us an upper-triangular matrix. And of course, we’ll use Gram-Schmidt to do it.

What it rests on is that an upper-triangular matrix means we have a nested sequence of invariant subspaces. If we define $V_k$ to be the span of $\{v_1,\dots,v_k\}$ then clearly we have a chain

$V_1\subseteq V_2\subseteq\cdots\subseteq V_n=V$

Further, the fact that the matrix of $T$ is upper-triangular means that $T(v_k)\in V_k$ for each $k$. And so the whole subspace is invariant: $T(V_k)\subseteq V_k$.

Now let’s apply Gram-Schmidt to the basis $\{v_1,\dots,v_n\}$ and get an orthonormal basis $\{e_1,\dots,e_n\}$. As a bonus, the span of $\{e_1,\dots,e_k\}$ is the same as the span of $\{v_1,\dots,v_k\}$, which is $V_k$. So we have exactly the same chain of invariant subspaces, and the matrix of $T$ with respect to the new orthonormal basis is still upper-triangular.
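If you want to see the span-preservation property explicitly, here’s a minimal Gram-Schmidt sketch (Python with NumPy; the function name and test vectors are my own illustration):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors.

    Crucially, the span of the first k outputs equals the span of the
    first k inputs, which is why the chain of invariant subspaces
    survives the process.
    """
    ortho = []
    for v in vectors:
        # Subtract the projections onto the vectors produced so far.
        u = v - sum((e @ v) * e for e in ortho)
        ortho.append(u / np.linalg.norm(u))
    return ortho

rng = np.random.default_rng(0)
vs = [rng.standard_normal(3) for _ in range(3)]
es = gram_schmidt(vs)

# Orthonormality: <e_i, e_j> = delta_ij
G = np.array([[ei @ ej for ej in es] for ei in es])
assert np.allclose(G, np.eye(3))

# Span preservation: each v_k already lies in the span of e_1, ..., e_k,
# so projecting it onto that span gives it back.
for k, v in enumerate(vs):
    proj = sum((e @ v) * e for e in es[: k + 1])
    assert np.allclose(proj, v)
```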
In particular, since every complex linear transformation has an upper-triangular matrix with respect to some basis, there must exist an orthonormal basis giving an upper-triangular matrix. For real transformations, of course, it’s possible that there isn’t any upper-triangular matrix at all. It’s also worth pointing out here that there’s no guarantee that we can push forward and get an orthonormal Jordan basis.
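The whole construction can be checked numerically. In this sketch (NumPy, with matrices of my own choosing), the columns of $B$ form a basis triangularizing $A$, and the QR factorization, which is exactly Gram-Schmidt applied to those columns, hands us the orthonormal triangularizing basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A map that is upper-triangular with respect to the (generally
# non-orthonormal) basis given by the columns of B.
T = np.triu(rng.standard_normal((n, n)))   # upper-triangular matrix
B = rng.standard_normal((n, n))            # generic invertible basis
A = B @ T @ np.linalg.inv(B)               # the same map in the standard basis

# QR factorization performs Gram-Schmidt on the columns of B: the columns
# of Q are orthonormal and span(Q[:, :k]) == span(B[:, :k]) for every k.
Q, R = np.linalg.qr(B)

# The matrix of the map in the orthonormal basis is R T R^{-1}, a product
# of upper-triangular matrices, so it is still upper-triangular.
T_new = Q.T @ A @ Q

# Everything below the diagonal vanishes (up to round-off).
assert np.allclose(np.tril(T_new, k=-1), 0)
```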
We know that the poset of subspaces of a vector space is a lattice. Now we can define complementary subspaces in a way that doesn’t depend on any choice of basis at all. So what does this look like in terms of the lattice?
First off, remember that the “meet” of two subspaces is their intersection, which is again a subspace. On the other hand their “join” is their sum as subspaces. But now we have a new operation called the “complement”. In general lattice-theory terms, a complement of an element $a$ in a bounded lattice (one that has a top element $1$ and a bottom element $0$) is an element $b$ so that $a\wedge b=0$ and $a\vee b=1$.
In particular, since the top subspace is $V$ itself, and the bottom subspace is $\mathbf{0}$, we can see that the orthogonal complement $W^\perp$ of a subspace $W$ satisfies these properties. The intersection $W\cap W^\perp$ is trivial, since the inner product is positive-definite, and the sum $W+W^\perp$ is all of $V$, as we’ve seen.
Even more is true. The orthogonal complement is involutive (when $V$ is finite-dimensional), and order-reversing, which makes it an “orthocomplement”. In lattice-theory terms, this means that $W^{\perp\perp}=W$, and that if $U\subseteq W$ then $W^\perp\subseteq U^\perp$.
First, let’s say we’ve got two subspaces $U\subseteq W$ of $V$. I say that $W^\perp\subseteq U^\perp$. Indeed, if $v$ is a vector in $W^\perp$ then $\langle w,v\rangle=0$ for all $w\in W$. But since any $u\in U$ is also a vector in $W$, we can see that $\langle u,v\rangle=0$, and so $v\in U^\perp$ as well. Thus orthogonal complementation is order-reversing.
Now let’s take a single subspace $W$ of $V$, and let $w$ be a vector in $W$. If $v$ is any vector in $W^\perp$, then $\langle v,w\rangle=\overline{\langle w,v\rangle}=0$ by the (conjugate) symmetry of the inner product and the definition of $W^\perp$. Thus $w$ is a vector in $W^{\perp\perp}$, and so $W\subseteq W^{\perp\perp}$. Note that this much holds whether $V$ is finite-dimensional or not.
On the other hand, if $V$ is finite-dimensional we can take an orthonormal basis $\{e_1,\dots,e_k\}$ of $W$ and expand it into an orthonormal basis $\{e_1,\dots,e_n\}$ of all of $V$. Then the new vectors $\{e_{k+1},\dots,e_n\}$ form a basis of $W^\perp$, so that $\dim(W)+\dim\left(W^\perp\right)=\dim(V)$. A vector in $W^{\perp\perp}$ is orthogonal to every vector in $W^\perp$ exactly when it can be written using only the first $k$ basis vectors, and thus lies in $W$. That is, $W^{\perp\perp}=W$ when $V$ is finite-dimensional.
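Since the finite-dimensional claims are concrete linear algebra, we can verify them numerically. This sketch (NumPy; the helper names are mine) computes orthogonal complements via the singular value decomposition and checks both the dimension count and $W^{\perp\perp}=W$:

```python
import numpy as np

def orth_complement(A):
    """Orthonormal basis (as columns) for the orthogonal complement of
    the column span of A, read off as the null space of A's transpose."""
    _, _, Vt = np.linalg.svd(A.T, full_matrices=True)
    r = np.linalg.matrix_rank(A)
    return Vt[r:].T

def projector(A):
    """Orthogonal projection onto the column span of A (assumed to have
    full column rank, as it does in this example)."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

rng = np.random.default_rng(1)
n, k = 5, 2
M = rng.standard_normal((n, k))     # W = column span of M inside R^5

W_perp = orth_complement(M)
W_perp_perp = orth_complement(W_perp)

# dim(W) + dim(W-perp) = dim(V)
assert M.shape[1] + W_perp.shape[1] == n

# W-perp-perp = W, with subspaces compared via their projection matrices
assert np.allclose(projector(W_perp_perp), projector(M))
```

Comparing projection matrices is the cleanest way to test equality of subspaces, since any two bases of the same subspace give the same orthogonal projector.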
So far we’ve been considering the category of vector spaces (over either $\mathbb{R}$ or $\mathbb{C}$) and adding the structure of an inner product to some selected spaces. But of course there should be a category of inner product spaces.
Clearly the objects should be inner product spaces, and the morphisms should be linear maps, but what sorts of linear maps? Let’s just follow our noses and say “those that preserve the inner product”. That is, a linear map $f:V\rightarrow W$ is a morphism of inner product spaces if and only if for any two vectors $v_1,v_2\in V$ we have

$\langle f(v_1),f(v_2)\rangle_W=\langle v_1,v_2\rangle_V$

where the subscripts denote which inner product we’re using at each point.
Of course, given any inner product space we can “forget” the inner product and get the underlying vector space. This is a forgetful functor, and the usual abstract nonsense can be used to show that it creates limits. And from there it’s straightforward to check that the category of inner product spaces inherits some nice properties from the category of vector spaces.
Most of the structures we get this way are pretty straightforward — just do the same constructions on the underlying vector spaces. But one in particular that we should take a close look at is the biproduct. What is the direct sum $V\oplus W$ of two inner product spaces? The underlying vector space will be the direct sum of the underlying vector spaces of $V$ and $W$, but what inner product should we use?
Well, if $v_1$ and $v_2$ are vectors in $V$, then they get included into $V\oplus W$. But the inclusions have to preserve the inner product between these two vectors, and so we must have

$\langle(v_1,0),(v_2,0)\rangle_{V\oplus W}=\langle v_1,v_2\rangle_V$

and similarly for any two vectors $w_1$ and $w_2$ in $W$ we must have

$\langle(0,w_1),(0,w_2)\rangle_{V\oplus W}=\langle w_1,w_2\rangle_W$

So the only remaining question is what do we do with one vector from each space? Now we use a projection from the biproduct, which must again preserve the inner product. It lets us calculate

$\langle(v,0),(0,w)\rangle_{V\oplus W}=\langle\pi_V(v,0),\pi_V(0,w)\rangle_V=\langle v,0\rangle_V=0$
Thus the inner product between vectors from different subspaces must be zero. That is, distinct subspaces in a direct sum must be orthogonal. Incidentally, this shows that the direct sum between a subspace and its orthogonal complement is also a direct sum of inner product spaces.
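Here’s a toy computation (Python, with the real spaces $V=\mathbb{R}^2$ and $W=\mathbb{R}^3$ chosen purely for illustration) showing the forced componentwise inner product on the direct sum and the orthogonality of the two summands:

```python
import numpy as np

# The forced inner product on V (+) W is componentwise: the V-parts pair
# with each other, the W-parts pair with each other, and cross terms die.
# (Real spaces here; over C the first slots would carry conjugates.)
def ip_direct_sum(x, y):
    v1, w1 = x
    v2, w2 = y
    return v1 @ v2 + w1 @ w2

v = np.array([1.0, 2.0])            # a vector in V = R^2
w = np.array([3.0, -1.0, 0.5])      # a vector in W = R^3

# Images under the inclusions of V and W into the direct sum:
inc_v = (v, np.zeros(3))
inc_w = (np.zeros(2), w)

assert ip_direct_sum(inc_v, inc_v) == v @ v   # inclusion of V preserves the product
assert ip_direct_sum(inc_w, inc_w) == w @ w   # inclusion of W preserves the product
assert ip_direct_sum(inc_v, inc_w) == 0.0     # the two summands are orthogonal
```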
An important fact about the category of vector spaces is that all exact sequences split. That is, if we have a short exact sequence

$\mathbf{0}\rightarrow U\rightarrow V\rightarrow W\rightarrow\mathbf{0}$

we can find a linear map from $W$ to $V$ which lets us view it as a subspace of $V$, and we can write $V\cong U\oplus W$. When we have an inner product around and $V$ is finite-dimensional, we can do this canonically.
What we’ll do is define the orthogonal complement of $U\subseteq V$ to be the vector space

$U^\perp=\left\{v\in V\mid\langle u,v\rangle=0\text{ for all }u\in U\right\}$

That is, $U^\perp$ consists of all vectors in $V$ perpendicular to every vector in $U$.
First, we should check that this is indeed a subspace. If we have vectors $v_1,\dots,v_m\in U^\perp$, scalars $c_1,\dots,c_m$, and a vector $u\in U$, then we can check

$\langle u,c_1v_1+\cdots+c_mv_m\rangle=c_1\langle u,v_1\rangle+\cdots+c_m\langle u,v_m\rangle=0$

and thus the linear combination $c_1v_1+\cdots+c_mv_m$ is also in $U^\perp$.
Now to see that $V\cong U\oplus U^\perp$, take an orthonormal basis $\{e_1,\dots,e_k\}$ for $U$. Then we can expand it to an orthonormal basis $\{e_1,\dots,e_n\}$ of $V$. But now I say that $\{e_{k+1},\dots,e_n\}$ is a basis for $U^\perp$. Clearly they’re linearly independent, so we just have to verify that their span is exactly $U^\perp$.
First, we can check that $e_i\in U^\perp$ for each $i$ between $k+1$ and $n$, and so their span is contained in $U^\perp$. Indeed, if $u=c_1e_1+\cdots+c_ke_k$ is a vector in $U$, then we can calculate the inner product

$\langle u,e_i\rangle=\overline{c_1}\langle e_1,e_i\rangle+\cdots+\overline{c_k}\langle e_k,e_i\rangle=0$

since $\langle e_j,e_i\rangle=0$ whenever $j\leq k<i$, by orthonormality. Of course, we omit the conjugation when working over $\mathbb{R}$.
Now, let’s say we have a vector $v\in U^\perp$. We can write it in terms of the full basis as $v=c_1e_1+\cdots+c_ne_n$. Then we can calculate its inner product with each of the basis vectors $e_i$ of $U$ as

$\langle e_i,v\rangle=c_1\langle e_i,e_1\rangle+\cdots+c_n\langle e_i,e_n\rangle=c_i$

Since this must be zero, we find that the coefficient $c_i$ must be zero for all $i$ between $1$ and $k$. That is, $v$ is contained within the span of $\{e_{k+1},\dots,e_n\}$.
So between a basis for $U$ and a basis for $U^\perp$ we have a basis for $V$ with no overlap. We can write any vector uniquely as the sum of one vector from $U$ and one from $U^\perp$, and so we have a direct sum decomposition $V\cong U\oplus U^\perp$, as desired.
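The decomposition is easy to carry out numerically. This sketch (NumPy; the names are mine) builds an orthonormal basis adapted to $U$, splits an arbitrary vector into its $U$ and $U^\perp$ parts, and checks the direct-sum properties:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2

# An orthonormal basis e_1, ..., e_n of R^4, obtained by orthonormalizing
# a generic basis; the first k columns span U, the rest span U-perp.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
E_U, E_perp = Q[:, :k], Q[:, k:]

v = rng.standard_normal(n)

# Expanding v in the full basis, the first k coefficients give the U-part
# and the remaining ones give the U-perp part.
u_part = E_U @ (E_U.T @ v)
perp_part = E_perp @ (E_perp.T @ v)

assert np.allclose(u_part + perp_part, v)   # v is the sum of the two parts
assert np.allclose(E_perp.T @ u_part, 0)    # the first part lies in U
assert np.allclose(E_U.T @ perp_part, 0)    # the second part lies in U-perp
```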