The Unapologetic Mathematician

Mathematics for the interested outsider

Galois Connections

I want to mention a topic I thought I’d hit back when we talked about adjoint functors. We know that every poset is a category, with the elements as objects and a single arrow from a to b if a\leq b. Functors between such categories are monotone functions, preserving the order. Contravariant functors are so-called “antitone” functions, which reverse the order, but the same abstract nonsense as usual tells us this is just a monotone function to the “opposite” poset with the order reversed.

So let’s consider an adjoint pair F\dashv G of such functors. This means there is a natural isomorphism between \hom(F(a),b) and \hom(a,G(b)). But each of these hom-sets is either empty (if the source is not below the target) or a singleton (if it is). So the adjunction between F and G means that F(a)\leq b if and only if a\leq G(b). The analogous condition for an antitone adjoint pair is that b\leq F(a) if and only if a\leq G(b). Either way, such an adjoint pair of functions is called a “Galois connection”.
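
As a quick aside (my own illustration, not part of the original post), here’s a small check of the monotone condition in Python, using the connection between multiplication by 3 and integer division by 3; the names F and G just mirror the notation above.

```python
# A minimal sanity check, assuming nothing beyond the definition above:
# F(a) = 3a and G(b) = floor(b/3) form a monotone Galois connection on
# the integers, since 3a <= b exactly when a <= floor(b/3).

def F(a):
    return 3 * a

def G(b):
    return b // 3   # Python's floor division

for a in range(-10, 11):
    for b in range(-10, 11):
        assert (F(a) <= b) == (a <= G(b))

print("F(a) <= b iff a <= G(b) on the whole sample range")
```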

There are some immediate consequences to having a Galois connection, which are connected to properties of adjoints. First off, we know that a\leq G(F(a)) and F(G(b))\leq b. This essentially expresses the unit and counit of the adjunction. For the antitone version, let’s show the analogous statement more directly: we know that F(a)\leq F(a), so the adjoint condition says that a\leq G(F(a)). Similarly, b\leq F(G(b)). This second condition is backwards because we’re reversing the order on one of the posets.

Using the unit and the counit of an adjunction, we found a certain quasi-inverse relation between some natural transformations on functors. For our purposes, we observe that since a\leq G(F(a)) we have the special case G(b)\leq G(F(G(b))). But F(G(b))\leq b, and G preserves the order. Thus G(F(G(b)))\leq G(b). So G(b)=G(F(G(b))). Similarly, we find that F(G(F(a)))=F(a), which holds for both monotone and antitone Galois connections.

Chasing special cases further, we find that G(F(G(F(a))))=G(F(a)), and that F(G(F(G(b))))=F(G(b)) for either kind of Galois connection. That is, F\circ G and G\circ F are idempotent functions. In general categories, the composition of two adjoint functors gives a monad, and this idempotence is just the analogue in our particular categories. In particular, these functions behave like closure operators, but for the fact that general posets don’t have joins or bottom elements to preserve in the third and fourth Kuratowski axioms.
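
To make the idempotence concrete, here’s a hedged little example (again mine, not from the post) of an antitone Galois connection: the “polarity” between sets of numbers and sets of potential divisors, where F sends a set of numbers to their common divisors and G sends a set of divisors to their common multiples within a finite range. Composing the two gives exactly the sort of closure operator described above.

```python
# A sketch of an antitone Galois connection and its closure operator.
# Both maps reverse inclusion, and T <= F(S) iff S <= G(T) (where <= means
# "is a subset of"), so applying G after F gives an idempotent closure.

numbers  = set(range(1, 31))
divisors = set(range(1, 31))

def F(S):
    # common divisors of everything in S
    return {d for d in divisors if all(n % d == 0 for n in S)}

def G(T):
    # numbers divisible by everything in T
    return {n for n in numbers if all(n % d == 0 for d in T)}

S = {6, 12}
print(sorted(F(S)))                # [1, 2, 3, 6]
print(sorted(G(F(S))))             # [6, 12, 18, 24, 30] -- the "closure" of S
print(G(F(G(F(S)))) == G(F(S)))    # True: applying G(F(-)) twice changes nothing
```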

And so elements left fixed by G\circ F (or F\circ G) are called “closed” elements of the poset. The images of F and G consist of exactly such closed elements.

May 18, 2009 Posted by | Algebra, Category theory, Lattices | 6 Comments

DeMorgan’s Laws

And here’s the post I wrote today:

Today, I want to prove two equations that hold in any orthocomplemented lattice. They are the famous DeMorgan’s laws:

\displaystyle\neg(x\vee y)=\neg x\wedge\neg y
\displaystyle\neg(x\wedge y)=\neg x\vee\neg y

First, we note that x\leq x\vee y by definition. Since our complementation reverses order, we find \neg(x\vee y)\leq\neg x. Similarly, \neg(x\vee y)\leq\neg y. And thus we conclude that \neg(x\vee y)\leq\neg x\wedge\neg y.

On the other hand, \neg x\wedge\neg y\leq\neg x by definition. Then we find x=\neg\neg x\leq\neg(\neg x\wedge\neg y) by invoking the involutive property of our complement. Similarly, y\leq\neg(\neg x\wedge\neg y), and so x\vee y\leq\neg(\neg x\wedge\neg y). And thus we conclude \neg x\wedge\neg y\leq\neg(x\vee y). Putting this together with the other inequality, we get the first of DeMorgan’s laws.

To get the other, just invoke the first law on the objects \neg x and \neg y. We find

\displaystyle\begin{aligned}\neg x\vee\neg y&=\neg\neg(\neg x\vee\neg y)\\&=\neg(\neg\neg x\wedge\neg\neg y)\\&=\neg(x\wedge y)\end{aligned}

Similarly, the first of DeMorgan’s laws follows from the second.

Interestingly, DeMorgan’s laws aren’t just a consequence of order-reversal. It turns out that they’re equivalent to order-reversal. Indeed, suppose the second law holds and x\leq y. Then x=x\wedge y, so \neg x=\neg(x\wedge y)=\neg x\vee\neg y, and thus \neg y\leq\neg x.
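
Since the power set of a finite set is an orthocomplemented lattice (with union, intersection, and set complement), we can brute-force both laws and the order-reversal there. This little check is mine, not part of the original post.

```python
# Verify both DeMorgan laws, and order-reversal, for subsets of {1,2,3,4}:
# meet is intersection, join is union, and the complement is the set complement.

from itertools import combinations

universe = frozenset({1, 2, 3, 4})

def neg(x):
    return universe - x

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for x in subsets(universe):
    for y in subsets(universe):
        assert neg(x | y) == neg(x) & neg(y)
        assert neg(x & y) == neg(x) | neg(y)
        if x <= y:                 # x <= y means x is a subset of y
            assert neg(y) <= neg(x)

print("DeMorgan's laws and order-reversal check out on the power set of {1,2,3,4}")
```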

May 8, 2009 Posted by | Algebra, Lattices | 8 Comments

Upper-Triangular Matrices and Orthonormal Bases

I just noticed in my drafts that this post, which I’d written last Friday, never went up.

Let’s say we have a real or complex vector space V of finite dimension d with an inner product, and let T:V\rightarrow V be a linear map from V to itself. Further, let \left\{v_i\right\}_{i=1}^d be a basis with respect to which the matrix of T is upper-triangular. It turns out that we can also find an orthonormal basis which also gives us an upper-triangular matrix. And of course, we’ll use Gram-Schmidt to do it.

What it rests on is that an upper-triangular matrix means we have a nested sequence of invariant subspaces. If we define U_k to be the span of \left\{v_i\right\}_{i=1}^k then clearly we have a chain

\displaystyle U_1\subseteq\dots\subseteq U_{d-1}\subseteq U_d=V

Further, the fact that the matrix of T is upper-triangular means that T(v_i)\in U_i. And so the whole subspace is invariant: T(U_i)\subseteq U_i.

Now let’s apply Gram-Schmidt to the basis \left\{v_i\right\}_{i=1}^d and get an orthonormal basis \left\{e_i\right\}_{i=1}^d. As a bonus, the span of \left\{e_i\right\}_{i=1}^k is the same as the span of \left\{v_i\right\}_{i=1}^k, which is U_k. So we have exactly the same chain of invariant subspaces, and the matrix of T with respect to the new orthonormal basis is still upper-triangular.
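
As a hedged numerical sketch (using numpy, which of course isn’t part of the post), we can see this with a QR decomposition, which is just Gram-Schmidt applied to the columns of a matrix: start from a map that’s upper-triangular in some basis, orthonormalize that basis, and check that the new matrix is still upper-triangular.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = np.triu(rng.standard_normal((d, d)))   # matrix of T in the basis {v_i}
V = rng.standard_normal((d, d))            # columns are the basis vectors v_i
T = V @ A @ np.linalg.inv(V)               # the map itself, in standard coordinates

Q, _ = np.linalg.qr(V)                     # columns of Q: the orthonormal basis e_i
B = Q.T @ T @ Q                            # matrix of T in the basis {e_i}

print(np.allclose(B, np.triu(B)))          # True (up to round-off): still upper-triangular
```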

In particular, since every complex linear transformation has an upper-triangular matrix with respect to some basis, there must exist an orthonormal basis giving an upper-triangular matrix. For real transformations, of course, it’s possible that there isn’t any upper-triangular matrix at all. It’s also worth pointing out here that there’s no guarantee that we can push forward and get an orthonormal Jordan basis.
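
In practice this is the Schur decomposition; as an illustration (again assuming numpy/scipy, which the post doesn’t mention), scipy will hand us exactly such an orthonormal triangularizing basis for any complex matrix.

```python
import numpy as np
from scipy.linalg import schur

M = np.random.default_rng(1).standard_normal((4, 4))
Tri, Z = schur(M, output='complex')   # M = Z Tri Z*, with Z unitary and Tri upper-triangular

print(np.allclose(Tri, np.triu(Tri)), np.allclose(Z @ Tri @ Z.conj().T, M))
```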

May 8, 2009 Posted by | Algebra, Linear Algebra | 1 Comment

Orthogonal Complements and the Lattice of Subspaces

We know that the poset of subspaces of a vector space V is a lattice. Now we can define complementary subspaces in a way that doesn’t depend on any choice of basis at all. So what does this look like in terms of the lattice?

First off, remember that the “meet” of two subspaces is their intersection, which is again a subspace. On the other hand their “join” is their sum as subspaces. But now we have a new operation called the “complement”. In general lattice-theory terms, a complement of an element x in a bounded lattice L (one that has a top element {1} and a bottom element {0}) is an element \neg x\in L so that x\vee\neg x=1 and x\wedge\neg x=0.

In particular, since the top subspace is V itself, and the bottom subspace is \mathbf{0}, we can see that the orthogonal complement U^\perp satisfies these properties. The intersection U\cap U^\perp is trivial, since any vector u in both satisfies \langle u,u\rangle=0 and the inner product is positive-definite. And the sum U+U^\perp is all of V, as we’ve seen.

Even more is true. The orthogonal complement is involutive (when V is finite-dimensional), and order-reversing, which makes it an “orthocomplement”. In lattice-theory terms, this means that \neg\neg x=x, and that if x\leq y then \neg y\leq\neg x.

First, let’s say we’ve got two subspaces U\subseteq W of V. I say that W^\perp\subseteq U^\perp. Indeed, if p is a vector in W^\perp then \langle w,p\rangle=0 for all w\in W. But since any u\in U is also a vector in W, we can see that \langle u,p\rangle=0, and so p\in U^\perp as well. Thus orthogonal complementation is order-reversing.

Now let’s take a single subspace U of V, and let u be a vector in U. If v is any vector in U^\perp, then \langle v,u\rangle=\overline{\langle u,v\rangle}=0 by the (conjugate) symmetry of the inner product and the definition of U^\perp. Thus u is a vector in \left(U^\perp\right)^\perp, and so U\subseteq U^{\perp\perp}. Note that this much holds whether V is finite-dimensional or not.

On the other hand, if V is finite-dimensional we can take an orthonormal basis \left\{e_i\right\}_{i=1}^n of U and expand it into an orthonormal basis \left\{e_i\right\}_{i=1}^d of all of V. Then the new vectors \left\{e_i\right\}_{i=n+1}^d form a basis of U^\perp, so that V=U\oplus U^\perp. A vector in V is orthogonal to every vector in U^\perp exactly when it can be written using only the first n basis vectors, and thus lies in U. That is, U^{\perp\perp}=U when V is finite-dimensional.
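
Here’s a small numerical check of these two facts (a sketch with numpy/scipy, not from the original post): compute orthogonal complements as null spaces and compare subspaces through their orthogonal projectors.

```python
import numpy as np
from scipy.linalg import null_space

def projector(B):
    # Orthogonal projector onto the column space of B
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

def perp(B):
    # Orthonormal basis (as columns) for the orthogonal complement of col(B)
    return null_space(B.T)

rng = np.random.default_rng(2)
d = 6
U = rng.standard_normal((d, 2))                  # a 2-dimensional subspace of R^6
W = np.hstack([U, rng.standard_normal((d, 1))])  # a 3-dimensional subspace containing U

# Involution: (U^perp)^perp = U
print(np.allclose(projector(perp(perp(U))), projector(U)))

# Order-reversal: U inside W forces W^perp inside U^perp, so projecting
# W^perp onto U^perp changes nothing
print(np.allclose(projector(perp(U)) @ perp(W), perp(W)))
```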

May 7, 2009 Posted by | Algebra, Lattices, Linear Algebra | 7 Comments

The Category of Inner Product Spaces

So far we’ve been considering the category \mathbf{Vect} of vector spaces (over either \mathbb{R} or \mathbb{C}) and adding the structure of an inner product to some selected spaces. But of course there should be a category \mathbf{Inn} of inner product spaces.

Clearly the objects should be inner product spaces, and the morphisms should be linear maps, but what sorts of linear maps? Let’s just follow our noses and say “those that preserve the inner product”. That is, a linear map T:V\rightarrow W is a morphism of inner product spaces if and only if for any two vectors v_1,v_2\in V we have

\displaystyle\langle T(v_1),T(v_2)\rangle_W=\langle v_1,v_2\rangle_V

where the subscripts denote which inner product we’re using at each point.

Of course, given any inner product space we can “forget” the inner product and get the underlying vector space. This is a forgetful functor, and the usual abstract nonsense can be used to show that it creates limits. And from there it’s straightforward to check that the category of inner product spaces inherits some nice properties from the category of vector spaces.

Most of the structures we get this way are pretty straightforward — just do the same constructions on the underlying vector spaces. But one in particular that we should take a close look at is the biproduct. What is the direct sum V\oplus W of two inner product spaces? The underlying vector space will be the direct sum of the underlying vector spaces of V and W, but what inner product should we use?

Well, if v_1 and v_2 are vectors in V, then they get included into V\oplus W. But the inclusions have to preserve the inner product between these two vectors, and so we must have

\displaystyle\langle\iota_V(v_1),\iota_V(v_2)\rangle_{V\oplus W}=\langle v_1,v_2\rangle_V

and similarly for any two vectors w_1 and w_2 in W we must have

\displaystyle\langle\iota_W(w_1),\iota_W(w_2)\rangle_{V\oplus W}=\langle w_1,w_2\rangle_W

So the only remaining question is what do we do with one vector from each space? Now we use a projection from the biproduct, which must again preserve the inner product. It lets us calculate

\displaystyle\langle\iota_V(v),\iota_W(w)\rangle_{V\oplus W}=\langle\pi_V(\iota_V(v)),\pi_V(\iota_W(w))\rangle_V=\langle v,0\rangle_V=0

Thus the inner product between vectors from different subspaces must be zero. That is, distinct subspaces in a direct sum must be orthogonal. Incidentally, this shows that the direct sum between a subspace U\subseteq V and its orthogonal complement U^\perp is also a direct sum of inner product spaces.
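
To spell out what this forces, here’s a minimal sketch (numpy, with names of my own choosing) of the inner product on a direct sum: pair the V-parts and the W-parts separately and add, so that the inclusions are isometries and the two summands are orthogonal.

```python
import numpy as np

dimV, dimW = 2, 3

def inner_sum(x, y):
    # Inner product on V (+) W: pair up the V-parts and the W-parts and add
    return np.dot(x[:dimV], y[:dimV]) + np.dot(x[dimV:], y[dimV:])

def iota_V(v):
    # inclusion of V into V (+) W
    return np.concatenate([v, np.zeros(dimW)])

def iota_W(w):
    # inclusion of W into V (+) W
    return np.concatenate([np.zeros(dimV), w])

v1, v2 = np.array([1.0, 2.0]), np.array([0.5, -3.0])
w = np.array([3.0, -1.0, 4.0])

print(inner_sum(iota_V(v1), iota_V(v2)) == np.dot(v1, v2))   # inclusions preserve the inner product
print(inner_sum(iota_V(v1), iota_W(w)) == 0.0)               # vectors from the two summands are orthogonal
```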

May 6, 2009 Posted by | Algebra, Linear Algebra | 6 Comments

Orthogonal Complements

An important fact about the category of vector spaces is that all exact sequences split. That is, if we have a short exact sequence

\displaystyle\mathbf{0}\rightarrow U\rightarrow V\rightarrow W\rightarrow\mathbf{0}

we can find a linear map from W to V which lets us view it as a subspace of V, and we can write V\cong U\oplus W. When we have an inner product around and V is finite-dimensional, we can do this canonically.

What we’ll do is define the orthogonal complement of U\subseteq V to be the vector space

\displaystyle U^\perp=\left\{v\in V\vert\forall u\in U,\langle u,v\rangle=0\right\}

That is, U^\perp consists of all vectors in V perpendicular to every vector in U.

First, we should check that this is indeed a subspace. If we have vectors v,w\in U^\perp, scalars a,b, and a vector u\in U, then we can check

\displaystyle\langle u,av+bw\rangle=a\langle u,v\rangle+b\langle u,w\rangle=0

and thus the linear combination av+bw is also in U^\perp.

Now to see that U\oplus U^\perp\cong V, take an orthonormal basis \left\{e_i\right\}_{i=1}^n for U\subseteq V. Then we can expand it to an orthonormal basis \left\{e_i\right\}_{i=1}^d of V. But now I say that \left\{e_i\right\}_{i=n+1}^d is a basis for U^\perp. Clearly they’re linearly independent, so we just have to verify that their span is exactly U^\perp.

First, we can check that e_k\in U^\perp for any k between n+1 and d, and so their span is contained in U^\perp. Indeed, if u=u^ie_i is a vector in U, then we can calculate the inner product

\displaystyle\langle u^ie_i,e_k\rangle=\bar{u^i}\langle e_i,e_k\rangle=\bar{u^i}\delta_{ik}=0

since i\leq n and k\geq n+1. Of course, we omit the conjugation when working over \mathbb{R}.

Now, let’s say we have a vector v\in U^\perp\subseteq V. We can write it in terms of the full basis \left\{e_k\right\}_{k=1}^d as v^ke_k. Then we can calculate its inner product with each of the basis vectors of U as

\displaystyle\langle e_i,v^ke_k\rangle=v^k\langle e_i,e_k\rangle=v^k\delta_{ik}=v^i

Since this must be zero, we find that the coefficient v^i of e_i must be zero for all i between {1} and n. That is, U^\perp is contained within the span of \left\{e_i\right\}_{i=n+1}^d.

So between a basis for U and a basis for U^\perp we have a basis for V with no overlap. Thus we can write any vector v\in V uniquely as the sum of one vector from U and one from U^\perp, and so we have a direct sum decomposition as desired.
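
Here’s the same construction done numerically (a numpy sketch, assuming the random extending vectors are in general position, which they are with probability one): orthonormalize a basis of U, extend it to an orthonormal basis of V, and split a vector into its U and U^\perp pieces.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 5, 2
U_basis = rng.standard_normal((d, n))   # a basis of U, as columns

# Extend by generic vectors and orthonormalize; the first n columns of Q
# span U, and the remaining d - n columns span U^perp.
extended = np.hstack([U_basis, rng.standard_normal((d, d - n))])
Q, _ = np.linalg.qr(extended)
E_U, E_perp = Q[:, :n], Q[:, n:]

v = rng.standard_normal(d)
v_U = E_U @ (E_U.T @ v)              # component of v in U
v_perp = E_perp @ (E_perp.T @ v)     # component of v in U^perp

print(np.allclose(v, v_U + v_perp))            # the decomposition V = U (+) U^perp
print(np.allclose(U_basis.T @ v_perp, 0.0))    # v_perp really is orthogonal to U
```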

May 4, 2009 Posted by | Algebra, Linear Algebra | 12 Comments
