I want to continue yesterday’s post with some more explicit calculations to hopefully give a bit more of the feel.
First up, let’s consider wedges of degree $k$. That is, we pick $k$ vectors $v_1,\dots,v_k$ and wedge them all together (in order) to get $v_1\wedge\dots\wedge v_k$. What is its inner product with another wedge of the same form, say $w_1\wedge\dots\wedge w_k$? We calculate

$$\begin{aligned}\langle v_1\wedge\dots\wedge v_k, w_1\wedge\dots\wedge w_k\rangle &= \frac{1}{(k!)^2}\sum_{\sigma,\tau\in S_k}\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\,\langle v_{\sigma(1)}\otimes\dots\otimes v_{\sigma(k)}, w_{\tau(1)}\otimes\dots\otimes w_{\tau(k)}\rangle\\&= \frac{1}{(k!)^2}\sum_{\sigma,\tau\in S_k}\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau)\prod_{i=1}^k\langle v_{\sigma(i)}, w_{\tau(i)}\rangle\\&= \frac{1}{(k!)^2}\sum_{\sigma,\tau\in S_k}\operatorname{sgn}(\tau\sigma^{-1})\prod_{i=1}^k\langle v_i, w_{\tau\sigma^{-1}(i)}\rangle\\&= \frac{1}{k!}\sum_{\pi\in S_k}\operatorname{sgn}(\pi)\prod_{i=1}^k\langle v_i, w_{\pi(i)}\rangle\end{aligned}$$

where in the third line we’ve rearranged the factors at the right and used the fact that $\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau) = \operatorname{sgn}(\tau\sigma^{-1})$, and in the fourth line we’ve relabelled $\pi = \tau\sigma^{-1}$ (each $\pi$ arises from exactly $k!$ pairs $(\sigma,\tau)$). This looks a lot like the calculation of a determinant. In fact, it is $\frac{1}{k!}$ times the determinant of the matrix with entries $\langle v_i, w_j\rangle$.

If we use the “renormalized” inner product on $\Lambda(V)$ from the end of yesterday’s post, then we get an extra factor of $k!$, which cancels off the $\frac{1}{k!}$ and gives us exactly the determinant.
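Here’s a quick numerical check of this formula — my own sketch, not from the post; the helper names `alt` and `perm_sign` are mine. The antisymmetrizer includes the $\frac{1}{k!}$ normalization used above, so the tensor-level inner product of two wedges should come out to $\frac{1}{k!}$ times the determinant of the matrix of pairwise inner products.

```python
import itertools
import math

import numpy as np

def perm_sign(perm):
    """Sign of a permutation given as a tuple of images of 0..k-1."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def alt(vectors):
    """Antisymmetrize the tensor product of a list of vectors,
    including the 1/k! normalization used in the post."""
    k = len(vectors)
    total = np.zeros((len(vectors[0]),) * k)
    for perm in itertools.permutations(range(k)):
        term = np.array(1.0)
        for i in perm:
            term = np.tensordot(term, vectors[i], axes=0)  # build the outer product
        total += perm_sign(perm) * term
    return total / math.factorial(k)

rng = np.random.default_rng(0)
k, d = 3, 4
vs = [rng.standard_normal(d) for _ in range(k)]
ws = [rng.standard_normal(d) for _ in range(k)]

# Inner product of the two wedges, computed as antisymmetric tensors.
wedge_inner = float(np.tensordot(alt(vs), alt(ws), axes=k))

# Determinant of the matrix with entries <v_i, w_j>.
det = float(np.linalg.det(np.array([[v @ w for w in ws] for v in vs])))
```

Multiplying the tensor-level inner product by $k!$ — the renormalization — recovers the determinant on the nose.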
We can use the inner product to read off components of exterior algebra elements. If $\omega$ is an element of degree $k$ we write

$$\omega = \sum_{i_1<\dots<i_k}\omega^{i_1\dots i_k}\,e_{i_1}\wedge\dots\wedge e_{i_k}$$

As an explicit example, we may take $V$ to have dimension $3$ and consider an element of degree $2$ in $\Lambda(V)$:

$$\omega = \omega^{12}\,e_1\wedge e_2 + \omega^{13}\,e_1\wedge e_3 + \omega^{23}\,e_2\wedge e_3$$

What we’re writing in the superscript is called a “multi-index”, and sometimes we just write it as a single letter $I$, which in the summation convention runs over all increasing collections of indices. Correspondingly, we can just write $e_I$ for the basic wedge corresponding to the multi-index $I$, so that $\omega = \omega^I e_I$.
Alternatively, we could expand the wedges out in terms of tensors:

$$\omega = \sum_{i<j}\omega^{ij}\,\frac{e_i\otimes e_j - e_j\otimes e_i}{2} = \frac{1}{2}\,\omega^{ij}\,e_i\otimes e_j$$

where we just think of the superscript as a collection of separate indices, all of which run from $1$ to the dimension of $V$, with the understanding that $\omega^{ji} = -\omega^{ij}$, and similarly for higher degrees; swapping two indices switches the sign of the component. All this index juggling gets distracting and confusing, but it’s sometimes necessary for explicit computations, and the physicists love it.
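To see the index conventions in action, here’s a small sketch (mine, not the post’s) computing the components of the wedge of two vectors as an antisymmetric tensor, with the $\frac{1}{2!}$ normalization above:

```python
import numpy as np

d = 3
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# u∧v as an antisymmetric tensor (with the 1/2! normalization):
omega = (np.outer(u, v) - np.outer(v, u)) / 2

# The independent components are indexed by increasing multi-indices (i, j), i < j.
components = {(i, j): omega[i, j] for i in range(d) for j in range(i + 1, d)}
```

Swapping the two indices flips the sign of each component, so only the increasing multi-indices carry independent information.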
Anyway, we can use this to get back to our original definition of the determinant of a linear transformation $T: V\to V$. Pick an orthonormal basis $e_1,\dots,e_d$ for $V$ and wedge them all together to get an element $e_1\wedge\dots\wedge e_d$ of top degree in $\Lambda(V)$. Since the space of top degree is one-dimensional, any linear transformation on it just consists of multiplying by a scalar. So we can let $\Lambda(T)$ act on this one element we’ve cooked up, and then read off the coefficient using the inner product.

The linear transformation $T$ sends $e_i$ to the vector $T(e_i)$. By functoriality, it sends $e_1\wedge\dots\wedge e_d$ to $T(e_1)\wedge\dots\wedge T(e_d)$. And now we want to calculate the coefficient:

$$d!\,\langle e_1\wedge\dots\wedge e_d, T(e_1)\wedge\dots\wedge T(e_d)\rangle = \det\bigl(\langle e_i, T(e_j)\rangle\bigr) = \det(T)$$

The determinant of $T$ is exactly the factor by which $\Lambda(T)$, acting on the top degree subspace of $\Lambda(V)$, expands any given element.
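Reading off the coefficient of $e_1\wedge\dots\wedge e_d$ term by term gives exactly the Leibniz permutation sum for the determinant. A quick check of this (my own code; `perm_sign` is my helper):

```python
import itertools

import numpy as np

def perm_sign(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))

# T(e_1)∧...∧T(e_n) is a multiple of e_1∧...∧e_n; reading off the
# coefficient gives the signed sum over permutations of matrix entries.
coeff = sum(
    perm_sign(p) * np.prod([T[p[i], i] for i in range(n)])
    for p in itertools.permutations(range(n))
)
```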
Let’s focus back in on a real, finite-dimensional vector space $V$ and give it an inner product. As a symmetric bilinear form, the inner product provides us with an isomorphism $V\cong V^*$. Now we can use functoriality to see what this does for our tensor algebras. Again, I’ll be mostly interested in the exterior algebra $\Lambda(V)$, so I’ll stick to talking about that one.

The isomorphism sends a vector $v$ to the linear functional $\langle v,\underline{\hphantom{X}}\rangle$. Functoriality then defines an isomorphism $\Lambda(V)\cong\Lambda(V^*)$ that sends the wedge $v_1\wedge\dots\wedge v_k$ of degree $k$ to the wedge $\langle v_1,\underline{\hphantom{X}}\rangle\wedge\dots\wedge\langle v_k,\underline{\hphantom{X}}\rangle$, also of degree $k$. This is the antisymmetrization of the tensor product of all these linear functionals. We’ve seen that we can consider this as a linear functional on the space of degree-$k$ tensors by applying the functionals to the tensorands in order and then multiplying together all the results. This defines an isomorphism $\Lambda^k(V)\cong\Lambda^k(V)^*$, and extending by linearity we find an isomorphism $\Lambda(V)\cong\Lambda(V)^*$.

Let’s get a little more explicit about how this works by picking an orthonormal basis $\{e_i\}$ for $V$, and the corresponding dual basis $\{\epsilon^i\}$ for $V^*$. That is, we have $\epsilon^i = \langle e_i,\underline{\hphantom{X}}\rangle$ — which defines the isomorphism from $V$ to $V^*$ explicitly in terms of bases — and $\epsilon^i(e_j) = \delta^i_j$.
Now we can use the $e_i$ to write down an explicit basis of $\Lambda(V)$. An element of degree $k$ is the sum of wedges of $k$ vectors $v_1\wedge\dots\wedge v_k$. We can write each of these vectors out in terms of components $v_j = v_j^{i_j}e_{i_j}$, getting the wedge (really a sum of wedges)

$$\left(v_1^{i_1}e_{i_1}\right)\wedge\dots\wedge\left(v_k^{i_k}e_{i_k}\right)$$

We factor out all the scalar components to get

$$v_1^{i_1}\dots v_k^{i_k}\,e_{i_1}\wedge\dots\wedge e_{i_k}$$

If in a given term we ever have two of the indices equal to each other, then the whole wedge will be zero by antisymmetry. On the other hand, if none of them are equal we can sort them into increasing order (at the possible cost of multiplying by the sign of the needed permutation). In the end, we can write down any wedge of degree $k$ uniquely as a sum of constants times the basic wedges $e_{i_1}\wedge\dots\wedge e_{i_k}$, where $i_1 < i_2 < \dots < i_k$. For example, if $V$ has basis $\{e_1, e_2, e_3\}$, then $\Lambda(V)$ will have basis

$$1$$
$$e_1,\quad e_2,\quad e_3$$
$$e_1\wedge e_2,\quad e_1\wedge e_3,\quad e_2\wedge e_3$$
$$e_1\wedge e_2\wedge e_3$$

where the lines correspond to the different degrees.
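Listing the increasing multi-indices mechanically is a one-liner. This little sketch of mine counts the basic wedges in each degree: degree $k$ contributes $\binom{d}{k}$ of them, for $2^d$ in total.

```python
import itertools
import math

d = 3  # dimension of V

# One basic wedge e_{i1}∧...∧e_{ik} per increasing multi-index (i1 < ... < ik).
basis = {
    k: list(itertools.combinations(range(1, d + 1), k))
    for k in range(d + 1)
}
```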
Now it’s obvious how the isomorphism acts on this basis. It just turns a wedge of basis vectors like $e_1\wedge e_3$ into a wedge of basis linear functionals like $\epsilon^1\wedge\epsilon^3$. The action on the rest of $\Lambda(V)$ just extends by linearity. When we compose this with the isomorphism between $\Lambda(V^*)$ and $\Lambda(V)^*$, we get an isomorphism $\Lambda(V)\cong\Lambda(V)^*$. That is, we have an inner product on the algebra $\Lambda(V)$!
Let’s consider how this inner product behaves on our basis. Clearly, to line these up we need the degrees to be equal. We also find that we get zero unless the collections of indices are the same. For example, if we try to pair $e_1\wedge e_2$ with $e_1\wedge e_3$, we find

$$\langle e_1\wedge e_2, e_1\wedge e_3\rangle = \frac{1}{4}\bigl(\langle e_1,e_1\rangle\langle e_2,e_3\rangle - \langle e_1,e_3\rangle\langle e_2,e_1\rangle - \langle e_2,e_1\rangle\langle e_1,e_3\rangle + \langle e_2,e_3\rangle\langle e_1,e_1\rangle\bigr) = 0$$

In each arrangement, we’ll find two indices that don’t line up, and thus each term will be zero. On the other hand, if the collections of indices are the same, we find (for example)

$$\langle e_1\wedge e_2, e_1\wedge e_2\rangle = \frac{1}{4}\bigl(\langle e_1,e_1\rangle\langle e_2,e_2\rangle - \langle e_1,e_2\rangle\langle e_2,e_1\rangle - \langle e_2,e_1\rangle\langle e_1,e_2\rangle + \langle e_2,e_2\rangle\langle e_1,e_1\rangle\bigr) = \frac{1}{2}$$

When we consider a basic wedge of degree $k$ (here, $k=2$) and pair it with itself, we’ll have a sum of $(k!)^2$ terms corresponding to summing over permutations of both tensors. Of these, terms that pick different permutations will have at least one pair of basis vectors that don’t line up, and make the whole term zero. The remaining $k!$ terms that pick the same permutation twice will give the product of $k$ copies of $1$, and this will always occur with a positive sign. This will exactly cancel one of the two $\frac{1}{k!}$ normalizing factors from the antisymmetrizers, and thus the inner product of a basic wedge of degree $k$ with itself will always be $\frac{1}{k!}$. It’s not an orthonormal basis, but it’s close.
Notice, in particular, how in the second example we’ve avoided explicit use of the dual basis and just defined the inner product on tensors of rank $k$ as the $k$-fold product of inner products of vectors. We’ll stick to this notation in the future for tensors.

The factor of $\frac{1}{k!}$ isn’t really terrible, but it can get annoying. Often the inner product on $\Lambda(V)$ is modified to compensate for it. We consider the different degrees to be orthogonal, as before, and we define the inner product in degree $k$ to include an extra factor of $k!$. This has the effect of making the collection of wedges of basis vectors into an orthonormal basis for $\Lambda(V)$, but it means that the inner product on wedges cannot be calculated simply by considering them as antisymmetric tensors.

Now, I’ve never really looked closely at exactly what happens, so as an experiment I’m going to try not to use this extra factor of $k!$ and see what happens. I’ll refer, as I do, to the “renormalized” inner product on $\Lambda(V)$, where appropriate. And if the work starts becoming too complicated without this factor, I’ll give in and use it, explicitly saying when I’ve given up.
The three constructions we’ve just shown — the tensor, symmetric tensor, and exterior algebras — were all asserted to be the “free” constructions. This makes them functors from the category of vector spaces over $\mathbb{F}$ to appropriate categories of $\mathbb{F}$-algebras, and that means that they behave very nicely as we transform vector spaces, and we can even describe exactly how nicely with explicit algebra homomorphisms. I’ll work through this for the exterior algebra, since that’s the one I’m most interested in, but the others are very similar.
Okay, we want the exterior algebra $\Lambda(V)$ to be the “free” graded-commutative algebra on the vector space $V$. That’s a tip-off that we’re thinking $\Lambda$ should be the left adjoint of the “forgetful” functor $U$ which sends a graded-commutative algebra to its underlying vector space (Todd makes a correction to which forgetful functor we’re using below). We’ll define this adjunction by finding a collection of universal arrows, which (along with the forgetful functor $U$) is one of the many ways we listed to specify an adjunction.
So let’s run down the checklist. We’ve got the forgetful functor $U$, which we’re going to make the right adjoint. Now for each vector space $V$ we need a graded-commutative algebra — clearly the one we’ll pick is $\Lambda(V)$ — and a universal arrow $\eta_V: V\to U(\Lambda(V))$. The underlying vector space $U(\Lambda(V))$ of the exterior algebra is the direct sum of all the spaces of antisymmetric tensors on $V$.

Yesterday we wrote this without the $U$, since we often just omit forgetful functors, but today we want to remember that we’re using it. But we know that $\Lambda^1(V) = V$, so the obvious map $\eta_V$ to use is the one that sends a vector to itself, now considered as an antisymmetric tensor with a single tensorand.

But is this a universal arrow? That is, if $A$ is another graded-commutative algebra, and $f: V\to U(A)$ is another linear map, then is there a unique homomorphism $\bar{f}: \Lambda(V)\to A$ of graded-commutative algebras so that $U(\bar{f})\circ\eta_V = f$? Well, $f$ tells us where in $A$ we have to send any antisymmetric tensor with one tensorand. Any other element in $\Lambda(V)$ is the sum of a bunch of terms, each of which is the wedge of a bunch of elements of $V$. So in order for $\bar{f}$ to be a homomorphism of graded-commutative algebras, it has to act by simply changing each element of $V$ in our expression into the corresponding element of $A$, and then wedging and summing these together as before. Just write out the exterior algebra element all the way down in terms of vectors, and transform each vector in the expression. This gives us the only possible such homomorphism $\bar{f}$. And this establishes that $\Lambda$ is the object-function of a functor which is left adjoint to $U$.

So how does $\Lambda$ work on morphisms? It’s right in the proof above! If we have a linear map $f: V\to W$, we need to find some homomorphism $\Lambda(f): \Lambda(V)\to\Lambda(W)$. But we can compose $f$ with the universal arrow $\eta_W: W\to U(\Lambda(W))$, which gives us $\eta_W\circ f: V\to U(\Lambda(W))$. The universality property we just proved shows that we have a unique homomorphism $\Lambda(f): \Lambda(V)\to\Lambda(W)$. And, specifically, it is defined on an element of $\Lambda(V)$ by writing the element down in terms of vectors in $V$ and applying $f$ to each vector in the expression to get a sum of wedges of elements of $W$, which will be an element of the algebra $\Lambda(W)$.
Of course, as stated above, we get similar constructions for the symmetric algebra $S(V)$ and the tensor algebra $T(V)$.

Since, given a linear map $f: V\to W$, the induced homomorphisms $T(f)$, $S(f)$, and $\Lambda(f)$ preserve the respective gradings, they can be broken into one linear map for each degree. And if $f$ is invertible, so must be its image under each functor. These give exactly the tensor, symmetric, and antisymmetric representations of the group $GL(V)$, if we consider how these functors act on invertible morphisms $f: V\to V$. Functoriality is certainly a useful property.
Let’s continue yesterday’s discussion of algebras we can construct from a vector space $V$. Today, we consider the “exterior algebra” on $V$, which consists of the direct sum of all the spaces of antisymmetric tensors:

$$\Lambda(V) = \bigoplus_{n=0}^\infty\Lambda^n(V)$$

Yes, that’s a capital $\Lambda$, not an $A$. This is just standard notation, probably related to the symbol $\wedge$ for its multiplication we’ll soon come to.

Again, despite the fact that each $\Lambda^n(V)$ is a subspace of the tensor space $V^{\otimes n}$, this isn’t a subalgebra of $T(V)$, because the tensor product of two antisymmetric tensors may not be antisymmetric itself. Instead, we will take the tensor product of $A\in\Lambda^m(V)$ and $B\in\Lambda^n(V)$, and then antisymmetrize it, to give $A\wedge B$. This will be bilinear, but will it be associative?
Our proof parallels the one we ran through yesterday, writing the symmetric group $S_{m+n+p}$ as the disjoint union of cosets indexed by a set $C$ of representatives

$$S_{m+n+p} = \biguplus_{c\in C}cS_{m+n}$$

and rewriting the antisymmetrizer in just the right way. But now we’ve got the signs of our permutations to be careful with. Still, let’s dive in with the antisymmetrizers:

$$\left(\frac{1}{(m+n+p)!}\sum_{\sigma\in S_{m+n+p}}\operatorname{sgn}(\sigma)\sigma\right)\left(\frac{1}{(m+n)!}\sum_{\tau\in S_{m+n}}\operatorname{sgn}(\tau)\tau\right) = \frac{1}{(m+n+p)!}\frac{1}{(m+n)!}\sum_{\sigma,\tau}\operatorname{sgn}(\sigma\tau)\,\sigma\tau = \frac{1}{(m+n+p)!}\sum_{\pi\in S_{m+n+p}}\operatorname{sgn}(\pi)\pi$$

where throughout we’ve used the fact that $\operatorname{sgn}$ is a representation, and so the signum of the product of two group elements is the product of their signa. We also make the crucial combination of the double sum over $\sigma$ and $\tau$ into a single sum by noting that each group element $\pi = \sigma\tau$ shows up exactly $(m+n)!$ times, and each time it shows up with the exact same sign $\operatorname{sgn}(\pi)$, which lets us factor it out from the sum and cancel the $\frac{1}{(m+n)!}$ normalizing factor.
Now this multiplication is not commutative. Instead, it’s graded-commutative. If $A$ and $B$ are elements of the exterior algebra of degrees $p$ and $q$, respectively, then we find

$$A\wedge B = (-1)^{pq}\,B\wedge A$$

That is, elements of odd degree anticommute with each other, while elements of even degree commute with everything.

Indeed, given $A\in\Lambda^p(V)$ and $B\in\Lambda^q(V)$, we can let $\sigma$ be the permutation which moves the last $q$ slots of a term to the beginning and the first $p$ slots to the end. We can construct $\sigma$ by moving each of the last $q$ slots one-by-one past the first $p$, taking $p$ swaps for each one. That gives a total of $pq$ swaps, so $\operatorname{sgn}(\sigma) = (-1)^{pq}$. Then we write

$$A\wedge B = \operatorname{Alt}(A\otimes B) = \operatorname{sgn}(\sigma)\operatorname{Alt}\bigl(\sigma(A\otimes B)\bigr) = (-1)^{pq}\operatorname{Alt}(B\otimes A) = (-1)^{pq}\,B\wedge A$$
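We can sanity-check the sign of this block-swap permutation computationally. This is a sketch of my own; `block_swap` builds the permutation that moves the last $q$ slots of $p+q$ slots to the front, and its sign should always come out to $(-1)^{pq}$.

```python
def perm_sign(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1,
    computed by counting inversions."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def block_swap(p, q):
    """The permutation moving the last q of p+q slots to the front."""
    return tuple(range(p, p + q)) + tuple(range(p))

# Tabulate the sign of the block swap for small p and q.
signs = {(p, q): perm_sign(block_swap(p, q)) for p in range(5) for q in range(5)}
```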
The dual to the exterior algebra $\Lambda(V)$ is the algebra of all alternating multilinear functionals on $V$, providing a counterpart to the algebra of polynomial functions on $V$. But where the variables in polynomial functions commute with each other, the basic covectors — analogous to variables reading off components of a vector — anticommute with each other in this algebra.
There are a few graded algebras we can construct with our symmetric and antisymmetric tensors, and at least one of them will be useful. Remember that we also have symmetric and alternating multilinear functionals in play, so the same constructions will give rise to even more algebras.
First and easiest we have the tensor algebra on $V$. This just takes all the tensor powers of $V$ and direct sums them up:

$$T(V) = \bigoplus_{n=0}^\infty V^{\otimes n}$$

This gives us a big vector space — an infinite-dimensional one, in fact — but it’s not an algebra until we define a bilinear multiplication. For this one, we’ll just define the multiplication by the tensor product itself. That is, if $A$ and $B$ are two tensors, their product will be $A\otimes B$, which is by definition bilinear. This algebra has an obvious grading by the number of tensorands.

This is exactly the free algebra on a vector space, built just like we built the free ring on an abelian group. If we perform the construction on the dual space $V^*$ we get an algebra of functions. If $V$ has dimension $d$, then this is isomorphic to the algebra of noncommutative polynomials in $d$ variables.
Next we consider the symmetric algebra on $V$, which consists of the direct sum of all the spaces of symmetric tensors

$$S(V) = \bigoplus_{n=0}^\infty S^n(V)$$

with a grading again given by the number of tensorands.

Now, despite the fact that each $S^n(V)$ is a subspace of the tensor space $V^{\otimes n}$, this is not a subalgebra of $T(V)$. This is because the tensor product of two symmetric tensors may well not be symmetric itself. Instead, we will take the tensor product of $A\in S^m(V)$ and $B\in S^n(V)$, and then symmetrize it, to give $A\cdot B$. This will be bilinear, and it will work with our choice of grading, but will it be associative?
If we have three symmetric tensors $A$, $B$, and $C$, of ranks $m$, $n$, and $p$ respectively, then we could multiply them by $(A\cdot B)\cdot C$ or by $A\cdot(B\cdot C)$. To get the first of these, we tensor $A$ and $B$, symmetrize the result, then tensor with $C$ and symmetrize that. But since symmetrizing consists of adding up a number of shuffled versions of this tensor, we could tensor with $C$ first and then symmetrize only the first $m+n$ tensorands, before finally symmetrizing the entire thing. I assert that these two symmetrizations — the first one on only part of the whole term — are equivalent to simply symmetrizing the whole thing. Similarly, symmetrizing the last $n+p$ tensorands followed by symmetrizing the whole thing is equivalent to just symmetrizing the whole thing. And so both orders of multiplication are the same, and the operation indeed defines an associative multiplication.

To see this, remember that symmetrizing the whole term involves a sum over the symmetric group $S_{m+n+p}$, while symmetrizing over the beginning involves a sum over the subgroup $S_{m+n}\subseteq S_{m+n+p}$ consisting of those permutations acting on only the first $m+n$ places. This will be key to our proof. We consider the collection of left cosets of $S_{m+n}$ within $S_{m+n+p}$. For each one, we can pick a representative element (this is no trouble, since there are only a finite number of cosets with a finite number of elements each) and collect these representatives into a set $C$. Then the whole group is the disjoint union

$$S_{m+n+p} = \biguplus_{c\in C}cS_{m+n}$$

This will let us rewrite the symmetrizer in such a way as to make our point. So let’s write down the product of the two group algebra elements we’re interested in:

$$\left(\frac{1}{(m+n+p)!}\sum_{\sigma\in S_{m+n+p}}\sigma\right)\left(\frac{1}{(m+n)!}\sum_{\tau\in S_{m+n}}\tau\right) = \frac{1}{(m+n+p)!}\frac{1}{(m+n)!}\sum_{\sigma,\tau}\sigma\tau = \frac{1}{(m+n+p)!}\sum_{\pi\in S_{m+n+p}}\pi$$

Essentially, because the symmetrization of the whole term subsumes symmetrization of the first $m+n$ tensorands, the smaller symmetrization can be folded in, and the resulting sum counts the whole group exactly $(m+n)!$ times over, which cancels out the normalization factor. And this proves that the multiplication is, indeed, associative.
This multiplication is also commutative. Indeed, given $A\in S^m(V)$ and $B\in S^n(V)$, we can let $\sigma$ be the permutation which moves the last $n$ slots of a term to the beginning and the first $m$ slots to the end. Then we write

$$A\cdot B = \operatorname{Sym}(A\otimes B) = \operatorname{Sym}\bigl(\sigma(A\otimes B)\bigr) = \operatorname{Sym}(B\otimes A) = B\cdot A$$

because composing with $\sigma$ just shuffles around the order of the sum.
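Both associativity and commutativity are easy to verify numerically on small symmetric tensors. Here’s a sketch of my own, with `sym` playing the role of the symmetrizer (including its $\frac{1}{n!}$ normalization) and `sym_product` the tensor-then-symmetrize multiplication:

```python
import itertools
import math

import numpy as np

def sym(t):
    """Symmetrize a numpy tensor over all its axes, with 1/n! normalization."""
    n = t.ndim
    total = sum(np.transpose(t, perm) for perm in itertools.permutations(range(n)))
    return total / math.factorial(n)

def sym_product(a, b):
    """The symmetric-algebra product: tensor, then symmetrize."""
    return sym(np.tensordot(a, b, axes=0))

rng = np.random.default_rng(2)
d = 2
a = sym(rng.standard_normal((d, d)))  # symmetric rank-2 tensor
b = rng.standard_normal(d)            # rank-1 tensor (trivially symmetric)
c = sym(rng.standard_normal((d, d)))

lhs = sym_product(sym_product(a, b), c)
rhs = sym_product(a, sym_product(b, c))
```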
The symmetric algebra $S(V)$ is the free commutative algebra on the vector space $V$. And so it should be no surprise that the symmetric algebra on the dual space $S(V^*)$ is isomorphic to the algebra of polynomial functions on $V$, where the grading is the total degree of a monomial. If $V$ has finite dimension $d$, this is the algebra of polynomials in $d$ variables.
We’re about to talk about certain kinds of algebras that have the added structure of a “grading”. It’s not horribly important at the moment, but we might as well talk about it now so we don’t forget later.
Given a monoid $G$, a $G$-graded algebra $A$ is one that, as a vector space, we can write as a direct sum

$$A = \bigoplus_{g\in G}A_g$$

so that the product of elements contained in two grades lands in the grade given by their product in the monoid. That is, we can write the algebra multiplication by

$$A_g\otimes A_h\to A_{gh}$$

for each pair of grades $g$ and $h$. As usual, we handle elements that are the sum of two elements with different grades by linearity.
By far the most common grading is by the natural numbers under addition, in which case we often just say “graded”. For example, the algebra of polynomials is graded, where the grading is given by the total degree. That is, if $A$ is the algebra of polynomials in $n$ variables, then the grade $A_k$ consists of sums of products of $k$ of the variables at a time. This is a grading because the product of two such homogeneous polynomials is itself homogeneous, and the total degree of each term in the product is the sum of the degrees of the factors. For instance, the product of $x^2 + xy$ in grade $2$ and $x + z$ in grade $1$ is

$$(x^2 + xy)(x + z) = x^3 + x^2y + x^2z + xyz$$

in grade $3$.
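Here’s a tiny dict-based sketch of my own (the particular polynomials are my choice) showing that grades add under multiplication — monomials are exponent tuples, and the exponent tuples simply add:

```python
def poly_mul(f, g):
    """Multiply polynomials stored as {exponent-tuple: coefficient} dicts."""
    out = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = tuple(a + b for a, b in zip(ef, eg))  # exponents add termwise
            out[e] = out.get(e, 0) + cf * cg
    return out

def degree(mono):
    """Total degree of a monomial's exponent tuple."""
    return sum(mono)

# x^2 + x*y (grade 2) times x + z (grade 1), in variables (x, y, z):
f = {(2, 0, 0): 1, (1, 1, 0): 1}
g = {(1, 0, 0): 1, (0, 0, 1): 1}
h = poly_mul(f, g)
```

Every monomial of the product lands in grade $2 + 1 = 3$, as the grading demands.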
Other common gradings include $\mathbb{Z}$-grading and $\mathbb{Z}_2$-grading. The latter algebras are often called “superalgebras”, related to their use in studying supersymmetry in physics. “Superalgebra” sounds a lot bigger and more impressive than “$\mathbb{Z}_2$-graded algebra”, and physicists like that sort of thing.
In the context of graded algebras we also have graded modules. A $G$-graded module $M$ over the $G$-graded algebra $A$ can also be written down as a direct sum

$$M = \bigoplus_{g\in G}M_g$$

But now it’s the action of $A$ on $M$ that involves the grading:

$$A_g\otimes M_h\to M_{gh}$$

We can even talk about grading in the absence of a multiplicative structure, like a graded vector space. Now we don’t even really need the grades to form a monoid. Indeed, for any index set $S$ we might have the graded vector space

$$V = \bigoplus_{s\in S}V_s$$

This doesn’t seem to be very useful, but it can serve to recognize natural direct summands in a vector space and keep track of them. For instance, we may want to consider a linear map between graded vector spaces $V$ and $W$ that only acts on one grade of $V$ and with an image contained in only one grade of $W$:

$$f_{s,t}: V_s\to W_t$$

We’ll say that such a map is graded $(s,t)$. Any linear map from $V$ to $W$ can be decomposed uniquely into such graded components

$$f = \sum_{s,t}f_{s,t}$$

giving a grading on the space of linear maps.
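In matrix terms, the graded components of a linear map are just the blocks of its matrix with respect to the direct-sum decompositions. A small sketch of my own, with made-up dimensions for the grades:

```python
import numpy as np

# V = V_0 ⊕ V_1 with dims (2, 3); W = W_0 ⊕ W_1 with dims (1, 4).
v_dims, w_dims = [2, 3], [1, 4]
rng = np.random.default_rng(3)
f = rng.standard_normal((sum(w_dims), sum(v_dims)))

# Offsets of each grade inside the total space.
v_off = np.cumsum([0] + v_dims)
w_off = np.cumsum([0] + w_dims)

# Split f into graded components f_{(t,s)}: V_s -> W_t.
components = {
    (t, s): f[w_off[t]:w_off[t + 1], v_off[s]:v_off[s + 1]]
    for t in range(len(w_dims))
    for s in range(len(v_dims))
}

# Summing the components (padded back into place) recovers f.
recovered = np.zeros_like(f)
for (t, s), block in components.items():
    recovered[w_off[t]:w_off[t + 1], v_off[s]:v_off[s + 1]] = block
```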
Okay, time for a diversion from all this calculus. Don’t worry, there’s tons more ahead.
We’re going to need some geometric concepts tied to linear algebra, and before we get into that we need to revisit an old topic: tensor powers and the subspaces of symmetric and antisymmetric tensors. Specifically, how do all of these interact with duals? Through these posts we’ll be working with a vector space $V$ over a field $\mathbb{F}$, which at times will be assumed to be finite-dimensional, but will not always be.

First, we remember that elements of the dual space $V^*$ are called “linear functionals”. These are $\mathbb{F}$-linear functions from the vector space $V$ to the base field $\mathbb{F}$. Similarly, an “$n$-multilinear functional” is a function that takes $n$ vectors from $V$ and gives back a field element in $\mathbb{F}$ in a way that’s $\mathbb{F}$-linear in each variable. That is,

$$f(v_1,\dots,av_i + bv_i',\dots,v_n) = a\,f(v_1,\dots,v_i,\dots,v_n) + b\,f(v_1,\dots,v_i',\dots,v_n)$$

for scalars $a$ and $b$, and for any index $i$. By the defining universal property of tensor products, this is equivalent to a linear function $V^{\otimes n}\to\mathbb{F}$ — a linear functional on $V^{\otimes n}$. That is, the space of $n$-multilinear functionals is the dual space $\left(V^{\otimes n}\right)^*$.
There’s a good way to come up with $n$-multilinear functionals. Just take $n$ linear functionals and sew them together. That is, if we have an $n$-tuple of functionals $(f_1,\dots,f_n)$ we can define an $n$-multilinear functional by the formula

$$\left[f_1\otimes\dots\otimes f_n\right](v_1\otimes\dots\otimes v_n) = f_1(v_1)\cdots f_n(v_n)$$

We just feed the $i$th tensorand into the $i$th functional and multiply all the resulting field elements together. Since field multiplication is multilinear, so is this operation. Then the universal property of tensor products tells us that this mapping from $n$-tuples of linear functionals to $n$-multilinear functionals is equivalent to a unique linear map from the $n$th tensor power: $\left(V^*\right)^{\otimes n}\to\left(V^{\otimes n}\right)^*$. It’s also easy to show that this map has a trivial kernel.

This is not to say that dualization and tensor powers commute. Indeed, in general this map is a proper monomorphism. But it turns out that if $V$ is finite-dimensional, then it’s actually an isomorphism. Just count the dimensions — if $V$ has dimension $d$ then each space has dimension $d^n$ — and use the rank-nullity theorem to see that they must be isomorphic. That is, every $n$-multilinear functional is a linear combination of the ones we can construct from $n$-tuples of linear functionals.
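For $n = 2$ this construction is quite concrete: the functional built from a pair of covectors is just their Kronecker product, viewed as a covector on $V\otimes V$. A quick numerical check (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
f, g = rng.standard_normal(d), rng.standard_normal(d)  # two linear functionals on V
v, w = rng.standard_normal(d), rng.standard_normal(d)  # two vectors in V

# The bilinear functional (f⊗g)(v, w) = f(v)·g(w), evaluated pointwise ...
pointwise = (f @ v) * (g @ w)

# ... agrees with the linear functional kron(f, g) on V⊗V evaluated at v⊗w.
as_functional = np.kron(f, g) @ np.kron(v, w)
```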
Now we can specialize this result. We define a multilinear functional to be symmetric if its value is unchanged when we swap two of its inputs. Equivalently, it commutes with the symmetrizer. That is, it must kill everything that the symmetrizer kills, and so must really define a linear functional on the subspace $S^n(V)$ of symmetric tensors. That is, the space of symmetric $n$-multilinear functionals is the dual space $S^n(V)^*$. We can construct such symmetric multilinear functionals by taking $n$ linear functionals as before and symmetrizing them. This gives a monomorphism $S^n\left(V^*\right)\to S^n(V)^*$, which is an isomorphism if $V$ is finite-dimensional.

Similarly, we define a multilinear functional to be antisymmetric or “alternating” if its value changes sign when we swap two of its inputs. Then it commutes with the antisymmetrizer, must kill everything the antisymmetrizer kills, and descends to a linear functional on the subspace $\Lambda^n(V)$ of antisymmetric tensors. As before, we can construct just such an antisymmetric $n$-multilinear functional by antisymmetrizing $n$ linear functionals, and get a monomorphism $\Lambda^n\left(V^*\right)\to\Lambda^n(V)^*$. And yet again, this map is an isomorphism if $V$ is finite-dimensional.
Okay, one last post to fill out the week.
It turns out that we don’t even need all the shears. We can just use neighboring shears — those acting on adjacent rows — to build all the others. Indeed:

$$\begin{pmatrix}1&-1&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&-1\\0&0&1\end{pmatrix}\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix} = \begin{pmatrix}1&0&1\\0&1&0\\0&0&1\end{pmatrix}$$

In terms of elementary row operations, first we add the third row to the second. Then we add the second to the first, effectively adding the third row to the first as well. Then we subtract the third row from the second, undoing that first step. Finally, we subtract the second row (alone now) from the first, undoing the extra addition of the second row to the first. At the end of the whole process we’ve added the third row to the first. We could modify this by adding a multiple of the third row to the second, and subtracting the same multiple later. Check to see what result that has. And we have similar results using neighboring lower shears.
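We can verify the four-step row-operation recipe numerically. This is my own sketch; `shear` builds the elementary matrices, and each new row operation acts by multiplying on the left.

```python
import numpy as np

def shear(n, i, j, a=1.0):
    """Elementary matrix adding a times row j to row i (0-indexed)."""
    m = np.eye(n)
    m[i, j] = a
    return m

# Add row 3 to row 2; add row 2 to row 1; then undo each addition in turn.
steps = [
    shear(3, 1, 2),        # add row 3 to row 2
    shear(3, 0, 1),        # add row 2 to row 1
    shear(3, 1, 2, -1.0),  # subtract row 3 from row 2
    shear(3, 0, 1, -1.0),  # subtract row 2 from row 1
]

product = np.eye(3)
for s in steps:
    product = s @ product  # each new row operation multiplies on the left
```

The net effect should be the non-neighboring shear adding row 3 to row 1.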
So we can generate the special linear group using only the neighboring shears. If we have an $n\times n$ matrix $A$ in the general linear group we can take its determinant $d = \det(A)$. Then we can write $A = DA'$. Here we’ve factored out a scaling $D$ by the determinant in the first row and we’re left with a matrix $A'$ of determinant $1$, which can then be written in terms of neighboring shears. So we need $2(n-1)+1 = 2n-1$ (families of) generators here: the $n-1$ neighboring upper shears, the $n-1$ neighboring lower shears, and the one scaling.
These are the best I can do, and I don’t see a way of improving. Roughly, upper shears can’t build up lower shears, any collection of neighboring shears can only affect the rows they cover in sequence and so can’t build up a new neighboring shear, and no shears can handle that one scaling. So it seems there’s no way to pare down this collection of generators. But there might be a completely different approach that leads to fewer families of generators. If someone has one, I’d be glad to see it.
We established that if we restrict to upper shears we can generate all upper-unipotent matrices. On the other hand, if we use all shears and scalings we can generate any invertible matrix we want (since swaps can be built from shears and scalings). We clearly can’t build any matrix whatsoever from shears alone, since every shear has determinant $1$, and so must any product of shears. But it turns out that we can use shears to generate any matrix of determinant $1$ — those in the special linear group.
First of all, let’s consider the following matrix equations, which should be easy to verify:

$$\begin{pmatrix}1&a&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&0&c\end{pmatrix} = \begin{pmatrix}1&0&0\\0&1&0\\0&0&c\end{pmatrix}\begin{pmatrix}1&a&0\\0&1&0\\0&0&1\end{pmatrix}\qquad\begin{pmatrix}1&0&0\\0&1&0\\0&a&1\end{pmatrix}\begin{pmatrix}c&0&0\\0&1&0\\0&0&1\end{pmatrix} = \begin{pmatrix}c&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&a&1\end{pmatrix}$$

$$\begin{pmatrix}1&a\\0&1\end{pmatrix}\begin{pmatrix}c&0\\0&1\end{pmatrix} = \begin{pmatrix}c&0\\0&1\end{pmatrix}\begin{pmatrix}1&\frac{a}{c}\\0&1\end{pmatrix}\qquad\begin{pmatrix}1&a\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&c\end{pmatrix} = \begin{pmatrix}1&0\\0&c\end{pmatrix}\begin{pmatrix}1&ac\\0&1\end{pmatrix}$$

These show that we can always pull a scaling to the left past a shear. In the first two cases, the scaling and the shear commute, since the row and column the scaling acts on are uninvolved in the shear. In the last two cases, we have to modify the shear in the process, but we end up with the scaling written to the left of a shear instead of to the right. We can use these toy examples to see that we can always pull a scaling from the right to the left of a shear, possibly changing the shear in the process.
What does this mean? When we take a matrix and write it out in terms of elementary matrices, we can always modify this expression so that all the scalings are to the left of all the shears. Then we have a diagonal matrix to the left of a long product of shears, since the product of a bunch of scalings is a diagonal matrix. But now the determinant of each shear is $1$, and the determinant of the diagonal matrix must be the product of the diagonal entries, which are the scaling factors. And so the product of the scaling factors is the determinant of our original matrix.
We’re specifically concerned with matrices of determinant $1$, meaning the product of all the diagonal entries must come out to be $1$. I’m going to use this fact to write the diagonal matrix as a product of scalings in a very particular way. Let’s say the diagonal entry in row $i$ is $d_i$. Then I’m going to start by writing down

$$\operatorname{diag}\left(d_1,\frac{1}{d_1},1,\dots,1\right)$$

I’ve scaled the first row by the right amount, and then scaled the second row by the inverse amount so the product of the two scaling factors is $1$. Then I write down

$$\operatorname{diag}\left(1,d_1d_2,\frac{1}{d_1d_2},1,\dots,1\right)$$

The product of the two scalings of the second row ends up scaling it by $\frac{1}{d_1}\,d_1d_2 = d_2$, and we scale the third row to compensate. We continue this way, scaling each row to the right amount, and the next one by the inverse factor. Once we scale the next-to-last row we’re done, since the scaling factor for the last row must be exactly what we need to make the total product of all the scaling factors come out to $1$. That is, as long as the total scaling factor is $1$, we can write the diagonal matrix as the product of these pairs of scalings with inverse scaling factors.
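Here’s a numerical sketch of this pairing-off of scalings (mine, with a made-up determinant-$1$ diagonal): each pair scales one row by a running product of the $d_i$ and the next row by its inverse, and the last row takes care of itself.

```python
import numpy as np

def pair_scaling(n, k, c):
    """Scale row k by c and row k+1 by 1/c (0-indexed)."""
    m = np.eye(n)
    m[k, k] = c
    m[k + 1, k + 1] = 1.0 / c
    return m

diag = np.diag([2.0, 3.0, 0.5, 1.0 / 3.0])  # diagonal entries multiplying to 1
n = diag.shape[0]

product = np.eye(n)
running = 1.0
for k in range(n - 1):
    running *= diag[k, k]  # c_k = d_1 * d_2 * ... * d_{k+1}
    product = product @ pair_scaling(n, k, running)
```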
Now let’s take four shears, alternating upper and lower, since two upper shears in a row are the same as a single upper shear, and similarly for lower shears. We want the product to come out to one of these pairs of scalings:

$$\begin{pmatrix}1&a\\0&1\end{pmatrix}\begin{pmatrix}1&0\\b&1\end{pmatrix}\begin{pmatrix}1&c\\0&1\end{pmatrix}\begin{pmatrix}1&0\\d&1\end{pmatrix} = \begin{pmatrix}t&0\\0&\frac{1}{t}\end{pmatrix}$$

This gives us four equations to solve

$$\begin{aligned}(1+ab) + \bigl((1+ab)c+a\bigr)d &= t\\(1+ab)c + a &= 0\\b + (bc+1)d &= 0\\bc + 1 &= \frac{1}{t}\end{aligned}$$

These quickly simplify to

$$1 + ab = t\qquad a = -tc\qquad d = -tb\qquad bc = \frac{1-t}{t}$$

which can be solved to find

$$a = \frac{t-1}{b}\qquad c = \frac{1-t}{tb}\qquad d = -tb$$

So we could pick $b = 1$ and for any scaling factor $t$ write

$$\begin{pmatrix}t&0\\0&\frac{1}{t}\end{pmatrix} = \begin{pmatrix}1&t-1\\0&1\end{pmatrix}\begin{pmatrix}1&0\\1&1\end{pmatrix}\begin{pmatrix}1&\frac{1-t}{t}\\0&1\end{pmatrix}\begin{pmatrix}1&0\\-t&1\end{pmatrix}$$
And so we can write such a pair of scalings with inverse scaling factors as a product of four shears. Since in the case at hand we can write the diagonal part of our elementary matrix decomposition with such pairs of scalings, we can translate them all into shears. And at the end of the day, we can write any special linear transformation as a product of a bunch of shears.
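The four-shear factorization is easy to check numerically. This sketch is mine, using one particular solution of the shear equations (the choice of parameters, with $b = 1$, is my own):

```python
import numpy as np

def upper(a):
    """Upper shear: add a times row 2 to row 1."""
    return np.array([[1.0, a], [0.0, 1.0]])

def lower(a):
    """Lower shear: add a times row 1 to row 2."""
    return np.array([[1.0, 0.0], [a, 1.0]])

t = 2.5  # an arbitrary nonzero scaling factor

# With b = 1: a = t - 1, c = (1 - t)/t, d = -t.
four_shears = upper(t - 1) @ lower(1.0) @ upper((1 - t) / t) @ lower(-t)
target = np.diag([t, 1.0 / t])
```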
We’ve already got down the notion of the general linear group $GL(V)$ of a vector space $V$, including the particular case of the matrix group $GL(n,\mathbb{F})$ of the space $\mathbb{F}^n$. We also have defined the orthogonal group $O(n)$ of matrices whose transpose and inverse are the same, which is related to the group $O(V,B)$ of orthogonal transformations of the real vector space $V$ preserving a specified bilinear form $B$. Lastly, we’ve defined the group $U(n)$ of unitary transformations on $\mathbb{C}^n$ — complex matrices whose conjugate transpose and inverse are the same.

For all of these matrix groups — which are all subgroups of some appropriate $GL(n,\mathbb{F})$ — we have a homomorphism to the multiplicative group $\mathbb{F}^\times$ of $\mathbb{F}$ given by the determinant. We originally defined the determinant on $GL(n,\mathbb{F})$ itself, but we can easily restrict it to any subgroup. We actually know that for unitary and orthogonal transformations the image of this homomorphism must lie in a particular subgroup of $\mathbb{F}^\times$. But in any case, the homomorphism must have a kernel, and this kernel turns out to be important.

In the case of the general linear group $GL(V)$, the kernel of the determinant homomorphism consists of the automorphisms of $V$ with determinant $1$. We call this subgroup of $GL(V)$ the “special linear group” $SL(V)$, and transformations in this subgroup are sometimes called “special linear transformations”. Of course, we also have the particular special linear group $SL(n,\mathbb{F})$. When we take the kernel of any of the other groups, we prepend the adjective “special” and an $S$ to the notation. Thus we have the special orthogonal groups $SO(n)$ and $SO(V,B)$ and the special unitary group $SU(n)$.
In a sense, all the interesting part of the general linear group is contained in the special linear subgroup. Outside of that, what remains is “just” a scaling. It’s a little more complicated than it seems on the surface, but not much.