# The Unapologetic Mathematician

## The Determinant of a Noninvertible Transformation

We’ve defined and calculated the determinant representation of $\mathrm{GL}(V)$ for a finite-dimensional vector space $V$. But we can extend this definition to apply to any linear transformation sending $V$ to itself.

So what happens when $T:V\rightarrow V$ fails to be invertible? Its image must miss some vectors in $V$; equivalently, it has a nontrivial kernel. Indeed, the index of $T$ is zero, so a trivial kernel would mean a trivial cokernel. We would then have a one-to-one and onto linear transformation, and $T$ would be invertible.
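To make this concrete, here is a small numerical sketch (using NumPy, which isn't part of the post's setup, and a matrix of my own choosing): a $3\times 3$ matrix whose third column is the sum of the first two, so it kills a nonzero vector and its image misses a dimension of $V$.

```python
import numpy as np

T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])  # third column = first + second

v = np.array([1.0, 1.0, -1.0])  # a nonzero vector with T v = 0
print(T @ v)                    # [0. 0. 0.]: the kernel is nontrivial
print(np.linalg.matrix_rank(T)) # 2, so the image misses a dimension of V
```

Note that $\dim\mathrm{Ker}(T) = 3 - 2 = 1 = \dim\mathrm{Coker}(T)$ here, illustrating the index-zero bookkeeping: the kernel and cokernel are nontrivial together.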

Let’s take a basis of $\mathrm{Ker}(T)$. Since this is a linearly independent set, it can be completed to a basis $\{e_i\}$ of all of $V$, with the kernel vectors listed first. Now we can use this basis of $V$ to write out the matrix of $T$ and use our formula from last time to calculate $\det(T)$.

The $i$th column of the matrix is the vector $T(e_i)$ written out in terms of our basis. But since the first few basis vectors lie in the kernel of $T$, we have at least $T(e_1)=0$, so the first column of the matrix must be all zeroes. Now each permutation selects exactly one entry from each column, so every term in our determinant formula contains a factor from the first column, which we have just seen is zero. That is, the term for every permutation vanishes, and so their sum, the determinant itself, must also be zero.
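Here is a sketch of this argument in code (the helper names `sign` and `det` are my own, not from the post): expanding the determinant by the permutation-sum formula for a matrix whose first column is all zeroes, exactly the shape the matrix of $T$ takes in a basis adapted to its kernel.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(M):
    """Permutation-sum formula: signed products, one entry per column."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= M[i][perm[i]]  # exactly one factor from each column
        total += term
    return total

# The matrix of T in a basis whose first vector spans part of Ker(T):
# the first column is all zeroes.
M = [[0, 2, 1],
     [0, 1, 3],
     [0, 4, 5]]
print(det(M))  # every term picks up a factor from column 0, so 0
```

Each term in the sum visits every column exactly once, so the zero column annihilates all of them at once.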

Notice that this still lets us think of the determinant as preserving multiplication in the algebra of endomorphisms of $V$. Any noninvertible linear transformation is sent to zero, the product of a noninvertible transformation and any other transformation is again noninvertible, and the product of their determinants is zero. This also gives us a test for invertibility! Take the linear transformation $T$ and run it through the determinant function. If the result is zero, then $T$ is noninvertible; if the result is nonzero, then $T$ is invertible.
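A quick numerical check of both claims (the matrices $T$ and $S$ here are my own examples, using NumPy): a singular $T$, an invertible $S$, and the multiplicative property surviving on the product.

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rows proportional: nontrivial kernel
S = np.array([[0.0, 1.0],
              [1.0, 1.0]])   # invertible: determinant is -1

print(np.linalg.det(T))      # 0 (up to rounding): T is noninvertible
print(np.linalg.det(S @ T))  # 0 as well: the product stays noninvertible
print(np.isclose(np.linalg.det(S @ T),
                 np.linalg.det(S) * np.linalg.det(T)))  # True
```

So the determinant-as-invertibility-test reads off directly: $\det(T)=0$ flags $T$ as noninvertible, while $\det(S)=-1\neq 0$ certifies that $S$ is invertible.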