In distinction from homogeneous systems we have inhomogeneous systems. These are systems of linear equations where at least one of the constants on the right-hand sides of the equations is nonzero. In our matrix notation we have $Ax = y$, written in components as $a_j^i x^j = y^i$, where at least one $y^i \neq 0$.
Now we no longer have the nice property that solutions form a vector space. For example, if $x^j$ are the components of one solution, and $\bar{x}^j$ are the components of another solution, then their sum is definitely not a solution. Indeed, we calculate

$$a_j^i\left(x^j + \bar{x}^j\right) = a_j^i x^j + a_j^i \bar{x}^j = y^i + y^i = 2y^i$$
And this only works out as a solution to our system if $2y^i = y^i$ for each index $i$, implying that each $y^i = 0$, contrary to our assumption.
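To see this failure concretely, here is a small sanity check in plain Python. The one-equation system $x_1 + x_2 = 2$ is a made-up example (any system with a nonzero right-hand side and more than one solution would do): the sum of two solutions lands on $2y$, not $y$.

```python
# A concrete check that the sum of two solutions of an inhomogeneous
# system is not itself a solution. The system is the made-up example
# x_1 + x_2 = 2.
A = [[1, 1]]          # coefficient matrix
y = [2]               # right-hand side, nonzero

def apply(A, x):
    """Compute the matrix-vector product A x as a list of components."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

x = [2, 0]            # one solution
x_bar = [0, 2]        # another solution
s = [u + v for u, v in zip(x, x_bar)]

print(apply(A, x))      # [2] -- solves the system
print(apply(A, x_bar))  # [2] -- so does this
print(apply(A, s))      # [4] -- the sum gives 2y, not y
```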
However, we do have something interesting. What if we take the difference between two solutions? Now we calculate

$$a_j^i\left(x^j - \bar{x}^j\right) = a_j^i x^j - a_j^i \bar{x}^j = y^i - y^i = 0$$
That is, the difference between the two solutions satisfies the “associated homogeneous equation” $a_j^i x^j = 0$. If we take some solution to the inhomogeneous equation, any other solution is the sum of this particular solution and a solution of the associated homogeneous equation.
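Running the same made-up system $x_1 + x_2 = 2$ through this observation confirms it: the difference of the two solutions above is killed by the matrix.

```python
# The difference of two solutions of the made-up system x_1 + x_2 = 2
# solves the associated homogeneous equation A x = 0.
A = [[1, 1]]

def apply(A, x):
    """Compute the matrix-vector product A x as a list of components."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

x, x_bar = [2, 0], [0, 2]                 # two solutions of A x = [2]
d = [u - v for u, v in zip(x, x_bar)]     # their difference, [2, -2]
print(apply(A, d))  # [0] -- d lies in the kernel of A
```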
So we’re still very interested in solving homogeneous equations. Once we have a complete solution to the homogeneous equation — usually by having a basis for the kernel of $A$ — we just need a particular solution (with components $p^j$) to the inhomogeneous equation, and we can parametrize all the solutions. If we remember that a vector space is built up from an abelian group, we might think back to group theory and the language of cosets. The solution set to the inhomogeneous equation is the coset $p + \mathrm{Ker}(A)$.
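This parametrization can be sketched numerically. The following uses NumPy on a made-up, underdetermined $2 \times 3$ system: a particular solution comes from least squares (exact here, since the system is consistent), and a kernel basis is read off from the singular value decomposition.

```python
import numpy as np

# Parametrize all solutions of A x = y as p + Ker(A), for the
# made-up consistent system x_1 + x_2 = 2, x_2 + x_3 = 3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([2.0, 3.0])

# A particular solution p; lstsq returns an exact one when the system
# is consistent (the minimum-norm solution).
p, *_ = np.linalg.lstsq(A, y, rcond=None)

# A basis for Ker(A) from the SVD: the rows of Vt beyond the rank
# (singular values numerically zero) span the kernel.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
kernel = Vt[rank:]            # here a single basis vector

# Every coefficient t gives another solution p + t * kernel[0].
for t in (-1.0, 0.0, 2.5):
    x = p + t * kernel[0]
    assert np.allclose(A @ x, y)
```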
Of course, we should also remember that expressions for cosets are far from unique. Any particular solution in the coset is just as good as any other. If $q$ is another solution, then $p + \mathrm{Ker}(A) = q + \mathrm{Ker}(A)$. Both cosets consist of the same collection of vectors, but they are written with different offsets (picked from $\mathrm{Ker}(A)$) from different base-points: $p$ and $q$. This illustrates that, unlike in a vector space, the solution set has no special element like the identity. All solutions are on the same footing.
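The base-point independence can be checked directly. Returning to the made-up system $x_1 + x_2 = 2$, whose kernel is spanned by $(1, -1)$: every point of the coset through one solution differs from any other solution by a kernel vector.

```python
# Two particular solutions of x_1 + x_2 = 2 serve equally well as
# base-points for the same coset.
p, q = (2, 0), (0, 2)

def in_kernel(v):
    """Membership test for Ker(A) when A = [1, 1]."""
    return v[0] + v[1] == 0

# Every sampled point of p + Ker(A) is q plus a kernel vector,
# so it also lies in q + Ker(A).
for t in range(-3, 4):
    x = (p[0] + t, p[1] - t)                # a point of p + Ker(A)
    offset = (x[0] - q[0], x[1] - q[1])
    assert in_kernel(offset)
print("same coset from either base-point")
```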