Okay, now I really should introduce one of the most popular applications of linear algebra, at least outside mathematics. Matrices can encode systems of linear equations, and matrix algebra can be used to solve them.
What is a linear equation? It’s simply an algebraic equation where each variable shows up at most to the first power. For example, we could consider the equation
$$ 2x = 6, $$
and clearly we can solve this equation by dividing by $2$ on each side to find $x = 3$. Of course, there could be more than one variable.
Sometimes people might use different names like $x$, $y$, and $z$, but since we want to be open-ended about things we’ll just say $x^1$, $x^2$, $x^3$, and so on. Notice here that because variables can only show up to the first power, there is no ambiguity about writing our indices as superscripts, something we’ve done before. Anyhow, we might write an equation
$$ x^1 + x^2 = 5. $$
Now we have many different possible solutions. We could set $x^1 = 1$ and $x^2 = 4$, or we could set $x^1 = 5$ and $x^2 = 0$, or all sorts of combinations in between. This one equation is not enough to specify a unique solution.
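To see just how many solutions one equation admits, here is a tiny Python sketch. The equation $x^1 + x^2 = 5$ and its numbers are illustrative choices of mine, not anything canonical: pick $x^1$ freely and the equation hands you $x^2$.

```python
# One linear equation in two unknowns has a whole line of solutions.
# For the illustrative equation x1 + x2 = 5, choose x1 freely and
# solve for x2 = 5 - x1.
solutions = []
for x1 in range(6):
    x2 = 5 - x1
    assert x1 + x2 == 5  # every pair really does solve the equation
    solutions.append((x1, x2))

print(solutions)
```

Every integer (indeed, every real) choice of $x^1$ gives a valid pair, which is exactly what “not enough to specify a unique solution” means.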
Things might change, though, if we add more equations. Consider the system
$$ x^1 + x^2 = 5, \qquad x^1 - x^2 = 1. $$
Now we can rewrite the second equation as $x^1 = x^2 + 1$, and then drop this into the first equation to find $(x^2 + 1) + x^2 = 5$, or $2x^2 + 1 = 5$. This just involves the one variable $x^2$, and we can easily solve it to find $x^2 = 2$, which quickly yields $x^1 = 3$. We now have a single solution for the pair of equations.
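Since the whole point of this section is that matrices can encode and solve such systems, here is a quick NumPy preview. The specific 2×2 system is an illustrative choice of mine ($x^1 + x^2 = 5$ and $x^1 - x^2 = 1$); the coefficients go into a matrix and the right-hand sides into a vector.

```python
import numpy as np

# Coefficient matrix and right-hand side for a sample 2x2 system
# (the numbers are illustrative):
#   x1 + x2 = 5
#   x1 - x2 = 1
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

# np.linalg.solve does, in effect, the substitution we did by hand.
solution = np.linalg.solve(A, b)
print(solution)  # array([3., 2.]), i.e. x1 = 3, x2 = 2
```

The solver returns the same $x^1 = 3$, $x^2 = 2$ that substitution produces, one solution per variable.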
What if we add another equation? Consider
$$ x^1 + x^2 = 5, \qquad x^1 - x^2 = 1, \qquad x^1 + 2x^2 = 10. $$
Now we know that the only values solving both of the first two equations are $x^1 = 3$ and $x^2 = 2$. But then $x^1 + 2x^2 = 7 \neq 10$, so the third equation cannot possibly be satisfied. Too many equations can be impossible to solve.
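There is a standard matrix-rank test for this kind of inconsistency (the Rouché–Capelli criterion): a system has a solution exactly when the coefficient matrix and the augmented matrix $[A \mid b]$ have the same rank. Here is a NumPy sketch with an illustrative overdetermined system of my own choosing.

```python
import numpy as np

# Three equations in two unknowns (illustrative numbers):
#   x1 +   x2 = 5
#   x1 -   x2 = 1
#   x1 + 2*x2 = 10
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [1.0,  2.0]])
b = np.array([5.0, 1.0, 10.0])

# Solvable exactly when rank(A) == rank([A | b]).
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 2 vs 3: the ranks differ, so no solution exists
```

The extra equation bumps the augmented rank above the coefficient rank, which is the matrix-language way of saying the third equation contradicts the first two.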
In general, things work out when we’ve got exactly one equation for each variable, provided the equations are genuinely independent of one another. As we move forward we’ll see how to express this fact more concretely in terms of matrices and vector spaces.