Linear Equations
Okay, now I really should introduce one of the most popular applications of linear algebra, at least outside mathematics. Matrices can encode systems of linear equations, and matrix algebra can be used to solve them.
What is a linear equation? It’s simply an algebraic equation where each variable shows up at most to the first power. For example, we could consider the equation

$3x = 4$

and clearly we can solve this equation by dividing by $3$ on each side to find $x = \frac{4}{3}$. Of course, there could be more than one variable.
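If you like seeing the arithmetic as code, here is a tiny Python sketch of that one-variable computation; the particular numbers $3$ and $4$ are just for illustration.

```python
from fractions import Fraction

# One equation in one unknown: 3*x = 4.  Dividing both sides by 3 solves it.
a, b = Fraction(3), Fraction(4)
x = b / a
print(x)           # 4/3
assert a * x == b  # plugging the solution back in recovers the equation
```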
Sometimes people might use different names like $x$, $y$, and $z$, but since we want to be open-ended about things we’ll just say $x^1$, $x^2$, $x^3$, and so on. Notice here that because variables can only show up to the first power, there is no ambiguity about writing our indices as superscripts — something we’ve done before. Anyhow, we might write an equation

$3x^1 + 4x^2 = 12$
Now we have many different possible solutions. We could set $x^1 = 4$ and $x^2 = 0$, or we could set $x^1 = 0$ and $x^2 = 3$, or all sorts of combinations in between. This one equation is not enough to specify a unique solution.
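To see that freedom concretely, here is a short Python sketch that picks a value of $x^1$ and reads off the $x^2$ that goes with it; again, the particular coefficients are only illustrative.

```python
from fractions import Fraction

# One linear equation in two unknowns: 3*x1 + 4*x2 = 12.
# Choose x1 freely; the equation then pins down x2.
def x2_given_x1(x1):
    return (Fraction(12) - 3 * x1) / 4

for x1 in (Fraction(0), Fraction(4), Fraction(2)):
    x2 = x2_given_x1(x1)
    assert 3 * x1 + 4 * x2 == 12
    print(x1, x2)   # 0 3, then 4 0, then 2 3/2
```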
Things might change, though, if we add more equations. Consider the system

$3x^1 + 4x^2 = 12$
$x^1 - x^2 = 11$
Now we can rewrite the second equation as $x^1 = x^2 + 11$, and then drop this into the first equation to find $3(x^2 + 11) + 4x^2 = 12$, or $7x^2 + 33 = 12$. This just involves the one variable, and we can easily solve it to find $x^2 = -3$, which quickly yields $x^1 = 8$. We now have a single solution for the pair of equations.
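As a quick sanity check, here is a minimal Python sketch that just replays that substitution and verifies the answer against both equations (using the same coefficients as above).

```python
from fractions import Fraction

# The pair of equations:
#   3*x1 + 4*x2 = 12
#     x1 -   x2 = 11
# Substitution: the second gives x1 = x2 + 11; putting that into the first
# gives 7*x2 + 33 = 12.
x2 = (Fraction(12) - 33) / 7   # -3
x1 = x2 + 11                   # 8
assert 3 * x1 + 4 * x2 == 12
assert x1 - x2 == 11
print(x1, x2)                  # 8 -3
```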
What if we add another equation? Consider

$3x^1 + 4x^2 = 12$
$x^1 - x^2 = 11$
$x^1 + x^2 = 4$

Now we know that the only values solving both of the first two equations are $x^1 = 8$ and $x^2 = -3$. But then $x^1 + x^2 = 5 \neq 4$, so the third equation cannot possibly be satisfied. Too many equations can be impossible to solve.
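Peeking ahead to the matrix language of the next few posts, here is a Python/NumPy sketch that detects the failure by comparing the rank of the coefficient matrix with that of the augmented matrix; the third equation here is just an illustrative choice that conflicts with the unique solution found above.

```python
import numpy as np

# Three equations in two unknowns:
#   3*x1 + 4*x2 = 12
#     x1 -   x2 = 11
#     x1 +   x2 =  4   (extra equation, inconsistent with x1 = 8, x2 = -3)
A = np.array([[3.0, 4.0],
              [1.0, -1.0],
              [1.0, 1.0]])
b = np.array([12.0, 11.0, 4.0])

# A linear system has a solution exactly when the augmented matrix [A | b]
# has the same rank as A itself; here the ranks differ, so no solution exists.
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab, rank_A == rank_Ab)   # 2 3 False
```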
In general, things work out when we’ve got one equation for each variable. As we move forwards we’ll see how to express this fact more concretely in terms of matrices and vector spaces.
Interesting. I have seen authors use subscripts, but never superscripts. It seems to me that this could lead to some ambiguity.
Well, I’m using superscripts so it will mesh with the summation convention when I write out a linear system as a matrix equation. Then the many variables will be the components of a single vector variable.
so ambiguous… maybe you want to think of some better notation…
Mgccl: I’m not just blindly following convention here. I specifically chose superscript indices for the very specific reason I just pointed out.
It might be useful to specify the domain and range. Are these variables real or complex, for example? It is important to specify that they are not restricted to integers, otherwise we get the undecidability of Diophantine equations, via Gödel numbers, Hilbert’s tenth problem, and Yuri Matiyasevich.
JVP: There’s no “domain” and “range” yet. You’re jumping ahead to the next step, where we encode a linear system in a matrix, and thus as a linear map.
Even taken over the integers, I don’t think undecidability issues come into play here; these are just linear systems we’re talking about.
How about the superscript before the variable, instead of after?

$3\,{}^1x + 4\,{}^2x = 12$
${}^1x - {}^2x = 11$
Expanding on Todd Trimble’s point, remembering the decidability of Presburger arithmetic should obviate any decidability concerns for solving linear equations in the integral context (and similarly by Tarski’s results for the real and complex contexts). But the superscript notation has perhaps fed confusion over just what kinds of equations are being looked at.
[…] consider a system of linear equations. We’ll use the variables $x^1$, $x^2$, and so on up to $x^n$; and we’ll let there be $m$ equations. […]
Pingback by The Matrix of a Linear System « The Unapologetic Mathematician | July 11, 2008 |
[…] Linear Systems An important special case of a linear system is a set of homogenous equations. All this means (in this case) is that the right side of each of […]
Pingback by Homogenous Linear Systems « The Unapologetic Mathematician | July 14, 2008 |
[…] must there always be a particular solution to begin with? Clearly not. When we first talked about linear systems we mentioned the […]
Pingback by Unsolvable Inhomogenous Systems « The Unapologetic Mathematician | July 18, 2008 |
[…] Today I want to talk about the index of a linear map, which I’ll motivate in terms of a linear system for some linear map […]
Pingback by The Index of a Linear Map « The Unapologetic Mathematician | July 22, 2008 |
Hi,
Any thoughts on how to find the range for each independent variable, for a given range of the dependent variable, in a linear equation with, say, 1 dependent and 3 independent variables? I’m struggling with this, despite trying Monte Carlo simulation…
I’m not sure what you mean, srinivas. Can you give a more explicit example?
Hi John,
Thanks for the response. What I was referring to is a multiple regression equation (y = a x1 + b x2 + c x3 + d). In this case, for a given range of y, there are a whole lot of values for each of x1, x2, and x3. What I’m interested in knowing is whether I can come up with the range for one variable, say x3, which I believe has the highest impact on y. All this while not keeping x1 or x2 constant.
Please tell me if it is possible. (I looked at simulation to give me an answer, but was unsuccessful)
I still don’t know what you mean by “has the highest impact on $y$”. The point of linear functions is that the effect on the output of a change in the input is constant over the entire domain.
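To make that concrete with a quick Python sketch (the coefficients below are made up purely for demonstration): in y = a x1 + b x2 + c x3 + d, changing x3 by some amount delta changes y by exactly c times delta, no matter what x1 and x2 are.

```python
import random

# A hypothetical linear function y = a*x1 + b*x2 + c*x3 + d, with made-up
# coefficients chosen only for illustration.
a, b, c, d = 2.0, -1.5, 0.75, 10.0

def y(x1, x2, x3):
    return a * x1 + b * x2 + c * x3 + d

# Wherever x1 and x2 happen to sit, bumping x3 by delta moves y by c * delta:
# the "impact" of x3 is just the constant coefficient c.
delta = 1.0
for _ in range(5):
    x1, x2, x3 = (random.uniform(-10, 10) for _ in range(3))
    change = y(x1, x2, x3 + delta) - y(x1, x2, x3)
    print(round(change, 6))   # always 0.75, up to floating-point rounding
```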
[…] SVD also comes in handy for solving systems of linear equations. Let’s say we have a system written down as the matrix […]
Pingback by The Meaning of the SVD « The Unapologetic Mathematician | August 18, 2009 |
[…] The row echelon form of a matrix isn’t unique, but it’s still useful for solving systems of linear equations. If we have a system written in matrix form […]
Pingback by Solving Equations with Gaussian Elimination « The Unapologetic Mathematician | September 2, 2009 |