Methods for solving general linear systems abound, varying in generality, complexity, efficiency, and stability. One of the most commonly used is called Gaussian elimination; it is the generalization of the elimination scheme described in the previous section. Full details of the Gaussian elimination algorithm can be found in Section A.1 in Appendix A.
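By way of illustration only (this sketch is ours, not the algorithm or pseudocode from Appendix A; the function names are invented, and the partial-pivoting row swap is our choice for the sketch), a bare-bones Gaussian elimination solver with back substitution might look like the following. The driver uses the same two-equation system that is revisited in augmented-matrix form in the next subsection.

#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Solve A x = b for a square, nonsingular system by Gaussian elimination
// with partial pivoting, followed by back substitution.  A is stored
// row-major as a vector of rows.  (Illustrative sketch only.)
std::vector<double> SolveLinearSystem(std::vector<std::vector<double>> A,
                                      std::vector<double> b)
{
    const int n = static_cast<int>(A.size());

    // Forward elimination: zero out the entries below each pivot.
    for (int k = 0; k < n; ++k) {
        // Bring the row with the largest |A[i][k]| into the pivot
        // position (a row interchange).
        int pivot = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[pivot][k])) pivot = i;
        std::swap(A[k], A[pivot]);
        std::swap(b[k], b[pivot]);

        // Subtract a multiple of row k from each row below it so that
        // the leading element of that row becomes zero.
        for (int i = k + 1; i < n; ++i) {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }

    // Back substitution: solve for the unknowns from the bottom up.
    std::vector<double> x(n);
    for (int i = n - 1; i >= 0; --i) {
        double sum = b[i];
        for (int j = i + 1; j < n; ++j) sum -= A[i][j] * x[j];
        x[i] = sum / A[i][i];
    }
    return x;
}

int main()
{
    // The two-equation example below: 3x + 2y = 6 and x - y = 1.
    std::vector<std::vector<double>> A = { {3.0, 2.0}, {1.0, -1.0} };
    std::vector<double> b = { 6.0, 1.0 };
    std::vector<double> x = SolveLinearSystem(A, b);
    std::printf("x = %g, y = %g\n", x[0], x[1]);  // expected: 1.6 and 0.6
    return 0;
}

The pivoting step is not strictly needed for this particular example, but it is what keeps the method stable on less friendly systems; see Section A.1 for the full treatment.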
2.4.4 Row Reductions, Echelon Form, and Rank
We can look back at the example of using the technique of elimination to solve a linear system and represent it in augmented matrix form:
\[
\left[\begin{array}{rr|r}
3 & 2 & 6 \\
1 & -1 & 1
\end{array}\right]
\]
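Reading the rows back as equations (here we name the unknowns x and y; the variable names are ours, chosen for concreteness), this augmented matrix is simply shorthand for

\[
\begin{aligned}
3x + 2y &= 6, \\
x - y &= 1,
\end{aligned}
\]

with the vertical bar separating the coefficients from the right-hand-side constants.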
We then multiplied the second row by −3, yielding an equivalent pair of equations,
whose matrix representation is
\[
\left[\begin{array}{rr|r}
3 & 2 & 6 \\
-3 & 3 & -3
\end{array}\right]
\]
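Written out, the scaling step multiplies every entry of the second row, including the entry to the right of the bar, by the same factor:

\[
-3 \cdot \left[\begin{array}{rr|r} 1 & -1 & 1 \end{array}\right]
= \left[\begin{array}{rr|r} -3 & 3 & -3 \end{array}\right].
\]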
The next step was to add the two equations together, replace the second equation with their sum, and divide that new row by 5:

\[
\left[\begin{array}{rr|r}
3 & 2 & 6 \\
0 & 1 & \frac{3}{5}
\end{array}\right]
\]
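Spelled out, the arithmetic behind the new second row is

\[
\left[\begin{array}{rr|r} 3 & 2 & 6 \end{array}\right]
+ \left[\begin{array}{rr|r} -3 & 3 & -3 \end{array}\right]
= \left[\begin{array}{rr|r} 0 & 5 & 3 \end{array}\right],
\qquad
\tfrac{1}{5}\left[\begin{array}{rr|r} 0 & 5 & 3 \end{array}\right]
= \left[\begin{array}{rr|r} 0 & 1 & \tfrac{3}{5} \end{array}\right].
\]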
We then took a “shortcut” by substituting 3/5 into the first row and solving directly.
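Carrying that substitution through (again writing the unknowns as x and y), the first row reads 3x + 2y = 6, so

\[
3x + 2\left(\tfrac{3}{5}\right) = 6
\quad\Longrightarrow\quad
3x = \tfrac{24}{5}
\quad\Longrightarrow\quad
x = \tfrac{8}{5},
\]

which, together with y = 3/5, solves the original pair of equations.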
Note that the lower left-hand corner element of the matrix is 0, which of course resulted from our choosing the multiplier for the second row so that the sum of the first row and the “scaled” second row eliminated that element.
So, it’s clear we can apply these operations (multiplying a row by a scalar and replacing a row with the sum of itself and another row) without affecting the solution(s). Another operation we can perform on a system of linear equations (and hence on the matrix representing it) is to interchange rows, again without affecting the solution(s) to the system; clearly, from a mathematical standpoint, the order of the equations is not significant.
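To make these row operations concrete as mechanical manipulations of the augmented matrix, here is a minimal C++ sketch; the function names and the row-major std::vector layout are our own choices for illustration, not an interface defined in the text.

#include <cstddef>
#include <utility>
#include <vector>

using Row = std::vector<double>;      // one row of an augmented matrix
using Matrix = std::vector<Row>;      // the augmented matrix, row-major

// Multiply every entry of row i (including the augmented column) by s.
void ScaleRow(Matrix& m, int i, double s)
{
    for (double& e : m[i]) e *= s;
}

// Replace row i with (row i + s * row j); s = 1 gives the plain sum.
void AddScaledRow(Matrix& m, int i, int j, double s)
{
    for (std::size_t k = 0; k < m[i].size(); ++k) m[i][k] += s * m[j][k];
}

// Interchange rows i and j.
void SwapRows(Matrix& m, int i, int j)
{
    std::swap(m[i], m[j]);
}

int main()
{
    // The augmented matrix from the example above.
    Matrix m = { {3.0, 2.0, 6.0}, {1.0, -1.0, 1.0} };

    ScaleRow(m, 1, -3.0);         // second row becomes [-3  3 | -3]
    AddScaledRow(m, 1, 0, 1.0);   // second row becomes [ 0  5 |  3]
    ScaleRow(m, 1, 1.0 / 5.0);    // second row becomes [ 0  1 | 3/5]
    return 0;
}

Applying ScaleRow, AddScaledRow, and a final ScaleRow to the example matrix reproduces exactly the sequence of matrices shown above; SwapRows is the row interchange just mentioned.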
If we take these two ideas together, we have essentially described one of the basic ideas of Gaussian elimination (see Section A.1): by successively eliminating
the leading elements of the rows, we end up with a system we can solve via back
substitution. What is important for the discussion here, though, is the form of the
system we end up with (just prior to the back-substitution phase); our system of
equations that starts out like this: