
operations, as we have seen, these errors are determined by the order of approximation
made in deriving the finite difference form. It is often suggested that after performing
a procedure at one step size, the procedure should be repeated using half the original
step size, and the two results compared as a check on the error. This, of course, may not
be practical in large calculations, and reducing the step size too far may increase the
errors discussed in the previous paragraph.
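As a rough illustration of such a step-halving check, the following Python sketch repeats an integration with half the step size and compares the two answers; the test equation dy/dx = -y, the use of Euler's method, and all the names here are illustrative choices made for this example, not anything prescribed by the text.

import math

def euler(f, y0, x_end, h):
    # Integrate dy/dx = f(x, y) from x = 0 to x_end with explicit Euler steps of size h.
    x, y = 0.0, y0
    while x < x_end - 1e-12:
        y += h * f(x, y)
        x += h
    return y

# Test problem: dy/dx = -y, y(0) = 1, exact answer y(1) = exp(-1).
f = lambda x, y: -y
y_h = euler(f, 1.0, 1.0, 0.1)      # original step size
y_half = euler(f, 1.0, 1.0, 0.05)  # half the step size
print(y_h, y_half, abs(y_h - y_half), abs(y_half - math.exp(-1.0)))
# If the two runs agree closely, the step size is probably adequate; their
# difference is itself a rough estimate of the error in the coarser run.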
Sometimes calculations can become unstable. An example is the solution of an ordinary
differential equation containing a parameter, so that solving the equation for various
values of the parameter generates a family of solutions. If at one point in the solution
space the members of the family lie close together, round-off errors can cause the
procedure to leave the original solution and drift to a neighboring solution. A simple
model of this is to compute integer powers of the Golden Mean $\phi = (\sqrt{5} - 1)/2$.
While powers can be computed by successive multiplications, it is easy to verify that
$\phi^{n+1} = \phi^{n-1} - \phi^{n}$. Thus, knowing $\phi^{0} = 1$ and
$\phi^{1} = 0.61803398\ldots$, successive powers can be computed by simple subtraction.
Unfortunately, while higher powers of $\phi$ decrease rapidly, powers of the number
$-(\sqrt{5} + 1)/2$ satisfy the same recursion relation, and this number is greater than
unity in absolute value. Consequently, a little round-off error means that use of the
recurrence relation will soon give completely wrong answers, usually around the
sixteenth power (Press et al., 1986).
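The drift is easy to see in a few lines of Python; this is only a sketch, and the variable names are mine.

import math

# Powers of the Golden Mean phi = (sqrt(5) - 1)/2, computed two ways: by repeated
# multiplication (stable) and by the subtraction recurrence
# phi**(n+1) = phi**(n-1) - phi**n (unstable, because the second root
# -(sqrt(5) + 1)/2 of the same recurrence exceeds unity in magnitude).
phi = (math.sqrt(5.0) - 1.0) / 2.0

prev, curr = 1.0, phi            # phi**0 and phi**1
for n in range(1, 25):
    direct = phi ** (n + 1)      # stable reference value
    nxt = prev - curr            # the subtraction recurrence
    prev, curr = curr, nxt
    print(n + 1, direct, nxt)    # by about the 16th power the recurrence value is nonsense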
Another situation where instability occurs is in dealing with stiff systems. These are
systems in which the problem possesses more than one length scale, and at least two of
the scales differ greatly in magnitude. An example is the ordinary differential equation
$d^{2}y/dx^{2} - 100y = 0$. This has the general solution $y(x) = Ae^{10x} + Be^{-10x}$,
with length scales differing by a factor of one hundred. So, even if the boundary
conditions are such that A is zero—for example, if $y(0) = 1$ and $y(\infty) = 0$—round-off
errors would result in a small nonzero value for A, and the solution would grow rapidly.
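A small numerical sketch shows how quickly a round-off-sized contribution to A overwhelms the intended solution; the value 1e-16 used for A below is simply a stand-in for accumulated round-off, not a quantity from the text.

import math

A, B = 1.0e-16, 1.0   # A should be exactly zero; 1e-16 mimics round-off contamination

for x in (0.0, 1.0, 2.0, 3.0, 4.0):
    wanted = B * math.exp(-10.0 * x)           # the solution the boundary conditions intend
    actual = A * math.exp(10.0 * x) + wanted   # what the contaminated computation carries
    print(x, wanted, actual)
# By x = 2 the spurious exp(+10x) term already dominates, and it grows without bound.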
The general form of this example equation is $d^{2}y/dx^{2} - \lambda^{2}y = 0$. Here,
the values of $\lambda$ are the eigenvalues of the problem, and the equation is stiff
because in the example the two values of $\lambda$ differ substantially.
A similar example occurs in the solution of algebraic equations. For a set of algebraic
equations $Ay = b$, where A is an n by n matrix and y and b are column vectors, the
eigenvalues of the matrix A are the roots of the determinantal equation
$|A - \lambda I| = 0$, I being the identity matrix. Again, if the eigenvalues differ too
greatly in magnitude, the usual solution methods can run into trouble. Since finite difference
methods frequently reduce the computation to a set of algebraic equations, it should not
be surprising that stiffness can occur in both algebra and calculus.
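The eigenvalue spread of an algebraic system is easy to inspect directly; the matrix below is an arbitrary illustration of mine, chosen only so that its eigenvalues differ by many orders of magnitude.

import numpy as np

# A 2 x 2 system A y = b whose eigenvalues differ by roughly a factor of 1e6.
# The large spread is the algebraic analogue of stiffness; for this symmetric
# matrix it equals the condition number, which limits the attainable accuracy
# of the usual solution methods.
A = np.array([[1.0e6, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 1.0])

eigenvalues = np.linalg.eigvals(A)
print("eigenvalues:", eigenvalues)
print("spread:", max(abs(eigenvalues)) / min(abs(eigenvalues)))
print("condition number:", np.linalg.cond(A))
print("solution:", np.linalg.solve(A, b))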
Should the Runge-Kutta scheme not work for a particular set of equations, another
method that has been successful in some cases² is the Bulirsch-Stoer method, introduced
in 1966. The method is based on three different concepts, and so programming it is
reasonably complicated. Programs can be found in Press et al. (1986) or on the Web.
The concepts are Richardson's deferred approach to the limit (an extrapolation
procedure), the choice of a proper fitting function (the original choice was rational
polynomials, but ordinary polynomials have been used), and the use of a method whose
error function is even in the step size. Further details can be found in the preceding
sources.
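The flavor of the method can be conveyed in a short Python sketch that combines a rule whose error is even in the step size (the modified midpoint rule used below is the conventional choice) with polynomial Richardson extrapolation in the square of the substep size toward zero. This is a bare illustration under those assumptions, not the published algorithm; in particular it uses ordinary polynomial extrapolation where the original method used rational functions, and it omits step-size control entirely.

import math

def modified_midpoint(f, x0, y0, H, n):
    # Advance y' = f(x, y) from x0 across an interval H using n substeps of the
    # modified midpoint rule, whose error expansion contains only even powers of H/n.
    h = H / n
    z_prev = y0
    z_curr = y0 + h * f(x0, y0)
    for m in range(1, n):
        z_prev, z_curr = z_curr, z_prev + 2.0 * h * f(x0 + m * h, z_curr)
    return 0.5 * (z_curr + z_prev + h * f(x0 + H, z_curr))

def extrapolated_step(f, x0, y0, H, substeps=(2, 4, 6, 8)):
    # Evaluate the modified midpoint rule with several substep counts and use
    # Neville (polynomial) extrapolation in h**2 to estimate the h -> 0 limit.
    h2 = [(H / n) ** 2 for n in substeps]
    table = [modified_midpoint(f, x0, y0, H, n) for n in substeps]
    for k in range(1, len(table)):
        for i in range(len(table) - 1, k - 1, -1):
            table[i] = table[i] + (table[i] - table[i - 1]) * h2[i] / (h2[i - k] - h2[i])
    return table[-1]

# Example: y' = -y, y(0) = 1, a single large step to x = 1; exact answer exp(-1).
f = lambda x, y: -y
print(extrapolated_step(f, 0.0, 1.0, 1.0), math.exp(-1.0))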
² One may hope—but never expect—to find one method that works for all problems. See
www.nr.com/CiP97.pdf for some comments on this.