How do round-off errors propagate in arithmetic computations? For addition and
subtraction, the absolute errors in the individual quantities are additive. For example,
if a and b are two numbers to be added having errors of Δa and Δb, respectively, the
sum will be (a + b) + (Δa + Δb), with the error in the sum being (Δa + Δb). For
multiplication and division, it is the relative errors that add, to a first approximation.
Similarly, the result of raising a number that has a relative error E to the power a
will have a relative error of approximately aE.
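To make these propagation rules concrete, here is a short MATLAB sketch (constructed for this discussion; the values of a, b, and their errors are arbitrary) that feeds known errors through a sum and a product and compares the outcome with the rules stated above.

    a = 3.2;  b = 1.7;           % "true" values (chosen arbitrarily)
    da = 1e-6;  db = -2e-6;      % known errors in a and b

    % Addition: the absolute errors add
    sum_err = ((a + da) + (b + db)) - (a + b);
    fprintf('Error in sum: %.3e  (da + db = %.3e)\n', sum_err, da + db)

    % Multiplication: the relative errors add, to first order
    rel_err = ((a + da)*(b + db) - a*b)/(a*b);
    fprintf('Relative error in product: %.3e  (da/a + db/b = %.3e)\n', ...
            rel_err, da/a + db/b)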
If a number that has a small error well within tolerance limits is fed into a series
of arithmetic operations that produce small deviations in the final calculated value of
the same order of magnitude as the initial error or less, the numerical algorithm or
method is said to be stable. In such algorithms, the initial errors or variations in
numerical values are kept in check during the calculations and the growth of error is
either linear or diminished. However, if a slight change in the initial values produces
a large change in the result, this indicates that the initial error is magnified through
the course of successive computations. This can yield results that may not be of much
use. Such algorithms or numerical systems are unstable. In Chapter 2, we discuss the
source of ill-conditioning (instability) in systems of linear equations.
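A classic demonstration of stable versus unstable algorithms (a standard example, not taken from this text) is the recurrence $I_n = 1 - nI_{n-1}$ satisfied by the integrals $I_n = \int_0^1 x^n e^{x-1}\,dx$. Run forward, the recurrence multiplies whatever error is present in $I_0$ by $n$ at every step; run backward, it divides the error by $n$ at every step, as the MATLAB sketch below shows.

    I = 1 - exp(-1);             % I_0 = 1 - 1/e, correct to machine precision
    for n = 1:20
        I = 1 - n*I;             % forward recurrence: error grows like n!
    end
    fprintf('Forward  I_20 = %g\n', I)   % grossly wrong

    I = 0;                       % crude guess for I_30
    for n = 30:-1:21
        I = (1 - I)/n;           % backward recurrence: error shrinks each step
    end
    fprintf('Backward I_20 = %g\n', I)   % close to the true value, ~0.0455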
1.6 Taylor series and truncation error
You will be well aware that functions such as $\sin x$, $\cos x$, $e^x$, and $\log x$ can be
represented as a series of infinite terms. For example,

$$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots, \qquad -\infty < x < \infty. \tag{1.10}$$
The series shown above is known as the Taylor expansion of the function $e^x$. The
Taylor series is an infinite series representation of a differentiable function $f(x)$. The
series expansion is made about a point $x_0$ at which the value of $f$ is known. Successive
terms in the series involve progressively higher-order derivatives of $f$ evaluated at $x_0$.
The Taylor expansion in one variable is shown below:
$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)(x - x_0)^2}{2!} + \cdots + \frac{f^{(n)}(x_0)(x - x_0)^n}{n!} + \cdots \tag{1.11}$$
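As a quick numerical check of the truncation error, the MATLAB sketch below (written for this discussion; the evaluation point and term counts are arbitrary choices) sums the series (1.10) for $e^x$, i.e., the Taylor expansion about $x_0 = 0$, and shows the error shrinking as more terms are retained.

    x = 1.5;                                 % arbitrary evaluation point
    for n = [2 4 8 16]                       % number of terms retained
        k = 0:(n - 1);
        approx = sum(x.^k ./ factorial(k));  % truncated series (1.10)
        fprintf('n = %2d terms: truncation error = %.3e\n', n, abs(approx - exp(x)))
    end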
every minute versus compounding every second.¹ Strangely, the maturity amount drops with even larger
compounding frequencies. This cannot be correct.
This error occurs due to the large value of $n$ and therefore the smallness of $r/100n$. For the last two
scenarios, $r/100n = 3.171 \times 10^{-13}$ and $3.171 \times 10^{-16}$, respectively. Since floating-point numbers
have at most 16 significant digits, adding two numbers that differ in magnitude by a factor of $10^{16}$ results
in the loss of significant digits or information, as described in this section. To illustrate this point further,
if the calculation were performed for a compounding frequency of every nanosecond, the maturity
amount is calculated by MATLAB to be $250 000 – the principal amount! This is the result of
$1 + (r/100n) = 1$, as per floating-point addition. We have encountered a limit imposed by the finite
precision of floating-point calculations.
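This collapse is easy to reproduce. In the sketch below (a reconstruction for illustration: the principal of $250 000 is the one mentioned above, while the annual rate r = 5% is an assumed value), the per-period rate $r/100n$ falls below machine epsilon at nanosecond compounding, so the computed maturity amount equals the principal exactly.

    P = 250000;  r = 5;                  % principal; assumed annual rate (%)
    n = 60*60*24*365;                    % compounding every second
    A = P*(1 + r/(100*n))^n;             % close to P*exp(r/100), as expected
    fprintf('Every second:     A = %.2f\n', A)

    n = n*1e9;                           % compounding every nanosecond
    disp(1 + r/(100*n) == 1)             % displays 1 (true): increment is lost
    A = P*(1 + r/(100*n))^n;             % 1^n * P: exactly the principal
    fprintf('Every nanosecond: A = %.2f\n', A)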
¹ The compound-interest problem spurred the discovery of the constant $e$. Jacob Bernoulli (1654–1705)
noticed that compound interest approaches a limit as $n \to \infty$; $\lim_{n\to\infty}\left(1 + \frac{r}{n}\right)^n$, when expanded using the
binomial theorem, produces the Taylor series expansion for $e^r$ (see Section 1.6 for an explanation of the
Taylor series). For starting principal $P$, continuous compounding (the maximum frequency of compounding)
at rate $r$ per annum will yield $Pe^r$ dollars at the end of one year.