ERRORS IN SUMMATION 31
adding from smallest to largest is denoted by SL. Table 1.3 uses chopped arithmetic, with

    -.001 ≤ ε_j ≤ 0        (1.5.5)

and Table 1.4 uses rounded arithmetic, with

    -.0005 ≤ ε_j ≤ .0005        (1.5.6)
The numbers ε_j refer to (1.5.4), and their bounds come from (1.2.8) and (1.2.9).
In both tables, it is clear that the strategy of adding S from the smallest term to the largest is superior to summation from the largest term to the smallest. Of much more significance, however, is the far smaller error with rounding as compared to chopping. The difference is much more than the factor of 2 that would come from the relative size of the bounds in (1.5.5) and (1.5.6). We next give an analysis of this.
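The effect of summation order and of chopping versus rounding is easy to reproduce. The sketch below uses Python's decimal module to emulate fixed-precision decimal arithmetic; the series Σ 1/j² and the choice of four significant digits are illustrative assumptions, not the exact data behind Tables 1.3 and 1.4.

```python
from decimal import Decimal, getcontext, ROUND_DOWN, ROUND_HALF_EVEN

def sum_fixed_precision(terms, rounding, digits=4):
    """Sum `terms`, rounding every partial sum to `digits` significant digits."""
    getcontext().prec = digits
    getcontext().rounding = rounding
    total = Decimal(0)
    for t in terms:
        total = total + Decimal(repr(t))   # each addition is chopped or rounded
    return float(total)

# Illustrative series (not the text's tables): 1/j^2 for j = 1..1000, largest first.
terms = [1.0 / j**2 for j in range(1, 1001)]
exact = sum(terms)

err = {}
for name, mode in [("chopped", ROUND_DOWN), ("rounded", ROUND_HALF_EVEN)]:
    err[name, "LS"] = exact - sum_fixed_precision(terms, mode)        # largest to smallest
    err[name, "SL"] = exact - sum_fixed_precision(terms[::-1], mode)  # smallest to largest

for key, e in err.items():
    print(key, f"{e:+.5f}")
```

Both effects described above show up: smallest-to-largest beats largest-to-smallest, and with chopping the one-signed errors accumulate, while with rounding they largely cancel.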
A statistical analysis of error propagation   Consider a general error sum

    E = Σ_{j=1}^{n} ε_j        (1.5.7)

of the type that occurs in the summation error (1.5.4). A simple bound is

    |E| ≤ nδ        (1.5.8)

where δ is a bound on ε_1, ..., ε_n.
Then δ = .001 or .0005 in the preceding example, depending on whether chopping or rounding is used. This bound (1.5.8) is for the worst possible case, in which all the errors ε_j are as large as possible and of the same sign.
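As a quick illustration (with hypothetical values n = 1000 and δ = .0005), the bound (1.5.8) is attained exactly when every error sits at an endpoint of its interval with the same sign:

```python
n, delta = 1000, 0.0005   # hypothetical values: rounding bound, 1000 terms
eps = [-delta] * n        # worst case: all errors extreme and of one sign
E = sum(eps)
print(abs(E), n * delta)  # |E| reaches the bound n * delta
```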
When using rounding, the symmetry in sign behavior of the ε_j, as shown in (1.2.9), makes a major difference in the size of E. In this case, a better model is to assume that the errors ε_j are uniformly distributed random variables in the interval [-δ, δ] and that they are independent. Then

    E = n·ε̄,    ε̄ = (1/n) Σ_{j=1}^{n} ε_j
The sample mean ε̄ is a new random variable, having a probability distribution with mean 0 and variance δ²/3n.
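The mean and variance of the sample mean can be checked numerically. The following Monte Carlo sketch uses illustrative values δ = .0005 and n = 10 (a uniform variable on [-δ, δ] has variance δ²/3, so the mean of n independent draws has variance δ²/3n):

```python
import random

random.seed(0)                       # deterministic illustration
delta, n, trials = 0.0005, 10, 200_000

# Draw many samples of the mean of n independent uniform errors on [-delta, delta].
means = []
for _ in range(trials):
    eps = [random.uniform(-delta, delta) for _ in range(n)]
    means.append(sum(eps) / n)

avg = sum(means) / trials
var = sum((m - avg) ** 2 for m in means) / trials
print(avg, var, delta**2 / (3 * n))  # empirical mean/variance vs. delta^2 / (3n)
```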
To calculate probabilities for statements involving ε̄, it is important to note that the probability distribution for ε̄ is well approximated by the normal distribution with the same mean and variance, even for small values such as n ≥ 10. This follows from the Central Limit Theorem of probability theory [e.g., see Hogg and Craig (1978, chap. 5)].
Using the approximating normal distribution, the probability is 1/2 that

    |E| ≤ .39δ√n
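This 50 percent statement can also be checked by simulation; the constant comes from the normal median of |E|, 0.6745·δ·√(n/3) ≈ 0.39·δ·√n. The values δ = .0005 and n = 10 below are again illustrative:

```python
import math
import random

random.seed(1)                       # deterministic illustration
delta, n, trials = 0.0005, 10, 100_000
bound = 0.39 * delta * math.sqrt(n)  # 0.6745 * delta * sqrt(n/3) ≈ 0.39 * delta * sqrt(n)

# Count how often |E| = |sum of n uniform errors| falls within the bound.
hits = sum(
    abs(sum(random.uniform(-delta, delta) for _ in range(n))) <= bound
    for _ in range(trials)
)
print(hits / trials)                 # should be near 1/2
```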