structure in a map produced from an extremely poor trial structure, but
such perspicacity is uncommon.
Most investigators currently view electron-density and difference
maps on a computer screen. There are several mouse-driven three-
dimensional interactive programs such as O (Jones et al., 1991) and
COOT (Emsley and Cowtan, 2004) that show electron densities as three-
dimensional wire-frame entities. These can be rotated by the user to
better view them, and a diagram of a three-dimensional trial structure
can be overlaid on them. Some refinement can even take place at the
computer screen as the trial structure diagram is moved to best fit the
map. When the user is satisfied with the fit, the program then generates
the atomic coordinates of the new and better position of the model, and
these coordinates can be refined further.
The method of least squares
The method of least squares, first used by Legendre (1805), is a common
technique for finding the best fit of a particular assumed model to a set of
experimental data when there are more experimental observations than
parameters to be determined. Parameters for the assumed model are
improved by this method by minimizing the sum of the squares of the
deviations between the experimental quantities and the values of the
same quantities calculated with the derived parameters of the model.
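Stated compactly, with symbols introduced here purely for illustration (they
are not part of the original discussion), the quantity minimized is

    S(p) = \sum_{i=1}^{n} \left[ y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}}(p) \right]^{2},

where the y_i^{obs} are the n experimental observations, the y_i^{calc}(p)
are the corresponding quantities calculated from the model with parameters
p, and n exceeds the number of parameters; the best estimates of p are the
values that make S(p) smallest.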
The method of least squares is often used to calculate the best straight
line through a series of points, when it is known that there is an experi-
mental error (assumed random) in the measurement of each point. The
equation for a line may be calculated such that the sum of the squares
of the deviations from the line is a minimum. Of course, if the points,
which were assumed to lie on a straight line, actually lie on a curve
(described very well by a nonlinear equation), the method will not tell
what this curve is, but will approximate it by a straight line as best it
may. It is possible to “weight” the points; that is, if one measurement
is believed to be more precise than the others, then this measurement
may, and indeed should, be given higher weight than the others. The
weight w(hkl) assigned to each measurement is inversely proportional
to its variance, that is, to the square of its standard uncertainty (formerly
known as the estimated standard deviation).
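A minimal numerical sketch of such a weighted straight-line fit is given
below. The data values, variable names, and the use of NumPy are
illustrative assumptions, not taken from the text; each weight is the
reciprocal of the square of the corresponding standard uncertainty, as
described above, and the two parameters (intercept and slope) are found by
solving the normal equations.

    import numpy as np

    # Observed points (x_i, y_i) with standard uncertainties sigma_i.
    # The numbers are invented purely for illustration.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
    sigma = np.array([0.1, 0.1, 0.2, 0.1, 0.3])

    # Each point is weighted by the reciprocal of the square of its
    # standard uncertainty, so the more precise points count for more.
    w = 1.0 / sigma**2

    # Fit y = a + b*x by minimizing sum_i w_i*(y_i - a - b*x_i)**2.
    # For this two-parameter problem the normal equations can be
    # written down and solved directly.
    A = np.array([[w.sum(),       (w * x).sum()],
                  [(w * x).sum(), (w * x * x).sum()]])
    rhs = np.array([(w * y).sum(), (w * x * y).sum()])
    a, b = np.linalg.solve(A, rhs)

    residuals = y - (a + b * x)
    S = (w * residuals**2).sum()  # the minimized weighted sum of squares
    print(f"intercept a = {a:.3f}, slope b = {b:.3f}, weighted S = {S:.3f}")

Setting every sigma to the same value reduces this to the familiar
unweighted fit, in which all points count equally.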
The least-squares method has been extended to the problem of fitting
the observed diffraction intensities to calculated ones (Hughes, 1946),
and has been for more than six decades by far the most commonly
used method of structure refinement, although this practice has not
been without serious criticism.**

**These criticisms are based in part on the fact that the theory of the
least-squares method is founded on the assumption that the experimental
errors in the data are normally distributed (that is, follow a Gaussian error
curve), or at least that the data are from a population with finite second
moments. This assumption is largely untested with most data sets. Weighting
of the observations may help to alleviate the problem, but it depends on a
knowledge of their variance, which is usually assumed rather than
experimentally measured. For a discussion of some of these points, see
Dunitz's discussion of least-squares methods (Dunitz, 1996).

Just as in a least-squares fit of data to a straight line (a two-parameter
problem), the observed data are fitted to
those calculated for a particular assumed model. If we let Δ|F(hkl)| be
the difference in the amplitudes of the observed and calculated structure
factors, |Fo| − |Fc|, and let the standard uncertainty of the experimental
value of Fo(hkl)² be [1/w(hkl)], then, according to the theory of