industrial design, traffic control, meteorology, and economics. In this chapter,
several well-established classical optimization tools are presented; however, our
discussion only scratches the surface of the body of optimization literature currently
available. We first address the topic of unconstrained optimization of one variable in
Section 8.2. We demonstrate three classic methods used in one-dimensional minimization: (1) Newton's method (see Section 5.5 for a demonstration of its use as a nonlinear root-finding method), (2) the successive parabolic interpolation method, and
(3) the golden section search method. The minimization problem becomes more
challenging when we need to optimize over more than one variable. Simply using a
trial-and-error method for multivariable optimization problems is discouraged due
to its inefficiency. Section 8.3 discusses popular methods used to perform unconstrained optimization in several dimensions. We discuss the topic of constrained
optimization in Section 8.4. Finally, in Section 8.5 we demonstrate Monte Carlo
techniques that are used in practice to estimate the standard error in the estimates of
the model parameters.
8.2 Unconstrained single-variable optimization
The behavior of a nonlinear function in one variable and the location of its extreme
points can be studied by plotting the function. Consider any smooth function f in a
single variable x that is twice differentiable on the interval [a, b]. If f′(x) > 0 over any interval x ∈ [a, b], then the single-variable function is increasing on that interval. Similarly, when f′(x) < 0 over any interval of x, the single-variable function is decreasing on that interval. On the other hand, if there lies a point x* in the interval [a, b] such that f′(x*) = 0, the function is neither increasing nor decreasing at that point.
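As a quick illustration, the following Python sketch plots f and f′ together so that the sign of f′ reveals where f is increasing or decreasing, and its zero crossings mark the points x*. (NumPy and Matplotlib are assumed to be available, and the example function f(x) = x³ − 3x is a hypothetical choice for illustration, not one drawn from the text.)

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical smooth function on the interval [a, b] = [-2, 2]
f = lambda x: x**3 - 3*x
fprime = lambda x: 3*x**2 - 3          # analytical first derivative

x = np.linspace(-2, 2, 400)
plt.plot(x, f(x), label="f(x)")        # f rises, falls, then rises again
plt.plot(x, fprime(x), label="f'(x)")  # sign of f' tracks that behavior
plt.axhline(0, color="gray", lw=0.5)   # f'(x) = 0 at x* = -1 and x* = 1
plt.xlabel("x")
plt.legend()
plt.show()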
In calculus courses you have learned that an extreme point of a continuous and smooth function f(x) can be found by taking the derivative of f(x), equating it to zero, and solving for x. One or more solutions x that satisfy f′(x) = 0 may exist. The points x* at which f′(x) equals zero are called critical points or stationary points. At a critical point x*, the tangent to the function is parallel to the x-axis. Three possibilities arise regarding the nature of the critical point x*:
(1) if f′(x*) = 0 and f″(x*) > 0, the point is a local minimum;
(2) if f′(x*) = 0 and f″(x*) < 0, the point is a local maximum;
(3) if f′(x*) = 0 and f″(x*) = 0, the point may be either an inflection point or an extreme point. As such, this test is inconclusive. The values of higher derivatives f‴(x), ..., f^(k−1)(x), f^(k)(x) are needed to classify this point. If f^(k)(x) is the first nonzero kth derivative, then if k is even, the point is an extremum (a minimum if f^(k)(x) > 0; a maximum if f^(k)(x) < 0), and if k is odd, the critical point is an inflection point.
The method of establishing the nature of an extreme point by calculating the value
of the second derivative is called the second derivative test. Figure 8.3 graphically
illustrates the different types of critical points that can be exhibited by a nonlinear
function.
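A minimal sketch of this classification procedure, written in Python with SymPy (both SymPy and the example function f(x) = x⁴ are assumptions made here for illustration, not part of the text), applies the second derivative test described above and falls back on higher derivatives when f″(x*) = 0:

import sympy as sp

x = sp.symbols('x')
f = x**4               # hypothetical example with f'(0) = f''(0) = f'''(0) = 0

def classify(f, xstar, max_order=8):
    """Classify a critical point x* by the first nonzero derivative at x*."""
    for k in range(2, max_order + 1):
        dk = sp.diff(f, x, k).subs(x, xstar)   # k-th derivative at x*
        if dk != 0:                            # first nonzero derivative found
            if k % 2 == 1:                     # odd k: inflection point
                return "inflection point"
            # even k: extremum; the sign decides minimum versus maximum
            return "local minimum" if dk > 0 else "local maximum"
    return f"inconclusive up to order {max_order}"

for xstar in sp.solve(sp.diff(f, x), x):       # critical points: f'(x*) = 0
    print(xstar, "->", classify(f, xstar))     # prints: 0 -> local minimum

For k = 2 this reduces to the second derivative test; for f(x) = x⁴ the first nonzero derivative at x* = 0 is the fourth, which is positive, so the point is a local minimum.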
To find the local minima of a function, one can solve for the zeros of f′(x), which are the critical points of f(x). Using the second derivative test (or by plotting the function), one can then establish the character of the critical point. If f′(x) = 0 is a nonlinear equation whose solution is difficult to obtain analytically, root-finding methods such as those described in Chapter 5 can be used to solve iteratively for the critical points.
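For instance, one could apply Newton's root-finding iteration to the equation f′(x) = 0. The sketch below (pure Python; the function, its derivatives, and the starting guess are hypothetical choices for illustration) iterates x ← x − f′(x)/f″(x) until the step size is negligible:

def newton_on_fprime(fp, fpp, x0, tol=1e-10, max_iter=50):
    """Find a critical point by applying Newton's method to f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = fp(x) / fpp(x)   # Newton step for the equation f'(x) = 0
        x -= step
        if abs(step) < tol:     # stop once successive iterates agree
            break
    return x

# Example: f(x) = x**3 - 3*x, so f'(x) = 3*x**2 - 3 and f''(x) = 6*x.
xstar = newton_on_fprime(lambda x: 3*x**2 - 3, lambda x: 6*x, x0=2.0)
print(xstar)   # converges to x* = 1.0; f''(1) = 6 > 0, so a local minimum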