
Appendix 3

Hessians

In this appendix we describe what a Hessian is, and how it can be used to classify the stationary points of an unconstrained optimization problem. In Section 5.4 (page 391) the conditions for a function $f(x, y)$ to have a minimum were stated as:
$$f_{xx} > 0, \quad f_{yy} > 0 \quad \text{and} \quad f_{xx}f_{yy} - f_{xy}^2 > 0$$
where all of the partial derivatives are evaluated at a stationary point, (a, b).
It turns out that the second condition, $f_{yy} > 0$, is actually redundant. If the first and third conditions are met then the second one is automatically true. To see this, notice that
$$f_{xx}f_{yy} - f_{xy}^2 > 0$$
is the same as $f_{xx}f_{yy} > f_{xy}^2$. The right-hand side is non-negative (being a square term) and so

$$f_{xx}f_{yy} > 0$$
The only way that the product of two numbers can be positive is when they are either both positive or both negative. Consequently, when $f_{xx} > 0$, say, the other factor $f_{yy}$ must also be positive.
Similarly, for a maximum point $f_{xx} < 0$, which forces the condition $f_{yy} < 0$.
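As a quick check (using an illustrative function, not one from the main text), consider $f(x, y) = x^2 + xy + y^2$, which has a stationary point at $(0, 0)$:

$$f_{xx} = 2 > 0, \quad f_{xy} = 1, \quad f_{xx}f_{yy} - f_{xy}^2 = 2 \times 2 - 1^2 = 3 > 0$$

The first and third conditions hold, and sure enough $f_{yy} = 2 > 0$ follows without being checked separately; the point is a minimum.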
The two conditions for a minimum point, $f_{xx} > 0$ and $f_{xx}f_{yy} - f_{xy}^2 > 0$, can be expressed more succinctly in matrix notation.
The 2 × 2 matrix

$$H = \begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix} \qquad (\text{where } f_{xy} = f_{yx})$$

made from second-order partial derivatives is called a Hessian matrix and has determinant

$$|H| = f_{xx}f_{yy} - f_{xy}^2$$
so the conditions for a minimum are:
(1) the number in the top left-hand corner of H (called the first principal minor) is positive
(2) the determinant of H (called the second principal minor) is positive.
For a maximum, the first principal minor is negative and the second principal minor is
positive.
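These two principal-minor tests are easy to carry out by computer. The following is a minimal sketch, assuming SymPy is available; the function f and the stationary point used here are illustrative choices, not taken from the text.

```python
from sympy import symbols, hessian

x, y = symbols('x y')
f = x**2 + x*y + y**2   # illustrative function (assumption, not from the text)

H = hessian(f, (x, y))  # 2 x 2 matrix of second-order partial derivatives
point = {x: 0, y: 0}    # stationary point of f, found by solving f_x = f_y = 0

first_minor = H[0, 0].subs(point)   # f_xx: the first principal minor
second_minor = H.det().subs(point)  # f_xx*f_yy - f_xy**2: the second principal minor

if second_minor > 0:
    # second principal minor positive: the sign of the first decides min vs max
    print("minimum" if first_minor > 0 else "maximum")
else:
    # the tests above do not apply; the text classifies only these two cases
    print("not classified by this test")
```

Run on this example, the sketch prints "minimum", agreeing with the hand calculation above.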