8.1 Problem of Optimal Control and Principle of Minimum 301
Let us now introduce the Hamilton function, or Hamiltonian,
\[
H = F + \lambda^T f(x, u) \tag{8.23}
\]
If the adjoint vector λ(t) satisfies the differential equation
\[
\frac{d\lambda}{dt} = -\frac{\partial H}{\partial x} \tag{8.24}
\]
then for a fixed initial state x(t_0) = x_0 (where δx(t_0) = 0) the necessary
condition for the existence of an optimal control is given as
\[
\delta I = \int_{t_0}^{t_f} \left( \frac{\partial H}{\partial u} \right)^T \delta u \, dt = 0 \tag{8.25}
\]
with the terminal condition for the adjoint vector λ(t)
\[
\lambda(t_f) = \left. \frac{\partial G}{\partial x} \right|_{t = t_f} \tag{8.26}
\]
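To make equations (8.23)–(8.26) concrete, consider a simple scalar example of our own (not part of the original derivation): the plant dx/dt = u with cost integrand F = x² + u² and no terminal cost, G = 0. Then
\[
H = x^2 + u^2 + \lambda u, \qquad
\frac{d\lambda}{dt} = -\frac{\partial H}{\partial x} = -2x, \qquad
\lambda(t_f) = \frac{\partial G}{\partial x} = 0
\]
so the adjoint equation is integrated backward in time from a zero terminal condition, while the state equation is integrated forward from x(t_0) = x_0.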
Equation (8.25) specifies the relation between the variation of the cost
function and the variation of the control trajectory. If some elements of the
vector x(t_f) are fixed at the terminal time, then the variation of the control
at this point is not arbitrary. However, it can be shown that an equivalent
result is obtained for both free and fixed terminal points.
If we suppose that the variation δu(t) is arbitrary (u(t) is unbounded), then
the equation
\[
\frac{\partial H}{\partial u} = 0 \tag{8.27}
\]
is the necessary condition for an extremum (minimum) of the cost function.
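Condition (8.27) can be exercised numerically with a forward-backward sweep. The sketch below is our own illustration, not part of the text: it uses the assumed scalar example dx/dt = u, F = x² + u², G = 0 on [0, 1], for which (8.27) gives 2u + λ = 0, i.e. u* = -λ/2, and the known analytic optimum is x(t) = x(0) cosh(t_f - t)/cosh(t_f).

```python
import math

# Sketch (our own example): minimize I = ∫ (x^2 + u^2) dt on [0, tf]
# for the scalar plant dx/dt = u, x(0) = x0, free terminal state.
# Hamiltonian (8.23):   H = x^2 + u^2 + λu
# Adjoint (8.24):       dλ/dt = -∂H/∂x = -2x
# Terminal cond. (8.26): λ(tf) = ∂G/∂x = 0   (G = 0 here)
# Stationarity (8.27):  ∂H/∂u = 2u + λ = 0  =>  u* = -λ/2

def solve_lq(x0=1.0, tf=1.0, n=200, sweeps=60):
    dt = tf / n
    u = [0.0] * n                      # initial control guess
    for _ in range(sweeps):
        # forward pass: integrate the state with the current control
        x = [x0]
        for k in range(n):
            x.append(x[k] + dt * u[k])
        # backward pass: integrate the adjoint from λ(tf) = 0
        lam = [0.0] * (n + 1)
        for k in range(n - 1, -1, -1):
            lam[k] = lam[k + 1] + dt * 2.0 * x[k + 1]
        # stationarity update from (8.27)
        u = [-lam[k] / 2.0 for k in range(n)]
    return x, lam, u

x, lam, u = solve_lq()
# analytic optimum for this problem: x(t) = cosh(tf - t) / cosh(tf)
print(x[-1], 1.0 / math.cosh(1.0))
```

For this example the sweep is a contraction, so the iterates converge; the terminal state approaches 1/cosh(1) up to the Euler discretization error.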
If there are constraints on the control variables of the form
\[
-\alpha_j \le u_j(t) \le \beta_j, \qquad j = 1, 2, \ldots \tag{8.28}
\]
where α_j and β_j are constants giving the minimum and maximum values of the
elements u_j of the vector u, then δu(t) cannot be arbitrary and (8.27) no
longer provides the necessary condition for an extremum. If the control
variable is on the lower constraint, the only variation allowed is δu_j > 0.
Equation (8.25) then requires that
\[
u_j^{*} = -\alpha_j, \quad \text{if} \quad \frac{\partial H}{\partial u_j} > 0 \tag{8.29}
\]
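Rule (8.29), together with the symmetric upper-bound case discussed next, can be collected into a small selection routine. This is a sketch of our own and the function name is illustrative:

```python
def bang_bang_u(dH_du, alpha, beta):
    """Select the constrained control per (8.29) and its upper-bound
    counterpart (illustrative sketch, not from the text).

    dH_du        -- value of ∂H/∂u_j along the candidate trajectory
    alpha, beta  -- bound constants from (8.28): -alpha <= u_j <= beta
    """
    if dH_du > 0:
        return -alpha   # lower bound active: any allowed δu_j > 0 raises I
    if dH_du < 0:
        return beta     # upper bound active: any allowed δu_j < 0 raises I
    return None         # ∂H/∂u_j = 0: (8.29) gives no information

print(bang_bang_u(0.7, alpha=1.0, beta=2.0))   # lower bound is active
```

When H is linear in u_j, the sign of ∂H/∂u_j alone decides which bound is active, which is why such controls switch between the constraints (bang-bang behaviour).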
Similarly, if the control variable u_j is on the upper constraint, the only
variation allowed is δu_j < 0. Equation (8.25) then requires that