8 Optimal Process Control
8.4 Dynamic Programming
8.4.1 Continuous-Time Systems
Around 1950, Bellman and his coworkers developed a new approach to system optimisation – dynamic programming. This method is often used in the analysis and design of automatic control systems.
Consider the vector differential equation

\frac{dx(t)}{dt} = f(x(t), u(t), t) \qquad (8.124)
where f and x are of dimension n and the vector u is of dimension m. The control vector u is constrained to a region

u \in U \qquad (8.125)

where U is a closed subset of the Euclidean space E^m.
The cost function is given as

I = \int_{t_0}^{t_f} F(x(t), u(t), t)\, dt \qquad (8.126)
The terminal time t_f > t_0 can be free or fixed. We will consider the case with fixed terminal time.
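As a numerical illustration (not from the book), the cost functional (8.126) can be evaluated along a trajectory of (8.124) by forward-Euler integration. The scalar system dx/dt = -x + u, the quadratic integrand F = x^2 + u^2, and the constant control u(t) = 1 are purely illustrative assumptions:

```python
def f(x, u, t):
    return -x + u          # illustrative scalar dynamics, cf. (8.124)

def F(x, u, t):
    return x**2 + u**2     # illustrative integrand of the cost, cf. (8.126)

def cost(x0, u_of_t, t0, tf, n=10000):
    """Forward-Euler integration of the state and the running cost."""
    h = (tf - t0) / n
    x, t, I = x0, t0, 0.0
    for _ in range(n):
        u = u_of_t(t)
        I += F(x, u, t) * h    # left Riemann sum of the cost integral
        x += f(x, u, t) * h    # advance the state one Euler step
        t += h
    return x, I

x_tf, I = cost(x0=0.0, u_of_t=lambda t: 1.0, t0=0.0, tf=5.0)
```

For this choice the exact solution is x(t) = 1 - e^{-t}, so the computed cost can be checked against the closed-form value of the integral.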
Let us define a new state variable

x_{n+1}(t) = \int_{t_0}^{t} F(x(\tau), u(\tau), \tau)\, d\tau \qquad (8.127)
The problem of minimisation of I is equivalent to minimisation of the state x_{n+1}(t_f) of the system described by the equations

\frac{d\tilde{x}(t)}{dt} = \tilde{f}(\tilde{x}(t), u(t), t) \qquad (8.128)
where

\tilde{x}^T = \left( x^T, x_{n+1} \right), \qquad \tilde{f}^T = \left( f^T, F \right)

and with initial conditions

\tilde{x}^T(t_0) = \left( x^T(t_0), 0 \right) \qquad (8.129)
This formulation includes time-optimal control as the special case F = 1.
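The state augmentation (8.127)–(8.129) can be sketched numerically: the extra state integrates F, so minimising I becomes minimising the terminal value x_{n+1}(t_f) of the augmented system. The scalar dynamics dx/dt = -x + u, the integrand F = x^2 + u^2, and the control u(t) = 1 are illustrative assumptions, not taken from the book:

```python
def f_tilde(x_tilde, u, t):
    """Augmented right-hand side, cf. (8.128)."""
    x, x_cost = x_tilde
    dx = -x + u                # original dynamics f, cf. (8.124)
    dx_cost = x**2 + u**2      # dx_{n+1}/dt = F, cf. (8.127)
    return (dx, dx_cost)

def simulate(x0, u_of_t, t0, tf, n=10000):
    """Forward-Euler integration of the augmented system."""
    h = (tf - t0) / n
    x_tilde, t = (x0, 0.0), t0   # cost state starts at 0, cf. (8.129)
    for _ in range(n):
        dx, dc = f_tilde(x_tilde, u_of_t(t), t)
        x_tilde = (x_tilde[0] + dx * h, x_tilde[1] + dc * h)
        t += h
    return x_tilde

x_tf, I = simulate(x0=0.0, u_of_t=lambda t: 1.0, t0=0.0, tf=5.0)
# x_{n+1}(t_f) equals the cost integral I of (8.126)
```

With F = 1 the same construction gives x_{n+1}(t_f) = t_f - t_0, which is the time-optimal special case mentioned above.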
If an explicit presence of time t in (8.124) is undesired, it is possible to introduce a new state variable x_0(t) = t and a new differential equation