8.4 Dynamic Programming 331
From
\[
x(N-1) = f[x(N-2), u(N-2)] \qquad (8.173)
\]
it is evident that $I^{*}_{N-2}$ depends only on $x(N-2)$. Continuing with the same line
of reasoning, we can write
\[
I^{*}_{N-j}[x(N-j)] = \min_{u(N-j)} \left\{ F[x(N-j), u(N-j)] + I^{*}_{N-(j-1)}[x(N-j+1)] \right\} \qquad (8.174)
\]
From equation
\[
x(N-j+1) = f[x(N-j), u(N-j)] \qquad (8.175)
\]
it follows that $I^{*}_{N-j}$ depends only on $x(N-j)$.
Equations (8.170), (8.172)--(8.174), \ldots make it possible to calculate recursively the optimal control inputs $u(N-1), u(N-2), \ldots, u(N-j), \ldots$
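The backward recursion (8.174) can be sketched numerically. The following minimal example (an illustration, not taken from the text) tabulates $I^{*}_{N-j}$ on a discretized state grid for an assumed scalar system $x(k+1) = x(k) + u(k)$ with assumed stage cost $F(x,u) = x^2 + u^2$ and zero terminal cost:

```python
# Sketch of the backward recursion (8.174) for a scalar example.
# The dynamics f and stage cost F below are illustrative assumptions,
# not the system of the text.

def backward_dp(x_grid, u_grid, N):
    """Tabulate I*_{N-j}(x) for j = 1, ..., N on a state grid."""
    f = lambda x, u: x + u            # assumed dynamics x(k+1) = f(x, u)
    F = lambda x, u: x**2 + u**2      # assumed stage cost

    def lookup(x, values):
        # nearest-grid-point approximation of I*(x)
        i = min(range(len(x_grid)), key=lambda k: abs(x_grid[k] - x))
        return values[i]

    I = [0.0 for _ in x_grid]         # j = 0: I*_N = 0 (no terminal cost)
    policy = []
    for j in range(1, N + 1):
        I_new, u_best = [], []
        for x in x_grid:
            # minimization over u(N-j) exactly as in (8.174)
            cost, u = min((F(x, u) + lookup(f(x, u), I), u) for u in u_grid)
            I_new.append(cost)
            u_best.append(u)
        I = I_new
        policy.append(u_best)
    return I, policy
```

Note that each stage minimizes over the current input only, because by (8.175) the cost-to-go $I^{*}_{N-(j-1)}$ already summarizes all later decisions.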
We now give the discrete equivalent of the minimum principle without details; it can be derived analogously to its continuous-time counterpart.
Consider the system (8.165) with the initial state x(0) and the cost function (8.166). The Hamiltonian of the system is defined as
\[
H(k) = F[x(k), u(k)] + \lambda^{T}(k+1) f[x(k), u(k)] \qquad (8.176)
\]
For $\lambda(k)$ it holds that
\[
\lambda(k) = \frac{\partial H(k)}{\partial x(k)}, \qquad k = 0, 1, \ldots, N-1 \qquad (8.177)
\]
\[
\lambda(N) = \frac{\partial G_{1}}{\partial x(N)} \qquad (8.178)
\]
The necessary condition for a minimum of (8.166) is
\[
\frac{\partial H(k)}{\partial u(k)} = 0, \qquad k = 0, 1, \ldots, N-1 \qquad (8.179)
\]
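Conditions (8.176)--(8.179) can be made concrete on a scalar linear-quadratic example. Assuming (these choices are illustrative, not from the text) $f = a\,x + b\,u$, $F = \tfrac{1}{2}(q\,x^2 + r\,u^2)$, and $G_1 = \tfrac{1}{2} s\,x(N)^2$, condition (8.179) gives $r\,u(k) + b\,\lambda(k+1) = 0$, (8.177) gives $\lambda(k) = q\,x(k) + a\,\lambda(k+1)$, and (8.178) gives $\lambda(N) = s\,x(N)$. Substituting the trial form $\lambda(k) = p(k)\,x(k)$ turns this two-point boundary-value problem into a backward recursion for $p(k)$:

```python
# Discrete minimum principle (8.176)-(8.179) applied to an assumed scalar
# LQ problem: f = a*x + b*u, F = (q*x^2 + r*u^2)/2, G1 = s*x(N)^2/2.
#
# (8.179): r*u(k) + b*lam(k+1) = 0   =>  u(k) = -(b/r) * lam(k+1)
# (8.177): lam(k) = q*x(k) + a*lam(k+1)
# (8.178): lam(N) = s*x(N)
# With lam(k) = p(k)*x(k), eliminating u and x(k+1) yields the scalar
# discrete Riccati recursion computed below.

def lq_minimum_principle(a, b, q, r, s, N):
    p = [0.0] * (N + 1)
    p[N] = s                                   # boundary condition (8.178)
    gains = [0.0] * N                          # u(k) = -gains[k] * x(k)
    for k in range(N - 1, -1, -1):
        pn = p[k + 1]
        p[k] = q + a * a * pn * r / (r + b * b * pn)
        gains[k] = a * b * pn / (r + b * b * pn)
    return p, gains

def simulate(a, b, gains, x0):
    """Apply the resulting state feedback forward in time."""
    x, traj = x0, [x0]
    for K in gains:
        x = a * x + b * (-K * x)
        traj.append(x)
    return traj
```

This is the familiar Riccati-based state feedback; the point of the sketch is that it falls out of (8.177)--(8.179) alone, which connects the minimum principle to the feedback solution developed in the next subsection.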
8.4.3 Optimal Feedback
Consider the system described by the equation
\[
x(k+1) = A x(k) + B u(k) \qquad (8.180)
\]
with initial condition x(0). We would like to find u(0), u(1),...,u(N − 1)
such that the cost function