\begin{align*}
&= \sum_{k=k_0}^{k_f} \bigl\{ -\Delta V(x(k),k) + L(x(k),u(k),k) + V(F(x(k),u(k),k),k) - V(x(k),k) \bigr\} \\
&= \sum_{k=k_0}^{k_f} \bigl\{ -\Delta V(x(k),k) + H(x(k),u(k),V(x(k),k),k) \bigr\} \\
&\geq \sum_{k=k_0}^{k_f} \bigl\{ -\Delta V(x(k),k) + H(x(k),u^{*}(k),V(x(k),k),k) \bigr\} \\
&= \sum_{k=k_0}^{k_f} -\Delta V(x(k),k) \\
&= V(x_0, k_0) \\
&= J^{*}(x_0, k_0),
\end{align*}
which completes the proof.
Note that (14.5) and (14.6) imply
\[
0 = \min_{u(k)\in U} H(x(k), u(k), V(x(k), k), k), \tag{14.10}
\]
which is known as the Bellman equation. It follows from Theorems 14.1 and
14.2 that the Bellman equation provides necessary and sufficient conditions
for characterizing the optimal control for time-varying nonlinear dynamical
systems over a finite time interval or the infinite horizon. In the infinite-horizon,
time-invariant case, $V(\cdot)$ is independent of $k$, so that the Bellman
equation reduces to the time-invariant equation
\[
0 = \min_{u\in U} H(x, u, V(x)), \qquad x \in \mathcal{D}. \tag{14.11}
\]
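To make (14.11) concrete, here is a minimal value-iteration sketch for an assumed scalar example; the dynamics $F$, stage cost $L$, grids, and tolerance are our illustrative choices, not the text's. It iterates $V \leftarrow \min_u [L(x,u) + V(F(x,u))]$, whose fixed point satisfies $0 = \min_u H(x,u,V(x))$ with $H(x,u,V(x)) = L(x,u) + V(F(x,u)) - V(x)$, the form of the Hamiltonian used in the proof above.

```python
import numpy as np

# Illustrative value iteration for the time-invariant Bellman equation
# (14.11): 0 = min_u H(x, u, V(x)), where, as in the proof above,
# H(x, u, V(x)) = L(x, u) + V(F(x, u)) - V(x).
# The dynamics F, stage cost L, grids, and tolerance are assumptions
# made for this sketch; they are not taken from the text.

xs = np.linspace(-1.0, 1.0, 201)             # grid over the state space D
us = np.linspace(-1.0, 1.0, 101)             # grid over the control set U
X, U = np.meshgrid(xs, us, indexing="ij")

F = lambda x, u: 0.8 * x + 0.5 * u           # assumed dynamics, F(0, 0) = 0
L = lambda x, u: x**2 + u**2                 # assumed nonnegative cost, L(0, 0) = 0

V = np.zeros_like(xs)
for _ in range(1000):
    # Q(x, u) = L(x, u) + V(F(x, u)); states leaving the grid are clipped
    # to the boundary by np.interp, a crude but adequate approximation here.
    Q = L(X, U) + np.interp(F(X, U), xs, V)
    V_new = Q.min(axis=1)                    # V_{i+1}(x) = min_u Q(x, u)
    if np.max(np.abs(V_new - V)) < 1e-10:    # stop at a numerical fixed point
        break
    V = V_new

# At the fixed point, min_u H(x, u, V(x)) = min_u Q(x, u) - V(x) vanishes
# up to grid error, i.e., the discretized form of (14.11) holds.
print("max |min_u H| on grid:", np.max(np.abs(Q.min(axis=1) - V)))
```

For this particular choice of $F$ and $L$ the iteration converges; in general, convergence of undiscounted value iteration requires additional assumptions on the dynamics and cost.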
14.3 Stability Analysis of Discrete-Time Nonlinear Systems
In this section, we present sufficient conditions for stability of nonlinear
discrete-time systems. In particular, we consider the problem of evaluating a
nonlinear-nonquadratic performance functional depending upon a nonlinear
discrete-time difference equation. As in the continuous-time case, it is shown
that the cost functional can be evaluated in closed form as long as the cost
functional is related in a specific way to an underlying Lyapunov function
that guarantees stability. Here, we restrict our attention to time-invariant,
infinite-horizon systems. For the following result, let $\mathcal{D} \subset \mathbb{R}^n$ be an open
set, assume $0 \in \mathcal{D}$, let $L : \mathcal{D} \to \mathbb{R}$, and let $f : \mathcal{D} \to \mathcal{D}$ be such that $f(0) = 0$.
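Before stating the result, a standard linear-quadratic illustration (our addition; the specific system and cost are assumed for illustration, not taken from the text) shows the mechanism at work. Consider $x(k+1) = Ax(k)$ with $L(x) = x^{\mathrm{T}}Rx$, $R > 0$, and suppose $P > 0$ solves the discrete Lyapunov equation
\[
A^{\mathrm{T}} P A - P + R = 0.
\]
Then $V(x) = x^{\mathrm{T}}Px$ satisfies
\[
\Delta V(x(k)) = x(k)^{\mathrm{T}}\bigl(A^{\mathrm{T}} P A - P\bigr)x(k) = -L(x(k)) < 0, \qquad x(k) \neq 0,
\]
so $V$ is a Lyapunov function guaranteeing asymptotic stability, and, since $x(k) \to 0$ as $k \to \infty$,
\[
J(x_0) = \sum_{k=0}^{\infty} L(x(k)) = -\sum_{k=0}^{\infty} \Delta V(x(k)) = V(x_0) = x_0^{\mathrm{T}} P x_0,
\]
that is, the cost is evaluated in closed form through the Lyapunov function. The following result generalizes this structure to nonlinear-nonquadratic problems.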
Theorem 14.3. Consider the nonlinear discrete-time dynamical sys-