$x(t) \equiv 0$ to (8.51) is globally asymptotically stable and
\[
J(x_0, \phi(x(\cdot))) = x_0^{\mathrm{T}} P x_0, \qquad x_0 \in \mathbb{R}^n. \tag{8.54}
\]
Furthermore,
\[
J(x_0, \phi(x(\cdot))) = \min_{u(\cdot) \in \mathcal{S}(x_0)} J(x_0, u(\cdot)), \tag{8.55}
\]
where $\mathcal{S}(x_0)$ is the set of regulation controllers for (8.51) and $x_0 \in \mathbb{R}^n$.
Proof. The result is a direct consequence of Theorem 8.2 with $F(x,u) = Ax + Bu$, $L(x,u) = x^{\mathrm{T}} R_1 x + u^{\mathrm{T}} R_2 u$, $V(x) = x^{\mathrm{T}} P x$, $\mathcal{D} = \mathbb{R}^n$, and $U = \mathbb{R}^m$.
Specifically, conditions (8.41) and (8.42) are trivially satisfied. Next, it follows from (8.53) that $H(x, \phi(x)) = 0$, and hence, $V'(x)F(x, \phi(x)) < 0$ for all $x \in \mathbb{R}^n$ and $x \neq 0$. Thus, $H(x,u) = H(x,u) - H(x,\phi(x)) = [u - \phi(x)]^{\mathrm{T}} R_2 [u - \phi(x)] \geq 0$, so that all the conditions of Theorem 8.2 are satisfied.
Finally, since $V(\cdot)$ is radially unbounded, the zero solution $x(t) \equiv 0$ to (8.51) with $u(t) = \phi(x(t)) = -R_2^{-1} B^{\mathrm{T}} P x(t)$ is globally asymptotically stable.
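To make Corollary 8.2 concrete, the following minimal numerical sketch constructs the optimal gain $\phi(x) = -R_2^{-1}B^{\mathrm{T}}Px$ for a hypothetical double-integrator plant and checks that the simulated closed-loop cost matches $x_0^{\mathrm{T}} P x_0$ as in (8.54). All numerical data (the matrices $A$, $B$, the weights, the initial condition, and the integration horizon) and the use of SciPy's `solve_continuous_are` are illustrative assumptions, not part of the text.

```python
# Illustrative sketch (not from the text): LQR gain from the algebraic Riccati
# equation and a numerical check of J(x0, phi(x(.))) = x0' P x0, cf. (8.54).
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Hypothetical double-integrator data: xdot = A x + B u, weights R1 > 0, R2 > 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
R1 = np.eye(2)
R2 = np.array([[1.0]])

# P solves 0 = A'P + PA + R1 - P B R2^{-1} B' P; feedback phi(x) = -R2^{-1} B' P x.
P = solve_continuous_are(A, B, R1, R2)
K = np.linalg.solve(R2, B.T @ P)          # phi(x) = -K x

# Simulate the closed loop from x0 while accumulating L(x,u) = x'R1 x + u'R2 u.
x0 = np.array([1.0, -1.0])

def closed_loop(t, z):
    x = z[:2]
    u = -K @ x
    cost_rate = x @ R1 @ x + u @ R2 @ u
    return np.concatenate((A @ x + B @ u, [cost_rate]))

sol = solve_ivp(closed_loop, (0.0, 30.0), np.concatenate((x0, [0.0])),
                rtol=1e-10, atol=1e-12)
J_simulated = sol.y[2, -1]
J_predicted = x0 @ P @ x0
print(J_simulated, J_predicted)           # agree to integration tolerance
```

Here the Riccati solver simply plays the role of (8.53); any method that produces the positive-definite solution $P$ would serve equally well.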
The optimal feedback control law $\phi(x)$ in Corollary 8.2 is derived using the properties of $H(x,u)$ as defined in Theorem 8.2. Specifically, since
\[
H(x,u) = x^{\mathrm{T}} R_1 x + u^{\mathrm{T}} R_2 u + x^{\mathrm{T}}(A^{\mathrm{T}}P + PA)x + 2x^{\mathrm{T}}PBu,
\]
it follows that $\frac{\partial^2 H}{\partial u^2} = R_2 > 0$. Now, $\frac{\partial H}{\partial u} = 2R_2 u + 2B^{\mathrm{T}}Px = 0$ gives the unique global minimum of $H(x,u)$. Hence, since $\phi(x)$ minimizes $H(x,u)$, it follows that $\phi(x)$ satisfies $\frac{\partial H}{\partial u} = 0$ or, equivalently, $\phi(x) = -R_2^{-1}B^{\mathrm{T}}Px$.
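The same conclusion follows by completing the square, which restates the identity already used in the proof of Corollary 8.2:
\[
H(x,u) = H(x,\phi(x)) + [u - \phi(x)]^{\mathrm{T}} R_2\, [u - \phi(x)].
\]
Since (8.53) gives $H(x,\phi(x)) = 0$, this confirms both that $H(x,u) \geq 0$ for all $u \in \mathbb{R}^m$ and that $u = \phi(x) = -R_2^{-1}B^{\mathrm{T}}Px$ is the unique minimizer of $H(x,u)$.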
8.4 Inverse Optimal Control for Nonlinear Affine Systems
In this section, we specialize Theorem 8.2 to affine systems. Specifically, we construct nonlinear feedback controllers using an optimal control framework that minimizes a nonlinear-nonquadratic performance criterion. This is accomplished by choosing the controller such that the time derivative of the Lyapunov function is negative along the closed-loop system trajectories, while providing sufficient conditions for the existence of asymptotically stabilizing solutions to the Hamilton-Jacobi-Bellman equation. Thus, these results provide a family of globally stabilizing controllers parameterized by the cost functional that is minimized.
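As a schematic guide to what follows (the precise statements appear later in the section), the affine setting typically pairs dynamics and performance integrand of the form
\[
\dot{x}(t) = f(x(t)) + G(x(t))u(t), \qquad L(x,u) = L_1(x) + u^{\mathrm{T}} R_2(x)\, u,
\]
for which the resulting feedback takes the form $\phi(x) = -\tfrac{1}{2}R_2^{-1}(x)G^{\mathrm{T}}(x)V'^{\mathrm{T}}(x)$, paralleling the linear-quadratic gain $-R_2^{-1}B^{\mathrm{T}}Px$ obtained above; the symbols $f$, $G$, $L_1$, and $R_2(x)$ here anticipate the data introduced below rather than restate it.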
The controllers obtained in this section are predicated on an inverse optimal control problem [10, 127, 135, 186, 217, 218, 227, 317, 320, 321, 449]. In particular, to avoid the complexity in solving the steady-state Hamilton-Jacobi-Bellman equation, we do not attempt to minimize a given cost functional, but rather we parameterize a family of stabilizing controllers that minimize some derived cost functional that provides flexibility in specifying the control law. The performance integrand is shown to explicitly depend on the nonlinear system dynamics, the Lyapunov function of the