can be reduced. When, on the other hand, the output tracking error is small, the network removes RBFs in order to avoid a redundant structure. If the design parameter $e_{\max}$ is too large, the network may stop adding RBFs prematurely or even never adjust its structure at all. Thus, $e_{\max}$ should be at least smaller than $|e(t_0)|$. However, if $e_{\max}$ is too small, the network may keep adding and removing RBFs all the time and cannot approach a steady structure even though the output tracking error is already within the acceptable bound. In the worst case, the network will try to add RBFs forever. This, of course, leads to an unnecessarily large network size and, at the same time, an undesirably high computational cost. An appropriate $e_{\max}$ may be chosen by trial and error through numerical simulations; a rough sketch of such an error-driven add/remove rule is given after these remarks.
2. The advantage of the raised-cosine RBF over the Gaussian RBF is the compact support of the raised-cosine RBF. The number of terms in (11.22) grows rapidly with both the number of grid nodes $M_i$ in each coordinate and the dimensionality $n$ of the input space. For the GRBF network, all the terms are nonzero due to the unbounded support, even though most of them are quite small. Thus, a lot of computation is required to evaluate the network's output, which is impractical for real-time applications, especially for higher-order systems. For the RCRBF network, however, most of the terms in (11.22) are zero and therefore do not have to be evaluated. Specifically, for a given input $x$, the number of nonzero raised-cosine RBFs in each coordinate is either one or two. Consequently, the number of nonzero terms in (11.22) is at most $2^n$. This feature allows one to speed up the output evaluation of the network in comparison with a direct computation of (11.22) for the GRBF network. To illustrate the above discussion, suppose $M_i = 10$ and $n = 4$. Then the GRBF network requires $10^4$ function evaluations, whereas the RCRBF network requires only $2^4$, which is almost three orders of magnitude fewer than required by the GRBF network. For a larger value of $n$ and a finer grid, the savings in computation are even more dramatic. The same savings are also achieved in the network's training: when the weights of the RCRBF network are updated, only $2^n$ weights have to be updated for each output neuron, whereas $n \times M$ weights have to be updated for the GRBF network. Similar observations were also reported in [11.60, p. 6]. The evaluation counts are illustrated in the second sketch after these remarks.
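As a rough illustration of the trade-off discussed in remark 1, the following sketch adjusts only the network size from the tracking-error magnitude; the lower threshold `e_min`, the cap `max_rbfs`, and the one-at-a-time bookkeeping are assumptions for illustration, not the chapter's actual growing and pruning rules.

```python
# Sketch of an error-driven growing/pruning rule.  The threshold names
# e_max / e_min and the bookkeeping below are illustrative assumptions,
# not the chapter's exact structure-adaptation algorithm.

def adapt_structure_size(num_rbfs, abs_error, e_max, e_min, max_rbfs=200):
    """Return the new number of RBFs given the current tracking-error magnitude."""
    if abs_error > e_max and num_rbfs < max_rbfs:
        return num_rbfs + 1   # error too large: add an RBF
    if abs_error < e_min and num_rbfs > 1:
        return num_rbfs - 1   # error small: prune a redundant RBF
    return num_rbfs           # otherwise keep the structure unchanged
```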
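To make the counting argument in remark 2 concrete, the following sketch (function names are illustrative, not from the chapter) compares the number of basis-function evaluations needed per output for the two networks on a grid with $M_i$ nodes per coordinate.

```python
# Rough count of basis-function evaluations per network output (a sketch;
# the grid mirrors the M_i = 10, n = 4 example in the text).

def grbf_terms(grid_sizes):
    # Gaussian RBFs have unbounded support: every term in the expansion
    # is nonzero and must be evaluated.
    total = 1
    for m in grid_sizes:
        total *= m
    return total

def rcrbf_terms(grid_sizes):
    # Raised-cosine RBFs have compact support: at most two basis functions
    # per coordinate are nonzero, so at most 2^n terms must be evaluated.
    return 2 ** len(grid_sizes)

grid = [10, 10, 10, 10]          # M_i = 10 nodes per coordinate, n = 4
print(grbf_terms(grid))          # 10000 evaluations for the GRBF network
print(rcrbf_terms(grid))         # 16 evaluations for the RCRBF network
```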
11.4 State Feedback Controller Development
The direct adaptive robust state feedback controller presented in this chapter has the form
$$
u = u_{a,v} + u_{s,v} = \frac{1}{\hat{g}_v(x)}\left(-\hat{f}_v(x) + y_d^{(n)} - k^{\top} e\right) + u_{s,v}\,, \tag{11.24}
$$
where $\hat{f}_v(x) = \omega_{f,v}^{\top}\,\xi_v(x)$, $\hat{g}_v(x) = \omega_{g,v}^{\top}\,\xi_v(x)$, and $u_{s,v}$ is the robustifying component to be described later.
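A minimal sketch of the certainty-equivalence part of (11.24) is given below; the array shapes, the guard `g_min` that keeps the estimated gain away from zero, and the placeholder for $u_{s,v}$ are assumptions for illustration only.

```python
import numpy as np

def control_law(xi_v, omega_f, omega_g, yd_n, k, e, u_sv=0.0, g_min=1e-3):
    """Sketch of (11.24): u = (1/g_hat)*(-f_hat + y_d^(n) - k^T e) + u_sv.

    xi_v    : regressor vector xi_v(x) of the active RBF network
    omega_f : weights of the f-approximator (f_hat = omega_f^T xi_v)
    omega_g : weights of the g-approximator (g_hat = omega_g^T xi_v)
    yd_n    : n-th derivative of the desired output y_d
    k       : feedback gain vector
    e       : tracking error vector
    u_sv    : robustifying component (defined later in the chapter)
    g_min   : assumed lower bound keeping g_hat away from zero
    """
    f_hat = omega_f @ xi_v
    g_hat = omega_g @ xi_v
    # The lower bound in (11.28) on the g-weights keeps g_hat positive;
    # the explicit clip here is only a numerical safeguard in this sketch.
    g_hat = max(g_hat, g_min)
    u_av = (-f_hat + yd_n - k @ e) / g_hat
    return u_av + u_sv
```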
To proceed, let $\Omega_{e_0}$ denote the compact set including all the possible initial tracking errors and let
$$
c_{e_0} = \max_{e \in \Omega_{e_0}} \tfrac{1}{2}\, e^{\top} P_m e\,, \tag{11.25}
$$
where $P_m$ is the positive-definite solution to the continuous Lyapunov matrix equation $A_m^{\top} P_m + P_m A_m = -2Q_m$ for $Q_m = Q_m^{\top} > 0$. Choose $c_e > c_{e_0}$ and let
$$
\Omega_e = \left\{ e : \tfrac{1}{2}\, e^{\top} P_m e \le c_e \right\}. \tag{11.26}
$$
Then the compact set $\Omega_x$ is defined as
$$
\Omega_x = \left\{ x : x = e + x_d,\ e \in \Omega_e,\ x_d \in \Omega_{x_d} \right\},
$$
over which the unknown functions $f(x)$ and $g(x)$ are approximated.
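The quantities $P_m$, $c_{e_0}$, and the membership test in (11.26) can be evaluated numerically; the sketch below uses SciPy's continuous Lyapunov solver, and the matrices $A_m$, $Q_m$, and the sampled initial errors shown are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable error-dynamics matrix A_m and weight Q_m (assumed values).
A_m = np.array([[0.0, 1.0],
                [-2.0, -3.0]])
Q_m = np.eye(2)

# Solve A_m^T P_m + P_m A_m = -2 Q_m for the positive-definite P_m.
# solve_continuous_lyapunov(a, q) solves a @ x + x @ a^T = q,
# so pass a = A_m^T and q = -2 Q_m.
P_m = solve_continuous_lyapunov(A_m.T, -2.0 * Q_m)

def lyap_level(e):
    """Value (1/2) e^T P_m e used in (11.25) and (11.26)."""
    return 0.5 * e @ P_m @ e

# c_{e0}: maximum of the Lyapunov level over a (sampled) set of initial errors.
initial_errors = [np.array([0.5, -0.2]), np.array([-0.3, 0.4])]
c_e0 = max(lyap_level(e) for e in initial_errors)
c_e = 1.1 * c_e0          # choose c_e > c_{e0}

def in_Omega_e(e):
    """Membership test for Omega_e in (11.26)."""
    return lyap_level(e) <= c_e
```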
For practical implementation, $\omega_{f,v}$ and $\omega_{g,v}$ are constrained to reside inside compact sets $\Omega_{f,v}$ and $\Omega_{g,v}$, respectively, defined as
$$
\Omega_{f,v} = \left\{ \omega_{f,v} : \underline{\omega}_f \le \omega_{fj,v} \le \overline{\omega}_f,\ 1 \le j \le M_v \right\} \tag{11.27}
$$
and
$$
\Omega_{g,v} = \left\{ \omega_{g,v} : 0 < \underline{\omega}_g \le \omega_{gj,v} \le \overline{\omega}_g,\ 1 \le j \le M_v \right\}, \tag{11.28}
$$
where $\underline{\omega}_f$, $\overline{\omega}_f$, $\underline{\omega}_g$, and $\overline{\omega}_g$ are design parameters.
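One simple way to keep the weight estimates inside the compact sets (11.27) and (11.28) is componentwise clipping after each update; the sketch below shows this with illustrative bound values and is not necessarily the projection mechanism used in the chapter.

```python
import numpy as np

def project_weights(omega, lower, upper):
    """Componentwise projection of a weight vector onto [lower, upper]."""
    return np.clip(omega, lower, upper)

# Assumed design parameters for the bounds in (11.27) and (11.28).
omega_f_lo, omega_f_hi = -10.0, 10.0
omega_g_lo, omega_g_hi = 0.1, 5.0       # lower bound strictly positive

omega_f = project_weights(np.array([12.0, -3.0, 0.5]), omega_f_lo, omega_f_hi)
omega_g = project_weights(np.array([0.0,  2.0, 7.0]), omega_g_lo, omega_g_hi)
```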
Let $\omega_{f,v}^{*}$ and $\omega_{g,v}^{*}$ denote the optimal constant weight vectors corresponding to each admissible network