      • choose a constraint
      • update
      • update
   }
   for_all( c ∈ C ) {
      if( c is dynamic ) • update c
   }
   let grad(i) be the vector defined by equation (4.59)
} // end: while( more iterations are needed )
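To make the structure of this loop concrete, here is a minimal numerical sketch. It is an illustration only, not the implementation discussed in this chapter: the callables assemble_system and update_dynamic_constraints, the dense-matrix representation, and the stopping rule are assumptions made for the sake of the example.

   import numpy as np

   def pseudo_conjugate_gradient(assemble_system, update_dynamic_constraints,
                                 phi, n_iter=100, tol=1e-10):
       # assemble_system(phi) is assumed to return the current matrix A and
       # vector b of the quadratic criterion; it is re-evaluated at every
       # iteration because active dynamic constraints may change the system.
       A, b = assemble_system(phi)
       grad = A @ phi - b                   # gradient of the criterion
       d = -grad                            # first direction: steepest descent
       for _ in range(n_iter):
           if np.linalg.norm(grad) < tol:
               break
           s = -(grad @ d) / (d @ (A @ d))  # exact step for the current system
           phi = phi + s * d
           update_dynamic_constraints(phi)  # constraints honored at each step
           A, b = assemble_system(phi)      # the system may have changed
           new_grad = A @ phi - b
           # Fletcher-Reeves coefficient; the directions are only
           # "pseudo"-conjugate because A is not constant across iterations.
           beta = (new_grad @ new_grad) / (grad @ grad)
           d = -new_grad + beta * d
           grad = new_grad
       return phi

If assemble_system always returns the same pair (A, b), this loop is exactly the classical conjugate-gradient method, in agreement with the first case listed below.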
It is important to note that, contrary to the pure conjugate gradient method, the above algorithm allows the dynamic and hard constraints to be taken into account at each step of the iterative process.
As a consequence:

• if there is no active dynamic and/or hard constraint other than those corresponding to $C_L$, then the above algorithm reduces to the classical conjugate gradient method, and

• if there are active dynamic and/or hard constraints, then the above algorithm is equivalent to the classical DSI algorithm based on the local form of the DSI equation.
For these reasons, we propose to call the above algorithm a "pseudo" Conjugate-Gradient method; it is derived from a method initially proposed by Fletcher and Reeves [81] for minimizing non-quadratic objective functions.
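For reference, the Fletcher and Reeves scheme computes each new direction from the current gradient and the previous direction as
\[
d^{(i+1)} = -\,\mathrm{grad}^{(i+1)} + \beta^{(i)}\, d^{(i)},
\qquad
\beta^{(i)} = \frac{\|\mathrm{grad}^{(i+1)}\|^2}{\|\mathrm{grad}^{(i)}\|^2}.
\]
Because $\beta^{(i)}$ involves only gradient norms, this update remains well defined when the linear system changes between iterations, which is precisely what the "pseudo" variant exploits. (The indexing convention used here is generic and not tied to the chapter's numbered equations.)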
Computing the optimal value of s
It can be observed that the above Pseudo-Conjugate-Gradient method requires computing the optimal value of $s$ at each iteration.
Unfortunately, if there are active dynamic constraints, the matrix $[W]$ varies at each iteration and the pseudo-conjugate directions cannot rigorously honor the conjugacy constraint (4.58): as a consequence, equation (4.57) cannot be used to determine the optimal value of $s$. In this section, we will show how to compute this optimal value as a function of the terms $g^\nu(\alpha)$, $G^\nu(\alpha|\varphi)$, $\gamma^\nu(\alpha)$, and $\Gamma^\nu(\alpha|\varphi)$ defined by equations (4.47) and (4.48).
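As a point of reference, for a fixed quadratic criterion $R(\varphi) = \frac{1}{2}\,\varphi^t [A]\, \varphi - b^t \varphi$, exact line search along a direction $d$ yields the closed form (the symbols $[A]$ and $b$ are generic stand-ins, not the chapter's notation)
\[
s^{\star} = \arg\min_{s}\, R(\varphi + s\,d)
          = -\,\frac{\mathrm{grad}(\varphi)^t\, d}{d^t\,[A]\,d}.
\]
When active dynamic constraints make the matrix vary from one iteration to the next, the denominator $d^t [A]\, d$ is no longer a fixed quantity, which is why the optimal value of $s$ must instead be expressed through the local coefficients recalled above.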
For this purpose, let us consider the direction $d$ as a vector of $\mathbb{R}^{n'}$ partitioned in a similar way to the partition of $\varphi$ defined by equation (4.30):
For the sake of simplicity, the $(n \cdot |L|)$ components of $d_L$ are assumed to be equal to zero, and we introduce the vector $x(s)$ of $\mathbb{R}^{n'}$ defined by
equal to zero and we introduce the vector x(s) of lRn' denned by