let grad1 ← [W_II] · φ^[0] − ψ_I
let d1 ← −grad1 and γ ← ||grad1||²
while( more iterations are needed ) {
   • compute the optimal value s
}
// end : while( more iterations are needed )
This method is very efficient. Furthermore, it is possible to prove that it converges in, at most, (n · |I|) iterations.
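To make the structure of the loop concrete, here is a minimal self-contained sketch of the classical conjugate-gradient iteration for a generic symmetric positive-definite system [W]·x = b. The matrix, right-hand side, and function names below are illustrative stand-ins, not the book's DSI quantities.

```python
# Minimal conjugate-gradient sketch for a symmetric positive-definite
# system [W] x = b (illustrative names, not the book's DSI matrices).

def matvec(W, x):
    # Matrix-vector product [W] x.
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(W, b, x0, tol=1e-12):
    n = len(b)
    x = list(x0)
    grad = [wi - bi for wi, bi in zip(matvec(W, x), b)]  # grad = [W] x - b
    d = [-g for g in grad]                               # initial direction
    gamma = dot(grad, grad)                              # gamma = ||grad||^2
    for _ in range(n):          # at most n iterations in exact arithmetic
        if gamma < tol:
            break
        Wd = matvec(W, d)
        s = gamma / dot(d, Wd)  # optimal step along the direction d
        x = [xi + s * di for xi, di in zip(x, d)]
        grad = [gi + s * wdi for gi, wdi in zip(grad, Wd)]
        gamma_new = dot(grad, grad)
        # Next direction, conjugate to all the previous ones:
        d = [-gi + (gamma_new / gamma) * di for gi, di in zip(grad, d)]
        gamma = gamma_new
    return x

W = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(W, b, [0.0, 0.0, 0.0])
```

The step s minimizes the quadratic objective exactly along d, which is what "compute the optimal value s" stands for in the box above; the at-most-N iteration bound for an N-unknown system is what gives the (n · |I|) bound quoted in the text.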
However, there are two major drawbacks:
1. Because ψ_I is used only in the initialization phase, it is impossible to change ψ_I between two iterations, implying that dynamic constraints cannot be taken into account.
2. The method does not allow hard equality or inequality constraints to be taken into account.
For these reasons, the next section proposes a "pseudo" Conjugate-Gradient method that also converges very rapidly, yet still offers the possibility of taking hard and dynamic constraints into account.
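As a rough illustration of the idea of interleaving constraint enforcement with descent (only an illustration under assumed box constraints, not the book's equation (4.59) method), the following sketch re-enforces hypothetical hard bounds lo ≤ x ≤ hi by projecting the current solution back onto the feasible set after every gradient step.

```python
# Illustrative projected-gradient sketch: hard inequality constraints
# (here an assumed box lo <= x <= hi) are re-enforced after every step.
# This is NOT the book's Pseudo-Conjugate-Gradient algorithm, only a
# simple analogue of honoring hard constraints inside the iteration.

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def project(x, lo, hi):
    # Push x back inside the feasible box.
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(W, b, x0, lo, hi, steps=500, lr=0.1):
    x = project(list(x0), lo, hi)
    for _ in range(steps):
        grad = [wi - bi for wi, bi in zip(matvec(W, x), b)]  # [W] x - b
        x = [xi - lr * gi for xi, gi in zip(x, grad)]        # descent step
        x = project(x, lo, hi)     # hard constraints honored at every step
    return x

W = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
lo = [0.0, 0.0, 0.0]
hi = [0.2, 1.0, 2.0]
x = projected_gradient(W, b, [0.0, 0.0, 0.0], lo, hi)
```

Because the projection is applied inside the loop, the bounds could even change between iterations, which is exactly the kind of dynamic behavior that the classical conjugate-gradient initialization rules out.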
The Pseudo-Conjugate-Gradient method
Taking into account equation (4.45), we can observe that, for all α ∈ Ω and all ν ∈ {1, ..., n}, the components {grad1^ν(α)} of the vector grad1 in the conjugate gradient method presented above are such that

This suggests modifying the conjugate gradient algorithm as follows:
//— DSI algorithm using Pseudo-Conjugate-Gradient
let I be the set of nodes where φ(α) is unknown
let φ ← φ^[0] be a given initial approximated solution
let grad1 be the vector defined by equation (4.59)
let d1 ← −grad1 and γ ← ||grad1||²
while( more iterations are needed ) {
   • determine the optimal value s such that
   while( Λ( C= , C> | φ ) is not empty ) {