Thus, together with the expansion of the performance up to second order in Y and w, one obtains a stochastic linear quadratic problem, which we have discussed in Sect. 7.3.2. In particular, one obtains the control law (7.92), which exhibits the classical linear feedback relation.
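For orientation, such a feedback law has the schematic structure sketched below; the concrete matrices are fixed by (7.92) and the Riccati equation of Sect. 7.3.2, so the notation used here is purely illustrative:

$$ w(t) = -R^{-1}(t)\,B^{\mathrm{T}}(t)\,P(t)\,Y(t) \;, $$

where $P(t)$ would denote the solution of the associated matrix Riccati equation and $R(t)$ the weight of the control in the quadratic performance.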
However, the application of this theory to real problems leads to some new difficulties. The first problem concerns the stochastic sources which drive the system under control. It is often impossible to determine the coupling functions $d_k(t)$ which connect the system dynamics with the noise processes.
In addition, it cannot be guaranteed that a real system is described exclusively at the Markov level by pure diffusion processes related to several realizations of the Wiener process. In principle, the stochastic terms may also represent various jump processes or combined diffusion-jump processes.¹
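To make the structure of such noise terms explicit, the linear dynamics considered here may be written schematically as follows; the coupling functions $d_k(t)$, the jump amplitude $g(t,\gamma)$ and the random measure $N(\mathrm{d}t,\mathrm{d}\gamma)$ are illustrative and are not taken from (8.3):

$$ \mathrm{d}Y(t) = A(t)Y(t)\,\mathrm{d}t + B(t)w(t)\,\mathrm{d}t + \sum_{k} d_k(t)\,\mathrm{d}W_k(t) + \int_{\Gamma} g(t,\gamma)\,N(\mathrm{d}t,\mathrm{d}\gamma) \;, $$

where the $W_k(t)$ are independent Wiener processes and $N(\mathrm{d}t,\mathrm{d}\gamma)$ is a Poisson random measure generating the jump contributions; omitting the last term recovers the pure diffusion case.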
Since the majority of physical processes in complex systems are driven by a sufficiently large number of different external noise sources, the noise terms can be estimated within the framework of limit distributions. This will be done in the following parts of this chapter.
The second problem concerns the observability of a system. This means that we have complete information about the stochastic dynamics of the system, given by the sum of the noise terms and the matrices $A(t)$ and $B(t)$, but we are not able to measure the state $X(t)$ or, equivalently, the difference $Y(t) = X(t) - X^{*}(t)$. Instead, we have only the reduced information given by the observable output

$$ Z(t) = C(t)X(t) + \eta(t) \;, \qquad (8.6) $$

where the output $Z(t)$ is a vector of $p$ components, usually with $p < N$, $C(t)$ is a $p \times N$ matrix, and $\eta(t)$ represents the $p$-component noise process modelling the observation error. The problem is then how the reduced information $Z(t)$ and all previous observations $Z(\tau)$ with $\tau < t$ can be used for the control of the system at the current time $t$. Such so-called filter problems will be considered in the two subsequent chapters.
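A minimal numerical sketch of this situation is given below: the full state is propagated, but only the reduced output (8.6) is available. The discretization, the matrices and the noise levels are arbitrary illustrative choices and are not taken from the text.

import numpy as np

# Minimal sketch of a partially observed linear system, cf. (8.6):
# the full state X(t) is propagated, but only Z(t) = C(t)X(t) + eta(t)
# is available for control.  All matrices and noise levels below are
# illustrative choices, not values taken from the text.

rng = np.random.default_rng(0)

dt = 0.01                                   # Euler time step
n_steps = 1000
A = np.array([[0.0, 1.0], [-1.0, -0.2]])    # system matrix, N = 2
B = np.array([[0.0], [1.0]])                # control matrix
C = np.array([[1.0, 0.0]])                  # observation matrix, p = 1 < N
d = np.array([0.0, 0.3])                    # coupling to the Wiener process
sigma_obs = 0.05                            # level of the observation error

X = np.zeros(2)                             # state X(t)
w = np.zeros(1)                             # control w(t), zero in this open-loop sketch

observations = []
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt))      # Wiener increment
    X = X + (A @ X + B @ w) * dt + d * dW   # Euler-Maruyama step of the dynamics
    Z = C @ X + rng.normal(scale=sigma_obs, size=1)   # reduced noisy output
    observations.append(Z.copy())

print("observations:", len(observations), "observed components:", observations[0].size)

A filter would now have to reconstruct an estimate of $X(t)$ from the record of such outputs $Z(\tau)$, $\tau \le t$, which is precisely the task of the filter problems mentioned above.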
Finally, it may be that the dynamics of the system are completely unknown. The only available information is the historical set of observations and control functions, while the system itself behaves like a black box. In this case it is necessary to estimate the most probable evolution of the system under control.
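As a purely illustrative example of such an estimation, and not the method developed in this book, one may fit a linear discrete-time model to the recorded data by ordinary least squares, assuming for simplicity that the recorded observations coincide with the states:

import numpy as np

# Illustrative sketch only: fit a linear discrete-time model
#   X[k+1] ≈ F X[k] + G w[k]
# to a historical record of observations and controls by ordinary
# least squares.  This elementary black-box estimate is not the
# scheme developed in the following chapters.

def fit_linear_model(X_hist, w_hist):
    """X_hist: (T+1, N) recorded states, w_hist: (T, m) recorded controls."""
    regressors = np.hstack([X_hist[:-1], w_hist])     # shape (T, N + m)
    targets = X_hist[1:]                              # shape (T, N)
    theta, *_ = np.linalg.lstsq(regressors, targets, rcond=None)
    N = X_hist.shape[1]
    F = theta[:N].T                                   # estimated transition matrix
    G = theta[N:].T                                   # estimated control matrix
    return F, G

The estimated pair (F, G) can then be used to predict the most probable one-step evolution X[k+1] ≈ F X[k] + G w[k] of the black-box system.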
¹ We remark that the stochastic Ito differential equation is related to a Fokker–Planck equation, which is only a special case of the differential Chapman–Kolmogorov equation (6.91). The latter equation is valid for all Markov processes and also covers jump processes. Conversely, we may conclude that (8.3) can also be generalized to combined diffusion-jump processes.
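For reference, the differential Chapman–Kolmogorov equation has, in a common notation (e.g. that of Gardiner), the schematic form given below; the drift $A_i$, the diffusion matrix $B_{ij}$ and the jump kernel $W$ are generic quantities and not those of (6.91):

$$ \frac{\partial p(x,t)}{\partial t} = -\sum_i \frac{\partial}{\partial x_i}\bigl[A_i(x,t)\,p(x,t)\bigr] + \frac{1}{2}\sum_{i,j}\frac{\partial^2}{\partial x_i \partial x_j}\bigl[B_{ij}(x,t)\,p(x,t)\bigr] + \int \bigl[W(x\mid x',t)\,p(x',t) - W(x'\mid x,t)\,p(x,t)\bigr]\,\mathrm{d}x' \;; $$

dropping the integral term yields the Fokker–Planck equation as the pure diffusion special case.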