Elements of applied functional analysis 279
At a prescribed value of u one can complete the square with respect to the function ϕ:
(ϕ − K_{δϕ} L* K_ε^{-1} u, K_{δϕ}^{-1} (ϕ − K_{δϕ} L* K_ε^{-1} u)),
and for a fixed value of ϕ the completed square with respect to u is written as
(u − Lϕ, K_ε^{-1} (u − Lϕ)).
It should be noted that the operator K_{δϕ} is the difference of two positive operators, K_ϕ > 0 and K_ϕ L* (L K_ϕ L* + K_ε)^{-1} L K_ϕ > 0, and nevertheless K_{δϕ} > 0.
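The positivity of K_{δϕ} can be checked numerically in a finite-dimensional analogue, replacing the operators by matrices. The following is a minimal sketch; the dimensions and the random matrices standing in for K_ϕ, K_ε and L are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(k):
    # random symmetric positive definite matrix of size k x k
    A = rng.standard_normal((k, k))
    return A @ A.T + k * np.eye(k)

n, m = 5, 3
K_phi = spd(n)                    # covariance of the field phi (illustrative)
K_eps = spd(m)                    # covariance of the noise eps (illustrative)
L = rng.standard_normal((m, n))   # forward operator (illustrative)

# K_dphi = K_phi - K_phi L* (L K_phi L* + K_eps)^{-1} L K_phi
S = L @ K_phi @ L.T + K_eps
K_dphi = K_phi - K_phi @ L.T @ np.linalg.solve(S, L @ K_phi)

# the difference of the two positive operators is itself positive definite
eigs = np.linalg.eigvalsh((K_dphi + K_dphi.T) / 2)
print(eigs.min() > 0)
```

Although K_{δϕ} is formed by subtracting one positive operator from another, its smallest eigenvalue stays strictly positive, in agreement with the statement above.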
We now consider statistical criteria for constructing an estimate of the field ϕ in the model u = Lϕ + ε, using the expressions obtained above for the conditional probability densities p(ϕ|u) and p(u|ϕ).
1. Suppose that only the distribution of the random component (noise) ε is known, i.e. the density p_ε is given. This distribution can be obtained a priori by statistical analysis of the measurement errors of the experiment. It is then reasonable to take as a solution a function ϕ̂ such that the difference u − Lϕ̂ is maximally "similar" to the noise ε, i.e. the function u − Lϕ̂ = ε̂ should maximize the probability density p_ε:
ϕ̂ = arg sup p_ε(u − Lϕ) = arg sup p(u|ϕ) = arg sup ln p_ε(u − Lϕ). (9.25)
This approach to estimation is called the maximum likelihood method (Rao, 1972), and the function ln p(u|ϕ) is called the likelihood function. A monotone nondecreasing function (the logarithm) is applied because many empirical distributions are of exponential type; in particular, this holds for the Gaussian distribution, which, in accordance with the central limit theorem (Rao, 1972), is the most widely used approximation of distributions. A solution of the extremum problem (9.25) can be obtained by one of the numerical methods. Here we single out the Gaussian distribution: in this case the likelihood function is a quadratic functional, and Euler's equation can be written in explicit form:
ϕ̂ = arg inf (u − Lϕ, K_ε^{-1} (u − Lϕ)),
(L* K_ε^{-1} L) ϕ = L* K_ε^{-1} u. (9.26)
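In a finite-dimensional analogue, equation (9.26) is the generalized least squares normal equation and can be solved directly. A minimal sketch follows; the dimensions and matrices are hypothetical, and the data are taken noise-free so that the estimator recovers the true field exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# small discrete analogue of u = L phi + eps (all sizes illustrative)
n, m = 4, 6
phi_true = rng.standard_normal(n)
L = rng.standard_normal((m, n))
K_eps = np.diag(rng.uniform(0.5, 2.0, m))   # noise covariance
u = L @ phi_true                            # noise-free data for the check

# maximum likelihood / generalized least squares:
# (L* K_eps^{-1} L) phi = L* K_eps^{-1} u
W = np.linalg.inv(K_eps)
phi_hat = np.linalg.solve(L.T @ W @ L, L.T @ W @ u)

print(np.allclose(phi_hat, phi_true))
```

In an actual ill-posed problem the operator L* K_ε^{-1} L is badly conditioned, and, as noted below, this direct solve is unstable with respect to perturbations of u.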
The analysis of equation (9.26) shows that, in the case of the Gaussian distribution, the maximum likelihood method is identical to the method of least squares and has the same disadvantage, the instability of the solution, since the operator L* K_ε^{-1} L is compact. In the case of the Laplace distribution, the maximum likelihood method leads to an extremum problem in the L_1 norm:
ϕ̂ = arg inf ||u − Lϕ||_{L_1},
which is equivalent to the method of least moduli:
ϕ̂ = arg inf |u − Lϕ|.
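The least-moduli problem has no closed-form normal equation, but in a finite-dimensional analogue it can be approximated by iteratively reweighted least squares, where each step solves a weighted version of (9.26) with weights 1/|residual|. A sketch under hypothetical data with a few gross outliers, which the L_1 criterion, unlike least squares, effectively ignores:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical small problem: minimize ||u - L phi||_1
n, m = 3, 40
phi_true = np.array([1.0, -2.0, 0.5])
L = rng.standard_normal((m, n))
u = L @ phi_true
u[::10] += 5.0                               # a few gross outliers

phi = np.linalg.lstsq(L, u, rcond=None)[0]   # least-squares starting point
for _ in range(50):
    r = u - L @ phi
    w = 1.0 / np.maximum(np.abs(r), 1e-8)    # weights ~ 1/|residual|
    Lw = L * w[:, None]                      # rows of L scaled by weights
    phi = np.linalg.solve(L.T @ Lw, Lw.T @ u)

print(phi)        # close to phi_true despite the outliers
```

The reweighting drives the fit toward the rows with small residuals, so the outlying measurements receive vanishing influence, which is exactly the robustness property of the least-moduli criterion.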