7 Solutions of Overdetermined Systems
Equations (7.12) and (7.13) are the two main equations that are applied during the
combinatorial optimization. The dispersion matrix (variance-covariance matrix) Σ
is unknown and is obtained by means of estimators of type MINQUE, BIQUUE or
BIQE as in [160, 338, 339, 340, 341, 342, 353]. In Definition 7.1, we used the term
‘special’. This implies the case where the matrix A has full rank and A′Σ⁻¹A
is invertible, i.e., regular. In the event that A′Σ⁻¹A is not regular (i.e., A has a
rank deficiency), the rank deficiency can be overcome by procedures such as those
presented by [180, pp. 107–165], [244, pp. 181–197] and [94, 176, 177, 299, 302, 329],
among others.
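As a small illustration of the rank-deficient case, one standard remedy is the minimum-norm least-squares solution obtained via the Moore-Penrose pseudoinverse of the singular normal matrix. The following Python sketch uses a hypothetical design matrix with linearly dependent columns and unit weights; the matrices and observations are our own illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical rank-deficient design matrix A: column 2 = 2 * column 1,
# so the normal matrix A' Sigma^{-1} A is singular and cannot be inverted.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
Sigma = np.eye(3)                 # assumed unit-weight dispersion matrix
y = np.array([1.0, 2.0, 3.0])     # illustrative observations

W = np.linalg.inv(Sigma)          # weight matrix Sigma^{-1}
N = A.T @ W @ A                   # normal matrix A' Sigma^{-1} A

# N has rank 1, so we use the Moore-Penrose pseudoinverse instead of
# np.linalg.inv; this yields the minimum-norm least-squares estimate.
print(np.linalg.matrix_rank(N))   # 1
xi_hat = np.linalg.pinv(N) @ A.T @ W @ y
print(xi_hat)                     # [0.2 0.4]
```

Because the observations here lie exactly in the column space of A, the minimum-norm estimate reproduces y exactly; among all parameter vectors that do so, the pseudoinverse picks the one of smallest Euclidean norm.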
Definition 7.3 (Nonlinear Gauss-Markov model). The model
E{y} = y − e = A(ξ), D{y} = Σ, (7.14)
with a real n×1 random vector y ∈ Rⁿ of observations, a real m×1 vector ξ ∈ Rᵐ
of unknown fixed parameters, an n×1 vector e of random errors (with zero mean and
dispersion matrix Σ), A being an injective function from an open domain into
n-dimensional space Rⁿ (m < n), and E the “expectation” operator, is said to be a
nonlinear Gauss-Markov model.
While the solution of the linear Gauss-Markov model by the Best Linear Uniformly
Unbiased Estimator (BLUUE) is straightforward, the solution of the nonlinear
Gauss-Markov model is not, owing to the nonlinearity of the injective function
(or map) A that maps Rᵐ to Rⁿ. The difference between the linear and nonlinear
Gauss-Markov models therefore lies in the injective function A. For the linear
Gauss-Markov model, the injective function A is linear and thus satisfies the
algebraic axiom discussed in Chap. 2, i.e.,
A(αξ₁ + βξ₂) = αA(ξ₁) + βA(ξ₂),  α, β ∈ R,  ξ₁, ξ₂ ∈ Rᵐ. (7.15)
The m-dimensional manifold traced by A(·) for varying values of ξ is flat. For
the nonlinear Gauss-Markov model, on the other hand, A(·) is a nonlinear vector
function that maps Rᵐ to Rⁿ, tracing an m-dimensional manifold that is curved.
The immediate problem that presents itself is that of obtaining a global minimum.
Procedures that are useful for determining global minima and maxima can be
found in [334, pp. 387–448].
In geodesy and geoinformatics, many nonlinear functions are normally assumed
to be only moderately nonlinear, thus permitting linearization by Taylor series
expansion and subsequent application of the linear model (Definition 7.1,
Eqs. 7.12 and 7.13) to estimate the unknown fixed parameters and their
dispersions [244, pp. 155–156]. Whereas this may often hold, the effect of
nonlinearity on the estimated parameters may still be significant. In such
cases, the Gauss-Jacobi combinatorial algorithm presented in Sect. 7-33 can be
used, as we will demonstrate in the chapters ahead.
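As a minimal sketch of the linearization strategy just described, the Python code below repeatedly expands a nonlinear model to first order by Taylor series and solves the resulting linear Gauss-Markov step at each iteration (i.e., a Gauss-Newton iteration). The exponential-decay model, unit weights, noise-free observations, and starting value are all hypothetical choices for illustration, not the book's example.

```python
import numpy as np

# Hypothetical nonlinear model A(xi): an exponential decay a * exp(-b * t).
def A(xi, t):
    a, b = xi
    return a * np.exp(-b * t)

# Jacobian (design matrix) of the first-order Taylor expansion at xi.
def jacobian(xi, t):
    a, b = xi
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

t = np.linspace(0.0, 4.0, 9)
xi_true = np.array([2.0, 0.5])
y = A(xi_true, t)                 # noise-free observations for illustration
Sigma_inv = np.eye(len(t))        # assumed unit weights

xi = np.array([1.0, 1.0])         # illustrative starting value
for _ in range(50):
    J = jacobian(xi, t)
    r = y - A(xi, t)              # reduced (linearized) observations
    # linear Gauss-Markov step: (J' Sigma^{-1} J) dxi = J' Sigma^{-1} r
    dxi = np.linalg.solve(J.T @ Sigma_inv @ J, J.T @ Sigma_inv @ r)
    xi = xi + dxi
    if np.linalg.norm(dxi) < 1e-12:
        break

print(xi)                         # estimated parameters
```

With exact (noise-free) data the iteration recovers the true parameters; with strongly nonlinear models or poor starting values such an iteration may stall in a local minimum, which is precisely the situation the combinatorial approach of Sect. 7-33 addresses.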
7-33 Gauss-Jacobi combinatorial formulation
The C. F. Gauss and C. G. J. Jacobi [236] combinatorial Lemma is stated as
follows: