and (11.23) with the Cramér–Rao lower bounds defined in Section 9.2.2. In order to evaluate these lower bounds, a probability distribution of Y must be made available. Without this knowledge, however, we can still show, in Theorem 11.2, that the least squares technique leads to linear unbiased minimum-variance estimators for $\alpha$ and $\beta$; that is, among all unbiased estimators which are linear in Y, least-square estimators have minimum variance.
Theorem 11.2: let random variable Y be defined by Equation (11.4). Given a sample $(x_1, Y_1), (x_2, Y_2), \ldots, (x_n, Y_n)$ of Y with its associated x values, least-square estimators $\hat{A}$ and $\hat{B}$ given by Equation (11.17) are minimum-variance linear unbiased estimators for $\alpha$ and $\beta$, respectively.
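Before turning to the proof, a minimal computational sketch may help fix ideas: it computes $\hat{A}$ and $\hat{B}$ in the vector-matrix form $(\mathbf{C}^{T}\mathbf{C})^{-1}\mathbf{C}^{T}\mathbf{Y}$ on which the proof below relies. The simulated data, the parameter values, and the use of NumPy are assumptions introduced only for this illustration; they are not part of the text.

    import numpy as np

    # A minimal sketch with assumed simulated data: the simple linear model
    # Y_i = alpha + beta * x_i + E_i, with the least-squares estimators written
    # in the vector-matrix form Q_hat = (C^T C)^{-1} C^T Y used in the proof.
    rng = np.random.default_rng(0)

    alpha_true, beta_true, sigma = 2.0, 0.5, 1.0   # assumed "true" parameters
    x = np.linspace(0.0, 10.0, 30)                 # fixed x values of the sample
    Y = alpha_true + beta_true * x + rng.normal(0.0, sigma, size=x.size)

    # C has a column of ones (for alpha) and a column of x values (for beta).
    C = np.column_stack([np.ones_like(x), x])

    # Least-squares estimators (A_hat, B_hat) = (C^T C)^{-1} C^T Y.
    A_hat, B_hat = np.linalg.solve(C.T @ C, C.T @ Y)
    print(f"A_hat = {A_hat:.3f}, B_hat = {B_hat:.3f}")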
Proof of Theorem 11.2: the proof of this important theorem is sketched below with use of vector–matrix notation.

Consider a linear unbiased estimator of the form

$$\mathbf{Q}^{*} = [(\mathbf{C}^{T}\mathbf{C})^{-1}\mathbf{C}^{T} + \mathbf{G}]\mathbf{Y}. \qquad (11.24)$$

We thus wish to prove that $\mathbf{G} = \mathbf{0}$ if $\mathbf{Q}^{*}$ is to be minimum variance.

The unbiasedness requirement leads to, in view of Equation (11.19),

$$\mathbf{G}\mathbf{C} = \mathbf{0}. \qquad (11.25)$$

Consider now the covariance matrix

$$\mathrm{cov}\{\mathbf{Q}^{*}\} = E\{(\mathbf{Q}^{*} - \mathbf{q})(\mathbf{Q}^{*} - \mathbf{q})^{T}\}. \qquad (11.26)$$

Upon using Equations (11.19), (11.24), and (11.25) and expanding the covariance, we have

$$\mathrm{cov}\{\mathbf{Q}^{*}\} = \sigma^{2}[(\mathbf{C}^{T}\mathbf{C})^{-1} + \mathbf{G}\mathbf{G}^{T}].$$

Now, in order to minimize the variances associated with the components of $\mathbf{Q}^{*}$, we must minimize each diagonal element of $\mathbf{G}\mathbf{G}^{T}$. Since the $ii$th diagonal element of $\mathbf{G}\mathbf{G}^{T}$ is given by

$$(\mathbf{G}\mathbf{G}^{T})_{ii} = \sum_{j=1}^{n} g_{ij}^{2},$$

where $g_{ij}$ is the $ij$th element of $\mathbf{G}$, we must have

$$g_{ij} = 0, \quad \text{for all } i \text{ and } j;$$

and we obtain

$$\mathbf{G} = \mathbf{0}. \qquad (11.27)$$
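As a numerical companion to the proof, the sketch below (again under assumed parameter values and simulated data, not taken from the text) builds a competing linear unbiased estimator $[(\mathbf{C}^{T}\mathbf{C})^{-1}\mathbf{C}^{T} + \mathbf{G}]\mathbf{Y}$ with $\mathbf{G}\mathbf{C} = \mathbf{0}$ but $\mathbf{G} \neq \mathbf{0}$, and compares its component variances with those of the least-squares estimator; the Monte Carlo estimates should closely match $\sigma^{2}[(\mathbf{C}^{T}\mathbf{C})^{-1} + \mathbf{G}\mathbf{G}^{T}]$ and never fall below the least-squares variances.

    import numpy as np

    # A numerical check of Theorem 11.2 under assumed simulated data: any linear
    # unbiased estimator [(C^T C)^{-1} C^T + G] Y with G C = 0 and G != 0 has
    # component variances at least as large as those of the least-squares estimator.
    rng = np.random.default_rng(1)

    alpha, beta, sigma, n = 2.0, 0.5, 1.0, 30      # assumed model parameters
    x = np.linspace(0.0, 10.0, n)
    C = np.column_stack([np.ones(n), x])
    CtC_inv = np.linalg.inv(C.T @ C)
    H_ls = CtC_inv @ C.T                           # least-squares: (C^T C)^{-1} C^T

    # Build a nonzero G with G C = 0 by projecting a random 2 x n matrix
    # onto the left null space of C (the orthogonal complement of col(C)).
    P = np.eye(n) - C @ CtC_inv @ C.T
    G = rng.normal(size=(2, n)) @ P
    assert np.allclose(G @ C, 0.0)                 # unbiasedness condition (11.25)

    H_alt = H_ls + G                               # a competing linear unbiased estimator

    # Monte Carlo: estimate the variances of both estimators over repeated samples.
    reps = 20000
    E = rng.normal(0.0, sigma, size=(reps, n))
    Y = alpha + beta * x + E                       # each row is one sample of Y
    Q_ls = Y @ H_ls.T                              # rows: (A_hat, B_hat)
    Q_alt = Y @ H_alt.T

    print("LS variances  :", Q_ls.var(axis=0))
    print("Alt variances :", Q_alt.var(axis=0))
    print("Theory (LS)   :", sigma**2 * np.diag(CtC_inv))
    print("Theory (alt)  :", sigma**2 * np.diag(CtC_inv + G @ G.T))

Projecting a random matrix onto the left null space of $\mathbf{C}$ is simply one convenient way, chosen for this example, to manufacture a $\mathbf{G}$ satisfying the unbiasedness condition (11.25).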