3.5 EFFICIENCY OF OLS: THE GAUSS-MARKOV THEOREM
In this section, we state and discuss the important Gauss-Markov Theorem, which justifies the use of the OLS method rather than a variety of competing estimators. We know one justification for OLS already: under Assumptions MLR.1 through MLR.4, OLS is unbiased. However, there are many unbiased estimators of the $\beta_j$ under these assumptions (for example, see Problem 3.12). Might there be other unbiased estimators with variances smaller than the OLS estimators?
If we limit the class of competing estimators appropriately, then we can show that
OLS is best within this class. Specifically, we will argue that, under Assumptions
MLR.1 through MLR.5, the OLS estimator $\hat{\beta}_j$ for $\beta_j$ is the best linear unbiased estimator (BLUE). In order to state the theorem, we need to understand each component of the acronym "BLUE." First, we know what an estimator is: it is a rule that can be applied to any sample of data to produce an estimate. We also know what an unbiased estimator is: in the current context, an estimator, say $\tilde{\beta}_j$, of $\beta_j$ is an unbiased estimator of $\beta_j$ if $E(\tilde{\beta}_j) = \beta_j$ for any $\beta_0, \beta_1, \ldots, \beta_k$.
What about the meaning of the term "linear"? In the current context, an estimator $\tilde{\beta}_j$ of $\beta_j$ is linear if, and only if, it can be expressed as a linear function of the data on the dependent variable:

$$\tilde{\beta}_j = \sum_{i=1}^{n} w_{ij}\, y_i, \qquad (3.59)$$

where each $w_{ij}$ can be a function of the sample values of all the independent variables. The OLS estimators are linear, as can be seen from equation (3.22).
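To make the linearity property concrete, here is a minimal numerical sketch (ours, not from the text; it assumes NumPy, and all variable names are illustrative). Using the standard matrix form of OLS, $\hat{\beta} = (X'X)^{-1}X'y$, row $j$ of the matrix $(X'X)^{-1}X'$ contains exactly the weights $w_{ij}$ of equation (3.59), which depend only on the independent variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample: n observations, k = 2 regressors plus an intercept.
n, k = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=n)

# OLS in matrix form: beta_hat = (X'X)^{-1} X'y.
W = np.linalg.solve(X.T @ X, X.T)   # (k+1) x n matrix of weights w_ij
beta_hat = W @ y

# Row j of W holds the weights of equation (3.59): each beta_hat_j is
# the linear combination sum_i w_ij * y_i, and the weights are built
# from the independent variables alone, never from y.
j = 1
print(np.allclose(beta_hat[j], W[j] @ y))                            # True
print(np.allclose(beta_hat, np.linalg.lstsq(X, y, rcond=None)[0]))   # True
```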
Finally, how do we define "best"? For the current theorem, best is defined as smallest variance. Given two unbiased estimators, it is logical to prefer the one with the smaller variance (see Appendix C).
Now, let $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_k$ denote the OLS estimators in the model (3.31) under Assumptions MLR.1 through MLR.5. The Gauss-Markov theorem says that, for any estimator $\tilde{\beta}_j$ which is linear and unbiased, $\mathrm{Var}(\hat{\beta}_j) \le \mathrm{Var}(\tilde{\beta}_j)$, and the inequality is usually strict. In other words, in the class of linear unbiased estimators, OLS has the smallest variance (under the five Gauss-Markov assumptions). Actually, the theorem says more than this. If we want to estimate any linear function of the $\beta_j$, then the corresponding linear combination of the OLS estimators achieves the smallest variance among all linear unbiased estimators. We conclude with a theorem, which is proven in Appendix 3A.
THEOREM 3.4 (GAUSS-MARKOV THEOREM)
Under Assumptions MLR.1 through MLR.5, $\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_k$ are the best linear unbiased estimators (BLUEs) of $\beta_0, \beta_1, \ldots, \beta_k$, respectively.
It is because of this theorem that Assumptions MLR.1 through MLR.5 are known as the
Gauss-Markov assumptions (for cross-sectional analysis).
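The content of the theorem can be seen in a small Monte Carlo sketch (ours, not from the text; it assumes NumPy). In a simple regression satisfying the Gauss-Markov assumptions, it compares the OLS slope with a competing linear unbiased estimator: the slope of the line through the first and last observations. Both estimators are unbiased, but OLS has the smaller sampling variance, as the theorem guarantees:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple regression y = b0 + b1*x + u with homoskedastic, mean-zero
# errors; the x values are held fixed across replications.
b0, b1, n, reps = 1.0, 2.0, 30, 20_000
x = np.linspace(0, 1, n)
xbar = x.mean()

ols = np.empty(reps)
endpoints = np.empty(reps)
for r in range(reps):
    y = b0 + b1 * x + rng.normal(size=n)
    # OLS slope: sum_i (x_i - xbar) y_i / sum_i (x_i - xbar)^2 -- linear in y.
    ols[r] = ((x - xbar) @ y) / ((x - xbar) @ (x - xbar))
    # Competing linear unbiased estimator: slope through the two
    # endpoint observations, (y_n - y_1) / (x_n - x_1).
    endpoints[r] = (y[-1] - y[0]) / (x[-1] - x[0])

print(ols.mean(), endpoints.mean())   # both close to b1 = 2 (unbiased)
print(ols.var(), endpoints.var())     # OLS variance is much smaller
```

The endpoint estimator is linear in the $y_i$ (its weights are $-1/(x_n - x_1)$, $0, \ldots, 0$, $1/(x_n - x_1)$) and unbiased, so it belongs to the class covered by the theorem; it simply uses the data less efficiently than OLS.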