The usual OLS standard errors are in parentheses, ( ), below the corresponding OLS esti-
mate, and the heteroskedasticity-robust standard errors are in brackets, [ ]. The numbers in
brackets are the only new things, since the equation is still estimated by OLS.
Several things are apparent from equation (8.6). First, in this particular application, any
variable that was statistically significant using the usual t statistic is still statistically
significant using the heteroskedasticity-robust t statistic. This is because the two sets of standard
errors are not very different. (The associated p-values will differ slightly because the robust
t statistics are not identical to the usual, nonrobust, t statistics.) The largest relative change
in standard errors is for the coefficient on educ: the usual standard error is .0067, and the
robust standard error is .0074. Still, the robust standard error implies a robust t statistic
above 10.
Equation (8.6) also shows that the robust standard errors can be either larger or smaller
than the usual standard errors. For example, the robust standard error on exper is .0051,
whereas the usual standard error is .0055. We do not know which will be larger ahead of
time. As an empirical matter, the robust standard errors are often found to be larger than
the usual standard errors.
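The mechanics behind the two sets of standard errors can be sketched in a few lines of numpy. The data below are simulated (they are not the data behind equation (8.6)): the error variance is made to grow with the regressor, and both the usual OLS standard errors and the White (HC0) heteroskedasticity-robust standard errors are computed, along with the corresponding t statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated regressor and deliberately heteroskedastic errors:
# the error standard deviation grows with x.
x = rng.uniform(0, 10, n)
u = rng.normal(0, 0.5 + 0.3 * x, n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
k = X.shape[1]

# Usual OLS covariance estimate: sigma^2 * (X'X)^{-1}
sigma2 = resid @ resid / (n - k)
V_usual = sigma2 * XtX_inv

# White (HC0) robust estimate: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = (X * resid[:, None] ** 2).T @ X
V_robust = XtX_inv @ meat @ XtX_inv

se_usual = np.sqrt(np.diag(V_usual))
se_robust = np.sqrt(np.diag(V_robust))
t_robust = beta / se_robust
print(se_usual, se_robust, t_robust)
```

As in the text, nothing about the coefficient estimates changes; only the standard errors (and hence the t statistics) differ, and either set can be the larger one depending on the pattern of heteroskedasticity.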
Before leaving this example, we must emphasize that we do not know, at this point,
whether heteroskedasticity is even present in the population model underlying equation
(8.6). All we have done is report, along with the usual standard errors, those that are valid
(asymptotically) whether or not heteroskedasticity is present. We can see that no important
conclusions are overturned by using the robust standard errors in this example. This often
happens in applied work, but in other cases the differences between the usual and robust
standard errors are much larger. As an example of where the differences are substantial, see
Problem 8.7.
At this point, you may be asking the following question: If the heteroskedasticity-
robust standard errors are valid more often than the usual OLS standard errors, why do
we bother with the usual standard errors at all? This is a valid question. One reason they
are still used in cross-sectional work is that, if the homoskedasticity assumption holds
and the errors are normally distributed, then the usual t statistics have exact t distribu-
tions, regardless of the sample size (see Chapter 4). The robust standard errors and
robust t statistics are justified only as the sample size becomes large. With small sam-
ple sizes, the robust t statistics can have distributions that are not very close to the t
distribution, which could throw off our inference.
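The small-sample caution above can be illustrated with a short Monte Carlo sketch (simulated data, not from the text). With homoskedastic normal errors and n = 12, the usual t statistic has an exact t distribution and rejects a true null close to the nominal 5% rate, while the HC0 robust t statistic tends to over-reject.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 12, 2000
t_crit = 2.228  # two-sided 5% critical value for t(10), since n - k = 10

rej_usual = rej_robust = 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + rng.normal(size=n)  # true slope on x is zero (H0 is true)
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    e = y - X @ b
    # Usual and HC0 robust variance estimates for the slope
    V_u = (e @ e / (n - 2)) * XtX_inv
    V_r = XtX_inv @ ((X * e[:, None] ** 2).T @ X) @ XtX_inv
    rej_usual += abs(b[1] / np.sqrt(V_u[1, 1])) > t_crit
    rej_robust += abs(b[1] / np.sqrt(V_r[1, 1])) > t_crit

print(rej_usual / reps, rej_robust / reps)
```

The over-rejection of the robust statistic shrinks as n grows, which is the sense in which robust inference is justified only asymptotically.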
With large sample sizes, we can make a case for always reporting only the
heteroskedasticity-robust standard errors in cross-sectional applications, and this prac-
tice is being followed more and more in applied work. It is also common to report both
standard errors, as in equation (8.6), so that a reader can determine whether any con-
clusions are sensitive to the standard error in use.
It is also possible to obtain F and LM statistics that are robust to heteroskedastic-
ity of an unknown, arbitrary form. The heteroskedasticity-robust F statistic (or a
simple transformation of it) is also called a heteroskedasticity-robust Wald statistic. A
general treatment of this statistic is beyond the scope of this text. Nevertheless, since
many statistics packages now compute these routinely, it is useful to know that
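Although a general treatment of the robust Wald statistic is beyond the scope of the text, its mechanics can be sketched. For a joint hypothesis H0: Rβ = 0 with q restrictions, the statistic is W = (Rβ̂)'[R V̂ R']⁻¹(Rβ̂), where V̂ is the heteroskedasticity-robust covariance matrix, and W is compared with a χ²(q) distribution in large samples. A minimal numpy illustration on simulated data (the variables and numbers here are hypothetical, not from equation (8.6)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated design: x2 and x3 have true coefficients of zero.
X = np.column_stack([np.ones(n),
                     rng.normal(size=n),
                     rng.normal(size=n),
                     rng.normal(size=n)])
u = rng.normal(0, 1 + 0.5 * np.abs(X[:, 1]), n)  # heteroskedastic errors
y = 1.0 + 2.0 * X[:, 1] + u

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# HC0 robust covariance matrix
V = XtX_inv @ ((X * resid[:, None] ** 2).T @ X) @ XtX_inv

# H0: beta2 = beta3 = 0, written as R beta = 0 with q = 2 restrictions
R = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.]])
Rb = R @ beta
W = Rb @ np.linalg.inv(R @ V @ R.T) @ Rb  # compare with chi2(2) for large n
print(W)
```

Dividing W by q gives the heteroskedasticity-robust F statistic mentioned above, which is how many statistics packages report it.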