CHAPTER 20 ✦ Serial Correlation
A time-series model will typically describe the path of a variable $y_t$ in terms of contemporaneous (and perhaps lagged) factors $x_t$, disturbances (innovations) $\varepsilon_t$, and its own past, $y_{t-1}, \ldots$. For example,
$$y_t = \beta_1 + \beta_2 x_t + \beta_3 y_{t-1} + \varepsilon_t.$$
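A model of this form is easy to generate by recursion. The sketch below simulates one realization of the process; the parameter values, the innovation variance, and the process for $x_t$ are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Simulate T draws from y_t = b1 + b2*x_t + b3*y_{t-1} + eps_t.
# All numerical values here are illustrative assumptions.
rng = np.random.default_rng(0)
T = 200
b1, b2, b3 = 1.0, 0.5, 0.8
x = rng.normal(size=T)               # an assumed exogenous regressor
eps = rng.normal(size=T)             # innovations, iid N(0, 1)
y = np.empty(T)
y[0] = b1 / (1 - b3)                 # start near the unconditional mean
for t in range(1, T):
    y[t] = b1 + b2 * x[t] + b3 * y[t - 1] + eps[t]
print(y[:5])
```

Each run of this recursion produces one realization of the sequence, which is the sense in which the observed history is a single draw from the process.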
The time series is a single occurrence of a random event. For example, the quarterly series on real output in the United States from 1950 to 2000 that we examined in Example 20.1 is a single realization of a process, $\mathrm{GDP}_t$. The entire history over this period constitutes a realization of the process. At least in economics, the process could not be repeated. There is no counterpart to repeated sampling in a cross section or to replication of an experiment involving a time-series process in physics or engineering. Nonetheless, were circumstances different at the end of World War II, the observed history could have been different. In principle, a completely different realization of the entire series might have occurred. The sequence of observations, $\{y_t\}_{t=-\infty}^{\infty}$, is a time-series process, which is characterized by its time ordering and the systematic correlation between observations in the sequence. The signature characteristic of a time-series process is that, empirically, the data-generating mechanism produces exactly one realization of the sequence. Statistical results based on sampling characteristics concern not random sampling from a population, but distributions of statistics constructed from sets of observations taken from this realization in a time window, $t = 1, \ldots, T$. Asymptotic distribution theory in this context concerns the behavior of statistics constructed from an increasingly long window in this sequence.
The properties of $y_t$ as a random variable in a cross section are straightforward and are conveniently summarized in a statement about its mean and variance or the probability distribution generating $y_t$. The statement is less obvious here. It is common to assume that innovations are generated independently from one period to the next, with the familiar assumptions
$$E[\varepsilon_t] = 0,$$
$$\operatorname{Var}[\varepsilon_t] = \sigma_\varepsilon^2,$$
and
$$\operatorname{Cov}[\varepsilon_t, \varepsilon_s] = 0 \quad \text{for } t \neq s.$$
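These three assumptions can be checked empirically on a simulated innovation series: the sample mean should be near zero, the sample variance near $\sigma_\varepsilon^2$, and the sample autocovariance at any nonzero lag near zero. The choice $\sigma_\varepsilon = 2$ below is an illustrative assumption.

```python
import numpy as np

# Empirical check of the white-noise assumptions on a long simulated series.
# sigma = 2.0 is an illustrative choice, so Var[eps_t] should be near 4.
rng = np.random.default_rng(42)
sigma = 2.0
eps = rng.normal(scale=sigma, size=100_000)

mean = eps.mean()                                      # near E[eps_t] = 0
var = eps.var()                                        # near sigma^2 = 4
autocov1 = np.mean((eps[:-1] - mean) * (eps[1:] - mean))  # near Cov[eps_t, eps_{t-1}] = 0
print(mean, var, autocov1)
```

With independent draws all three sample quantities converge to their assumed population values as the series lengthens, which is the sense in which "sample information" characterizes the distribution of $\varepsilon_t$.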
In the current context, this distribution of $\varepsilon_t$ is said to be covariance stationary or weakly stationary. Thus, although the substantive notion of "random sampling" must be extended for the time series $\varepsilon_t$, the mathematical results based on that notion apply here. It can be said, for example, that $\varepsilon_t$ is generated by a time-series process whose mean and variance are not changing over time. As such, by the method we will discuss in this chapter, we could, at least in principle, obtain sample information and use it to characterize the distribution of $\varepsilon_t$. Could the same be said of $y_t$? There is an obvious difference between the series $\varepsilon_t$ and $y_t$; observations on $y_t$ at different points in time are necessarily correlated. Suppose that the $y_t$ series is weakly stationary and that, for the moment, $\beta_2 = 0$. Then we could say that
$$E[y_t] = \beta_1 + \beta_3 E[y_{t-1}] + E[\varepsilon_t] = \beta_1/(1 - \beta_3)$$
and
$$\operatorname{Var}[y_t] = \beta_3^2 \operatorname{Var}[y_{t-1}] + \operatorname{Var}[\varepsilon_t],$$
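These two stationarity results are easy to verify by simulation. With $\beta_2 = 0$ the model is a pure AR(1), and under weak stationarity $\operatorname{Var}[y_t] = \operatorname{Var}[y_{t-1}]$, so the variance recursion solves to $\sigma_\varepsilon^2/(1 - \beta_3^2)$. The sketch below checks both moments against a long simulated series; the parameter values are illustrative assumptions.

```python
import numpy as np

# With b2 = 0 the model reduces to an AR(1): y_t = b1 + b3*y_{t-1} + eps_t.
# Weak stationarity implies E[y] = b1/(1 - b3) and, solving
# Var[y] = b3^2 * Var[y] + sigma^2, that Var[y] = sigma^2/(1 - b3^2).
# The values b1 = 1, b3 = 0.5, sigma = 1 are illustrative assumptions.
rng = np.random.default_rng(1)
b1, b3, sigma = 1.0, 0.5, 1.0
T = 200_000
eps = rng.normal(scale=sigma, size=T)
y = np.empty(T)
y[0] = b1 / (1 - b3)                 # start at the unconditional mean
for t in range(1, T):
    y[t] = b1 + b3 * y[t - 1] + eps[t]

print(y.mean())   # should be near b1/(1 - b3) = 2.0
print(y.var())    # should be near sigma^2/(1 - b3^2) = 4/3
```

Note that both implied moments are finite only when $|\beta_3| < 1$, which is the condition for the recursion above to have a stationary fixed point.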