Markovian Processes
A stochastic process is called Markovian (after the Russian mathematician Andrey Andreyevich Markov) if at any time t the conditional probability of an arbitrary future event given the entire past of the process, i.e., given X(s) for all s ≤ t, equals the conditional probability of that future event given only X(t). Thus, to make a probabilistic statement about the future behaviour of a Markov process, it is no more helpful to know the entire history of the process than it is to know only its current state. The conditional distribution of X(t + h) given X(t) is called the transition probability of the process. If this conditional distribution does not depend on t, then the process is said to have “stationary” transition probabilities. A Markov process with stationary transition probabilities may or may not be a stationary process in the sense of the preceding paragraph.
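To make these definitions concrete, here is a minimal Python sketch of a discrete-time Markov process: a two-state weather chain. The state names and transition matrix are illustrative assumptions, not from the text; because the matrix is the same at every t, the chain has stationary transition probabilities.

import random

# Illustrative transition probabilities P(X(t + 1) = j | X(t) = i);
# the same matrix is used at every t, so they are "stationary".
P = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.2, "sun": 0.8},
}

def next_state(current):
    """Draw X(t + 1) given only X(t); the earlier history is irrelevant."""
    r, cum = random.random(), 0.0
    for state, prob in P[current].items():
        cum += prob
        if r < cum:
            return state
    return state  # guard against floating-point rounding

def sample_path(start, length):
    """Simulate X(0), X(1), . . . , X(length) from the chain."""
    path = [start]
    for _ in range(length):
        path.append(next_state(path[-1]))
    return path

random.seed(0)
print(sample_path("sun", 10))

Note that this chain has stationary transition probabilities, yet a path started from, say, “sun” is not a stationary process: the distribution of X(t) drifts toward the chain’s equilibrium as t grows, which mirrors the distinction drawn in the paragraph above.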
If Y₁, Y₂, . . . are independent random variables and X(t) = Y₁ + ⋯ + Yₜ, then the stochastic process X(t) is a Markov process. Given X(t) = x, the conditional probability that X(t + h) belongs to an interval (a, b) is just the probability that Yₜ₊₁ + ⋯ + Yₜ₊ₕ belongs to the translated interval (a − x, b − x). Because of independence this conditional probability would be the same if the values of X(1), . . . , X(t − 1) were also given. If the Ys are identically distributed as well as independent, this transition probability does not depend on t, and then X(t) is a Markov process with stationary transition probabilities. Sometimes X(t) is called a random walk, but this terminology is not completely standard.
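The translation property can be checked empirically. Below is a minimal Python sketch; the ±1 increment distribution, the interval (a, b) = (3.5, 6.5), and the values t = 10, h = 6, x = 2 are all chosen purely for illustration. Both printed numbers estimate the same transition probability.

import random

def increment():
    # iid increment Y: +1 or -1 with equal probability (an
    # illustrative choice; any iid distribution would do)
    return random.choice((-1, 1))

def walk(n):
    """X(n) = Y₁ + ⋯ + Yₙ for a random walk started at 0."""
    return sum(increment() for _ in range(n))

def compare(t, h, x, a, b, n_paths=100_000):
    """Estimate P(X(t + h) in (a, b) | X(t) = x) two ways."""
    hits = total = 0
    for _ in range(n_paths):
        x_t = walk(t)
        if x_t != x:                   # condition on X(t) = x
            continue
        total += 1
        hits += a < x_t + walk(h) < b  # did X(t + h) land in (a, b)?
    # Translation property: the same probability is just
    # P(Yₜ₊₁ + ⋯ + Yₜ₊ₕ in (a − x, b − x)), independent of t.
    direct = sum(a - x < walk(h) < b - x for _ in range(n_paths)) / n_paths
    print(f"conditional estimate:         {hits / total:.4f}")
    print(f"translated-interval estimate: {direct:.4f}")

random.seed(1)
compare(t=10, h=6, x=2, a=3.5, b=6.5)

Re-running with a different t (but the same h) leaves both estimates essentially unchanged, which is exactly what stationary transition probabilities mean for identically distributed increments.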
Because both the Poisson process and Brownian motion are created from random walks by simple limiting processes, they, too, are Markov processes with stationary transition probabilities. The Ornstein-Uhlenbeck process defined as the solution (19) to the