on (S, S) for every x ∈ S, one may construct a Markov process with
transition probability p(·, ·) for any given initial state X_0 = x_0 (or
initial distribution µ of X_0).
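As a concrete illustration of this construction, the following is a minimal simulation sketch for the special case of a finite state space, where the transition probability reduces to a row-stochastic matrix P and the initial distribution to a probability vector mu; the function name simulate_chain and all parameter names are our own notation, not the book's.

import numpy as np

def simulate_chain(P, mu, n_steps, rng=None):
    """Simulate a Markov chain path X_0, X_1, ..., X_{n_steps}.

    P  : (m, m) row-stochastic transition matrix, P[i, j] = p(i, {j}).
    mu : length-m initial distribution of X_0.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(mu)
    path = np.empty(n_steps + 1, dtype=int)
    path[0] = rng.choice(m, p=mu)              # draw X_0 ~ mu
    for t in range(n_steps):
        # draw X_{t+1} from the row of P indexed by the current state
        path[t + 1] = rng.choice(m, p=P[path[t]])
    return path

# Example: a two-state chain started from the uniform distribution.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])
print(simulate_chain(P, mu, n_steps=10))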
The present chapter is devoted to the general theory of Markov pro-
cesses. A broad understanding of this vast and fascinating field is facili-
tated by an analysis of these processes on countable state spaces (Markov
chains). Irreducible chains, where every state can be reached from ev-
ery other state with positive probability, are either transient or recurrent
(Theorem 7.1). In a transient chain, every state eventually disappears
from view; that is, after some time, the chain never returns to it. A
recurrent chain, on the other hand, returns again and again to any given
state. Among the most celebrated chains are simple random walks on the
integer lattice Z^k, and a famous result of Pólya (Theorem 7.2) says that
these walks are recurrent in dimensions k = 1, 2 and are transient in all
higher dimensions k ≥ 3.
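Pólya's dichotomy can be observed empirically. The following sketch (our illustration, not a proof) estimates the fraction of simulated simple random walks on Z^k that return to the origin within a fixed horizon; since recurrence asserts return with probability 1, a finite-horizon simulation can only suggest the trend, with the returning fraction near 1 for k = 1, 2 and visibly smaller for k = 3.

import numpy as np

def returned_to_origin(k, n_steps, rng):
    """Run one simple random walk on Z^k; True if it revisits the origin."""
    pos = np.zeros(k, dtype=int)
    for _ in range(n_steps):
        axis = rng.integers(k)               # pick one of the k coordinates
        pos[axis] += rng.choice((-1, 1))     # step +1 or -1 along it
        if not pos.any():                    # back at the origin?
            return True
    return False

rng = np.random.default_rng(0)
for k in (1, 2, 3):
    hits = sum(returned_to_origin(k, 10_000, rng) for _ in range(200))
    print(f"k={k}: fraction returning within 10^4 steps: {hits / 200:.2f}")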
Recurrent chains may be further classified as null recurrent or positive
recurrent (Section 2.8). In a null recurrent chain, it takes a long time for
any given state to reappear: the mean return time to every state is
infinite. If the mean return time to some state is finite, then the same
holds for every state, and in this case the chain is called positive
recurrent. In the latter case the Markov process has a unique invariant
distribution, or steady state, to which it converges as time goes on, no
matter where it starts (Theorem 8.1). Simple random walks in dimensions
k = 1, 2 are
null recurrent. A rich class of examples is provided by the birth-and-death
chains, in which the process moves one unit step to the left or to the
right at each time period (Examples 8.1–8.3), with probabilities of moving
in one direction or the other that may depend on the current state. Depending on
these probabilities, the chain may be transient, null recurrent, or positive
recurrent (Propositions 8.3 and 8.4). Two significant applications are to
physics (Ehrenfest model of heat exchange) and economics (a thermostat
model of managerial behavior, due to Radner).
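The Ehrenfest model itself is a birth-and-death chain on {0, 1, ..., d} (the state being the number of balls in one of two urns, with a uniformly chosen ball moved to the other urn at each step), and its invariant distribution can be computed from the detailed-balance relation pi_i * beta_i = pi_{i+1} * delta_{i+1} satisfied by birth-and-death chains. The sketch below (our notation; d = 10 is an arbitrary choice) verifies that this yields the Binomial(d, 1/2) distribution.

import numpy as np
from math import comb

d = 10  # number of balls in the Ehrenfest model (an arbitrary choice)

# Birth-and-death rates: from state i, step right with probability
# beta_i = (d - i)/d, step left with probability delta_i = i/d.
beta = np.array([(d - i) / d for i in range(d + 1)])
delta = np.array([i / d for i in range(d + 1)])

# Detailed balance: pi_{i+1} = pi_i * beta_i / delta_{i+1}.
pi = np.ones(d + 1)
for i in range(d):
    pi[i + 1] = pi[i] * beta[i] / delta[i + 1]
pi /= pi.sum()

# For the Ehrenfest chain the invariant distribution is Binomial(d, 1/2).
binomial = np.array([comb(d, i) for i in range(d + 1)]) / 2 ** d
print(np.allclose(pi, binomial))  # True

Note that the Ehrenfest chain has period 2, so convergence to this invariant distribution holds in the time-averaged (Cesàro) sense rather than along every time step.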
Section 2.9 moves on to general state spaces, where two proofs are given
of an important result on exponential convergence to a unique steady
state under the so-called Doeblin minorization condition (Theorem 9.1).
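In one standard formulation (our notation here; the chapter's precise statement may differ in its details), the condition requires that for some integer m ≥ 1, some δ > 0, and some probability measure ν on (S, S),

\[
  p^{(m)}(x, A) \;\ge\; \delta\, \nu(A) \qquad \text{for all } x \in S,\ A \in \mathcal{S},
\]

and the conclusion is exponential convergence in total variation: writing π for the unique invariant probability,

\[
  \sup_{x \in S} \bigl\| p^{(n)}(x, \cdot) - \pi \bigr\|_{TV} \;\le\; (1 - \delta)^{\lfloor n/m \rfloor}.
\]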
Simple examples that satisfy this condition are irreducible and aperiodic
Markov chains on finite state spaces, and Markov processes with continu-
ous and positive transition probability densities on a compact rectangle in
R^k (k ≥ 1). A third proof of Theorem 9.1 is given in Chapter 3, using
notions of random dynamical systems. Theorem 9.2 provides an extension