$C_m = C_{m \bmod d}$ for all $m = 0, 1, 2, \ldots$. In particular, if the initial state
is in the class $C_j$, then the chain cyclically moves back to $C_j$ in $d$ steps.
Thus if one views $p^d$ as a one-step transition probability matrix on $S$,
then the sets $C_j$ become (disjoint) closed sets ($j = 0, 1, \ldots, d-1$) and
the Markov chain restricted to $C_j$ is irreducible and aperiodic (i.e., has
period one).
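As a small numerical sketch (not from the text, with a transition matrix chosen purely for illustration), consider a 4-state chain with period $d = 2$ and cyclic classes $C_0 = \{0, 1\}$, $C_1 = \{2, 3\}$. Squaring the one-step matrix shows that under $p^2$ each class becomes a closed set on which the chain is irreducible and aperiodic:

```python
# Illustrative 4-state chain with period d = 2 and cyclic classes
# C0 = {0, 1}, C1 = {2, 3}: each step moves C0 -> C1 -> C0.
P = [
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.3, 0.7],
    [0.6, 0.4, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

P2 = mat_mul(P, P)  # two-step transition matrix p^2

# Viewed under p^2, the class C0 is closed (no mass leaks into C1)
# and all transition probabilities within C0 are strictly positive.
for i in (0, 1):
    assert P2[i][2] == 0.0 and P2[i][3] == 0.0   # C0 is closed under p^2
    assert P2[i][0] > 0 and P2[i][1] > 0          # positive within C0
```

The same check with rows 2 and 3 shows $C_1$ is closed under $p^2$ as well.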
2.5 Convergence to Steady States for Markov Processes
on Finite State Spaces
As will be demonstrated in this section, if the state space is finite, a
complete analysis of the limiting behavior of $p^n$, as $n \to \infty$, may be
carried out by elementary methods that also provide rates of convergence
to the unique invariant distribution. Although, later, the asymptotic be-
havior of general Markov chains is analyzed in detail, including the law
of large numbers and the central limit theorem for Markov chains that
admit unique invariant distributions, the methods of the present section
are also suited for applications to certain more general state spaces (e.g.,
closed and bounded intervals).
First, we consider what happens to the n-step transition law if all states
communicate with each other in one step. In what follows, for a finite set
A, (#A) denotes the number of elements in A.
Theorem 5.1 Suppose $S$ is finite and $p_{ij} > 0$ for all $i, j$. Then there
exists a unique probability distribution $\pi = \{\pi_j : j \in S\}$ such that
$$\sum_{i} \pi_i p_{ij} = \pi_j \quad \text{for all } j \in S \tag{5.1}$$
and
$$\bigl|\, p^{(n)}_{ij} - \pi_j \,\bigr| \le \bigl(1 - (\#S)\chi\bigr)^n \quad \text{for all } i, j \in S,\ n \ge 1, \tag{5.2}$$
where $\chi = \min\{p_{ij} : i, j \in S\}$ and $(\#S)$ is the number of elements in $S$.
Also, $\pi_j \ge \chi$ for all $j \in S$ and $\chi \le 1/(\#S)$.
Proof. Let $M^{(n)}_j$, $m^{(n)}_j$ denote the maximum and the minimum, respectively,
of the elements $\{p^{(n)}_{ij} : i \in S\}$ of the $j$th column of $p^n$. Since