The next result says that the conditional distribution of $X^+_n$, given $\{X_0, X_1, \ldots, X_n\}$, depends only on $X_n$ and is in fact $P_{X_n}$, i.e., the distribution of a Markov chain with transition probability matrix $p$ and initial state $X_n$.
Theorem 6.1 The conditional distribution of $X^+_n$, given $\{X_0, X_1, \ldots, X_n\}$, is $P_{X_n}$, i.e., for every (measurable) set $F$ in the infinite product space $S^\infty$, one has
$$P(X^+_n \in F \mid X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = P_{i_n}((X_0, X_1, \ldots) \in F). \tag{6.2}$$
Proof. When $F$ is finite-dimensional, i.e., of the form $F = \{\omega = (j_0, j_1, \ldots) \in S^\infty : (j_0, j_1, \ldots, j_m) \in G\} = \{(X_0, X_1, \ldots, X_m) \in G\}$ for some $G \subset S^{m+1}$, (6.2) is just the Markov property in the form (6.1). Since this is true for all $m$, and since the Kolmogorov sigma-field on the product space $S^\infty$ is generated by the class of all finite-dimensional sets, (6.2) holds (see Complements and Details).
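As a quick numerical illustration (a simulation sketch added here, not part of the text), one can check (6.2) for a small chain: conditioning on the current state $X_n = i$, the next few steps should have the same distribution as the first few steps of a fresh chain started at $i$. The two-state transition matrix below is an arbitrary choice made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary two-state transition matrix p (illustration only, not from the text).
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def run_chain(start, steps):
    """Simulate X_0, ..., X_steps of the chain with transition matrix p."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(2, p=p[path[-1]]))
    return path

n, i, trials = 3, 0, 20000
cond = np.zeros((2, 2))    # empirical law of (X_{n+1}, X_{n+2}) given X_n = i
fresh = np.zeros((2, 2))   # empirical law of (X_1, X_2) under P_i
count = 0
for _ in range(trials):
    path = run_chain(start=rng.choice(2), steps=n + 2)
    if path[n] == i:
        cond[path[n + 1], path[n + 2]] += 1
        count += 1
for _ in range(trials):
    path = run_chain(start=i, steps=2)   # chain restarted at i, as in P_i
    fresh[path[1], path[2]] += 1

print(cond / count)       # conditional distribution given X_n = i
print(fresh / trials)     # distribution under P_i -- approximately equal
```

The two printed matrices agree up to sampling error, which is exactly what (6.2) asserts for two-step (hence finite-dimensional) events.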
To state the second and most important strengthening of the Markov property, we need to define a class of random times $\tau$ such that the Markov property holds, given the past up to these random times.
Definition 6.2 A random variable $\tau$ with values in $\mathbb{Z}_+ \cup \{\infty\} = \{0, 1, 2, \ldots\} \cup \{\infty\}$ is a stopping time if the event $\{\tau = m\}$ is determined by $\{X_0, X_1, \ldots, X_m\}$ for every $m \in \mathbb{Z}_+$.
If $\tau$ is a stopping time and $m' < m$, then the event $\{\tau = m'\}$ is determined by $\{X_0, X_1, \ldots, X_{m'}\}$ and, therefore, by $\{X_0, X_1, \ldots, X_m\}$. Hence an equivalent definition of a stopping time $\tau$ is that $\{\tau \le m\}$ is determined by $\{X_0, X_1, \ldots, X_m\}$ for every $m \in \mathbb{Z}_+$.
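To make this equivalent characterization concrete, here is a small sketch (a hypothetical illustration, assuming the observed path is stored as a plain list of states): the first time a chain enters a set $A$ is a stopping time, because the event $\{\tau \le m\}$ can be decided from $X_0, \ldots, X_m$ alone; the time of the last visit to $A$, by contrast, cannot be decided without looking at the future of the path.

```python
def first_entry_time(path, A):
    """tau = inf{n >= 0 : X_n in A} (defined here from n = 0 for simplicity;
    the hitting time in Example 6.1 below starts from n = 1).
    {tau <= m} is determined by X_0, ..., X_m, so tau is a stopping time."""
    for n, x in enumerate(path):
        if x in A:
            return n
    return float("inf")   # tau = infinity if the path shown never enters A

def stopped_by(path_up_to_m, A):
    """Decide the event {tau <= m} using only X_0, ..., X_m."""
    return any(x in A for x in path_up_to_m)

# Example usage with a hypothetical observed path and A = {2}:
path = [0, 1, 1, 2, 0, 2, 1]
A = {2}
print(first_entry_time(path, A))   # 3
print(stopped_by(path[:4], A))     # True: {tau <= 3} is decided by X_0, ..., X_3
# By contrast, the time of the *last* visit to A (here 5) cannot be determined
# from any initial segment of the path, so it is not a stopping time.
```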
Informally, whether or not the Markov chain stops at a stopping time $\tau$ depends only on $\{X_0, X_1, \ldots, X_\tau\}$, provided $\tau < \infty$.
Example 6.1 Let $A$ be an arbitrary nonempty proper subset of the (countable) state space $S$. The hitting time $\eta_A$ of a chain $\{X_n : n = 0, 1, \ldots\}$ is defined by
$$\eta_A = \inf\{n \ge 1 : X_n \in A\}, \tag{6.3}$$
the infimum being infinity ($\infty$) if the set within $\{\,\}$ above is empty, i.e., if the process never hits $A$ after time 0. This random time takes values in