$\alpha_{n-1} \cdots \alpha_1$, and by the independence of $\tilde{\alpha}$ and $X_0$,
$$(T^{*n}\mu)(A) = \int_{\Gamma^n} \mu(\tilde{\gamma}^{-1}A)\, Q^n(d\gamma) \qquad (A \in \mathcal{S},\ \mu \in \mathcal{P}(S)). \tag{3.7}$$
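To unpack (3.7), one may condition on the composed random map (a sketch of the argument; the display restates the reasoning just given):
$$(T^{*n}\mu)(A) = \operatorname{Prob}(X_n \in A) = \operatorname{Prob}(\tilde{\alpha}\, X_0 \in A) = E\big[\mu\big(\tilde{\alpha}^{-1}A\big)\big] = \int_{\Gamma^n} \mu(\tilde{\gamma}^{-1}A)\, Q^n(d\gamma),$$
where the third equality holds because $\tilde{\alpha}$ and $X_0$ are independent: conditionally on $\tilde{\alpha} = \tilde{\gamma}$, the event $\{X_n \in A\}$ has probability $\mu(\tilde{\gamma}^{-1}A)$.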
Finally, we come to the definition of stability.
Definition 3.1 A Markov process $X_n$ is stable in distribution if there is a unique invariant probability measure $\pi$ such that $X_n(x)$ converges in distribution to $\pi$ irrespective of the initial state $x$, i.e., if $p^{(n)}(x, dy)$ converges weakly to the same probability measure $\pi$ for all $x$. If instead $(1/n)\sum_{m=1}^{n} p^{(m)}(x, dy)$ converges weakly to the same invariant $\pi$ for all $x$, we say that the Markov process is stable in distribution on the average.
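A standard two-state example (not from the text, added to separate the two notions): take
$$S = \{a, b\}, \qquad p(a, \{b\}) = p(b, \{a\}) = 1,$$
so that $p^{(n)}(a, \cdot)$ alternates between $\delta_b$ and $\delta_a$ and does not converge, while
$$\frac{1}{n} \sum_{m=1}^{n} p^{(m)}(a, \cdot) \longrightarrow \pi = \tfrac{1}{2}(\delta_a + \delta_b),$$
the unique invariant probability. This chain is stable in distribution on the average but not stable in distribution.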
3.4 The Role of Uncertainty: Two Examples
Our first example contrasts the steady state of a random dynamical system $(S, \Gamma, Q)$ with the steady states of the deterministic laws in $\Gamma$.
Example 4.1 Let $S = [0, 1]$ and consider the maps $\bar{f}$ and $\bar{\bar{f}}$ on $S$ into $S$ defined by
$$\bar{f}(x) = \frac{x}{2}, \qquad \bar{\bar{f}}(x) = \frac{x}{2} + \frac{1}{2}.$$
Now, if we consider the deterministic dynamical systems $(S, \bar{f})$ and $(S, \bar{\bar{f}})$, then for each system all the trajectories converge to a unique fixed point ($0$ and $1$, respectively), independently of the initial condition. Think of the random dynamical system $(S, \Gamma, Q)$ where $\Gamma = \{\bar{f}, \bar{\bar{f}}\}$ and $Q(\{\bar{f}\}) = p > 0$, $Q(\{\bar{\bar{f}}\}) = 1 - p > 0$. It follows from Theorem 5.1 that, irrespective of the initial $x$, the distribution of $X_n(x)$ converges in the Kolmogorov metric (see (5.3)) to a unique invariant distribution $\pi$, which is nonatomic (i.e., the distribution function of $\pi$ is continuous).
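One can see this merging of distributions numerically (an illustrative sketch, not part of the text; the helper names simulate and kolmogorov_distance are ad hoc) by running many independent copies of the chain from the two extreme initial states and comparing empirical distribution functions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, n, p, n_paths=100_000):
    """Run n_paths independent copies of X_n(x0): at each step apply
    f_bar(x) = x/2 with probability p, else f_dbar(x) = x/2 + 1/2."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n):
        pick_bar = rng.random(n_paths) < p
        x = np.where(pick_bar, x / 2, x / 2 + 0.5)
    return x

def kolmogorov_distance(a, b, grid=np.linspace(0.0, 1.0, 501)):
    """Max gap between the empirical CDFs of samples a and b on a grid."""
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.abs(Fa - Fb).max()

# The laws of X_n(0) and X_n(1) merge as n grows: the limit does not
# depend on the initial state (down to Monte Carlo noise ~ 1/sqrt(n_paths)).
for n in (1, 5, 10, 20):
    d = kolmogorov_distance(simulate(0.0, n, p=0.5), simulate(1.0, n, p=0.5))
    print(f"n = {n:2d}   Kolmogorov distance ~ {d:.4f}")
```

Since both maps contract by a factor of $1/2$, the distance between the two laws shrinks geometrically in $n$.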
Exercise 4.1 If $p = \frac{1}{2}$, then the uniform distribution over $[0, 1]$ is the unique invariant distribution.
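Invariance itself can be checked directly (a sketch of the computation; uniqueness is what Theorem 5.1 supplies): for $\mu$ the uniform distribution and $x \in [0, 1]$,
$$(T^{*}\mu)([0, x]) = \tfrac{1}{2}\,\mu\big(\bar{f}^{-1}[0, x]\big) + \tfrac{1}{2}\,\mu\big(\bar{\bar{f}}^{-1}[0, x]\big) = \tfrac{1}{2}\min(2x, 1) + \tfrac{1}{2}\max(2x - 1, 0) = x,$$
since $\bar{f}^{-1}[0, x] = [0, \min(2x, 1)]$ and $\bar{\bar{f}}^{-1}[0, x] = [0, \max(2x - 1, 0)]$ (empty when $x < \tfrac{1}{2}$); checking the cases $x \le \tfrac{1}{2}$ and $x > \tfrac{1}{2}$ gives $x$ in both, so the uniform distribution is invariant.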
The study of existence of a unique invariant distribution and its stability is relatively simple for those cases in which the transition probability $p(x, dy)$ has a density $p(x, y)$, say, with respect to some reference