294 Random Dynamical Systems
Section 3.7. Theorem 7.2 is in Silvestrov and Stenflo (1998). See also
Stenflo (2001) and Elton (1990). For additional results on iterations of
i.i.d. contractions, including measurability questions, rates of convergence
to equilibrium, and a number of interesting applications, we refer to
Diaconis and Freedman (1999). Random iterates of linear and
distance-diminishing maps have been studied extensively in psychology:
the monograph by Norman (1972) is an introduction to the literature with
a comprehensive list of references. In the context of economics, the
literature on learning has more recently been surveyed by Evans and
Honkapohja (1995, 2001). The “reduced form” model of Honkapohja and
Mitra (2003) is a random dynamical system with a state space S ⊂ ℝ.
They use Theorem 7.3 to prove the existence and stability of an
invariant distribution.
3.9 Supplementary Exercises
(1) On the state space S = {1, 2} consider the maps γ₁, γ₂, γ₃ on
S defined as γ₁(1) = γ₁(2) = 1, γ₂(1) = γ₂(2) = 2, γ₃(1) = 2, and
γ₃(2) = 1. Let {αₙ : n ≥ 1} be i.i.d. random maps on S such that
P(αₙ = γⱼ) = θⱼ (j = 1, 2, 3), θ₁ + θ₂ + θ₃ = 1.
(a) Write down the transition probabilities of the Markov process {Xₙ :
n ≥ 0} generated by iterations of αₙ (n ≥ 1) (for an arbitrary initial state
X₀ independent of {αₙ : n ≥ 1}). [Hint: p₁₂ = θ₂ + θ₃.]
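As a sanity check on part (a), the transition probabilities can be estimated by Monte Carlo: draw the random map αₙ according to θ and record where each state is sent. The helper name `transition_matrix` and the particular θ values are illustrative, not from the text.

```python
import random

# The three maps of Exercise (1): γ1 and γ2 are constant, γ3 swaps states.
GAMMA = {
    1: {1: 1, 2: 1},  # γ1(1) = γ1(2) = 1
    2: {1: 2, 2: 2},  # γ2(1) = γ2(2) = 2
    3: {1: 2, 2: 1},  # γ3(1) = 2, γ3(2) = 1
}

def transition_matrix(theta, n_samples=200_000, seed=0):
    """Estimate p_ij = P(X_{n+1} = j | X_n = i) by sampling the random map."""
    rng = random.Random(seed)
    counts = {(i, j): 0 for i in (1, 2) for j in (1, 2)}
    for i in (1, 2):
        for _ in range(n_samples):
            k = rng.choices((1, 2, 3), weights=theta)[0]  # draw αn
            counts[(i, GAMMA[k][i])] += 1
    return {ij: c / n_samples for ij, c in counts.items()}

theta = (0.5, 0.3, 0.2)  # illustrative θ1, θ2, θ3
p = transition_matrix(theta)
# The hint predicts p12 = θ2 + θ3 = 0.5; likewise p11 = θ1, p21 = θ1 + θ3,
# p22 = θ2, up to Monte Carlo error.
```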
(b) Given arbitrary transition probabilities pᵢⱼ (i, j = 1, 2) satisfying
p₁₁ + p₂₂ ≤ 1 (i.e., p₁₁ ≤ p₂₁), find θ₁, θ₂, θ₃ to construct the
corresponding Markov process. [Hint: p₁₁ = θ₁, p₂₂ = θ₂.]
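The inversion in part (b) is one line of algebra: following the hint, θ₁ = p₁₁ and θ₂ = p₂₂, so θ₃ is the remaining mass, which is nonnegative exactly when p₁₁ + p₂₂ ≤ 1. A small sketch (the function name is ours):

```python
def theta_from_p(p11, p22):
    """Recover (θ1, θ2, θ3) from p11, p22, per the hint θ1 = p11, θ2 = p22."""
    if p11 + p22 > 1:
        raise ValueError("need p11 + p22 <= 1 for this construction")
    theta1, theta2 = p11, p22
    theta3 = 1.0 - theta1 - theta2  # remaining probability mass
    return theta1, theta2, theta3

# Consistency check: p12 = θ2 + θ3 = 1 - p11 and p21 = θ1 + θ3 = 1 - p22,
# so the rows of the transition matrix sum to 1 automatically.
```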
(c) In addition to the maps above, let γ₄ be defined by γ₄(1) = 1,
γ₄(2) = 2. Let {αₙ : n ≥ 1} be i.i.d. with P(αₙ = γⱼ) = θⱼ (j =
1, 2, 3, 4). Given arbitrary transition probabilities pᵢⱼ (i, j = 1, 2), find
θⱼ (j = 1, 2, 3, 4) to construct the corresponding Markov process (by
iteration of αₙ (n ≥ 1)). [Hint: p₁₁ = θ₁ + θ₄, p₂₁ = θ₁ + θ₃.]
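With the identity map γ₄ added, the hint's constraints θ₁ + θ₄ = p₁₁ and θ₁ + θ₃ = p₂₁ leave a one-parameter family of solutions. The sketch below (function name ours) picks the smallest feasible θ₁, namely max(0, p₁₁ + p₂₁ − 1), which keeps all four weights nonnegative for any valid transition matrix:

```python
def theta_from_matrix(p11, p21):
    """One solution of θ1 + θ4 = p11, θ1 + θ3 = p21 with θ's summing to 1."""
    t = max(0.0, p11 + p21 - 1.0)   # smallest feasible θ1
    theta1 = t
    theta4 = p11 - t                 # forces θ1 + θ4 = p11
    theta3 = p21 - t                 # forces θ1 + θ3 = p21
    theta2 = 1.0 - theta1 - theta3 - theta4
    return theta1, theta2, theta3, theta4
```

Any θ₁ in [max(0, p₁₁ + p₂₁ − 1), min(p₁₁, p₂₁)] would do equally well; the choice above is just one convenient endpoint.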
(2) Consider the model in Example 4.2 on the state space S = [−2, 2]
with {εₙ : n ≥ 1} i.i.d. uniform on [−1, 1].
(a) Write down the transition probability density p(x, y), i.e., the
probability density of X₁(x) = f(x) + ε₁. [Hint: Uniform distribution on
[f(x) − 1, f(x) + 1].]
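Part (a) in code: since ε₁ is uniform on [−1, 1], the one-step density is the constant 1/2 on [f(x) − 1, f(x) + 1] and zero elsewhere. Example 4.2's map f is not reproduced in this excerpt, so it is passed in as a parameter; the name `transition_density` and the f used in the comment are ours.

```python
def transition_density(f, x, y):
    """Density p(x, y) of X1(x) = f(x) + eps1, with eps1 ~ Uniform[-1, 1]."""
    lo, hi = f(x) - 1.0, f(x) + 1.0
    return 0.5 if lo <= y <= hi else 0.0

# e.g. with an illustrative map f(x) = -x:
# f(1) = -1, so p(1, y) = 0.5 for y in [-2, 0] and 0 otherwise.
```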
(b) Compute the distribution of X₂(x) = f(X₁(x)) + ε₂. [Hint: Suppose
x ∈ [−2, 0]. Then X₁(x) is uniform on [x, x + 2], and f(X₁(x))