6.4 Analyzing Markov Chains: Sample Paths 247
However, formulating the models is not the only problem. Even with a finished
model in hand, manipulating large models stretches computer algebra systems to
their limits. Precise results from Markov chain models can therefore normally be
obtained only when the models are of relatively modest size.
In biological modeling, Markov chains find widespread application. As a conse-
quence, many methods and computational tools have been developed to solve them.
Many of these methods can be used without the user ever being aware that the
underlying model is a Markov chain. One can distinguish two approaches to
“solving” Markov chain models.
The first approach is to generate so-called “sample paths” of the model, that is,
a particular sequence of random states that is consistent with the specification of
the Markov chain model. Taking again the example of the king who walks through
his palace, such a sample path would be a sequence of rooms, each with a time tag
that specifies at what time the king entered the room. One normally solves such
Markov chain models with computer programs that use a pseudo-random number
generator.
In the case of a discrete time Markov chain, such sample paths are very easy
to generate. The algorithm essentially consists of generating a random number to
decide which state to enter next. Once this is decided, the time counter is incre-
mented by 1, and the process is repeated. For example, assume the system is in state
X_0 at time t = t_1, and the available transitions from X_0 are to X_0, X_1, X_2, X_3
with probabilities 0.1, 0.4, 0.2, 0.3 respectively. To determine the next state, we draw
a random number between 0 and 1; let us call this number n_0. If n_0 ∈ [0, 0.1),
then the system stays in state X_0; if n_0 ∈ [0.1, 0.1 + 0.4), it makes the transition
into X_1; if n_0 ∈ [0.5, 0.5 + 0.2), it transitions into X_2, and so on. Repeating this
over and over generates a single sample path of the discrete time Markov chain
(Table 6.1).
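This selection rule can be sketched in a few lines of code. The following is a
minimal illustration, not an implementation from the text; the transition rows for
X1, X2 and X3 are hypothetical, since the example above only specifies the
transitions out of X0:

```python
import random

def sample_path(transition, start, n_steps, seed=None):
    """Generate one sample path of a discrete time Markov chain.

    transition: dict mapping each state to a list of (next_state, prob)
    pairs whose probabilities sum to 1.
    """
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        n0 = rng.random()            # uniform draw on [0, 1)
        cumulative = 0.0
        for next_state, prob in transition[state]:
            cumulative += prob
            if n0 < cumulative:      # n0 fell into this state's interval
                state = next_state
                break
        path.append(state)
    return path

# Transition probabilities out of X0 as in the example; the other rows
# are hypothetical placeholders.
P = {
    "X0": [("X0", 0.1), ("X1", 0.4), ("X2", 0.2), ("X3", 0.3)],
    "X1": [("X0", 1.0)],
    "X2": [("X0", 1.0)],
    "X3": [("X0", 1.0)],
}
path = sample_path(P, "X0", 10, seed=1)
```

The cumulative sum partitions [0, 1) into intervals of length 0.1, 0.4, 0.2
and 0.3, exactly as in the verbal description above.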
Generating sample paths for continuous time Markov chains is somewhat more
involved and requires more sophisticated algorithms. There are a number of highly
efficient algorithms; the two best known ones are the so-called Gillespie algorithm
and the somewhat more efficient Gibson-Bruck algorithm. Details of these algo-
rithms will be discussed in Chap. 7.
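As a brief preview of the ideas behind these algorithms, a minimal sketch of the
Gillespie “direct” method might look as follows. The state names and rate table
are hypothetical, chosen purely for illustration; the details and justification are
deferred to Chap. 7:

```python
import math
import random

def gillespie_path(rates, start, t_end, seed=None):
    """Sketch of the Gillespie direct method for a continuous time
    Markov chain. rates: dict state -> list of (next_state, rate)."""
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        channels = rates[state]
        total = sum(rate for _, rate in channels)
        if total == 0.0:     # absorbing state: nothing more can happen
            break
        # The waiting time until the next event is exponentially
        # distributed with parameter equal to the total exit rate.
        t += -math.log(1.0 - rng.random()) / total
        if t >= t_end:
            break
        # Choose which transition fires, with probability proportional
        # to its rate.
        n0 = rng.random() * total
        cumulative = 0.0
        for next_state, rate in channels:
            cumulative += rate
            if n0 < cumulative:
                state = next_state
                break
        path.append((t, state))
    return path
```

Unlike the discrete time case, each entry of the path carries a real-valued
time stamp, matching the time-tagged room sequence of the king example.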
Such sample paths are of great value in exploring the possible behaviors of the
system and in obtaining a feeling for the kinds of behavior the model can display. They
are, so to speak, an example of what the system could do. On the downside, by
themselves sample paths do not provide much insight into general properties of the
system. For example, to understand how the system behaves in the mean over time, it
is necessary to generate a number of paths, starting from the same initial conditions.
We can then calculate the ensemble average at each time point, thus generating a
sequence of averages. Given a large enough number of generated sample paths, this
sequence of ensemble averages normally reflects more or less accurately the mean
behavior of the system over time.
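The ensemble-averaging procedure can be sketched as follows. The two-state
chain used here is hypothetical, chosen only so that the example is
self-contained; any sample-path generator could be substituted:

```python
import random

def two_state_path(p_flip, n_steps, rng):
    """One sample path of a hypothetical two-state chain (states 0, 1)
    that flips its state with probability p_flip at each step."""
    state, path = 0, [0]
    for _ in range(n_steps):
        if rng.random() < p_flip:
            state = 1 - state
        path.append(state)
    return path

def ensemble_average(paths):
    """Mean state value at each time point, averaged over all paths
    (all paths must have the same length and initial condition)."""
    n = len(paths)
    return [sum(p[t] for p in paths) / n for t in range(len(paths[0]))]

rng = random.Random(0)
paths = [two_state_path(0.3, 50, rng) for _ in range(1000)]
avg = ensemble_average(paths)
# avg[0] is exactly 0.0 (every path starts in state 0); later entries
# fluctuate around the stationary mean of 0.5.
```

With only a handful of paths the averages are noisy; the sequence of ensemble
averages settles down only as the number of generated paths grows.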
While the generation of sample paths is interesting, important, insightful and,
in many cases, the only choice one has, obtaining general insights from sample
paths requires many of them to be generated. This is not always practical, or even
possible. Moreover, sample paths will only give insight for one specific set