Chapter 3
Markov Chains
3.1 Introduction
We explained in Chapter 1 that in order to study a stochastic system we map its
random output to one or more random variables. In Chapter 2 we studied systems
whose output was mapped to random processes, which are functions of time. In
either case we characterized the system using the expected value, variance,
correlation, and covariance functions. In this chapter we study stochastic systems
that are best described using Markov processes. A Markov process is a random
process in which the value of the random variable at instant n depends only on its
immediate past value at instant n - 1. The way this dependence is defined gives rise
to a family of sample functions, just as in any other random process. In a Markov
process the random variable represents the state of the system at a given instant n.
What constitutes the state depends on the nature of the system under study, as we
shall see in this chapter. A truly rich set of parameters describes a Markov process;
these parameters will be the topic of the next few chapters.
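Formally, writing X_n for the state at instant n (a common notation, used here
only for illustration), the Markov property says that conditioning on the entire
past of the process reduces to conditioning on the most recent state alone:

\[
P[X_n = x_n \mid X_{n-1} = x_{n-1}, X_{n-2} = x_{n-2}, \ldots, X_0 = x_0]
  = P[X_n = x_n \mid X_{n-1} = x_{n-1}]
\]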
The following are examples of Markov processes that arise in many real-life
situations (a short simulation sketch follows the list):
1. telecommunication protocols and hardware systems
2. customer arrivals and departures at banks
3. checkout counters at supermarkets
4. mutation of a virus or DNA molecule
5. random walk such as Brownian motion
6. arrival of cars at an intersection
7. bus rider population during the day, week, month, etc.
8. machine breakdown and repair during use
9. the state of the daily weather
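As promised above, here is a minimal Python sketch (not from the text) that
treats example 9, the daily weather, as a Markov process with two states. The
state labels, transition probabilities, and the helper name simulate are
illustrative assumptions, not quantities given in the book.

import random

# Illustrative two-state weather model; the states and transition
# probabilities below are assumptions made for this sketch.
# State 0 = sunny, state 1 = rainy.
# P[i][j] is the probability of moving from state i to state j
# in one step (one day).
P = [
    [0.8, 0.2],   # sunny -> sunny, sunny -> rainy
    [0.4, 0.6],   # rainy -> sunny, rainy -> rainy
]

def simulate(n, state=0, seed=1):
    """Return one sample function of the process over n instants.

    The next state is drawn using only the current state's row of P,
    which is exactly the Markov property described above.
    """
    rng = random.Random(seed)
    path = [state]
    for _ in range(n - 1):
        state = rng.choices([0, 1], weights=P[state])[0]
        path.append(state)
    return path

print(simulate(10))   # one possible sample path, e.g. [0, 0, 1, ...]

Running simulate with different seeds yields different sample paths; this is
the family of sample functions mentioned above, all generated by one and the
same dependence rule.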
3.2 Markov Chains
If the state space of a Markov process is discrete, the Markov process is called a
Markov chain. In that case the states are labeled by the integers 0, 1, 2, etc. We will