J. M. Akinpelu
Stochastic Processes
A stochastic process is a collection of random variables $X = \{X_t,\ t \in T\}$.
Markov Chains
A stochastic process $X = \{X_n,\ n = 0, 1, 2, \ldots\}$ is called a Markov chain provided that
$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\}.$$
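The defining property — the next state depends only on the current state — can be illustrated with a short simulation sketch (the two-state matrix here is a hypothetical example, not one from the slides):

```python
import random

def simulate_chain(P, i0, n_steps, seed=0):
    """Simulate a Markov chain with transition matrix P (a list of rows,
    each a probability distribution over states), starting in state i0."""
    rng = random.Random(seed)
    path = [i0]
    for _ in range(n_steps):
        i = path[-1]
        # The next state is drawn using only the current state i
        # (the Markov property): row i of P.
        path.append(rng.choices(range(len(P[i])), weights=P[i])[0])
    return path

P = [[0.5, 0.5], [0.3, 0.7]]   # hypothetical two-state chain
print(simulate_chain(P, 0, 10))
```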
Markov Chains
We restrict ourselves to Markov chains whose conditional probabilities
$$P_{ij} = P\{X_{n+1} = j \mid X_n = i\}$$
do not depend on $n$ (time-homogeneous chains).
Markov Matrix
Let P be a square matrix with entries $P_{ij}$ defined for all $i, j \in E$. Then P is a Markov matrix provided that
– for each $i, j \in E$, $P_{ij} \ge 0$
– for each $i \in E$, $\sum_{j \in E} P_{ij} = 1$.
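The two conditions can be checked mechanically; a minimal sketch (the helper name is ours):

```python
def is_markov_matrix(P, tol=1e-9):
    """Return True if P has nonnegative entries and each row sums to 1."""
    return (all(p >= 0 for row in P for p in row)
            and all(abs(sum(row) - 1.0) < tol for row in P))

print(is_markov_matrix([[0.5, 0.5], [0.3, 0.7]]))   # True
print(is_markov_matrix([[0.5, 0.6], [0.3, 0.7]]))   # False: first row sums to 1.1
```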
$$P\{X_9 = j, X_8 = k \mid X_7 = i\} = P\{X_9 = j \mid X_8 = k, X_7 = i\}\,P\{X_8 = k \mid X_7 = i\}$$
$$= P\{X_9 = j \mid X_8 = k\}\,P\{X_8 = k \mid X_7 = i\} = P_{ik} P_{kj}.$$
Chapman-Kolmogorov Equations
$$P\{X_{n+1} = i_1, \ldots, X_{n+m} = i_m \mid X_n = i_0\} = P_{i_0 i_1} P_{i_1 i_2} \cdots P_{i_{m-1} i_m}.$$
Chapman-Kolmogorov Equations
Now note that
$$P\{X_{n+2} = j \mid X_n = i\} = \sum_{k=0}^{\infty} P\{X_{n+2} = j \mid X_{n+1} = k, X_n = i\}\,P\{X_{n+1} = k \mid X_n = i\}$$
$$= \sum_{k=0}^{\infty} P\{X_{n+2} = j \mid X_{n+1} = k\}\,P\{X_{n+1} = k \mid X_n = i\} = \sum_{k=0}^{\infty} P_{ik} P_{kj} = P^2_{ij}.$$
Chapman-Kolmogorov Equations
Example: let
$$P = \begin{pmatrix} \tfrac12 & \tfrac14 & \tfrac14 \\ \tfrac12 & 0 & \tfrac12 \\ \tfrac14 & \tfrac14 & \tfrac12 \end{pmatrix}, \qquad
P^2 = P \cdot P = \begin{pmatrix} \tfrac{7}{16} & \tfrac{3}{16} & \tfrac38 \\ \tfrac38 & \tfrac14 & \tfrac38 \\ \tfrac38 & \tfrac{3}{16} & \tfrac{7}{16} \end{pmatrix}.$$
Hence, $P\{X_{n+2} = 2 \mid X_n = 3\} = P^2(3, 2) = \tfrac{3}{16}$.
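The two-step example can be checked numerically; a sketch using exact fractions (helper name ours):

```python
from fractions import Fraction as F

def mat_mult(A, B):
    """Multiply square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 2), F(0),    F(1, 2)],
     [F(1, 4), F(1, 4), F(1, 2)]]
P2 = mat_mult(P, P)
# States are numbered 1..3, so P^2(3, 2) is entry [2][1].
print(P2[2][1])   # 3/16
```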
Chapman-Kolmogorov Equations
In general, for all $n, m \ge 0$ and $i, j \in E$,
$$P\{X_{n+m} = j \mid X_n = i\} = P^m_{ij}.$$
Chapman-Kolmogorov Equations
Also note that if $\pi^{(0)}$ is the probability distribution of the states at time 0, i.e.,
$$\pi^{(0)}_i = P\{X_0 = i\},$$
then
$$\pi^{(n)} = \pi^{(0)} P^n.$$
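The relation $\pi^{(n)} = \pi^{(0)} P^n$ can be sketched by repeated vector-matrix multiplication (function name ours; the chain is a hypothetical example):

```python
def step_distribution(pi0, P, n):
    """Return pi(n) = pi(0) P^n by applying pi <- pi P, n times."""
    pi = list(pi0)
    for _ in range(n):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P)))
              for j in range(len(P))]
    return pi

P = [[0.5, 0.5], [0.3, 0.7]]                 # hypothetical two-state chain
print(step_distribution([1.0, 0.0], P, 3))   # ≈ [0.38, 0.62]
```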
Classification of States
Fix state j, and let T be the time of the first visit to state j.
– State j is called recurrent if
$$P\{T < \infty \mid X_0 = j\} = 1.$$
– A recurrent state j is called positive if
$$E[T \mid X_0 = j] < \infty.$$
– Otherwise, it is called null.
Classification of States
– A recurrent state j is said to be periodic with period d if $P^n_{jj} = 0$ whenever n is not divisible by d, and d is the largest integer with this property. A state with period 1 is said to be aperiodic.
– Recurrent positive, aperiodic states are called
ergodic. A Markov chain is called ergodic if all
of its states are ergodic.
Classification of States
– State j is called transient if
$$P\{T < \infty \mid X_0 = j\} < 1.$$
– There is then a positive probability of never returning to that state.
Classification of States
– A state j is accessible from state i if there exists $n \ge 0$ such that $P^n_{ij} > 0$.
– A set of states is said to be closed if no state outside it is accessible from any state in it.
– A state forming a closed set by itself is called an absorbing state.
[Figure: a closed set of states, illustrated with states i, j, and k.]
Classification of States
[Figure: classification tree for a state j — recurrent vs. transient; recurrent states divided into positive and null; absorbing vs. non-absorbing.]
Classification of States
State j is
– absorbing if and only if $P_{jj} = 1$.
– recurrent if and only if $\sum_{n=1}^{\infty} P^n_{jj} = \infty$.
– transient if and only if $\sum_{n=1}^{\infty} P^n_{jj} < \infty$.
Classification of States
Theorem 2. In an irreducible Markov chain, either
all states are transient, all states are recurrent null,
or all states are recurrent positive.
Classification of States
Theorem 4. In a Markov chain, the states can
be divided, in a unique manner, into
irreducible sets of recurrent states and a set of
transient states.
Classification of States
Reordering the states so that the irreducible recurrent classes come first, followed by the transient states, the transition matrix takes the block form
$$P = \begin{pmatrix} K_1 & & & & \\ & K_2 & & 0 & \\ & & \ddots & & \\ & 0 & & K_n & \\ L_1 & L_2 & \cdots & L_n & P_T \end{pmatrix},$$
where $K_1, \ldots, K_n$ are the Markov matrices of the irreducible recurrent classes and $P_T$ governs transitions among the transient states.
Classification of States
It is useful to draw a graph with the states as vertices and a directed edge from i to j if P(i, j) > 0.
$$P = \begin{pmatrix} \tfrac12 & 0 & \tfrac12 & 0 & 0 \\ 0 & \tfrac14 & 0 & \tfrac34 & 0 \\ 0 & 0 & \tfrac13 & 0 & \tfrac23 \\ \tfrac14 & \tfrac12 & 0 & \tfrac14 & 0 \\ \tfrac13 & 0 & \tfrac13 & 0 & \tfrac13 \end{pmatrix}$$
[Figure: transition graph on the states 1, 2, 3, 4, 5.]
Classification of States
Note that:
– {1, 3, 5} and {1, 2, 3, 4, 5} are closed.
– {1, 3, 5} is irreducible.
[Figure: transition graph, with the closed set {1, 3, 5} highlighted.]
The submatrix of P restricted to {1, 3, 5} is
$$K = \begin{pmatrix} \tfrac12 & \tfrac12 & 0 \\ 0 & \tfrac13 & \tfrac23 \\ \tfrac13 & \tfrac13 & \tfrac13 \end{pmatrix}.$$
K is a Markov matrix corresponding to the state space {1, 3, 5}. Since this set is closed and finite, all states in it are recurrent positive.
Classification of States
If the states are reordered as {1, 3, 5, 2, 4}, then the transition matrix becomes
$$P = \begin{pmatrix} \tfrac12 & \tfrac12 & 0 & 0 & 0 \\ 0 & \tfrac13 & \tfrac23 & 0 & 0 \\ \tfrac13 & \tfrac13 & \tfrac13 & 0 & 0 \\ 0 & 0 & 0 & \tfrac14 & \tfrac34 \\ \tfrac14 & 0 & 0 & \tfrac12 & \tfrac14 \end{pmatrix}.$$
Limiting Probabilities
For an ergodic Markov chain, the limiting distribution
$$\pi = \lim_{n \to \infty} \pi^{(n)} = \lim_{n \to \infty} \pi^{(0)} P^n$$
exists, is independent of $\pi^{(0)}$, and satisfies
$$\pi_j = \sum_{i \in E} \pi_i P_{ij}, \qquad \sum_{j \in E} \pi_j = 1, \qquad \pi_j = \frac{1}{m_j},$$
where $m_j = E[T_j \mid X_0 = j]$ is the mean recurrence time of state j.
Another Look at Ergodicity
It follows that for recurrent positive states, the
limiting probabilities have two interpretations:
– the limiting distribution of the state at time t
– the long-run proportion of time that the process
spends in each state.
Another Look at Ergodicity
[Figure: the probability that the process is in state j at time t converges to $\pi_j$ as $t \to \infty$.]
Computing Limiting Probabilities
If $\pi$ is a solution of $\pi = \pi P$, then for any constant c, $c\pi$ is also a solution. In solving $\pi = \pi P$, $\sum_j \pi_j = 1$, it is best to solve $\pi = \pi P$ first and then normalize the resulting solution to satisfy the second condition.
Limiting Probabilities
Example:
$$P = \begin{pmatrix} \tfrac12 & \tfrac14 & \tfrac14 \\ \tfrac12 & 0 & \tfrac12 \\ \tfrac14 & \tfrac14 & \tfrac12 \end{pmatrix}$$
$\pi = \pi P$, $\sum_j \pi_j = 1$ implies that $\pi = \left(\tfrac25, \tfrac15, \tfrac25\right)$.
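This example can be checked numerically; a minimal sketch (function name ours) that iterates $\pi \leftarrow \pi P$ and relies on ergodicity for convergence:

```python
def stationary(P, iters=200):
    """Approximate the limiting distribution by iterating pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n                       # arbitrary starting distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.50, 0.25, 0.25],
     [0.50, 0.00, 0.50],
     [0.25, 0.25, 0.50]]
print(stationary(P))   # ≈ [0.4, 0.2, 0.4], i.e. (2/5, 1/5, 2/5)
```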
Limiting Probabilities
Compare $\pi = \left(\tfrac25, \tfrac15, \tfrac25\right)$ with
$$P^3 = \begin{pmatrix} \tfrac{13}{32} & \tfrac{13}{64} & \tfrac{25}{64} \\ \tfrac{13}{32} & \tfrac{3}{16} & \tfrac{13}{32} \\ \tfrac{25}{64} & \tfrac{13}{64} & \tfrac{13}{32} \end{pmatrix}, \qquad
P^5 = \begin{pmatrix} \tfrac{205}{512} & \tfrac{205}{1024} & \tfrac{409}{1024} \\ \tfrac{205}{512} & \tfrac{51}{256} & \tfrac{205}{512} \\ \tfrac{409}{1024} & \tfrac{205}{1024} & \tfrac{205}{512} \end{pmatrix}.$$
Time Spent in Transient States
• If a Markov chain has infinitely many
transient states, then it is possible for the
chain to remain in the set of transient states
forever.
• However, if there are finitely many transient
states, then the chain will eventually leave the
set of transient states never to return.
Time Spent in Transient States
Theorem 6. For a Markov chain with a finite number of
transient states, let T be the set of transient states, and
let the transition matrix P be written as
$$P = \begin{pmatrix} K & 0 \\ L & P_T \end{pmatrix}.$$
Time Spent in Transient States
Let $s_{ij}$ be the expected number of visits to state j, starting at state i, and let S be the square matrix with entries $s_{ij}$ defined for all $i, j \in T$. Then
$$S = (I - P_T)^{-1}.$$
Time Spent in Transient States
Example:
$$P = \begin{pmatrix} \tfrac12 & \tfrac12 & 0 & 0 & 0 \\ 0 & \tfrac13 & \tfrac23 & 0 & 0 \\ \tfrac13 & \tfrac13 & \tfrac13 & 0 & 0 \\ 0 & 0 & 0 & \tfrac14 & \tfrac34 \\ \tfrac14 & 0 & 0 & \tfrac12 & \tfrac14 \end{pmatrix}$$
with $T = \{2, 4\}$ (states ordered $\{1, 3, 5, 2, 4\}$). Then
$$P_T = \begin{pmatrix} \tfrac14 & \tfrac34 \\ \tfrac12 & \tfrac14 \end{pmatrix}, \qquad
I - P_T = \begin{pmatrix} \tfrac34 & -\tfrac34 \\ -\tfrac12 & \tfrac34 \end{pmatrix}, \qquad
(I - P_T)^{-1} = \begin{pmatrix} 4 & 4 \\ \tfrac83 & 4 \end{pmatrix}.$$
Random Walk
Let X be a Markov chain with state space $E = \{0, 1, 2, \ldots\}$ and transition probabilities $P(0, 1) = 1$ and, for $i \ge 1$, $P(i, i+1) = p$, $P(i, i-1) = q$, where
$$0 < p < 1, \qquad q = 1 - p.$$
Random Walk
All states can be reached from each other, so the Markov chain is irreducible. Consequently, either all states are transient, all are recurrent positive, or all are recurrent null. To determine which, we look for a solution of $\pi = \pi P$.
Random Walk
Any solution is of the form
$$\pi_j = \pi_0 \,\frac{1}{q}\left(\frac{p}{q}\right)^{j-1}, \qquad j = 1, 2, \ldots$$
Random Walk
If p < q, then p/q < 1 and
$$\sum_{j=0}^{\infty} \pi_j = \pi_0\left(1 + \frac{1}{q-p}\right) = \pi_0\,\frac{2q}{q-p} = 1,$$
so that $\pi_0 = \frac12\left(1 - \frac{p}{q}\right)$ and
$$\pi_j = \begin{cases} \dfrac12\left(1 - \dfrac{p}{q}\right) & \text{if } j = 0, \\[2mm] \dfrac12\left(1 - \dfrac{p}{q}\right)\dfrac{1}{q}\left(\dfrac{p}{q}\right)^{j-1} & \text{if } j \ge 1. \end{cases}$$
Hence, when p < q, all states are recurrent positive.
Random Walk
If p ≥ q, then we can use the following theorem to determine whether the states are transient or recurrent null.
Theorem 7. Let X be an irreducible Markov chain with transition matrix P, and let Q be the matrix obtained from P by deleting the k-th row and the k-th column for some $k \in E$. Then all states are recurrent if and only if the only solution of $h = Qh$, $0 \le h_i \le 1$, is $h_i = 0$ for all i.
Random Walk
We eliminate state 0 and solve $h = Qh$ for h, obtaining:
$$h_i = \begin{cases} 0 & \text{for all } i \text{ if } p \le q, \\[1mm] 1 - \left(\dfrac{q}{p}\right)^i & \text{for all } i \text{ if } p > q. \end{cases}$$
Hence the states are recurrent null if p = q and transient if p > q.
The Gambler’s Ruin
Let X be a Markov chain with state space $E = \{0, 1, 2, \ldots, N\}$ and transition matrix
$$P = \begin{pmatrix} 1 & & & & & \\ q & 0 & p & & & \\ & q & 0 & p & & \\ & & \ddots & \ddots & \ddots & \\ & & & q & 0 & p \\ & & & & & 1 \end{pmatrix},$$
where $0 < p < 1$, $q = 1 - p$. This is also called a random walk with absorbing barriers.
The Gambler’s Ruin
If $P_i$ is the probability of reaching N, starting in state i, then
$$P_i = \begin{cases} \dfrac{1 - (q/p)^i}{1 - (q/p)^N} & \text{if } p \ne \tfrac12, \\[2mm] \dfrac{i}{N} & \text{if } p = \tfrac12. \end{cases}$$
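The formula translates directly to code (function name ours):

```python
def prob_reach_N(i, N, p):
    """Probability of reaching N before 0, starting from i (gambler's ruin)."""
    if p == 0.5:
        return i / N                      # fair game: linear in i
    r = (1.0 - p) / p                     # r = q/p
    return (1.0 - r ** i) / (1.0 - r ** N)

print(prob_reach_N(5, 10, 0.5))    # 0.5 (fair game: i/N)
print(prob_reach_N(5, 10, 0.6))    # > 0.5 (favorable game)
```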
Instability of Slotted Aloha
Slotted Aloha is a network random-access method
– Used in packet radio and satellite communication systems
– Time is divided into “slots” of one packet duration
– When a node has a packet to send, it waits until the start of the next
slot to send it
– If no other node attempts transmission during that time slot, the
transmission is successful
– If a collision occurs, each collided packet is transmitted with
probability p in each successive slot until it is successfully transmitted
Slot:     1        2     3     4          5     6
Outcome:  Success  Idle  Idle  Collision  Idle  Success
Instability of Slotted Aloha
Slotted Aloha can be represented as a Markov chain.
Assume that i packets arrive in a time slot with probability $a_i$, and at most one packet arrives at each node in each time slot. Let the state be the number of backlogged packets. If $k \ge 1$, then
$$p_{k,j} = \begin{cases} a_0\, k p (1-p)^{k-1} & j = k - 1, \\ a_0 \left[1 - k p (1-p)^{k-1}\right] + a_1 (1-p)^k & j = k, \\ a_1 \left[1 - (1-p)^k\right] & j = k + 1, \\ a_{j-k} & j \ge k + 2. \end{cases}$$
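As a sanity check on these transition probabilities, each row must sum to 1 once the mass $a_{j-k}$ for $j \ge k+2$ (total $1 - a_0 - a_1$) is included. A sketch assuming Poisson arrivals (the distribution choice and helper name are ours):

```python
import math

def aloha_row(k, p, a):
    """Transition probabilities out of backlog state k >= 1 for j in
    {k-1, k, k+1}; a(m) is the probability of m arrivals in a slot."""
    s = k * p * (1 - p) ** (k - 1)        # exactly one backlogged node transmits
    return {k - 1: a(0) * s,
            k:     a(0) * (1 - s) + a(1) * (1 - p) ** k,
            k + 1: a(1) * (1 - (1 - p) ** k)}

lam = 0.3                                  # assumed Poisson arrival rate
a = lambda m: math.exp(-lam) * lam ** m / math.factorial(m)
row = aloha_row(4, 0.1, a)
tail = 1 - a(0) - a(1)                     # mass on j >= k + 2
print(sum(row.values()) + tail)            # 1.0 (up to rounding)
```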
Instability of Slotted Aloha
Using Markov chain analysis, one can show that if $a_0 + a_1 < 1$, then slotted Aloha is unstable, i.e., all states are transient, and the number of packets that have not been successfully transmitted grows without bound (see Ross, p. 21).
Homework
Assign discussion problems to two students.
– Ross Chapter 4: 5, 14, 29, 45
– Prove that for a random walk, all of the states are recurrent null if p = ½ and transient if p > ½.
Class Exercise
Let X be a Markov chain with state space {1, 2}, initial distribution $\pi^{(0)} = (1/3, 2/3)$ and transition matrix
$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.3 & 0.7 \end{pmatrix}$$
1. Calculate
   a. P{X1 = 1}
   b. P{X3 = 2 | X1 = 1}
   c. the limiting distribution
2. Classify the states in this Markov chain