
Stochastic Processes: An Introduction

Solutions Manual
Peter W Jones and Peter Smith
School of Computing and Mathematics, Keele University, UK
May 2009
Preface
The website includes answers and solutions to all the end-of-chapter problems in the textbook
Stochastic Processes: An Introduction. We hope that they will prove of help to lecturers and
students. The original problems, numbered as in the text, are also included so that the material
can be used as an additional source of worked problems.
There are obviously references to results and examples from the textbook, and the manual
should be viewed as a supplement to the book. To help identify the sections and chapters, the full
contents of Stochastic Processes follow this preface.
Every effort has been made to eliminate misprints or errors (or worse), and the authors, who
were responsible for the LaTeX code, apologise in advance for any which occur.
Peter Jones
Peter Smith Keele, May 2009
Contents of Stochastic Processes
Chapter 1: Some Background in Probability
1.1 Introduction
1.2 Probability
1.3 Conditional probability and independence
1.4 Discrete random variables
1.5 Continuous random variables
1.6 Mean and variance
1.7 Some standard discrete probability distributions
1.8 Some standard continuous probability distributions
1.9 Generating functions
1.10 Conditional expectation
Problems
Chapter 2: Some Gambling Problems
2.1 Gambler's ruin
2.2 Probability of ruin
2.3 Some numerical simulations
2.4 Expected duration of the game
2.5 Some variations of gambler's ruin
2.5.1 The infinitely rich opponent
2.5.2 The generous gambler
2.5.3 Changing the stakes
Problems
Chapter 3: Random Walks
3.1 Introduction
3.2 Unrestricted random walks
3.3 Probability distribution after n steps
3.4 First returns of the symmetric random walk
3.5 Other random walks
Problems
Chapter 4: Markov Chains
4.1 States and transitions
4.2 Transition probabilities
4.3 General two-state Markov chain
4.4 Powers of the transition matrix for the m-state chain
4.5 Gambler's ruin as a Markov chain
4.6 Classification of states
4.7 Classification of chains
Problems
Chapter 5: Poisson Processes
5.1 Introduction
5.2 The Poisson process
5.3 Partition theorem approach
5.4 Iterative method
5.5 The generating function
5.6 Variance for the Poisson process
5.7 Arrival times
5.8 Summary of the Poisson process
Problems
Chapter 6: Birth and Death Processes
6.1 Introduction
6.2 The birth process
6.3 Birth process: generating function equation
6.4 The death process
6.5 The combined birth and death process
6.6 General population processes
Problems
Chapter 7: Queues
7.1 Introduction
7.2 The single server queue
7.3 The stationary process
7.4 Queues with multiple servers
7.5 Queues with fixed service times
7.6 Classification of queues
7.7 A general approach to the M($\lambda$)/G/1 queue
Problems
Chapter 8: Reliability and Renewal
8.1 Introduction
8.2 The reliability function
8.3 The exponential distribution and reliability
8.4 Mean time to failure
8.5 Reliability of series and parallel systems
8.6 Renewal processes
8.7 Expected number of renewals
Problems
Chapter 9: Branching and Other Random Processes
9.1 Introduction
9.2 Generational growth
9.3 Mean and variance
9.4 Probability of extinction
9.5 Branching processes and martingales
9.6 Stopping rules
9.7 The simple epidemic
9.8 An iterative scheme for the simple epidemic
Problems
Chapter 10: Computer Simulations and Projects
Chapters of the Solutions Manual
Chapter 1: Some Background in Probability 5
Chapter 2: Some Gambling Problems 16
Chapter 3: Random Walks 30
Chapter 4: Markov Chains 44
Chapter 5: Poisson Processes 65
Chapter 6: Birth and Death Processes 71
Chapter 7: Queues 93
Chapter 8: Reliability and Renewal 108
Chapter 9: Branching and Other Random Processes 116
Chapter 1
Some background in probability
1.1. The Venn diagram of three events is shown in Figure 1.5 (in the text). Indicate on the diagram
the following events:
(a) $A \cup B$; (b) $A \cap (B \cup C)$; (c) $A \cup (B \cap C)$; (d) $(A \cap C)^c$; (e) $(A \cup B) \cap C^c$.
Figure 1.1: [Venn diagrams for the events (a)--(e), each shaded on the sample space $S$ containing $A$, $B$ and $C$]
The events are shaded in Figure 1.1.
1.2. In a random experiment, A, B, C are three events. In set notation write down expressions
for the events:
(a) only A occurs;
(b) all three events A, B, C occur;
(c) A and B occur but C does not;
(d) at least one of the events A, B, C occurs;
(e) exactly one of the events A, B, C occurs;
(f ) not more than two of the events occur.
(a) $A \cap (B \cup C)^c$; (b) $A \cap (B \cap C) = A \cap B \cap C$; (c) $(A \cap B) \cap C^c$; (d) $A \cup B \cup C$;
(e) $A \cap (B \cup C)^c$ represents an event in $A$ but in neither $B$ nor $C$: therefore the answer is
$$(A \cap (B \cup C)^c) \cup (B \cap (A \cup C)^c) \cup (C \cap (A \cup B)^c).$$
1.3. For two events $A$ and $B$, $P(A) = 0.4$, $P(B) = 0.5$ and $P(A \cap B) = 0.3$. Calculate
(a) $P(A \cup B)$; (b) $P(A \cap B^c)$; (c) $P(A^c \cup B^c)$.

(a) From (1.1), $P(A \cup B) = P(A) + P(B) - P(A \cap B)$, it follows that
$$P(A \cup B) = 0.4 + 0.5 - 0.3 = 0.6.$$
(b) Since $A = (A \cap B^c) \cup (A \cap B)$, and $A \cap B^c$ and $A \cap B$ are mutually exclusive, then
$$P(A) = P[(A \cap B^c) \cup (A \cap B)] = P(A \cap B^c) + P(A \cap B),$$
so that
$$P(A \cap B^c) = P(A) - P(A \cap B) = 0.4 - 0.3 = 0.1.$$
(c) Since $A^c \cup B^c = (A \cap B)^c$, then
$$P(A^c \cup B^c) = P[(A \cap B)^c] = 1 - P(A \cap B) = 1 - 0.3 = 0.7.$$
1.4. Two distinguishable fair dice a and b are rolled. What are the elements of the sample space?
What is the probability that the sum of the face values of the two dice is 9? What is the probability
that at least one 5 or at least one 3 appears?
The elements of the sample space are listed in Example 1.1. The event $A_1$, that the sum is 9,
is given by
$$A_1 = \{(3, 6), (4, 5), (5, 4), (6, 3)\}.$$
Hence $P(A_1) = \frac{4}{36} = \frac{1}{9}$.
Let $A_2$ be the event that at least one 5 or at least one 3 appears. Then by counting the elements
in the sample space in Example 1.1, $P(A_2) = \frac{20}{36} = \frac{5}{9}$.
1.5. Two distinguishable fair dice a and b are rolled. What is the probability that the sum of the
faces is not more than 6?
Let the random variable $X$ be the sum of the faces. By counting events in the sample space in
Example 1.1, $P(X \leq 6) = \frac{15}{36} = \frac{5}{12}$.
1.6. A probability function $\{p_n\}$, $(n = 0, 1, 2, \ldots)$ has a probability generating function
$$G(s) = \sum_{n=0}^{\infty} p_n s^n = \tfrac{1}{4}(1 + s)(3 + s)^{\frac{1}{2}}.$$
Find the probability function $\{p_n\}$ and its mean.

Note that $G(1) = 1$. Using the binomial theorem
$$G(s) = \tfrac{1}{4}(1 + s)(3 + s)^{\frac{1}{2}} = \frac{\sqrt{3}}{4}(1 + s)\left(1 + \tfrac{1}{3}s\right)^{\frac{1}{2}}
= \frac{\sqrt{3}}{4}\sum_{n=0}^{\infty}\binom{\frac{1}{2}}{n}\left(\frac{s}{3}\right)^n
+ 3\,\frac{\sqrt{3}}{4}\sum_{n=1}^{\infty}\binom{\frac{1}{2}}{n-1}\left(\frac{s}{3}\right)^n.$$
The probabilities can now be read off from the coefficients of the series:
$$p_0 = \frac{\sqrt{3}}{4}, \qquad p_n = \frac{\sqrt{3}}{3^n 4}\left[\binom{\frac{1}{2}}{n} + 3\binom{\frac{1}{2}}{n-1}\right], \quad (n = 1, 2, \ldots).$$
The expected value is given by
$$\mu = G'(1) = \frac{1}{4}\left[\frac{d}{ds}\left((1 + s)(3 + s)^{\frac{1}{2}}\right)\right]_{s=1}
= \left[\tfrac{1}{4}(3 + s)^{\frac{1}{2}} + \tfrac{1}{8}(1 + s)(3 + s)^{-\frac{1}{2}}\right]_{s=1} = \frac{5}{8}.$$
1.7. Find the probability generating function $G(s)$ of the Poisson distribution (see Section 1.7) with
parameter $\alpha$ given by
$$p_n = \frac{e^{-\alpha}\alpha^n}{n!}, \qquad n = 0, 1, 2, \ldots.$$
Determine the mean and variance of $\{p_n\}$ from the generating function.

Given $p_n = e^{-\alpha}\alpha^n/n!$, the generating function is given by
$$G(s) = \sum_{n=0}^{\infty} p_n s^n = \sum_{n=0}^{\infty}\frac{e^{-\alpha}\alpha^n s^n}{n!} = e^{-\alpha}\sum_{n=0}^{\infty}\frac{(\alpha s)^n}{n!} = e^{\alpha(s-1)}.$$
The mean and variance are given by
$$\mu = G'(1) = \left[\frac{d}{ds}\,e^{\alpha(s-1)}\right]_{s=1} = \alpha,$$
$$\sigma^2 = G''(1) + G'(1) - [G'(1)]^2 = \left[\alpha^2 e^{\alpha(s-1)} + \alpha e^{\alpha(s-1)} - \alpha^2 e^{2\alpha(s-1)}\right]_{s=1} = \alpha,$$
as expected.
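The mean and variance can also be confirmed symbolically. The following is a minimal sketch using Python's sympy package (the symbol names are illustrative only, not part of the original solution):

import sympy as sp

s, a = sp.symbols('s alpha', positive=True)
G = sp.exp(a*(s - 1))                       # G(s) = e^{alpha(s-1)}
mean = sp.diff(G, s).subs(s, 1)             # G'(1)
var = (sp.diff(G, s, 2) + sp.diff(G, s) - sp.diff(G, s)**2).subs(s, 1)
print(sp.simplify(mean), sp.simplify(var))  # both equal alpha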
1.8. A panel contains $n$ warning lights. The times to failure of the lights are the independent
random variables $T_1, T_2, \ldots, T_n$ which have exponential distributions with parameters
$\lambda_1, \lambda_2, \ldots, \lambda_n$ respectively. Let $T$ be the random variable of the time to first failure, that is
$$T = \min\{T_1, T_2, \ldots, T_n\}.$$
Show that $T$ has an exponential distribution with parameter $\sum_{j=1}^{n}\lambda_j$. Show also that the probability
that the $i$-th panel light fails first is $\lambda_i/(\sum_{j=1}^{n}\lambda_j)$.

The probability that no warning light has failed by time $t$ is
$$P(T \geq t) = P(T_1 \geq t \cap T_2 \geq t \cap \cdots \cap T_n \geq t)
= P(T_1 \geq t)P(T_2 \geq t)\cdots P(T_n \geq t)
= e^{-\lambda_1 t}e^{-\lambda_2 t}\cdots e^{-\lambda_n t} = e^{-(\lambda_1 + \lambda_2 + \cdots + \lambda_n)t}.$$
The probability that the $i$th component fails first in the short time interval $(t, t + \delta t)$ is
$$\prod_{j \neq i} P(T_j > t)\,P(t < T_i < t + \delta t)
= \prod_{j \neq i} P(T_j > t)\left[e^{-\lambda_i t} - e^{-\lambda_i(t + \delta t)}\right]
\approx \lambda_i \exp\left(-t\sum_{j=1}^{n}\lambda_j\right)\delta t.$$
Letting $\delta t \to 0$ and integrating over all $t$,
$$P(T_i \text{ fails first}) = \int_0^{\infty}\lambda_i\exp\left(-t\sum_{j=1}^{n}\lambda_j\right)dt = \frac{\lambda_i}{\sum_{j=1}^{n}\lambda_j}.$$
1.9. The geometric probability function with parameter $p$ is given by
$$p(x) = q^{x-1}p, \qquad x = 1, 2, \ldots$$
where $q = 1 - p$ (see Section 1.7). Find its probability generating function. Calculate the mean
and variance of the geometric distribution from its pgf.

The generating function is given by
$$G(s) = \sum_{x=1}^{\infty} q^{x-1}p s^x = \frac{p}{q}\sum_{x=1}^{\infty}(qs)^x = \frac{p}{q}\cdot\frac{qs}{1 - qs} = \frac{ps}{1 - qs},$$
using the formula for the sum of a geometric series.
The mean is given by
$$\mu = G'(1) = \left[\frac{d}{ds}\left(\frac{ps}{1 - qs}\right)\right]_{s=1} = \left[\frac{p}{1 - qs} + \frac{pqs}{(1 - qs)^2}\right]_{s=1} = \frac{1}{p}.$$
For the variance,
$$G''(s) = \frac{d}{ds}\left[\frac{p}{(1 - qs)^2}\right] = \frac{2pq}{(1 - qs)^3}$$
is required. Hence
$$\sigma^2 = G''(1) + G'(1) - [G'(1)]^2 = \frac{2q}{p^2} + \frac{1}{p} - \frac{1}{p^2} = \frac{1 - p}{p^2}.$$
1.10. Two distinguishable fair dice a and b are rolled. What are the probabilities that:
(a) at least one 4 appears;
(b) only one 4 appears;
(c) the sum of the face values is 6;
(d) the sum of the face values is 5 and one 3 is shown;
(e) the sum of the face values is 5 or only one 3 is shown?
From the Table in Example 1.1:
(a) If $A_1$ is the event that at least one 4 appears, then $P(A_1) = \frac{11}{36}$.
(b) If $A_2$ is the event that only one 4 appears, then $P(A_2) = \frac{10}{36} = \frac{5}{18}$.
(c) If $A_3$ is the event that the sum of the faces is 6, then $P(A_3) = \frac{5}{36}$.
(d) If $A_4$ is the event that the sum of the face values is 5 and one 3 is shown, then $P(A_4) = \frac{2}{36} = \frac{1}{18}$.
(e) If $A_5$ is the event that the sum of the faces is 5 or only one 3 is shown, then $P(A_5) = \frac{7}{36}$.
1.11. Two distinguishable fair dice $a$ and $b$ are rolled. What is the expected sum of the face values?
What is the variance of the sum of the face values?

Let $N$ be the random variable representing the sum $x + y$, where $x$ and $y$ are face values of the
two dice. Then
$$E(N) = \frac{1}{36}\sum_{x=1}^{6}\sum_{y=1}^{6}(x + y) = \frac{1}{36}\left[6\sum_{x=1}^{6}x + 6\sum_{y=1}^{6}y\right] = 7,$$
and
$$V(N) = E(N^2) - [E(N)]^2 = \frac{1}{36}\sum_{x=1}^{6}\sum_{y=1}^{6}(x + y)^2 - 7^2
= \frac{1}{36}\left[12\sum_{x=1}^{6}x^2 + 2\left(\sum_{x=1}^{6}x\right)^2\right] - 49
= \frac{1}{36}[(12 \times 91) + 2 \times 21^2] - 49 = \frac{35}{6} = 5.833\ldots.$$
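As a quick numerical check, the 36 equally likely outcomes can be enumerated directly. The sketch below (plain Python, exact arithmetic with fractions) reproduces the mean 7 and variance 35/6:

from fractions import Fraction

outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]
mean = sum(Fraction(x + y, 36) for x, y in outcomes)
var = sum(Fraction((x + y)**2, 36) for x, y in outcomes) - mean**2
print(mean, var)   # 7 and 35/6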
1.12. Three distinguishable fair dice $a$, $b$ and $c$ are rolled. How many possible outcomes are there
for the faces shown? When the dice are rolled, what is the probability that just two dice show the
same face values and the third one is different?

The sample space contains $6^3 = 216$ elements of the form (in the order $a$, $b$, $c$)
$$S = \{(i, j, k)\}, \qquad (i = 1, \ldots, 6;\ j = 1, \ldots, 6;\ k = 1, \ldots, 6).$$
Let $A$ be the required event. Suppose that $a$ and $b$ have the same face values, which can occur in
6 ways, and that $c$ has a different face value, which can occur in 5 ways. Hence the total number
of ways in which $a$ and $b$ are the same but $c$ is different is $6 \times 5 = 30$ ways. The faces $b$ and $c$,
and $c$ and $a$, could also be the same, so that the total number of ways for the possible outcome is
$3 \times 30 = 90$ ways. Therefore the required probability is
$$P(A) = \frac{90}{216} = \frac{5}{12}.$$
1.13. In a sample space $S$, the events $B$ and $C$ are mutually exclusive, but $A$ and $B$, and $A$ and
$C$ are not. Show that
$$P(A \cup (B \cup C)) = P(A) + P(B) + P(C) - P(A \cap (B \cup C)).$$
From a well-shuffled pack of 52 playing cards a single card is randomly drawn. Find the probability
that it is a club or an ace or the king of hearts.

From (1.1) (in the book)
$$P(A \cup (B \cup C)) = P(A) + P(B \cup C) - P(A \cap (B \cup C)). \qquad \text{(i)}$$
Since $B$ and $C$ are mutually exclusive,
$$P(B \cup C) = P(B) + P(C). \qquad \text{(ii)}$$
From (i) and (ii), it follows that
$$P(A \cup (B \cup C)) = P(A) + P(B) + P(C) - P(A \cap (B \cup C)).$$
Let $A$ be the event that the card is a club, $B$ the event that it is an ace, and $C$ the event that
it is the king of hearts. We require $P(A \cup (B \cup C))$. Since $B$ and $C$ are mutually exclusive, we can
use the result above. The individual probabilities are
$$P(A) = \frac{13}{52} = \frac{1}{4}; \qquad P(B) = \frac{4}{52} = \frac{1}{13}; \qquad P(C) = \frac{1}{52},$$
and since $A \cap (B \cup C)$ is the ace of clubs,
$$P(A \cap (B \cup C)) = \frac{1}{52}.$$
Finally
$$P(A \cup (B \cup C)) = \frac{1}{4} + \frac{1}{13} + \frac{1}{52} - \frac{1}{52} = \frac{17}{52}.$$
1.14. Show that
$$f(x) = 0 \ (x < 0), \qquad f(x) = \frac{1}{2a} \ (0 \leq x \leq a), \qquad f(x) = \frac{1}{2a}e^{-(x-a)/a} \ (x > a)$$
is a possible probability density function. Find the corresponding probability function.

Check the density function as follows:
$$\int_{-\infty}^{\infty} f(x)dx = \frac{1}{2a}\int_0^a dx + \frac{1}{2a}\int_a^{\infty} e^{-(x-a)/a}dx
= \frac{1}{2} - \frac{1}{2}\left[e^{-(x-a)/a}\right]_a^{\infty} = 1.$$
The probability function is given by, for $0 \leq x \leq a$,
$$F(x) = \int_{-\infty}^{x} f(u)du = \int_0^x \frac{1}{2a}du = \frac{x}{2a},$$
and, for $x > a$, by
$$F(x) = \int_0^x f(u)du = \int_0^a \frac{1}{2a}du + \int_a^x \frac{1}{2a}e^{-(u-a)/a}du
= \frac{1}{2} - \frac{1}{2a}\left[ae^{-(u-a)/a}\right]_a^x = 1 - \frac{1}{2}e^{-(x-a)/a}.$$
1.15. A biased coin is tossed. The probability of a head is $p$. The coin is tossed until the first head
appears. Let the random variable $N$ be the total number of tosses including the first head. Find
$P(N = n)$, and its pgf $G(s)$. Find the expected value of the number of tosses.

The probability that the total number of throws is $n$ (including the head) until the first head
appears is
$$P(N = n) = \underbrace{(1 - p)(1 - p)\cdots(1 - p)}_{(n-1)\ \text{times}}\,p = (1 - p)^{n-1}p, \qquad (n \geq 1).$$
The probability generating function is given by
$$G(s) = \sum_{n=1}^{\infty}(1 - p)^{n-1}p s^n = \frac{p}{1 - p}\sum_{n=1}^{\infty}[(1 - p)s]^n
= \frac{p}{1 - p}\cdot\frac{s(1 - p)}{1 - s(1 - p)} = \frac{ps}{1 - s(1 - p)},$$
after summing the geometric series.
For the mean, we require $G'(s)$ given by
$$G'(s) = \frac{p}{1 - s(1 - p)} + \frac{sp(1 - p)}{[1 - s(1 - p)]^2} = \frac{p}{[1 - s(1 - p)]^2}.$$
The mean is given by $\mu = G'(1) = 1/p$.
1.16. The $m$ random variables $X_1, X_2, \ldots, X_m$ are independent and identically distributed, each with
a gamma distribution with parameters $n$ and $\alpha$. The random variable $Y$ is defined by
$$Y = X_1 + X_2 + \cdots + X_m.$$
Using the moment generating function, find the mean and variance of $Y$.

The probability density function for the gamma distribution with parameters $n$ and $\alpha$ is
$$f(x) = \frac{\alpha^n}{\Gamma(n)}x^{n-1}e^{-\alpha x}.$$
It was shown in Section 1.9 that the moment generating function for $Y$ is given, in general, by
$$M_Y(s) = [M_X(s)]^m,$$
where $X$ has a gamma distribution with the same parameters. Hence
$$M_Y(s) = \left(\frac{\alpha}{\alpha - s}\right)^{nm} = \left(1 - \frac{s}{\alpha}\right)^{-nm}
= 1 + \frac{nm}{\alpha}s + \frac{nm(nm + 1)}{2\alpha^2}s^2 + \cdots.$$
Hence
$$E(Y) = \frac{nm}{\alpha}, \qquad V(Y) = E(Y^2) - [E(Y)]^2 = \frac{nm}{\alpha^2}.$$
1.17. A probability generating function with parameter $0 < \alpha < 1$ is given by
$$G(s) = \frac{1 - \alpha(1 - s)}{1 + \alpha(1 - s)}.$$
Find $p_n = P(N = n)$ by expanding the series in powers of $s$. What is the mean of the probability
function $\{p_n\}$?

Applying the binomial theorem
$$G(s) = \frac{1 - \alpha(1 - s)}{1 + \alpha(1 - s)} = \frac{(1 - \alpha)[1 + (\alpha/(1 - \alpha))s]}{(1 + \alpha)[1 - (\alpha/(1 + \alpha))s]}
= \frac{1 - \alpha}{1 + \alpha}\left[1 + \frac{\alpha}{1 - \alpha}s\right]\sum_{n=0}^{\infty}\left(\frac{\alpha s}{1 + \alpha}\right)^n$$
$$= \frac{1 - \alpha}{1 + \alpha}\sum_{n=0}^{\infty}\left(\frac{\alpha}{1 + \alpha}\right)^n s^n
+ \frac{\alpha}{1 + \alpha}\sum_{n=0}^{\infty}\left(\frac{\alpha}{1 + \alpha}\right)^n s^{n+1}.$$
The summation of the two series leads to
$$G(s) = \frac{1 - \alpha}{1 + \alpha}\sum_{n=0}^{\infty}\left(\frac{\alpha}{1 + \alpha}\right)^n s^n
+ \sum_{n=1}^{\infty}\left(\frac{\alpha}{1 + \alpha}\right)^n s^n
= \frac{1 - \alpha}{1 + \alpha} + \frac{2}{1 + \alpha}\sum_{n=1}^{\infty}\left(\frac{\alpha}{1 + \alpha}\right)^n s^n.$$
Hence
$$p_0 = \frac{1 - \alpha}{1 + \alpha}, \qquad p_n = \frac{2\alpha^n}{(1 + \alpha)^{n+1}}, \quad (n = 1, 2, \ldots).$$
The mean is given by
$$G'(1) = \left[\frac{d}{ds}\left(\frac{1 - \alpha(1 - s)}{1 + \alpha(1 - s)}\right)\right]_{s=1}
= \left[\frac{2\alpha}{[1 + \alpha(1 - s)]^2}\right]_{s=1} = 2\alpha.$$
1.18. Find the moment generating function of the random variable $X$ which has the uniform
distribution
$$f(x) = \frac{1}{b - a} \ (a \leq x \leq b), \qquad f(x) = 0 \ \text{for all other values of } x.$$
Deduce $E(X^n)$.

The moment generating function of the uniform distribution is
$$M_X(s) = \int_a^b \frac{e^{xs}}{b - a}dx = \frac{1}{b - a}\cdot\frac{1}{s}\left[e^{bs} - e^{as}\right]
= \frac{1}{b - a}\sum_{n=1}^{\infty}\left(\frac{b^n - a^n}{n!}\right)s^{n-1}.$$
Hence
$$E(X) = \tfrac{1}{2}(b + a), \qquad E(X^n) = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)}.$$
1.19. A random variable $X$ has the normal distribution $N(\mu, \sigma^2)$. Find its moment generating
function.

By definition
$$M_X(s) = E(e^{Xs}) = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{sx}\exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right]dx
= \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left[\frac{2\sigma^2 xs - (x - \mu)^2}{2\sigma^2}\right]dx.$$
Apply the substitution $x = \mu + \sigma(v + \sigma s)$: then
$$M_X(s) = \exp(\mu s + \tfrac{1}{2}\sigma^2 s^2)\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\frac{1}{2}v^2}dv
= \exp(\mu s + \tfrac{1}{2}\sigma^2 s^2) \times 1 = \exp(\mu s + \tfrac{1}{2}\sigma^2 s^2)$$
(see the Appendix for the integral).
Expansion of the exponential function in powers of $s$ gives
$$M_X(s) = 1 + \mu s + \tfrac{1}{2}(\mu^2 + \sigma^2)s^2 + \cdots.$$
So, for example, $E(X^2) = \mu^2 + \sigma^2$.
1.20. Find the probability generating functions of the following distributions, in which $0 < p < 1$:
(a) Bernoulli distribution: $p_n = p^n(1 - p)^{1-n}$, $(n = 0, 1)$;
(b) geometric distribution: $p_n = p(1 - p)^{n-1}$, $(n = 1, 2, \ldots)$;
(c) negative binomial distribution with parameter $r$ expressed in the form
$$p_n = \binom{r + n - 1}{r - 1}p^r(1 - p)^n, \qquad (n = 0, 1, 2, \ldots),$$
where $r$ is a positive integer. In each case find also the mean and variance of the distribution using
the probability generating function.

(a) For the Bernoulli distribution
$$G(s) = p_0 + p_1 s = (1 - p) + ps.$$
The mean is given by
$$\mu = G'(1) = p,$$
and the variance by
$$\sigma^2 = G''(1) + G'(1) - [G'(1)]^2 = p - p^2 = p(1 - p).$$
(b) For the geometric distribution (with $q = 1 - p$)
$$G(s) = \sum_{n=1}^{\infty} pq^{n-1}s^n = ps\sum_{n=0}^{\infty}(qs)^n = \frac{ps}{1 - qs},$$
summing the geometric series. The mean and variance are given by
$$\mu = G'(1) = \left[\frac{p}{(1 - qs)^2}\right]_{s=1} = \frac{1}{p},$$
$$\sigma^2 = G''(1) + G'(1) - [G'(1)]^2 = \left[\frac{2pq}{(1 - qs)^3}\right]_{s=1} + \frac{1}{p} - \frac{1}{p^2} = \frac{1 - p}{p^2}.$$
(c) For the negative binomial distribution (with $q = 1 - p$)
$$G(s) = \sum_{n=0}^{\infty}\binom{r + n - 1}{r - 1}p^r q^n s^n
= p^r\left[1 + r(qs) + \frac{r(r + 1)}{2!}(qs)^2 + \cdots\right] = \frac{p^r}{(1 - qs)^r}.$$
The derivatives of $G(s)$ are given by
$$G'(s) = \frac{rqp^r}{(1 - qs)^{r+1}}, \qquad G''(s) = \frac{r(r + 1)q^2 p^r}{(1 - qs)^{r+2}}.$$
Hence the mean and variance are given by
$$\mu = G'(1) = \frac{rq}{p},$$
$$\sigma^2 = G''(1) + G'(1) - [G'(1)]^2 = \frac{r(r + 1)q^2}{p^2} + \frac{rq}{p} - \frac{r^2 q^2}{p^2} = \frac{rq}{p^2}.$$
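For part (c) the differentiation is easy to mistype, so a short symbolic check may be useful. A sketch with sympy (illustrative only, not part of the original solution):

import sympy as sp

s, p, r = sp.symbols('s p r', positive=True)
q = 1 - p
G = p**r / (1 - q*s)**r
mu = sp.diff(G, s).subs(s, 1)
var = (sp.diff(G, s, 2) + sp.diff(G, s) - sp.diff(G, s)**2).subs(s, 1)
print(sp.simplify(mu))    # r*(1-p)/p, i.e. rq/p
print(sp.simplify(var))   # r*(1-p)/p**2, i.e. rq/p^2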
1.21. A word of five letters is transmitted by code to a receiver. The transmission signal is weak,
and there is a 5% probability that any letter is in error independently of the others. What is the
probability that the word is received correctly? The same word is transmitted a second time with
the same errors in the signal. If the same word is received, what is the probability now that the
word is correct?

Let $A_1, A_2, A_3, A_4, A_5$ be the events that the letters in the word are correct. Since the events
are independent, the probability that the word is correctly transmitted is
$$P(A_1 \cap A_2 \cap A_3 \cap A_4 \cap A_5) = 0.95^5 \approx 0.774.$$
If a letter is sent a second time, the probability that the same error occurs twice is $0.05^2 = 0.0025$.
Hence the probability that the letter is correct is $0.9975$. For 5 letters the probability that the
word is correct is $0.9975^5 \approx 0.988$.
1.22. A random variable $N$ over the positive integers has the probability distribution with
$$p_n = P(N = n) = -\frac{\alpha^n}{n\ln(1 - \alpha)}, \qquad (0 < \alpha < 1;\ n = 1, 2, 3, \ldots).$$
What is its probability generating function? Find the mean of the random variable.

The probability generating function is given by
$$G(s) = -\sum_{n=1}^{\infty}\frac{\alpha^n s^n}{n\ln(1 - \alpha)} = \frac{\ln(1 - \alpha s)}{\ln(1 - \alpha)} \qquad \text{for } 0 \leq s < 1/\alpha.$$
Since
$$G'(s) = -\frac{\alpha}{(1 - \alpha s)\ln(1 - \alpha)},$$
the mean is
$$\mu = G'(1) = -\frac{\alpha}{(1 - \alpha)\ln(1 - \alpha)}.$$
1.23. The source of a beam of light is a perpendicular distance $d$ from a wall of length $2a$, with
the perpendicular from the source meeting the wall at its midpoint. The source emits a pulse of
light randomly in a direction $\theta$, the angle between the direction of the pulse and the perpendicular
being chosen uniformly in the range $-\tan^{-1}(a/d) \leq \theta \leq \tan^{-1}(a/d)$. Find the probability distribution
of $x$ $(-a \leq x \leq a)$ where the pulses hit the wall. Show that its density function is given by
$$f(x) = \frac{d}{2(x^2 + d^2)\tan^{-1}(a/d)}$$
(this is the density function of a Cauchy distribution). If $a \to \infty$, what can you say about the
mean of this distribution?

Figure 1.2: Source and beam for Problem 1.23 [the source, a distance $d$ from the wall, emits a beam at angle $\theta$ to the perpendicular, striking the wall at displacement $x$]

Figure 1.2 shows the beam and wall. Let $X$ be the random variable representing any displacement
between $-a$ and $x$. Then
$$P(-a \leq X \leq x) = P(-a \leq d\tan\theta \leq x) = P(-\tan^{-1}(a/d) \leq \theta \leq \tan^{-1}(x/d))
= \frac{\tan^{-1}(x/d) + \tan^{-1}(a/d)}{2\tan^{-1}(a/d)}$$
by uniformity. The density is given by
$$f(x) = \frac{d}{dx}\left[\frac{\tan^{-1}(x/d) + \tan^{-1}(a/d)}{2\tan^{-1}(a/d)}\right] = \frac{d}{2(x^2 + d^2)\tan^{-1}(a/d)}.$$
The mean is given by
$$\mu = \int_{-a}^{a}\frac{xd}{2(x^2 + d^2)\tan^{-1}(a/d)}dx = 0,$$
since the integrand is an odd function and the limits are $\pm a$.
For the infinite wall the integral defining the mean becomes divergent.
1.24. Suppose that the random variable $X$ can take the integer values $0, 1, 2, \ldots$. Let $p_j$ and $q_j$ be
the probabilities
$$p_j = P(X = j), \qquad q_j = P(X > j), \qquad (j = 0, 1, 2, \ldots).$$
Show that, if
$$G(s) = \sum_{j=0}^{\infty} p_j s^j, \qquad H(s) = \sum_{j=0}^{\infty} q_j s^j,$$
then $(1 - s)H(s) = 1 - G(s)$.
Show also that $E(X) = H(1)$.

Using the series for $H(s)$,
$$(1 - s)H(s) = (1 - s)\sum_{j=0}^{\infty} q_j s^j = \sum_{j=0}^{\infty} q_j s^j - \sum_{j=0}^{\infty} q_j s^{j+1}
= q_0 + \sum_{j=1}^{\infty}(q_j - q_{j-1})s^j$$
$$= q_0 - \sum_{j=1}^{\infty} P(X = j)s^j = 1 - p_0 - \sum_{j=1}^{\infty} p_j s^j = 1 - G(s).$$
Note that generally $H(s)$ is not a probability generating function.
The mean of the random variable $X$ is given by
$$E(X) = \sum_{j=1}^{\infty} j p_j = G'(1) = H(1),$$
differentiating the formula above.
Chapter 2
Some gambling problems
2.1. In the standard gambler's ruin problem with total stake $a$ and gambler's stake $k$, and the
gambler's probability of winning at each play is $p$, calculate the probability of ruin in the following
cases:
(a) $a = 100$, $k = 5$, $p = 0.6$;
(b) $a = 80$, $k = 70$, $p = 0.45$;
(c) $a = 50$, $k = 40$, $p = 0.5$.
Also find the expected duration in each case.

For $p \neq \frac{1}{2}$, the probability of ruin $u_k$ and the expected duration of the game $d_k$ are given by
$$u_k = \frac{s^k - s^a}{1 - s^a}, \qquad d_k = \frac{1}{1 - 2p}\left[k - \frac{a(1 - s^k)}{(1 - s^a)}\right], \qquad s = \frac{1 - p}{p}.$$
(a) $u_k \approx 0.132$, $d_k \approx 409$.
(b) $u_k \approx 0.866$, $d_k \approx 592$.
(c) For $p = \frac{1}{2}$,
$$u_k = \frac{a - k}{a}, \qquad d_k = k(a - k),$$
so that $u_k = 0.2$, $d_k = 400$.
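The three sets of figures can be reproduced with a few lines of code. The sketch below simply evaluates the formulas quoted above (the function name ruin is, of course, only illustrative):

def ruin(a, k, p):
    if p == 0.5:
        return (a - k)/a, k*(a - k)
    s = (1 - p)/p
    u = (s**k - s**a)/(1 - s**a)
    d = (k - a*(1 - s**k)/(1 - s**a))/(1 - 2*p)
    return u, d

for a, k, p in [(100, 5, 0.6), (80, 70, 0.45), (50, 40, 0.5)]:
    u, d = ruin(a, k, p)
    print(f"a={a}, k={k}, p={p}: u_k={u:.3f}, d_k={d:.0f}")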
2.2. In a casino game based on the standard gambler's ruin, the gambler and the dealer each start
with 20 tokens and one token is bet on at each play. The game continues until one player has no
further tokens. It is decreed that the probability that any gambler is ruined is 0.52 to protect the
casino's profit. What should the probability that the gambler wins at each play be?

The probability of ruin is
$$u = \frac{s^k - s^a}{1 - s^a},$$
where $k = 20$, $a = 40$, $p$ is the probability that the gambler wins at each play, and $s = (1 - p)/p$.
Let $r = s^{20}$. Then $u = r/(1 + r)$, so that $r = u/(1 - u)$ and
$$s = \left(\frac{u}{1 - u}\right)^{1/20}.$$
Finally
$$p = \frac{1}{1 + s} = \frac{(1 - u)^{1/20}}{(1 - u)^{1/20} + u^{1/20}} \approx 0.498999.$$
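A quick numerical check of this value (a sketch; the variable names are illustrative):

u, k = 0.52, 20
s = (u/(1 - u))**(1/k)
p = 1/(1 + s)
print(p)   # about 0.498999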
2.3. Find general solutions of the following difference equations:
(a) $u_{k+1} - 4u_k + 3u_{k-1} = 0$;
(b) $7u_{k+2} - 8u_{k+1} + u_k = 0$;
(c) $u_{k+1} - 3u_k + u_{k-1} + u_{k-2} = 0$;
(d) $pu_{k+2} - u_k + (1 - p)u_{k-1} = 0$, $(0 < p < 1)$.

(a) The characteristic equation is
$$m^2 - 4m + 3 = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = 3$. The general solution is
$$u_k = Am_1^k + Bm_2^k = A + B\,3^k,$$
where $A$ and $B$ are any constants.
(b) The characteristic equation is
$$7m^2 - 8m + 1 = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = \frac{1}{7}$. The general solution is
$$u_k = A + B\left(\tfrac{1}{7}\right)^k.$$
(c) The characteristic equation is the cubic equation
$$m^3 - 3m^2 + m + 1 = (m - 1)(m^2 - 2m - 1) = 0,$$
which has the solutions $m_1 = 1$, $m_2 = 1 + \sqrt{2}$, and $m_3 = 1 - \sqrt{2}$. The general solution is
$$u_k = A + B(1 + \sqrt{2})^k + C(1 - \sqrt{2})^k.$$
(d) The characteristic equation is the cubic equation
$$pm^3 - m + (1 - p) = (m - 1)[pm^2 + pm - (1 - p)] = 0,$$
which has the solutions $m_1 = 1$, $m_2 = -\tfrac{1}{2} + \tfrac{1}{2}\sqrt{(4 - 3p)/p}$ and $m_3 = -\tfrac{1}{2} - \tfrac{1}{2}\sqrt{(4 - 3p)/p}$. The
general solution is
$$u_k = A + Bm_2^k + Cm_3^k.$$
2.4. Solve the following difference equations subject to the given boundary conditions:
(a) $u_{k+1} - 6u_k + 5u_{k-1} = 0$, $u_0 = 1$, $u_4 = 0$;
(b) $u_{k+1} - 2u_k + u_{k-1} = 0$, $u_0 = 1$, $u_{20} = 0$;
(c) $d_{k+1} - 2d_k + d_{k-1} = -2$, $d_0 = 0$, $d_{10} = 0$;
(d) $u_{k+2} - 3u_k + 2u_{k-1} = 0$, $u_0 = 1$, $u_{10} = 0$, $3u_9 = 2u_8$.

(a) The characteristic equation is
$$m^2 - 6m + 5 = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = 5$. Therefore the general solution is given by
$$u_k = A + 5^k B.$$
The boundary conditions $u_0 = 1$, $u_4 = 0$ imply
$$A + B = 1, \qquad A + 5^4 B = 0,$$
which have the solutions $A = 625/624$ and $B = -1/624$. The required solution is
$$u_k = \frac{625}{624} - \frac{5^k}{624}.$$
(b) The characteristic equation is
$$m^2 - 2m + 1 = (m - 1)^2 = 0,$$
which has one solution $m = 1$. Using the rule for repeated roots,
$$u_k = A + Bk.$$
The boundary conditions $u_0 = 1$ and $u_{20} = 0$ imply $A = 1$ and $B = -1/20$. The required solution
is $u_k = (20 - k)/20$.
(c) This is an inhomogeneous equation. The characteristic equation is
$$m^2 - 2m + 1 = (m - 1)^2 = 0,$$
which has the repeated solution $m = 1$. Hence the complementary function is $A + Bk$. For a
particular solution, we must try $d_k = Ck^2$. Then
$$d_{k+1} - 2d_k + d_{k-1} = C(k + 1)^2 - 2Ck^2 + C(k - 1)^2 = 2C = -2$$
if $C = -1$. Hence the general solution is
$$d_k = A + Bk - k^2.$$
The boundary conditions $d_0 = d_{10} = 0$ imply $A = 0$ and $B = 10$. Therefore the required solution
is $d_k = k(10 - k)$.
(d) The characteristic equation is
$$m^3 - 3m + 2 = (m - 1)^2(m + 2) = 0,$$
which has two solutions, $m_1 = 1$ (repeated) and $m_2 = -2$. The general solution is given by
$$u_k = A + Bk + C(-2)^k.$$
The boundary conditions imply
$$A + C = 1, \qquad A + 10B + C(-2)^{10} = 0, \qquad 3[A + 9B + C(-2)^9] = 2[A + 8B + C(-2)^8].$$
The solutions of these linear equations are
$$A = \frac{31744}{31743}, \qquad B = -\frac{3072}{31743}, \qquad C = -\frac{1}{31743},$$
so that the required solution is
$$u_k = \frac{1024(31 - 3k) - (-2)^k}{31743}.$$
2.5. Show that a difference equation of the form
$$au_{k+2} + bu_{k+1} - u_k + cu_{k-1} = 0,$$
where $a, b, c \geq 0$ are probabilities with $a + b + c = 1$, can never have a characteristic equation with
complex roots.

The characteristic equation can be expressed in the form
$$am^3 + bm^2 - m + c = (m - 1)[am^2 + (a + b)m - (1 - a - b)] = 0,$$
since $a + b + c = 1$. One solution is $m_1 = 1$, and the others satisfy the quadratic equation
$$am^2 + (a + b)m - (1 - a - b) = 0.$$
The discriminant is given by
$$(a + b)^2 + 4a(1 - a - b) = (a - b)^2 + 4a(1 - a) \geq 0,$$
since $a \leq 1$. Hence the roots can never be complex.
2.6. In the standard gambler's ruin problem with equal probabilities $p = q = \frac{1}{2}$, find the expected
duration of the game given the usual initial stakes of $k$ units for the gambler and $a - k$ units for
the opponent.

The expected duration $d_k$ satisfies
$$d_{k+1} - 2d_k + d_{k-1} = -2.$$
The complementary function is $A + Bk$, and for a particular solution try $d_k = Ck^2$. Then
$$d_{k+1} - 2d_k + d_{k-1} + 2 = C(k + 1)^2 - 2Ck^2 + C(k - 1)^2 + 2 = 2C + 2 = 0$$
if $C = -1$. Hence
$$d_k = A + Bk - k^2.$$
The boundary conditions $d_0 = d_a = 0$ imply $A = 0$ and $B = a$. The required solution is therefore
$$d_k = k(a - k).$$
2.7. In a gambler's ruin problem the possibility of a draw is included. Let the probability that the
gambler wins, loses or draws against an opponent be respectively $p$, $p$, $1 - 2p$, $(0 < p < \frac{1}{2})$. Find the
probability that the gambler loses the game given the usual initial stakes of $k$ units for the gambler
and $a - k$ units for the opponent. Show that $d_k$, the expected duration of the game, satisfies
$$pd_{k+1} - 2pd_k + pd_{k-1} = -1.$$
Solve the difference equation and find the expected duration of the game.

The difference equation for the probability of ruin $u_k$ is
$$u_k = pu_{k+1} + (1 - 2p)u_k + pu_{k-1} \quad\text{or}\quad u_{k+1} - 2u_k + u_{k-1} = 0.$$
The general solution is $u_k = A + Bk$. The boundary conditions $u_0 = 1$ and $u_a = 0$ imply $A = 1$
and $B = -1/a$, so that the required probability is given by $u_k = (a - k)/a$.
The expected duration $d_k$ satisfies
$$d_{k+1} - 2d_k + d_{k-1} = -1/p.$$
The complementary function is $A + Bk$. For the particular solution try $d_k = Ck^2$. Then
$$C(k + 1)^2 - 2Ck^2 + C(k - 1)^2 = 2C = -1/p$$
if $C = -1/(2p)$. The boundary conditions $d_0 = d_a = 0$ imply $A = 0$ and $B = a/(2p)$, so that the
required solution is
$$d_k = \frac{k(a - k)}{2p}.$$
2.8. In the changing stakes game in which a game is replayed with each player having twice as
many units, $2k$ and $2(a - k)$ respectively, suppose that the probability of a win for the gambler at
each play is $\frac{1}{2}$. Whilst the probability of ruin is unaffected, by how much is the expected duration of
the game extended compared with the original game?

With initial stakes of $k$ and $a - k$, the expected duration is $d_k = k(a - k)$. If the initial stakes
are doubled to $2k$ and $2a - 2k$, then the expected duration becomes, using the same formula,
$$d_{2k} = 2k(2a - 2k) = 4k(a - k) = 4d_k.$$
2.9. A roulette wheel has 37 radial slots of which 18 are red, 18 are black and 1 is green. The
gambler bets one unit on either red or black. If the ball falls into a slot of the same colour, then
the gambler wins one unit, and if the ball falls into the other colour (red or black), then the casino
wins. If the ball lands in the green slot, then the bet remains for the next spin of the wheel or
more if necessary until the ball lands on a red or black. The original bet is either returned or lost
depending on whether the outcome matches the original bet or not (this is the Monte Carlo system).
Show that the probability $u_k$ of ruin for a gambler who starts with $k$ chips, with the casino holding
$a - k$ chips, satisfies the difference equation
$$36u_{k+1} - 73u_k + 37u_{k-1} = 0.$$
Solve the difference equation for $u_k$. If the house starts with 1,000,000 at the roulette wheel and
the gambler starts with 10,000, what is the probability that the gambler breaks the bank if 5,000
are bet at each play?
In the US system the rules are less generous to the players. If the ball lands on green then the
player simply loses. What is the probability now that the player wins given the same initial stakes?
(see Luenberger (1979))

There is the possibility of a draw (see Example 2.1). At each play the probability that the
gambler wins is $p = \frac{18}{37}$. The stake is returned with probability
$$\frac{1}{37}\left(\frac{18}{37}\right) + \frac{1}{37^2}\left(\frac{18}{37}\right) + \cdots = \frac{1}{36}\cdot\frac{18}{37} = \frac{1}{74},$$
or the gambler loses after one or more greens, also with probability $\frac{1}{74}$ by the same argument.
Hence $u_k$, the probability that the gambler loses, satisfies
$$u_k = \frac{18}{37}u_{k+1} + \frac{1}{74}(u_k + u_{k-1}) + \frac{18}{37}u_{k-1},$$
or
$$36u_{k+1} - 73u_k + 37u_{k-1} = 0.$$
The characteristic equation is
$$36m^2 - 73m + 37 = (m - 1)(36m - 37) = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = 37/36$. With $u_0 = 1$ and $u_a = 0$, the required solution is
$$u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{37}{36}.$$
The bets are equivalent to $k = 10000/5000 = 2$, $a = 1010000/5000 = 202$. The probability that
the gambler wins is
$$1 - u_k = \frac{1 - s^k}{1 - s^a} = \frac{1 - s^2}{1 - s^{202}} = 2.23 \times 10^{-4}.$$
In the US system, $u_k$ satisfies
$$u_k = \frac{18}{37}u_{k+1} + \frac{19}{37}u_{k-1}, \quad\text{or}\quad 18u_{k+1} - 37u_k + 19u_{k-1} = 0.$$
In this case the ratio is $s' = 19/18$. Hence the probability that the gambler wins is
$$1 - u_k = \frac{1 - s'^2}{1 - s'^{202}} = 2.06 \times 10^{-6},$$
which is less than the previous value.
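Both break-the-bank probabilities follow from the same ratio formula; the sketch below evaluates them with exact rational arithmetic before converting to floats (names are illustrative):

from fractions import Fraction

def win_prob(s, k, a):
    return float((1 - s**k)/(1 - s**a))

k, a = 2, 202
print(win_prob(Fraction(37, 36), k, a))   # Monte Carlo rules: about 2.23e-4
print(win_prob(Fraction(19, 18), k, a))   # US rules: about 2.06e-6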
2.10. In a single trial the possible scores 1 and 2 can occur each with probability $\frac{1}{2}$. If $p_n$ is the
probability of scoring exactly $n$ points at some stage, show that
$$p_n = \tfrac{1}{2}p_{n-1} + \tfrac{1}{2}p_{n-2}.$$
Calculate $p_1$ and $p_2$, and find a formula for $p_n$. How does $p_n$ behave as $n$ becomes large? How do
you interpret the result?

Let $A_n$ be the event that the score is $n$ at some stage. Let $B_1$ be the event score 1, and $B_2$
score 2. Then
$$P(A_n) = P(A_n|B_1)P(B_1) + P(A_n|B_2)P(B_2) = \tfrac{1}{2}P(A_{n-1}) + \tfrac{1}{2}P(A_{n-2}),$$
or
$$p_n = \tfrac{1}{2}p_{n-1} + \tfrac{1}{2}p_{n-2}.$$
Hence
$$2p_n - p_{n-1} - p_{n-2} = 0.$$
The characteristic equation is
$$2m^2 - m - 1 = (m - 1)(2m + 1) = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = -\frac{1}{2}$. Hence
$$p_n = A + B\left(-\tfrac{1}{2}\right)^n.$$
The initial conditions are $p_1 = \frac{1}{2}$ and $p_2 = \frac{1}{2} + \frac{1}{2}\cdot\frac{1}{2} = \frac{3}{4}$. Hence
$$A - \tfrac{1}{2}B = \tfrac{1}{2}, \qquad A + \tfrac{1}{4}B = \tfrac{3}{4},$$
so that $A = \frac{2}{3}$, $B = \frac{1}{3}$. Hence
$$p_n = \tfrac{2}{3} + \tfrac{1}{3}\left(-\tfrac{1}{2}\right)^n, \qquad (n = 1, 2, \ldots).$$
As $n \to \infty$, $p_n \to \frac{2}{3}$: since the mean score per trial is $\frac{3}{2}$, in a long run of trials about two in every
three integers are hit as scores.
2.11. In a single trial the possible scores 1 and 2 can occur with probabilities $q$ and $1 - q$, where
$0 < q < 1$. Find the probability of scoring exactly $n$ points at some stage in an indefinite succession
of trials. Show that
$$p_n \to \frac{1}{2 - q}$$
as $n \to \infty$.

Let $p_n$ be the probability. Then
$$p_n = qp_{n-1} + (1 - q)p_{n-2}, \quad\text{or}\quad p_n - qp_{n-1} - (1 - q)p_{n-2} = 0.$$
The characteristic equation is
$$m^2 - qm - (1 - q) = (m - 1)[m + (1 - q)] = 0,$$
which has the solutions $m_1 = 1$ and $m_2 = -(1 - q)$. Hence
$$p_n = A + B(q - 1)^n.$$
The initial conditions are $p_1 = q$, $p_2 = 1 - q + q^2$, which imply
$$q = A + B(q - 1), \qquad 1 - q + q^2 = A + B(q - 1)^2.$$
The solution of these equations leads to $A = 1/(2 - q)$ and $B = (q - 1)/(q - 2)$, so that
$$p_n = \frac{1}{2 - q}\left[1 - (q - 1)^{n+1}\right].$$
Since $|q - 1| < 1$, it follows that $p_n \to 1/(2 - q)$ as $n \to \infty$.
2.12. The probability of success in a single trial is $\frac{1}{3}$. If $u_n$ is the probability that there are no two
consecutive successes in $n$ trials, show that $u_n$ satisfies
$$u_{n+1} = \tfrac{2}{3}u_n + \tfrac{2}{9}u_{n-1}.$$
What are the values of $u_1$ and $u_2$? Hence show that
$$u_n = \frac{1}{6}\left[(3 + 2\sqrt{3})\left(\frac{1 + \sqrt{3}}{3}\right)^n + (3 - 2\sqrt{3})\left(\frac{1 - \sqrt{3}}{3}\right)^n\right].$$

Let $A_n$ be the event that there have not been two consecutive successes in the first $n$ trials.
Let $B_1$ be the event of success and $B_2$ the event of failure. Then
$$P(A_n) = P(A_n|B_1)P(B_1) + P(A_n|B_2)P(B_2).$$
Now $P(A_n|B_2) = P(A_{n-1})$: failure will not change the probability. Also
$$P(A_n|B_1) = P(A_{n-1}|B_2)P(B_2) = P(A_{n-2})P(B_2).$$
Since $P(B_1) = \frac{1}{3}$, $P(B_2) = \frac{2}{3}$,
$$u_n = \tfrac{2}{9}u_{n-2} + \tfrac{2}{3}u_{n-1} \quad\text{or}\quad 9u_n - 6u_{n-1} - 2u_{n-2} = 0,$$
where $u_n = P(A_n)$.
The characteristic equation is
$$9m^2 - 6m - 2 = 0,$$
which has the solutions $m_1 = \frac{1}{3}(1 + \sqrt{3})$ and $m_2 = \frac{1}{3}(1 - \sqrt{3})$. Hence
$$u_n = A\,\frac{(1 + \sqrt{3})^n}{3^n} + B\,\frac{(1 - \sqrt{3})^n}{3^n}.$$
The initial conditions are $u_1 = 1$ and $u_2 = 1 - \frac{1}{3}\cdot\frac{1}{3} = \frac{8}{9}$. Therefore $A$ and $B$ are defined by
$$1 = \frac{A}{3}(1 + \sqrt{3}) + \frac{B}{3}(1 - \sqrt{3}),$$
$$\frac{8}{9} = \frac{A}{9}(1 + \sqrt{3})^2 + \frac{B}{9}(1 - \sqrt{3})^2 = \frac{A}{9}(4 + 2\sqrt{3}) + \frac{B}{9}(4 - 2\sqrt{3}).$$
The solutions are $A = \frac{1}{6}(3 + 2\sqrt{3})$ and $B = \frac{1}{6}(3 - 2\sqrt{3})$. Finally
$$u_n = \frac{1}{6 \cdot 3^n}\left[(3 + 2\sqrt{3})(1 + \sqrt{3})^n + (3 - 2\sqrt{3})(1 - \sqrt{3})^n\right].$$
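The closed form can be checked against the recurrence directly. A minimal sketch (floating-point comparison only):

from math import sqrt

def u_closed(n):
    r3 = sqrt(3)
    return ((3 + 2*r3)*((1 + r3)/3)**n + (3 - 2*r3)*((1 - r3)/3)**n)/6

u = {1: 1.0, 2: 8/9}
for n in range(3, 11):
    u[n] = (2/3)*u[n-1] + (2/9)*u[n-2]
for n in range(1, 11):
    assert abs(u[n] - u_closed(n)) < 1e-12
print(u[10], u_closed(10))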
2.13. A gambler with initial capital $k$ units plays against an opponent with initial capital $a - k$
units. At each play of the game the gambler either wins one unit or loses one unit with probability
$\frac{1}{2}$. Whenever the opponent loses the game, the gambler returns one unit so that the game may
continue. Show that the expected duration of the game is $k(2a - 1 - k)$ plays.

The expected duration $d_k$ satisfies
$$d_{k+1} - 2d_k + d_{k-1} = -2, \qquad (k = 1, 2, \ldots, a - 1).$$
The boundary conditions are $d_0 = 0$ and $d_a = d_{a-1}$, indicating the return of one unit when the
opponent loses. The general solution for the duration is
$$d_k = A + Bk - k^2.$$
The boundary conditions imply
$$A = 0, \qquad A + Ba - a^2 = A + B(a - 1) - (a - 1)^2,$$
so that $B = 2a - 1$. Hence $d_k = k(2a - 1 - k)$.
2.14. In the usual gambler's ruin problem, the probability that the gambler is eventually ruined is
$$u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{q}{p}, \quad (p \neq \tfrac{1}{2}).$$
In a new game the stakes are halved, whilst the players start with the same initial sums. How does
this affect the probability of losing by the gambler? Should the gambler agree to this change of rule
if $p < \frac{1}{2}$? By how many plays is the expected duration of the game extended?

The new probability of ruin $v_k$ (with the stakes halved) is, adapting the formula for $u_k$,
$$v_k = u_{2k} = \frac{s^{2k} - s^{2a}}{1 - s^{2a}} = \frac{(s^k + s^a)(s^k - s^a)}{(1 - s^a)(1 + s^a)} = u_k\left[\frac{s^k + s^a}{1 + s^a}\right].$$
Given $p < \frac{1}{2}$, then $s = (1 - p)/p > 1$ and $s^k > 1$. It follows that
$$v_k > u_k\left[\frac{1 + s^a}{1 + s^a}\right] = u_k.$$
With this change the gambler is more likely to lose.
From (2.9), the expected duration of the standard game is given by
$$d_k = \frac{1}{1 - 2p}\left[k - \frac{a(1 - s^k)}{(1 - s^a)}\right].$$
With the stakes halved the expected duration $h_k$ is
$$h_k = d_{2k} = \frac{1}{1 - 2p}\left[2k - \frac{2a(1 - s^{2k})}{(1 - s^{2a})}\right].$$
The expected duration is extended by
$$h_k - d_k = \frac{1}{1 - 2p}\left[k - \frac{2a(1 - s^{2k})}{(1 - s^{2a})} + \frac{a(1 - s^k)}{(1 - s^a)}\right]
= \frac{1}{1 - 2p}\left[k + \frac{a(1 - s^k)(s^a - 1 - 2s^k)}{(1 - s^{2a})}\right].$$
2.15. In a gambler's ruin game, suppose that the gambler can win 2 with probability $\frac{1}{3}$ or lose 1
with probability $\frac{2}{3}$. Show that
$$u_k = \frac{(3k - 1 - 3a)(-2)^a + (-2)^k}{1 - (3a + 1)(-2)^a}.$$
Compute $u_k$ if $a = 9$ for $k = 1, 2, \ldots, 8$.

The probability of ruin $u_k$ satisfies
$$u_k = \tfrac{1}{3}u_{k+2} + \tfrac{2}{3}u_{k-1} \quad\text{or}\quad u_{k+2} - 3u_k + 2u_{k-1} = 0.$$
The characteristic equation is
$$m^3 - 3m + 2 = (m - 1)^2(m + 2) = 0,$$
which has the solutions $m_1 = 1$ (repeated) and $m_2 = -2$. Hence
$$u_k = A + Bk + C(-2)^k.$$
The boundary conditions are $u_0 = 1$, $u_a = 0$, $u_{a-1} = \frac{2}{3}u_{a-2}$. The constants $A$, $B$ and $C$ satisfy
$$A + C = 1, \qquad A + Ba + C(-2)^a = 0,$$
$$3[A + B(a - 1) + C(-2)^{a-1}] = 2[A + B(a - 2) + C(-2)^{a-2}],$$
or
$$A + B(a + 1) - 8C(-2)^{a-2} = 0.$$
The solution of these equations is
$$A = \frac{-(-2)^a(3a + 1)}{1 - (-2)^a(3a + 1)}, \qquad B = \frac{3(-2)^a}{1 - (-2)^a(3a + 1)}, \qquad C = \frac{1}{1 - (-2)^a(3a + 1)}.$$
Finally
$$u_k = \frac{(3k - 1 - 3a)(-2)^a + (-2)^k}{1 - (-2)^a(3a + 1)}.$$
The values of the probabilities $u_k$ for $a = 9$ are shown in the table below.

  k      1      2      3      4      5      6      7      8
  u_k    0.893  0.786  0.678  0.573  0.462  0.362  0.241  0.161
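The table is easily regenerated from the closed form; a sketch:

a = 9
D = 1 - (-2)**a * (3*a + 1)
for k in range(1, 9):
    u = ((3*k - 1 - 3*a)*(-2)**a + (-2)**k) / D
    print(k, round(u, 3))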
2.16. Find the general solution of the difference equation
$$u_{k+2} - 3u_k + 2u_{k-1} = 0.$$
A reservoir with total capacity of $a$ volume units of water has, during each day, either a net
inflow of two units with probability $\frac{1}{3}$ or a net outflow of one unit with probability $\frac{2}{3}$. If the
reservoir is full or nearly full any excess inflow is lost in an overflow. Derive a difference equation
for this model for $u_k$, the probability that the reservoir will eventually become empty given that it
initially contains $k$ units. Explain why the upper boundary conditions can be written $u_a = u_{a-1}$
and $u_a = u_{a-2}$. Show that the reservoir is certain to be empty at some time in the future.

The characteristic equation is
$$m^3 - 3m + 2 = (m - 1)^2(m + 2) = 0.$$
The general solution is (see Problem 2.15)
$$u_k = A + Bk + C(-2)^k.$$
The boundary conditions for the reservoir are
$$u_0 = 1, \qquad u_a = \tfrac{1}{3}u_a + \tfrac{2}{3}u_{a-1}, \qquad u_{a-1} = \tfrac{1}{3}u_a + \tfrac{2}{3}u_{a-2}.$$
The latter two conditions are equivalent to $u_a = u_{a-1} = u_{a-2}$. Hence
$$A + C = 1, \qquad A + Ba + C(-2)^a = A + B(a - 1) + C(-2)^{a-1} = A + B(a - 2) + C(-2)^{a-2},$$
which have the solutions $A = 1$, $B = C = 0$. The solution is $u_k = 1$, which means that the
reservoir is certain to empty at some future date.
2.17. Consider the standard gambler's ruin problem in which the total stake is $a$ and the gambler's
stake is $k$, and the gambler's probability of winning at each play is $p$ and of losing is $q = 1 - p$. Find
$u_k$, the probability of the gambler losing the game, by the following alternative method. List the
difference equation (2.2) as
$$u_2 - u_1 = s(u_1 - u_0) = s(u_1 - 1)$$
$$u_3 - u_2 = s(u_2 - u_1) = s^2(u_1 - 1)$$
$$\vdots$$
$$u_k - u_{k-1} = s(u_{k-1} - u_{k-2}) = s^{k-1}(u_1 - 1),$$
where $s = q/p$ $(p \neq \frac{1}{2})$ and $k = 2, 3, \ldots, a$. The boundary condition $u_0 = 1$ has been used in the first
equation. By adding the equations show that
$$u_k = u_1 + (u_1 - 1)\frac{s - s^k}{1 - s}.$$
Determine $u_1$ from the other boundary condition $u_a = 0$, and hence find $u_k$. Adapt the same
method for the special case $p = q = \frac{1}{2}$.

Addition of the equations gives
$$u_k - u_1 = (u_1 - 1)(s + s^2 + \cdots + s^{k-1}) = (u_1 - 1)\frac{s - s^k}{1 - s},$$
summing the geometric series. The condition $u_a = 0$ implies
$$-u_1 = (u_1 - 1)\frac{s - s^a}{1 - s}.$$
Hence
$$u_1 = \frac{s - s^a}{1 - s^a},$$
so that
$$u_k = \frac{s^k - s^a}{1 - s^a}.$$
For $p = q = \frac{1}{2}$, $s = 1$ and each difference equals $u_1 - 1$, so that $u_k = u_1 + (k - 1)(u_1 - 1)$; the
condition $u_a = 0$ then gives $u_1 = (a - 1)/a$ and hence $u_k = (a - k)/a$.
2.18. A car park has 100 parking spaces. Cars arrive and leave randomly. Arrivals or departures
of cars are equally likely, and it is assumed that simultaneous events have negligible probability.
The state of the car park changes whenever a car arrives or departs. Given that at some instant
there are $k$ cars in the car park, let $u_k$ be the probability that the car park first becomes full before
it becomes empty. What are the boundary conditions for $u_0$ and $u_{100}$? How many car movements
can be expected before this occurs?

The probability $u_k$ satisfies the difference equation
$$u_k = \tfrac{1}{2}u_{k+1} + \tfrac{1}{2}u_{k-1} \quad\text{or}\quad u_{k+1} - 2u_k + u_{k-1} = 0.$$
The general solution is $u_k = A + Bk$. The boundary conditions are $u_0 = 0$ and $u_{100} = 1$. Hence
$A = 0$ and $B = 1/100$, and $u_k = k/100$.
The expected number of car movements until the car park first becomes full or empty is $d_k = k(100 - k)$.
2.19. In a standard gambler's ruin problem with the usual parameters, the probability that the
gambler loses is given by
$$u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{1 - p}{p}.$$
If $p$ is close to $\frac{1}{2}$, given say by $p = \frac{1}{2} + \epsilon$ where $|\epsilon|$ is small, show, by using binomial expansions,
that
$$u_k = \frac{a - k}{a}\left[1 - 2k\epsilon - \frac{4k}{3}(a - 2k)\epsilon^2 + O(\epsilon^3)\right]$$
as $\epsilon \to 0$. (The order $O$ terminology is defined as follows: we say that a function $g(\epsilon) = O(\epsilon^b)$ as
$\epsilon \to 0$ if $g(\epsilon)/\epsilon^b$ is bounded in a neighbourhood which contains $\epsilon = 0$. See also the Appendix in the
book.)

Let $p = \frac{1}{2} + \epsilon$. Then $s = (1 - 2\epsilon)/(1 + 2\epsilon)$, and
$$u_k = \frac{(1 - 2\epsilon)^k(1 + 2\epsilon)^{-k} - (1 - 2\epsilon)^a(1 + 2\epsilon)^{-a}}{1 - (1 - 2\epsilon)^a(1 + 2\epsilon)^{-a}}.$$
Apply the binomial theorem to each term. The result is
$$u_k = \frac{a - k}{a}\left[1 - 2k\epsilon - \frac{4k}{3}(a - 2k)\epsilon^2 + O(\epsilon^3)\right].$$
[Symbolic computation of the series is a useful check.]
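Following that suggestion, here is one way the series could be checked symbolically for sample values of $a$ and $k$ (a sketch with sympy; the specific values a = 5, k = 2 are chosen only for illustration):

import sympy as sp

eps = sp.symbols('epsilon')
a, k = 5, 2                      # illustrative values only
s = (1 - 2*eps)/(1 + 2*eps)
u = sp.cancel((s**k - s**a)/(1 - s**a))
exact = sp.series(u, eps, 0, 3).removeO()
approx = sp.Rational(a - k, a)*(1 - 2*k*eps - sp.Rational(4*k, 3)*(a - 2*k)*eps**2)
print(sp.simplify(exact - approx))   # 0: agreement up to the order kept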
2.20. A gambler plays a game against a casino according to the following rules. The gambler and
casino each start with 10 chips. From a deck of 53 playing cards which includes a joker, cards are
randomly and successively drawn with replacement. If the card is red or the joker the casino wins 1
chip from the gambler, and if the card is black the gambler wins 1 chip from the casino. The game
continues until either player has no chips. What is the probability that the gambler wins? What
will be the expected duration of the game?

From (2.4) the probability $u_k$ that the gambler loses is
$$u_k = \frac{s^k - s^a}{1 - s^a},$$
with $k = 10$, $a = 20$, $p = 26/53$, and $s = 27/26$. Hence
$$u_{10} = \frac{(27/26)^{10} - (27/26)^{20}}{1 - (27/26)^{20}} \approx 0.593.$$
Therefore the probability that the gambler wins is approximately 0.407.
By (2.9)
$$d_k = \frac{1}{1 - 2p}\left[k - \frac{a(1 - s^k)}{1 - s^a}\right] \approx 98.8$$
for the given data.
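The two numbers can be reproduced directly from the quoted formulas (a sketch; variable names are illustrative):

k, a, p = 10, 20, 26/53
s = (1 - p)/p                      # 27/26
u = (s**k - s**a)/(1 - s**a)       # probability the gambler is ruined
d = (k - a*(1 - s**k)/(1 - s**a))/(1 - 2*p)
print(round(u, 3), round(1 - u, 3), round(d, 1))   # 0.593, 0.407, about 98.8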
2.21. In the standard gambler's ruin problem with total stake $a$ and gambler's stake $k$, the probability
that the gambler loses is
$$u_k = \frac{s^k - s^a}{1 - s^a},$$
where $s = (1 - p)/p$. Suppose that $u_k = \frac{1}{2}$, that is, fair odds. Express $k$ as a function of $a$. Show
that
$$k = \frac{\ln[\frac{1}{2}(1 + s^a)]}{\ln s}.$$

Given
$$u_k = \frac{s^k - s^a}{1 - s^a} \quad\text{and}\quad u_k = \tfrac{1}{2},$$
then $1 - s^a = 2(s^k - s^a)$ or $s^k = \frac{1}{2}(1 + s^a)$. Hence
$$k = \frac{\ln[\frac{1}{2}(1 + s^a)]}{\ln s},$$
but generally $k$ will not be an integer.
2.22. In a gambler's ruin game the probability that the gambler wins at each play is $\gamma_k$ and loses
is $1 - \gamma_k$, $(0 < \gamma_k < 1,\ 1 \leq k \leq a - 1)$, that is, the probability varies with the current stake. The
probability $u_k$ that the gambler eventually loses satisfies
$$u_k = \gamma_k u_{k+1} + (1 - \gamma_k)u_{k-1}, \qquad u_0 = 1, \quad u_a = 0.$$
Suppose that $u_k$ is a specified function such that $0 < u_k < 1$, $(1 \leq k \leq a - 1)$, $u_0 = 1$ and $u_a = 0$.
Express $\gamma_k$ in terms of $u_{k-1}$, $u_k$ and $u_{k+1}$.
Find $\gamma_k$ in the following cases:
(a) $u_k = (a - k)/a$;
(b) $u_k = (a^2 - k^2)/a^2$;
(c) $u_k = \frac{1}{2}[1 + \cos(k\pi/a)]$.

From the difference equation
$$\gamma_k = \frac{u_k - u_{k-1}}{u_{k+1} - u_{k-1}}.$$
(a) $u_k = (a - k)/a$. Then
$$\gamma_k = \frac{(a - k) - (a - k + 1)}{(a - k - 1) - (a - k + 1)} = \frac{1}{2},$$
which is to be anticipated from eqn (2.5).
(b) $u_k = (a^2 - k^2)/a^2$. Then
$$\gamma_k = \frac{(a^2 - k^2) - [a^2 - (k - 1)^2]}{[a^2 - (k + 1)^2] - [a^2 - (k - 1)^2]} = \frac{2k - 1}{4k}.$$
(c) $u_k = 1/(a + k)$. Then
$$\gamma_k = \frac{[1/(a + k)] - [1/(a + k - 1)]}{[1/(a + k + 1)] - [1/(a + k - 1)]} = \frac{a + k + 1}{2(a + k)}.$$
2.23. In a gambler's ruin game the probability that the gambler wins at each play is $\gamma_k$ and loses
is $1 - \gamma_k$, $(0 < \gamma_k < 1,\ 1 \leq k \leq a - 1)$, that is, the probability varies with the current stake. The
probability $u_k$ that the gambler eventually loses satisfies
$$u_k = \gamma_k u_{k+1} + (1 - \gamma_k)u_{k-1}, \qquad u_0 = 1, \quad u_a = 0.$$
Reformulate the difference equation as
$$u_{k+1} - u_k = \lambda_k(u_k - u_{k-1}),$$
where $\lambda_k = (1 - \gamma_k)/\gamma_k$. Hence show that
$$u_k = u_1 + \sigma_{k-1}(u_1 - 1), \qquad (k = 2, 3, \ldots, a)$$
where
$$\sigma_k = \lambda_1 + \lambda_1\lambda_2 + \cdots + \lambda_1\lambda_2\cdots\lambda_k.$$
Using the boundary condition at $k = a$, confirm that
$$u_k = \frac{\sigma_{a-1} - \sigma_{k-1}}{1 + \sigma_{a-1}}.$$
Check that this formula gives the usual answer if $\gamma_k = p \neq \frac{1}{2}$, a constant.

The difference equation can be expressed in the equivalent form
$$u_{k+1} - u_k = \lambda_k(u_k - u_{k-1}),$$
where $\lambda_k = (1 - \gamma_k)/\gamma_k$. Now list the equations as follows, noting that $u_0 = 1$:
$$u_2 - u_1 = \lambda_1(u_1 - 1)$$
$$u_3 - u_2 = \lambda_1\lambda_2(u_1 - 1)$$
$$\vdots$$
$$u_k - u_{k-1} = \lambda_1\lambda_2\cdots\lambda_{k-1}(u_1 - 1).$$
Adding these equations, we obtain
$$u_k - u_1 = \sigma_{k-1}(u_1 - 1),$$
where
$$\sigma_{k-1} = \lambda_1 + \lambda_1\lambda_2 + \cdots + \lambda_1\lambda_2\cdots\lambda_{k-1}.$$
The condition $u_a = 0$ implies
$$-u_1 = \sigma_{a-1}(u_1 - 1),$$
so that
$$u_1 = \frac{\sigma_{a-1}}{1 + \sigma_{a-1}}.$$
Finally
$$u_k = \frac{\sigma_{a-1} - \sigma_{k-1}}{1 + \sigma_{a-1}}.$$
If $\gamma_k = p \neq \frac{1}{2}$, then $\lambda_k = (1 - p)/p = s$, say, and
$$\sigma_k = s + s^2 + \cdots + s^k = \frac{s - s^{k+1}}{1 - s}.$$
Hence
$$u_k = \frac{(s - s^a)/(1 - s) - (s - s^k)/(1 - s)}{1 + (s - s^a)/(1 - s)} = \frac{s^k - s^a}{1 - s^a},$$
as required.
2.24. Suppose that a fair $n$-sided die is rolled $n$ independent times. A match is said to occur if side
$i$ is observed on the $i$th trial, where $i = 1, 2, \ldots, n$.
(a) Show that the probability of at least one match is
$$1 - \left(1 - \frac{1}{n}\right)^n.$$
(b) What is the limit of this probability as $n \to \infty$?
(c) What is the probability that just one match occurs in $n$ trials?
(d) What value does this probability approach as $n \to \infty$?
(e) What is the probability that two or more matches occur in $n$ trials?

(a) The probability of no matches is
$$\left(\frac{n - 1}{n}\right)^n.$$
The probability of at least one match is
$$1 - \left(\frac{n - 1}{n}\right)^n = 1 - \left(1 - \frac{1}{n}\right)^n.$$
(b) As $n \to \infty$,
$$\left(1 - \frac{1}{n}\right)^n \to e^{-1}.$$
Hence for large $n$, the probability of at least one match approaches $1 - e^{-1} = (e - 1)/e$.
(c) There is just one match with probability
$$n\cdot\frac{1}{n}\left(\frac{n - 1}{n}\right)^{n-1} = \left(\frac{n - 1}{n}\right)^{n-1}.$$
(d) As $n \to \infty$,
$$\left(\frac{n - 1}{n}\right)^{n-1} = \left(1 - \frac{1}{n}\right)^{-1}\left(1 - \frac{1}{n}\right)^n \to e^{-1}.$$
(e) The probability of two or more matches is the probability of at least one match less the probability of
exactly one match, namely
$$\left[1 - \left(\frac{n - 1}{n}\right)^n\right] - \left(\frac{n - 1}{n}\right)^{n-1}.$$
Chapter 3
Random Walks
3.1. In a simple random walk the probability that the walk advances by one step is $p$ and retreats by one
step is $q = 1 - p$. At step $n$ let the position of the walker be the random variable $X_n$. If the walk starts at
$x = 0$, enumerate all possible sample paths which lead to the value $X_4 = -2$. Verify that
$$P\{X_4 = -2\} = \binom{4}{1}pq^3.$$

If the walks start at $x = 0$ and finish at $x = -2$, then each walk must advance one step with
probability $p$ and retreat 3 steps with probability $q^3$. The possible walks are:
$$0 \to -1 \to -2 \to -3 \to -2$$
$$0 \to -1 \to -2 \to -1 \to -2$$
$$0 \to -1 \to 0 \to -1 \to -2$$
$$0 \to 1 \to 0 \to -1 \to -2$$
By (3.4),
$$P\{X_4 = -2\} = \binom{4}{1}pq^3.$$
3.2. A symmetric random walk starts from the origin. Find the probability that the walker is at the
origin at step 8. What is the probability also at step 8 that the walker is at the origin but that it is not the
first visit there?

By (3.4), the probability that the walker is at the origin at step 8 is given by
$$P(X_8 = 0) = \binom{8}{4}\left(\frac{1}{2}\right)^8 = \frac{8!}{4!4!}\cdot\frac{1}{2^8} = 0.273.$$
The generating function of the first returns $f_n$ is given by (see Section 3.4)
$$Q(s) = \sum_{n=0}^{\infty} f_n s^n = 1 - (1 - s^2)^{\frac{1}{2}}.$$
We require $f_8$ in the expansion of $Q(s)$. Thus, using the binomial theorem, the series expansion for $Q(s)$
is
$$Q(s) = \frac{1}{2}s^2 + \frac{1}{8}s^4 + \frac{1}{16}s^6 + \frac{5}{128}s^8 + O(s^{10}).$$
Therefore the probability of a first return at step 8 is $5/128 = 0.039$. Hence the probability that the walk
is at the origin at step 8 but not for the first time is $0.273 - 0.039 = 0.234$.
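The coefficients of Q(s) quoted above can be generated symbolically; a minimal sketch with sympy:

import sympy as sp

s = sp.symbols('s')
Q = 1 - sp.sqrt(1 - s**2)
print(sp.series(Q, s, 0, 10))
# s**2/2 + s**4/8 + s**6/16 + 5*s**8/128 + O(s**10)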
3.3. An asymmetric walk starts at the origin. From eqn (3.4), the probability that the walk reaches $x$ in $n$
steps is given by
$$v_{n,x} = \binom{n}{\frac{1}{2}(n + x)}p^{\frac{1}{2}(n+x)}q^{\frac{1}{2}(n-x)},$$
where $n$ and $x$ are both even or both odd. If $n = 4$, show that the mean position is $4(p - q)$,
confirming the result in Section 3.2.

The furthest positions that the walk can reach from the origin in 4 steps are $x = -4$ and $x = 4$, and
since $n$ is even, the only other positions reachable are $x = -2, 0, 2$. Hence the required mean value is
$$\mu = -4v_{4,-4} - 2v_{4,-2} + 2v_{4,2} + 4v_{4,4}
= -4\binom{4}{0}q^4 - 2\binom{4}{1}pq^3 + 2\binom{4}{3}p^3 q + 4\binom{4}{4}p^4$$
$$= -4q^4 - 8pq^3 + 8p^3 q + 4p^4 = 4(p - q)(p + q)^3 = 4(p - q).$$
3.4. The pgf for the first return distribution $\{f_n\}$, $(n = 1, 2, \ldots)$, to the origin in a symmetric random walk
is given by
$$F(s) = 1 - (1 - s^2)^{\frac{1}{2}}$$
(see Section 3.4).
(a) Using the binomial theorem find a formula for $f_n$, the probability that the first return occurs at the $n$-th
step.
(b) What is the variance of $f_n$?

Using the binomial theorem
$$F(s) = 1 - (1 - s^2)^{\frac{1}{2}} = 1 - \sum_{n=0}^{\infty}\binom{\frac{1}{2}}{n}(-1)^n s^{2n}.$$
(a) From the series, the probability of a first return at step $n$ is
$$f_n = (-1)^{\frac{1}{2}n+1}\binom{\frac{1}{2}}{\frac{1}{2}n} \ (n \text{ even}), \qquad f_n = 0 \ (n \text{ odd}).$$
(b) The variance of $f_n$ is defined by
$$V = F''(1) + F'(1) - [F'(1)]^2.$$
We can anticipate that the variance will be infinite (as is the case with the mean) since the
derivatives of $F(s)$ are unbounded as $s \to 1$.
3.5. A coin is spun $2n$ times and the sequence of heads and tails is recorded. What is the probability that
the number of heads equals the number of tails after $2n$ spins?

This problem can be viewed as a symmetric random walk starting at the origin in which a head is
represented as a step to the right, say, and a tail a step to the left. We require the probability that the
walk returns to the origin after $2n$ steps, which is (see Section 3.3)
$$v_{2n,0} = \frac{1}{2^{2n}}\binom{2n}{n} = \frac{(2n)!}{2^{2n}n!n!}.$$
3.6. For an asymmetric walk with parameters $p$ and $q = 1 - p$, the probability that the walk is at the origin
after $n$ steps is
$$q_n = v_{n,0} = \binom{n}{\frac{1}{2}n}p^{\frac{1}{2}n}q^{\frac{1}{2}n} \ (n \text{ even}), \qquad q_n = 0 \ (n \text{ odd}),$$
from eqn (3.4). Show that its generating function is
$$H(s) = (1 - 4pqs^2)^{-\frac{1}{2}}.$$
If $p \neq q$, show that the mean number of steps to the return is
$$m = H'(1) = \left[\frac{4pqs}{(1 - 4pqs^2)^{\frac{3}{2}}}\right]_{s=1} = \frac{4pq}{(1 - 4pq)^{\frac{3}{2}}}.$$
What is its variance?

The generating function $H(s)$ is defined by
$$H(s) = \sum_{n=0}^{\infty}\binom{2n}{n}p^n q^n s^{2n} = \sum_{n=0}^{\infty} 2^{2n}\binom{-\frac{1}{2}}{n}(-1)^n p^n q^n s^{2n} = (1 - 4pqs^2)^{-\frac{1}{2}},$$
using the binomial identity from Section 3.3 or the Appendix.
The mean number of steps to the return is
$$\mu = H'(1) = \left[\frac{4pqs}{(1 - 4pqs^2)^{\frac{3}{2}}}\right]_{s=1} = \frac{4pq}{(1 - 4pq)^{\frac{3}{2}}}.$$
The second derivative of $H(s)$ is
$$H''(s) = \frac{4pq + 32p^2q^2s^2}{(1 - 4pqs^2)^{\frac{5}{2}}}.$$
Hence the variance is
$$V = H''(1) + H'(1) - [H'(1)]^2
= \frac{4pq + 32p^2q^2}{(1 - 4pq)^{\frac{5}{2}}} + \frac{4pq}{(1 - 4pq)^{\frac{3}{2}}} - \frac{16p^2q^2}{(1 - 4pq)^3}
= \frac{4pq[(1 - 4pq)^{\frac{1}{2}}(2 + 4pq) - 4pq]}{(1 - 4pq)^3}.$$
3.7. Using the results of Problem 3.6 and eqn (3.12) relating the generating functions of the returns and
first returns to the origin, namely
$$H(s) - 1 = H(s)Q(s),$$
which is still valid for the asymmetric walk, show that
$$Q(s) = 1 - (1 - 4pqs^2)^{\frac{1}{2}},$$
where $p \neq q$. Show that a first return to the origin is not certain, unlike the situation in the symmetric
walk. Find the mean number of steps to the first return.

From Problem 3.6,
$$H(s) = (1 - 4pqs^2)^{-\frac{1}{2}},$$
so that, by (3.12),
$$Q(s) = 1 - \frac{1}{H(s)} = 1 - (1 - 4pqs^2)^{\frac{1}{2}}.$$
It follows that
$$Q(1) = \sum_{n=1}^{\infty} f_n = 1 - (1 - 4pq)^{\frac{1}{2}} = 1 - [(p + q)^2 - 4pq]^{\frac{1}{2}} = 1 - [(p - q)^2]^{\frac{1}{2}} = 1 - |p - q| < 1,$$
if $p \neq q$. Hence a first return to the origin is not certain.
The mean number of steps is
$$\mu = Q'(1) = \frac{4pq}{(1 - 4pq)^{\frac{1}{2}}} = \frac{4pq}{|p - q|}.$$
3.8. A symmetric random walk starts from the origin. Show that the walk does not revisit the origin in the
first $2n$ steps with probability
$$h_n = 1 - f_2 - f_4 - \cdots - f_{2n},$$
where $f_n$ is the probability that a first return occurs at the $n$-th step.
The generating function for the sequence $\{f_n\}$ is
$$Q(s) = 1 - (1 - s^2)^{\frac{1}{2}}$$
(see Section 3.4). Show that
$$f_n = (-1)^{\frac{1}{2}n+1}\binom{\frac{1}{2}}{\frac{1}{2}n} \ (n \text{ even}), \qquad f_n = 0 \ (n \text{ odd}), \qquad (n = 1, 2, 3, \ldots).$$
Show that $h_n$ satisfies the first-order difference equation
$$h_{n+1} - h_n = (-1)^{n+1}\binom{\frac{1}{2}}{n + 1}.$$
Verify that this equation has the general solution
$$h_n = C + \binom{2n}{n}\frac{1}{2^{2n}},$$
where $C$ is a constant. By calculating $h_1$, confirm that the probability of no return to the origin in the first
$2n$ steps is $\binom{2n}{n}/2^{2n}$.

The probability that a first return occurs at step $2j$ is $f_{2j}$: a first return cannot occur after an odd
number of steps. Therefore the probability $h_n$ that a first return has not occurred in the first $2n$ steps is
given by the difference
$$h_n = 1 - f_2 - f_4 - \cdots - f_{2n}.$$
The probability $f_m$, that the first return to the origin occurs at step $m$, is the coefficient of $s^m$ in the
expansion
$$Q(s) = 1 - (1 - s^2)^{\frac{1}{2}} = -\sum_{n=1}^{\infty}\binom{\frac{1}{2}}{n}(-s^2)^n.$$
Therefore
$$f_m = (-1)^{\frac{1}{2}m+1}\binom{\frac{1}{2}}{\frac{1}{2}m} \ (m \text{ even}), \qquad f_m = 0 \ (m \text{ odd}), \qquad (m = 1, 2, \ldots).$$
The difference
$$h_n = 1 - f_2 - f_4 - \cdots - f_{2n} = 1 - \sum_{j=1}^{n}(-1)^{j+1}\binom{\frac{1}{2}}{j}.$$
Hence $h_n$ satisfies the difference equation
$$h_{n+1} - h_n = -f_{2n+2} = (-1)^{n+1}\binom{\frac{1}{2}}{n + 1}.$$
The homogeneous part of the difference equation has a constant solution $C$, say. For the particular solution
try the choice suggested in the question, namely
$$h_n = \binom{2n}{n}\frac{1}{2^{2n}}.$$
Then
$$h_{n+1} - h_n = \binom{2n + 2}{n + 1}\frac{1}{2^{2n+2}} - \binom{2n}{n}\frac{1}{2^{2n}}
= -\binom{2n}{n}\frac{1}{2^{2n+1}(n + 1)}
= \frac{(-1)^{n+1}}{2(n + 1)}\binom{-\frac{1}{2}}{n}
= (-1)^{n+1}\binom{\frac{1}{2}}{n + 1},$$
using the binomial identity before (3.7). Hence
$$h_n = C + \frac{1}{2^{2n}}\binom{2n}{n}.$$
The initial condition is $h_1 = \frac{1}{2}$, from which it follows that $C = 0$. Therefore the probability that no return
to the origin has occurred in the first $2n$ steps is
$$h_n = \frac{1}{2^{2n}}\binom{2n}{n}.$$
3.9. A walk can be represented as a connected graph between coordinates $(n, y)$ where the ordinate $y$ is the
position on the walk, and the abscissa $n$ represents the number of steps. A walk of 7 steps which joins
$(0, 1)$ and $(7, 2)$ is shown in Fig. 3.1. Suppose that a walk starts at $(0, y_1)$ and finishes at $(n, y_2)$, where
$y_1 > 0$, $y_2 > 0$ and $n + y_2 - y_1$ is an even number. Suppose also that the walk first visits the origin at
$n = n_1$. Reflect that part of the path for which $n \leq n_1$ in the $n$-axis (see Fig. 3.1), and use a reflection
argument to show that the number of paths from $(0, y_1)$ to $(n, y_2)$ which touch or cross the $n$-axis equals
the number of all paths from $(0, -y_1)$ to $(n, y_2)$. This is known as the reflection principle.

Figure 3.1: Representation of a random walk

All paths from $(0, -y_1)$ to $(n, y_2)$ must cut the $n$-axis at least once. Let $(n_1, 0)$ be the first such contact
with the $n$-axis. Reflect the path for $n \leq n_1$ and $y \leq 0$ in the $n$-axis. The result is a path from $(0, y_1)$ to
$(n, y_2)$ which touches or cuts the $n$-axis at least once. All such paths must be included.
3.10. A walk starts at $(0, 1)$ and returns to $(2n, 1)$ after $2n$ steps. Using the reflection principle (see Problem
3.9) show that there are
$$\frac{(2n)!}{n!(n + 1)!}$$
different paths between the two points which do not ever revisit the origin. What is the probability that
the walk ends at $(2n, 1)$ after $2n$ steps without ever visiting the origin, assuming that the random walk is
symmetric?
Show that the probability that the first visit to the origin occurs after $2n + 1$ steps is
$$p_n = \frac{1}{2^{2n+1}}\cdot\frac{(2n)!}{n!(n + 1)!}.$$

Let $M(m, d)$ represent the total number of different paths in the $(n, y)$ plane which are of length $m$,
joining positions denoted by $y = y_1$ and $y = y_2$: here $d$ is the absolute difference $d = |y_2 - y_1|$. The total
number of paths from $(0, 1)$ to $(2n, 1)$ is
$$M(2n, 0) = \binom{2n}{n}.$$
By the reflection principle (Problem 3.9) the number of paths which cross the $n$-axis (that is, visit the
origin) is
$$M(2n, 2) = \binom{2n}{n - 1}.$$
Hence the number of paths from $(0, 1)$ to $(2n, 1)$ which do not visit the origin is
$$M(2n, 0) - M(2n, 2) = \binom{2n}{n} - \binom{2n}{n - 1} = \frac{(2n)!}{n!n!} - \frac{(2n)!}{(n - 1)!(n + 1)!} = \frac{(2n)!}{n!(n + 1)!}.$$
The total number of paths is $2^{2n}$, so the probability that the walk ends at $(2n, 1)$ without visiting the
origin is $(2n)!/[2^{2n}n!(n + 1)!]$. Also, to visit the origin for the first time at step $2n + 1$, the walk
must be at $y = 1$ at step $2n$ without having visited the origin, from where there is a probability of $\frac{1}{2}$ that
the walk moves to the origin. Hence the probability is
$$p_n = \frac{1}{2^{2n}}\cdot\frac{(2n)!}{n!(n + 1)!}\cdot\frac{1}{2} = \frac{1}{2^{2n+1}}\cdot\frac{(2n)!}{n!(n + 1)!}.$$
3.11. A symmetric random walk starts at the origin. Let $f_{n,1}$ be the probability that the first visit to
position $x = 1$ occurs at the $n$-th step. Obviously, $f_{2n,1} = 0$. The result from Problem 3.10 can be adapted
to give
$$f_{2n+1,1} = \frac{1}{2^{2n+1}}\cdot\frac{(2n)!}{n!(n + 1)!}, \qquad (n = 0, 1, 2, \ldots).$$
Suppose that its pgf is
$$G_1(s) = \sum_{n=0}^{\infty} f_{2n+1,1}s^{2n+1}.$$
Show that
$$G_1(s) = [1 - (1 - s^2)^{\frac{1}{2}}]/s.$$
[Hint: the identity
$$\frac{1}{2^{2n+1}}\cdot\frac{(2n)!}{n!(n + 1)!} = (-1)^n\binom{\frac{1}{2}}{n + 1}, \qquad (n = 0, 1, 2, \ldots)$$
is useful in the derivation of $G_1(s)$.]
Show that any walk starting at the origin is certain to visit $x > 0$ at some future step, but that the
mean number of steps in achieving this is infinite.

The result
$$f_{2n+1,1} = \frac{1}{2^{2n+1}}\cdot\frac{(2n)!}{n!(n + 1)!}$$
is simply the result in the last part of Problem 3.10.
For the pgf,
$$G_1(s) = \sum_{n=0}^{\infty} f_{2n+1,1}s^{2n+1} = \sum_{n=0}^{\infty}\frac{(2n)!\,s^{2n+1}}{2^{2n+1}n!(n + 1)!}.$$
The identity before (3.7) (in the book) states that
$$\binom{2n}{n} = (-1)^n\binom{-\frac{1}{2}}{n}2^{2n}.$$
Therefore, using this result,
$$\frac{(2n)!}{2^{2n+1}n!(n + 1)!} = \binom{2n}{n}\frac{1}{2^{2n+1}(n + 1)} = (-1)^n\binom{-\frac{1}{2}}{n}\frac{1}{2(n + 1)} = (-1)^n\binom{\frac{1}{2}}{n + 1}.$$
Hence
$$G_1(s) = \sum_{n=0}^{\infty}(-1)^n\binom{\frac{1}{2}}{n + 1}s^{2n+1}
= \frac{1}{s}\left[\binom{\frac{1}{2}}{1}s^2 - \binom{\frac{1}{2}}{2}s^4 + \binom{\frac{1}{2}}{3}s^6 - \cdots\right]
= \frac{1}{s}\left[1 - (1 - s^2)^{\frac{1}{2}}\right],$$
using the binomial theorem.
That $G_1(1) = 1$ implies that the random walk is certain to visit $x > 0$ at some future step. However,
$G_1'(1) = \infty$, which means that the expected number of steps to that event is infinite.
3.12. A symmetric random walk starts at the origin. Let $f_{n,x}$ be the probability that the first visit to position
$x$ occurs at the $n$-th step (as usual, $f_{n,x} = 0$ if $n + x$ is an odd number). Explain why
$$f_{n,x} = \sum_{k=1}^{n-1} f_{n-k,x-1}f_{k,1}, \qquad (n \geq x > 1).$$
If $G_x(s)$ is its pgf, deduce that
$$G_x(s) = \{G_1(s)\}^x,$$
where $G_1(s)$ is given explicitly in Problem 3.11. What are the probabilities that the walk first visits $x = 3$
at the steps $n = 3$, $n = 5$ and $n = 7$?

Consider $k = 1$. The first visit to $x - 1$ has probability $f_{n-1,x-1}$ in $n - 1$ steps. Having reached there,
the walk must first visit $x$ in one further step with probability $f_{1,1}$. Hence the probability is $f_{n-1,x-1}f_{1,1}$.
If $k = 2$, the first visit to $x - 1$ in $n - 2$ steps occurs with probability $f_{n-2,x-1}$: its first visit to $x$ must
occur after two steps. Hence the probability is $f_{n-2,x-1}f_{2,1}$. And so on. The sum of these probabilities
gives
$$f_{n,x} = \sum_{k=1}^{n-1} f_{n-k,x-1}f_{k,1}, \qquad (n \geq x > 1).$$
Multiply both sides of this equation by $s^n$ and sum over $n$ from $n = x$:
$$G_x(s) = \sum_{n=x}^{\infty}\sum_{k=1}^{n-1} f_{n-k,x-1}f_{k,1}s^n = G_{x-1}(s)G_1(s).$$
By repeated application of this difference equation, it follows that
$$G_x(s) = \left\{\frac{1}{s}\left[1 - (1 - s^2)^{\frac{1}{2}}\right]\right\}^x.$$
For $x = 3$,
$$G_3(s) = \left\{\frac{1}{s}\left[1 - (1 - s^2)^{\frac{1}{2}}\right]\right\}^3.$$
Expansion of this function as a Taylor series in $s$ gives the coefficients and probabilities:
$$G_3(s) = \frac{1}{8}s^3 + \frac{3}{32}s^5 + \frac{9}{128}s^7 + O(s^9),$$
so the probabilities of a first visit to $x = 3$ at steps 3, 5 and 7 are $\frac{1}{8}$, $\frac{3}{32}$ and $\frac{9}{128}$ respectively.
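The expansion of the cubed pgf can be checked symbolically; a brief sketch with sympy:

import sympy as sp

s = sp.symbols('s')
G1 = (1 - sp.sqrt(1 - s**2))/s
print(sp.series(G1**3, s, 0, 9))
# s**3/8 + 3*s**5/32 + 9*s**7/128 + O(s**9)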
3.13. Problem 3.12 looks at the probability of a first visit to position $x \geq 1$ at the $n$-th step in a symmetric
random walk which starts at the origin. Why is the pgf for the first visit to position $x$, where $|x| \geq 1$, given
by
$$G_x(s) = \{G_1(s)\}^{|x|},$$
where $G_1(s)$ is defined in Problem 3.11?

First visits to $x > 0$ and to $-x$ at step $n$ must be equally likely. Hence $f_{n,x} = f_{n,-x}$. Therefore
$$G_x(s) = \left\{\frac{1}{s}\left[1 - (1 - s^2)^{\frac{1}{2}}\right]\right\}^{|x|}.$$
3.14. An asymmetric walk has parameters $p$ and $q = 1 - p \ne p$. Let $g_{n,1}$ be the probability that the first visit to $x = 1$ occurs at the $n$-th step. As in Problem 3.11, $g_{2n,1} = 0$. It was effectively shown in Problem 3.10 that the number of paths from the origin which return to the origin after $2n$ steps without visiting $x = 1$ is
\[
\frac{(2n)!}{n!(n+1)!}.
\]
Explain why
\[
g_{2n+1,1} = \frac{(2n)!}{n!(n+1)!}\,p^{n+1}q^{n}.
\]
Suppose that its pgf is
\[
G_1(s) = \sum_{n=0}^{\infty} g_{2n+1,1}\,s^{2n+1}.
\]
Show that
\[
G_1(s) = [1 - (1 - 4pqs^2)^{\frac{1}{2}}]/(2qs).
\]
(The identity in Problem 3.11 is required again.)
What is the probability that the walk ever visits $x > 0$? How does this result compare with that for the symmetric random walk?
What is the pgf for the distribution of first visits of the walk to $x = -1$ at step $2n + 1$?

The probability that the first visit to $x = 1$ occurs at the $(2n+1)$-th step is $g_{2n+1,1}$. The number of paths of length $2n$ which never visit $x = 1$ is (adapting the answer in Problem 3.10)
\[
\frac{(2n)!}{n!(n+1)!}.
\]
The consequent probability of this occurrence is, since there are $n$ steps to the right with probability $p$ and $n$ to the left with probability $q$,
\[
\frac{(2n)!}{n!(n+1)!}\,p^{n}q^{n}.
\]
The probability that the next step then visits $x = 1$ is
\[
g_{2n+1,1} = \frac{(2n)!}{n!(n+1)!}\,p^{n+1}q^{n},
\]
which is the previous probability multiplied by $p$. Using the identity in Problem 3.11,
\[
g_{2n+1,1} = \binom{\frac{1}{2}}{n+1}(-1)^{n}(2p)^{n}(2q)^{n}\,2p.
\]
Its pgf $G_1(s)$ is given by
\[
G_1(s) = \sum_{n=0}^{\infty}\binom{\frac{1}{2}}{n+1}(-1)^{n}(4pq)^{n}\,2p\,s^{2n+1} = [1 - (1 - 4pqs^2)^{\frac{1}{2}}]/(2qs),
\]
using a binomial expansion.

Use the argument that any walk which enters $x > 0$ must first visit $x = 1$, as follows. The probability that the walk's first visit to $x = 1$ occurs at all is
\[
\sum_{n=0}^{\infty} g_{2n+1,1} = G_1(1) = \frac{1}{2q}\left[1 - \{(p+q)^2 - 4pq\}^{\frac{1}{2}}\right] = \frac{1}{2q}[1 - |p - q|].
\]
If $p \ge q$ this probability equals 1, as for the symmetric walk; if $p < q$ it equals $p/q < 1$, so a visit to $x > 0$ is no longer certain.

A symmetry argument in which $p$ and $q$ are interchanged gives the pgf for the distribution of first visits to $x = -1$, namely
\[
H_1(s) = [1 - (1 - 4pqs^2)^{\frac{1}{2}}]/(2ps).
\]
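A minimal simulation sketch (not in the original text) can be used to compare $G_1(1) = [1 - |p-q|]/(2q)$ with the empirical chance of ever visiting $x = 1$. Because the simulation must stop after finitely many steps, the estimate is a slight underestimate when the true probability is 1; the cut-off of 2000 steps below is an arbitrary assumption.

# Python sketch: probability of ever visiting x = 1 versus G_1(1)
import random

def ever_visits_one(p, max_steps=2000):
    """Crude check: does the walk reach x = 1 within max_steps steps?"""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if random.random() < p else -1
        if pos == 1:
            return True
    return False

for p in (0.4, 0.5, 0.6):
    q = 1 - p
    exact = (1 - abs(p - q)) / (2 * q)   # G_1(1) from the solution above
    trials = 20_000
    est = sum(ever_visits_one(p) for _ in range(trials)) / trials
    print(f"p = {p}: simulated {est:.3f}, G_1(1) = {exact:.3f}")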
3.15. It was shown in Section 3.3 that, in a random walk with parameters $p$ and $q = 1 - p$, the probability that a walk is at position $x$ at step $n$ is given by
\[
v_{n,x} = \binom{n}{\frac{1}{2}(n+x)}p^{\frac{1}{2}(n+x)}q^{\frac{1}{2}(n-x)}, \quad |x| \le n,
\]
where $\frac{1}{2}(n + x)$ must be an integer. Verify that $v_{n,x}$ satisfies the difference equation
\[
v_{n+1,x} = pv_{n,x-1} + qv_{n,x+1},
\]
subject to the initial conditions
\[
v_{0,0} = 1, \quad v_{0,x} = 0, \quad (x \ne 0).
\]
Note that this difference equation has differences on two arguments.
Can you develop a direct argument which justifies the difference equation for the random walk?

Given
\[
v_{n,x} = \binom{n}{\frac{1}{2}(n+x)}p^{\frac{1}{2}(n+x)}q^{\frac{1}{2}(n-x)}, \quad |x| \le n,
\]
then
\[
pv_{n,x-1} + qv_{n,x+1} = p\binom{n}{\frac{1}{2}(n+x-1)}p^{\frac{1}{2}(n+x-1)}q^{\frac{1}{2}(n-x+1)} + q\binom{n}{\frac{1}{2}(n+x+1)}p^{\frac{1}{2}(n+x+1)}q^{\frac{1}{2}(n-x-1)}
\]
\[
= p^{\frac{1}{2}(n+x+1)}q^{\frac{1}{2}(n-x+1)}\left[\binom{n}{\frac{1}{2}(n+x-1)} + \binom{n}{\frac{1}{2}(n+x+1)}\right]
\]
\[
= p^{\frac{1}{2}(n+x+1)}q^{\frac{1}{2}(n-x+1)}\,\frac{n!}{[\frac{1}{2}(n+x+1)]!\,[\frac{1}{2}(n-x+1)]!}\left[\tfrac{1}{2}(n+x+1) + \tfrac{1}{2}(n-x+1)\right]
\]
\[
= p^{\frac{1}{2}(n+x+1)}q^{\frac{1}{2}(n-x+1)}\binom{n+1}{\frac{1}{2}(n+x+1)} = v_{n+1,x}.
\]
A direct argument is as follows: to be at position $x$ at step $n+1$ the walk must either have been at $x-1$ at step $n$ and stepped right (probability $p$), or have been at $x+1$ at step $n$ and stepped left (probability $q$); these two mutually exclusive routes give the difference equation.
3.16. In the usual notation, $v_{2n,0}$ is the probability that, in a symmetric random walk, the walk visits the origin after $2n$ steps. Using the difference equation from Problem 3.15, $v_{2n,0}$ satisfies
\[
v_{2n,0} = \tfrac{1}{2}v_{2n-1,-1} + \tfrac{1}{2}v_{2n-1,1} = v_{2n-1,1}.
\]
How can the last step be justified? Let
\[
G_1(s) = \sum_{n=1}^{\infty} v_{2n-1,1}\,s^{2n-1}
\]
be the pgf of the distribution $\{v_{2n-1,1}\}$. Show that
\[
G_1(s) = [(1 - s^2)^{-\frac{1}{2}} - 1]/s.
\]
By expanding $G_1(s)$ as a power series in $s$ show that
\[
v_{2n-1,1} = \binom{2n-1}{n}\frac{1}{2^{2n-1}}.
\]
By a repetition of the argument show that
\[
G_2(s) = \sum_{n=0}^{\infty} v_{2n,2}\,s^{2n} = [(2 - s^2)(1 - s^2)^{-\frac{1}{2}} - 2]/s^2.
\]

The last step is justified by a symmetry argument: $v_{2n-1,-1} = v_{2n-1,1}$. Multiply both sides of the difference equation by $s^{2n}$ and sum from $n = 1$ to infinity. Then
\[
\sum_{n=1}^{\infty} v_{2n,0}\,s^{2n} = s\sum_{n=1}^{\infty} v_{2n-1,1}\,s^{2n-1}.
\]
Therefore, in the notation of the problem,
\[
H(s) - 1 = sG_1(s), \quad \text{where} \quad H(s) = \sum_{n=0}^{\infty} v_{2n,0}\,s^{2n} = (1 - s^2)^{-\frac{1}{2}}.
\]
Therefore
\[
G_1(s) = [(1 - s^2)^{-\frac{1}{2}} - 1]/s,
\]
as required.
From the series for $G_1(s)$ expanded as a binomial series, the general coefficient is
\[
v_{2n-1,1} = \binom{-\frac{1}{2}}{n}(-1)^{n} = \binom{2n}{n}\frac{1}{2^{2n}} = \frac{(2n)!}{2^{2n}\,n!\,n!} = \binom{2n-1}{n}\frac{1}{2^{2n-1}}.
\]
From the difference equation
\[
v_{2n+1,1} = \tfrac{1}{2}v_{2n,0} + \tfrac{1}{2}v_{2n,2}.
\]
Multiplying by $s^{2n+1}$ and summing over $n$,
\[
G_1(s) = \tfrac{1}{2}\sum_{n=0}^{\infty} v_{2n,0}\,s^{2n+1} + \tfrac{1}{2}\sum_{n=0}^{\infty} v_{2n,2}\,s^{2n+1} = \tfrac{1}{2}sH(s) + \tfrac{1}{2}sG_2(s).
\]
Therefore
\[
G_2(s) = \frac{2G_1(s) - sH(s)}{s} = [(2 - s^2)(1 - s^2)^{-\frac{1}{2}} - 2]/s^2.
\]
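As a check (not in the original text), the closed forms derived above can be expanded symbolically; the sketch below assumes SymPy is available and compares the series coefficients of $G_1(s)$ with the formula $v_{2n-1,1} = \binom{2n-1}{n}2^{-(2n-1)}$.

# Python sketch: expand G_1(s) and G_2(s) and check the coefficient formula
from sympy import symbols, series, binomial, Rational

s = symbols('s')
G1 = ((1 - s**2)**Rational(-1, 2) - 1) / s
G2 = ((2 - s**2) * (1 - s**2)**Rational(-1, 2) - 2) / s**2

print(series(G1, s, 0, 8))   # coefficients of s, s^3, s^5 are v_{1,1}, v_{3,1}, v_{5,1}
print(series(G2, s, 0, 8))   # coefficients of s^2, s^4, ... are v_{2,2}, v_{4,2}, ...

# closed form v_{2n-1,1} = C(2n-1, n) / 2^(2n-1)
for n in range(1, 4):
    print(n, binomial(2*n - 1, n) / 2**(2*n - 1))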
3.17. A random walk takes place on a circle which is marked out with $n$ positions. Thus, as shown in Fig. 3.2, position $n$ is the same as position $O$. This is known as a cyclic random walk of period $n$. A symmetric random walk starts at $O$. What is the probability that the walk is at $O$ after $j$ steps in the cases:
(a) $j < n$;
(b) $n \le j < 2n$?
Distinguish carefully the cases in which $j$ and $n$ are even and odd.

(a) $j < n$. The walk cannot circumscribe the circle, so this case is the same as the walk on a line. Let $p_j$ be the probability that the walk is at $O$ at step $j$. Then by (3.6)
\[
p_j = v_{j,0} = \binom{j}{\frac{1}{2}j}\frac{1}{2^{j}} \quad (j \text{ even}), \qquad p_j = 0 \quad (j \text{ odd}).
\]
[Figure 3.2: The cyclic random walk of period n for Problem 3.17.]
(b) $n \le j < 2n$. Since position $n$ (that is, $O$) can now be reached in both the clockwise and counterclockwise directions,
\[
p_j = v_{j,0} + v_{j,n} + v_{j,-n} = \left[\binom{j}{\frac{1}{2}j} + \binom{j}{\frac{1}{2}(j+n)} + \binom{j}{\frac{1}{2}(j-n)}\right]\frac{1}{2^{j}} \quad (j, n \text{ both even}),
\]
\[
p_j = 0 \quad (j \text{ odd}, n \text{ even, or } j \text{ even}, n \text{ odd}),
\]
\[
p_j = \left[\binom{j}{\frac{1}{2}(j+n)} + \binom{j}{\frac{1}{2}(j-n)}\right]\frac{1}{2^{j}} \quad (j, n \text{ both odd}).
\]
3.18. An unrestricted random walk with parameters $p$ and $q$ starts from the origin, and lasts for 50 paces. Estimate the probability that the walk ends at 12 or more paces from the origin in the cases:
(a) $p = q = \frac{1}{2}$;
(b) $p = 0.6$, $q = 0.4$.

Consult Section 3.2. From (3.2)
\[
Z_n = \frac{X_n - n(p - q)}{\sqrt{4npq}} \approx N(0, 1),
\]
where $X_n$ is the random variable of the position of the random walk at step $n$. Since $n = 50$ is discrete we use the approximation
\[
P(-11 \le X_{50} \le 11) \approx P(-11.5 < X_{50} < 11.5).
\]
(a) Symmetric random walk: $p = q = \frac{1}{2}$. Then
\[
-1.626 = -\frac{11.5}{\sqrt{50}} < Z_{50} = \frac{X_{50}}{\sqrt{50}} < \frac{11.5}{\sqrt{50}} = 1.626.
\]
Hence
\[
P(-1.626 < Z_{50} < 1.626) = \Phi(1.626) - \Phi(-1.626) = 2\Phi(1.626) - 1 = 0.896.
\]
Therefore the probability that the final position is 12 or more paces from the origin is $1 - 0.896 = 0.104$ approximately.
(b) $p = 0.6$, $q = 0.4$. The bounds on $Z_{50}$ are given by
\[
-3.103 = \frac{-11.5 - 10}{\sqrt{48}} < Z_{50} = \frac{X_{50} - 10}{\sqrt{48}} < \frac{11.5 - 10}{\sqrt{48}} = 0.217.
\]
Hence
\[
P(-3.103 < Z_{50} < 0.217) = \Phi(0.217) - \Phi(-3.103) = 0.585.
\]
The probability that the final position is 12 or more paces from the origin is $1 - 0.585 = 0.415$.
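The normal-approximation arithmetic above can be reproduced with a few lines of Python (an illustrative sketch, not part of the original solution), using the error function for the standard normal cdf.

# Python sketch: normal approximation for P(|X_50| >= 12)
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n = 50
for p in (0.5, 0.6):
    q = 1 - p
    mean, var = n * (p - q), 4 * n * p * q
    lo = (-11.5 - mean) / sqrt(var)
    hi = (11.5 - mean) / sqrt(var)
    inside = Phi(hi) - Phi(lo)
    print(f"p = {p}: P(final position 12 or more from origin) ~ {1 - inside:.3f}")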
3.19. In an unrestricted random walk with parameters $p$ and $q$, for what value of $p$ are the mean and variance of the probability distribution of the position of the walk at stage $n$ the same?

From Section 3.2 the mean and variance of $X_n$, the random variable of the position of the walk at step $n$, are given by
\[
E(X_n) = n(p - q), \qquad V(X_n) = 4npq,
\]
where $q = 1 - p$. The mean and variance are equal if $2p - 1 = 4p(1 - p)$, that is, if
\[
4p^2 - 2p - 1 = 0.
\]
The required probability is $p = \frac{1}{4}(1 + \sqrt{5})$.
3.20. Two walkers each perform symmetric random walks with synchronized steps, both starting from the origin at the same time. What is the probability that they are both at the origin at step $n$?

If $A$ and $B$ are the walkers, then the probability $a_n$ that $A$ is at the origin is given by
\[
a_n = \binom{n}{\frac{1}{2}n}\frac{1}{2^{n}} \quad (n \text{ even}), \qquad a_n = 0 \quad (n \text{ odd}).
\]
The probability $b_n$ for $B$ is given by the same formula. They can only both be at the origin if $n$ is even, in which case the probability that they are both there is
\[
a_nb_n = \binom{n}{\frac{1}{2}n}^{2}\frac{1}{2^{2n}}.
\]
3.21. A random walk takes place on a two-dimensional lattice as shown in Fig. 3.3. In the example shown the walk starts at (0, 0) and ends at (2, −1) after 13 steps. In this walk direct diagonal steps are not permitted. [Figure 3.3: A two-dimensional random walk.] We are interested in the probability that the symmetric random walk, which starts at the origin, has returned there after $2n$ steps. Symmetry in the two-dimensional walk means that there is a probability of $\frac{1}{4}$ that, at any position, the walk goes right, left, up, or down at the next step. The total number of different walks of length $2n$ which start at the origin is $4^{2n}$. For the walk considered, the number of right steps (positive $x$ direction) must equal the number of left steps, and the number of steps up (positive $y$ direction) must equal those down. Also the number of right steps must range from 0 to $n$, and the corresponding steps up from $n$ to 0. Explain why the probability that the walk returns to the origin after $2n$ steps is
\[
p_{2n} = \frac{(2n)!}{4^{2n}}\sum_{r=0}^{n}\frac{1}{[r!(n-r)!]^{2}}.
\]
Prove the two identities
\[
\frac{(2n)!}{[r!(n-r)!]^{2}} = \binom{2n}{n}\binom{n}{r}^{2}, \qquad \binom{2n}{n} = \sum_{r=0}^{n}\binom{n}{r}^{2}.
\]
[Hint: compare the coefficients of $x^n$ in $(1+x)^{2n}$ and $[(1+x)^n]^2$.] Hence show that
\[
p_{2n} = \frac{1}{4^{2n}}\binom{2n}{n}^{2}.
\]
Calculate $p_2$, $p_4$, $1/(\pi p_{40})$, $1/(\pi p_{80})$. How would you guess that $p_{2n}$ behaves for large $n$?

At each intersection there are 4 possible steps. Hence there are $4^{2n}$ different paths of length $2n$ which start at the origin.
For a walk which returns to the origin there must be $r$, say, left and $r$ right steps, and $n-r$ up and $n-r$ down steps $(r = 0, 1, 2, \ldots, n)$ to ensure the return. For fixed $r$, the number of ways in which $r$ left, $r$ right, $(n-r)$ up and $(n-r)$ down steps can be chosen from $2n$ steps is given by the multinomial formula
\[
\frac{(2n)!}{r!\,r!\,(n-r)!\,(n-r)!}.
\]
For all $r$, the total number of ways is
\[
\sum_{r=0}^{n}\frac{(2n)!}{r!\,r!\,(n-r)!\,(n-r)!}.
\]
Therefore the probability that a return to the origin occurs at step $2n$ is
\[
p_{2n} = \frac{(2n)!}{4^{2n}}\sum_{r=0}^{n}\frac{1}{[r!(n-r)!]^{2}}.
\]
For example, if $n = 2$, then
\[
p_4 = \frac{4!}{4^{4}}\sum_{r=0}^{2}\frac{1}{[r!(2-r)!]^{2}} = \frac{24}{256}\left[\frac{1}{4} + 1 + \frac{1}{4}\right] = \frac{9}{64}.
\]
For the first identity
\[
\frac{(2n)!}{[r!(n-r)!]^{2}} = \frac{(2n)!}{n!\,n!}\left[\frac{n!}{r!(n-r)!}\right]^{2} = \binom{2n}{n}\binom{n}{r}^{2}.
\]
For the second identity, $\binom{2n}{n}$ is the coefficient of $x^n$ in the expansion of $(1+x)^{2n}$, which is also the coefficient of $x^n$ in the expansion of $[(1+x)^n]^2 = \left[1 + \binom{n}{1}x + \binom{n}{2}x^2 + \cdots + \binom{n}{n}x^n\right]^2$, namely
\[
\binom{n}{0}\binom{n}{n} + \binom{n}{1}\binom{n}{n-1} + \cdots + \binom{n}{n}\binom{n}{0} = \sum_{r=0}^{n}\binom{n}{r}^{2},
\]
since $\binom{n}{r} = \binom{n}{n-r}$. Hence, combining the two identities,
\[
p_{2n} = \frac{1}{4^{2n}}\binom{2n}{n}\sum_{r=0}^{n}\binom{n}{r}^{2} = \frac{1}{4^{2n}}\binom{2n}{n}^{2}.
\]
The computed values are $p_2 = \frac{1}{4}$, $p_4 = \frac{9}{64}$, $p_{40} = 0.01572$ and $p_{80} = 0.00791$. Then
\[
\frac{1}{\pi p_{40}} = 20.25, \qquad \frac{1}{\pi p_{80}} = 40.25,
\]
which imply that possibly $p_{2n} \approx 1/(\pi n)$ as $n \to \infty$.
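The numerical values quoted above are easily reproduced; the short Python sketch below (not part of the original text) evaluates $p_{2n}$ exactly and displays $1/(\pi p_{2n})$, which is close to $n$ for large $n$.

# Python sketch: return probabilities for the two-dimensional symmetric walk
from math import comb, pi

def p_return(two_n):
    """Exact probability of being back at the origin at step 2n."""
    n = two_n // 2
    return comb(2 * n, n) ** 2 / 4 ** (2 * n)

print(p_return(2), p_return(4))          # 0.25 and 0.140625 (= 1/4 and 9/64)
for two_n in (40, 80):
    p = p_return(two_n)
    print(two_n, round(p, 5), round(1 / (pi * p), 2))   # close to n = two_n / 2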
3.22. A random walk takes place on the positions $\{\ldots, -2, -1, 0, 1, 2, \ldots\}$. The walk starts at 0. At step $n$, the walker has a probability $q_n$ of advancing one position, or a probability $1 - q_n$ of retreating one step (note that the probability depends on the step, not the position of the walker). Find the expected position of the walker at step $n$. Show that if $q_n = \frac{1}{2} + r_n$, $(-\frac{1}{2} < r_n < \frac{1}{2})$, and the series $\sum_{j=1}^{\infty} r_j$ is convergent, then the expected position of the walk will remain finite as $n \to \infty$.

If $X_n$ is the random variable representing the position of the walker at step $n$, then
\[
P(X_{n+1} = j + 1 \mid X_n = j) = q_n, \qquad P(X_{n+1} = j - 1 \mid X_n = j) = 1 - q_n.
\]
If $W_i$ is the modified Bernoulli random variable (Section 3.2), then
\[
E(X_n) = E\left(\sum_{i=1}^{n} W_i\right) = \sum_{i=1}^{n} E(W_i) = \sum_{i=1}^{n}\left[1\cdot q_i + (-1)(1 - q_i)\right] = 2\sum_{i=1}^{n} q_i - n.
\]
Let $q_n = \frac{1}{2} + r_n$, $(-\frac{1}{2} < r_n < \frac{1}{2})$. Then
\[
E(X_n) = 2\sum_{i=1}^{n}\left(\tfrac{1}{2} + r_i\right) - n = 2\sum_{i=1}^{n} r_i.
\]
Hence $E(X_n)$ remains finite as $n \to \infty$ if the series on the right is convergent.
3.23. A symmetric random walk starts at $k$ on the positions $0, 1, 2, \ldots, a$, where $0 < k < a$. As in the gambler's ruin problem, the walk stops whenever 0 or $a$ is first reached. Show that the expected number of visits to position $j$, where $0 < j < k$, is $2j(a - k)/a$ before the walk stops.

One approach to this problem is by repeated application of result (2.5) for gambler's ruin.
A walk which starts at $k$ first reaches $j$ before $a$ with probability (by (2.5))
\[
p = \frac{(a - j) - (k - j)}{a - j} = \frac{a - k}{a - j}.
\]
A walk which starts at $j$ reaches $a$ (and stops) before returning to $j$ (again by (2.5)) with probability
\[
\frac{1}{2}\cdot\frac{1}{a - j} = \frac{1}{2(a - j)},
\]
and reaches 0 before returning to $j$ with probability
\[
\frac{1}{2}\cdot\frac{j - (j - 1)}{j} = \frac{1}{2j}.
\]
Hence the probability that the walk from $j$ stops without returning to $j$ is
\[
q = \frac{1}{2(a - j)} + \frac{1}{2j} = \frac{a}{2j(a - j)}.
\]
Given that the walk is at $j$, it next visits $j$ again with probability
\[
r = 1 - q = 1 - \frac{a}{2j(a - j)} = \frac{2j(a - j) - a}{2j(a - j)}.
\]
Therefore the probability that the walk which starts from $k$ visits $j$ exactly $m$ times before stopping is
\[
h_m = pr^{m-1}q.
\]
The expected number of visits to $j$ is
\[
\mu = \sum_{m=1}^{\infty} mh_m = pq\sum_{m=1}^{\infty} mr^{m-1} = \frac{pq}{(1 - r)^2},
\]
summing the quasi-geometric series. Substituting for $p$, $q$, and $r$,
\[
\mu = \frac{a - k}{a - j}\cdot\frac{a}{2j(a - j)}\cdot\frac{4j^{2}(a - j)^{2}}{a^{2}} = \frac{2j(a - k)}{a}.
\]
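The formula $2j(a-k)/a$ can also be verified empirically. The Python sketch below (an addition, not part of the original solution) simulates the absorbed walk and counts visits to $j$; the particular values $k = 6$, $j = 3$, $a = 10$ are arbitrary.

# Python sketch: expected visits to j before absorption at 0 or a
import random

def visits_to_j(k, j, a):
    """Count visits to j for a symmetric walk started at k, absorbed at 0 and a."""
    pos, visits = k, 0
    while 0 < pos < a:
        pos += 1 if random.random() < 0.5 else -1
        if pos == j:
            visits += 1
    return visits

k, j, a, trials = 6, 3, 10, 20_000
mean = sum(visits_to_j(k, j, a) for _ in range(trials)) / trials
print(f"simulated {mean:.3f}, formula 2j(a-k)/a = {2*j*(a-k)/a:.3f}")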
Chapter 4
Markov chains
4.1. If $T = [p_{ij}]$, $(i, j = 1, 2, 3)$ and
\[
p_{ij} = \frac{i + j}{6 + 3i},
\]
show that $T$ is a row-stochastic matrix. What is the probability that a transition between states $E_2$ and $E_3$ occurs at any step?
If the initial probability distribution in a Markov chain is
\[
p^{(0)} = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \end{bmatrix},
\]
what are the probabilities that states $E_1$, $E_2$ and $E_3$ are occupied after one step? Explain why the probability that the chain finishes in state $E_2$ is $\frac{1}{3}$ irrespective of the number of steps.

Since
\[
p_{ij} = \frac{i + j}{6 + 3i},
\]
then
\[
\sum_{j=1}^{3} p_{ij} = \sum_{j=1}^{3}\frac{i + j}{6 + 3i} = \frac{3i}{6 + 3i} + \frac{6}{6 + 3i} = 1,
\]
for all $i$. Also $0 < p_{ij} < 1$. Therefore $T$ is a stochastic matrix. The probability that a transition from $E_2$ to $E_3$ occurs is $p_{23} = \frac{5}{12}$.
The probabilities that the states $E_1$, $E_2$ and $E_3$ are occupied after one step given $p^{(0)}$ are given by
\[
p^{(1)} = p^{(0)}T = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \end{bmatrix}
\begin{bmatrix} \frac{2}{9} & \frac{1}{3} & \frac{4}{9} \\ \frac{1}{4} & \frac{1}{3} & \frac{5}{12} \\ \frac{4}{15} & \frac{1}{3} & \frac{2}{5} \end{bmatrix}
= \begin{bmatrix} \frac{173}{720} & \frac{1}{3} & \frac{307}{720} \end{bmatrix}.
\]
Each term in the second column of $T$ is $\frac{1}{3}$. By row-on-column matrix multiplication, each element in the second column of $T^n$ is also $\frac{1}{3}$. Hence the second term in $p^{(0)}T^n$ is $\frac{1}{3}$ independently of $p^{(0)}$.
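A short numerical check of the column argument (not in the original text, and assuming NumPy is available): the second column of every power of $T$ stays at $\frac{1}{3}$.

# Python sketch: the second column of T^n remains 1/3
import numpy as np

T = np.array([[(i + j) / (6 + 3 * i) for j in (1, 2, 3)] for i in (1, 2, 3)])
p0 = np.array([0.5, 0.25, 0.25])

print(T.sum(axis=1))                 # each row sums to 1 (row-stochastic)
print(p0 @ T)                        # [173/720, 1/3, 307/720]
print(np.linalg.matrix_power(T, 8))  # second column is 1/3 in every row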
4.2. If
\[
T = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{bmatrix},
\]
calculate $p^{(2)}_{22}$, $p^{(2)}_{31}$ and $p^{(2)}_{13}$.

The $p^{(2)}_{ij}$ are the elements of $T^2$. The matrix multiplication gives
\[
T^2 = \begin{bmatrix} \frac{19}{48} & \frac{1}{3} & \frac{13}{48} \\ \frac{13}{36} & \frac{13}{36} & \frac{5}{18} \\ \frac{17}{48} & \frac{17}{48} & \frac{7}{24} \end{bmatrix}.
\]
Therefore
\[
p^{(2)}_{22} = \frac{13}{36}, \qquad p^{(2)}_{31} = \frac{17}{48}, \qquad p^{(2)}_{13} = \frac{13}{48}.
\]
4.3. For the transition matrix
T =

1
3
2
3
1
4
3
4

calculate p
(3)
12
, p
(2)
2
and p
(3)
given that p
(0)
=

1
2
1
2

. Also nd the eigenvalues of T, construct a


formula for T
n
and obtain lim
n
T
n
.
We require
T
2
=

1
3
2
3
1
4
3
4

2
=

5
18
13
18
13
48
35
48

, T
3
=

1
3
2
3
1
4
3
4

3
=

59
216
157
216
157
576
419
576

.
Directly from T
3
, p
(3)
12
=
157
216
.
The element can be read o from
p
(2)
= p
(0)
T
2
=

1
2
1
2

5
18
13
18
13
48
35
48

79
288
,
209
288

,
namely p
(2)
2
=
209
288
.
The vector
p
(3)
= p
(0)
T
3
=

1
2
1
2

59
216
157
216
157
576
419
576

943
3456
2513
3456

The eigenvalues of T are given by

1
3

2
3
1
4
3
4

= 0, or, 12
2
13 + 1 = 0.
The eigenvalues are
1
= 1,
2
=
1
12
. Corresponding eigenvectors are given by the transposes
r
1
=

1 1

T
, r
2
=


8
3
1

T
The matrix C is dened as
C =

r
1
r
2

1
8
3
1 1

.
By (4.18),
T
n
= CD
n
C
1
,
where D is the diagonal matrix of eigenvalues. Therefore
T
n
=

1
8
3
1 1

1 0
0
1
12
n

3
11
8
11

3
11
3
11

=
1
11

3 +
8
12
n
8
8
12
n
3
3
12
n
8 +
3
12
n

.
It follows that
lim
n
T
n
=
1
11

3 8
3 8

.
4.4. Sketch transition diagrams for each of the following three-state Markov chains.
(a) A =

1
3
1
3
1
3
0 0 1
1 0 0

; (b) B =

1
2
1
4
1
4
0 1 0
1
2
1
2
0

; (c) C =

0
1
2
1
2
1 0 0
1
3
1
3
1
3

.
E
1 E
1
E
1
E
2
E
2
E
2
E
3
E
3
E
3
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
3
1
3
1
3
1
4
1
4
1
1
1
1
(a) (b)
(c)
Figure 4.1: Transition diagrams for Problem 4.4
The transition diagrams are shown in Figure 4.1.
4.5. Find the eigenvalues of
T =

a b c
c a b
b c a

, (a > 0, b > 0, c > 0).


Show that the eigenvalues are complex if b = c. (If a+b+c = 1, then T is a doubly- stochastic matrix.)
Find the eigenvalues and eigenvectors in the following cases:
(a) a =
1
2
, b =
1
4
, c =
1
4
;
(b) a =
1
2
, b =
1
8
, c =
3
8
.
The eigenvalues are given by

a b c
c a b
b c a

= 0,
or
(a +b +c )(
2
+ (b +c 2a) +a
2
+b
2
+c
2
bc ca ab) = 0.
The eigenvalues are

1
= a +b +c,
2,3
=
1
2
(2a b c i

3|b c|).
(a) a =
1
2
, b =
1
4
, c =
1
4
. The eigenvalues are
1
= 1,
2
=
3
=
1
4
. The eigenvectors are
r
1
=

1
1
1

, r
2
=

1
1
0

, r
3
=

1
0
1

.
(b) a =
1
4
, b =
1
8
, c =
3
8
. The eigenvectors are
r
1
=

1
1
1

, r
2
=

1
2
i

3
2

1
2
+ i

3
2
1

, r
3
=

1
2
+ i

3
2

1
2
i

3
2
1

.
4.6. Find the eigenvalues, eigenvectors, the matrix of eigenvectors C, its inverse C
1
, a formula for T
n
and lim
n
T
n
for each of the following transition matrices;
(a)
T =

1
8
7
8
1
2
1
2

;
(b)
T =

1
2
1
8
3
8
1
4
3
8
3
8
1
4
5
8
1
8

;
(a) The eigenvalues are
1
= 1,
2
=
3
8
. The corresponding eigenvectors are
r
1
=

1
1

, r
2
=


7
4
1

.
The matrix C and its inverse are given by
C =

r
1
r
2

1
7
4
1 1

, C
1
=

4
11
7
11

4
11
4
11

.
The matrix T
n
is given by
T
n
= CD
n
C
1
=

1
7
4
1 1

1 0
0
3
8

4
11
7
11

4
11
4
11

=
1
11

4 + 7(
3
8
)
n
7 7(
3
8
)
n
4 (
3
8
)
n
7 + (
3
8
)
n

1
11

4 7
4 7

as n .
(b) The eigenvalues are
1
=
1
4
,
2
=
1
4
,
3
= 1, and the corresponding eigenvectors are
r
1
=

3
7

3
7
1

, r
2
=

2
1
1

, r
3
=

1
1
1

The matrix C is given by


C =

r
1
r
2
r
3

3
7
2 1

3
7
1 1
1 1 1

Then
T
n
= CD
n
C
1
=

3
7
2 1

3
7
1 1
1 1 1

(
1
4
)
n
0 0
0 (
1
4
)
n
0
0 0 1

0
7
10
7
10

1
3
1
3
1
1
3
11
30
3
10

1
3
+
1
3
2
12n 11
30
+
3
5
(1)
n
2
12n

1
3
2
12n 3
10

3
5
(1)
n
2
12n
1
3

4
n
3
11
30
+
3
5
(1)
n
2
12n
+
4
n
3
3
10

3
5
(1)
n
2
12n
1
3

4
n
3
11
30

7
5
(1)
n
2
12n
+
4
n
3
3
10
+
7
5
(1)
n
2
12n

,
so that
lim
t
T
n
=

1
3
11
30
3
10
1
3
11
30
3
10
1
3
11
30
3
10

.
4.7. The weather in a certain region can be characterized as being sunny(S), cloudy(C) or rainy(R) on
any particular day. The probability of any type of weather on one day depends only on the state of the
weather on the previous day. For example, if it is sunny one day then sun or clouds are equally likely on
the next day with no possibility of rain. Explain what other the day-to-day possibilities are if the weather
is represented by the transition matrix.
T =
S C R
S
1
2
1
2
0
C
1
2
1
4
1
4
R 0
1
2
1
2
Find the eigenvalues of T and a formula for T
n
. In the long run what percentage of the days are sunny,
cloudy and rainy?
The eigenvalues of T are given by

1
2

1
2
0
1
2
1
4

1
4
0
1
2
1
2

=
1
8
(4 + 1)(2 1)( 1) = 0.
Let
1
=
1
4
,
2
=
1
2
and
3
= 1. The coresponding eigenvectors are
r
1
=

3
2
1

, r
2
=


1
2
0
1

, r
3
=

1
1
1

The matrix of eigenvectors C is given by


C =

r
1
r
2
r
3

1
1
2
1

3
2
0 1
1 1 1

If D is the diagonal matrix of eigenvalues, then by (4.18)


T
n
= CD
n
C
1
=

1
1
2
1

3
2
0 1
1 1 1

(
1
4
)
n
0 0
0 (
1
2
)
n
0
0 0 1

4
15

2
5
2
15

2
3
0
2
3
2
5
2
5
1
5

As n
T
n

1
1
2
1

3
2
0 1
1 1 1

0 0 0
0 0 0
0 0 1

4
15

2
5
2
15

2
3
0
2
3
2
5
2
5
1
5

=
1
5

2 2 1
2 2 1
2 2 1

.
In the long run 40% of the days are of the days are sunny, 40% are cloudy and 20% are rainy.
4.8. The eigenvalue method of Section 4.4 for nding general powers of stochastic matrices is only guaran-
teed to work if the eigenvalues are distinct. Several possibilities occur if the stochastic matrix of a Markov
chain has a repeated eigenvalue. The following three examples illustrate these possibilities.
(a) Let
T =

1
4
1
4
1
2
1 0 0
1
2
1
4
1
4

be the transition matrix of a three-state Markov chain. Show that T has the repeated eigenvalue
1
=
2
=

1
4
and
3
= 1, and two distinct eigenvectors
r
1
=

1
4
1

r
3
=

1
1
1

.
In this case diagonalization of T is not possible. However it is possible to nd a non-singular matrix C
such that
T = CJC
1
,
where J is the Jordan decomposition matrix given by
J =


1
1 0
0
1
0
0 0 1


1
4
1 0
0
1
4
0
0 0 1

,
C =

r
1
r
2
r
3

,
and r
2
satises
(T
1
I
3
)r
2
= r
1
.
Show that we can choose
r
2
=

10
24
0

.
Find a formula for J
n
and conrm that, as n ,
T
n

12
25
1
5
8
25
12
25
1
5
8
25
12
25
1
5
8
25

.
(b) A four-state Markov chain has the transition matrix
S =

1 0 0 0
3
4
0
1
4
0
0
1
4
0
3
4
0 0 0 1

.
Sketch the transition diagram for the chain, and note that the chain has two absorbing states and is therefore
not a regular chain. Show that the eigenvalues of S are
1
4
,
1
4
and 1 repeated. Show that there are four
distinct eigenvectors. Choose the diagonalizing matrix C as
C =

0 0 4 5
1 1 3 4
1 1 0 1
0 0 1 0

.
Find its inverse, and show that, as n ,
S
n

1 0 0 0
4
5
0 0
1
5
1
5
0 0
4
5
0 0 0 1

.
Note that since the rows are not the same this chain does not have an invariant distribution: this is caused
by the presence of two absorbing states.
(c) Show that the transition matrix
U =

1
2
0
1
2
1
6
1
3
1
2
1
6
0
5
6

has a repeated eigenvalue, but that, in this case, three independent eigenvectors can be associated with U.
Find a diagonalizing matrix C, and nd a formula for U
n
using U
n
= CD
n
C
1
, where
D =

1
3
0 0
0
1
3
0
0 0 1

.
Conrm also that this chain has an invariant distribution.
(a) The eigenvalues are given by

1
4

1
4
1
2
1 0
1
2
1
4
1
4

=
1
16
( 1)(1 + 4)
2
Hence they are
1
=
1
4
(repeated) and
2
= 1. This leads to the two eigenvectors
r
1
=

1
4
1

, r
3
=

1
1
1

The Jordan decomposition matrix is given by


J =


1
4
1 0
0
1
4
0
0 0 1

.
Let r
2
satisfy
[T
1
I
3
]r
2
= r
1
or

1
2
1
4
1
2
1
1
4
0
1
2
1
4
1
2

r
2
=

1
4
1

.
The solution for the linear equations for the components of r
2
gives
r
2
=

10 24 0

T
.
The matrix C is dened in the usual way as
C =

r
1
r
2
r
3

1 10 1
4 24 1
1 0 1

.
Its computed inverse is
C
1
=

12
25

1
5
17
25

1
10
0
1
10
12
25
1
5
8
25

.
If T = CJC
1
, then T
n
= CJ
n
C
1
, where
J
n
=

1
4
1 0
0
1
4
0
0 0 1

n
=

(
1
4
)
n
n(
1
4
)
n1
0
0 (
1
4
)
n
0
0 0 1

.
As n ,
T
n

1 10 1
4 24 1
1 0 1

0 0 0
0 0 0
0 0 1

12
25

1
5
17
25

1
10
0
1
10
12
25
1
5
8
25

12
25
1
5
8
25
12
25
1
5
8
25
12
25
1
5
8
25

.
(b) The transition diagram is shown in Figure 4.2. The eigenvectors are given by

1 0 0 0
3
4

1
4
0
0
1
4

3
4
0 0 0 1

=
1
16
( 1)
2
(4 1)(4 + 1) = 0.
E
1
E
2
E
3
1
1
E
4
3
4
3
4
1
4
1
4
Figure 4.2: Transition diagram for Problem 4.8(b).
The eigenvalues are
1
=
1
4
,
2
=
1
4
,
3
= 1 (repeated). The eigenvectors for
1
and
2
are
r
1
=

0 1 1 0

T
, r
2
=

0 1 1 0

T
.
Let the eigenvector for the repeated
3
be
r
3
=

a b c d

T
,
where the constants a, b, c, d satisfy
3
4
a b +
1
4
c = 0,
1
4
b c +
3
4
d = 0.
We can express the solution in the form
c = 3a + 4b, d = 12a 15b.
Hence the eigenvector is
r =

a
b
3a + 4b
4a 5b

,
which contains two arbitrary constants a and b. In this case of a repeated eigenvalue, two eigenvectors can
be dened using dierent pairs of values for
r
3
=

4 3 0 1

T
, r
4
=

5 4 1 0

T
.
The matrix C and its inverse are
C =

0 0 4 5
1 1 3 4
1 1 0 1
0 0 1 0

, C
1
=
1
10

3 5 5 3
5 5 5 5
0 0 0 1
2 0 0 8

.
The matrix power of T is given by
T
n
= CD
n
C
1
=
1
10

0 0 4 5
1 1 3 4
1 1 0 1
0 0 1 0

(
1
4
)
n
0 0 0
0 (
1
4
)
n
0 0
0 0 1 0
0 0 0 1

3 5 5 3
5 5 5 5
0 0 0 1
2 0 0 8

1 0 0 0
4
5
0 0
1
5
1
5
0 0
4
5
0 0 0 1

as n
(c) The eigenvaues are given by

1
2
0
1
2
1
6
1
3

1
2
1
6
0
5
6

=
1
9
( 1)(3 1)
2
.
Hence the eigenvalues are
1
=
1
3
(repeated) and
3
= 1. Corresponding to the eigenvalue
1
, we can
obtain the eigenvector
r
1
=

3b
a
b

= b

3
0
1

+a

0
1
0

,
where a and b are arbitrary. Two distinct eigenvectors can be obtained by putting a = 0, b = 1 and by
putting a = 1, b = 0. The three eigenvectors are
r
1
=

3
0
1

, r
2
=

0
1
0

, r
3
=

1
1
1

.
The matrix C and its inverse become
C =

3 0 1
0 1 1
1 0 1

, C
1
=
1
4

1 0 1
1 4 3
1 0 3

.
With
D =

1
3
0 0
0
1
3
0
0 0 1

,
then
U
n
=

3 0 1
0 1 1
1 0 1

(
1
3
)
n
0 0
0 (
1
3
)
n
0
0 0 1

1
4

1 0 1
1 4 3
1 0 3

3 0 1
0 1 1
1 0 1

0 0 0
0 0 0
0 0 1

1
4

1 0 1
1 4 3
1 0 3

1
4
0
3
4
1
4
0
3
4
1
4
0
3
4

,
as n .
4.9. Miscellaneous problems on transition matrices. In each case nd the eigenvalues of T, a formula for
T
n
and the limit of T
n
as n . The special cases discussed in Problem 4.8 can occur.
(a)
T =

1
2
7
32
9
32
1 0 0
1
2
1
4
1
4

;
(b)
T =

1
3
1
4
5
12
1 0 0
1
4
1
4
1
2

;
(c)
T =

1
4
3
16
9
16
3
4
0
1
4
1
4
1
4
1
2

;
(d)
T =

1
4
1
4
1
2
5
12
1
3
1
4
1
2
1
4
1
4

;
(e)
T =

1 0 0 0
1
2
0 0
1
2
0 0 1 0
0
1
2
1
2
0

.
(a) The eigenvalues of T are
1
=
1
8
(repeated) and
3
= 1, with the corresponding eigenvectors
r
1
=

1
4
2
1

, r
2
=

1
1
1

.
The Jordan decomposition matrix J is required, where
J =


1
8
1 0
0
1
8
0
0 0 1

,
and r
2
is given by
[T
1
I
3
]r
2
= r
1
or

5
8
7
32
9
32
1
1
8
0
1
2
1
4
3
8

r
2
=

1
4
2
1

The remaining eigenvector is


r
2
=

3
8
4
3

.
The matrix C and its inverse are given by
C =

1
4
3 1
2 8 1
1
4
3
1

, C
1
=

10
27

13
54
11
18

1
6
1
24
1
8
16
27
5
27
2
9

Then
T
n
=

1
4
3 1
2 8 1
1
4
3
1

(
1
8
)
n
1 0
0 (
1
8
)
n
0
0 0 1

10
27

13
54
11
18

1
6
1
24
1
8
16
27
5
27
2
9

Matrix multiplcation gives


lim
n
T
n
=

16
27
5
27
2
9
16
27
5
27
2
9
16
27
5
27
2
9

.
(b) The eigenvalues of T are
1
=
1
4
,
2
=
1
6
, and
3
= 1, which are all dierent, so that the
calculations are straightforward. The eigenvectors are given by
r
1
=

1
4
1

, r
2
=

5
12

5
2
1

, r
3
=

1
1
1

The matrix C and its inverse are


C =

1
5
12
1
4
5
2
1
1 1 1

, C
1
=

6
5

1
5
1

12
7
0
12
7
18
35
1
5
2
7

Finally
lim
n
T
n
=

1
5
12
1
4
5
2
1
1 1 1

0 0 0
0 0 0
0 0 1

6
5

1
5
1

12
7
0
12
7
18
35
1
5
2
7

18
35
1
5
2
7
18
35
1
5
2
7
18
35
1
5
2
7

.
(c) The eigenvalues are given by

1
4
3
16
9
16
3
4
0
1
4
1
4
1
4
1
2

=
1
32
( 1)(32
2
+ 8 + 1).
Hence the eigenvalues are
1
=
1
8
(1+i),
2
=
1
8
(1i), and
3
= 1. This stochastic matrix has complex
eigenvalues. The corresponding eigenvectors are
r
1
=

3
26
+
15
26
i

31
13

14
13
i
1

, r
2
=

3
26

15
26
i

31
13
+
14
13
i
1

r
3
=

1
1
1

.
The diagonal matrix of eigenvalues is
D =


1
8
(1 + i) 0 0
0
1
8
(1 i) 0
0 0 1

.
After some algebra (easily computed using software)
lim
n
T
n
=

14
41
15
82
39
82
14
41
15
82
39
82
14
41
15
82
39
82

.
(d) The eigenvalues are given by
1
=
1
4
,
2
=
1
12
, and
3
= 1. The corresponding eigenvectors are
r
1
=

11
9
4
9
1

, r
2
=

8
3
1

r
3
=

1
1
1

.
The matrix C and the diagonal matrix D are given by
C =


11
9
1 1
4
9

8
3
1
1 1 1

, D =


1
4
0 0
0
1
12
0
0 0 1

.
It follows that
lim
n
T
n
=

21
55
3
11
19
55
21
55
3
11
19
55
21
55
3
11
19
55

.
(e) The eigenvalues of T are
1
=
1
2
,
1
=
1
2
,
3
= 1 (repeated). There is a repeated eigenvalue but
we can still nd four eigenvectors given by
r
1
=

0
1
0
1

, r
2
=

0
1
0
1

, r
3
=

3
2
0
1

, r
4
=

2
1
1
0

The matrix C and its inverse can be compiled from these eigenvectors:
C =

0 0 3 2
1 1 2 1
0 0 0 1
1 1 1 0

, C
1
=

1
6

1
2

1
6
1
2

1
2
1
2

1
2
1
2
1
3
0
2
3
0
0 0 1 0

.
The diagonal matrix D in this case is given by
D =

1
2
0 0 0
0
1
2
0 0
0 0 1 0
0 0 0 1

.
Hence
T
n
= CD
n
C
1
=

0 0 3 2
1 1 2 1
0 0 0 1
1 1 1 0

(
1
2
)
n
0 0 0
0 (
1
2
)
n
0 0
0 0 1 0
0 0 0 1

1
6

1
2

1
6
1
2

1
2
1
2

1
2
1
2
1
3
0
2
3
0
0 0 1 0

0 0 3 2
1 1 2 1
0 0 0 1
1 1 1 0

0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 1

1
6

1
2

1
6
1
2

1
2
1
2

1
2
1
2
1
3
0
2
3
0
0 0 1 0

1 0 0 0
2
3
0 0
1
3
0 0 1 0
1
3
0
2
3
0

.
4.10. A four-state Markov chain has the transition matrix
T =

1
2
1
2
0 0
1 0 0 0
1
4
1
2
0
1
4
3
4
0
1
4
0

.
Find f
i
, the probability that the chain returns at some step to state E
i
, for each state. Determine which
states are transient and which are persistent. Which states form a closed subset? Find the eigenvalues of
T, and the limiting behaviour of T
n
as n .
The transition diagram for the chain is shown in Figure 4.3. For each state, the probability that a rst
return occurs is as follows, using the diagram:
State E
1
: f
(1)
1
=
1
2
,f
1
(2) =
1
2
, f
(n)
1
= 0, (n 3);
E
1
E
2
E
3
1
2
1
1
E
4
3
4
1
4
1
4
1
2
1
4
1
2
Figure 4.3: Transition diagram for Problem 4.10.
State E
2
: f
(1)
2
= 0, f
(2)
2
=
1
2
, f
(n)
2
= 1/2
n1
(n 3);
State E
3
: f
(1)
3
= 0, f
(2)
3
=
1
4
2
, f
(n)
3
= 1/4
n
, (n 3):
State E
4
: f
(n)
4
= f
(n)
3
for all n.
The probability of a return at any stage is
f
n
=

r=1
f
(n)
r
,
for each n. Therefore
f
1
=
1
2
+
1
2
= 1, f
2
=
1
2
+
1
2
2
+
1
2
3
+ = 1,
f
3
= f
4
=
1
4
2
+
1
4
3
+ =
1
12
,
summing geometric series for f
2
and f
3
. Hence E
1
and E
2
are persistent states , but E
3
and E
4
are
transient.
The eigenvalues pf T are
1
=
1
2
,
2
=
1
4
,
3
=
1
4
, and
4
= 1. The corresponding eigenvectors are
r
1
=

1
3
2
3
1
1

, r
2
=

0
0
1
1

, r
3
=

0
0
1
1

, r
4
=

1
1
1
1

.
Therefore D, C and its inverse are given by
D =

1
2
0 0 0
0
1
4
0 0
0 0
1
4
0
0 0 0 1

, C =

1
3
0 0 1
2
3
0 0 1
1 1 1 1
1 1 1 1

, C
1
=

1 1 0 0
1 1
1
2
1
2

2
3

1
3
1
2
1
2
2
3
1
3
0 0

Hence
lim
n
T
n
=

1
3
0 0 1
2
3
0 0 1
1 1 1 1
1 1 1 1

0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 1

1 1 0 0
1 1
1
2
1
2

2
3

1
3
1
2
1
2
2
3
1
3
0 0

2
3

1
3
0 0
2
3

1
3
0 0
2
3

1
3
0 0
2
3

1
3
0 0

4.11. A six-state Markov chain has the transition matrix


T =

1
4
1
2
0 0 0
1
4
0 0 0 0 0 1
0
1
4
0
1
4
1
2
0
0 0 0 0 1 0
0 0 0
1
2
1
2
0
0 0 0
1
2
1
2
0
0 0 1 0 0 0

.
Sketch its transition diagram. From the diagram which states do you think are transient and which do
you think are persistent? Which states form a closed subset? Determine the invariant distribution in the
subset.
Intuitively E
1
, E
2
, E
3
and E
6
are transient since paths can always escape through E
3
and not return.
E
1
E
2
E
3
1
2
1
1
E
4
1
4
1
4
1
2
1
4
E
5
E
6
1
1
4
1
2
1
2
Figure 4.4: Transition diagram for Problem 4.11.
For state E
4
, the probabilities of rst returns are
f
(1)
4
= 0, f
(n)
4
=
1
2
n1
, (n = 2, 3, 4, . . .).
It follows that a return to E
4
occurs at some step is
f
4
=

n=1
f
(n)
4
=

n=2
1
2
n1
= 1.
Hence E
4
is persistent. For E
5
,
f
(1)
5
=
1
2
, f
(2)
5
=
1
2
, f
(n)
5
= 0, (n 3),
so that f
5
=
1
2
+
1
2
= 1. Hence E
5
is also persistent.
The states E
1
, E
2
form a closed subset since no escape paths occur. The subset has the transition
matrix
S =

0 1
1
2
1
2

.
In the notation of Section 4.3, = 1 and =
1
2
. Hence the invariant distribution is

1
3
2
3

.
4.12. Draw the transition diagram for the seven-state Markov chain with transition matrix
T =

0 1 0 0 0 0 0
0 0 1 0 0 0 0
1
2
0 0
1
2
0 0 0
0 0 0 0 1 0 0
0 0 0 0 0 1 0
1
2
0 0 0 0 0
1
2
0 0 0 0 0 0 1

.
Hence discuss the periodicity of the states of the chain. From the transition diagram calculate p
(n)
11
and p
(n)
44
for n = 2, 3, 4, 5, 6. (In this example you should conrm that p
(3)
11
=
1
2
but that p
(3)
44
= 0: however, p
(3n)
44
= 0
for n = 2, 3, . . . conrming that state E
4
is periodic with period 3.)
Consider state E
1
. Returns to E
1
can occur in the sequence E
1
E
2
E
3
E
4
which takes 3 steps, or as
E
1
E
2
E
3
E
4
E
5
E
6
E
1
which takes 6 steps which is a multiple of 3. Hence returns to E
1
can only occur at
steps 3, 6, 9, . . .: hence E
1
has periodicity 3. Similarly E
2
and E
3
also have periodicity 3. On the other
hand for E
4
returns are possible at steps 6, 9, 12, . . . but it still has periodicity. The same is true of states
E
5
and E
6
.
E
1
E
2
E
3
E
4
E
5
E
6 1
E
7 1
1
1
1
1
2
1
2
1
2
1
2
Figure 4.5: Transition diagram for Problem 4.12.
E
7
is an absorbing state.
4.13. The transition matrix of a 3-state Markov chain is given by
T =

0
3
4
1
4
1
2
0
1
2
3
4
1
4
0

.
Show that S = T
2
is the transition matrix of a regular chain. Find its eigenvectors and conrm that S has
an invariant distribution given by

14
37
13
37
10
37

for even steps in the chain.


The matrix S is given by
S = T
2
=

9
16
1
16
3
8
3
8
1
2
1
8
1
8
9
16
5
16

,
which is regular since all elements are non-zero. The eigenvalues of S are given by
1
=
3
16

1
4
i,
E
1
E
2 E
3
1
4
1
2
3
4
1
2
3
4
1
4
Figure 4.6: Transition diagram for Problem 4.13.

2
=
3
16
+
1
4
i,
3
= 1, with corresponding eigenvectors
r
1
=

16
25
+
13
25
i

16
25

13
25
i
1

, r
2
=

2
25

14
25
i

2
25
+
14
25
i
1

, r
3
=

1
1
1

.
The matrix C is given by the matrix of eigenvectors, namely
C =

16
25
+
13
25
i
2
25

14
25
i 1

16
25

13
25
i
2
25
+
14
25
i 1
1 1 1

.
Let
D =

3
16

1
4
i 0 0
0
3
16
+
1
4
i 0
0 0 1

.
Finally (computation simplies the algebra)
CDC
1
=

14
37
13
37
10
37
14
37
13
37
10
37
14
37
13
37
10
37

,
which gives the limiting distribution.
4.14. An insect is placed in the maze of cells shown in Figure 4.9. The state E
j
is the state in which the
insect is in cell j. A transition occurs when the insect moves from one cell to another. Assuming that exits
are equally likely to be chosen where there is a choice, construct the transition matrix T for the Markov
chain representing the movements of the insect. Show that all states are periodic with period 2. Show that
T
2
has two subchains which are both regular. Find the invariant distributions of both subchains. Interpret
the results.
If the insect the insect starts in any compartment (state), then it can only return to that compartment
after an even number of steps. Hence all states are periodic with period 2.
E
1
E
2
E
3
1
2
1
1
E
4
1
2
1
2
E
5
1
2
1
2
1
2
1
2
E
4
E
2
E
1
E
3
E
5
Figure 4.7: Transition diagram for Problem 4.13, and the maze.
The matrix
S = T
2
=

1
2
1
4
0 0
1
4
1
2
1
2
0 0 0
0 0
3
4
1
4
0
0 0
1
4
3
4
0
1
2
0 0 0
1
2

has two subchains corresponding to E


1
, E
2
, E
5
and E
3
, E
4
: this follows since the zeros in columns 3 and
4, and the zeros in rows 3 and 4, remain for all powers of S. The subchains have the transition matrices
T
1
=

1
2
1
4
1
4
1
2
1
2
0
1
2
0
1
2

, T
2
=

3
4
1
4
1
4
3
4

.
The eigenvalues of T
1
are
1
= 0,
2
=
1
2
,
3
= 1, with the corresponding eigenvectors
r
1
=

1
1
1

, r
2
=

0
1
1

, r
3
=

1
1
1

.
The matrix C
1
and its inverse, and the diagonal D
1
are given by
C
1
=

1 0 1
1 1 1
1 1 1

, C
1
1
=


1
2
1
4
1
4
0
1
2
1
2
1
2
1
4
1
4

, D =

0 0 0
0
1
2
0
0 0 1

Therefore
T
n
1
= C
1
D
n
1
C
1
1
=

1 0 1
1 1 1
1 1 1

0 0 0
0 (
1
2
)
n
0
0 0 1

1
2
1
4
1
4
0
1
2
1
2
1
2
1
4
1
4

1
2
1
4
1
4
1
2
1
4
1
4
1
2
1
4
1
4

,
as n .
The eigenvalues of T
2
are
1
=
1
2
,
2
= 1, and the corresponding eigenvectors are
r
1
=

1
1

, r
2
=

1
1

.
The matrix C
2
and its inverse, and the diagonal matric D
2
are given by
C
2
=

1 1
1 1

, C
1
2
=


1
2
1
2
1
2
1
2

, D
2
=


1
2
0
0 1

.
Hence
T
n
2
= C
2
D
n
2
C
1
2
=

1 1
1 1

(
1
2
)
n
0
0 1


1
2
1
2
1
2
1
2

1
2
1
2
1
2
1
2

,
as n .
Combination of the two limiting matrices leads to
lim
n
T
n
=

1
2
1
4
0 0
1
4
1
2
1
4
0 0
1
4
0 0
1
2
1
2
0
0 0
1
2
1
2
0
1
2
1
4
0 0
1
4

.
4.15. The transition matrix of a four-state Markov chain is given by
\[
T = \begin{bmatrix} 1-a & a & 0 & 0 \\ 1-b & 0 & b & 0 \\ 1-c & 0 & 0 & c \\ 1 & 0 & 0 & 0 \end{bmatrix}, \quad (0 < a, b, c < 1).
\]
Draw a transition diagram, and, from the diagram, calculate $f^{(n)}_1$, $(n = 1, 2, \ldots)$, the probability that a first return to state $E_1$ occurs at the $n$-th step. Calculate also the mean recurrence time $\mu_1$. What type of state is $E_1$?

The first return probabilities for state $E_1$ are
\[
f^{(1)}_1 = 1 - a, \quad f^{(2)}_1 = a(1 - b), \quad f^{(3)}_1 = ab(1 - c), \quad f^{(4)}_1 = abc, \quad f^{(n)}_1 = 0, \quad (n \ge 5).
\]
Hence
\[
f_1 = \sum_{n=1}^{\infty} f^{(n)}_1 = 1 - a + a(1 - b) + ab(1 - c) + abc = 1,
\]
which implies that $E_1$ is persistent. Also, the mean recurrence time is
\[
\mu_1 = \sum_{n=1}^{\infty} nf^{(n)}_1 = 1 - a + 2a(1 - b) + 3ab(1 - c) + 4abc = 1 + a + ab + abc.
\]
Hence $\mu_1$ is finite, so that $E_1$ is non-null. It is also aperiodic, so that $E_1$ is an ergodic state.
[Figure 4.8: Transition diagram for Problem 4.15.]
4.16. Show that the transition matrix
T =

1 a a 0 0
1 a 0 a 0
1 a 0 0 a
1 0 0 0

where 0 < a < 1, has two imaginary (conjugate) eigenvalues. If a =


1
2
, conrm that T has the invariant
distribution p =

8
15
4
15
2
15
1
15

.
The eigenvalues of T are given by
1
= a,
2
= ai,
3
= ai,
4
= 1, of which two are imaginary
conjugates.
If a =
1
2
, then
1
=
1
2
,
2
=
1
2
i,
3
=
1
2
i,
4
= 1, with corresponding eigenvectors
r
1
=

1
2
1

1
2
1

, r
2
=

1
2
i

1
2
+
1
2
i
1
2
+ i
1

, r
3
=

1
2
i

1
2

1
2
i
1
2
i
1

, r
4
=

1
1
1
1

.
The matrix C of eigenvalues and its inverse, and the diagonal matrix D are given by
C =

1
2

1
2
i
1
2
i 1
1
1
2
+
1
2
i
1
2

1
2
i 1

1
2
1
2
+ i
1
2
i 1
1 1 1 1

, C
1
=

1
3
1
3

1
3
1
3

1
10
+
3
10
i
3
10

1
10
i
1
10

3
10
i
3
10
+
1
10
i

1
10

3
10
i
3
10
+
1
10
i
1
10
+
3
10
i
3
10

1
10
i
8
15
4
15
2
15
1
15

,
D =

1
2
0 0 0
0
1
2
i 0 0
0 0
1
2
i 0
0 0 0 1

.
In the limit n , it follows that all the rows of CD
n
C
1
are given by

8
15
4
15
2
15
1
15

,
which is the same as the last row of C
1
.
4.17. A production line consists of two manufacturing stages. At the end of each manufacturing stage each
item in the line is inspected, where there is a probability p that it will be scrapped, q that it will be sent
back to that stage for reworking, and (1 p q) that it will be passed to the next stage or completed. The
production line can be modelled by a Markov chain with four states: E
1
, item scrapped; E
2
, item completed;
E
3
, item in rst manufacturing stage; E
4
, item in second manufacturing stage. We dene states E
1
and
E
2
to be absorbing states so that the transition matrix of the chain is
T =

1 0 0 0
0 1 0 0
p 0 q 1 p q
p 1 p q 0 q

.
An item starts along the production line. What is the probability that it is completed in two stages?
Calculate f
(n)
3
and f
(n)
4
. Assuming that 0 < p + q < 1, what kind of states are E
3
and E
4
? What is the
probability that an item starting along the production line is ultimately completed?
The transition diagram is shown in Figure 4.9. E
1
and E
2
are absorbing states. The initial position of
the item can be represented by the vector p
(0)
=

0 0 1 0

. We require
E
1
E
2
E
3
E
4
1
1
1
q q
p p
1-p-q 1-p-q
Figure 4.9: Transition diagram for Problem 4.17.
p
(2)
= p
(0)
T
2
=

0 0 1 0

1 0 0 0
0 1 0 0
p 0 q 1 p q
p 1 p q 0 q

2
=

2p p
2
(1 p q)
2
q
2
q(1 p q)

Therefore the probability that an item is completed in two stages is p


(2)
2
= (1 p q)
2
.
The rst return probabilities are
f
(1)
3
= q, f
(n)
3
= 0, (n 3),
f
(1)
4
= q, f
(n)
4
= 0, (n 3).
Hence E
3
and E
4
are transient states.
The probability of an item is completed without reworking is (1 p q)
2
, with one reworking is
2q(1 p q)
2
, with two reworkings 3q
2
(1 p q)
2
, and n reworkings (n + 1)q
n
(1 p q)
2
. Hence the
probability that an item starting is ultimately completed is
(1 p q)
2
[1 + 2q + 3q
2
+ ] =
(1 p q)
2
(1 q)
2
,
after summing the geometric series.
4.18. The step-dependent transition matrix of Example 4.9 is
\[
T_n = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \\ 1/(n+1) & 0 & n/(n+1) \end{bmatrix}, \quad (n = 1, 2, 3, \ldots).
\]
Find the mean recurrence time for state $E_3$, and confirm that $E_3$ is a persistent, non-null state.

The transition diagram is shown in Figure 4.10. [Figure 4.10: Transition diagram for Problem 4.18.] Assuming that a walk starts at $E_3$, the probabilities of first returns to state $E_3$ are (using the diagram)
\[
f^{(1)}_3 = \frac{1}{2}, \quad f^{(2)}_3 = 0, \quad f^{(3)}_3 = \frac{1}{1+1}\cdot\frac{1}{2}\cdot 1 = \frac{1}{4}, \quad \ldots, \quad f^{(n)}_3 = \frac{1}{1+1}\left(\frac{1}{2}\right)^{n-3}\cdot\frac{1}{2}\cdot 1 = \frac{1}{2^{n-1}}, \quad (n \ge 3).
\]
Hence
\[
f_3 = \sum_{n=1}^{\infty} f^{(n)}_3 = \frac{1}{2} + \sum_{n=3}^{\infty}\frac{1}{2^{n-1}} = \frac{1}{2} + \frac{1}{2} = 1,
\]
using the formula for the sum of a geometric series. This means that $E_3$ is persistent. The mean recurrence time is given by
\[
\mu_3 = \sum_{n=1}^{\infty} nf^{(n)}_3 = \frac{1}{2} + \sum_{n=3}^{\infty}\frac{n}{2^{n-1}} = \frac{1}{2} + 2 = \frac{5}{2},
\]
summing the arithmetico-geometric series. Hence $\mu_3$ is finite, and $E_3$ is persistent and non-null.
4.19. In Example 4.9, a persistent, null state occurred in a chain with step-dependent transitions: such a state cannot occur in a finite chain with a constant transition matrix. However, chains with an infinite number of states can have persistent, null states. Consider the following chain, which has an infinite number of states $E_1, E_2, \ldots$ with the transition probabilities
\[
p_{11} = \frac{1}{2}, \quad p_{12} = \frac{1}{2}, \quad p_{j1} = \frac{1}{j + 1}, \quad p_{j,j+1} = \frac{j}{j + 1}, \quad (j \ge 2).
\]
Find the mean recurrence time for $E_1$, and confirm that $E_1$ is a persistent, null state.

From the transition diagram, the probabilities of first returns to $E_1$ are given by
[Figure 4.11: Transition diagram for Problem 4.19.]
\[
f^{(1)}_1 = \frac{1}{2}, \quad f^{(2)}_1 = \frac{1}{2\cdot 3}, \quad f^{(3)}_1 = \frac{1}{3\cdot 4}, \quad \ldots, \quad f^{(n)}_1 = \frac{1}{n(n + 1)}, \quad \ldots.
\]
Therefore
\[
f_1 = \sum_{n=1}^{\infty} f^{(n)}_1 = \sum_{n=1}^{\infty}\frac{1}{n(n + 1)} = \lim_{N\to\infty}\sum_{n=1}^{N}\left[\frac{1}{n} - \frac{1}{n + 1}\right] = \lim_{N\to\infty}\left[1 - \frac{1}{N + 1}\right] = 1,
\]
which implies that $E_1$ is persistent. However, the mean recurrence time is
\[
\mu_1 = \sum_{n=1}^{\infty} nf^{(n)}_1 = \sum_{n=1}^{\infty}\frac{1}{n + 1} = \infty,
\]
the series being divergent. According to the definition, $E_1$ is a null state.
4.20. A random walk takes place on $1, 2, \ldots$ subject to the following rules. A jump from position $i$ to position 1 occurs with probability $q_i$, and from position $i$ to $i + 1$ with probability $1 - q_i$, for $i = 1, 2, \ldots$, where $0 < q_i < 1$. Sketch the transition diagram for the chain. Explain why, to investigate the persistence of every state, only one state, say state 1, need be considered. Show that the probability that a first return to state 1 occurs at some step is
\[
f_1 = \sum_{j=1}^{\infty}\left[\prod_{k=1}^{j-1}(1 - q_k)\right]q_j.
\]
If $q_j = q$ $(j = 1, 2, \ldots)$, show that every state is persistent.

The transition diagram is shown in Figure 4.12, with the states labelled $E_1, E_2, \ldots$. [Figure 4.12: Transition diagram for Problem 4.20.] The chain is irreducible since every state can be reached from every other state, and the diagram indicates that every state is aperiodic since a return to any state can be achieved in any number of steps. We need only consider one state since the others will have the same properties.
Consider state $E_1$. Then, from the diagram, the probabilities of first returns are
\[
f^{(1)}_1 = q_1, \quad f^{(2)}_1 = (1 - q_1)q_2, \quad f^{(3)}_1 = (1 - q_1)(1 - q_2)q_3, \quad \ldots, \quad f^{(j)}_1 = \left[\prod_{k=1}^{j-1}(1 - q_k)\right]q_j, \quad \ldots.
\]
Therefore
\[
f_1 = \sum_{j=1}^{\infty} f^{(j)}_1 = \sum_{j=1}^{\infty}\left[\prod_{k=1}^{j-1}(1 - q_k)\right]q_j.
\]
If $q_j = q$, $(j = 1, 2, \ldots)$, then
\[
f_1 = \sum_{j=1}^{\infty}(1 - q)^{j-1}q = \frac{q}{1 - (1 - q)} = 1,
\]
using the formula for the sum of a geometric series. Hence $E_1$, and therefore every state, is persistent.
Chapter 5
Poisson processes
5.1. The number of cars which pass a roadside speed camera within a specified hour is assumed to be a Poisson process with parameter $\lambda = 92$ per hour. It is also found that 1% of cars exceed the designated speed limit. What are the probabilities that (a) at least one car exceeds the speed limit, (b) at least two cars exceed the speed limit in the hour?

With $\lambda = 92$ in the Poisson process, the mean number of cars in the hour is $92 \times 1 = 92$. Of these, on average, 0.92 cars exceed the speed limit. Assume that the cars which exceed the limit form a Poisson process with parameter $\lambda_1 = 0.92$. Let $N(t)$ be the random variable of the number of cars exceeding the speed limit by time $t$, measured from the beginning of the hour.
(a) The probability that at least one car has exceeded the limit within the hour is
\[
1 - P(N(1) < 1) = 1 - e^{-\lambda_1} = 1 - 0.398 = 0.602.
\]
(b) The probability that at least two cars have exceeded the limit within the hour is
\[
1 - P(N(1) < 2) = 1 - e^{-\lambda_1} - \lambda_1 e^{-\lambda_1} = 1 - 0.398 - 0.367 = 0.235.
\]
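The two probabilities can be reproduced directly from the Poisson probabilities (an illustrative sketch, not part of the original solution):

# Python sketch: Poisson tail probabilities for lambda_1 = 0.92
from math import exp

lam = 0.92                       # mean number of speeding cars in the hour
p0 = exp(-lam)                   # P(no speeding cars)
p1 = lam * exp(-lam)             # P(exactly one speeding car)

print("P(at least one) =", round(1 - p0, 3))        # ~ 0.602
print("P(at least two) =", round(1 - p0 - p1, 3))   # ~ 0.235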
5.2. If the between-event time $Q$ in a Poisson process has an exponential distribution with parameter $\lambda$, with density $\lambda e^{-\lambda t}$, then the probability that the time to the next event is at least $t$ is
\[
P\{Q > t\} = e^{-\lambda t}.
\]
Show that, if $t_1, t_2 \ge 0$, then
\[
P\{Q > t_1 + t_2 \mid Q > t_1\} = P\{Q > t_2\}.
\]
What does this result imply about the Poisson process and its memory of past events?

By formula (1.2) on conditional probability
\[
P(Q > t_1 + t_2 \mid Q > t_1) = \frac{P(\{Q > t_1 + t_2\} \cap \{Q > t_1\})}{P(Q > t_1)} = \frac{P(Q > t_1 + t_2)}{P(Q > t_1)} = \frac{e^{-\lambda(t_1 + t_2)}}{e^{-\lambda t_1}} = e^{-\lambda t_2} = P(Q > t_2).
\]
The result shows the loss of memory property of the Poisson process.
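The loss-of-memory property is also easy to see numerically. The sketch below (not in the original text; the rate and the times $t_1$, $t_2$ are arbitrary choices) compares the conditional and unconditional exceedance probabilities for simulated exponential waiting times.

# Python sketch: memoryless property of the exponential distribution
import random

lam, t1, t2 = 1.5, 0.4, 0.7
samples = [random.expovariate(lam) for _ in range(500_000)]

exceed_t1 = [x for x in samples if x > t1]
lhs = sum(x > t1 + t2 for x in exceed_t1) / len(exceed_t1)   # P(Q > t1+t2 | Q > t1)
rhs = sum(x > t2 for x in samples) / len(samples)            # P(Q > t2)
print(f"conditional {lhs:.4f}  vs  unconditional {rhs:.4f}")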
5.3. The number of cars which pass a roadside speed camera is assumed to behave as a Poisson process with intensity $\lambda$. It is found that the probability that a car exceeds the designated speed limit is $\alpha$.
(a) Show that the number of cars which break the speed limit also forms a Poisson process.
(b) If $n$ cars pass the camera in time $t$, find the probability function for the number of cars which exceed the speed limit.

(a) Let $N(t)$ be the random variable representing the number of speeding cars which have passed by time $t$. The probability $q_m(t) = P(N(t) = m)$ satisfies
\[
q_m(t + \delta t) \approx q_{m-1}(t)\,\alpha\lambda\,\delta t + q_m(t)(1 - \alpha\lambda\,\delta t),
\]
where $\alpha\lambda\,\delta t$ is the probability that a speeding car appears in the time interval $\delta t$. This is the equation for a Poisson process with intensity $\alpha\lambda$.
(b) Of the $n$ cars, the number of ways in which $m$ speeding cars can be arranged is
\[
\frac{n!}{m!(n - m)!} = \binom{n}{m}.
\]
The probability that exactly $m$ of the $n$ cars are speeding is therefore
\[
\binom{n}{m}\alpha^{m}(1 - \alpha)^{n-m}, \quad (m = 0, 1, 2, \ldots, n),
\]
which is the binomial distribution.
5.4. The variance of a random variable X
t
is given by
V(X
t
) = E(X
2
t
) E(X
t
)
2
.
In terms of the generating function G(s, t), show that
V(X
t
) =

s
G(s, t)
s

G(s, t)
s

s=1
.
(an alternative formula to (5.20)). Obtain the variance for the Poisson process using its generating function
G(s, t) = e
(s1)t
given by eqn (5.17), and check your answer with that given in Problem 5.3.
The assumption is that the random variable X
t
is a function of the time t. Let p
n
(t) = P(X
t
= n).
The probability generating function G(s, t) becomes a function of two variables s and t. It is dened by
G(s, t) =

n=0
p
n
(t)s
n
.
The mean of X
t
is given by
E(X
t
) =

n=1
np
n
(t) =
G(s, t)
s

s=1
The mean of X
2
t
is given by
E(X
2
t
) =

n=1
n
2
p
n
(t) =

s

s
G(s, t)
s

s=1
.
Hence
V(X
t
) =

s
G(s, t)
s

G(s, t)
s

s=1
.
For the Poisson process G(s, t) = e
(s1)t
. Then
E(X
t
) =

s
[e
(s1)t
]

s=1
= t,
and
E(X
2
t
) =

s
[ste
(s1)t
]

s=1
= t (t)
2
.
Hence V(X
t
) = t.
5.5. A telephone answering service receives calls whose frequency varies with time but independently of
other calls perhaps with a daily patternmore during the day than the night. The rate (t) 0 becomes a
function of the time t. The probability that a call arrives in the small time interval (t, t +t) when n calls
have been received at time t satises
p
n
(t +t) = p
n1
(t)((t)t +o(t)) +p
n
(t)(1 (t)t +o(t)), (n 1),
with
p
0
(t +t) = (1 (t)t +o(t))p
0
(t).
It is assumed that the probability of two or more calls arriving in the interval (t, t +t) is negligible. Find
the set of dierential-dierence equations for p
n
(t). Obtain the probability generating function G(s, t) for
the process and conrm that it is a stochastic process with intensity

t
0
(x)dx. Find p
n
(t) by expanding
G(s, t) in powers of s. What is the mean number of calls received at time t?
From the dierence equation it follows that
p
n
(t +t) p
n
(t)
t
= (t)p
n1
(t) (t)p
n
(t) +o(1),
p
0
(t +t) p
0
(t)
t
= (t)p
0
(t) +o(1).
Let t . Then
p

n
(t) = (t)p
n1
(t) (t)p
n
(t), (i)
p

0
(t) = (t)p
0
(t). (ii)
Let
G(s, t) =

n=0
p
n
(t)s
n
.
Multiply (i) by s
n
, sum from n = 1, and add (ii) so that
G(s, t)
t
= (t)(s 1)G(s, t). (iii)
The initial value is G(s, 0) = 1. The solution of (iii) (which is essentially an ordinary dierential equation)
subject to the initial condition is
G(s, t) = exp

(s 1)

t
0
(u)du

.
Expansion of the generating function gives the series
G(s, t) = exp

t
0
(u)du

exp

t
0
(u)du

n=0
p
n
(t)s
n
,
where the probability
p
n
(t) =
1
n!
exp

t
0
(u)du

t
0
(u)du

n
.
The mean of the process is
(t) =
G(s, t)
s

s=1
=

t
0
(u)du.
5.6. For the telephone answering service in Problem 5.5, suppose that the rate is periodic given by (t) =
a +b cos(t) where a > 0 and |b| < a. Using the probability generating function from Problem 5.6 nd the
probability that n calls have been received at time t. Find also the mean number of calls received at time t.
Sketch graphs of p
0
(t), p
1
(t) and p
2
(t) where a = 0.5, b = 0.2 and = 1.
Using the results from Problem 5.5,
G(s, t) = exp

(s 1)

t
0
(a +b cos u)du

= exp[(s 1)(at + (b/) sin t)].


Hence
p
n
(t) =
1
n!
e
[at+(b/) sin t]
[at + (b/) sin t]
n
,
and the mean is
(t) = at + (b/) sin t.
The rst three probabilities are shown in Figure 5.1.
t
p (t)
0
p (t)
p (t)
1
2
Figure 5.1: Graphs of the probabilities p
0
(t), p
1
(t) and p
2
(t) versus t in Problem 5.6.
5.7. A Geiger counter is pre-set so that its initial reading is n
0
at time t = 0. What are the initial
conditions on p
n
(t), the probability that the reading is n at time t, and its generating function G(s, t)?
Find p
n
(t), and the mean reading of the counter at time t.
The probability generating function for this Poisson process is (see eqn (5.16))
G(s, t) = A(s)e
(s1)t
.
The initial condition is G(s, 0) = x
n
0
. Hence A(s) = s
n
0
and
G(s, t) = s
n
0
e
(s1)t
.
The power series expansion of G(s, t) is
G(s, t) =

n=n
0
s
n

nn
0
e
t
(n n
0
)!
.
Hence
p
n
(t) = 0, (n < n
0
), p
n
(t) =
(t)
nn
0
e
t
(n n
0
)!
, (n n
0
).
The mean reading at time t is given by
(t) =
G(s, t)
s

s=1
= n
0
+t.
5.8. A Poisson process with probabilities
p
n
(t) = P[N(t) = n] =
(t)
n
e
t
n!
has a random variable N(t). If = 0.5, calculate the following probabilities associated in the process:
(a) P[N(3) = 6];
(b) P[N(2.6) = 3];
(c) P[N(3.7) = 4|N(2.1) = 2];
(d) P[N(7) N(3) = 3].
(a) P(N(3) = 6) = p
6
(3) =
(0.5 3)
6
e
0.53
6!
= 0.00353.
(b) P(N(2.6) = 3) = p
3
(2.6) =
(0.5 2.6)
3
e
0.52.6
3!
= 0.100.
(c) P[N(3.7) = 4|N(2.1) = 2] = P[N(1.6) = 2] = p
2
(1.6) =
(0.5 1.6)
2
e
0.51.6
2!
= 0.144.
(d) P[N(7) N(3) = 3] = P[N(4) = 3] = 0.180.
5.9. A telephone banking service receives an average of 1000 calls per hour. On average a customer transaction takes one minute. If the calls arrive as a Poisson process, how many operators should the bank employ to avoid an expected accumulation of incoming calls?

Let time $t$ be measured in minutes. The intensity of the Poisson process is $\lambda = 1000/60 = 50/3$ calls per minute. The expected inter-arrival time is
\[
\frac{1}{\lambda} = \frac{3}{50} = 0.06 \text{ minutes}.
\]
Since each transaction occupies an operator for one minute, this must be covered by
\[
\frac{1}{0.06} \approx 16.7 \text{ operators}.
\]
Hence 17 operators would be required to cover the expected incoming calls.
5.10. A Geiger counter automatically switches off when the $n$th particle has been recorded, where $n$ is fixed. The arrival of recorded particles is assumed to be a Poisson process with parameter $\lambda$. What is the expected value of the switch-off time?

The probability distribution function of the switch-off time is
\[
F(t) = 1 - P\{0, 1, 2, \ldots, n-1 \text{ particles recorded by time } t\} = 1 - e^{-\lambda t}\left[1 + \lambda t + \cdots + \frac{(\lambda t)^{n-1}}{(n-1)!}\right] = 1 - e^{-\lambda t}\sum_{r=0}^{n-1}\frac{(\lambda t)^r}{r!}.
\]
Its density is, for $t > 0$,
\[
f(t) = \frac{dF(t)}{dt} = \lambda e^{-\lambda t}\sum_{r=0}^{n-1}\frac{(\lambda t)^r}{r!} - \lambda e^{-\lambda t}\sum_{r=0}^{n-2}\frac{(\lambda t)^r}{r!} = \lambda e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!},
\]
which is a gamma density. Its expected value is
\[
\mu = \int_0^{\infty} tf(t)\,dt = \frac{\lambda^{n}}{(n-1)!}\int_0^{\infty} e^{-\lambda t}t^{n}\,dt = \frac{1}{\lambda(n-1)!}\int_0^{\infty} e^{-s}s^{n}\,ds = \frac{n!}{\lambda(n-1)!} = \frac{n}{\lambda}.
\]
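The answer $n/\lambda$ can also be checked by simulating the switch-off time as the sum of $n$ exponential inter-arrival times; the following sketch (not part of the original solution; the values of $\lambda$ and $n$ are arbitrary) does this.

# Python sketch: mean switch-off time of the Geiger counter
import random

lam, n, trials = 2.0, 5, 100_000

def switch_off_time():
    """Time of the n-th arrival: the sum of n exponential inter-arrival times."""
    return sum(random.expovariate(lam) for _ in range(n))

mean = sum(switch_off_time() for _ in range(trials)) / trials
print(f"simulated mean {mean:.3f}, n/lambda = {n/lam:.3f}")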
5.11. Particles are emitted from a radioactive source, and, N(t), the random variable of the number of
particles emitted up to time t form t = 0, is a Poisson process with intensity . The probability that any
particle hits a certain target is p, independently of any other particle. If M
t
is the random variable of the
number of particles that hit the target up to time t, show, using the law of total probability, that M(t) forms
a Poisson process with intensity p.
For any two times t
1
, t
2
, (t
2
> t
1
0), using the law of total probability, with t
2
t
1
= t,
P[M(t
2
) M(t
1
) = k] =

n=k
P[N(t
2
) N(t
1
) = n]

n
k

p
k
(1 p)
nk
=

n=k
e
t
(t)
n
n!

n
k

p
k
(1 p)
nk
= e
t
p
k
(1 p)
k

n=k
(t)
n
n!

n
k

(1 p)
n
= e
t
(pt)
k
k!

n=k
[(1 p)t]
nk
(n k)!
=
(pt)
k
e
pt
k!
,
which is a Poisson process of intensity p.
Chapter 6
Birth and death processes
6.1. A colony of cells grows from a single cell. The probability that a cell divides in a time interval $\delta t$ is
\[
\lambda\,\delta t + o(\delta t).
\]
There are no deaths. Show that the probability generating function for this birth process is
\[
G(s, t) = \frac{se^{-\lambda t}}{1 - (1 - e^{-\lambda t})s}.
\]
Find the probability that the original cell has not divided at time $t$, and the mean and variance of the population size at time $t$ (see Problem 5.4 for the variance formula using the probability generating function).

This is a birth process with parameter $\lambda$ and initial population size 1. Hence the probability generating function is (see eqn (6.12))
\[
G(s, t) = se^{-\lambda t}[1 - (1 - e^{-\lambda t})s]^{-1},
\]
which satisfies the initial condition $G(s, 0) = s$. It follows that the probability that the population size is still 1 at time $t$ (that is, the original cell has not divided) is $p_1(t) = e^{-\lambda t}$.
The mean population size is
\[
\mu(t) = \left.\frac{\partial G(s, t)}{\partial s}\right|_{s=1} = e^{\lambda t}.
\]
Using the generating function, the variance of the population size is
\[
\sigma^{2}(t) = \left.\left\{\frac{\partial}{\partial s}\left[s\frac{\partial G(s, t)}{\partial s}\right] - \left[\frac{\partial G(s, t)}{\partial s}\right]^{2}\right\}\right|_{s=1} = [2e^{2\lambda t} - e^{\lambda t}] - e^{2\lambda t} = e^{2\lambda t} - e^{\lambda t}.
\]
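A small simulation sketch (an addition, not in the original text) can be used to check the mean $e^{\lambda t}$ and the probability $e^{-\lambda t}$ that no division has occurred. It simulates the simple birth process by drawing the time to the next division as exponential with rate $n\lambda$ when the population size is $n$; the parameter values are arbitrary.

# Python sketch: simple birth (Yule) process from a single cell
import random
from math import exp

lam, t_end, trials = 0.8, 1.5, 50_000

def population_at(t_end):
    """Population size at time t_end, starting from one cell."""
    n, t = 1, 0.0
    while True:
        t += random.expovariate(n * lam)   # waiting time to the next division
        if t > t_end:
            return n
        n += 1

sizes = [population_at(t_end) for _ in range(trials)]
mean = sum(sizes) / trials
p1 = sizes.count(1) / trials
print(f"mean {mean:.3f} vs e^(lam t) = {exp(lam*t_end):.3f}")
print(f"P(no division) {p1:.3f} vs e^(-lam t) = {exp(-lam*t_end):.3f}")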
6.2. A simple birth process has a constant birth-rate $\lambda$. Show that its mean population size $\mu(t)$ satisfies the differential equation
\[
\frac{d\mu(t)}{dt} = \lambda\mu(t).
\]
How can this result be interpreted in terms of a deterministic model for a birth process?

From Section 6.3, the mean population size of the birth process with initial population $n_0$ is given by $\mu(t) = n_0e^{\lambda t}$. It can be verified that
\[
\frac{d\mu}{dt} - \lambda\mu = \lambda n_0e^{\lambda t} - \lambda n_0e^{\lambda t} = 0.
\]
This differential equation is a simple deterministic model for the population in a birth process, so that the mean population size in the stochastic process satisfies a deterministic equation. [However, this is not always the case in the relation between stochastic and deterministic models.]
74
6.3. The probability generating function for a simple death process with death-rate and initial population
size n
0
is given by
G(s, t) = (1 e
t
)
n
0

1 +
se
t
1 e
t

n
0
(see Equation (6.17)). Using the binomial theorem nd the probability p
n
(t) for n n
0
. If n
0
is an even
number, nd the probability that the population size has halved by time t. A large number of experiments
were undertaken with live samples with a variety of initial population sizes drawn from a common source
and the times of the halving of deaths were recorded for each sample. What would be the expected time for
the population to halve?
The binomial expansion of G(s, t) is given by
G(s, t) = (1 e
t
)
n
0

1 +
se
t
1 e
t

n
0
= (1 e
t
)
n
0
n
0

n=0

n
0
n

e
nt
s
n
(1 e
t
)
n
From this series, the coecient of s
n
is
p
n
(t) = (1 e
t
)
n
0
n
e
nt

n
0
n

, (n = 0, 1, 2, . . . , n
0
)
which is the probability that the population size is n at time t.
Let n
0
= 2m
0
, where m
0
is an integer, which ensures that n
0
is even. We require
p
m
0
= (1 e
t
)
m
0
e
m
0
t

2m
0
m
0

.
The mean population size at time t is given by
= Gs(1, t) = n
0
e
t
.
This mean is half the initial population is n
0
e
t
=
1
2
n
0
, which occurs, on average, at time t =
1
ln 2.
6.4. A birth process has a probability generating function G(s, t) given by
G(s, t) =
s
e
t
+s(1 e
t
)
.
(a) What is the initial population size?
(b) Find the probability that the population size is n at time t.
(c) Find the mean and variance of the population size at time t.
(a) Since G(s, 0) = s, the initial population size is n
0
= 1.
(b) Expand the generating function using the binomial theorem:
G(s, t) =
s
e
t
+s(1 e
t
)
= se
t

n=0
s
n
(1 e
t
)
n
.
The coecients give the required probabilities:
p
0
(t) = 0, p
n
(t) = e
t
(1 e
t
)
n1
.
(c) Since
G(s, t)
s

s=1
=
e
t
[e
t
+s(1 e
t
)]
2

s=1
= e
t
. (i)
Hence the mean population size is = e
t
.
75
From (i),

2
G(s, t)
s
2

s=1
=
2e
t
(1 e
t
)
[e
t
+s(1 e
t
)]
3

s=1
= 2e
t
(1 e
t
).
Hence the variance is given by
V(t) =

G(s, t)
s
+s

2
G(s, t)
s
2

G(s, t)
s

s=1
= e
t
2e
t
(1 e
t
) e
2t
= e
2t
e
t
.
6.5. A random process has the probability generating function
G(s, t) =

2 +st
2 +t

r
,
where r is a positive integer. What is the initial state of the process? Find the probability p
n
(t) associated
with the generating function. What is p
r
(t)? Show that the mean associated with G(s, t) is
(t) =
rt
2 +t
.
Since G(s, 0) = 1, this implies that the initial size is 1.
Expansion of G(s, t) using the binomial theorem leads to
G(s, t) =

2 +st
2 +t

r
=

2
2 +t

1 +
1
2
st

r
=

2
2 +t

n=0

r
n

st
2

n
Hence the probability that the size is n at time t is
p
n
(t) =

2
2 +t

r
n

t
2

n
, (n = 0, 1, 2, . . .),
With n = r,
p
r
(t) =

2
2 +t

r
r

t
2

r
=

rt
2 +t

r
.
The mean size is given by
(t) =
G(s, t)
s

s=1
=
rt(2 +st)
r1
(2 +t)
r

s=1
=
rt
2 +t
.
6.6. In a simple birth and death process with unequal birth and death-rates and , the probability gener-
ating function is given by
G(s, t) =

(1 s) ( s)e
()t
(1 s) ( s)e
()t

n
0
,
for an initial population size n
0
(see Equation (6.23)).
(a) Find the mean population size at time t.
(b) Find the probability of extinction at time t.
(c) Show that, if < , then the probability of ultimate extinction is 1. What is it if > ?
(d) Find the variance of the population size.
76
(a) Let G(s, t) = [A(s, t)/B(s, t)]
n
0
with obvious denitions for A(s, t) and B(s, t). Then
G(s, t)
s
=
[A(s, t)]
n
0
1
[B(s, t)]
n
0
+1
[( +e
()t
)B(s, t) ( +e
()t
))A(s, t)].
If s = 1, then A(1, t) = B(1, t) = ( )e
()t
. Therefore the mean population size is given by
= n
0
e
()t
.
(b) The probability of extinction at time t is
p
0
(t) = G(0, t) =

e
()t
e
()t

n
0
.
(c) If < , then e
()t
0 as t . Hence p
0
(t) 1.
If > , then p
0
(t) (/)
n
0
as t .
(d) This requires a lengthy dierentiation to obtain the second derivative of G(s, t): symbolic compu-
tation is very helpful. The variance is given by
V(t) = G
ss
(1, t) +G
s
(1, t) [G
s
(1, t)]
2
= n
0
( +)
( )
[e
()
1], ( = ).
6.7. In a population model, the immigration rate
n
= , a constant, and the death rate
n
= n. For an
initial population size n
0
, the probability generating function is (Example 6.3)
G(s, t) = e
s/
exp[(1 (1 s)e
t
)/][1 (1 s)e
t
]
n
0
.
Find the probability that extinction occurs at time t. What is the probability of ultimate extinction?
The probability of extinction is
p
0
(t) = G(0, t) = (1 e
t
)
n
0
e
(1e
t
)/
.
The probability of ultimate extinction is
lim
t
p
0
(t) = e
/
.
6.8. In a general birth and death process a population is maintained by immigration at a constant rate ,
and the death rate is n. Using the dierential-dierence equations (6.26) directly, obtain the dierential
equation
d(t)
dt
+(t) = ,
for the mean population size (t). Solve this equation assuming an initial population n
0
and compare the
answer with that given in Example 6.3.
In terms of a generating function the mean of a process is given by
(t) =

n=1
np
n
(t).
From (6.25) (in the book) the dierential-dierence equation for this immigration-death model is
dp
n
(t)
dt
= p
n1
(t) ( +n)p
n
(t) +(n + 1)p
n+1
(t), (n = 1, 2, . . .).
77
Multiply this equation by n and sum over over n:

n=1
n
dp
n
(t)
dt
=

n=1
np
n1
(t)

n=1
np
n
(t)

n=1
np
n
(t) +

n=1
n(n + 1)p
n+1
(t)
=

n=0
(n + 1)p
n
(t) (t)

n=1
n
2
p
n
(t) +

n=2
n(n 1)p
n
(t)
= (t) + (t) (t)
= (t).
Hence
d(t)
dt
+(t) = .
This is a rst-order linear equation with general solution
(t) = Ae
t
+

.
The initial condition implies A = n
0
(/). Hence
(t) =

n
0

e
t
+

.
6.9. In a death process the probability of a death when the population size is $n \neq 0$ is a constant $\mu$, but obviously zero if the population size is zero. Verify that, if the initial population is $n_0$, then $p_n(t)$, the probability that the population size is $n$ at time $t$, is given by
$$p_0(t) = \frac{\mu^{n_0}}{(n_0-1)!}\int_0^t s^{n_0-1}e^{-\mu s}\,ds, \qquad p_n(t) = \frac{(\mu t)^{n_0-n}}{(n_0-n)!}e^{-\mu t}, \quad (1 \le n \le n_0).$$
Show that the mean time to extinction is $n_0/\mu$.

The probability that a death occurs in time $\delta t$ is a constant $\mu\delta t$ independently of the population size. Hence
$$p_{n_0}(t+\delta t) = (1-\mu\delta t)p_{n_0}(t), \tag{i}$$
$$p_n(t+\delta t) = \mu\delta t\,p_{n+1}(t) + (1-\mu\delta t)p_n(t), \quad (n = 1, \ldots, n_0-1), \tag{ii}$$
$$p_0(t+\delta t) = \mu\delta t\,p_1(t) + p_0(t), \tag{iii}$$
subject to the initial conditions $p_{n_0}(0) = 1$, $p_n(0) = 0$, $(n = 0, 1, 2, \ldots, n_0-1)$. Divide through each of the eqns (i), (ii) and (iii) by $\delta t$, and let $\delta t \to 0$ to obtain the differential-difference equations
$$p'_{n_0}(t) = -\mu p_{n_0}(t), \tag{iv}$$
$$p'_n(t) = -\mu p_n(t) + \mu p_{n+1}(t), \quad (n = 1, 2, \ldots, n_0-1), \tag{v}$$
$$p'_0(t) = \mu p_1(t). \tag{vi}$$
From (iv) and the initial conditions,
$$p_{n_0}(t) = Ae^{-\mu t} = e^{-\mu t}.$$
From (v) with $n = n_0-1$,
$$p'_{n_0-1}(t) = -\mu p_{n_0-1}(t) + \mu p_{n_0}(t) = -\mu p_{n_0-1}(t) + \mu e^{-\mu t}.$$
Subject to the initial condition $p_{n_0-1}(0) = 0$, this first-order linear equation has the solution
$$p_{n_0-1}(t) = \mu te^{-\mu t}.$$
Repeat this process, which leads to the conjecture that
$$p_n(t) = \frac{(\mu t)^{n_0-n}}{(n_0-n)!}e^{-\mu t}, \quad (n = 1, 2, \ldots, n_0-1),$$
which can be proved by induction. The final probability satisfies
$$p'_0(t) = \mu p_1(t) = \frac{\mu^{n_0}t^{n_0-1}}{(n_0-1)!}e^{-\mu t}.$$
Direct integration gives
$$p_0(t) = \frac{\mu^{n_0}}{(n_0-1)!}\int_0^t s^{n_0-1}e^{-\mu s}\,ds. \tag{vii}$$
It can be checked that $p_0(t) \to 1$ as $t \to \infty$, which confirms that extinction is certain.
The probability distribution of the random variable $T$ of the time to extinction is $p_0(t) = P[T \le t]$ given by (vii). Its density is
$$f(t) = \frac{dp_0(t)}{dt} = \frac{\mu^{n_0}t^{n_0-1}}{(n_0-1)!}e^{-\mu t}.$$
Hence the expected value of $T$ is
$$E(T) = \int_0^{\infty}tf(t)\,dt = \frac{\mu^{n_0}}{(n_0-1)!}\int_0^{\infty}t^{n_0}e^{-\mu t}\,dt = \frac{n_0}{\mu},$$
using an integral formula for the factorial (see the Appendix in the book).
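Because deaths occur at the constant rate $\mu$ while the population is positive, the extinction time is the sum of $n_0$ independent exponential($\mu$) waiting times, which makes $E(T) = n_0/\mu$ easy to confirm by simulation; a short sketch with illustrative values (not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n0, trials = 0.8, 5, 100000    # illustrative values

# Each death is a waiting time Exp(mu); extinction requires n0 deaths in succession,
# so the extinction time is a sum of n0 independent exponentials.
T = rng.exponential(1.0 / mu, size=(trials, n0)).sum(axis=1)
print(T.mean(), n0 / mu)           # sample mean against the exact value n0/mu
```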
6.10. In a birth and death process the birth and death rates are given by

n
= n +,
n
= n,
where represents a constant immigration rate. Show that the probability generating function G(s, t) of
the process satises
G(s, t)
t
= (s )(s 1)
G(s, t)
s
+(s 1)G(s, t).
Show also that, if
G(s, t) = ( s)
/
S(s, t),
then S(s, t) satises
S(s, t)
t
= (s )(s 1)
S(s, t)
s
.
Let the initial population size be n
0
. Solve the partial dierential equation for S(s, t) using the method of
Section 6.5 and conrm that
G(s, t) =
( )
/
[( s) (1 s)e
()t
]
n
0
[( s) (1 s)e
()t
]
n
0
+(/)
.
(Remember the modied initial condition for S(s, t).)
Find p
0
(t), the probability that the population is zero at time t (since immigration takes place even
when the population is zero there is no question of extinction in this process). Hence show that
lim
t
p
0
(t) =

/
if < . What is the limit if > ?
The long term behaviour of the process for < can be investigated by looking at the limit of the
probability generating function as t . Show that
lim
t
G(s, t) =

/
.
This is the probability generating function of a stationary distribution and it indicates that a balance has
been achieved the birth and immigration rates, and the death rate. What is the long term mean population?
If you want a further lengthy exercise investigate the probability generating function in the special case
= .
The dierential-dierence equations are
p

0
(t) = p
0
(t) +p
1
(t), (i)
p

n
(t) = [(n 1) +]p
n1
(t) (n + +n)p
n
(t) + (n + 1)p
n+1
(t), (n = 1, 2, . . .) (ii)
Multiply (ii) by s
n
, sum over n 1 and add (i) leads to

n=0
p

n
(t)s
n
=

n=1
[(n 1) +]p
n1
(t) (n + +n)p
n
(t)s
n
+

n=0
(n + 1)p
n+1
(t)s
n
Let the probability generating function be G(s, t) =

n=0
p
n
(t)s
n
. Then the summations above lead to
G(s, t)
t
= (s )(s 1)
G(s, t)
s
+( 1)G(s, t).
Let G(s, t) = ( s)
/
S(s, t). Then
G(s, t)
s
= ( s)
/
S(s, t)
s
+( s)
(/)1
S(s, t),
and
G(s, t)
t
= ( s)
/
S(s, t)
t
.
This transformation removes the non-derivative term to leave
S(s, t)
t
= (s )(s 1)
S(s, t)
s
.
Now apply the change of variable dened by
ds
dz
= (s )(s 1)
as in Section 6.5(a). Integration gives (see eqn (6.21))
s =
e
()z
e
()z
= h(z) (say). (iii)
The initial condition is equivalent to G(s, 0) = s
n
0
or S(s, 0) = ( s)
/
s
n
0
. It follows that
S(f(z), 0) =

( )
e
()z

e
()z
e
()z

n
0
= w(z) (say).
Since S(f(z), t) = w(z +t) for any smooth function of z +t, it follows that
G(s, t) = ( s)
/
S(f(z), t) = ( s)
/
w(z +t)
= ( s)
/

( )
e
()(z+t)

e
()(z+t)
e
()(z+t)

n
0
,
where z is dened by s = h(z) given by (iii). Finally
G(s, t) =
( )
/
[( s) (1 s)e
()t
]
n
0
[( s) (1 s)e
()t
]
n
0
+(/)
, (iv)
as displayed in the problem.
The probability that the population is zero at time t is
p
0
(t) = G(0, t) =
( )
/
( e
()t
)
n
0
( e
()t
)
n
0
+(/)
.
If < , then
p
0
(t)

/
.
If > , then
p
0
(t) =
( )
/
(e
()t
)
n
0
(e
()t
)
n
0
+(/)

n
0
,
as t .
The long term behaviour for < is determined by letting t in (iv) resulting in
lim
t
G(s, t) =

/
.
Express G(s, t) in the form
G(s, t) = ( )
/
A(s, t)
B(s, t)
,
where
A(s, t) = [( s) (1 s)e
()t
]
n
0
, B(s, t) = [( s) (1 s)e
()t
]
n
0
+(/)
.
Then
G
s
(s, t) = ( )
/
A
s
(s, t)B(s, t) A(s, t)B
s
(s, t)
[B(s, t)]
2
For the mean we require s = 1, for which value
A(1, t) = ( )
n
0
, B(1, t) = ( )
n
0
+(/)
,
A
s
(1, t) = n
0
( +e
()t
)( )
n
0
1
,
B
s
(1, t) = (n
0
+ (/))( +e
()t
)( )
n
0
+(/)1
.
Hence
(t) = G
s
(1, t) = ( )
/
A
s
(1, t)B(1, t) A(1, t)B
s
(1, t)
[B(1, t)]
2
=
n
0

[ +{n
0
( ) }e
()t
].
If < , then
(t)
n
0


,
as t . If > , then the mean becomes unbounded as would be expected.
6.11. In a birth and death process with immigration, the birth and death rates are respectively

n
= n +,
n
= n.
Show directly from the dierential-dierence equations for p
n
(t), that the mean population size (t) satises
the dierential equation
d(t)
dt
= ( )(t) +.
Deduce the result
(t)


as t if < . Discuss the design of a deterministic immigration model based on this equation.
The dierence equations for the probability p
n
(t) are given by
p

0
(t) = p
0
(t) +p
1
(t), (i)
p

n
(t) = [(n 1) +]p
n1
(t) [(n + +n)p
n
(t) + (n + 1)p
n+1
(t). (ii)
The mean (t) is given by (t) =

n=1
np
n
(t). Multiply (ii) by n and sum from n = 1. Then, re-ordering
the sums
d(t)
dt
=

n=1
np

n
(t) =

n=2
n(n 1)p
n1
(t) +

n=1
p
n1
(t)

n=1
np
n
(t)
( +)

n=1
n
2
p
n
(t) +

n=1
n(n + 1)p
n+1
(t)
=

n=1
n(n + 1)p
n
(t) +

n=0
p
n
(t)

n=1
np
n
(t)
( +)

n=1
n
2
p
n
(t) +

n=2
n(n 1)p
n
(t)
= + ( )(t).
The mean of the stochastic process satises a simple deterministic model for a birth and death process
with immigration.
6.12. In a simple birth and death process with equal birth and death rates $\lambda$, the initial population size has a Poisson distribution with probabilities
$$p_n(0) = \frac{e^{-\alpha}\alpha^n}{n!} \quad (n = 0, 1, 2, \ldots),$$
with intensity $\alpha$. It could be thought of as a process in which the initial distribution has arisen as the result of some previous process. Find the probability generating function for this process, and confirm that the probability of extinction at time $t$ is $\exp[-\alpha/(1+\lambda t)]$ and that the mean population size is $\alpha$ for all $t$.

In Section 6.5(b), the probability generating function $G(s,t)$ for the case in which the birth and death rates are equal satisfies
$$\frac{\partial G(s,t)}{\partial t} = \lambda(1-s)^2\frac{\partial G(s,t)}{\partial s}.$$
To solve the partial differential equation, the transformation
$$z = \frac{1}{\lambda(1-s)}, \quad \text{or} \quad s = \frac{\lambda z-1}{\lambda z} \tag{i}$$
is used. The result is that $G(s,t) = w(z+t)$ for any smooth function $w$. The initial condition at $t = 0$ is
$$w(z) = \sum_{n=0}^{\infty}p_n(0)s^n = \sum_{n=0}^{\infty}\frac{\alpha^ne^{-\alpha}}{n!}s^n = e^{\alpha(s-1)} = \exp\left[\alpha\left(\frac{\lambda z-1}{\lambda z}-1\right)\right] = e^{-\alpha/(\lambda z)},$$
using the transformation (i). Hence
$$G(s,t) = w(z+t) = e^{-\alpha/[\lambda(z+t)]} = \exp\left[-\frac{\alpha}{\dfrac{1}{1-s}+\lambda t}\right] = \exp\left[-\frac{\alpha(1-s)}{1+\lambda t(1-s)}\right].$$
The probability of extinction at time $t$ is
$$p_0(t) = G(0,t) = \exp[-\alpha/(1+\lambda t)].$$
Also $G_s(1,t) = \alpha$ for all $t$, which confirms the stated mean population size.
6.13. A birth and death process takes place as follows. A single bacterium is allowed to grow and assumed to behave as a simple birth process with birth rate $\lambda$ for a time $t_1$ without any deaths. No further growth then takes place. The colony of bacteria is then allowed to die with the assumption that it is a simple death process with death rate $\mu$ for a time $t_2$. Show that the probability of extinction after the total time $t_1+t_2$ is
$$\sum_{n=1}^{\infty}e^{-\lambda t_1}(1-e^{-\lambda t_1})^{n-1}(1-e^{-\mu t_2})^n.$$
Using the formula for the sum of a geometric series, show that this probability can be simplified to
$$\frac{e^{\mu t_2}-1}{e^{\lambda t_1}+e^{\mu t_2}-1}.$$

Suppose that at time $t = t_1$ the population size is $n$. From Section 6.3, the probability that the population is of size $n$ at time $t_1$ entirely through births is
$$p_n(t_1) = e^{-\lambda t_1}(1-e^{-\lambda t_1})^{n-1}.$$
From Section 6.4 on the death process, the probability that the population becomes extinct after a further time $t_2$ is
$$q_0(t_2) = (1-e^{-\mu t_2})^n.$$
The probability that the population increases to $n$ and then declines to zero is
$$p_n(t_1)q_0(t_2) = e^{-\lambda t_1}(1-e^{-\lambda t_1})^{n-1}(1-e^{-\mu t_2})^n.$$
Now $n$ can take any value equal to or greater than 1. Hence the probability of extinction through every possible $n$ is
$$s(t_1,t_2) = \sum_{n=1}^{\infty}e^{-\lambda t_1}(1-e^{-\lambda t_1})^{n-1}(1-e^{-\mu t_2})^n.$$
The probability $s(t_1,t_2)$ can be expressed as a geometric series in the form
$$s(t_1,t_2) = \frac{e^{-\lambda t_1}}{1-e^{-\lambda t_1}}\sum_{n=1}^{\infty}\left[(1-e^{-\lambda t_1})(1-e^{-\mu t_2})\right]^n$$
$$= \frac{e^{-\lambda t_1}}{(1-e^{-\lambda t_1})}\cdot\frac{(1-e^{-\lambda t_1})(1-e^{-\mu t_2})}{[1-(1-e^{-\lambda t_1})(1-e^{-\mu t_2})]} = \frac{e^{\mu t_2}-1}{e^{\lambda t_1}+e^{\mu t_2}-1}.$$
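The two-stage structure makes a Monte Carlo check straightforward: the population after the birth phase is geometric with parameter $e^{-\lambda t_1}$ (Section 6.3 with one initial individual), and each individual then survives the death phase independently with probability $e^{-\mu t_2}$. A minimal sketch (illustrative values, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, t1, t2, trials = 1.0, 1.2, 1.5, 2.0, 200000   # illustrative values

n = rng.geometric(np.exp(-lam * t1), size=trials)      # colony size after the birth phase
survivors = rng.binomial(n, np.exp(-mu * t2))          # each cell survives with prob e^(-mu*t2)
sim = np.mean(survivors == 0)                          # estimated extinction probability

exact = (np.exp(mu * t2) - 1) / (np.exp(lam * t1) + np.exp(mu * t2) - 1)
print(sim, exact)
```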
6.14. As in the previous problem a single bacterium grows as a simple birth process with rate $\lambda$ and no deaths for a time $\tau$. The colony numbers then decline as a simple death process with rate $\mu$. Show that the probability generating function for the death process is
$$\frac{[1-e^{-\mu t}(1-s)]e^{-\lambda\tau}}{1-(1-e^{-\lambda\tau})[1-e^{-\mu t}(1-s)]},$$
where $t$ is measured from the time $\tau$. Show that the mean population size during the death process is $e^{\lambda\tau-\mu t}$.

During the birth process the generating function is (see eqn (6.12))
$$G(s,t) = \frac{se^{-\lambda t}}{1-(1-e^{-\lambda t})s},$$
assuming an initial population of 1. For the death process suppose that time restarts from $t = 0$, and that the new probability generating function is $H(s,t)$. At $t = 0$,
$$H(s,0) = G(s,\tau) = \frac{se^{-\lambda\tau}}{1-(1-e^{-\lambda\tau})s}.$$
For the death process the transformation is $s = 1-e^{-\mu z}$, so that
$$H(s,0) = w(z) = \frac{(1-e^{-\mu z})e^{-\lambda\tau}}{1-(1-e^{-\lambda\tau})(1-e^{-\mu z})}.$$
Then, in terms of $s$,
$$H(s,t) = w(z+t) = \frac{[1-e^{-\mu t}(1-s)]e^{-\lambda\tau}}{1-(1-e^{-\lambda\tau})[1-e^{-\mu t}(1-s)]}.$$
The mean population size in the death process is
$$H_s(1,t) = \frac{e^{-\lambda\tau}e^{-\mu t}\{[1-(1-e^{-\lambda\tau})]+(1-e^{-\lambda\tau})\}}{[1-(1-e^{-\lambda\tau})]^2} = \frac{e^{-\lambda\tau}e^{-\mu t}}{e^{-2\lambda\tau}} = e^{\lambda\tau-\mu t}.$$
6.15. For a simple birth and death process the probability generating function (equation (6.23)) is given by
$$G(s,t) = \left[\frac{\mu(1-s)-(\mu-\lambda s)e^{-(\lambda-\mu)t}}{\lambda(1-s)-(\mu-\lambda s)e^{-(\lambda-\mu)t}}\right]^{n_0}$$
for an initial population of $n_0$. What is the probability that the population is (a) zero, (b) 1 at time $t$?

$$G(s,t) = \left[\frac{\mu(1-s)-(\mu-\lambda s)e^{-(\lambda-\mu)t}}{\lambda(1-s)-(\mu-\lambda s)e^{-(\lambda-\mu)t}}\right]^{n_0} = \frac{[\{\mu-\mu e^{-(\lambda-\mu)t}\}-s\{\mu-\lambda e^{-(\lambda-\mu)t}\}]^{n_0}}{[\{\lambda-\mu e^{-(\lambda-\mu)t}\}-s\{\lambda-\lambda e^{-(\lambda-\mu)t}\}]^{n_0}}$$
$$= \left[\frac{\mu-\mu e^{-(\lambda-\mu)t}}{\lambda-\mu e^{-(\lambda-\mu)t}}\right]^{n_0}\left[1+n_0s\left(\frac{\lambda-\lambda e^{-(\lambda-\mu)t}}{\lambda-\mu e^{-(\lambda-\mu)t}}-\frac{\mu-\lambda e^{-(\lambda-\mu)t}}{\mu-\mu e^{-(\lambda-\mu)t}}\right)+\cdots\right].$$
The probabilities $p_0(t)$ and $p_1(t)$ are given by the first two coefficients in this series.
6.16. (An alternative method of solution for the probability generating function) The general solution of
the rst-order partial dierential equation
A(x, y, z)
z
x
+B(x, y, z)
z
y
= C(x, y, z)
is f(u, v) = 0, where f is an arbitrary function, and u(x, y, z) = c
1
and v(x, y, z) = c
2
are two independent
solutions of
dx
A(x, y, z)
=
dy
B(x, y, z)
=
dz
C(x, y, z)
.
This is known as Cauchys method.
Apply the method to the partial dierential equation for the probability generating function for the
simple birth and death process, namely (equation (6.19))
G(s, t)
t
= (s )(s 1)
G(s, t)
s
,
by solving
ds
(s )(1 s)
=
dt
1
=
dG
0
.
Show that
u(s, t, G) = G = c
1
, and v(s, t, G) = e
()t

1 s

= c
2
.
are two independent solutions. The general solution can be written in the form
G(s, t) = H

e
()t

1 s

.
Here H is a function determined by the initial condition G(s, 0) = s
n
0
. Find H and recover formula (6.22)
for the probability generating function.
Note that in the birth and death equation the function C is zero. Comparing the two partial dierential
equations, we have to solve
ds
(s )(1 s)
=
dt
1
=
dG
0
.
The second equality is simply dG = 0. This equation has a general solution which can be expressed as
u(s, t, G) G = c
1
.
The rst equality requires the solution of the dierential equation
ds
dt
= (s )(1 s).
The integration is given essentially in eqn (6.20) in the text which in terms of v can be expressed as
v(s, t, G) e
()t

1 s
(/) s

= c
2
.
Hence the general solution is
f(u, v) = 0, or f

G, e
()t

1 s
(/) s

= 0.
Alternatively, this can be written in the form
G(s, t) = H

e
()t

1 s
(/) s

,
where the function H is determined by initial conditions.
Assuming that the initial population size is n
0
, then G(s, 0) = s
n
0
, which means that H is determined
by
H

1 s
(/) s

= s
n
0
.
Let u = (1 s)/((/) s). Then
H(u) =

u
u

n
0
,
which determines the functional form of H. The result follows by replacing u by
e
()t

1 s
(/) s

,
as the argument of H.
6.17. Apply the Cauchys method outlined in Problem 6.16 to the immigration model in Example 6.3. In
this application the probability generating function satises
G(s, t)
t
= (s 1)G(s, t) +(1 s)
G(s, t)
s
.
Solve the equation assuming an initial population of n
0
.
Reading the coecients of the partial dierential equation in Problem 6.16,
ds
(1 s)
=
dt
1
=
dG
(s 1)
.
Integration of the second equality gives
u(s, t, G) G+

s = c
1
,
whilst the rst gives
v(s, t, G) = e
t
(1 s) = c
2
.
The general solution can be expressed in the functional form
G(s, t) =

s +H(e
t
(1 s)).
From the initial condition G(s, 0) = s
n
0
. Therefore

s +H(1 s) = s
n
0
,
so that
H(1 s) =

s +s
n
0
.
Let u = 1 s: then
H(u) =

(1 u) + (1 u)
n
0
.
The result follows by replacing u by e
t
(1 s) in this formula.
6.18. In a population sustained by immigration at rate $\lambda$ with a simple death process with rate $\mu$ (see Example 6.3), the probability $p_n(t)$ satisfies (equation (6.25))
$$\frac{dp_0(t)}{dt} = -\lambda p_0(t)+\mu p_1(t),$$
$$\frac{dp_n(t)}{dt} = \lambda p_{n-1}(t)-(\lambda+n\mu)p_n(t)+\mu(n+1)p_{n+1}(t).$$
Investigate the steady-state behaviour of the system by assuming that
$$p_n(t)\to p_n, \qquad dp_n(t)/dt\to 0$$
for all $n$, as $t\to\infty$. Show that the resulting difference equations for what is known as the corresponding stationary process,
$$-\lambda p_0+\mu p_1 = 0,$$
$$\lambda p_{n-1}-(\lambda+n\mu)p_n+\mu(n+1)p_{n+1} = 0, \quad (n = 1, 2, \ldots),$$
can be solved iteratively to give
$$p_1 = \frac{\lambda}{\mu}p_0, \quad p_2 = \frac{\lambda^2}{2!\mu^2}p_0, \quad \ldots, \quad p_n = \frac{\lambda^n}{n!\mu^n}p_0, \quad \ldots.$$
Using the condition $\sum_{n=0}^{\infty}p_n = 1$, and assuming that $\lambda < \mu$, determine $p_0$. Find the mean steady-state population size, and compare the result with that obtained in Example 6.3.

From the steady-state difference equations
$$p_1 = \frac{\lambda}{\mu}p_0,$$
$$p_2 = \frac{1}{2\mu}[-\lambda p_0+(\lambda+\mu)p_1] = \frac{\lambda^2}{2!\mu^2}p_0,$$
$$p_3 = \frac{1}{3\mu}[-\lambda p_1+(\lambda+2\mu)p_2] = \frac{\lambda^3}{3!\mu^3}p_0,$$
and so on: the result can be confirmed by an induction proof. The requirement $\sum_{n=0}^{\infty}p_n = 1$ implies
$$\sum_{n=0}^{\infty}\frac{\lambda^n}{n!\mu^n}p_0 = e^{\lambda/\mu}p_0 = 1,$$
so that $p_0 = e^{-\lambda/\mu}$: this is a Poisson distribution with parameter $\lambda/\mu$.
The mean steady-state population size is given by
$$\sum_{n=1}^{\infty}np_n = \sum_{n=1}^{\infty}\frac{np_0}{n!}\left(\frac{\lambda}{\mu}\right)^n = \sum_{n=1}^{\infty}\frac{p_0}{(n-1)!}\left(\frac{\lambda}{\mu}\right)^n = p_0\frac{\lambda}{\mu}e^{\lambda/\mu} = \frac{\lambda}{\mu},$$
which agrees with the limiting value of the mean population size obtained in Example 6.3.
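As a check (not part of the manual), the stationary probabilities can be generated directly from the balance equations and compared with the Poisson form $e^{-\lambda/\mu}(\lambda/\mu)^n/n!$; the rates below are illustrative.

```python
import math

lam, mu, N = 2.0, 3.0, 30       # illustrative rates; keep N terms of the distribution

# Iterate p_{n+1} = [(lam + n*mu)*p_n - lam*p_{n-1}] / ((n+1)*mu), starting from p_0 = 1.
p = [1.0, lam / mu]
for n in range(1, N):
    p.append(((lam + n * mu) * p[n] - lam * p[n - 1]) / ((n + 1) * mu))

total = sum(p)
p = [x / total for x in p]      # normalise the truncated distribution

poisson = [math.exp(-lam / mu) * (lam / mu) ** n / math.factorial(n) for n in range(N + 1)]
print(max(abs(a - b) for a, b in zip(p, poisson)))     # should be essentially zero
```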
6.19. In a simple birth process the probability that the population is of size $n$ at time $t$ given that it was $n_0$ at time $t = 0$ is given by
$$p_n(t) = \binom{n-1}{n_0-1}e^{-n_0\lambda t}(1-e^{-\lambda t})^{n-n_0}, \quad (n \ge n_0)$$
(see Section 6.3 and Figure 6.1). Show that the probability achieves its maximum value for given $n$ and $n_0$ when $t = (1/\lambda)\ln(n/n_0)$. Find also the maximum value of $p_n(t)$ at this time.

Differentiating $p_n(t)$, we obtain
$$\frac{dp_n(t)}{dt} = \binom{n-1}{n_0-1}\lambda e^{-n_0\lambda t}(1-e^{-\lambda t})^{n-n_0-1}\left[-n_0(1-e^{-\lambda t})+(n-n_0)e^{-\lambda t}\right].$$
The derivative is zero if
$$e^{-\lambda t} = \frac{n_0}{n}, \quad \text{or} \quad t = \frac{1}{\lambda}\ln\left(\frac{n}{n_0}\right).$$
Substituting this time back into $p_n(t)$, it follows that
$$\max_t[p_n] = \binom{n-1}{n_0-1}\frac{n_0^{n_0}(n-n_0)^{n-n_0}}{n^n}.$$
6.20. In a birth and death process with equal birth and death parameters $\lambda$, the probability generating function is (see eqn (6.24))
$$G(s,t) = \left[\frac{1+(\lambda t-1)(1-s)}{1+\lambda t(1-s)}\right]^{n_0}.$$
Find the mean population size at time $t$. Show also that its variance is $2n_0\lambda t$.

The derivative of $G(s,t)$ with respect to $s$ is given by
$$G_s(s,t) = \frac{n_0[\lambda t-(\lambda t-1)s]^{n_0-1}}{[(1+\lambda t)-\lambda ts]^{n_0+1}}.$$
The mean population size at time $t$ is
$$\mu(t) = G_s(1,t) = n_0.$$
We require the second derivative given by
$$G_{ss}(s,t) = \frac{n_0[\lambda t-(\lambda t-1)s]^{n_0-2}}{[(1+\lambda t)-\lambda ts]^{n_0+2}}\left[n_0-1+2\lambda ts+2\lambda^2t^2(1-s)\right].$$
The variance of the population size is
$$V(t) = G_{ss}(1,t)+G_s(1,t)-[G_s(1,t)]^2 = [2n_0\lambda t+n_0^2-n_0]+n_0-n_0^2 = 2n_0\lambda t.$$
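The mean $n_0$ and variance $2n_0\lambda t$ can be verified numerically by differencing the generating function at $s = 1$; a small sketch with illustrative values (the step $h$ is a numerical choice, not from the text):

```python
lam, t, n0, h = 0.7, 3.0, 4, 1e-4      # illustrative values; h is the difference step

def G(s):
    return ((1 + (lam * t - 1) * (1 - s)) / (1 + lam * t * (1 - s))) ** n0

Gs = (G(1 + h) - G(1 - h)) / (2 * h)               # numerical G_s(1, t)
Gss = (G(1 + h) - 2 * G(1) + G(1 - h)) / h ** 2    # numerical G_ss(1, t)

print(Gs, n0)                                      # mean: both close to n0
print(Gss + Gs - Gs ** 2, 2 * n0 * lam * t)        # variance: both close to 2*n0*lam*t
```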
6.21. In a death process the probability that a death occurs in time t is a time-dependent parameter (t)n
when the population size is n. The pgf G(s, t) satises
G
t
= (t)(1 s)
G
s
.
as in Section 6.4. Show that
G(s, t) = [1 e

(1 s)]
n
0
,
where
=

t
0
(s)ds.
Find the mean population size at time t.
In a death process it is found that the expected value of the population size at time t is given by
(t) =
n
0
1 +t
, (t 0),
where is a positive constant. Estimate the corresponding death-rate (t).
Let
z =

ds
1 s
, so that s = 1 e
z
,
and
=

t
0
(u)du.
The equation for the probability generating function becomes
G

=
G
z
.
The general solution can be expressed as G(s, t) = w(z + ) for any arbitrary dierentiable function w.
Initially = 0. Hence
G(s, 0) = s
n
0
= w(z) = (1 e
z
)
n
0
,
so that
G(s, t) = w(z +) = [1 e
(z+)
]
n
0
= [1 e

(1 s)]
n
0
.
The mean population size at time t is given by
(t) = G
s
(1, t) = n
0
e

[1 e
tau
(1 s)]
n
0
1

s=1
= n
0
e

= n
0
exp

t
0
(u)du

.
Given the mean
(t) =
n
0
1 +t
= n
0
exp

t
0
(u)du

, (i)
it follows that the death-rate is (t) = /(1 +t), which can be obtained by dierentiating both sides (i)
with respect to t.
6.22. A population process has a probability generating function G(s, t) which satises the equation
e
t
G
t
= (s 1)
2
G
s
.
If, at time t = 0, the population size is n
0
, show that
G(s, t) =

1 + (1 s)(e
t
1)
1 +(1 s)(e
t
1)

n
0
.
Find the mean population size at time t, and the probability of ultimate extinction.
The generating function satises
e
t
G
t
= (x 1)
2
G
s
.
Let
=

t
0
e
u
du, z =

ds
(s 1)
2
.
The transformations are = e
t
1 and s = (z 1)/(z). The transformed partial dierential equation
has the general solution G(s, t) = w(z +). The initial condition is
G(s, 0) = s
n
0
=

1
1
z

n
0
= w(z).
Hence
G(s, t) =

1
1
(z +)

n
0
=

1
s 1
1 +(s 1)(e
t
1)

n
0
=

(s 1)(e
t
1) 1
(s 1)(e
t
1) 1

n
0
as required.
The mean population size is
(t) = G
s
(1, t) = n
0
.
The probability of extinction at time t is
p
0
(t) = G(0, t) =

(e
t
1)
(e
t
1) + 1

n
0
1
as t .
6.23. A population process has a probability generating function given by
G(s, t) =
1 e
t
(1 s)
1 +e
t
(1 s)
,
where is a parameter. Find the mean of the population size at time t, and its limit as t . Expand
G(s, t) in powers of s, determine the probability that the population size is n at time t.
We require the derivative
G
s
(s, t) =
e
t
[1 +e
t
(1 s)] +e
t
(1 s)[1 e
t
(1 s)]
[1 +e
t
(1 s)]
2
=
2e
t
[1 +e
t
(1 s)]
2
Then the mean population size is
(t) = G
s
(1, t) = 2e
t
.
To nd the individual probabilities we require the power series expansion of G(s, t). Using a binomial
expansion
G(s, t) =

1 e
t
1 +e
t

1 +
e
t
s
1 e
t

1
e
t
s
1 +e
t

1
=

1 e
t
1 +e
t

1 +
e
t
s
1 e
t

n=0

e
t
1 +e
t

n
s
n
=

1 e
t
1 +e
t

n=0

e
t
1 +e
t

n
s
n
+

e
t
1 e
t

n=1

e
t
1 +e
t

n
s
n+1

the coecients of the powers s


n
give the following probabilities:
p
0
(t) =
1 e
t
1 +e
t
,
p
n
(t) =
1 e
t
1 +e
t


n
e
nt
(1 +e
t
)
n
+

n
e
nt
(1 +e
t
)
n1
1
(1 e
t)

=
2
n
e
nt
(1 +e
t
)
n+1
, (n 1).
6.24. In a birth and death process with equal rates , the probability generating function is given by (see
eqn (6.24))
G(s, t) =

(z +t) 1
(z +t)

n
0
=

1 + (t 1)(1 s)
1 +t(1 s)

n
0
,
where n
0
is the initial population size. Show that p
i
, the probability that the population size is i at time t,
is given by
p
i
(t) =
i

m=0

n
0
m

n
0
+i m1
i m

(t)
m
(t)
n
0
+im
if i n
0
, and by
p
i
(t) =
n
0

m=0

n
0
m

n
0
+i m1
i m

(t)
m
(t)
n
0
+im
if i > n
0
, where
(t) =
1 t
t
, (t) =
t
1 +t
.
Expand G(s, t) as a power series in terms of s using the binomial expansion:
G(s, t) =

t
1 +t

n
0

1 +
1 t
t
s

n
0

1
t
1 +t
s

n
0
=

t
1 +t

n
0
n
0

k=0

n
0
k

k
s
k

j=0

n
0
+j 1
j

j
s
j
,
where
=
1 t
t
, =
t
1 +t
.
Two cases have to be considered.
(a) i n
0
.
p
i
(t) =

t
1 +t

n
0

n
0
0

n
0
+i 1
i

i
+

n
0
1

n
0
+i 2
i 1

i1
+
+

n
0
i

n
0
1
0

t
1 +t

n
0
+i
i

m=0

n
0
m

n
0
+i 1 m
i m

m
=

t
1 +t

n
0
+i
i

m=0

n
0
m

n
0
+i 1 m
i m

1 (t)
2
(t)
2

m
(b) i > n
0
.
p
i
(t) =

t
1 +t

n
0

n
0
0

n
0
+i 1
i

i
+ +

n
0
n
0

n
0

i 1
i n
0

in
0

t
1 +t

n
0
+i
n
0

m=0

n
0
m

n
0
+i 1 m
i m

1 (t)
2
(t)
2

m
6.25. We can view the birth and death process by an alternative dierencing method. Let p
ij
(t) be the
conditional probability
p
ij
(t) = P(N(t) = j|N(0) = i),
where N(t) is the random variable representing the population size at time t. Assume that the process is in
the (xed) state N(t) = j at times t and t +t and decide how this can arise from an incremental change
t in the time. If the birth and death rates are
j
and
j
, explain why
p
ij
(t +t) = p
ij
(t)(1
i
t
i
t) +
i
tp
i+1,j
(t) +
i
tp
i1,j
(t)
for i = 1, 2, 3, . . ., j = 0, 1, 2, . . .. Take the limit as t 0, and conrm that p
ij
(t) satises the dierential
equation
dp
ij
(t)
dt
= (
i
+
i
)p
ij
(t) +p
i+1,j
(t) +
i
p
i1,j
(t).
How should p
0,j
(t) be interpreted?
In this approach the nal state in the process remains xed, that is, the j in p
ij
. We now view p
ij
(t+t)
as p
ij
(t + t) in other words see what happens in an initial t. There will be a birth with probability

i
in a time t or a death with probability
i
. Then
p
ij
(t +t) = p
ij
(t)[1
i
t
i
t] +
i
t +
i
tp
i1,j
(t) +o(t)
for i = 1, 2, 3, . . .; j = 1, 2, 3, . . .. Hence
p
ij
(t +t) p
ij
(t)
t
= (
i
+
i
)p
ij
(t) +
i
p
i+1,j
(t) +
i
p
i1,j
(t) +o(1).
In the limit t 0,
dp
ij
(t)
dt
= (
i
+
i
)p
ij
(t) +
i
p
i+1,j
(t) +
i
p
i1,j
(t),
where we require
p
0,j
(t) = P(N(t) = j|N(0) = 0) =

0, j > 0
1, j = 0
6.26. Consider a birth and death process in which the rates are
i
= i and
i
= i, and the initial
population size is n
0
= 1. If
p
1,j
= P(N(t) = j|N(0) = 1),
it was shown in Problem 6.25 that p
1,j
satises
dp
1,j
(t)
dt
= ( +)p
1,j
(t) +p
2,j
(t) +p
0,j
(t),
where
p
0,j
(t) =

0, j > 0
1, j = 0
If
G
i
(s, t) =

j=0
p
ij
(t)s
j
,
show that
G
1
(s, t)
t
= ( +)G
1
(s, t) +G
2
(s, t) +.
Explain why G
2
(s, t) = [G
1
(s, t)]
2
(see Section 6.5). Hence solve what is eectively an ordinary dierential
equation for G
1
(s, t), and conrm that
G
1
(s, t) =
(1 s) ( s)e
()t
(1 s) ( s)e
()t
,
as in eqn (6.23) with n
0
= 1.
Given
dp
1,j
(t)
dt
= ( +)p
1,j
(t) +p
2,j
(t) +p
0,j
(t),
multiply the equation by s
j
and sum over j = 1, 2, . . .. Then
G(s, t)
t
= ( +)G
1
(s, t) +G
2
(s, t) +(t).
Also
G
2
(s, t) = E[s
N
1
(t)
]E[s
N
2
(t)
] = E[s
N
1
(t)
]
2
= G
1
(s, t)
2
.
Therefore
G
1
(s, t)
t
= ( +)g
1
(s, t) +G
1
(s, t)
2
+.
This is a separable rst-order dierential equation with general solution

dG
1
(G
1
)(G
1
1)
=

dt +A(x) = t +A(x),
where the constant is a function of x. Assume that |G| < min(1, /). Then
t +A(x) =

G
1

dG
1
G
1

+
1

dG
1
G
1
1
=
1

ln

1 G
1
G
1

.
Hence
G
1
(s, t) =
+B(x)e
()t
+B(x)e
()t
,
where B(x) (more convenient than A(x)) is a function to be determined by the initial conditions.
Initially, G(s, 0) = s, so that B(x) = (s)/(1 s). Finally G
1
(s, t) agrees with G(s, t) in eqn (6.23)
with n
0
= 1.
6.27. In a birth and death process with parameters and , ( > ), and initial population size n
0
, show
that the mean time to extinction of the random variable T
n
0
is given by
E(T
n
0
) = n
0
( )
2


0
te
()t
[ e
()t
]
n
0
1
[ e
()t
]
n
0
+1
dt.
If n
0
= 1, using integration by parts, evaluate the integral over the interval (0, ), and then let to
show that
E(T
1
) =
1

ln

.
The distribution function for T
n
0
is given by
F(t) = p
0
(t) = G(0, t) =

e
()t
e
()t

n
0
,
(put s = 0 in (6.23)). Its density is
f(t) =
dF(t)
dt
= n
0
( )
2
e
()t
[ e
()t
]
n
0
1
[ e
()t
]
n
0
+1
, (t > 0).
The mean time to extinction is
E(T
n
0
) =


0
tf(t)dt = n
0
( )
2


0
te
()t
[ e
()t
]
n
0
1
[ e
()t
]
n
0
+1
dt,
as required.
If n
0
= 1, then
E(T
1
) = ( )
2


0
te
()t
[ e
()t
]
2
dt
=
( )
2


0
te
()t
[1 (/)e
()t
]
2
dt
Integrate the following nite integral between t = 0 and t = :


0
te
()t
dt
[1 (/)e
()t
]
2
=

( )

t
1 (/)e
()t
+

dt
1 (/)e
()t

0
=

( )

t
1 (/)e
()t
+
1

ln{e
()t
(/)}

0
=

( )


1 (/)
+ +
1

ln{1 (/)e
()
}

1

ln{1 (/)}


( )
2
ln{1 (/)},
as . Finally
E(T
1
) =
1

ln

.
6.28. A death process (see Section 6.4) has a parameter $\mu$ and the initial population size is $n_0$. Its probability generating function is
$$G(s,t) = [1-e^{-\mu t}(1-s)]^{n_0}.$$
Show that the mean time to extinction is
$$\frac{n_0}{\mu}\sum_{k=0}^{n_0-1}\frac{(-1)^k}{(k+1)^2}\binom{n_0-1}{k}.$$

Let $T_{n_0}$ be a random variable representing the time to extinction. The probability distribution of $T_{n_0}$ is given by
$$F(t) = p_0(t) = G(0,t) = (1-e^{-\mu t})^{n_0}.$$
The mean time to extinction is
$$E(T_{n_0}) = \int_0^{\infty}t\,\frac{dp_0(t)}{dt}\,dt = n_0\mu\int_0^{\infty}te^{-\mu t}(1-e^{-\mu t})^{n_0-1}\,dt.$$
Replace $(1-e^{-\mu t})^{n_0-1}$ by its binomial expansion, namely
$$(1-e^{-\mu t})^{n_0-1} = \sum_{k=0}^{n_0-1}(-1)^k\binom{n_0-1}{k}e^{-k\mu t},$$
and integrate the series term-by-term:
$$E(T_{n_0}) = n_0\mu\sum_{k=0}^{n_0-1}(-1)^k\binom{n_0-1}{k}\int_0^{\infty}te^{-(k+1)\mu t}\,dt = \frac{n_0}{\mu}\sum_{k=0}^{n_0-1}\frac{(-1)^k}{(k+1)^2}\binom{n_0-1}{k}.$$
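Since the extinction time of this linear death process is the maximum of $n_0$ independent exponential($\mu$) lifetimes, the alternating sum should equal the harmonic-number form $H_{n_0}/\mu$; the short check below (my addition, with an arbitrary $\mu$ and $n_0$) confirms that the two expressions agree.

```python
import math
from fractions import Fraction

mu, n0 = 1.5, 6                    # illustrative values

# Alternating-sum form of E(T) derived above (kept exact with Fractions).
s = sum(Fraction((-1) ** k, (k + 1) ** 2) * math.comb(n0 - 1, k) for k in range(n0))
alt = n0 * s / mu

# The extinction time is the maximum of n0 iid Exp(mu) lifetimes, with mean H_{n0}/mu.
harmonic = sum(Fraction(1, j) for j in range(1, n0 + 1)) / mu

print(float(alt), float(harmonic))  # the two values coincide
```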
6.29. A colony of cells grows from a single cell without deaths. The probability that a single cell divides
into two cells in a time interval t is t + o(t). As in Problem 6.1, the probability generating function
for the process is
G(s, t) =
se
t
1 (1 e
t
)s
.
By considering the probability
F(t) = 1
n1

k=n
p
k
(t),
that is, the probability that the population is n or greater by time t, show that the expected time T
n
for the
population to rst reach n 2 is given by
E(T
n
) =
1

n1

k=1
1
k
.
The expansion of the generating function is
G(s, t) =

n=1
e
t
(1 e
t
)
n1
s
n
.
Consider the probability function
F(t) =

k=n
p
k
(t) = 1
n1

k=1
p
k
(t) = 1
n1

k=1
e
t
(1 e
t
)
k1
.
Its density is dened by, for n 3, (although it is not required)
f(t) =
dF(t)
dt
= e
t
+e
t
n1

k=2
(1 ke
t
)(1 e
t
)
k2
, (t > 0)
and for n = 2,
f(t) = e
t
, (t > 0).
Then,
E(T
n
) = lim


0
t
dF(t)
dt
dt = lim

[tF(t)


0
F(t)dt

= lim


n1

k=1
e

(1 e

)
k1

1
n1

k=1
e
t
(1 e
t
)
k1

dt

= lim


n1

k=1
e

(1 e

)
k1
+
1

n1

k=1
1
k
(1 e

)
k

=
1

n1

k=1
1
k
.
6.30. In a birth and death process, the population size represented by the random variable N(t) grows
as a simple birth process with parameter . No deaths occur until time T when the whole population
dies. The distribution of the random variable T is exponential with parameter . The process starts with
one individual at time t = 0. What is the probability that the population exists at time t, namely that
P[N(t) > 0]?
What is the conditional probability P[N(t) = n|N(t) > 0] for n = 1, 2, . . .? Hence show that
P[N(t) = n] = e
(+)t
(1 e
t
)
n1
.
Construct the probability generating function of this distribution, and nd the mean population size at time
t.
Since the catastrophe is a Poisson process with intensity
P[N(t) > 0] = e
t
.
The process must be simple birth conditional on no deaths, namely
P[N(t) = n|N(t) > 0] = e
t
(1 e
t
)
n1
, (n = 1, 2, . . .).
Hence
P[N(t) = 0] = 1 P[N(t) > 0] = 1 e
t
,
P[N(t) = n] = P[N(t) = n|N(t) > 0]P[N(t) > 0] = e
(+)t
(1 e
t
)
n1
, (n = 1, 2, . . .).
The probability generating function is G(s, t), where
G(s, t) =

n=0
P[N(t) = n]s
n
= 1 e
t
+

n=1
e
(+)t
(1 e
t
)
n1
s
n
= 1 e
t
+e
(+)t
s
1 s(1 e
t
)
using the formula for the sum of a geometric series.
For the mean, we require
G(s, t)
s
= e
(+)t

1
1 s(1 e
t
)
+
s(1 e
t
)
[1 s(1 e
t
)]
2

.
Then the mean
(t) =
G(s, t)
s

s=1
= e
(+)t
[e
t
+e
2t
e
t
] = e
()t
.
6.31. In a birth and death process, the variable birth and death rates are, for t > 0, respectively given by

n
(t) = (t)n > 0, (n = 0, 1, 2, . . .)
n
(t) = (t)n > 0, (n = 1, 2, . . .).
If p
n
(t) is the probability that the population size at time t is n, show that its probability generating function
is
G(s, t) =

n=0
p
n
(t)s
n
,
satises
G
t
= (s 1)[(t)s (t)]
G
s
.
Suppose that (t0 = (t) ( > 0, = 1), and that the initial population size is n
0
. Show that
G(s, t) =

1 q(s, t)
1 q(s, t)

n
0
where q(s, t) =

1 s
s

exp

(1 )

t
0
(u)du

.
Find the probability of extinction at time t.
Using eqns (6.25), the dierential-dierence equations are
p

0
(t) = (t)p
1
(t),
p

n
(t) = (n 1)(t)p
n1
(t) n[(t) +(t)]p
n
(t) + (n + 1)(t)p
n+1
(t), (n = 1, 2, . . .).
In the usual way multiply the equations by s
n
and sum over n:

n=0
p

n
(t)s
n
=

n=2
(n 1)(t)p
n1
(t)s
n
[(t) +(t)]

n=1
np
n
(t)s
n
+

n=0
(n + 1)(t)p
n+1
(t)s
n
Let G(s, t) =

n=0
p
n
(t)s
n
. Then the series can be expressed in terms of G(s, t) as
G
t
= (t)s
2
G
s
[(t) +(t)]s
G
s
+(t)
G
s
= (s 1)[(t)s (t)]
G
s
Let (t) = (t), ( = 1). Then
G
t
= (s 1)(s )
G
s
.
Let d = (t)dt so that can be dened by
=

t
0
(u)du.
Let ds/dz = (s 1)(s ) and dene z by
z =

ds
(s 1)(s )
=
1
1


1
s

1
1 s

ds
=
1
1
ln

1 s
s

,
where s < min(1, ). Inversion of this equation gives
s =
1 e
(1)z
1 e
(1)z
= q(z),
say. Let G(s, t) = Q(z, ) after the change of variable. Q(z, ) satises
Q

=
Q
z
.
Since the initial population size is n
0
, then
Q(z, 0) = s
n
0
=

1 e
(1)z
1 e
(1)z

n
0
.
Hence
Q(z, ) =

1 e
(1)(z+)
1 e
(1)(z+)

n
0
.
Finally
G(s, t) =

1 q(s, t)
1 q(s, t)

n
0
where q(s, t) =

1 s
s

exp

(1 )

t
0
(u)du

as required.
The probability of extinction is
G(0, t) =

1 q(0, t)
1 q(0, t)

n
0
,
where
q(0, t) =
1

exp

(1 )

t
0
(u)du

.
6.32. A continuous time process has three states E
1
, E
2
, and E
3
. In time t the probability of a change
from E
1
to E
2
is t, from E
2
to E
3
is also t, and from E
2
to E
1
is t. E
3
can be viewed as an
absorbing state. If p
i
(t) is the probability that the process is in state E
i
(i = 1, 2, 3) at time t, show that
p

1
(t) = p
1
(t) +p
2
(t), p

2
(t) = p
1
(t) ( +)p
2
(t), p

3
(t) = p
2
(t).
Find the probabilities p
1
(t), p
2
(t), p
3
(t), if the process starts in E
1
at t = 0.
The process survives as long as it is in states E
1
or E
2
. What is the survival probability of the process?
By the usual birth and death method
p
1
(t +t) = tp
2
(t) + (1 t)p
1
(t) +O((t)
2
),
p
2
(t +t) = tp
1
(t) + (1 t t)p
2
(t) +O((t)
2
),
p
3
(t +t) = tp
2
(t) +O((t)
2
).
Let t 0 so that the probabilities satisfy
p

1
(t) = p
2
(t) p
1
(t), (i)
p

2
(t) = p
1
(t) ( +)p
2
(t), (ii)
p

3
(t) = p
2
(t). (iii)
Eliminate p
2
(t) between (i) and (ii) so that p
1
(t)
p

1
(t) + (2 +)p

1
(t) +
2
p
1
(t) = 0.
This second-order dierential equation has the characteristic equation
m
2
+ (2 +)m+
2
= 0,
which has the solutions
m
1
m
2

= ,
where =
1
2
(2 +) and =
1
2

(4 +). Therefore, since p


1
(0) = 1, then
p
1
(t) = Ae
m
1
t
+ (1 A)e
m
2
t
.
From (i), since p
2
(0) = 0,
p
2
(t) =
1

[p

1
(t) +p
1
(t)]
=
1

[A(m
1
+)e
m
1
t
+ (1 A)(m
2
+)e
m
2
t
]
=
(m
1
+)(m
2
+)
(m
1
m
2
)
[e
m
1
t
+e
m
2
t
]
It follows that
p
1
(t) =
1
m
1
m
2
[(m
2
+)e
m
1
t
+ (m
1
+)e
m
2
t
].
The survival probability at time t is p
1
(t) +p
2
(t).
6.33. In a birth and death process, the birth and death rates are given respectively by (t)n and (t)n in
eqn (6.25). Find the equation for the probability generating function G(s, t). If (t) is the mean population
size at time t, show, by dierentiating the equation for G(s, t) with respect to s, that

(t) = [(t) (t)](t),


(assume that (s 1)
2
G(s, t)/s
2
= 0 when s = 1)). Hence show that
(t) = n
0
exp

t
0
[(u) (u)]du

,
where n
0
is the initial population size.
The dierential dierence equations for the probability p
n
(t) are (see eqn (6.25))
p

0
(t) = (t)p
1
(t),
p

n
(t) = (t)(n 1)p
n1
(t) [(t)n +(t)n]p
n
(t) +(t)(n + 1)p
n+1
(t).
Hence the probability generating function G(s, t) satises
G(s, t)
t
= [(t)s (t)](s 1)
G(s, t)
s
, (i)
(the method parallels that in Section 6.5). Dierentiating (i) with respect to s:

2
G(s, t)
st
= (t)(s 1)
G(s, t)
s
+ [(t)s (t)]
G(s, t)
s
+ [(t)s (t)](s 1)

2
G(s, t)
s
2
.
Put s = 1 and remember that (t) = G
s
(s, t). Then

(t) = [(t) (t)](t).


Hence integration of this dierential equation gives
(t) = n
0
exp

t
0
[(u) (u)]du

.
Chapter 7
Queues
7.1. In a single-server queue a Poisson process for arrivals of intensity $\frac{1}{2}\mu$ and for service and departures of intensity $\mu$ are assumed. For the corresponding stationary process find
(a) $p_n$, the probability that there are $n$ persons in the queue,
(b) the expected length of the queue,
(c) the probability that there are not more than two persons in the queue, including the person being served in each case.

(a) As in Section 7.3, with $\lambda = \frac{1}{2}\mu$,
$$p_n = (1-\rho)\rho^n, \qquad \rho = \frac{\lambda}{\mu} = \frac{1}{2}.$$
In this case $p_n = 1/2^{n+1}$.
(b) If $N$ is the random variable of the number $n$ of persons in the queue (including the person being served), then its expected value is (see Section 7.3(b))
$$E(N) = \frac{\rho}{1-\rho} = 1.$$
(c) The probability that there are not more than two persons in the queue is
$$p_0+p_1+p_2 = \frac{1}{2}+\frac{1}{4}+\frac{1}{8} = \frac{7}{8}.$$
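These three answers are easy to reproduce numerically from $p_n = (1-\rho)\rho^n$ with $\rho = 1/2$; a trivial check (my sketch, not from the text):

```python
rho = 0.5
p = [(1 - rho) * rho ** n for n in range(200)]    # truncated distribution p_n = (1-rho)*rho^n

print(sum(n * pn for n, pn in enumerate(p)))      # expected length, approx 1
print(sum(p[:3]))                                 # P(no more than two in the queue) = 7/8
```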
7.2. Consider a telephone exchange with a very large number of lines available. If n lines are busy the
probability that one of them will become free in small time t is nt. The probability of a new call is t
(that is, Poisson), with the assumption that the probability of multiple calls is negligible. Show that p
n
(t),
the probability that n lines are busy at time t satises
p

0
(t) = p
0
(t) +p
1
(t),
p

n
(t) = ( +n)p
n
(t) +p
n1
(t) + (n + 1)p
n+1
(t), (n 1).
In the stationary process show by induction that
p
n
= lim
t
p
n
(t) =
e
/
n!

n
.
Identify the distribution.
If p
n
= lim
t
p
n
(t) (assumed to exist), then the stationary process is dened by the dierence
equations
p
0
+p
1
= 0,
(n + 1)p
n+1
( +n)p
n
+p
n1
= 0.
Assume that
p
n
=
e
/
n!

n
.
p
n+1
=
1
(n + 1)
[( +n)p
n
p
n1
]
=
1
(n + 1)

+n
n!
e
/mu


(n 1)!

n1

=
e
/
(n + 1)!

n
[ +n n]
=
e
/
(n + 1)!

n+1
Hence if the formula is true for p
n
and p
n1
, then it is true for p
n+1
. It can be veried that that p
1
and
p
2
are correct: therefore by induction on the positive integers the formula is proved. The distribution is
Poisson with parameter /.
7.3. For a particular queue, when there are n customers in the system, the probability of an arrival in the
small time interval t is
n
t +o(t). The service time parameter
n
is also a function of n. If p
n
denotes
the probability that there are n customers in the queue in the steady state queue, show by induction that
p
n
= p
0

1
. . .
n1

2
. . .
n
, (n = 1, 2, . . .)
and nd an expression for p
0
.
If
n
= 1/(n + 1) and
n
= , a constant, nd the expected length of the queue.
Let p
n
(t) be the probability that there are n persons in the queue. Then, by the usual arguments,
p
0
(t +t) =
1
p
1
(t) + (1
1
t)p
0
(t),
p
n
(t +t) =
n1
p
n1
(t)t +
n+1
p
n+1
(t)t + (1
n
t
n
t)p
n
(t), (n = 1, 2, . . .).
Divide through by t, and let t 0:
p

0
(t) =
1
p
1
(t),
p

n
(t) =
n+1
p
n+1
(t) (
n
+
n
)p
n
(t) +
n1
p
n1
(t).
Assume that a limiting stationary process exists, such the p
n
= lim
t
p
n
(t). Then p
n
satises

1
p
1

0
p
0
= 0,

n+1
p
n+1
(
n
+
n
)p
n
+
n1
p
n1
= 0.
Assume that the given formula is true for p
n
and p
n1
. Then, using the dierence equation above,
p
n+1
=
1

n+1

(
n
+
n
)

1
. . .
n1

2
. . .
n

n1

1
. . .
n2

2
. . .
n1

=

0

1
. . .
n

2
. . .
n+1
p
0
,
Showing that the formula is true for p
n+1
. It can be veried directly that
p
1
=

0

1
p
0
, p
2
=

0

2
p
0
.
Induction proves the result for all n.
The probabilities satisfy

n=0
p
n
= 1. Therefore
1 = p
0
+p
0

n=1

1
. . .
n1

2
. . .
n
,
provided that the series converges. If that is the case, then
p
0
= 1

1 +

n=1

1
. . .
n1

2
. . .
n

.
If
n
= 1/(n + 1) and
n
= , then
p
n
=
p
0

n

1
1

1
2

1
n
=
p
0

n
n!
,
where
p
0
= 1

1 +

n=1
1

n
n!

= e
1/
.
Hence
p
n
=
e
1/

n
n!
.
which is Poisson with parameter 1/.
The expected length of the queue is the usual result for the mean:
=

n=1
np
n
= e
1/

n=1
1

n
(n 1)!
= e
1/
1

e
1/
=
1

.
7.4. In a baulked queue (see Example 7.1) not more than m 2 people are allowed to form a queue. If
there are m individuals in the queue, then any further arrivals are turned away. If the arrivals form a
Poisson process with parameter and the service distribution is exponential with parameter , show that
the expected length of the queue is
(m+ 1)
m+1
+m
m+2
(1 )(1
m+1
)
,
where = /. Deduce the expected length if = 1. What is the expected length of the queue if m = 3 and
= 1?
How does the expected length of the baulked queue behave as ρ becomes large?
From Example 7.1, the probability of a baulked queue having length n is
p
n
=

n
(1 )
(1
m+1
)
, (n = 0, 1, 2, . . . , m), ( = 1). (i)
The expected length is
E(N) =
m

n=1
np
n
=

n=1
n
n
(1 )
(1
m+1
)
=

1
1
m+1

n=1
n
n
.
Let
S =
m

n=1
n
n
.
Then
(1 )S =
m

n=1

n
m
m+2
.
Further summation of the geometric series gives
S =
(m+ 1)
m+1
+m
m+2
(1 )
2
,
so that the expected length of the queue is
E(N) =
(m+ 1)
m+1
+m
m+2
(1 )(1
m+1
)
, ( = 1). (ii)
If ρ = 1, then, applying l'Hôpital's rule to (i)
p
n
=

d
d
[
n
(1 )]

d
d
[1
m+1
]

=1
=
1
m+ 1
.
In this case the expected length is
E(N) =
m

n=1
np
n
=
1
m+ 1
m

n=1
n =
1
(m+ 1)

1
2
m(m+ 1) =
1
2
m,
using an elementary formula for the sum of the rst m integers.
If = 1 and m = 3, then E(N) = 3/2.
The expected length in (ii) can be re-arranged into
E(N) =

m1
(m+ 1)
1
+m
(
1
1)(
1
1)
m,
as . For the baulked queue there is no restriction on .
7.5. Consider the single-server queue with Poisson arrivals occurring with parameter , and exponential
service times with parameter . In the stationary process, the probability p
n
that there are n individuals in
the queue is given by
p
n
=

n
, (n = 0, 1, 2, . . .).
Find its probability generating function
G(s) =

n=0
p
n
s
n
.
If < , use this function to determine the mean and variance of the queue length.
The probability generating function G(s, t) is dened by
G(s, t) =

n=0

n
s
n

n+1
s
n

.
Summation of the two geometric series
G(s, t) =

s



s
=

s
.
The rst two derivatives of G(s, t) are
G
s
(s, t) =
( )
( s)
2
, G
ss
(s, t) = 2

2
( )
( s)
3
Hence the mean and variance are given by
E(N) = G
s
(1, t) =


,
V(N) = G
ss
(1, t) +G
s
(1, t) [G
s
(1, t)]
2
=
2
2
( )
2
+




2
( )
2
=

( )
2
7.6. A queue is observed to have an average length of 2.8 individuals including the person being served. Assuming the usual exponential distributions for both service times and times between arrivals, what is the traffic density, and the variance of the queue length?

With the usual parameters $\lambda$ and $\mu$ and the random variable $N$ of the length of the queue, its expected length is, with $\rho = \lambda/\mu$,
$$E(N) = \frac{\rho}{1-\rho},$$
which is 2.8 from the data. Hence the traffic density is $\rho = 2.8/3.8 \approx 0.74$.
The probability that the queue length is $n$ in the stationary process is (see eqn (7.5)) $p_n = (1-\rho)\rho^n$. The variance of the queue length is given by
$$V(N) = E(N^2)-[E(N)]^2 = (1-\rho)\sum_{n=1}^{\infty}n^2\rho^n-\frac{\rho^2}{(1-\rho)^2} = \frac{\rho(1+\rho)}{(1-\rho)^2}-\frac{\rho^2}{(1-\rho)^2} = \frac{\rho}{(1-\rho)^2}.$$
If $\rho = 0.74$, then $V(N) \approx 10.9$.
7.7. The non-stationary dierential-dierence equations for a queue with parameters and are (see
equation (7.1))
dp
0
(t)
dt
= p
1
(t) p
0
(t),
dp
n
(t)
dt
= p
n1
(t) +p
n+1
(t) ( +)p
n
(t),
where p
n
(t) is the probability that the queue has length n at time t. Let the probability generating function
of the distribution {p
n
(t)} be
G(s, t) =

n=0
p
n
(t)s
n
.
Show that G(s, t) satises the equation
s
G(s, t)
t
= (s 1)(s )G(s, t) +(s 1)p
0
(t).
Unlike the birth and death processes in Chapter 6, this equation contains the unknown probability p
0
(t)
which complicates its solution. Show that it can be eliminated to leave the following second-order partial
dierential equation for G(s, t):
s(s 1)

2
G(s, t)
ts
(s 1)
2
(s )
G(s, t)
s

G(s, t)
t
(s 1)
2
G(s, t) = 0.
This equation can be solved by Laplace transform methods.
Multiply the second equation in the question by s
n
, sum over all n from n = 1 and add the rst
equation to the sum resulting in

n=0
pn

(t) = p
1
(t) p
0
(t) +

n=1
p
n1
(t)s
n
+

n=1
p
n+1
(t)s
n
( )

n=1
p
n
(t)s
n
= p
1
(t) p
0
(t) +sG(s, t) +

s
[G(s, t) sp
1
(t) p
0
(t)] ( +)[G(s, t) p
0
(s, t)]
=
1
s
(s 1)(s )G(s, t) +

s
(s 1)p
0
(t).
Hence the dierential equation for G(s, t) is
s
G(s, t)
t
= (s 1)[(s )G(s, t) +p
0
(t)].
Write the dierential equation in the form
s
s 1
G(s, t)
t
= (s )G(s, t) +p
0
(t).
Dierentiate the equation with respect to s to eliminate the term p
0
(t), so that

s
s 1
G(s, t)
t

=

s
[(s )G(s, t)],
or

1
(s 1)
2
G(s, t)
t
+
s
s 1

2
G(s, t)
st
= G(s, t) + (s )
G(s, t)
s
.
The required result follows.
7.8. A call centre has r telephones manned at any time, and the trac density is /(r) = 0.86. Compute
how many telephones should be manned in order that the expected number of callers waiting at any time
should not exceed 4? Assume a stationary process with inter-arrival times of calls and service times for all
operators both exponential with parameters and respectively (see Section 7.4).
From (7.11) and (7.12), the expected length of the queue of callers, excluding those being served, is
Figure 7.1: Expected queue length E(N) versus number r of manned telephones.
E(N) =
p
0

r+1
(r 1)!(r )
2
, (i)
where
p
0
= 1

r1

n=0

n
n!
+

r
(r )(r 1)!

, =

. (ii)
Substitute for p
0
from (ii) into (i) and compute E(N) as a function of r with = 0.86r. A graph of E(N)
against r is shown in Figure 7.1 for r = 1, 2, . . . , 10. From the graph the point at r = 6 is (just) below the
line E(N) = 4. The answer is that 6 telephones should be manned.
7.9. Compare the expected lengths of the two queues M($\lambda$)/M($\mu$)/1 and M($\lambda$)/D($1/\mu$)/1 with $\rho = \lambda/\mu < 1$. The queues have parameters such that the mean service time for the former equals the fixed service time in the latter. For which queue would you expect the mean queue length to be the shorter?

From Section 7.3(b), the expected length of the M($\lambda$)/M($\mu$)/1 queue is, with $\rho = \lambda/\mu < 1$,
$$E_1(N) = \frac{\rho}{1-\rho}.$$
Since $\rho = \lambda/\mu = \lambda\tau$ ($\tau$ is the fixed service time), the expected length of the M($\lambda$)/D($1/\mu$)/1 queue is (see end of Section 7.5)
$$E_2(N) = \frac{\rho(1-\tfrac{1}{2}\rho)}{1-\rho}.$$
It follows that
$$E_2(N) = \frac{\rho}{1-\rho}-\frac{\tfrac{1}{2}\rho^2}{1-\rho} \le \frac{\rho}{1-\rho} = E_1(N).$$
In this case the queue with fixed service time has the shorter expected length.
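A short tabulation of the two expected lengths over a range of traffic densities (my sketch, values illustrative) makes the comparison concrete:

```python
for k in range(1, 10):
    rho = k / 10
    e1 = rho / (1 - rho)                     # M/M/1 expected length
    e2 = rho * (1 - 0.5 * rho) / (1 - rho)   # M/D/1 expected length
    print(f"rho={rho:.1f}  M/M/1={e1:.3f}  M/D/1={e2:.3f}")
```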
7.10. A queue is serviced by r servers, with the distribution of the inter-arrival times for the queue being
exponential with parameter and each server has a common exponential service time distribution with
parameter . If N is the random variable for the length of the queue including those being served, show
that its expected value is
E(N) = p
0

r1

n=1

n
(n 1)!
+

r
[r
2
+(1 r)]
(r 1)!(r )
2

,
where = / < r, and
p
0
= 1

r1

n=0

n
n!
+

r
(r )(r 1)!

.
(see equation (7.11)).
If r = 2, show that
E(N) =
4
4
2
.
For what interval of values of is the expected length of the queue less than the number of servers?
For the M()/M()/r queue, the probability that there n persons in the queue is
p
n
=


n
p
0
/n! n < r

n
p
0
/(r
nr
r!) n r
,
where p
0
is given in the question. The expected length of the queue including those being served is
E(N) =

n=1
np
n
= p
0

r1

n=1
n
n
n!
+

n=r
n
n
r
nr
r!

= p
0

r1

n=1

n
(n 1)!
+
r
r
r!

n=r
n

.
Consider the series
R =

n=r
n

n
.
Using the method for summing geometric series twice
R =

r
r(r
2
+ r)
(r )
2
.
Hence
E(N) = p
0

r1

n=1

n
(n 1)!
+

r
[r
2
+(1 r)]
(r 1)!(r )
2

,
as required.
If r = 2 (two servers) then
p
0
= 1

1 + +

2
(2 )

=
2
2 +
,
and
E(N) = p
0

+

2
(4 )
(2 )
2

=
4
4
2
.
The expected length of the queue is less than the number of servers if
4
4
2
< 2, or
2
+ 2 4 < 0.
The roots of
2
+ 2 4 = 0 are = 1

5. The required range for is 0 < <

5 1.
7.11. For a queue with two servers, the probability p
n
that there are n servers in the queue, including those
being served, is given by
p
0
=
2
2 +
, p
1
= p
0
, p
n
= 2

n
p
0
, (n 2),
where = / (see Section 7.4). If the random variable X is the number of people in the queue, nd its
probability generating function. Hence nd the mean length of the queue including those being served.
The probability generating function for this queue with two servers is, summing the geometric series,
G(s) = p
0
+p
0
s + 2p
0

n=2

n
s
n
=

2
2 +

1 +s +

2
s
2
2 s

2
2 +

2 +s
2 s

,
assuming that 0 s < 2.
For the mean we require
G

(s) =

2
2 +

4
(2 s)
2
.
Then the mean length of the queue
= G

(1) =
4
(4
2
)
,
which agrees with the result from Problem 7.10.
7.12. The queue M()/D()/1, which has a xed service time of duration for every customer, has the
probability generating function
G(s) =
(1 )(1 s)
1 se
(1s)
,
where = (0 < < 1) (see Section 7.5).
(a) Find the probabilities p
0
, p
1
, p
2
.
(b) Find the expected value and variance of the length of the queue.
(c) Customers are allowed a service time which is such that the expected length of the queue is two
individuals. Find the value of the trac density .
(a) The Taylor series for G(s) is
G(s) = (1 ) + (1 )(e

1)s + (1 )(e

1)e

s
2
+O(s
3
).
Hence
p
0
= 1 , p
1
= (1 )(e

1), p
2
= (1 )(e

1)e

.
(b) The rst two derivatives of G(s) are
G

(s) =
e
s
(1 +)

e
s
+e

1 +s s
2

(e
s
e

s)
2
,
G

(s) =
e
+s
(1 +)

e
s

2 + (2 4s) + (1 +s)s
2

+e

2 + 2s s
2

2
+s
3

(e
s
e

s)
3
.
It follows that the mean and variance are
E(N) = G

(1) =
(2 )
2(1 )
,
V(N) = G

(1) +G

(1) [G

(1)]
2
=

12(1 )
2
(12 18 + 10
2

3
).
(c) We require G

(1) = 2, or 2 2 =
1
2

2
. Hence
2
6 + 4 = 0. The roots of this quadratic
equation are = = 3

5: the density is = 3 +

5 since must be less than 1.


7.13. For the baulked queue which has a maximum length of m beyond which customers are turned away,
The probabilities that there are n individuals in the queue are given by
p
n
=

n
(1 )
1
m+1
, (0 < < 1), p
n
=
1
m+ 1
( = 1),
for n = 0, 1, 2, . . . , m. Show that the probability generating functions are
G(s) =
(1 )[1 (s)
m+1
]
(1
m+1
)(1 s)
, ( = 1),
and
G(s) =
1 s
m+1
(m+ 1)(1 s)
, ( = 1).
Find the expected value of the queue length including the person being served.
The probability generating function G(s) of the baulked queue is dened by, for = 1,
G(s) =
m

n=0
p
n
s
n
=

1
1
m+1

n=0

n
s
n
=
(1 )(1
m+1
s
m+1
)
(1
m+1
)(1 s)
,
using the formula for the sum of the geometric series. The rst derivative of G(s) is
G

(s) =
(1 ) (1 (s)
m
m(s)
m
(1 s))
(1 s)
2
(1
1+m
)
.
The expected length of the queue is
E(N) = G

(1) =

1

(1 +m)
1+m
1
1+m
If = 1, then
G(s) =

n=0
p
n
s
n
=
1
m+ 1
m

n=0
s
n
=
1 s
m+1
(m+ 1)(1 s)
.
Its rst derivative is
G

(s) =
1 (1 +m)s
m
+ms
1+m
(1 +m)(1 s)
2
.
The expected length of the queue is E(N) = G

(1) =
1
2
m.
7.14. In Section 7.3(ii), the expected length of the queue with parameters and , including the person
being served was shown to be /(1). What is the expected length of the queue excluding the person being
served?
The expected length of the queue excluding the person being served is given by
E(N) =

n=2
(n 1)p
n
=

n=2
(n 1)(
n

n+1
)
=

n=2
n
n

n=2

n=3
n
n
+ 2

n=3

n
= 2
2


2
1
+
2
3
1
=

2
1
7.15. An M($\lambda$)/M($\mu$)/1 queue is observed over a long period of time. Regular sampling indicates that the mean length of the queue including the person being served is 3, whilst the mean waiting time to completion of service by any customer arriving is 10 minutes. What is the mean service time?

From Section 7.3, the mean length of the queue is 3, and the mean time to completion of service is 10 minutes: hence
$$\frac{\rho}{1-\rho} = 3, \qquad \frac{\rho}{\lambda(1-\rho)} = 10.$$
Hence
$$\rho = \frac{3}{4} \quad \text{and} \quad \lambda = \frac{\rho}{10(1-\rho)} = \frac{3}{10}.$$
The mean service time is therefore $1/\mu = \rho/\lambda = 5/2$ minutes.
7.16. An M()/G/1 queue has a service time with a gamma distribution with parameters n and where
n is a positive integer. The density function of the gamma distribution is
f(t) =

n
t
n1
e
t
(n 1)!
, t > 0.
Show that, assuming = n/, the expected length of the queue including the person being served is given
by
[2n (n 1)]
2n(1 )]
.
The formula for the expected length of queue is given by (7.25):
E(X) =
2
2
+
2
V(S)
2(1 )
,
where = E(S), and S is the random variable of the service time. Since S has a gamma distribution
with parameters n and , then
E(S) =
n

, V(S) =
n

2
, = E(S) =
n

.
Hence
E(X) =
2n
2
n
2
+
2
n
2( n)
.
This result assumes < /n.
7.17. The service in a single-server queue must take at least 2 minutes and must not exceed 8 minutes, and
the probability that the service is completed at any time between these times is uniformly distributed over
the time interval. The average time between arrival is 4 minutes, and the arrival distribution is assumed
to be exponential. Calculate expected length of the queue.
The density of the uniform distribution is
f(t) =

1/(
2

1
)
1
t
2
0 elsewhere
The expected length of the queue is given by (7.25). In this formula, we require E(S) and V(S). They are
given by
E(S) =

tf(t)dt =

1
tdt

1
=
1
2
(
2
+
1
).
By (7.21), it follows that the trac density is = E(S) =
1
2
(
2

1
). The variance is given by
V(S) = E(S
2
) [E(S)]
2
=

1
t
2
dt

1
4
(
2

1
)
2
=
1
3
(
2
2
+
1

2
+
2
1
)
1
4
(
2
+
1
)
2
=
1
12
(
2

1
)
2
=

2
3
2
The formula is simpler if (7.25) is expressed as a function of the trac density . Thus, substituting
for V(S), the expected queue length is for < 1,
E(X) =
2
2
+
2
V(S)
2(1 )
=
2
2
+
1
3

2
2(1 )
=
(3 )
3(1 )
. (i)
In the particular case
2

1
= 4 minutes. Hence the expected time between arrivals is


0
te
t
dt =
1

.
Hence =
1
4
(minutes)
1
. Since
2

1
= 6 minutes, then =
1
2

1
4
6 =
3
4
. Finally, the expected length
in eqn (i) becomes
E(X) =
3
4
(1
3
4
)
3(1
3
4
)
=
9
4
.
7.18. A person arrives at an M()/M()/1 queue. If there are two people in the queue (excluding customers
being served) the customer goes away and does not return. If there are there are fewer than two queueing
then the customer joins the queue. Find the expected waiting time for the customer to the start of service.
This a is a baulked queue with baulking after 2 customers in the queue but not being served. The
method is similar to that in Section 7.3(c). Let T
i
, (i = 1, 2) be the random variable representing the time
for customer i to reach the service position. Let S
1
and S
2
be the random variables dened by
S
1
= T
1
, S
2
= T
1
+T
2
.
As explained in Section 7.3(c), the densities of S
1
and S
2
are gamma with parameters respectively , 1
and , 2, are
f
1
(t) = e
t
(t > 0), f
2
(t) =
2
te
t
, (t > 0). (i)
From Example 7.1, the probability that there are n persons in the queue is
p
n
=

n
(1 )
1
3
, (n = 0, 1, 2). (ii)
Then, using (i) and (ii)
P(S
1
+S
2
> T) = P(S
1
> t)p
1
+P(S
2
> t)p
2
=
(1 )
1
3


t
f
1
(s)ds +

2
(1 )
1
3


t
f
2
(s)ds
=
(1 )
1
3


t
e
s
ds +

2
se
s
ds

=
(1 )
1
3
e
t
[1 +t +].
The density associated with this probability is
g(t) =
d
dt

1
(1 )
1
3
e
t
[1 +t +]

=
(1 )
1
3
(1 +t)e
t
.
Finally, the expected waiting time for the customer to start service is
E(S
1
+S
2
) =


0
tg(t)dt =
(1 )
1
3


0
(t +t
2
)e
t
dt =
(1 )(1 + 2)
1
3
=
(1 + 2)
1 + +
2
7.19. A customer waits for service in a bank in which there are four counters with customers at each counter
but otherwise no one is queueing. If the service time distribution is negative exponential with parameter
for each counter, for how long should the queueing customer have to wait on average?
For each till, the density is f(t) = e
t
. The corresponding probability function is
F(t) =

t
0
f(s)ds =

t
0
e
s
ds = 1 e
t
.
Let T be random variable of the time to complete service at any till. Then the probability that T is greater
than time t is
P(T > t) = P(T > t for till 1)P(T > t for till 2)P(T > t for till 3)P(T > t for till 4)
= e
t
e
t
e
t
e
t
= e
4t
.
Hence the density of T is g(t) = 4e
4t
(t > 0), which is exponential with parameter 4. The mean time
for a till becoming available is
E(T) =


0
4te
4t
dt =
1
4
,
after integrating by parts.
7.20. A hospital has a waiting list for two operating theatres dedicated to a particular group of operations.
Assuming that the queue is stationary, and that the waiting list and operating time can be viewed as an
M()/M()/1 queue, show that the expected value of the random variable N representing the length of the
queue is given by
E(N) =

3
4
2
, =

< 2.
The waiting list is very long at 100 individuals. Why will be very close to 2? Put = 2 where
> 0 is small. Show that 0.02. A third operating theatre is brought into use with the same operating
parameter . What eect will this new theatre have on the waiting list eventually?
In Section 7.4, apply eqns (7.11) and (7.12) (in the book) with r = 2 giving for the expected length of
the queue of patients as
E(N) =
p
0

3
(2 )
2
,
where
p
0
= 1

1 + +

2
2

=
2
2 +
.
Elimination of p
0
between these equations leads to
E(N) =

3
4
2
, < 2.
For E(N) to be large, 4
2
is small. Since 0 < < 2, let = 2 with small. Then
E(N) =
2(1
1
2
)
(1
1
4
)
=
1

+O(1).
as 0. If E(N), then 2/100 = 0.02.
If r = 3 (which will require / < 3), then
E(N) =
p
0

4
2(3 )
2
,
where
p
0
= 1

1 + +
1
2

2
+

3
2(3 )

.
Therefore
E(N) =

4
(3 )(6 + 4 +
2
)
.
If = 2 , then the expected length is
E(N)
8
9
+O(),
which is a signicant reduction.
7.21. Consider the M()/M()/r queue which has r servers such that = / < r. Adapting the method
for the single-server queue (Section 7.3 (iii)), explain why the average service time for (nr+1) customers
to be served is (n r + 1)/(r) if n r. What is it if n < r? If n r, show that the average value of the
waiting time random variable T until service is
E(T) =

n=r
n r + 1
r
p
n
.
What is the average waiting time if service is included?
If n r 1, then the arriving customer has immediate service. If n r, the customer must wait until
nr +1 preceding customers have been served, and they take time (nr +1)/(r) since the mean service
time is 1/(r) for r servers. The expected value of the service time T until service is
E(T) =

n=r
n r + 1
r
p
n
,
where (see Section 7.4)
p
n
=

n
p
0
r
nr
r!
, p
0
= 1

r1

n=0

n
n!
+

r
(r )(r 1)!

.
Then
E(T) =

r
p
0
rr!

n=r
(n r + 1)

nr
=

r
p
0
rr!

1 + 2

+ 3

2
+

=

r
p
0
(r 1)!(r )
2
The mean time represented by random variable T

until service is completed is


E(T

) = E(T) +
1

.
7.22. Consider the queue M()/M()/r queue. Assuming that < r, what is the probability in the long
term that at any instant there is no one queueing excluding those being served?
The probability that no one is queueing is
q
r
= 1

n=r
p
n
,
where (see Section 7.4) using the formula for the sum of a geometric series,

n=r
p
n
=
p
0
r
r
r!

n=r

n
=
p
0

r
(r 1)!(r )
,
and
p
0
= 1

r1

n=0

n
n!
+

r
(r )(r 1)!

.
7.23. Access to a toll road is controlled by a plaza of r toll booths. Vehicles approaching the toll booths
choose one at random: any toll booth is equally likely to be the one chosen irrespective of the number of
cars queueing (perhaps an unrealistic situation). The payment time is assumed to be negative exponential
with parameter , and vehicles are assumed to approach as Poisson with parameter . Show that, viewed
as a stationary process, the queue of vehicles at any toll booth is an M(/r)/M()/1 queue assuming
/(r) < 1. Find the expected number of vehicles queueing at any toll booth. How many cars would you
expect to be queueing over all booths?
One booth is out of action, and vehicles distribute themselves randomly over the remaining booths.
Assuming that /[(r 1)] < 1, how many extra vehicles can be expected to be queueing overall?
It is assumed to be an M(/r/M()/1) queue. For a random number N
n
of cars at toll booth n
(n = 1, 2, . . . r) is
E(N
n
) =

r
.
For all booths, the expected number of cars is
E(N
1
+N
2
+ N
r
) =
r
r
,
including cars at any booth.
If one booth is out of action (say booth r) then the expected number of cars queueing is
E(N
1
+N
2
+ N
r1
) =
(r 1)
(r 1)
,
Hence the expected extra length of the queue is
(r 1)
(r 1)

r
r
=

2
[(rt 1) ](r )
.
7.24. In an M($\lambda$)/M($\mu$)/1 queue, it is decided that the service parameter $\mu$ should be adjusted to make the mean length of the busy period 10 times the slack period to allow the server some respite. What should $\mu$ be in terms of $\lambda$?

From Section 7.3(d), in the stationary process, the expected length of the slack period is $1/\lambda$ and that of the busy period is $1/(\mu-\lambda)$. Therefore the busy period is 10 times the slack period if
$$\frac{1}{\mu-\lambda} = \frac{10}{\lambda}, \quad \text{or} \quad \mu = \frac{11\lambda}{10}.$$
7.25. In the baulked queue (see Example 7.1) not more than m 2 people (including the person being
served) are allowed to form a queue, the arrivals having a Poisson distribution with parameter . If there
are m individuals in the queue, then any further arrivals are turned away. It is assumed that the service
distribution is exponential with rate . If = / = 1, show that the expected length of the busy periods is
given by (1
m
)/( ).
For the baulked queue with a limit of m customers (including the person being served) the probability
that the queue is of length n is given by
p
n
=

n
(1 )
1
n+1
, (n = 0, 1, 2, . . . , m).
In the notation of Section 7.3(d) for slack and busy periods,
lim
j

1
j
j

i=1
s
i

=
1

,
and
lim
j

1
j
j

i=1
b
i

=
1 p
0
p
0
lim
j

1
j
j

i=1
s
i

=
1 p
0
p
0
=
1
m

.
7.26. The M()/D()/1 queue has a xed service time , and from Section 7.5, its probability generating
function is
G(s) =
(1 )(1 s)
1 se
(1s)
.
Show that the expected length of its busy periods is /(1 ).
The average length of the slack periods is, in the notation of Section 7.3(d),
lim
j

1
j
j

i=1
s
i

=
1

,
since it depends on the arrival distribution. Also the average length of the busy periods is
lim
j

1
j
j

i=1
b
i

=
1 p
0
p
0
lim
j

1
j
j

i=1
s
i

=
1 p
0
p
0

.
From the given generating function p
0
= G(0) = 1 , so that the average length of busy periods is
lim
j

1
j
j

i=1
b
i

=

1
.
7.27. A certain process has the (r+1) states E
n
, (n = 0, 1, 2, . . . r). The transition rates between state n and
state n+1 is
n
= (r n), (n = 0, 1, 2, . . . , r 1), and between n and n1 is
n
= n, (n = 1, 2, . . . , r).
These are the only possible transitions at each step in the process. [This could be interpreted as a capped
birth and death process in which the population size cannot exceed r.]
Find the dierential-dierence equation for the probabilities p
n
(t), (n = 0, 1, 2, . . . , r), that the process is
in state n at time t. Consider the corresponding staionary process in which dp
n
/dt 0 and p
n
(t) p
n
as t . Show that
p
n
=

r
n


r
( +)
r
, (n = 0, 1, 2, . . . , r).
In (6.25), let

n
= (r n), (n = 0, 1, 2, . . . , r 1),
n
= n, (n = 1, 2, . . . , r),
so that the nite system of dierential-dierence equations are
p

0
(t) = rp
0
(t) +p
1
(t),
p

n
(t) = (r n + 1)p
n1
(t) [(r n) +n]p
n
(t) + (n + 1)p
n+1
(t), (n = 1, 2, . . . , r 1),
p

r
(t) = p
r1
(t) rp
r
(t).
The time-independent stationary process satises the dierence equations
rp
0
+p
1
= 0,
(r n + 1)p
n1
[(r n) +n]p
n
+ (n + 1)p
n+1
= 0, (n = 1, 2, . . . , r 1),
p
r1
rp
r
= 0.
Let u
n
= (r n)p
n
(n + 1)p
n+1
. The the dierence equations become
u
0
= 0, u
n1
u
n
= 0, (n = 1, 2, . . . , r 1), u
r1
= 0.
We conclude that u
n
= 0 for (n = 0, 1, 2, . . . , r 1). Hence
p
n+1
=

r n
n + 1

p
n
.
Repeated application of this formula gives
p
n
=

n
r(r + 1) (r n + 1)
n!
p
0
=

r
n

p
0
, (n = 0, 1, 2, . . . , r).
The probability p
0
is dened by
1 =
r

n=0
p
n
=
r

n=0

r
n

p
0
= p
0

1 +

r
.
Hence
p
0
=

r
( +)
r
,
so that, nally
p
n
=

r
n


r
( +)
r
.
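A quick numerical check of this stationary distribution (not part of the original solution) can be made in Python: for sample values of r, λ and μ, the probabilities above satisfy the stationary difference equations to rounding error. The values r = 5, λ = 1.3 and μ = 0.7 below are arbitrary illustrative choices.

from math import comb

r, lam, mu = 5, 1.3, 0.7   # arbitrary illustrative values
p = [comb(r, n) * lam**n * mu**(r - n) / (lam + mu)**r for n in range(r + 1)]

assert abs(sum(p) - 1) < 1e-12      # the probabilities sum to 1

# residuals of the stationary equations; all should be essentially zero
residuals = [-r * lam * p[0] + mu * p[1]]
for n in range(1, r):
    residuals.append((r - n + 1) * lam * p[n - 1]
                     - ((r - n) * lam + n * mu) * p[n]
                     + (n + 1) * mu * p[n + 1])
residuals.append(lam * p[r - 1] - r * mu * p[r])
print(max(abs(x) for x in residuals))   # essentially zero (rounding error only)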
Chapter 8
Reliability and renewal
8.1. The lifetime of a component has a uniform density function given by

f(t) = 1/(t_1 − t_0) for 0 < t_0 < t < t_1, and f(t) = 0 otherwise.

For all t > 0, obtain the reliability function R(t) and the failure rate function r(t) for the component. Obtain the expected life of the component.

For the given density, the distribution function is

F(t) = 0 for 0 ≤ t ≤ t_0; F(t) = (t − t_0)/(t_1 − t_0) for t_0 < t < t_1; F(t) = 1 for t ≥ t_1.

Therefore the reliability function R(t) is

R(t) = 1 − F(t) = 1 for 0 ≤ t ≤ t_0; (t_1 − t)/(t_1 − t_0) for t_0 < t < t_1; 0 for t ≥ t_1,

and the failure rate function r(t) is

r(t) = f(t)/R(t) = 0 for 0 ≤ t ≤ t_0; 1/(t_1 − t) for t_0 < t < t_1; it does not exist for t ≥ t_1.

Failure of the component will not occur for 0 ≤ t ≤ t_0 since R(t) = 1 in this interval. The component will not survive beyond t = t_1.
If T is a random variable of the lifetime of the component, then the expected lifetime is given by

E(T) = ∫_0^∞ t f(t) dt = ∫_{t_0}^{t_1} t dt/(t_1 − t_0) = (t_1² − t_0²)/[2(t_1 − t_0)] = ½(t_0 + t_1).
8.2. Find the reliability function R(t) and the failure rate function r(t) for the gamma density

f(t) = α² t e^{−αt}, t > 0.

How does r(t) behave for large t? Find the mean and variance of the time to failure.

For the given gamma density,

F(t) = α² ∫_0^t s e^{−αs} ds = 1 − e^{−αt}(1 + αt).

Hence the reliability function is

R(t) = 1 − F(t) = (1 + αt)e^{−αt}.

The failure rate function is

r(t) = f(t)/R(t) = α²t/(1 + αt).

For fixed α and large t,

r(t) = α[1 + 1/(αt)]^{−1} = α + O(t^{−1}),

as t → ∞.
If T is a random variable of the time to failure, then its expected value is

E(T) = ∫_0^∞ s f(s) ds = α² ∫_0^∞ s² e^{−αs} ds = 2/α.

Also the variance is

V(T) = ∫_0^∞ s² f(s) ds − [E(T)]² = α² ∫_0^∞ s³ e^{−αs} ds − 4/α² = 6/α² − 4/α² = 2/α².

Both of these agree with the standard results for the gamma mean and variance.
8.3. A failure rate function is given by

r(t) = t/(1 + t²), t ≥ 0.

The rate of failures peaks at t = 1 and then declines towards zero as t → ∞: failure becomes less likely with time (see Figure 8.1). Find the reliability function, and the corresponding probability density.

Figure 8.1: Failure rate distribution r(t) with a = 1 and c = 1.

In terms of r(t) (see eqn (8.5)), the reliability function is given by

R(t) = exp[−∫_0^t r(s) ds] = exp[−∫_0^t s ds/(1 + s²)] = exp[−½ ln(1 + t²)] = 1/√(1 + t²),

for t ≥ 0. Hence the probability distribution function is

F(t) = 1 − R(t) = 1 − 1/√(1 + t²), (t ≥ 0).

Finally, the density is given by

f(t) = F'(t) = t/(1 + t²)^{3/2}.
8.4. A piece of office equipment has a piecewise failure rate function given by

r(t) = 2β_1 t for 0 < t ≤ t_0; r(t) = 2(β_1 − β_2)t_0 + 2β_2 t for t > t_0; with β_1, β_2 > 0.

Find its reliability function.

The reliability function is given by

R(t) = exp[−∫_0^t r(s) ds],

where, for 0 < t ≤ t_0,

∫_0^t r(s) ds = 2β_1 ∫_0^t s ds = β_1 t²,

and, for t > t_0,

∫_0^t r(s) ds = ∫_0^{t_0} r(s) ds + ∫_{t_0}^{t} r(s) ds
= β_1 t_0² + ∫_{t_0}^{t} [2(β_1 − β_2)t_0 + 2β_2 s] ds
= β_1 t_0² + 2(β_1 − β_2)t_0(t − t_0) + β_2(t² − t_0²)
= t_0(β_1 − β_2)(2t − t_0) + β_2 t².

Hence the reliability function is

R(t) = e^{−β_1 t²} for 0 < t ≤ t_0; R(t) = e^{−[t_0(β_1 − β_2)(2t − t_0) + β_2 t²]} for t > t_0.
8.5. A laser printer is observed to have a failure rate function r(t) = 2αt (t > 0) per hour whilst in use, where α = 0.00021 (hours)^{−2}: r(t) is a measure of the probability of the printer failing in any hour given that it was operational at the beginning of the hour. What is the probability that the printer is working after 40 hours of use? Find the probability density function for the time to failure. What is the expected time before the printer will need maintenance?

Since r(t) = 2αt, the reliability function is

R(t) = exp[−∫_0^t 2αs ds] = e^{−αt²}.

Hence R(40) = e^{−0.00021×40×40} = 0.715, so the probability that the printer is working after 40 hours is 0.715. The probability of failure by time t is F(t) = 1 − R(t) = 1 − e^{−αt²}, so that the density of the time to failure is f(t) = F'(t) = 2αt e^{−αt²}.
By (8.8), the expected time to failure T is

E(T) = ∫_0^∞ R(t) dt = ∫_0^∞ e^{−αt²} dt = ½√(π/α) = 61.2 hours

(see the Appendix for the value of the integral).
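The quoted numerical values can be checked directly; the following short Python calculation (an illustrative check only, not part of the original solution) reproduces R(40) ≈ 0.715 and E(T) ≈ 61.2 hours.

import math

alpha = 0.00021                      # failure rate parameter, (hours)^-2
R40 = math.exp(-alpha * 40**2)       # reliability at 40 hours
ET = 0.5 * math.sqrt(math.pi / alpha)  # expected time to failure
print(round(R40, 3), round(ET, 1))   # 0.715 61.2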
8.6. The time to failure is assumed to be gamma with parameters α and n, that is,

f(t) = α(αt)^{n−1} e^{−αt}/(n − 1)!, t > 0.

Show that the reliability function is given by

R(t) = e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!.

Find the failure rate function and show that lim_{t→∞} r(t) = α. What is the expected time to failure?

The gamma distribution function is

F(t; α, n) = ∫_0^t α(αs)^{n−1} e^{−αs} ds/(n − 1)! = [α^n/(n − 1)!] ∫_0^t s^{n−1} e^{−αs} ds
= −[α^{n−1} t^{n−1}/(n − 1)!] e^{−αt} + F(t; α, n − 1)
= −e^{−αt}[α^{n−1} t^{n−1}/(n − 1)! + α^{n−2} t^{n−2}/(n − 2)! + · · · + αt/1!] + α ∫_0^t e^{−αs} ds
= 1 − e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!,

after repeated integration by parts. The reliability function is therefore

R(t) = 1 − F(t; α, n) = e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!.

The failure rate function r(t) is defined by

r(t) = f(t)/R(t) = [α(αt)^{n−1} e^{−αt}/(n − 1)!] / [e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!] = α^n t^{n−1}/[(n − 1)! Σ_{r=0}^{n−1} α^r t^r/r!].

For the limit, express r(t) in the form

r(t) = α^n/[(n − 1)! Σ_{r=0}^{n−1} α^r t^{r−n+1}/r!] → α^n/[(n − 1)! α^{n−1}/(n − 1)!] = α

as t → ∞, since every term in the sum with r < n − 1 tends to zero.
The expected time to failure is

E(T) = ∫_0^∞ α^n t^n e^{−αt} dt/(n − 1)! = [α^n/(n − 1)!] · n!/α^{n+1} = n/α.
8.7. An electrical generator has an exponentially distributed failure time with parameter λ_f, and the subsequent repair time is exponentially distributed with parameter λ_r. The generator is started up at time t = 0. What is the mean time for the generator to fail, and the mean time from t = 0 for it to be operational again?

As in Section 8.4, the mean time to failure is 1/λ_f. The mean repair time is 1/λ_r, so that the mean time to the restart is

1/λ_f + 1/λ_r.
8.8. A hospital takes a grid supply of electricity which has a constant failure rate λ. This supply is backed up by a stand-by generator which has a gamma distributed failure time with parameters (2, μ). Find the reliability function R(t) for the whole electricity supply. Assuming that time is measured in hours, what should the relation between the parameters λ and μ be in order that R(1000) = 0.999?

For the grid supply, the reliability function is R_g(t) = e^{−λt}. For the stand-by supply, the reliability function is (see Problem 8.6)

R_s(t) = e^{−μt} Σ_{r=0}^{1} μ^r t^r/r! = e^{−μt}(1 + μt).

The reliability function for the system is

R(t) = 1 − [1 − R_g(t)][1 − R_s(t)] = 1 − [1 − e^{−λt}][1 − (1 + μt)e^{−μt}]
= e^{−λt} − (1 + μt)e^{−(λ+μ)t} + (1 + μt)e^{−μt}.

Let T = 1000 hours. Then, solving the equation above for λ at time t = T, we have

λ = −(1/T) ln{[R(T) − (1 + μT)e^{−μT}]/[1 − (1 + μT)e^{−μT}]},

where R(T) = R(1000) = 0.999.
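As a hedged numerical illustration (the problem leaves μ free, so the value μ = 0.0001 (hours)^{−1} below is purely an example), the relation for λ can be evaluated in Python:

import math

T, RT = 1000.0, 0.999
mu = 1.0e-4                                   # illustrative value only
standby = (1 + mu * T) * math.exp(-mu * T)    # R_s(T) for the stand-by generator
lam = -math.log((RT - standby) / (1 - standby)) / T
print(lam)                                    # the grid failure rate consistent with R(1000) = 0.999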
8.9. The components in a renewal process with instant renewal are identical with constant failure rate λ = (1/50) (hours)^{−1}. If the system has one spare component which can take over when the first fails, find the probability that the system is operational for at least 24 hours. How many spares should be carried to ensure that continuous operation for 24 hours occurs with probability 0.98?

Let T_1 and T_2 be respectively the random variables of the times to failure of the two components, and let S_2 = T_1 + T_2 be the time to failure of the system. If τ = 24 hours is the operational time to be considered, then, as in Example 8.5,

P(S_2 < τ) = 1 − (1 + λτ)e^{−λτ}.

Hence

P(S_2 < 24) = 1 − (1 + 24/50)e^{−24/50} = 1 − 0.916 = 0.084,

so the required probability that the system is operational for at least 24 hours is 0.916.
The second part is the reverse problem: given the probability, we have to compute n. If S_n is the time to failure of a system with n components, then

P(S_n < τ) = F_n(τ) = ∫_0^τ f_n(s) ds = [λ^n/(n − 1)!] ∫_0^τ s^{n−1} e^{−λs} ds
= 1 − e^{−λτ}[1 + λτ + (λτ)²/2! + · · · + (λτ)^{n−1}/(n − 1)!].

The smallest value of n is required which makes 1 − F_n(24) > 0.98. Computation gives

1 − F_2(24) = 0.916, 1 − F_3(24) = 0.987 > 0.98.

Three components are required.
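The probabilities 0.916 and 0.987 can be confirmed numerically; the snippet below (a check only, not part of the original solution) evaluates 1 − F_n(24) = e^{−λτ} Σ_{k=0}^{n−1} (λτ)^k/k!.

import math

lam, tau = 1 / 50, 24

def survival(n):
    # P(S_n >= tau) for the sum of n exponential(lam) lifetimes
    x = lam * tau
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))

for n in (2, 3):
    print(n, round(survival(n), 3))   # 2 0.916, 3 0.987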
8.10. A device contains two components c_1 and c_2 with independent failure times T_1 and T_2 from time t = 0. If the densities of the times to failure are f_1 and f_2 with probability distributions F_1 and F_2, show that the probability that c_1 fails before c_2 is given by

P{T_1 < T_2} = ∫_{y=0}^{∞} ∫_{x=0}^{y} f_1(x) f_2(y) dx dy = ∫_{y=0}^{∞} F_1(y) f_2(y) dy.

Find the probability P{T_1 < T_2} in the cases:
(a) both failure times are exponentially distributed with parameters λ_1 and λ_2;
(b) both failure times have gamma distributions with parameters (2, λ_1) and (2, λ_2).

The probability that c_1 fails before c_2 is

P(T_1 < T_2) = ∫∫_A f_1(x) f_2(y) dx dy,

where the region A is shown in Figure 8.2.

Figure 8.2: Region A in Problem 8.10.

As a repeated integral the double integral can be expressed as

P(T_1 < T_2) = ∫_{y=0}^{∞} ∫_{x=0}^{y} f_1(x) f_2(y) dx dy = ∫_0^∞ [F_1(y) − F_1(0)] f_2(y) dy = ∫_0^∞ F_1(y) f_2(y) dy,

since F_1(0) = 0.
(a) For exponentially distributed failure times

f_2(y) = λ_2 e^{−λ_2 y}, F_1(y) = 1 − e^{−λ_1 y}.

Therefore

P(T_1 < T_2) = ∫_0^∞ (1 − e^{−λ_1 y}) λ_2 e^{−λ_2 y} dy
= [−e^{−λ_2 y} + (λ_2/(λ_1 + λ_2)) e^{−(λ_1+λ_2)y}]_0^∞
= 1 − λ_2/(λ_1 + λ_2) = λ_1/(λ_1 + λ_2).

(b) For gamma distributions with parameters (2, λ_1) and (2, λ_2),

f_2(y) = λ_2² y e^{−λ_2 y}, F_1(y) = 1 − (1 + λ_1 y)e^{−λ_1 y}.

Hence

P(T_1 < T_2) = ∫_0^∞ (1 − e^{−λ_1 y} − λ_1 y e^{−λ_1 y}) λ_2² y e^{−λ_2 y} dy = λ_1²(λ_1 + 3λ_2)/(λ_1 + λ_2)³.
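The closed form in (b) can be checked by simulation; the sketch below is an independent check (not part of the original solution) with arbitrarily chosen parameters λ_1 = 1.0 and λ_2 = 2.5, comparing a Monte Carlo estimate of P(T_1 < T_2) with λ_1²(λ_1 + 3λ_2)/(λ_1 + λ_2)³.

import random

lam1, lam2, N = 1.0, 2.5, 200_000
random.seed(1)

def gamma2(lam):
    # the sum of two independent exponential(lam) variables is gamma(2, lam)
    return random.expovariate(lam) + random.expovariate(lam)

hits = sum(gamma2(lam1) < gamma2(lam2) for _ in range(N))
exact = lam1**2 * (lam1 + 3 * lam2) / (lam1 + lam2)**3
print(hits / N, exact)   # the two values should agree to about two decimal places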
8.11. Let T be a random variable for the failure time of a component. Suppose that the distribution function of T is

F(t) = P(T ≤ t), t ≥ 0,

with density

f(t) = α_1 e^{−λ_1 t} + α_2 e^{−λ_2 t}, α_1, α_2 > 0, λ_1, λ_2 > 0,

where the parameters satisfy

α_1/λ_1 + α_2/λ_2 = 1.

Find the reliability function R(t) and the failure rate function r(t) for this double exponential distribution. How does r(t) behave as t → ∞?

The probability distribution function is

F(t) = ∫_0^t f(s) ds = ∫_0^t (α_1 e^{−λ_1 s} + α_2 e^{−λ_2 s}) ds
= [−(α_1/λ_1)e^{−λ_1 s} − (α_2/λ_2)e^{−λ_2 s}]_0^t
= −(α_1/λ_1)e^{−λ_1 t} − (α_2/λ_2)e^{−λ_2 t} + α_1/λ_1 + α_2/λ_2
= 1 − (α_1/λ_1)e^{−λ_1 t} − (α_2/λ_2)e^{−λ_2 t}.

The reliability function is therefore

R(t) = 1 − F(t) = (α_1/λ_1)e^{−λ_1 t} + (α_2/λ_2)e^{−λ_2 t}.

The failure rate function is

r(t) = f(t)/R(t) = [α_1 e^{−λ_1 t} + α_2 e^{−λ_2 t}]/[(α_1/λ_1)e^{−λ_1 t} + (α_2/λ_2)e^{−λ_2 t}].

As t → ∞,

r(t) → λ_1 if λ_2 > λ_1; r(t) → λ_2 if λ_2 < λ_1; r(t) = λ if λ_1 = λ_2 = λ.
8.12. The lifetimes of components in a renewal process with instant renewal are identically distributed with constant failure rate λ. Find the probability that at least three components have been replaced by time t.

In the notation of Section 8.7,

P(S_3 < t) = F_3(t) = ∫_0^t F_2(t − y) f(y) dy,

where

F_2(t) = 1 − (1 + λt)e^{−λt}, f(t) = λ e^{−λt}.

Then

P(S_3 ≤ t) = ∫_0^t [1 − (1 + λ(t − y))e^{−λ(t−y)}] λ e^{−λy} dy = 1 − (1 + λt + ½λ²t²)e^{−λt}.
8.13. The lifetimes of components in a renewal process with instant renewal are identically distributed, each lifetime having the uniform density

f(t) = 1/k for 0 < t < k, and f(t) = 0 elsewhere.

Find the probability that at least two components have been replaced at time t.

For the uniform density,

F_1(t) = 0 for t < 0; F_1(t) = t/k for 0 < t < k; F_1(t) = 1 for t > k.

As in Example 8.5,

F_2(t) = ∫_0^t F_1(t − y) f(y) dy.

Interval 0 < t < k:

F_2(t) = (1/k) ∫_0^t F_1(t − y) dy = (1/k²) ∫_0^t (t − y) dy = t²/(2k²).

Interval k < t < 2k: here f(y) = 1/k only for 0 < y < k, and F_1(t − y) = 1 for 0 < y < t − k, so

F_2(t) = (1/k) ∫_0^k F_1(t − y) dy = (1/k)[(t − k) + (1/k) ∫_{t−k}^{k} (t − y) dy]
= (t − k)/k + (1/k²)[½k² − ½(t − k)²]
= 1 − (2k − t)²/(2k²).

For t ≥ 2k, F_2(t) = 1. To summarize,

F_2(t) = t²/(2k²) for 0 < t < k; F_2(t) = 1 − (2k − t)²/(2k²) for k < t < 2k; F_2(t) = 1 for t ≥ 2k,

which is the probability that at least two components have failed by time t.
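Since T_1 + T_2 here has the triangular distribution on (0, 2k), the piecewise formula can be checked by simulation. The sketch below is an independent check (not part of the original solution), with k = 1 chosen arbitrarily; it compares empirical values of P(T_1 + T_2 ≤ t) with F_2(t).

import random

k, N = 1.0, 200_000
random.seed(2)
samples = [random.uniform(0, k) + random.uniform(0, k) for _ in range(N)]

def F2(t):
    if t <= 0:
        return 0.0
    if t < k:
        return t * t / (2 * k * k)
    if t < 2 * k:
        return 1 - (2 * k - t)**2 / (2 * k * k)
    return 1.0

for t in (0.5, 1.0, 1.5, 1.9):
    empirical = sum(s <= t for s in samples) / N
    print(t, round(empirical, 3), round(F2(t), 3))   # empirical and exact values agree closely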
8.14. The lifetimes of components in a renewal process with instant renewal are identically distributed, each with reliability function

R(t) = ½(e^{−λt} + e^{−2λt}), t ≥ 0, λ > 0.

Find the probability that at least two components have been replaced by time t.

Given the reliability function, it follows that the distribution function and its density are given by

F(t) = 1 − R(t) = 1 − ½e^{−λt} − ½e^{−2λt},
f(t) = F'(t) = ½λ[e^{−λt} + 2e^{−2λt}].

In the notation of Example 8.5,

P(S_2 < t) = F_2(t) = ∫_0^t F(t − s) f(s) ds
= ½λ ∫_0^t [1 − ½e^{−λ(t−s)} − ½e^{−2λ(t−s)}][e^{−λs} + 2e^{−2λs}] ds
= ¼[4 + (1 − 2λt)e^{−2λt} − (5 + λt)e^{−λt}],

which is the probability that at least two components have failed by time t.
8.15. The random variable T is the time to failure from t = 0 of a system. The distribution function for T is F(t), (t > 0). Suppose that the system is still functioning at time t = t_0. Let T_{t_0} be the conditional time to failure from this time, and let F_{t_0}(t) be its distribution function. Show that

F_{t_0}(t) = [F(t + t_0) − F(t_0)]/[1 − F(t_0)], (t ≥ 0, t_0 ≥ 0),

and that the mean of T_{t_0} is

E(T_{t_0}) = [1/(1 − F(t_0))] ∫_{t_0}^{∞} [1 − F(u)] du.

The distribution function for the conditional time to failure is

F_{t_0}(t) = P(T_{t_0} ≤ t) = P(T ≤ t_0 + t | T > t_0)
= P(T ≤ t_0 + t ∩ T > t_0)/P(T > t_0) (by eqn (1.2))
= P(t_0 < T ≤ t + t_0)/P(T > t_0)
= [F(t + t_0) − F(t_0)]/[1 − F(t_0)],

as required.
For the mean,

E(T_{t_0}) = ∫_0^∞ [1 − F_{t_0}(t)] dt = ∫_0^∞ {1 − [F(t + t_0) − F(t_0)]/[1 − F(t_0)]} dt
= [1/(1 − F(t_0))] ∫_0^∞ [1 − F(t + t_0)] dt
= [1/(1 − F(t_0))] ∫_{t_0}^{∞} [1 − F(u)] du, (where u = t + t_0).
8.16. Suppose that the random variable T of the time to failure of a system has a uniform distribution for t > 0 given by

F(t) = t/t_1 for 0 ≤ t ≤ t_1; F(t) = 1 for t > t_1.

Using the result from Problem 8.15, find the conditional probability function, assuming that the system is still working at time t = t_0.

There are two cases to consider: t_0 ≤ t_1 and t_0 > t_1.
Case t_0 ≤ t_1. In the formula in Problem 8.15,

F(t + t_0) = (t + t_0)/t_1 for 0 ≤ t + t_0 ≤ t_1; F(t + t_0) = 1 for t + t_0 > t_1.

Hence

F_{t_0}(t) = [(t + t_0)/t_1 − t_0/t_1]/[1 − t_0/t_1] = t/(t_1 − t_0), for 0 ≤ t ≤ t_1 − t_0,

and

F_{t_0}(t) = [1 − t_0/t_1]/[1 − t_0/t_1] = 1, for t > t_1 − t_0.

Case t_0 > t_1. F_{t_0}(t) = P(T ≤ t_0 + t | T > t_0) = 1.
Chapter 9
Branching and other random processes
9.1. In a branching process the probability that any individual has j descendants is given by

p_0 = 0, p_j = 1/2^j, (j ≥ 1).

Show that the probability generating function of the first generation is

G(s) = s/(2 − s).

Find the further generating functions G_2(s), G_3(s) and G_4(s). Show by induction that

G_n(s) = s/[2^n − (2^n − 1)s].

Find p_{n,j}, the probability that the population size of the n-th generation is j given that the process starts with one individual. What is the mean population size of the n-th generation?

The generating function is given by

G(s) = Σ_{j=0}^{∞} p_j s^j = Σ_{j=1}^{∞} (s/2)^j = s/2 + (s/2)² + · · · = s/(2 − s),

using the geometric series formula for the sum.
For the second generation G_2(s) = G(G(s)), so that

G_2(s) = [s/(2 − s)]/[2 − s/(2 − s)] = s/[2(2 − s) − s] = s/(4 − 3s).

Repeating this procedure,

G_3(s) = G(G(G(s))) = G_2(G(s)) = s/[4(2 − s) − 3s] = s/(8 − 7s),
G_4(s) = G_3(G(s)) = s/[8(2 − s) − 7s] = s/(16 − 15s).

Consider the formula

G_n(s) = s/[2^n − (2^n − 1)s].

Then

G_{n+1}(s) = G_n(G(s)) = s/[2^n(2 − s) − (2^n − 1)s] = s/[2^{n+1} − (2^{n+1} − 1)s].

Hence if the formula is correct for G_n(s) then it is true for G_{n+1}(s). The result has been verified for G_2(s) and G_3(s), so it is true for all n by induction on the integers.
Using the binomial expansion,

G_n(s) = s/[2^n − (2^n − 1)s] = (s/2^n)[1 − ((2^n − 1)/2^n)s]^{−1} = Σ_{j=1}^{∞} (1/2^n)[(2^n − 1)/2^n]^{j−1} s^j.

Hence the probability that the population size of the n-th generation is j is given by the coefficient of s^j in this series, namely

p_{n,j} = (1/2^n)[(2^n − 1)/2^n]^{j−1}, (j ≥ 1).

Since G(s) = s/(2 − s), then G'(s) = 2/(2 − s)², so that the mean of the first generation is μ = G'(1) = 2. Using result (9.7) in the text, the mean size of the n-th generation is

μ_n = G'_n(1) = μ^n = 2^n.
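The closed form for G_n(s) can also be verified by repeated composition. The short symbolic check below (using sympy, an illustration only and not part of the original solution) confirms the formula for the first few n.

import sympy as sp

s = sp.symbols('s')
G = s / (2 - s)

Gn = G                                # G_1(s)
for n in range(1, 6):
    closed = s / (2**n - (2**n - 1) * s)
    assert sp.cancel(Gn - closed) == 0
    Gn = Gn.subs(s, G)                # compose once more: G_{n+1}(s) = G_n(G(s))
print("formula confirmed for n = 1,...,5")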
9.2. Suppose in a branching process that any individual has a probability given by the modified geometric distribution

p_j = (1 − p)p^j, (j = 0, 1, 2, . . .),

of producing j descendants in the next generation, where p (0 < p < 1) is a constant. Find the probability generating functions of the second and third generations. What is the mean size of any generation?

The probability generating function is

G(s) = Σ_{j=0}^{∞} p_j s^j = Σ_{j=0}^{∞} (1 − p)p^j s^j = (1 − p)/(1 − ps),

using the formula for the sum of the geometric series.
In the second generation

G_2(s) = G(G(s)) = (1 − p)/[1 − pG(s)] = (1 − p)/[1 − p(1 − p)/(1 − ps)] = (1 − p)(1 − ps)/[(1 − p + p²) − ps],

and for the third generation

G_3(s) = G_2(G(s)) = (1 − p)[1 − {p(1 − p)/(1 − ps)}]/[(1 − p + p²) − {p(1 − p)/(1 − ps)}]
= (1 − p)(1 − p + p² − ps)/[(1 − 2p + 2p²) − p(1 − p + p²)s].

The mean size of the first generation is

μ = G'(1) = [(1 − p)p/(1 − ps)²]_{s=1} = p/(1 − p).

From (9.7) in the book, it follows that μ_2 = μ², μ_3 = μ³, and, in general, that μ_n = μ^n for the n-th generation.
9.3. A branching process has the probability generating function

G(s) = a + bs + (1 − a − b)s²

for the descendants of any individual, where a and b satisfy the inequalities

0 < a < 1, b > 0, a + b < 1.

Given that the process starts with one individual, discuss the nature of the descendant generations. What is the maximum possible population of the n-th generation? Show that extinction in the population is certain if 2a + b ≥ 1.

Each individual produces 0, 1, 2 descendants with probabilities a, b, 1 − a − b respectively. If X_n represents a random variable of the population size in the n-th generation, then the possible values of X_1, X_2, . . . are

{X_1} = {0, 1, 2},
{X_2} = {0, 1, 2, 3, 4},
{X_3} = {0, 1, 2, 3, 4, 5, 6, 7, 8},
. . .
{X_n} = {0, 1, 2, . . . , 2^n}.

The maximum possible population of the n-th generation is 2^n.
The probability of extinction is the smallest solution of G(g) = g, that is,

a + bg + (1 − a − b)g² = g, or (g − 1)[(1 − a − b)g − a] = 0.

The equation always has the solution g = 1. The other possible solution is g = a/(1 − a − b). Extinction is certain if a ≥ 1 − a − b, that is, if 2a + b ≥ 1. The region in the (a, b) plane where extinction is certain is shown in Figure 9.1. If a < 1 − a − b, then extinction occurs with probability a/(1 − a − b).

Figure 9.1: Extinction probability region in the (a, b) plane for Problem 9.3.
9.4. A branching process starts with one individual. Subsequently any individual has a probability (Poisson)

p_j = λ^j e^{−λ}/j!, (j = 0, 1, 2, . . .)

of producing j descendants. Find the probability generating function of this distribution. Obtain the mean and variance of the size of the n-th generation. Show that the probability of ultimate extinction is certain if λ ≤ 1.

The probability generating function is given by

G(s) = Σ_{j=0}^{∞} p_j s^j = Σ_{j=0}^{∞} [λ^j e^{−λ}/j!] s^j = e^{λ(s−1)}.

As expected for this distribution, the mean and variance of the population of the first generation are

μ = G'(1) = λe^{λ(s−1)}|_{s=1} = λ,
σ² = G''(1) + G'(1) − [G'(1)]² = λ² + λ − λ² = λ.

By Section 9.3, the mean and variance of the population of the n-th generation are

μ_n = μ^n = λ^n,
σ_n² = σ²μ^{n−1}(μ^n − 1)/(μ − 1) = λ^n(λ^n − 1)/(λ − 1), (λ ≠ 1).

If λ = 1, then μ_n = 1 and σ_n² = n.
The probability of ultimate extinction is the smallest non-negative solution g of g = G(g) = e^{λ(g−1)}. Since G is convex with G(1) = 1 and G'(1) = λ, the curve y = G(g) lies above the line y = g on 0 ≤ g < 1 when λ ≤ 1, so that g = 1 is the only solution: ultimate extinction is certain if λ ≤ 1.
9.5. A branching process starts with one individual. Any individual has a probability

p_j = α^{2j} sech α/(2j)!, (j = 0, 1, 2, . . .)

of producing j descendants. Find the probability generating function of this distribution. Obtain the mean size of the n-th generation. Show that ultimate extinction is certain if α is less than the computed value 2.065.

The probability generating function of this distribution is given by

G(s, α) = sech α Σ_{j=0}^{∞} [α^{2j}/(2j)!] s^j = sech α cosh(α√s), (s ≥ 0).

Its derivative is

G_s(s, α) = [α/(2√s)] sech α sinh(α√s).

Hence the mean size of the population of the first generation is

μ = G_s(1, α) = ½α tanh α,

which implies that the mean population of the n-th generation is

μ_n = (α^n/2^n) tanh^n α.

Ultimate extinction occurs with probability g, where g is the smallest solution of

g = G(g, α) = sech α cosh(α√g).

This equation always has the solution g = 1, which is the only solution if α < 2.065, approximately. This is a numerically computed value. The graph of the equation above is shown in Figure 9.2.

Figure 9.2: Graph of g = G(g, α) for Problem 9.5; the critical case corresponds to the point (1, 2.065).
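One way to recover the quoted critical value numerically is to note that extinction ceases to be certain when the mean μ = ½α tanh α of the first generation exceeds 1, so the threshold is the root of α tanh α = 2. A simple bisection (an illustrative check, not part of the original solution) confirms α ≈ 2.065.

import math

f = lambda a: 0.5 * a * math.tanh(a) - 1   # mean of the first generation minus 1
lo, hi = 1.0, 3.0                          # f(lo) < 0 < f(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 3))           # 2.065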
9.6. A branching process starts with two individuals. Either individual and any of their descendants has probability p_j, (j = 0, 1, 2, . . .) of producing j descendants independently of any other. Explain why the probabilities of 0, 1, 2, . . . descendants in the first generation are

p_0², p_0p_1 + p_1p_0, p_0p_2 + p_1p_1 + p_2p_0, . . . , Σ_{i=0}^{n} p_i p_{n−i}, . . . ,

respectively. Hence show that the probability generating function of the first generation is G(s)², where

G(s) = Σ_{j=0}^{∞} p_j s^j.

The second generation from each original individual has generating function G_2(s) = G(G(s)) (see Section 9.2). Explain why the probability generating function of the second generation is G_2(s)², and of the n-th generation is G_n(s)².
If the branching process starts with r individuals, what would you think is the formula for the probability generating function of the n-th generation?

For each individual, the probability generating function is

G(s) = Σ_{j=0}^{∞} p_j s^j,

and each produces descendants with populations 0, 1, 2, . . . with probabilities p_0, p_1, p_2, . . .. The combined probabilities that the populations of the first generation are 0, 1, 2, . . . are

p_0², p_0p_1 + p_1p_0, p_0p_2 + p_1² + p_2p_0, . . . .

These expressions are the coefficients of the powers of s in

G(s)² = [Σ_{j=0}^{∞} p_j s^j][Σ_{k=0}^{∞} p_k s^k] = Σ_{k=0}^{∞} [Σ_{j=0}^{k} p_j p_{k−j}] s^k

(this is known as the Cauchy product of the power series). Hence the probability that the population of the first generation is of size k is

Σ_{j=0}^{k} p_j p_{k−j}.

Repeating the argument, each original individual generates descendants whose probabilities are the coefficients of G_2(s) = G(G(s)). Hence the probabilities of populations of 0, 1, 2, . . . descendants are the coefficients of G_2(s)². This process is repeated for succeeding generations, which have the generating functions G_n(s)².
[If the process starts with r individuals, the same argument suggests that the generating function of the n-th generation is G_n(s)^r; see Problem 9.8.]
9.7. A branching process starts with two individuals as in the previous problem. The probabilities are

p_j = 1/2^{j+1}, (j = 0, 1, 2, . . .).

Using the results from Example 9.1, find H_n(s), the probability generating function of the n-th generation. Find also
(a) the probability that the size of the population of the n-th generation is m ≥ 2;
(b) the probability of extinction by the n-th generation;
(c) the probability of ultimate extinction.

For either individual the probability generating function is

G(s) = Σ_{j=0}^{∞} s^j/2^{j+1} = 1/(2 − s).

Then

G_2(s) = G(G(s)) = (2 − s)/(3 − 2s),

and, in general,

G_n(s) = [n − (n − 1)s]/[(n + 1) − ns].

According to Problem 9.6, the generating function for the combined descendants is

H_n(s) = G_n(s)² = {[n − (n − 1)s]/[(n + 1) − ns]}²
= [n²/(n + 1)²][1 − 2(n − 1)s/n + (n − 1)²s²/n²][1 − ns/(n + 1)]^{−2}
= [n²/(n + 1)²][1 − 2(n − 1)s/n + (n − 1)²s²/n²] Σ_{r=0}^{∞} (r + 1)[n/(n + 1)]^r s^r
= n²/(n + 1)² + [2n/(n + 1)³]s + Σ_{r=2}^{∞} {[(r − 1)n^{r−2} + 2n^r]/(n + 1)^{r+2}} s^r,

after some algebra: series expansion by computer is helpful to confirm the formula.
(a) From the series above, the probability p_{n,m} that the population of the n-th generation is m is the coefficient of s^m in the series, namely

p_{n,m} = [(m − 1)n^{m−2} + 2n^m]/(n + 1)^{m+2}, (m ≥ 2).

(b) From the series above, the probability of extinction by the n-th generation is

p_{n,0} = n²/(n + 1)².

(c) The probability of ultimate extinction is

lim_{n→∞} p_{n,0} = lim_{n→∞} n²/(n + 1)² = 1,

which means that it is certain.
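As the solution remarks, a computer series expansion is a convenient check. A minimal sympy sketch (illustrative only, not part of the original solution) compares the coefficients of s^m with the stated formula for m ≥ 2:

import sympy as sp

s, n = sp.symbols('s n', positive=True)
Hn = ((n - (n - 1) * s) / (n + 1 - n * s))**2
series = sp.series(Hn, s, 0, 6).removeO()

for m in range(2, 6):
    coeff = series.coeff(s, m)
    formula = ((m - 1) * n**(m - 2) + 2 * n**m) / (n + 1)**(m + 2)
    print(m, sp.cancel(coeff - formula) == 0)   # True for each m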
9.8. A branching process starts with r individuals, and each individual produces descendants with probability distribution {p_j}, (j = 0, 1, 2, . . .), which has the probability generating function G(s). Given that the probability generating function of the n-th generation is [G_n(s)]^r, where G_n(s) = G(G(. . . (G(s)) . . .)), find the mean population size of the n-th generation in terms of μ = G'(1).

Let Q(s) = [G_n(s)]^r. Its derivative is

Q'(s) = rG'_n(s)[G_n(s)]^{r−1},

where

G'_n(s) = (d/ds)G_{n−1}(G(s)) = G'_{n−1}(G(s))G'(s).

Hence, since G_n(1) = G(1) = 1, the mean population size of the n-th generation is

μ_n = Q'(1) = r[G_n(1)]^{r−1}G'_{n−1}(1)G'(1) = rμ^{n−1}μ = rμ^n.
9.9. Let X_n be the random variable of the population size of a branching process starting with one individual. Suppose that all individuals survive, and that

Z_n = 1 + X_1 + X_2 + · · · + X_n

is the random variable representing the accumulated population size.
(a) If H_n is the probability generating function of the total accumulated population, Z_n, up to and including the n-th generation, show that

H_1(s) = sG(s), H_2(s) = sG(H_1(s)) = sG(sG(s)),

(which perhaps gives a clue to the form of the probability generating function H_n(s)).
(b) What is the mean accumulated population size E(Z_n) (you do not require H_n(s) for this formula)?
(c) If μ < 1, what is lim_{n→∞} E(Z_n), the ultimate expected population?
(d) What is the variance of Z_n?

(a) Let p_j be the probability that any individual in any generation has j descendants, and let the probability generating function of {p_j} be

G(s) = Σ_{j=0}^{∞} p_j s^j.

The probabilities of the accumulated population sizes are as follows. Since the process starts with one individual,

P(Z_0 = 1) = 1, P(Z_1 = 0) = 0, P(Z_1 = n) = p_{n−1}, (n = 1, 2, 3, . . .).

Hence the generating function of P(Z_1 = n) is H_1(s), where

H_1(s) = Σ_{r=1}^{∞} P(Z_1 = r)s^r = Σ_{r=1}^{∞} P(X_1 = r − 1)s^r = Σ_{r=1}^{∞} p_{r−1}s^r = sG(s).

For the distribution of Z_2, use the identity

P(Z_2 = n) = Σ_{r=1}^{∞} P(Z_2 = n | Z_1 = r)P(Z_1 = r).

Then the probability generating function H_2(s) has the series

H_2(s) = Σ_{n=1}^{∞} P(Z_2 = n)s^n = Σ_{n=1}^{∞} Σ_{r=1}^{∞} P(Z_2 = n | Z_1 = r)P(Z_1 = r)s^n = Σ_{r=1}^{∞} p_{r−1} E(s^{Z_2} | Z_1 = r).

Given Z_1 = r, the first generation consists of r − 1 individuals, and in the second generation it is assumed that

X_2 = Y_1 + Y_2 + · · · + Y_{r−1},

where the {Y_j} are iid, each with generating function G(s). Hence Z_2 = Z_1 + X_2 = r + Y_1 + · · · + Y_{r−1}, so that

E(s^{Z_2} | Z_1 = r) = s^r E(s^{Y_1})E(s^{Y_2}) · · · E(s^{Y_{r−1}}) = s^r[G(s)]^{r−1},

and

H_2(s) = Σ_{r=1}^{∞} p_{r−1} s^r[G(s)]^{r−1} = s Σ_{r=1}^{∞} p_{r−1}[sG(s)]^{r−1} = sG(sG(s)),

using a method similar to that of Section 9.2.
(b) The mean of the accumulated population is (see eqn (9.7))

E(Z_n) = E(1 + X_1 + X_2 + · · · + X_n) = 1 + E(X_1) + E(X_2) + · · · + E(X_n)
= 1 + μ + μ² + · · · + μ^n = (1 − μ^{n+1})/(1 − μ), (μ ≠ 1),

after summing the geometric series. If μ = 1, then E(Z_n) = n + 1.
(c) If μ < 1, then from (b), E(Z_n) → 1/(1 − μ) as n → ∞.
(d) The variance of Z_n is, from Section 9.3(i),

V(Z_n) = V(1 + X_1 + X_2 + · · · + X_n) = V(X_1) + V(X_2) + · · · + V(X_n)
= Σ_{r=1}^{n} σ²μ^{r−1}(μ^r − 1)/(μ − 1) = [σ²/(μ − 1)] Σ_{r=1}^{n} (μ^{2r−1} − μ^{r−1})
= [σ²/(μ − 1)][μ(1 − μ^{2n})/(1 − μ²) − (1 − μ^n)/(1 − μ)], (μ ≠ 1)
= σ²(1 − μ^n)(1 − μ^{n+1})/[(1 − μ)(1 − μ²)].
9.10. A branching process starts with one individual and each individual has probability p_j of producing j descendants independently of every other individual. Find the mean and variance of {p_j} in each of the following cases, and hence find the mean and variance of the population of the n-th generation:
(a) p_j = e^{−λ}λ^j/j!, (j = 0, 1, 2, . . .) (Poisson);
(b) p_j = (1 − p)^{j−1}p, (j = 1, 2, . . . ; 0 < p < 1) (geometric);
(c) p_j = C(r + j − 1, r − 1) p^j(1 − p)^r, (j = 0, 1, 2, . . . ; 0 < p < 1) (negative binomial),
where r is a positive integer, the process having started with one individual.

(a) For the Poisson distribution with intensity λ,

p_j = e^{−λ}λ^j/j!.

Its probability generating function is G(s) = e^{−λ(1−s)}. Therefore

G'(s) = λe^{−λ(1−s)}, G''(s) = λ²e^{−λ(1−s)},

and the mean and variance of the first generation are

μ = λ, σ² = G''(1) + G'(1) − [G'(1)]² = λ² + λ − λ² = λ.

The mean and variance of the n-th generation are (see Section 9.3), for λ ≠ 1,

μ_n = μ^n = λ^n, σ_n² = σ²μ^{n−1}(μ^n − 1)/(μ − 1) = λ^n(λ^n − 1)/(λ − 1).

(b) For the geometric distribution, write q = 1 − p; the probability generating function used here is G(s) = q/(1 − ps). Then

G'(s) = pq/(1 − ps)², G''(s) = 2p²q/(1 − ps)³.

The mean and variance of the first generation are

μ = G'(1) = p/q, σ² = G''(1) + G'(1) − [G'(1)]² = p/q².

The mean and variance of the n-th generation are

μ_n = (p/q)^n, σ_n² = σ²μ^{n−1}(μ^n − 1)/(μ − 1) = [1/(2p − 1)](p/q)^n[(p/q)^n − 1], (p ≠ ½).

(c) The negative binomial distribution is

p_j = C(r + j − 1, r − 1) p^j q^r, (q = 1 − p).

Its probability generating function is

G(s) = [q/(1 − ps)]^r.

The derivatives are

G'(s) = rpq^r/(1 − ps)^{r+1}, G''(s) = r(r + 1)p²q^r/(1 − ps)^{r+2}.

Hence the mean and variance of the first generation are

μ = rp/(1 − p), σ² = rp/(1 − p)², (p ≠ 1).

The mean and variance of the populations of the n-th generation are

μ_n = [rp/(1 − p)]^n,
σ_n² = σ²μ^{n−1}(μ^n − 1)/(μ − 1) = [1/(rp − 1 + p)][rp/(1 − p)]^n{[rp/(1 − p)]^n − 1}.
9.11. A branching process has a probability generating function

G(s) = [(1 − p)/(1 − ps)]^r, (0 < p < 1),

where r is a positive integer (a negative binomial distribution), the process having started with one individual. Show that extinction is not certain if p > 1/(1 + r).

We need to investigate solutions of g = G(g) (see Section 9.4). This equation always has the solution g = 1, but does it have a solution less than 1? For this distribution the equation for g becomes

g(1 − gp)^r = (1 − p)^r.

Consider where the line y = (1 − p)^r and the curve y = g(1 − gp)^r intersect in terms of g, for fixed p and r. The curve has a stationary value where

dy/dg = (1 − gp)^r − rpg(1 − gp)^{r−1} = 0,

which occurs at g = 1/[p(1 + r)], and this is a maximum. The line and the curve intersect for a value of g between g = 0 and g = 1 if p > 1/(1 + r), which is the condition that extinction is not certain. Graphs of the line and curve are shown in Figure 9.3 for p = ½ and r = 2.
9.12. Let G_n(s) be the probability generating function of the population size of the n-th generation of a branching process. The probability that the population size is zero at the n-th generation is G_n(0). What is the probability that the population actually becomes extinct at the n-th generation?
In Example 9.1, where p_j = 1/2^{j+1} (j = 0, 1, 2, . . .), it was shown that

G_n(s) = n/(n + 1) + Σ_{r=1}^{∞} [n^{r−1}/(n + 1)^{r+1}] s^r.

Find the probability of extinction,
(a) at the n-th generation,
(b) at the n-th generation or later.
What is the mean number of generations until extinction occurs?
Figure 9.3: Graphs of the line y = (1 − p)^r and the curve y = g(1 − gp)^r with p = ½ and r = 2 for Problem 9.11.
The probability that the population is extinct at the n-th generation is G_n(0), but this includes extinctions which have already occurred at generations 1, 2, . . . , n − 1. The probability that extinction occurs exactly at the n-th generation is therefore G_n(0) − G_{n−1}(0).
(a) In this example G_n(0) = n/(n + 1). Hence the probability of extinction at the n-th generation is

G_n(0) − G_{n−1}(0) = n/(n + 1) − (n − 1)/n = 1/[n(n + 1)].

(b) Since ultimate extinction is certain, the probability that extinction occurs at or after the n-th generation is

1 − G_{n−1}(0) = 1 − (n − 1)/n = 1/n.

The mean number of generations until extinction occurs is

Σ_{n=1}^{∞} n[G_n(0) − G_{n−1}(0)] = Σ_{n=1}^{∞} n/[n(n + 1)] = Σ_{n=1}^{∞} 1/(n + 1).

This series diverges, so that the mean number of generations until extinction is infinite.
9.13. An annual plant produces N seeds in a season, which are assumed to have a Poisson distribution with parameter λ. Each seed has a probability p of germinating to create a new plant which propagates in the following year. Let M be the random variable of the number of new plants. Show that p_m, the probability that there are m growing plants in the first year, is given by

p_m = (λp)^m e^{−λp}/m!, (m = 0, 1, 2, . . .),

that is, Poisson with parameter λp. Show that its probability generating function is

G(s) = e^{λp(s−1)}.

Assuming that all the germinated plants survive and that each propagates in the same manner in succeeding years, find the mean number of plants in year k. Show that extinction is certain if λp ≤ 1.

Given that the plant produces seeds as a Poisson process of intensity λ, the probability that it produces n seeds is

f_n = λ^n e^{−λ}/n!.

Then

p_m = Σ_{r=m}^{∞} C(r, m) p^m(1 − p)^{r−m} λ^r e^{−λ}/r! = [(λp)^m e^{−λ}/m!] Σ_{i=0}^{∞} (1 − p)^i λ^i/i! = (λp)^m e^{−λp}/m!.

Its probability generating function is

G(s) = e^{−λp} Σ_{m=0}^{∞} [(λp)^m/m!] s^m = e^{−λp+λps} = e^{λp(s−1)}.

The mean of the first generation is

μ = G'(1) = λp e^{λp(s−1)}|_{s=1} = λp.

The mean of the n-th generation (the number of plants in year n) is therefore

μ_n = μ^n = (λp)^n.

Extinction occurs with probability g, where g is the smaller solution of g = G(g), that is, the smaller solution of

g = e^{−λp} e^{λpg}.

Consider the line y = g and the exponential curve y = e^{−λp}e^{λpg}. On the curve, the slope is

dy/dg = λp e^{−λp}e^{λpg},

and its slope at g = 1 is λp. Since e^{−λp}e^{λpg} → 0 as g → −∞, and the curve and its slope decrease as g decreases, the only solution of g = G(g) with 0 ≤ g ≤ 1 is g = 1 if λp ≤ 1: extinction is certain in this case. If λp > 1 then there is a solution with 0 < g < 1. Figure 9.4 shows such a solution for λ = 2 and p = 1.

Figure 9.4: Graphs of the line y = g and the curve y = e^{−λp}e^{λpg} with λ = 2 and p = 1 for Problem 9.13.
9.14. The version of Example 9.1 with a general geometric distribution is the branching process with p_j = (1 − p)p^j, (0 < p < 1; j = 0, 1, 2, . . .). Show that

G(s) = (1 − p)/(1 − ps).

Using an induction method, prove that

G_n(s) = (1 − p)[p^n − (1 − p)^n − ps{p^{n−1} − (1 − p)^{n−1}}]/[p^{n+1} − (1 − p)^{n+1} − ps{p^n − (1 − p)^n}], (p ≠ ½).

Find the mean and variance of the population size of the n-th generation.
What is the probability of extinction by the n-th generation? Show that ultimate extinction is certain if p < ½, but has probability (1 − p)/p if p > ½.

As in Problem 9.2, the generating function for the first generation is

G(s) = (1 − p)/(1 − ps).

Consider

G_n(G(s)) = (1 − p)[p^n − (1 − p)^n − p{(1 − p)/(1 − ps)}{p^{n−1} − (1 − p)^{n−1}}]/[p^{n+1} − (1 − p)^{n+1} − p{(1 − p)/(1 − ps)}{p^n − (1 − p)^n}]
= (1 − p)[{p^n − (1 − p)^n}(1 − ps) − p(1 − p){p^{n−1} − (1 − p)^{n−1}}]/[{p^{n+1} − (1 − p)^{n+1}}(1 − ps) − p(1 − p){p^n − (1 − p)^n}]
= (1 − p)[p^{n+1} − (1 − p)^{n+1} − ps{p^n − (1 − p)^n}]/[p^{n+2} − (1 − p)^{n+2} − ps{p^{n+1} − (1 − p)^{n+1}}]
= G_{n+1}(s).

Hence if the formula is true for G_n(s), then it is true for G_{n+1}(s). It can be verified for G_2(s), so that by induction on the integers it is true for all n.
The probability of extinction by the n-th generation is

G_n(0) = (1 − p)[p^n − (1 − p)^n]/[p^{n+1} − (1 − p)^{n+1}].

If p > ½, express this in the form

G_n(0) = (1 − p)[1 − ((1 − p)/p)^n]/{p[1 − ((1 − p)/p)^{n+1}]} → (1 − p)/p

as n → ∞, which is the probability of ultimate extinction. If p < ½, then

G_n(0) = [(p/(1 − p))^n − 1]/[(p/(1 − p))^{n+1} − 1] → 1,

as n → ∞: extinction is certain.
9.15. A branching process starts with one individual, and the probability of producing j descendants has the distribution {p_j}, (j = 0, 1, 2, . . .). The same probability distribution applies independently to all descendants and their descendants. If X_n is the random variable of the size of the n-th generation, show that

E(X_n) ≥ 1 − P(X_n = 0).

In Section 9.3 it was shown that E(X_n) = μ^n, where μ = E(X_1). Deduce that eventual extinction is certain if μ < 1.

By definition

E(X_n) = Σ_{j=1}^{∞} jP(X_n = j) ≥ Σ_{j=1}^{∞} P(X_n = j) = 1 − P(X_n = 0).

Hence

P(X_n = 0) ≥ 1 − E(X_n) = 1 − μ^n.

Therefore, if μ < 1, then P(X_n = 0) → 1 as n → ∞. This conclusion is true irrespective of the distribution.
9.16. In a branching process starting with one individual, the probability that any individual has j descendants is p_j = α/2^j, (j = 0, 1, 2, . . . , r), where α is a constant and r is fixed. This means that any individual can have a maximum of r descendants. Find α and the probability generating function G(s) of the first generation. Show that the mean size of the n-th generation is

μ_n = [(2^{r+1} − 2 − r)/(2^{r+1} − 1)]^n.

What is the probability of ultimate extinction?

Given p_j = α/2^j, then for it to be a probability distribution

Σ_{j=0}^{r} α/2^j = α[1 + ½ + 1/2² + · · · + 1/2^r] = 2α[1 − (½)^{r+1}] = 1.

Therefore the constant α is defined by

α = 1/{2[1 − (½)^{r+1}]} = 2^r/(2^{r+1} − 1). (i)

The probability generating function is given by

G(s) = α Σ_{j=0}^{r} s^j/2^j = α[1 + s/2 + s²/2² + · · · + s^r/2^r] = α[1 − (s/2)^{r+1}]/(1 − ½s), (ii)

using the formula for the sum of the geometric series; α is given by (i).
The derivative of G(s) is

G'(s) = α 2^{−r}[2^{1+r} − 2(1 + r)s^r + rs^{1+r}]/(s − 2)².

Hence the mean value of the first generation is

μ = G'(1) = α 2^{−r}[2^{1+r} − 2 − r] = (2^{r+1} − 2 − r)/(2^{r+1} − 1).

By (9.7), the mean of the n-th generation is

μ_n = μ^n = [(2^{r+1} − 2 − r)/(2^{r+1} − 1)]^n.

Since

μ = (2^{r+1} − 2 − r)/(2^{r+1} − 1) < 1,

then, by Problem 9.15, ultimate extinction is certain.
9.17. Extend the tree in Figure 9.3 for the gambling martingale in Section 9.5 to Z_4, and confirm that

E(Z_4|Z_0, Z_1, Z_2, Z_3) = Z_3.

Confirm also that E(Z_4) = 1.

The extension of the gambling martingale to Z_4 is shown in Figure 9.5.

Figure 9.5: Martingale tree for Problem 9.17.

The values for the random variable Z_4 are

Z_4 = {even numbers between −14 and 16 inclusive}.

The mean value of Z_4 is given by

E(Z_4) = Σ_{m=0}^{15} (1/2^4)(−2^4 + 2m + 2) = 1,

or the mean can be calculated from the mean of the final column of numbers in Figure 9.5.
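A direct enumeration of the 2^4 equally likely win/lose sequences (a check only, not part of the original solution, assuming even odds and doubled stakes as in the Section 9.5 martingale) confirms both E(Z_4) = 1 and the property E(Z_4|Z_3) = Z_3.

from itertools import product
from fractions import Fraction

def asset_path(outcomes):
    # outcomes is a tuple of +1/-1 results; the stake doubles at each play
    z = [Fraction(1)]
    for k, w in enumerate(outcomes):
        z.append(z[-1] + w * 2**k)
    return z

paths = [asset_path(p) for p in product((1, -1), repeat=4)]

EZ4 = sum(path[4] for path in paths) / len(paths)
print(EZ4)                      # 1

# martingale check: average Z_4 over the two continuations of each Z_3 value
ok = all((a[4] + b[4]) / 2 == a[3] for a, b in zip(paths[0::2], paths[1::2]))
print(ok)                       # True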
9.18. A gambling game similar to the gambling martingale of Section 9.5 is played according to the following rules:
(a) the gambler starts with 1, but has unlimited resources;
(b) against the casino, which also has unlimited resources, the gambler plays a series of games in which the probability that the gambler wins is 1/p and loses is (p − 1)/p, where p > 1;
(c) at the n-th game, the gambler either wins (p^n − p^{n−1}) or loses p^{n−1}.
If Z_n is the random variable of the gambler's asset/debt at the n-th game, draw a tree diagram similar to that of Figure 9.3 as far as Z_3. Show that

Z_3 = {−p − p², −p², −p, 0, p³ − p² − p, p³ − p², p³ − p, p³},

and confirm that

E(Z_2|Z_0, Z_1) = Z_1, E(Z_3|Z_0, Z_1, Z_2) = Z_2,

which indicates that this game is a martingale. Show also that

E(Z_1) = E(Z_2) = E(Z_3) = 1.

Assuming that it is a martingale, show that, if the gambler first wins at the n-th game, then the gambler will have an asset gain or debt of (p^{n+1} − 2p^n + 1)/(p − 1). Explain why a win for the gambler can only be guaranteed for all n if p ≥ 2.

The tree diagram for this martingale is shown in Figure 9.6.

Figure 9.6: Tree diagram for Problem 9.18.

From the last column in Figure 9.6, it can be seen that the elements of Z_3 are given by

Z_3 = {−p − p², −p², −p, 0, p³ − p² − p, p³ − p², p³ − p, p³}.

For the conditional means,

E(Z_2|Z_0, Z_1) = {p, 0} = Z_1,
E(Z_3|Z_0, Z_1, Z_2) = {p², 0, p² − p, −p} = Z_2.

For the means,

E(Z_1) = p·(1/p) + 0·(p − 1)/p = 1,
E(Z_2) = p²·(1/p²) + 0·(1/p)[(p − 1)/p] + (p² − p)·[(p − 1)/p](1/p) + (−p)·[(p − 1)/p]² = 1,

etc.
Suppose the gambler first wins at the n-th game: on the tree the path will be the lowest track until the last game. Generalising the path for Z_3, the gambler has an asset gain of

p^n − (1 + p + p² + · · · + p^{n−1}) = p^n − (p^n − 1)/(p − 1) = (p^{n+1} − 2p^n + 1)/(p − 1).

To guarantee winnings requires

p^{n+1} − 2p^n + 1 = p^n(p − 2) + 1 > 0

for all n. This will certainly be true if p ≥ 2. For 1 < p < 2 the gain is positive only for small enough n, since p^n(p − 2) + 1 becomes negative for sufficiently large n.
9.19. Let X_1, X_2, . . . be independent random variables with means μ_1, μ_2, . . . respectively. Let

Z_n = X_1 + X_2 + · · · + X_n,

and let Z_0 = X_0 = 0. Show that the random variable

Y_n = Z_n − Σ_{i=1}^{n} μ_i, (n = 1, 2, . . .)

is a martingale with respect to {X_n}. [Note that E(Z_n|X_1, X_2, . . . , X_n) = Z_n.]

The result follows since

E(Y_{n+1}|X_1, X_2, . . . , X_n) = E(Z_{n+1} − Σ_{i=1}^{n+1} μ_i | X_1, X_2, . . . , X_n)
= E(Z_n + X_{n+1} − Σ_{i=1}^{n+1} μ_i | X_1, X_2, . . . , X_n)
= Z_n + μ_{n+1} − Σ_{i=1}^{n+1} μ_i
= Z_n − Σ_{i=1}^{n} μ_i = Y_n.

Hence the random variable Y_n is a martingale.
9.20. Consider an unsymmetric random walk which starts at the origin. The walk advances one position with probability p and retreats one position with probability 1 − p. Let X_n be the random variable giving the position of the walk at step n. Let Z_n be the random variable given by

Z_n = X_n + (1 − 2p)n.

Show that

E(Z_2|X_0, X_1) = {2 − 2p, −2p} = Z_1.

Generally, show that {Z_n} is a martingale with respect to {X_n}.

The conditional mean is

E(Z_2|X_0, X_1) = E(X_2 + 2(1 − 2p)|X_0, X_1) = E(X_2|X_0, X_1) + 2(1 − 2p)
= {1 + 1 − 2p, −1 + 1 − 2p} = {2 − 2p, −2p} = Z_1,

the two values corresponding to X_1 = 1 and X_1 = −1.
By the Markov property of the random walk,

E(Z_{n+1}|X_0, X_1, . . . , X_n) = E(Z_{n+1}|X_n) = E(X_{n+1} + (1 − 2p)(n + 1)|X_n).

Suppose that X_n = k. Then the walk either advances one step with probability p or retreats one step with probability 1 − p. Therefore

E(X_{n+1} + (1 − 2p)(n + 1)|X_n = k) = p(k + 1) + (1 − p)(k − 1) + (1 − 2p)(n + 1) = k + (1 − 2p)n = Z_n,

so that {Z_n} is a martingale with respect to {X_n}.
9.21. In the gambling martingale of Section 9.5, the random variable Z_n, the gambler's asset, in a game against a casino in which the gambler starts with 1 and doubles the bid at each play, is given by

Z_n = {−2^n + 2m + 2}, (m = 0, 1, 2, . . . , 2^n − 1).

Find the variance of Z_n. What is the variance of

E(Z_n|Z_0, Z_1, . . . , Z_{n−1})?

The sum of the elements in the set Z_n = {−2^n + 2m + 2}, (m = 0, 1, 2, . . . , 2^n − 1), is

Σ_{m=0}^{2^n−1} (−2^n + 2m + 2) = (−2^n + 2)2^n + 2 Σ_{m=1}^{2^n−1} m = 2^n.

Since all the elements in Z_n are equally likely to occur after n steps, then

E(Z_n) = (1/2^n) Σ_{m=0}^{2^n−1} (−2^n + 2m + 2) = (1/2^n)·2^n = 1.

The variance of Z_n is given by

V(Z_n) = E(Z_n²) − [E(Z_n)]² = (1/2^n) Σ_{m=0}^{2^n−1} (−2^n + 2m + 2)² − 1 = ⅓(2^{2n} − 1),

since

Σ_{m=0}^{2^n−1} (−2^n + 2m + 2)² = (2^n/3)(2 + 2^{2n}).

Since

E(Z_n|Z_0, Z_1, . . . , Z_{n−1}) = Z_{n−1},

then

V[E(Z_n|Z_0, Z_1, . . . , Z_{n−1})] = V(Z_{n−1}) = ⅓[2^{2(n−1)} − 1]

by the previous result.
9.22. A random walk starts at the origin, and, with probability p_1 advances one position and with probability q_1 = 1 − p_1 retreats one position at every step. After 10 steps the probabilities change to p_2 and q_2 = 1 − p_2 respectively. What is the expected position of the walk after a total of 20 steps?

After 10 steps the walk could be at any position in the list of even positions

{−10, −8, −6, . . . , 6, 8, 10},

which are the possible values of the random variable X_r. Let the random variable Y_n be the position of the walk after 20 steps, so that

Y_n = {−20, −18, −16, . . . , 16, 18, 20}.

Then the expected position after a further 10 steps, conditional on X_r, is

E(Y_n|X_r) = X_r + 10(p_2 − q_2).

Its expected position is

E[E(Y_n|X_r)] = E[X_r + 10(p_2 − q_2)] = 10(p_1 − q_1) + 10(p_2 − q_2) = 10(p_1 + p_2 − q_1 − q_2).
9.23. A symmetric random walk starts at the origin x = 0. The stopping rule that the walk ends when the position x = 1 is first reached is applied, that is, the stopping time T is given by

T = min{n : X_n = 1},

where X_n is the position of the walk at step n. What is the expected value of T? If this walk is interpreted as a gambling problem in which the gambler starts with nothing with equal odds of winning or losing 1 at each play, what is the flaw in this stopping rule as a strategy of guaranteeing a win for the gambler in every game? [Hint: the generating function for the probability of the first passage is

G(s) = [1 − (1 − s²)^{1/2}]/s:

see Problem 3.11.]

The probability generating function for the first passage to x = 1 for the walk starting at the origin is

G(s) = (1/s)[1 − (1 − s²)^{1/2}] = s/2 + s³/8 + s⁵/16 + O(s⁷),

which implies, for example, that the probability that the first visit to x = 1 occurs at the 5-th step is 1/16.
The mean number of steps to the first visit is

μ = G'(s)|_{s=1} = {[1 − (1 − s²)^{1/2}]/[s²(1 − s²)^{1/2}]}_{s=1} = ∞.

It seems a good ploy, but it would take, on average, an infinite number of plays to win 1.
9.24. In a finite-state branching process, the descendant probabilities are, for every individual,

p_j = 2^{m−j}/(2^{m+1} − 1), (j = 0, 1, 2, . . . , m),

and the process starts with one individual. Find the mean size of the first generation. If X_n is a random variable of the size of the n-th generation, explain why

Z_n = [(2^{m+1} − 1)/(2^{m+1} − m − 2)]^n X_n

defines a martingale over {X_n}.

In this model of a branching process each individual can produce not more than m descendants. It can be checked that

Σ_{j=0}^{m} p_j = Σ_{j=0}^{m} 2^{m−j}/(2^{m+1} − 1) = 1,

using the formula for the sum of the geometric series.
The probability generating function for the first generation is

G(s) = Σ_{j=0}^{m} p_j s^j = [2^m/(2^{m+1} − 1)] Σ_{j=0}^{m} (s/2)^j = (2^{m+1} − s^{m+1})/[(2^{m+1} − 1)(2 − s)].

Its first derivative is

G'(s) = [2^{m+1} − 2(m + 1)s^m + ms^{m+1}]/[(2^{m+1} − 1)(2 − s)²].

Therefore the mean of the first generation is

μ = G'(1) = (2^{m+1} − m − 2)/(2^{m+1} − 1).

The random variable Z_n is simply Z_n = X_n/μ^n (see Section 9.5), which defines a martingale over {X_n}.
9.25. A random walk starts at the origin, and at each step the walk advances one position with probability p or retreats with probability 1 − p. Show that the random variable

Y_n = X_n² + 2(1 − 2p)nX_n + [(2p − 1)² − 1]n + (2p − 1)²n²,

where X_n is the random variable of the position of the walk at time n, defines a martingale with respect to {X_n}.

Let α(p, n) = 2(1 − 2p)n and β(p, n) = [(2p − 1)² − 1]n + (2p − 1)²n² in the expression for Y_n. Then

E(Y_{n+1}|X_n) = p[(X_n + 1)² + α(p, n + 1)(X_n + 1) + β(p, n + 1)]
+ (1 − p)[(X_n − 1)² + α(p, n + 1)(X_n − 1) + β(p, n + 1)]
= X_n² + X_n[4p − 2 + α(p, n + 1)] + [1 + (2p − 1)α(p, n + 1) + β(p, n + 1)].

The coefficients in the last expression are

4p − 2 + α(p, n + 1) = 4p − 2 + 2(1 − 2p)(n + 1) = 2(1 − 2p)n = α(p, n),

and

1 + (2p − 1)α(p, n + 1) + β(p, n + 1)
= 1 + 2(2p − 1)(1 − 2p)(n + 1) + [(2p − 1)² − 1](n + 1) + (2p − 1)²(n + 1)²
= [(2p − 1)² − 1]n + (2p − 1)²n² = β(p, n).

Hence

E(Y_{n+1}|X_n) = X_n² + 2(1 − 2p)nX_n + [(2p − 1)² − 1]n + (2p − 1)²n² = Y_n,

so that, by definition, Y_n is a martingale.
9.26. A simple epidemic has n_0 susceptibles and one infective at time t = 0. If p_n(t) is the probability that there are n susceptibles at time t, it was shown in Section 9.7 that p_n(t) satisfies the differential-difference equations (see eqns (9.15) and (9.16))

dp_n(t)/dt = β(n + 1)(n_0 − n)p_{n+1}(t) − βn(n_0 + 1 − n)p_n(t),

for n = 0, 1, 2, . . . , n_0. Show that the probability generating function

G(s, t) = Σ_{n=0}^{n_0} p_n(t)s^n

satisfies the partial differential equation

∂G(s, t)/∂t = β(1 − s)[n_0 ∂G(s, t)/∂s − s ∂²G(s, t)/∂s²].

Nondimensionalize the equation by putting τ = βt. For small τ let

G(s, τ/β) = G_0(s) + G_1(s)τ + G_2(s)τ² + · · · .

Show that

nG_n(s) = n_0(1 − s) dG_{n−1}(s)/ds − s(1 − s) d²G_{n−1}(s)/ds²,

for n = 1, 2, 3, . . . , n_0. What is G_0(s)? Find the coefficients G_1(s) and G_2(s). Hence show that the mean number of susceptibles for small τ is given by

n_0 − n_0τ − ½n_0(n_0 − 2)τ² + O(τ³).

In Example 9.9, the number of susceptibles initially is given by n_0 = 4. Expand p_0(t), p_1(t) and p_2(t) in powers of τ and confirm that the expansions agree with G_1(s) and G_2(s) above.
Multiply the difference equation by s^n and sum from n = 0 to n = n_0, giving

Σ_{n=0}^{n_0} p'_n(t)s^n = β Σ_{n=0}^{n_0−1} (n + 1)(n_0 − n)p_{n+1}(t)s^n − β Σ_{n=1}^{n_0} n(n_0 + 1 − n)p_n(t)s^n,

or,

G_t(s, t) = β[n_0 Σ_{m=1}^{n_0} mp_m(t)s^{m−1} − Σ_{m=2}^{n_0} m(m − 1)p_m(t)s^{m−1} − n_0 Σ_{n=1}^{n_0} np_n(t)s^n + Σ_{n=1}^{n_0} n(n − 1)p_n(t)s^n]
= β[n_0 G_s(s, t) − sG_{ss}(s, t) − n_0 sG_s(s, t) + s²G_{ss}(s, t)]
= β[n_0(1 − s)G_s(s, t) + s(s − 1)G_{ss}(s, t)],

as required.
Let τ = βt. Then the equation for H(s, τ) = G(s, τ/β) is

∂H(s, τ)/∂τ = (1 − s)[n_0 ∂H(s, τ)/∂s − s ∂²H(s, τ)/∂s²].

For small τ, let

H(s, τ) = G(s, τ/β) = H_0(s) + H_1(s)τ + H_2(s)τ² + · · · ,

and substitute this series into the partial differential equation for H(s, τ), so that

H_1(s) + 2H_2(s)τ + · · · = (1 − s)n_0[H'_0(s) + H'_1(s)τ + · · ·] − s(1 − s)[H''_0(s) + H''_1(s)τ + · · ·].

Equating powers of τ, we obtain

nH_n(s) = (1 − s)n_0 H'_{n−1}(s) − s(1 − s)H''_{n−1}(s), (n = 1, 2, 3, . . .). (i)

For τ = 0,

H_0(s) = G(s, 0) = Σ_{n=0}^{n_0} p_n(0)s^n.

Since the number of susceptibles is n_0 at time t = 0, it follows that p_{n_0}(0) = 1 and p_n(0) = 0 for n ≠ n_0. Hence H_0(s) = s^{n_0}.
From (i),

H_1(s) = n_0(1 − s)H'_0(s) − s(1 − s)H''_0(s)
= n_0²(1 − s)s^{n_0−1} − s(1 − s)n_0(n_0 − 1)s^{n_0−2}
= n_0(1 − s)s^{n_0−1},

H_2(s) = ½[n_0(1 − s)H'_1(s) − s(1 − s)H''_1(s)]
= ½n_0(1 − s)n_0[(n_0 − 1)s^{n_0−2} − n_0 s^{n_0−1}] − ½s(1 − s)n_0[(n_0 − 1)(n_0 − 2)s^{n_0−3} − n_0(n_0 − 1)s^{n_0−2}]
= ½n_0 s^{n_0−2}(1 − s)(2n_0 − 2 − n_0 s).

The mean number of susceptibles is given by

μ = H_s(1, τ) = H'_0(1) + H'_1(1)τ + H'_2(1)τ² + O(τ³)
= [n_0 s^{n_0−1} + {n_0(n_0 − 1)s^{n_0−2} − n_0²s^{n_0−1}}τ
+ ½{n_0³s^{n_0−1} + (n_0 − 1)(2n_0 − 3n_0²)s^{n_0−2} + (n_0 − 2)(2n_0² − 2n_0)s^{n_0−3}}τ² + O(τ³)]_{s=1}
= n_0 − n_0τ − ½n_0(n_0 − 2)τ² + O(τ³),

where τ = βt.
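The coefficients H_1(s), H_2(s) and the derivative value used for the mean can be confirmed symbolically; the sketch below is an independent check (using sympy, not part of the original solution), applying recurrence (i) starting from H_0(s) = s^{n_0}.

import sympy as sp

s, n0 = sp.symbols('s n0', positive=True)

H = [s**n0]                      # H_0(s) = s^{n_0}
for n in (1, 2):
    prev = H[-1]
    H.append(sp.expand((n0 * (1 - s) * sp.diff(prev, s)
                        - s * (1 - s) * sp.diff(prev, s, 2)) / n))

H1_formula = n0 * (1 - s) * s**(n0 - 1)
H2_formula = sp.Rational(1, 2) * n0 * s**(n0 - 2) * (1 - s) * (2*n0 - 2 - n0*s)
print(sp.simplify(H[1] - H1_formula) == 0)                              # True
print(sp.simplify(H[2] - H2_formula) == 0)                              # True
print(sp.simplify(sp.diff(H[2], s).subs(s, 1) + n0*(n0 - 2)/2) == 0)    # True: H_2'(1) = -n0(n0-2)/2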