
Stochastic Process

Since arrivals may come to a queue at random instants and since the service provided by the servers will also generally be random, we need to treat the arrival and departure processes as random processes when studying a queue or a queueing network.

A Stochastic Process (also called a random process) is a family of random variables indexed by the parameter t in T. For a given choice of the time instant t, different realizations of the stochastic process will generate random values of X at the selected time instant t. Alternatively, one can view the stochastic process as one which generates a random function of time for every realization of the process. We consider the stochastic process X(t) to take on the random values X(t_1) = x_1, ..., X(t_n) = x_n, ... at times t_1, ..., t_n, .... The random variables x_1, ..., x_n, ... are then specified by specifying their joint distribution. One can also choose the time points t_1, ..., t_n, ... at which the process is actually examined in different ways.

For our purposes, we need to consider a specific type of random process where the past history of the process can be neglected if one knows the current state of the process. This property is referred to as the Markov property. Apart from the fact that this property can be naturally observed in many processes, assuming it also makes the analysis reasonably tractable when we want to study queues and queueing networks.

Markov Processes

The stochastic process X(t) is referred to as a Markov Process if it satisfies the Markovian property (i.e. the memoryless property). This states that

P{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, ..., X(t_1) = x_1} = P{X(t_{n+1}) = x_{n+1} | X(t_n) = x_n}

for any choice of time instants t_i, i = 1, ..., n, where t_j > t_k for j > k. Note that we use P(A) to denote the probability of event A and P(A | B) as the probability of event A given that event B has happened, i.e. the event (A | B). This is referred to as the memoryless property, as the state of the system at the future time t_{n+1} is decided only by the system state at the current time t_n and does not depend on the states at the earlier time instants t_1, ..., t_{n-1}.

Restricted versions of the Markov property lead to different types of Markov Processes. These may be classified based on whether the state space is continuous or discrete, and whether the process is observed over continuous time or only at discrete time instants. These may be summarized as

(a) Markov Chains over a Discrete State Space
(b) Discrete Time and Continuous Time Markov Processes and Markov Chains

A Markov Chain has a discrete state space (e.g. the set of non-negative integers). In a Discrete Time Markov Chain, state changes are pre-ordained to occur only at the integer points 0, 1, 2, ..., n (that is, at the time points t_0, t_1, t_2, ..., t_n), whereas in a Continuous Time Markov Chain, state changes may occur anywhere in time.

In the analysis of simple queues, the state of the queue may be represented by a single random variable X(t) which takes on integer values {i, i = 0, 1, ...} at any instant of time. The corresponding process may therefore be treated as a Continuous Time Markov Chain, since time is continuous but the state space is discrete.

Homogeneous Markov Chain

A Homogeneous Markov Chain is one where the transition probability P{X_{n+1} = j | X_n = i} is the same regardless of n. Therefore, the transition probability p_ij of going from state i to state j may be written as

p_ij = P{X_{n+1} = j | X_n = i}   for all n

Note that this property implies that if the system is in state i at the nth time instant, then the probability of finding the system in state j at the (n+1)th time instant (i.e. the next time instant) does not depend on when we observe the system, i.e. on the particular choice of n. This simplifies the description of the system considerably, as the transition probability p_ij will be the same regardless of when we choose to observe the system. It should be noted that, for a Homogeneous Markov Chain, the transition probabilities depend only on the terminal states (i.e. the initial state i and the final state j) but do not depend on when the transition (i -> j) actually occurs.
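As a concrete illustration, the following is a minimal sketch in Python that samples a path of a homogeneous Discrete Time Markov Chain; the 3-state transition matrix is invented for the example, and the same matrix is used at every step, as homogeneity requires.

```python
import random

# Transition probabilities p_ij of a homogeneous 3-state chain.
# Each row must sum to 1. The values are arbitrary, for illustration only.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
]

def step(state):
    """Sample the next state using row `state` of P (the same at every n)."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1  # guard against floating-point round-off

# Sample a 10-step path starting from state 0.
path, state = [0], 0
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```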

Consider a Discrete Time Markov Chain which is currently in state A. Let p be the probability that the system remains in state A at the next time instant, so that (1-p) is the probability that it goes to some other state. This may be represented by the figure shown below.

Discrete Time Markov Chain (Transition from State A)

We can then find the probability of the system staying in state A for N time units before exiting from state A as follows:

P{system stays in state A for the next N time units | system is currently in state A} = p^N
P{system stays in state A for exactly N time units before exiting from state A} = p^N (1-p)

Note that the above distribution is Geometric, which is also memoryless in nature.
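A quick simulation confirms this. The sketch below (p = 0.7 is an arbitrary choice) estimates the distribution of the number of time units spent in state A before exiting and compares it to p^N (1-p):

```python
import random

p, trials = 0.7, 200_000

def sojourn():
    """Count the time units the chain stays in A before the first exit."""
    n = 0
    while random.random() < p:  # remains in A with probability p
        n += 1
    return n

counts = {}
for _ in range(trials):
    n = sojourn()
    counts[n] = counts.get(n, 0) + 1

for n in range(5):
    # empirical frequency vs. the Geometric probability p^N (1-p)
    print(n, counts.get(n, 0) / trials, p**n * (1 - p))
```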

Similarly, consider a Continuous Time Markov Chain which is in state A at time t. Let λ be the rate at which the system leaves state A, so that the probability of its leaving state A in a time interval Δt is λΔt. Then (1 - λΔt) will be the probability that the system is still in state A at the time instant t + Δt. This may be represented by the figure shown below.

Continuous Time Markov Chain (Transition from State A)

We can then find the probability of the system staying in state A for a time interval of length T before exiting from state A as follows:

P{system in state A for time T | system currently in state A} = (1 - λΔt)^{T/Δt} -> e^{-λT}   as Δt -> 0

Note that this is the complement of the cumulative distribution function of an exponential distribution, implying that the time spent in a particular state (say, state A) will be an exponentially distributed random variable. This distribution is Exponential, which is also memoryless in nature.
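The limit used above is easy to check numerically. A minimal sketch (the values λ = 2.0 and T = 1.5 are arbitrary): as Δt shrinks, (1 - λΔt)^{T/Δt} approaches e^{-λT}.

```python
import math

lam, T = 2.0, 1.5  # arbitrary rate and interval, for illustration

for dt in (0.1, 0.01, 0.001, 0.0001):
    approx = (1 - lam * dt) ** (T / dt)    # discrete survival probability
    print(dt, approx, math.exp(-lam * T))  # converges to exp(-lam * T)
```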

Important Observation from the results of the earlier two slides

In a homogeneous Markov Chain, the distribution of time spent in a state is
(a) Geometric for a Discrete Time Markov Chain
(b) Exponential for a Continuous Time Markov Chain

Semi-Markov Process/Chain

The Semi-Markov Process/Chain relaxes this condition but still has the one-step memory property. In these processes, the distribution of time spent in a state can be arbitrary, but the one-step memory feature of the Markovian property is retained, i.e. the system state at the (n+1)th time instant depends only on the state at the nth time instant, while the distribution of time spent in a particular state can be arbitrary.

Discrete-Time Markov Chains

The sequence of random variables X_1, X_2, ... forms a Markov Chain if, for all n (n = 1, 2, ...) and all possible values of the random variables, we have

P{X_n = j | X_1 = i_1, ..., X_{n-1} = i_{n-1}} = P{X_n = j | X_{n-1} = i_{n-1}}

Note that this once again illustrates the one-step memory of the process, since the probability of the system state at the nth time instant depends only on the system state at the (n-1)th instant and not on any of the earlier time instants.

Homogeneous Discrete-Time Markov Chain

The homogeneity property additionally implies that the state transition probability p_ij = P{X_n = j | X_{n-1} = i} will also be independent of n, i.e. of the instant when the transition actually occurs. This implies that the probability of the system going from state i to state j in one step will be the same (i.e. p_ij) whenever that transition happens; this probability will not be different even if it happens at different times. In this case, the state transition probability will only depend on the value of the initial state and the value of the next state, regardless of when the transition occurs.

Homogeneous Discrete-Time Markov Chain

The homogeneity property also implies that we can define an m-step state transition probability as follows:

p_ij^(m) = P{X_{n+m} = j | X_n = i} = Σ_k p_ik^(m-1) p_kj,   m = 2, 3, ...

where p_ij^(m) is the probability that the state changes from state i to state j in m steps. As the above equation is written, we go from state i to state k in (m-1) steps and then go from k to j in the last (mth) step, summing the probabilities of each of these over all possible intermediate states k.

This may also be written in other ways, though the value obtained in each case will be the same. For example, we can also write p_ij^(m) in the following way:

p_ij^(m) = P{X_{n+m} = j | X_n = i} = Σ_k p_ik p_kj^(m-1),   m = 2, 3, ...

This way of expressing the m-step state transition probability is one where the system goes from state i to state k in the first step and then goes from state k to state j in the next (m-1) steps. The probabilities of each of these are summed over all possible intermediate states.
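Both decompositions are instances of the Chapman-Kolmogorov equations, and in matrix form p_ij^(m) is simply the (i, j) entry of the mth power of the transition matrix. A minimal sketch (using the same arbitrary 3-state matrix convention as before):

```python
import numpy as np

# Arbitrary homogeneous transition matrix, for illustration only.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

m = 4
first  = np.linalg.matrix_power(P, m - 1) @ P   # sum_k p_ik^(m-1) p_kj
second = P @ np.linalg.matrix_power(P, m - 1)   # sum_k p_ik p_kj^(m-1)
print(np.allclose(first, second))               # True: both equal P^m
print(first)
```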

Irreducible Markov Chain

An Irreducible Markov Chain is a Markov Chain where every state can be reached from every other state in a finite number of steps. This implies that for every pair of states i and j there exists some k such that p_ij^(k) > 0. In other words, given any initial state and final state which are valid states of the Markov Chain, there will always be a sequence of states through which one can reach the final state from the initial state. (A simple reachability test is sketched below.)

If a Markov Chain is not irreducible, then
(a) it may have one or more absorbing states, which are states from which the process cannot move to any of the other states, or
(b) it may have a subset of states A from which one cannot move to states outside A, i.e. to states in A^c.
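A minimal way to test irreducibility (a sketch, treating the chain as a directed graph with an edge i -> j whenever p_ij > 0) is to check that every state can reach every other state:

```python
from collections import deque

def reachable(P, start):
    """Set of states reachable from `start` via transitions with p_ij > 0."""
    seen, queue = {start}, deque([start])
    while queue:
        i = queue.popleft()
        for j, p in enumerate(P[i]):
            if p > 0 and j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def is_irreducible(P):
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))

# State 2 is absorbing here, so this chain is NOT irreducible.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.0, 0.0, 1.0]]
print(is_irreducible(P))  # False
```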

Classification of States for a Markov Chain

The states of a Markov Chain may be classified as being either Recurrent or Transient. Recurrent states may be further divided into Recurrent Null and Recurrent Non-null. A state which is Recurrent Non-null is also referred to as Positive Recurrent. It may be noted that all the states of a queue will be Positive Recurrent under equilibrium conditions.

f_j(n) = P{system returns to state j exactly n steps after leaving state j}

f_j = P{system returns to state j some time after leaving state j}

M_j = Mean recurrence time for state j = mean number of steps to return to state j after leaving state j

State j is periodic with period a (a > 1) if the only possible numbers of steps in which state j may recur are a, 2a, 3a, ....; in that case, the recurrence time for state j has period a. State j is said to be aperiodic if a = 1.

A recurrent state is said to be ergodic if it is both positive recurrent and aperiodic. An ergodic Markov Chain will have all its states ergodic. An aperiodic, irreducible Markov Chain with a finite number of states will always be ergodic.

The states of an Irreducible Markov Chain are either all transient, or all recurrent null, or all recurrent positive. If the chain is periodic, then all states have the same period a.

In an irreducible, aperiodic, homogeneous Markov Chain, the limiting state probabilities p_j = P{state j} always exist and are independent of the initial state probability distribution. Moreover, either

(a) all states are transient, or all states are recurrent null; in this case the state probabilities p_j are zero for all states and no stationary state distribution will exist, or

(b) all states are recurrent positive; in this case a stationary distribution giving the equilibrium state probabilities exists and is given by p_j = 1/M_j for all j.

Interpreting this for queueing, we find that:

(a) When the queue is in equilibrium, all its states will be recurrent positive, with a stationary state distribution, and the states will be ergodic as well. This implies that if we let the queue operate for a long time T and find that, overall, it spent time T_j in state j, then the probability of finding the queue in state j will be the same as T_j/T. It is also important to note that all states (including state 0, i.e. the state where the queue is empty) will have non-zero probabilities of occurrence. This is the reason why one sometimes makes the statement that a queue in equilibrium is guaranteed to be empty some time or the other! It is also important to note that, in this case, the state distribution of the queue will be the same regardless of the initial state in which the queue is started.

(b) If the queue is overloaded (i.e. the maximum service rate is less than the arrival rate), then the queue will not be in stable equilibrium. In this case, all its states will be transient, as the overall tendency of the queue will be for the number in the queue to keep increasing; the number will tend to infinity for a queue with an infinite buffer, or the buffer will become completely full if the queue has only a finite buffer. Moreover, the probability of finding the queue empty will be zero.

The stationary distribution of the states (i.e. the equilibrium state probabilities) of an irreducible, aperiodic, homogeneous Markov Chain (which, its states being positive recurrent, will also be ergodic) may be found by solving a set of simultaneous linear equations and a normalization condition, as given below.

Balance Equations:  p_j = Σ_k p_k p_kj   for all j

Normalization Condition:  Σ_j p_j = 1

If the system has N states, j = 0, 1, ..., (N-1), then we need to solve for p_0, p_1, ..., p_{N-1} using the Normalization Condition and any (N-1) equations from the N Balance Equations. Note that the Normalization Condition has to be used, as the N Balance Equations will not be linearly independent!

Birth-Death Process

A Birth-Death Process is a special homogeneous, aperiodic, irreducible (discrete-time or continuous-time) Markov Chain where state changes can only happen between neighbouring states. If the current state (at time instant n) is X_n = i, then the state at the next instant can only be X_{n+1} = (i+1), i or (i-1). This implies that, for a Birth-Death Process, the only state changes that can happen between one time instant and the next are the ones which either increase the state by one, decrease it by one, or keep the system in the same state. Note that the states are represented as integer-valued (i.e. 0, 1, 2, ...) without loss of generality.

A Pure Birth Process is a special type of Birth-Death Process where decreasing the state is not allowed (no decrements, only increments): the system state can only go up by one or stay the same from one time instant to the next. On the other hand, for a Pure Death Process, the system starts from a non-zero state and can either decrease by one from one time instant to the next or stay in the same state (no increments, only decrements).

Continuous Time, Birth-Death Markov Chain

Let λ_k be the birth rate in state k and μ_k be the death rate in state k. Then, as Δt -> 0,

P{state k to state k+1 in time Δt} = λ_k Δt
P{state k to state k-1 in time Δt} = μ_k Δt
P{state k to state k in time Δt} = 1 - λ_k Δt - μ_k Δt
P{other transitions in time Δt} = 0
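These rates translate directly into a simulation. The sketch below (the constant rates λ_k = 1.0 and μ_k = 1.5 are arbitrary choices) uses the standard fact that in state k the chain waits an exponential time with rate λ_k + μ_k and then moves up with probability λ_k/(λ_k + μ_k), down otherwise:

```python
import random

lam, mu = 1.0, 1.5  # arbitrary constant birth and death rates

def simulate(t_end):
    """One trajectory of the birth-death chain, started empty at t = 0."""
    t, k, history = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        rate = lam + (mu if k > 0 else 0.0)  # no deaths in state 0
        t += random.expovariate(rate)        # exponential holding time
        if random.random() < lam / rate:
            k += 1                           # birth
        else:
            k -= 1                           # death
        history.append((t, k))
    return history

for t, k in simulate(5.0):
    print(f"t = {t:.3f}  state = {k}")
```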

System State X(t) = Number in the system at time t = Total Births - Total Deaths in the time interval (0, t)

The initial condition will not matter when we are only interested in the equilibrium state distribution. It will matter if we want the transient solution for the queue state, as then we need to consider the starting state of the queue.

p_k(t) = P{X(t) = k} = Probability that the system is in state k at time t

State Transition Diagram for a Birth-Death Process

Note that when the system is in state k, we assume that the arrival rate to the system is λ_k and the departure rate from the system is μ_k. Therefore, in a time interval Δt, the system will go from state k to state k+1 with probability λ_k Δt and from state k to state k-1 with probability μ_k Δt. This allows us to write the following state transition equations.

The state transitions from time t to t + Δt will then be governed by the equations

p_0(t + Δt) = p_0(t)(1 - λ_0 Δt) + p_1(t) μ_1 Δt
p_k(t + Δt) = p_k(t)(1 - λ_k Δt - μ_k Δt) + p_{k-1}(t) λ_{k-1} Δt + p_{k+1}(t) μ_{k+1} Δt,   k ≥ 1

For Δt -> 0, we then get

dp_0(t)/dt = -λ_0 p_0(t) + μ_1 p_1(t)
dp_k(t)/dt = -(λ_k + μ_k) p_k(t) + λ_{k-1} p_{k-1}(t) + μ_{k+1} p_{k+1}(t),   k ≥ 1        (2.1)

We can obtain the equilibrium solutions for this system by setting dp_k(t)/dt = 0 for all k and obtaining the state distribution {p_i} such that the normalization condition

Σ_i p_i = 1

is satisfied. Note that the first condition amounts to saying that, for a system at equilibrium, the rate of change of all the state probabilities must be zero (this is as per the definition of a system at equilibrium). The second condition is also logical, as it amounts to saying that the system has to be in some state or the other at all times and that this would also hold at equilibrium. This yields the following equations to be solved for the state probabilities under equilibrium conditions:

λ_0 p_0 = μ_1 p_1
(λ_k + μ_k) p_k = λ_{k-1} p_{k-1} + μ_{k+1} p_{k+1},   k ≥ 1

The solution is

p_k = p_0 ∏_{i=0}^{k-1} (λ_i / μ_{i+1}),   k ≥ 1        (2.2)

p_0 = [1 + Σ_{k=1}^{∞} ∏_{i=0}^{k-1} (λ_i / μ_{i+1})]^{-1}        (2.3)

Note that the equation given by (2.3) arises directly from the normalization condition mentioned earlier, given that the individual state probabilities are given by (2.2).
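The continued product in (2.2) and the normalization (2.3) are easy to evaluate numerically. The following is a minimal sketch for a chain truncated at N states (the helper name product_form and the constant rates are our own choices; for λ_k = λ and μ_k = μ this reduces to p_k = p_0 ρ^k with ρ = λ/μ):

```python
def product_form(lam, mu, N):
    """Equilibrium probabilities p_0..p_{N-1} from (2.2) and (2.3).

    lam[k] is the birth rate in state k, mu[k] the death rate in state k.
    """
    terms = [1.0]                          # k = 0 term of the product
    for k in range(1, N):
        terms.append(terms[-1] * lam[k - 1] / mu[k])
    p0 = 1.0 / sum(terms)                  # normalization, as in (2.3)
    return [p0 * t for t in terms]         # p_k = p_0 * product, as in (2.2)

N = 6
lam = [1.0] * N   # arbitrary constant birth rates
mu  = [2.0] * N   # arbitrary constant death rates (rho = 0.5)
p = product_form(lam, mu, N)
print(p, sum(p))  # probabilities and their sum (= 1)
```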

The state probabilities given by (2.2) are in the form of a continued product. This is an interesting form, as equilibrium solutions of this form arise naturally in many queueing analyses. Solutions of this form are referred to as Product Form Solutions.

Instead of writing differential equations, one can obtain the solution in a simpler fashion by directly considering flow balance for each state, i.e. by claiming that, at equilibrium, the total probability flow entering a state must be balanced by the total probability flow leaving the state, and that this holds for each state of the system. This leads to the following approach.

(a) Draw the state transition diagram as shown earlier.
(b) Draw closed boundaries and equate flows across each boundary. Any closed boundary may be chosen for this. If the closed boundary encloses state k (as shown in the figure), then we get the Global Balance Equation for state k:

Flow entering state k = λ_{k-1} p_{k-1} + μ_{k+1} p_{k+1} = (λ_k + μ_k) p_k = Flow leaving state k

(c) Solve the equations in (b) along with the normalization condition to get the equilibrium state distribution.

Equations of this type, which balance the flow leaving and entering each state, are referred to as Global Balance Equations. These can always be written for a system at equilibrium, though they may be harder to solve than the Detailed Balance Equations given in the next slide.
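Following steps (a)-(c), the Global Balance Equations for a finite birth-death chain can be solved directly with linear algebra. A sketch using numpy (rates are the same arbitrary constants as before; one balance equation is replaced by the normalization condition, since the balance equations alone are not linearly independent):

```python
import numpy as np

N = 6
lam = np.full(N, 1.0)   # arbitrary birth rates
mu  = np.full(N, 2.0)   # arbitrary death rates

# Build A p = b from (lam_k + mu_k) p_k = lam_{k-1} p_{k-1} + mu_{k+1} p_{k+1},
# with no birth out of the last state and no death out of state 0.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = (lam[k] if k < N - 1 else 0.0) + (mu[k] if k > 0 else 0.0)
    if k > 0:
        A[k, k - 1] = -lam[k - 1]
    if k < N - 1:
        A[k, k + 1] = -mu[k + 1]

A[-1, :] = 1.0                  # replace one equation by sum(p) = 1
b = np.zeros(N)
b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)   # matches the product-form solution from (2.2) and (2.3)
```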

It would be even simpler in this case to consider a closed boundary which is actually closed at infinity, as shown in the figure. This leads to the following Detailed Balance Equation:

Flow from state k-1 to state k = Flow from state k to state k-1, i.e.

λ_{k-1} p_{k-1} = μ_k p_k

Equations of this type are referred to as Detailed Balance Equations, since they essentially balance the flows state-by-state between neighbouring states. The solution obtained from these will be the same as that obtained earlier using the Global Balance Equations.

While Detailed Balance Equations are easy to write for the Birth-Death system, one has to be careful, as it may not always be possible to write something this simple for a general Markov Chain. (In that case, it would be advisable to verify that the equations being written do balance flows across a closed boundary.) One can choose other closed boundaries as well and obtain the same solution. For solving a given Markov Chain, it may be a good idea to write the flow balance equations in a way such that the overall set of equations is easy to solve for the equilibrium state probabilities.

In general, the equations expressing flow balance in a Birth-Death Chain of this type will be:

Global Balance Equations: obtained from a closed boundary encircling each state j.
Detailed Balance Equations: equate flows between states i and j in a pair-wise fashion, using a boundary between states i and j which is closed at +∞ and -∞.

Conditions for Existence of Solution for Birth-Death Chain

Define the two sums

S1 = Σ_{k=0}^{∞} ∏_{i=0}^{k-1} (λ_i / μ_{i+1})        S2 = Σ_{k=0}^{∞} 1 / (λ_k ∏_{i=0}^{k-1} (λ_i / μ_{i+1}))

Then
(a) All states are transient, if and only if S1 = ∞ and S2 < ∞
(b) All states are recurrent null, if and only if S1 = ∞ and S2 = ∞
(c) All states are ergodic, if and only if S1 < ∞ and S2 = ∞

An Equilibrium State Distribution will exist only for the case where all the states are ergodic and hence the chain itself is also ergodic. Note that equilibrium solutions to a Birth-Death Chain exist only when condition (c) given above is satisfied. An alternative (but equivalent) statement is that, in order to have an equilibrium solution, the chain must have some state K such that λ_k/μ_{k+1} < 1 for all k > K.
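As a worked example of condition (c), consider the constant-rate case λ_k = λ and μ_k = μ (i.e. the M/M/1 queue). With ρ = λ/μ, the two sums become geometric series:

S1 = Σ_{k≥0} ρ^k, which is finite if and only if ρ < 1, and
S2 = (1/λ) Σ_{k≥0} ρ^{-k}, which diverges whenever ρ ≤ 1.

Hence all states are ergodic, and an equilibrium distribution exists, exactly when ρ = λ/μ < 1, i.e. when the arrival rate is strictly less than the service rate.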

Problems

1. A radioactive sample emits α-particles at the rate of λ particles per second, where the time interval between the emissions of successive particles is exponentially distributed with mean λ^{-1}. A counter (initialized to zero) is used to count the number of particles emitted. Show that the probability of the counter detecting k particles in time T is given by the Poisson distribution, i.e.

P{k particles in time T} = ((λT)^k / k!) e^{-λT},   k = 0, 1, 2, ...

(Note that this shows that when the inter-arrival times have an exponential distribution, the corresponding arrival process will be Poisson.)

2. A machine shop has two machines MC1 and MC2. The time for MC_i to break down is an exponentially distributed random variable with mean λ_i^{-1}, i = 1, 2. Once a machine breaks down, we start repairing it immediately. For both MC1 and MC2, the time to repair is an exponentially distributed random variable with mean μ^{-1}.
(a) Consider a time instant when both MC1 and MC2 are working. What will be the probability that MC1 breaks down first?
(b) Consider a time instant when MC1 has broken down but MC2 is working. What will be the probability that MC2 breaks down before MC1 is repaired?
(c) Under equilibrium conditions, compute the probabilities that (i) both machines are working, (ii) neither machine is working, (iii) MC1 is working, (iv) MC2 is working, (v) MC1 is working but MC2 is under repair, and (vi) MC1 is under repair but MC2 is working.

3. Prof. Calculus makes it a point to start his office hours at exactly 8:00 am on Wednesday mornings but can only handle one student at a time in his office. Each student stays in his office for a random duration X. Students arriving when there is already one student in his office wait outside. Students arrive following a Poisson process with average arrival rate λ.
(a) If X = c (a constant), what is the probability that the second arriving student will not have to wait? In this case, what will be the mean waiting time of this student (i.e. the one who arrives second)?
(b) Repeat (a) when X is exponentially distributed with mean μ^{-1}.

4. A service facility has K servers, where the servers are identical but work independently of each other. The facility does not have any additional place for customers to wait, in case they arrive and find all servers busy. A server engaged in service provides a service time which is exponentially distributed with mean μ^{-1}. Customers arrive at this service facility following a Poisson arrival process (i.e. exponentially distributed inter-arrival times) with rate λ. Assume that at time t = 0, the manager of the service facility inspects the system and happily notes that all the servers are engaged.
(a) What is the probability that the next customer arriving at this service facility finds all servers busy and leaves without service?
(b) What is the probability that the next customer arriving at this service facility finds at least two servers free?
(c) What is the mean number of customers who will get turned away before any of the servers becomes free?
