
1 Process Synchronization
1.1 Objectives

The goal of this chapter is to give a theoretical overview of process synchronization primitives.

1.2 Chapter Outline

Introduction

Synchronization Problems

Synchronization Solutions

1.3 Introduction

In a multitasking environment, multiple processes run at the same time. Often, these processes act independently of one another, and the operating system provides safeguards in memory (i.e., a process is not allowed to access the memory space of another process).

However, the operating system also allows for cooperating processes. Cooperating processes are two or more processes that work together on the same task. Because of this, the operating system may allow communication between the processes, either through shared memory, a shared file, or a more elaborate interprocess communication facility such as mailboxes or pipes.

As processes run while sharing resources, the operating system must provide a way for a process to wait until it is notified by another process, such as when accessing a shared resource. The operating system must also allow a process to define a critical section of code that no other process can execute at the same time. These concepts are addressed by process synchronization.

This chapter discusses process synchronization as well as several classic synchronization problems.

1.3.1 Race Condition

A traditional process synchronization problem is the producer-consumer problem. Consider this real-world scenario: a bartender (the producer) places drinks on a long counter, while a beer drinker (the consumer) moves along the bar to get the latest drink.


In programming terms, we define the counter as an array that contains beers. We also need a variable to indicate the position in the array where the bartender places the beers. The beerstack[] array and the top variable have to be shared by the bartender and beerdrinker processes, which run concurrently.

As the CPU can only execute one instruction at a time, context switches occur while each process runs. Consider the scenario where this particular interleaving occurs:

Beerdrinker: top = top - 1
Bartender:   beerstack[top] = new Beer   // beer placed on top of an existing beer!
Bartender:   top = top + 1
Beerdrinker: drink beerstack[top]        // beerdrinker drinks from an empty mug!

If the context switch does not occur at this point, the two processes run correctly. However, since the processes have no control over when or where a context switch occurs, there is a chance that the system ends up in an inconsistent state.

What happens here is called a race condition: the outcome of the modification of shared variables depends on which process modifies the variables first.
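As an illustration (not part of the original text; the class and variable names are our own), the following Java sketch runs a bartender thread and a beerdrinker thread against shared top and beerstack variables without any synchronization, so the interleaving described above can occur:

public class BeerRace {
    static final int ITER = 100000;
    static String[] beerstack = new String[ITER + 1];
    static int top = 0;          // shared position variable
    static int emptyMugs = 0;    // times the drinker found an "empty mug"

    public static void main(String[] args) throws InterruptedException {
        Thread bartender = new Thread(() -> {
            for (int k = 0; k < ITER; k++) {
                beerstack[top] = "Beer";    // may land on top of an existing beer
                top = top + 1;              // not atomic with the line above
            }
        });
        Thread beerdrinker = new Thread(() -> {
            for (int k = 0; k < ITER; k++) {
                if (top > 0) {
                    top = top - 1;                 // a context switch right here...
                    if (beerstack[top] == null) {  // ...can leave an empty mug
                        emptyMugs = emptyMugs + 1;
                    }
                    beerstack[top] = null;         // drink (remove) the beer
                }
            }
        });
        bartender.start(); beerdrinker.start();
        bartender.join();  beerdrinker.join();
        // On some runs this prints 0; on others the race condition shows up.
        System.out.println("empty mugs encountered: " + emptyMugs);
    }
}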

1.3.2 Resource Competition

Another problem that occurs in multitasking systems is resource competition. Since the system's resources are limited, a process may need to wait for another process to finish before it can use a resource. Consider a printer being used by two applications: both cannot print at the same time.

1.3.3 Need for Synchronization

Process synchronization addresses the race condition. Its purpose is to allow processes to impose an order in which they run with respect to other processes. For example, we could require that no context switch occur between the two lines of the bartender's or the beerdrinker's code. We could also extend process synchronization so that a process waits for another process before executing.

Synchronization also addresses resource competition. Synchronization primitives allow processes to wait for resources and to notify other processes when the resources become available.

1.4 Other Synchronization Problems

In the previous section, we have discussed the producer-consumer problem. This section
discusses other synchronization problems.

1.4.1 Critical Section Problem

A critical section is a region of a process' code where data shared with other cooperating processes is manipulated. As discussed in the previous section, this modification should not occur in parallel with modifications done by other processes.
A process having a critical section has its code divided into three parts:

Entry section: code that implements the solution to the critical section problem.

Critical section: code that only one process can execute at a time.

Remainder section: the rest of the code, which has no effect on the critical section. The remainder section can of course also occur before the critical section.

Any solution to the critical section problem should have the following characteristics:

Mutual exclusion: only a single process is allowed to run its critical section at any time. All other processes wishing to enter their critical sections must wait.

Progress: if more than one process is waiting to enter its critical section, the selection of which process runs its critical section next must be made by the waiting processes themselves, and the selection must be made at the soonest possible time.

Bounded wait: a process cannot be made to wait forever to enter its critical section.

The producer-consumer problem can be reduced to the critical section problem. We can solve the race condition by placing the code that modifies the shared variables top and beerstack inside the critical section, as sketched below.
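The following hedged Java sketch (not from the original text) shows how the bartender's loop is divided into entry, critical, and remainder sections once the producer-consumer problem is reduced to the critical section problem. Here entrySection() and exitSection() are empty placeholders for one of the solutions introduced in Section 1.5 (busy wait, wait/notify, semaphore, or monitor).

public class BartenderProcess {
    static String[] beerstack = new String[100];
    static int top = 0;

    static void entrySection()     { /* solution code: ask permission to enter  */ }
    static void exitSection()      { /* solution code: let a waiting process in */ }
    static void remainderSection() { /* code that never touches shared data     */ }

    public static void main(String[] args) {
        for (int k = 0; k < 50; k++) {
            entrySection();
            // ---- critical section: only one process at a time may run this ----
            beerstack[top] = "Beer";
            top = top + 1;
            // ---- end of critical section ----
            exitSection();
            remainderSection();
        }
    }
}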

1.4.2 Readers and Writers

Consider a file being shared by multiple processes. Multiple processes are allowed to read the file at the same time, as the execution of one does not influence the execution of the others. However, if a process would like to write to the file, then no other process is allowed to read from or write to the file. If a process were allowed to read the file while a write is being performed, the reading process might get a copy of the file in an inconsistent state.
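As a hedged illustration (not from the original text), the required access policy can be expressed in Java with the standard ReentrantReadWriteLock: any number of readers may hold the read lock at once, but a writer holding the write lock excludes both readers and other writers. The SharedFile class below is illustrative only.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedFile {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String contents = "";

    public String read() {
        rw.readLock().lock();          // many readers may be inside at the same time
        try {
            return contents;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(String newContents) {
        rw.writeLock().lock();         // excludes all readers and all other writers
        try {
            contents = newContents;
        } finally {
            rw.writeLock().unlock();
        }
    }
}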

1.4.3 Dining Philosophers

The dining philosophers problem is a classic synchronization problem. Consider a group of philosophers seated around a dining table. The table has exactly enough plates for all the philosophers, but not enough chopsticks: it is set up so that each philosopher's right chopstick is also the left chopstick of the philosopher seated to the right.

Each philosopher alternates between two states, eating and thinking. A philosopher who wants to eat must pick up both the left and right chopsticks, which is possible only if they are not in use by that philosopher's neighbors.

1.5 Synchronization Solutions

This section discusses strategies that would allow process synchronization. We will discuss
these algorithms as well as some of the solutions to our synchronization problems.

1.5.1 Busy Wait

A busy wait is a while-loop that does nothing, simply stopping program flow until the loop condition becomes false, as set by another process.
To illustrate busy waits, consider the following (incomplete) solution to a two-process critical section problem. We have two processes, i and j. This is the code for process i:
do {
    while (turn != i) { }    // loop while it is not my turn yet
    criticalsection();
    turn = j;                // after i is done, it is j's turn
    remaindersection();
} while (true);
The code for process j is the mirror image of this. Mutual exclusion is established because process i or j can only enter its critical section after being allowed to by the other process exiting its own. However, this code fails the progress requirement: if process i finishes quickly and waits at the busy loop again, it has to wait until j finishes its remainder section and passes through its own critical section before i gets another turn.
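For reference, here is a hedged, runnable Java version (not part of the original pseudocode) of the same strict-alternation scheme. The shared variable turn is declared volatile, a Java-specific detail needed so that each thread sees the other's updates; without it the busy loop could spin forever.

public class StrictAlternation {
    static volatile int turn = 0;   // whose turn it is: 0 or 1

    static void process(int i, int j) {
        for (int round = 0; round < 5; round++) {
            while (turn != i) { }                   // busy wait: loop until it is my turn
            System.out.println("process " + i + " in its critical section");
            turn = j;                               // after i is done, it is j's turn
            // remainder section would go here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p0 = new Thread(() -> process(0, 1));
        Thread p1 = new Thread(() -> process(1, 0));
        p0.start(); p1.start();
        p0.join();  p1.join();
    }
}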

1.5.2 Wait and Notify

The problem with the busy wait is that the process wastes CPU cycles while waiting. An alternative is a pair of operating system primitives called wait() and notify().
When a process invokes wait(), it simply stops running: it exits the ready state and enters the waiting state. It remains there until another process invokes notify(i), where i is the name of the process to be woken up.
With that, our partial solution to the critical section problem now becomes this:
do {
    if (turn != i) {
        wait();              // stop running until i is notified by j
    }
    criticalsection();
    turn = j;                // after i is done, it is j's turn
    notify(j);               // wake up j if it is waiting
    remaindersection();
} while (true);
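As a hedged Java adaptation (not part of the original text): Java's built-in Object.wait() and Object.notify() are monitor-based rather than process-directed, so in the sketch below both processes wait on and notify a single shared lock object, and the turn condition is re-checked in a loop inside a synchronized block, as Java requires.

public class WaitNotifyAlternation {
    static final Object lock = new Object();
    static int turn = 0;

    static void process(int i, int j) {
        for (int round = 0; round < 5; round++) {
            synchronized (lock) {
                while (turn != i) {
                    try {
                        lock.wait();              // stop running until notified
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("process " + i + " in its critical section");
                turn = j;                         // after i is done, it is j's turn
                lock.notify();                    // wake up j if it is waiting
            }
            // remainder section would go here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p0 = new Thread(() -> process(0, 1));
        Thread p1 = new Thread(() -> process(1, 0));
        p0.start(); p1.start();
        p0.join();  p1.join();
    }
}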


1.5.3 Semaphores

Once again, the problem with the two-process critical section solution is that a rapidly executing process i has to wait for j to finish its own critical section. A semaphore is an object that modifies the wait() and notify() primitives to take into account an integer counter and a linked list that stores the waiting processes.
class Semaphore {
    int ctr = 1;
    ProcessList L;

    void wait() {
        ctr = ctr - 1;
        if (ctr < 0) {
            add this process to L;
            block;
        }
    }

    void notify() {
        ctr = ctr + 1;
        if (ctr <= 0) {
            get a process from L and resume its running;
        }
    }
}
Our two processes would now use the semaphore:

Semaphore s;    // shared by the two processes
do {
    s.wait();
    criticalsection();
    s.notify();
    remaindersection();
} while (true);
The initial value of ctr indicates how many processes are allowed to proceed. When the first wait is invoked on the semaphore, the value drops to 0. Additional processes wanting to enter the critical section drive the counter negative when they invoke wait, so they are added to the list and blocked.
When a process invokes notify, the counter is incremented. If the counter is still zero or negative, there are blocked processes, so one of them is resumed so that it can enter the critical section.
The advantage of this is that if there are no processes waiting or running in the critical section, then ctr has the value 1 and wait() lets any process pass through without having to wait for another process.
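The same pattern can be written with Java's standard java.util.concurrent.Semaphore; this is a hedged sketch rather than the chapter's own class. acquire() plays the role of wait() and release() the role of notify(), and the semaphore is created with an initial count of 1 so that only one process may enter at a time.

import java.util.concurrent.Semaphore;

public class SemaphoreCriticalSection {
    static final Semaphore s = new Semaphore(1);   // shared by all processes

    static void process(int id) {
        try {
            for (int round = 0; round < 3; round++) {
                s.acquire();                       // wait()
                System.out.println("process " + id + " in its critical section");
                s.release();                       // notify()
                // remainder section would go here
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> process(1));
        Thread b = new Thread(() -> process(2));
        a.start(); b.start();
        a.join();  b.join();
    }
}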
Using semaphores, we now have a solution to the dining philosophers problem by making each chopstick a semaphore:
Semaphore chopstick[N];
do {
    chopstick[i].wait();            // wait on the left chopstick
    chopstick[(i+1) % N].wait();    // wait on the right chopstick
    eat;
    chopstick[(i+1) % N].notify();  // release the right chopstick
    chopstick[i].notify();          // release the left chopstick
    think;
} while (true);
This solution is still not ideal, as it is prone to deadlock (e.g., what happens if every philosopher grabs the left chopstick at the same time?). Deadlocks are discussed in the succeeding chapter.
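For completeness, here is a hedged, runnable Java version (not from the original text) of the chopstick semaphore solution above, using java.util.concurrent.Semaphore. As just noted, this naive version can deadlock if every philosopher grabs the left chopstick at the same time, so it is a demonstration rather than a fix.

import java.util.concurrent.Semaphore;

public class DiningPhilosophers {
    static final int N = 5;
    static final Semaphore[] chopstick = new Semaphore[N];

    static void philosopher(int i) {
        try {
            for (int meals = 0; meals < 3; meals++) {
                chopstick[i].acquire();             // wait on the left chopstick
                chopstick[(i + 1) % N].acquire();   // wait on the right chopstick
                System.out.println("philosopher " + i + " is eating");
                chopstick[(i + 1) % N].release();   // release the right chopstick
                chopstick[i].release();             // release the left chopstick
                System.out.println("philosopher " + i + " is thinking");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < N; i++) {
            chopstick[i] = new Semaphore(1);        // each chopstick starts available
        }
        for (int i = 0; i < N; i++) {
            final int id = i;
            new Thread(() -> philosopher(id)).start();
        }
    }
}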

1.5.4 Monitors

A monitor is an object that allows only a single process to run any of its methods at a time. It abstracts process synchronization away from the programmer: code that must be executed without interference from other processes is simply placed inside the monitor's methods, and the monitor guarantees that only one process executes it at a time.
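In Java, declaring an object's methods as synchronized gives it monitor behavior: only one thread at a time may execute any of its synchronized methods. The sketch below (class and method names are our own, not from the original text) rewrites the bartender/beerdrinker shared data as a monitor, so the updates can no longer be interleaved the way they were in the race condition example.

public class BeerCounterMonitor {
    private final String[] beerstack = new String[100];
    private int top = 0;

    public synchronized void placeBeer() {          // bartender's operation
        if (top < beerstack.length) {
            beerstack[top] = "Beer";
            top = top + 1;
        }
    }

    public synchronized String drinkBeer() {        // beerdrinker's operation
        if (top > 0) {
            top = top - 1;
            String beer = beerstack[top];
            beerstack[top] = null;
            return beer;
        }
        return null;                                // nothing to drink right now
    }
}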
