
Resource Management:

- Processes and Threads:


- Deadlock:

Definitions
Process
Resource
Resource Allocator
Thread
Interprocess Communication
Scheduling

- Definitions:
- Process:
A process is a program in execution.
Related terms: process, task, job, program, application.

- Resource:
Resources are what a program needs for execution. Resources can be hardware (HW) or software (SW).

Hardware Resources:
Processor (CPU), Memory, I/O Device, Disk device

Software Resources:
Data Files, Shared Subprograms
*.ini files (configuration/initialization files), *.dll files (Dynamic Link Libraries)

- Resource allocator: (Coordinator)
Manages and allocates suitable resources (CPU, memory, files, etc.) to one or
more running program(s).

A computer system consists of a collection of processes during execution.

Resource management is about multitasking systems.



Example: A system with one CPU and four programs (processes)

[Figure]
a. Multiprogramming of four programs
b. Conceptual model of 4 independent, sequential processes
c. Only one program active at any instant


In a multiprogramming system, the CPU switches automatically among processes,
running each for a short period of time, measured in milliseconds (a time slice).
The time slice can be fixed or variable, depending on the OS policy.
A single CPU actually runs one and only one process at any instant.


Process Creation:

Principal events that cause process creation
System initialization (Loading OS via BIOS)
User request to create a new process (Run a program with Compiler)
Execution of a program *.exe (most common)
Initiation of a batch job (Child Job, Sub Program)

Process Hierarchies:

A parent creates a child process; a child process can create its own child processes.
UNIX forms a hierarchy of processes called a "process group".
Windows has no concept of a process hierarchy (all processes are created equal).


Process states:
Processes may be in one of the following states:


New: A process being created, but not yet included in the pool of
executable processes.
Ready: Process is prepared to execute but is waiting its turn.
Running: Process is currently being executed by the CPU.
Waiting (Blocked): Process is waiting for an event to occur; it cannot execute
until then. (Ex: wait for I/O)
Terminated: Process has completed execution and released the CPU.
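The five states above can be written down as a small transition table; the sketch below (plain Python, names ours) encodes which direct moves the five-state model allows:

```python
# Five-state process model: the allowed direct transitions.
TRANSITIONS = {
    ("New", "Ready"),           # admitted to the pool of executable processes
    ("Ready", "Running"),       # dispatched by the scheduler
    ("Running", "Ready"),       # preempted, e.g. at the end of a time slice
    ("Running", "Waiting"),     # blocked, e.g. waiting for I/O
    ("Waiting", "Ready"),       # the awaited event occurred
    ("Running", "Terminated"),  # finished and released the CPU
}

def can_move(src, dst):
    """Return True if a process may go directly from state src to state dst."""
    return (src, dst) in TRANSITIONS
```

Note that a Waiting process cannot go straight back to Running: it must re-enter the Ready queue and wait its turn.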








Process Termination Conditions:

Normal exit (voluntary, By User)
Error exit (voluntary)
Fatal error (involuntary, By OS)
Killed by another process (involuntary)
Interrupts:
An interrupt pauses the CPU's current task so it can handle another, more urgent task.
Interrupts come from hardware or software sources.
Kinds of interrupt: maskable and non-maskable, corresponding to pins of the CPU.
Maskable: the CPU can ignore the interrupt.
Non-maskable: the CPU cannot ignore the interrupt and must service it.
Interrupt Handling:

The scheduler controls interrupts using a queue, or better, a queue ordered by
priority values.
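As a sketch of such an ordered queue, Python's heapq can hold pending interrupts keyed by a priority value (the numbers and device names below are purely illustrative; a lower value means more urgent):

```python
import heapq

pending = []  # min-heap of (priority, source); lower value = more urgent

def raise_interrupt(priority, source):
    """Record a pending interrupt from a hardware or software source."""
    heapq.heappush(pending, (priority, source))

def next_interrupt():
    """Hand the most urgent pending interrupt to its handler, or None."""
    return heapq.heappop(pending)[1] if pending else None

raise_interrupt(5, "keyboard")
raise_interrupt(1, "power failure")   # non-maskable: most urgent
raise_interrupt(3, "disk")
```

Regardless of arrival order, the most urgent interrupt is always served first.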
Process State Transition Diagram:
A process goes through states of process life cycle, which is illustrated in the
following state transition diagram.


















[Figure: process state transition diagram]

Process State Transition Diagram for Unix:

[Figure: UNIX process state transition diagram]
Implementation of Processes:

[Figure]

Skeleton of what the lowest level of the OS does when an interrupt occurs:

[Figure]
Threads:

Thread is a sequence of instructions, which may execute in parallel with other
threads.

In computer science, a thread of execution is a fork of a computer program into
two or more concurrently running tasks. The implementation of threads and
processes differs from one operating system to another, but in most cases, a thread
is contained inside a process. Multiple threads can exist within the same process
and share resources such as memory, while different processes do not share this
data.
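The point that threads of one process share memory can be sketched in Python: three threads update one shared counter, with a lock keeping the updates consistent (the thread and iteration counts are illustrative):

```python
import threading

counter = 0                 # shared by all threads of this process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # serialize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Separate processes would each see their own private copy of `counter`; threads see one and the same variable.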


[Figure]
(a) Three processes each with one thread
(b) One process with three threads




Scheduler:

In a multiprogramming system, the CPU switches automatically among
processes running each for a time slice around tens to hundreds of milliseconds.
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process. (Switch Time)
The main objective of time-sharing is to switch the CPU among different
processes so frequently that users can interact with each program while it is
running, as if it were a single-task system.
A system with one CPU can have only one running process at any time. As users
submit jobs to the system, the jobs are put on a queue called the job pool.

Scheduler Goals:
1. Maximize CPU Utilization
2. Maximize Throughput
3. Minimize Turnaround time
4. Minimize Waiting time
5. Minimize Response time


1. Processor utilization: It is defined as:
(Processor busy time) / (processor busy time + processor idle time)
We would like to keep the processor as busy as possible.

2. Throughput: Number of processes completed per time unit
It is the measure of work done in a unit time interval.

3. Turnaround time: The interval from the time of submission (admission) to the
time of completion of a process. (From the New state until the Terminated state)
It is the sum of the time spent waiting in the ready queue, the execution time,
and the time spent waiting for I/O.

4. Waiting time: The average time spent by a process waiting in the ready
queue.

5. Response time: The time from the submission of a request until the first
response is produced.

Elevator Problem:

1. FCFS Scheduling (Queue): First Come, First Served
2. SJF Scheduling: Shortest Job First
3. RR Scheduling (Round Robin): Sequential Service


Let processes P1, P2, and P3 arrive to the ready queue in this order and
let the run times (CPU burst time) of these processes as follows:

Process CPU burst time
P1 14 ms
P2 3 ms
P3 8 ms
FCFS:

|   P1   | P2 |  P3  |
0        14   17     25


The waiting time for P1 is: W (P1) = 0 ms
The waiting time for P2 is: W (P2) = 14 ms
The waiting time for P3 is: W (P3) = 17 ms
The average waiting time : W = (0+14+17)/3 = 10.3 ms
SJF:

| P2 |  P3  |   P1   |
0    3      11       25

The waiting time for P1 is: W (P1) = 11 ms
The waiting time for P2 is: W (P2) = 0 ms
The waiting time for P3 is: W (P3) = 3 ms
The average waiting time : W = (11+0+3)/3 = 4.7 ms
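The two averages can be checked with a few lines of Python (burst times taken from the table above; SJF is modeled simply by sorting the bursts before running them):

```python
def waiting_times(bursts):
    """Waiting time of each process when run back-to-back in the given order."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += b
    return waits

fcfs = waiting_times([14, 3, 8])          # P1, P2, P3 in arrival order
sjf = waiting_times(sorted([14, 3, 8]))   # shortest job first: P2, P3, P1
```

This reproduces the hand calculation: FCFS waits are 0, 14, 17 ms (average about 10.3 ms) and SJF waits are 0, 3, 11 ms (average about 4.7 ms).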

Scheduling with a priority queue:

Priority values are between 0~31; the interval 16~31 is reserved for the OS itself
and 0~15 is for user tasks.
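A minimal sketch of such a priority ready queue, using Python's heapq (the process names are made up; the 0~15 / 16~31 split follows the text, and the sign is flipped because heapq is a min-heap):

```python
import heapq

ready = []

def admit(priority, pid):
    """Put a process on the ready queue with a 0..31 priority value."""
    assert 0 <= priority <= 31, "priorities are 0..31"
    kind = "OS" if priority >= 16 else "user"   # 16..31 reserved for the OS
    heapq.heappush(ready, (-priority, pid, kind))

def dispatch():
    """Pick the highest-priority ready process."""
    _, pid, kind = heapq.heappop(ready)
    return pid, kind

admit(4, "editor")
admit(24, "pager daemon")
admit(9, "compiler")
```

An OS task with priority 24 is dispatched before either user task, whatever order they were admitted in.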


Homework: Do the FCFS, SJF and RR (quantum 10 ms) scheduling on the
given table and calculate the average waiting time, average response
time, CPU utilization and the throughput in 120 ms.

Note: Do the FCFS in two cases
a) With priority
b) Without priority

Process Admit time CPU burst time Priority
A (7) 0 22 (5) 1
B (5) 4 12 (2) 2
C (6) 7 23 (6) 2
D (3) 12 11 (1) 3
E (2) 15 15 (3) 4
F (1) 19 15 (4) 5
G (4) 25 24 (7) 3


Round Robin Scheduling:

A small unit of time, called a time quantum, is assigned to each process. Usually a
quantum is 10 to 100 ms.

The scheduler allocates the CPU to each process in the ready queue for a time
interval of up to 1 time quantum, in FIFO (circular) order. If the process is still
running at the end of the quantum, it will be preempted from the CPU.

A context switch will be executed, and the process will be put at the tail of the
ready queue. Then the scheduler will select the next process in the ready queue.
However, if the process has blocked or finished before the quantum has elapsed,
the scheduler will then proceed with the next process in the ready queue.
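The mechanism described above can be simulated in a few lines of Python (the burst times reuse the earlier P1..P3 example, with an illustrative quantum of 10 ms):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes finish under round robin."""
    queue = deque(bursts.items())      # the circular FIFO ready queue
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:        # quantum expires: preempt the process
            queue.append((name, remaining - quantum))  # tail of the queue
        else:                          # finishes within its quantum
            finished.append(name)
    return finished

order = round_robin({"P1": 14, "P2": 3, "P3": 8}, quantum=10)
```

With a 10 ms quantum, P1 is preempted once and therefore finishes last, after P2 and P3.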


Types of Priority Scheduling:
Non-preemptive: A process runs up to the end of its CPU burst time
Preemptive: A running process can be preempted by a process with a higher
priority which joined the ready queue later.
Inter Process Communication: (IPC)

IPC provides a mechanism to allow processes to communicate and synchronize
their actions. It is provided by a message-passing system.
An IPC facility basically provides two operations:
Send (message)
Receive (message)
The messages may be of fixed or variable length.
The communication established can be either direct communication or indirect
communication.
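The two operations can be sketched with Python's thread-safe queue standing in for the message-passing facility (the messages are illustrative):

```python
import queue
import threading

mailbox = queue.Queue()        # indirect communication through a shared mailbox

def send(message):
    mailbox.put(message)

def receive():
    return mailbox.get()       # blocks until a message arrives

def producer():
    for word in ["ping", "pong", "done"]:
        send(word)

t = threading.Thread(target=producer)
t.start()
received = [receive() for _ in range(3)]
t.join()
```

The receiver blocks on `receive()` until the sender has put a message in the mailbox, which also synchronizes the two processes.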

- Deadlock:
Deadlock is a potential problem in any computer system. It occurs when a
group of processes each have been granted exclusive access to some resources, and
each one wants yet another resource that belongs to another process in the group.
All of them are blocked and none will ever run again.

Shareable Resource: More than one process can use it at the same time (memory,
a data file opened for reading, etc.).

Non-shareable Resource: Only one process can use it at a time (CPU, a data file
opened for writing, etc.).
Resource Allocation Graph:

[Figure]
(a) Resource R assigned to process A
(b) Process B is requesting/waiting for resource S
(c) Processes C and D are in deadlock over resources T and U (where T and U are
non-shareable resources)


[Figure]
(a) Note the resource ownership and requests
(b) A cycle can be found within the graph, denoting deadlock
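Finding such a cycle is an ordinary graph search. A sketch, with edges from process to resource meaning "requests" and from resource to process meaning "assigned to" (node names follow the captions above):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True                # back edge: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# C holds T and requests U; D holds U and requests T: the deadlock case.
deadlocked = has_cycle({"C": ["U"], "U": ["D"], "D": ["T"], "T": ["C"]})
# B merely requests S, which is assigned to nobody: no cycle, no deadlock.
waiting_only = has_cycle({"B": ["S"]})
```

A cycle in the graph is exactly the circular-wait situation; a plain request with no cycle is just waiting.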
Race Condition:
The situation where two or more processes access and manipulate a shared resource
concurrently. The final value of the shared resource depends on which process
finishes last.
Ex: Two programs want to write to a file at the same time.
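The lost-update hazard behind a race can be shown deterministically by interleaving the read-modify-write steps of two would-be concurrent processes by hand:

```python
# Shared variable, initially 0. Two processes each want to add 1 to it.
shared = 0

# Process A reads the current value ...
a_copy = shared
# ... then process B reads the same (still stale) value before A writes back.
b_copy = shared
# A writes its incremented copy back.
shared = a_copy + 1
# B overwrites it with its own incremented stale copy: A's update is lost.
shared = b_copy + 1
```

Two increments were attempted, but the final value records only one of them; which update survives depends purely on who finishes last.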

Critical Section (Region):

The critical-section problem occurs in systems where multiple processes (P1, P2,
..., Pn) all compete for the use of a shared resource.
The critical section of a program is a fragment which performs the access to a
shared resource (such as a common data file).
The problem is to ensure that when one process is executing in its critical
section, no other process is allowed to execute in its critical section.
Solution to Critical-Section Problem:
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
P1, P2, P3, ..., Pn

Four conditions to provide mutual exclusion
1. No two processes simultaneously are in their critical region
2. No assumptions made about speeds or numbers of CPUs
3. No process running outside its critical region may block another process
4. No process must wait forever to enter its critical region

The variable turn shows which process may enter its CS; it has value either 0 or 1,
so only one of the two processes can be in its CS at a time. Process 0 runs the
loop below; process 1 is symmetric, with the roles of 0 and 1 swapped (strict
alternation):

while (true)
{
   while (turn != 0)
      ;                      /* busy wait until it is our turn */
   critical_region();
   turn = 1;                 /* hand the turn to process 1 */
   noncritical_region();
}

2. Busy waiting:

Busy waiting, or spinning, is a technique in which a process repeatedly checks
whether a condition is true, such as waiting for keyboard input or for a lock to
become available. It can also be used to delay execution for some amount of time.
In general, the CPU time spent waiting could have been reassigned to another task.

3. Semaphores:
A semaphore is a protected variable or abstract data type which is the classic
method for restricting access to shared resources. A semaphore is a counter for a
set of available resources, rather than a locked/unlocked flag for a single
resource. Semaphores are the classic solution to preventing race conditions.
Semaphore operations must be atomic: once the system has granted the turn to a
process, the process must not be interrupted in the middle of those instructions.
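A sketch using Python's counting semaphore to guard a pool of two identical resources (the pool size and thread count are illustrative):

```python
import threading

slots = threading.Semaphore(2)   # a pool of two interchangeable resources
guard = threading.Lock()         # protects the bookkeeping counters
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    with slots:                  # down(): wait until a resource is free
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        with guard:
            in_use -= 1
    # leaving the with-block is up(): the resource is released

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with five threads competing, the semaphore's counter guarantees that at most two hold a resource at any instant.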
Dining philosophers problem:

It is a classic multi-process synchronization problem, posed in 1965.
It is a theoretical illustration of deadlock.

The dining philosophers problem is summarized as five philosophers sitting at a
table doing one of two things: eating or thinking. While eating, they are not
thinking, and while thinking, they are not eating. The five philosophers sit at a
circular table with five plates of spaghetti. A fork is placed between each pair
of philosophers, so each philosopher has one fork to his left and one fork to his
right. As spaghetti is difficult to serve, it is assumed that a philosopher must
eat with two forks, and can only use the forks on his immediate left and right.
Each philosopher takes a different fork as a first priority and then looks for
another.

The philosophers never speak to each other, which creates a dangerous possibility
of deadlock when every philosopher holds a left fork and waits for a right fork (or
vice versa).

This system reaches deadlock when there is a cycle of requests. In this case
philosopher P1 waits for the fork used by philosopher P2 who is waiting for the
fork of philosopher P3 and so forth, making a circular chain.

For example there might be a rule that the philosophers put down a fork after
waiting five minutes for the other fork to become available and wait a further five
minutes before making their next attempt.

When the resource a program is interested in is already locked by another one, the
program waits until it is unlocked. When several programs are involved in locking
resources, deadlock might happen.




Semaphore solution:
A relatively simple solution is achieved by introducing a waiter at the table.
A philosopher must ask the waiter's permission before taking up any forks. Because
the waiter is aware of which forks are in use, he is able to arbitrate and prevent
deadlock. When four of the forks are in use, the next philosopher to request one
has to wait for the waiter's permission, which is not given until a fork has been
released. The logic is kept simple by specifying that philosophers always seek to
pick up their left-hand fork before their right-hand fork (or vice versa).
To illustrate how this works, consider the philosophers are labelled clockwise from
A to E. If A and C are eating, four forks are in use. B sits between A and C so has
no fork available, whereas D and E have one unused fork between them. Suppose
D wants to eat. Were he to take up the fifth fork, deadlock becomes likely. If
instead he asks the waiter and is told to wait, we can be sure that next time two
forks are released there will certainly be at least one philosopher who could
successfully request a pair of forks. Therefore deadlock cannot happen.
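The waiter of this solution behaves exactly like a counting semaphore initialized to four: at most four philosophers may reach for forks at once. A compact Python sketch (the number of rounds each philosopher eats is illustrative):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
waiter = threading.Semaphore(N - 1)   # at most four philosophers at the table
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = forks[i], forks[(i + 1) % N]
    for _ in range(rounds):
        with waiter:                  # ask the waiter for permission
            with left:                # left fork first, then right
                with right:
                    meals[i] += 1     # eat one helping of spaghetti
        # think (nothing to do in this sketch)

diners = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in diners:
    t.start()
for t in diners:
    t.join()
```

Because at most four philosophers compete for five forks, some philosopher can always obtain both forks, so the `join()` calls always return: no deadlock, and everyone eventually eats.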

Sleeping barber problem:
The sleeping barber problem is a classic inter-process communication and
synchronization problem between multiple operating-system processes.

The problem is based on a barber shop with one barber, one barber chair, and a
number of chairs for waiting customers. When there are no customers, the barber
sits in his chair and sleeps. As soon as a customer arrives, he either awakens the
barber or, if the barber is cutting someone else's hair, sits down in one of the
waiting chairs. If all of the chairs are occupied, the newly arrived customer leaves.
Not implementing a proper solution can lead to the usual inter-process
communication problems of starvation and deadlock. For example, the barber
could end up waiting on a customer and a customer waiting on the barber,
resulting in deadlock.

Alternatively, customers may not decide to approach the barber in an orderly
manner, leading to process starvation as some customers never get the chance for a
haircut even though they have been waiting.

Solution:
The most common solution involves using three semaphores: one for any waiting
customers, one for the barber (to see if he is idle), and the third ensures mutual
exclusion. When a customer arrives, he attempts to acquire the mutex, and waits
until he has succeeded. The customer then checks to see if there is an empty
chair for him (either one in the waiting room or the barber chair), and if none of
these are empty, leaves. Otherwise the customer takes a seat thus reducing the
number available (a critical section). The customer then signals the barber to
awaken through his semaphore, and the mutex is released to allow other customers
(or the barber) the ability to acquire it. If the barber is not free, the customer then
waits. The barber sits in a perpetual waiting loop, being awakened by any waiting
customers. Once he is awoken, he signals the waiting customers through their
semaphore, allowing them to get their haircut one at a time.
This problem involves only one barber, and it is therefore also called the single
sleeping barber problem. A multiple sleeping barbers problem is similar in the
nature of implementation and pitfalls, but has the additional complexity of
coordinating several barbers among the waiting customers.
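A Python sketch of this three-semaphore solution (the chair and customer counts are illustrative; the barber runs as a daemon thread so the program can end once all customers are handled):

```python
import threading

CHAIRS = 3                              # waiting-room chairs
customers = threading.Semaphore(0)      # counts waiting customers
barber_ready = threading.Semaphore(0)   # barber signals "next, please"
mutex = threading.Lock()                # mutual exclusion on the seat count
waiting = 0
haircuts = 0
turned_away = 0

def barber_loop():
    global waiting, haircuts
    while True:
        customers.acquire()             # sleep until a customer arrives
        with mutex:
            waiting -= 1
            haircuts += 1               # cut this customer's hair
        barber_ready.release()          # let the served customer leave

def customer():
    global waiting, turned_away
    with mutex:
        if waiting == CHAIRS:           # all chairs occupied: leave
            turned_away += 1
            return
        waiting += 1
    customers.release()                 # wake the barber if he is asleep
    barber_ready.acquire()              # wait for our haircut to finish

threading.Thread(target=barber_loop, daemon=True).start()
arrivals = [threading.Thread(target=customer) for _ in range(6)]
for t in arrivals:
    t.start()
for t in arrivals:
    t.join()
```

Every arriving customer is either given a haircut or turned away at a full shop; the mutex prevents two customers from racing for the last chair, and the two counting semaphores prevent the barber and a customer from waiting on each other forever.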

Deadlock Solution:

Deadlock Conditions

1. Mutual exclusion condition. Each non-sharable resource is either currently
assigned to exactly one process or is available.
2. Hold and wait condition. Process currently holding resources granted earlier
can request new resources.
3. No preemption condition. Resources previously granted cannot be forcibly
taken away from a process. They must be explicitly released by the process
holding them.
4. Circular wait condition. There must be a circular chain of two or more
processes, each of which is waiting for a resource held by the next member of the
chain.

All four of these conditions must be present for a deadlock to occur. If any one
of them is absent, deadlock is not possible.


Strategies for Handling Deadlock

Ignore the problem (the Ostrich algorithm)
Recovery (allow deadlock to occur, detect it, and try to recover)
Prevention (statically make deadlocks structurally impossible)
Avoidance (avoid deadlocks by allocating resources carefully)



Types of Recovery
Recovery through Preemption
Recovery through Rollback
Recovery through Killing Processes

Deadlock Prevention Methods
Attacking the Mutual Exclusion Condition (Spool everything)
Attacking the Hold and Wait Condition (Request all resources initially)
Attacking the No Preemption Condition (Take resource away)
Attacking the Circular Wait Condition (Order resources numerically)


Spooling: Using a buffer (temporary storage) for slow resources like the printer.


Deadlock Avoidance
Using matrices to check allocations so that deadlock is avoided.

Current Allocation Matrix C      Request Matrix R

     R1 R2 R3 R4                      R1 R2 R3 R4
  P1  0  0  1  1                   P1  1  0  0  0
  P2  1  0  0  0                   P2  0  1  1  0
  P3  0  1  1  1                   P3  1  0  0  0
  P4  1  0  0  1                   P4  0  0  1  0
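The standard detection algorithm over these matrices repeatedly looks for a process whose whole request row can be satisfied from the currently available vector, lets it finish, and reclaims its allocation; whoever is never satisfiable is deadlocked. A sketch using the matrices above, assuming for illustration that no resource instances are currently free:

```python
def detect_deadlock(allocation, request, available):
    """Return the set of deadlocked process indices (0-based)."""
    n = len(allocation)
    avail = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(r <= a for r, a in zip(request[i], avail)):
                # i's request can be granted: let it run to completion
                # and reclaim everything it holds.
                avail = [a + c for a, c in zip(avail, allocation[i])]
                finished[i] = True
                progress = True
    return {i for i in range(n) if not finished[i]}

C = [[0, 0, 1, 1], [1, 0, 0, 0], [0, 1, 1, 1], [1, 0, 0, 1]]  # allocations
R = [[1, 0, 0, 0], [0, 1, 1, 0], [1, 0, 0, 0], [0, 0, 1, 0]]  # requests
stuck = detect_deadlock(C, R, available=[0, 0, 0, 0])
```

With nothing available, no process's request row can be satisfied, so all four are deadlocked; if even one instance each of R1, R2, and R3 were free, every process could finish in turn and no deadlock would remain.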
