Operating System
An operating system (sometimes abbreviated as "OS") is the program that, after
being initially loaded into the computer by a boot program, manages all the other programs in a
computer. The other programs are called applications or application programs. The application
programs make use of the operating system by making requests for services through a defined
application program interface. In addition, users can interact directly with the operating system through
a user interface such as a command language or a graphical user interface. An operating system performs
these services for applications:

- In a multitasking operating system, where multiple programs can be running at the same time,
the operating system determines which applications should run in what order and how much
time should be allowed for each application before giving another application a turn.
- It manages the sharing of internal memory among multiple applications.
- It handles input and output to and from attached hardware devices, such as hard disks, printers,
and dial-up ports.
- It sends messages to each application or interactive user (or to a system operator) about the
status of operation and any errors that may have occurred.
- It can offload the management of what are called batch jobs (for example, printing) so that the
initiating application is freed from this work.
- On computers that can provide parallel processing, an operating system can manage how to
divide the program so that it runs on more than one processor at a time.

Figure: Operating system working

Resources of an Operating System

The main resources of an operating system are the following:

CPU
Main Memory (Primary Memory)
Secondary Memory
Tertiary Memory
Printer Display
Channels
S/w Routines

CPU

The central processing unit (CPU) is the portion of a computer system that carries out the
instructions of a computer program, and is the primary element carrying out the computer's
functions. The central processing unit carries out each instruction of the program in sequence, to
perform the basic arithmetical, logical, and input/output operations of the system. This term has
been in use in the computer industry at least since the early 1960s. The form, design and
implementation of CPUs have changed dramatically since the earliest examples, but their
fundamental operation remains much the same.

Main Memory

The main memory in a computer is called Random Access Memory. It is also known as
RAM. This is the part of the computer that stores operating system software, software applications and
other information for the central processing unit (CPU) to have fast and direct access when needed to
perform tasks. It is called "random access" because the CPU can go directly to any section of main
memory, and does not have to go about the process in a sequential order. RAM is one of the faster types of
memory, and has the capacity to allow data to be read and written. When the computer is shut down,
all of the content held in RAM is purged. Main memory is available in two types: Dynamic Random
Access Memory (DRAM) and Static Random Access Memory (SRAM).


Secondary Memory

Secondary memory, also known as secondary storage, stores data permanently and is the
slower and cheaper form of memory. The CPU does not access secondary memory directly; the content
must first be copied into the primary storage (RAM) for the CPU to process.
Secondary memory devices include hard drives, floppy disks, CDs, CD-ROMs, etc.


Tertiary Memory

Tertiary memory is the third level of the storage hierarchy, below primary and secondary
memory: removable mass-storage media such as tape libraries and optical jukeboxes, typically used for
archiving data that is accessed only rarely. It is slower and cheaper per byte than secondary memory,
and media are usually mounted on demand rather than being permanently online.

Printer/Display

Printing is the process of making a hard copy from the soft copy of anything
stored in your computer. Examples are:

- Printing your CV or printing the notes you prepared

Channels

In computer science, channel I/O is a generic term that refers to a high-performance
input/output (I/O) architecture that is implemented in various forms on a number of computer
architectures, especially on mainframe computers. In the past, channels were generally implemented with a
custom processor, known alternately as a peripheral processor, I/O processor, I/O controller, or DMA
controller. More generally, a channel is the path through which we can enter data into a computer and
then retrieve it from that computer.

Software Routines

In computer programming, routine and subroutine are general and nearly synonymous
terms for any sequence of code that is intended to be called and used repeatedly during the execution
of a program. This makes the program shorter and easier to write (and also to read when necessary).
The main sequence of logic in a program can branch off to a common routine when necessary. When
finished, the routine branches back to the next sequential instruction following the instruction that
branched to it. A routine may also be useful in more than one program and save other programmers
from having to write code that can be shared.

- Typically, in assembly languages, a routine that requires some variable input can be encoded
into a macro definition with a specified interface called a macro instruction. The programmer
can then use a macro instruction instead of having to include and manage the branching to a
routine. Macro definitions and instructions also tend to be shared among programmers for use
in multiple programs, especially in software development projects.
- In higher-level computer languages, many commonly needed routines are prepackaged as
functions, which are routines with specified programming interfaces. Some functions can be
compiled in line with other code. Other functions are compiled in as stubs that make dynamic
calls for system services during program execution. Functions are sometimes called library
routines. The compiler and a set of library routines usually come as part of a related software
development package.
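The idea of a shared routine can be sketched in Python; the function name and the formatting task are illustrative, not from the text:

```python
# A "routine" in the sense above: a reusable sequence of code that the main
# logic branches to whenever needed and returns from when finished.
def format_price(amount_cents):
    """Shared routine: turn an integer amount in cents into a display string."""
    return f"${amount_cents // 100}.{amount_cents % 100:02d}"

# The main sequence branches off to the common routine repeatedly, which
# keeps the program shorter than repeating the formatting logic inline.
line_items = [1999, 550, 25]
receipt = [format_price(cents) for cents in line_items]
```

Because the routine is defined once, a bug fix or improvement in one place benefits every caller, which is the sharing benefit the text describes.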

Resource Sharing

An operating system enables different users to share computer resources
concurrently without interfering with each other. This is done through two kinds of resources:
abstract resources, generally called software, and hardware, also called physical resources.

Process and Program

Process: A process is an instance of a program in execution; it contains the code in execution, its data,
and the job execution command.

Program: A program is a composition of a number of processes, such that within a program more than
one process is executed.

Concurrency

The simultaneous execution of more than one process is called concurrency. Concurrency, in
computing, refers to multiple paths of execution (threads or processes) running at the same time.
This is a very loosely defined term and may refer to multiple things:

1. Multiple processes or threads executing on the same processor are said to be running
concurrently.
2. Multiple processes or threads executing on different processors may be said to be "truly
concurrent," since they can be running instructions at the same time (without the need of a CPU
scheduler).
3. Even a CPU with a pipeline may be said to be executing several instructions concurrently.
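Case 1 can be sketched in Python: several threads of execution share one program, and the scheduler interleaves them (the worker function and names are illustrative):

```python
import threading

results = []
lock = threading.Lock()

def worker(name):
    # Each thread runs this code. On one processor the threads are
    # interleaved; on several processors they may be "truly concurrent".
    with lock:                 # the lock serializes access to the shared list
        results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait until every concurrent path has finished
```

The order in which the names land in `results` depends on scheduling, which is exactly what makes the executions concurrent rather than sequential.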

Multiplexing

In telecommunications and computer networks, multiplexing (also known as muxing) is a
method by which multiple analog message signals or digital data streams are combined into one signal
over a shared medium; the aim is to share an expensive resource. For example, in telecommunications,
several telephone calls may be carried using one wire. Multiplexing originated in telegraphy and is now
widely applied in communications.

- The multiplexed signal is transmitted over a communication channel, which may be a physical
transmission medium. The multiplexing divides the capacity of the low-level communication
channel into several higher-level logical channels, one for each message signal or data stream to
be transferred. A reverse process, known as demultiplexing, can extract the original channels on
the receiver side.
- A device that performs the multiplexing is called a multiplexer (MUX), and a device that
performs the reverse process is called a demultiplexer (DEMUX).
- Inverse multiplexing (IMUX) has the opposite aim to multiplexing, namely to break one data
stream into several streams, transfer them simultaneously over several communication
channels, and recreate the original data stream.
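Multiplexing and demultiplexing by time slots can be sketched as list interleaving, assuming equal-length channels (the function names are illustrative):

```python
def multiplex(channels):
    """Time-division multiplex: interleave equal-length channels into one stream."""
    frame_count = len(channels[0])
    stream = []
    for t in range(frame_count):       # one round (frame) per time instant
        for ch in channels:            # each channel gets one slot per frame
            stream.append(ch[t])
    return stream

def demultiplex(stream, n_channels):
    """Recover the original channels from their slot positions in the stream."""
    return [stream[i::n_channels] for i in range(n_channels)]
```

The demultiplexer needs only the slot position to recover each channel, which mirrors how the receiver side separates signals by timing.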

Figure: Multiplexing and Demultiplexing

Multiplexing is of two types:

- Space multiplexing
- Time-sharing multiplexing


Space Multiplexing

Space multiplexing is the sharing of space in a computer system: the operating
system divides spatial resources, such as memory and disk space, among several users.
Input devices can be classified as being space-multiplexed or time-multiplexed. With space-
multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own
space. For example, an automobile has a brake, clutch, throttle, steering wheel, and gear shift, which are
distinct, dedicated transducers, each controlling a single specific task. Each transducer can be accessed
independently but also possibly simultaneously. A space-multiplexed input style affords the capability to
take advantage of the shape, size and position of the multiple physical controllers to increase
functionality and decrease complexity. It also means that the potential persistence of attachment of a
device to a function can be increased.

With space-multiplexed architectures, there is minimal confusion over what function is performed by
any device. The problem is that as the number of functions increases, so does the number of input
devices. The large number of controls that may result can increase cost, workplace "real estate" and
training time. Aircraft cockpit design is a good example where the initial approach was almost
completely space multiplexed. To reduce problems induced by the increased functionality of flight
systems, more recent designs have incorporated a high degree of time multiplexing in both controls and
displays.

Time-Division Multiplexing

Time-division multiplexing (TDM) is a method of putting multiple data streams in a
single signal by separating the signal into many segments, each having a very short duration. Each
individual data stream is reassembled at the receiving end based on the timing. The circuit that
combines signals at the source (transmitting) end of a communications link is known as a multiplexer. It
accepts the input from each individual end user, breaks each signal into segments, and assigns the
segments to the composite signal in a rotating, repeating sequence. The composite signal thus contains
data from multiple senders. At the other end of the long-distance cable, the individual signals are
separated out by means of a circuit called a demultiplexer, and routed to the proper end users. A two-
way communications circuit requires a multiplexer/demultiplexer at each end of the long-distance, high-
bandwidth cable. If many signals must be sent along a single long-distance line, careful engineering is
required to ensure that the system will perform properly. An asset of TDM is its flexibility. The scheme
allows for variation in the number of signals being sent along the line, and constantly adjusts the time
intervals to make optimum use of the available bandwidth. The Internet is a classic example of a
communications network in which the volume of traffic can change drastically from hour to hour. In
some systems, a different scheme, known as frequency-division multiplexing, is preferred. OR:

Time-division multiplexing is also a multiplexing technique operating in the time domain in optics:
several optical signals are combined, transmitted together, and separated again based on different
arrival times. In an optical fiber communication system, interleaving pulse trains can carry different data
channels in a single fiber [1-3]. The use of multiple channels allows increased overall data transmission
capacities without increasing the data rates of the single channels, or transmission of data of different
users simultaneously. However, the time slot per bit must be reduced. Even if the bandwidth of the data
modulator is limited, this can be done by using a train of ultrashort pulses (rather than a continuous
optical wave) as the input of the modulator.

Figure: Schematic of optical time division multiplexing. Two interleaving pulse sequences are
combined into a single pulse train. In a communications system, each pulse may represent a "1" bit (if
present) or a "0" (if suppressed).

Special requirements of data transmitters for optical time division multiplexing are a short pulse
duration and a low timing jitter. Also, the extinction ratio should be high, i.e. each combined channel
should exhibit a very low power level between the bit slots, because such a background could otherwise
interfere with other channels.

An alternative to time division multiplexing is wavelength-division multiplexing, where the channels are
distinguished by wavelength rather than by arrival time.

In the context of distributed fiber-optic sensors [2], optical time division multiplexing means that signals
are assigned to certain locations in the sensor via their arrival times. Such systems usually operate with
ultrashort pulses.

Operating System Strategies

The strategies of an operating system are the following:

- Batch processing systems
- Time-sharing systems
- PC and workstation
- Process control and real-time systems
- Network technology

Batch Processing

Def: A batch is achieved by sequential reading of jobs into the machine and executing the program for
each job; a job is a sequence of commands, programs, and data.
Batch processing means executing a series of non-interactive jobs all at one time. The term originated
in the days when users entered programs on punch cards. They would give a batch of these
programmed cards to the system operator, who would feed them into the computer. Batch jobs can be
stored up during working hours and then executed during the evening or whenever the computer is idle.
Batch processing is particularly useful for operations that require the computer or a peripheral device
for an extended period of time. Once a batch job begins, it continues until it is done or until an error
occurs. Note that batch processing implies that there is no interaction with the user while the program is
being executed. An example of batch processing is the way that credit card companies process billing.
The customer does not receive a bill for each separate credit card purchase but one monthly bill for all
of that month's purchases. The bill is created through batch processing, where all of the data are
collected and held until the bill is processed as a batch at the end of the billing cycle. The opposite of
batch processing is transaction processing or interactive processing. In interactive processing, the

application responds to commands as soon as you enter them. OR:


This requires the operating system to work through a series of programs that are held in a queue. The
operating system is responsible for scheduling the jobs according to priority and the resources they
require.

Example: A large company would use batch processing to automate its payroll. This would find the
list of employees, calculate their monthly salary (with tax deductions) and print the corresponding
payslips. Batch processing is useful for this purpose since these procedures are repeated for every
employee each month.
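The payroll example can be sketched as a batch job: all records are collected first, then processed in one non-interactive pass (the function name and flat tax rate are hypothetical):

```python
# A hypothetical payroll run as a batch job: every queued record is
# processed in one pass with no user interaction until the batch is done.
def run_payroll_batch(employees, tax_rate=0.2):
    """employees: list of (name, gross_salary) tuples collected beforehand."""
    payslips = []
    for name, gross in employees:
        net = gross * (1 - tax_rate)     # apply the (assumed) tax deduction
        payslips.append((name, round(net, 2)))
    return payslips                       # the whole batch of payslips at once
```

Nothing interactive happens between the first and last record, which is the defining property of batch processing described above.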

Time Sharing

Time-sharing is the sharing of a computing resource among many users by
means of multiprogramming and multitasking. Its introduction in the 1960s, and emergence as the
prominent model of computing in the 1970s, represents a major technological shift in the history of
computing. By allowing a large number of users to interact concurrently with a single computer, time-
sharing dramatically lowered the cost of providing computing capability, made it possible for individuals
and organizations to use a computer without owning one, and promoted the interactive use of
computers and the development of new interactive applications. OR:

Time sharing: in this kind of operating system, the OS assigns a time slot to each job, and each job is
executed according to its allotted time slot, e.g.:

- Job 1: 0 to 5
- Job 2: 5 to 10
- Job 3: 10 to 15
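The slot table above can be computed mechanically; this sketch assumes one fixed-length slot per job in a single round (the function name is illustrative):

```python
def assign_time_slots(jobs, quantum):
    """Give each job one fixed-length time slot in order (one round)."""
    slots = {}
    start = 0
    for job in jobs:
        slots[job] = (start, start + quantum)  # (slot start, slot end)
        start += quantum                       # next job begins where this ends
    return slots
```

With a quantum of 5 this reproduces the 0-5, 5-10, 10-15 allocation listed above; a real time-sharing OS would repeat such rounds until every job completes.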

Figure: Time sharing


PC and Workstation

PC: PC stands for personal computer; it is the system we use for our personal needs, such as the
computer we use in the home.

Workstation: A workstation is a high-end microcomputer designed for technical or scientific
applications. Intended primarily to be used by one person at a time, workstations are commonly
connected to a local area network and run multi-user operating systems. The term workstation has also
been used to refer to a mainframe computer terminal or a PC connected to a network.

Process Control and Real-Time Systems

Process Control Block: A process in an operating system is represented by a data structure known as a
process control block (PCB) or process descriptor. The PCB contains important information about the
specific process, including:

- The current state of the process, i.e., whether it is ready, running, waiting, or whatever.
- Unique identification of the process in order to track "which is which" information.
- A pointer to the parent process.
- Similarly, a pointer to the child process (if it exists).
- The priority of the process (a part of CPU scheduling information).
- Pointers to locate the memory of the process.
- A register save area.
- The processor it is running on.

The PCB is a central store that allows the operating system to locate key information about a process.
Thus, the PCB is the data structure that defines a process to the operating system.
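A minimal sketch of a PCB as a record type; the field names follow the list above but the exact layout is hypothetical, not any real kernel's:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class PCB:
    pid: int                      # unique identification of the process
    state: str = "ready"          # current state: ready / running / waiting
    parent: Optional[int] = None  # pointer to the parent process
    children: List[int] = field(default_factory=list)  # child processes
    priority: int = 0             # CPU-scheduling priority
    memory_base: int = 0          # pointer used to locate process memory
    registers: dict = field(default_factory=dict)      # register save area
    cpu: Optional[int] = None     # the processor it is running on
```

The kernel would keep one such record per process and update it on every state transition, which is why the PCB "defines a process to the operating system".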

Real-Time Computing Systems

CS stands for computing system. In computer science, real-time computing (RTC), or
reactive computing, is the study of hardware and software systems that are subject to a "real-time
constraint" — i.e. operational deadlines from event to system response. Real-time programs must
execute within strict constraints on response time. By contrast, a non-real-time system is one for which
there is no deadline, even if fast response or high performance is desired or preferred. The needs of
real-time software are often addressed in the context of real-time operating systems and synchronous
programming languages, which provide frameworks on which to build real-time application software. A
real-time system may be one where its application can be considered (within context) to be mission
critical. The anti-lock brakes on a car are a simple example of a real-time computing system — the real-
time constraint in this system is the time in which the brakes must be released to prevent the wheel
from locking. Real-time computations can be said to have failed if they are not completed before their
deadline, where their deadline is relative to an event. A real-time deadline must be met, regardless of
system load.
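The "failed if not completed before the deadline" rule can be sketched as a wrapper that times a task against its deadline (the helper name and deadline value are illustrative; a real RTOS enforces deadlines in the scheduler, not like this):

```python
import time

def run_with_deadline(task, deadline_s):
    """Run task and report whether it met its (hypothetical) deadline."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    # A real-time computation has failed if it finishes after its deadline.
    return result, elapsed <= deadline_s

result, met = run_with_deadline(lambda: sum(range(1000)), deadline_s=1.0)
```

This only detects a missed deadline after the fact; hard real-time systems must instead guarantee in advance that the deadline will be met regardless of load.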

Figure: Real-time system


Network Technology

The company is engaged in the design, manufacture and marketing of hardware
and software used in connecting all computer-associated equipment in the business and domestic
environment, such as personal computers (PCs), workstations, printers, scanners, fax and vending
machines, to local area networks (LANs), wide area networks (WANs) and the Internet. The Company is
also engaged in security and access control devices for connection to computer networks, as well as
security management software for use within organizations and their enterprise networks. During the
year ended March 31, 2006 (fiscal 2006), Network Technology's wholly-owned subsidiary, Ringdale,
acquired certain assets of NLlynx Inc. In fiscal 2006, the Company acquired certain assets of Madge Ltd.

Earlier Time-Sharing Systems

The earlier time-sharing systems are the following:

1. CTSS (1961)
2. MULTICS
3. CAL TSS (1970)
4. UNIX

CTSS: CTSS stands for Compatible Time-Sharing System. It was developed on IBM hardware at MIT in the
early 1960s, and it supported early research on scheduling algorithms.

MULTICS: It replaced CTSS. It was designed to be an extremely capable and reliable operating
system, and it developed virtual memory, protection, and security.

CAL TSS (1970): It was developed at the University of California, USA. It was contemporary with CTSS and
Multics and had the same concerns for virtual memory, scheduling, protection, and security.

UNIX: It was developed at AT&T Bell Labs in 1970. It was developed to manage a minicomputer (the PDP-11/45)
and was later ported to DEC VAX machines.

Difference between Traditional and Modern Batch Processing

The difference between traditional and modern batch processing is the following:


Modern Operating Systems

Modern operating systems can be evaluated by the following figure:

Client/Server Technology

Client/server describes the relationship between two computer programs in
which one program, the client, makes a service request of another program, the server, which fulfils
the request. Although programs within a single computer can use the client/server idea, it is a more
important idea in a network. In a network, the client/server model provides a convenient way to
interconnect programs that are distributed efficiently across different locations. Computer transactions
using the client/server model are very common. For example, to check your bank account from your
computer, a client program in your computer forwards your request to a server program at the bank.
That program might in turn forward the request to its own client program that sends a request to a
database server at another bank computer to retrieve your account balance. The balance is returned
to the bank data client, which in turn serves it back to the client in your personal computer, which
displays the information for you. Servers include file servers, print servers, and database servers —
evolved systems supporting networks.
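The bank-balance example can be sketched as a request/response pair in one process; the function names, account ids, and balances are all hypothetical, and a real system would put the server behind a network socket:

```python
# A minimal sketch of the client/server idea: the client sends a request,
# the server fulfils it and returns a response.
def account_server(request):
    """Hypothetical bank server: look up a balance by account id."""
    balances = {"alice": 120.50, "bob": 9.99}   # stand-in for a database
    account = request["account"]
    return {"account": account, "balance": balances.get(account)}

def client_check_balance(account):
    # The client forwards its request to the server program and reads the reply.
    response = account_server({"account": account})
    return response["balance"]
```

The key property is the division of roles: the client only formulates requests and displays results, while the server owns the data and fulfils each request.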



Performance of a System

The performance of a system can be described by the figure shown below.

Figure: System performance

Throughput: It is the number of jobs completed per unit time; when a task is submitted and
completed within that unit of time, this is called throughput.

Response time: Response time is the time elapsed from the moment a command is issued until the
availability of the process result; generally, it is the time in which the processor gives information about
whether the process is complete or not.

Availability: Availability is the property that each and every resource is available for the completion of
the process in execution.


Spooling

In computer science, spooling refers to the process of placing data in a temporary
working area for another program to process. The most common use is in writing files on a magnetic
tape or disk and entering them in the work queue (possibly just linking them to a designated folder in the
file system) for another process. Spooling is useful because devices access data at different rates.
Spooling allows one program to assign work to another without directly communicating with it. The
general diagram for the spooling system is the following:

Figure: Spooling system


The spooling system controls continuous buffering of input and output on buffered queues and
sequential scheduling. In the spooling system the CPU is multiplexed between four programs:

1) One program controls the input of cards onto a queue on the backing store (e.g., a magnetic drum).
2) The second program selects user jobs from the input queue and starts their execution one at a time.
3) The third program controls the printing of results on a line printer.
4) The fourth program (the current program, in main memory or internal store) reads data from
the input queue and writes its results to the output queue on the backing store.
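The four cooperating programs above can be sketched as stages passing jobs through input and output queues; here they run one after another for clarity, whereas a real spooling system multiplexes the CPU between them (all names are illustrative):

```python
from collections import deque

def run_spooling(cards):
    input_queue = deque()
    output_queue = deque()
    printed = []
    # 1) input program: read cards onto the input queue (the backing store)
    for card in cards:
        input_queue.append(card)
    # 2) scheduler: select user jobs from the input queue, one at a time;
    # 4) the current program reads its data and writes results to the output queue
    while input_queue:
        job = input_queue.popleft()
        output_queue.append(f"result-of-{job}")
    # 3) printer program: print results from the output queue in order
    while output_queue:
        printed.append(output_queue.popleft())
    return printed
```

The queues are what decouple the stages: the reader, scheduler, and printer never communicate directly, which is exactly the property spooling provides.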

In this scheme, data goes from the input program to the input queue; from the input queue it goes to
the scheduling program, where execution is done; the program's results then go onto the output queue;
and from the output queue they go to the output program and on to the line printer for the user. OR:

Def: According to Tanenbaum, "Spool" is an acronym for simultaneous peripheral operations on-line[3]
(though others may consider this a backronym). For printers: simultaneous peripheral output on line.
Early mainframe computers had no disk drives and slightly more recent ones had, by current standards,
small and expensive hard disks.

Print spooling: The most common spooling application is print spooling: documents formatted for printing
are stored usually into an area on a disk and retrieved and printed by a printer at its own rate. Printers
typically can print only a single document at a time and require seconds or minutes to do so. With
spooling, multiple processes can write documents to a print queue without waiting. As soon as a process
has written its document to the spool device, the process can perform other tasks, while a separate
printing process operates the printer.

For example, when a city prepares payroll checks, the actual computation may take a matter of minutes
or even seconds, but the printing process might take hours. If the program printed directly, computing
resources (CPU, memory, peripherals) would be tied up until the program was able to finish. The same is
true of personal computers. Without spooling, a word processor would be unable to continue until
printing finished. Without spooling, most programs would be relegated to patterns of fast processing
and long waits, an inefficient paradigm.[1]

Spooler or print management software may allow priorities to be assigned to jobs, notify users when
their jobs have printed, distribute jobs among several printers, allow stationery to be changed or selected
automatically, generate banner pages to identify and separate print jobs, etc.

Review of an Operating System: IBM OS/360

Def: IBM OS/360, in full International Business Machines Operating
System/360, is an operating system introduced by IBM in 1964 to operate its 360 family of mainframe
computer systems. The 360 system was unprecedented in its ability to support a wide array of
applications, and it was one of the first operating systems to require direct-access storage devices.

OS/360 ran on the IBM 360 family of mainframe computers, which offered a variety of models over a
wide range of capabilities; it was equally applicable to batched jobs and real-time applications. It was a
second-generation operating system with the following objectives:


- The primary objective was to accommodate an environment of diverse applications and
operating modes.
- The secondary objective was to increase throughput, i.e., the number of jobs
completed per unit time.
- It lowered response time.
- It increased programmer productivity.
- It allowed programs to adapt to changing resources.
- Expandability.
- It supported a variety of machine configurations.
- It included a library of compilers, utility programs, resource management programs, etc.
- It contained several million instructions.
- Because of its size, OS/360 was considered unreliable; it had around 1,000 errors in each release (product).
- This is a very low percentage considering the size of the system.

Control Blocks and Process Descriptors

In computing, a process is an instance of a computer program that is being
executed. It contains the program code and its current activity. Depending on the operating system (OS),
a process may be made up of multiple threads of execution that execute instructions concurrently.
OR:

A sequence of operations containing code in execution is called a process, i.e. an instance of a program
in execution. When a program is submitted to the operating system, a data structure is assigned to it;
this data structure is called the process control block, or simply the PCB.

Def (PCB): A Process Control Block (PCB, also called Task Control Block or task struct) is
a data structure in the operating system kernel containing the information needed to manage a
particular process. The PCB is "the manifestation of a process in an operating system".

Fields of the PCB

The fields of the PCB are the following:

- Process status {ready, running, suspended}
- Memory management table


- File system information
- Accounting information

Process Management

The steps for managing processes in the operating system are the following:

1) Creating and removing (destroying) processes
2) Controlling the progress of processes, i.e. ensuring that each logically enabled process makes it
to its completion
3) Acting on exceptional conditions (interrupts, arithmetic errors)
4) Allocating hardware resources among processes
5) Providing means of communicating messages or signals

Process State Transitions

Processes change state whenever something of significance happens
during the life cycle of the process instance. For example, an API request causes a process in the running
state to be put into the suspended state. State transition diagrams show the state transitions that can
occur during the process life cycle. Microflows and long-running processes have different state
transition diagrams.

4-state diagram: The 4-state transition diagram is the following:


Figure: 4-state transition diagram

5-state diagram: The 5-state transition diagram is the following:

The difference between the two is that the 5-state diagram has a suspended and swapped-out queue,
while the 4-state transition diagram does not.

Concurrent Processing

Def: Concurrent processing is the simultaneous execution of multiple programs.

Def: Concurrent processing is a computing model in which multiple processors execute instructions
simultaneously for better performance. Concurrent means something that happens at the same time as


something else. Tasks are broken down into subtasks that are then assigned to separate processors to
perform simultaneously, instead of sequentially as they would have to be carried out by a single
processor. Concurrent processing is sometimes said to be synonymous with parallel processing.

Def: Processes are said to be concurrent if their executions overlap in time.

The following also describes concurrent processes:

The execution times of two concurrent processes P and Q may overlap or be interleaved in time; e.g.
their operations can be interleaved and overlapped in time.

Conclusion: whenever the first operation of one process starts before the last operation of another
process has completed, the two processes are said to be concurrent.

Concurrent Statement: The language notation is begin S0 cobegin s1, s2, ... , sn coend
sn+1 end; the statements s1, s2, ... , sn under cobegin and coend are concurrent processes
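The cobegin/coend construct can be approximated with threads in most languages: S0 runs first, S1 ... Sn run concurrently, and Sn+1 runs only after all of them finish. A minimal Python sketch (the statements are stand-ins that merely record their names):

```python
import threading

results = []
lock = threading.Lock()

def make_statement(name):
    def statement():
        with lock:                # protect the shared list
            results.append(name)
    return statement

# begin S0; cobegin S1, S2, S3 coend; S4 end
results.append("S0")                           # S0 runs first
threads = [threading.Thread(target=make_statement(f"S{i}"))
           for i in range(1, 4)]
for t in threads:
    t.start()                                  # cobegin: S1..S3 run concurrently
for t in threads:
    t.join()                                   # coend: wait for all of them
results.append("S4")                           # Sn+1 runs after coend
```

The order of S1..S3 inside the cobegin block is not fixed, which is exactly what "concurrent" means here.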

Nested Concurrent Processes: When two processes are executed concurrently inside two other
concurrently executing processes, i.e. when a cobegin/coend block is nested inside another
cobegin/coend block, the processes are called nested concurrent processes

Code For Nested Concurrent Processes

>? Begin
>? S0;
>? Cobegin
>? S1;
>? Begin;
>? S2;
>? Cobegin s3, s4 coend;
>? s5;
>? End;
>? S6;
>? Coend;
>? Sn+1;
>? End;

Figure: a precedence diagram for the code above


When we sequentially execute a number of jobs, while one job is executing the next
job is already being read in; the reading of one job overlaps the execution of another, so this is called
concurrent execution

+    

>? Job1 : r1, x1, p1
>? Job2 : r2, x2, p2
>? Job3 : r3, x3, p3

Now for the design of a spooling system the process descriptions are the following

>? Reader process: r1, r2, r3
>? Execution/scheduler process: x1, x2, x3
>? Printer process: p1, p2, p3

It is a real-time example: let there be two processes ͚P͛ and ͚Q͛

>? Process P inputs and resets the pulse register every 1 sec, adding (0,1) to the integer variable v
(counter)
>? Process Q outputs (the number of items manufactured) and resets the integer every 5 hours (a shift in a
task)

We implement this as a concurrent process

>? Var V : integer;
>? Begin;
>? V := 0;
>? Cobegin;
>? {P}
>? Repeat
>? Delay(1);
>? V := V + input;
>? Forever;
>? {Q}
>? Repeat
>? Delay(8000);
>? Out(V);
>? V := 0;
>? Forever;
>? Coend;
>? End;

This algorithm has a time-dependent error: P and Q may read and update V at the same time, so an update can be lost
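The error is a lost update. The sketch below replays one unlucky interleaving by hand, with plain variables standing in for the two processes, so the effect is deterministic (no real threads or timing are needed):

```python
# Simulate one bad interleaving of P (v := v + input) and Q (out(v); v := 0).
v = 10                 # pulses counted so far
p_read = v             # P reads v ...
q_out = v              # ... and Q also reads v, in order to output it
v = 0                  # Q writes its reset: v := 0
v = p_read + 1         # P writes back its stale read + 1: Q's reset is lost
print(q_out, v)        # prints: 10 11  (Q reported 10, yet v still holds 11)
```

The 10 pulses Q already reported remain in v, so they will be counted twice; only mutual exclusion on v removes the error.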

Interprocess Interaction: Interprocess interaction is categorized into the following two

1.? Co-operating processes
2.? Mutual exclusion (ME)

Co-operating Processes: These processes share some resources belonging to the entire group or
family; the processes are formed and controlled explicitly by the programmers to exploit the benefits of
concurrency or multiprogramming in time-critical applications or complex product-type applications, e.g.
CAM (computer aided manufacture). Co-operating processes must synchronize with each other where
they use shared resources, e.g. a data structure or a physical device. There are three forms of co-operation:

a)? Interprocess synchronization
b)? Interprocess signaling
c)? Interprocess communication

Interprocess Synchronization: A set of protocols and mechanisms used
to preserve system integrity and consistency. An example is a serially reusable resource: a serially
reusable resource can be used by one process at a time, e.g. a read/write shared variable, magnetic tape, or printer

Interprocess Signaling: This pertains to the exchange of timing signals among concurrent processes,
used to coordinate their collective progress

Interprocess Communication: Co-operating concurrent processes communicate to
exchange data and report progress; shared memory (SM) provides a means of interprocess
communication

Mutual Exclusion: Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent
programming to avoid the simultaneous use of a common resource; it means that a resource
can be used by only a single process at a time. Mutual exclusion ensures that only one process uses the
shared resource, and it is done with the help of a critical section

Critical Section (CS): A sequence of instructions with a clearly marked beginning and end.
When a process enters its critical section it must complete all of the instructions before another process is
allowed to enter the critical section, i.e. when one process is present in the critical section no other
process can enter it
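In practice a critical section is realized with a lock: each process acquires the lock before entering and releases it on exit. A small Python sketch (the counter and iteration counts are arbitrary):

```python
import threading

counter = 0
cs_lock = threading.Lock()          # guards the critical section

def worker():
    global counter
    for _ in range(10_000):
        with cs_lock:               # enter the critical section
            counter += 1            # read-modify-write is now atomic
        # leaving the with-block releases the lock (exit protocol)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # prints: 40000 on every run
```

Without the lock the same program could lose updates, exactly as in the pulse-counter example above.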

Conditions for Mutual Exclusion: The following are the conditions for mutual exclusion

1)? It must be ensured that ME between processes occurs when accessing the protected shared resource
2)? Make no assumptions about the relative speeds and priorities of processes
3)? Guarantee that crashing or terminating a process outside its critical section does not affect the
ability of other contending processes to access the shared resource
4)? When more than one process wishes to enter the critical section, grant permission to one of
them in a finite time, to use the resource for a finite time

Basic Protocol: Mutual exclusion requires that each process observe the following basic
protocol

1)? Negotiation protocol (winner proceeds)
2)? Enter the CS (exclusive use of resources)
3)? Release protocol (ownership of the resource is relinquished)


Tools for Mutual Exclusion: The following are the tools for mutual exclusion

1.? Critical region
2.? Semaphore

Critical Region (CR): The mutual exclusion problem can be solved with a shared variable for the resource, e.g.

Var V : shared T

i.e. we define here a variable V of shared type T; concurrent processes can refer to and change common
variables only within a structured statement of the form: region V do S;

Code:

>? Var v : shared T;
>? Var w : shared T;
>? Cobegin;
>? {P} : region v do S1;
>? {Q} : region w do S2;
>? Coend;

Nested CR:

>? Region v do
>? Begin S1;
>? Region w do
>? Begin s2;
>? End;
>? End;

Deadlock: A deadlock is a situation in which two computer programs sharing the same resource are
effectively preventing each other from accessing the resource, resulting in both programs ceasing to
function. The earliest computer operating systems ran only one program at a time, and all of the resources
of the system were available to this one program. Later, operating systems ran multiple programs at
once, interleaving them. Programs were required to specify in advance what resources they needed, so
that resource conflicts could be avoided.

Deadlock may occur if ͚P͛ and ͚Q͛ want to enter the same regions at the same time, e.g.

simplest example of deadlock:

Cobegin

>? {P} : region V do S1, region W do S2;


>? {Q} : region W do S3, region V do S4;

Example: A resource ͞R͟ is shared by ͞n͟ concurrent processes in mutual exclusion

>? Var R : shared Boolean
>? Cobegin;
>? {P1} Repeat
>? Region R do use resource;
>? P1 passive;
>? Forever;
>? {P2} Repeat
>? Region R do use resource;
>? P2 passive;
>? Forever;
>? ;;
>? ;;
>? {Pn} : Repeat
>? Region R do use resource;
>? Pn passive;
>? Forever;
>? Coend;

Example: A variable ͞V͟ is shared by two concurrent processes using the region statement

>? Var V : shared integer
>? Begin
>? V := 0;
>? Cobegin;
>? {P} Repeat
>? Delay(1);
>? Region V do V := V + input;
>? Forever;
>? {Q} Repeat
>? Delay(1800);
>? Region V do
>? Begin
>? output(V);
>? V := 0;
>? End;
>? Forever;
>? Coend;
>? End;

Conclusion: in the above algorithm there is no time-dependent error, since only one process will use the
variable V at a time

Buffer: A buffer is a region of memory used to temporarily hold data while it is being moved from
one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such
as a keyboard) or just before it is sent to an output device (such as a printer), so

Buffering is a process of co-operation or communication through the usage of a buffer: to operate on
common tasks the processes must be able to exchange data. A process P (producer) produces a
sequence of data and sends this data to another process C (consumer), which receives and consumes it

The data are transmitted between processes in discrete portions: messages

͞P͟ and ͞C͟ can operate at their own speeds; the buffer can be implemented as an unbounded buffer or a
circular (bounded) buffer

Operations:

>? Send (M, A) : send message M to buffer A


>? Receive (M, A) : receive M from buffer A

Semaphore (synchronizing tool): A semaphore mechanism consists of two primitive operations, wait
and signal, which operate on a special type of variable ͞S͟

Wait(S) : decrements the value of S as soon as the result would be non-negative

Signal(S) : increments the value of S as an indivisible operation

Wait(S) : while Not (S > 0) do {keep on testing}

S := S - 1

(a)Wait operation

Signal(S) : S := S + 1

(b)Signal operation

These two operations (a) and (b) use busy waiting, so livelock is possible

Binary Semaphore: it operates on two integer values (0,1), and the above operations ͞a͟ and ͞b͟ are binary

Queue Implementation of the Semaphore: The following is the queue implementation of the semaphore,
an overview of which is shown below

Every semaphore has a queue of waiting processes

Structure of the Semaphore: The structure of the semaphore is the following

Wait(S) : if Not (S > 0) then suspend the caller at S

Else

S := S - 1

(a) Wait operation

Signal(S) : if the queue at S is not empty (at least one process is waiting) then resume a process from the queue at S

Else

S := S + 1

(b) Signal operation

Advantage: busy waiting is avoided, so there will be no livelock
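The queue-based semaphore maps naturally onto a condition variable: a blocked wait sleeps on the queue instead of spinning. A Python sketch (the class and method names are our own, not a standard API):

```python
import threading

class QueueSemaphore:
    """Counting semaphore without busy waiting: blocked callers sleep
    on a condition variable until a signal resumes one of them."""
    def __init__(self, value=0):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while not (self._value > 0):   # suspend the caller at S
                self._cond.wait()
            self._value -= 1               # S := S - 1

    def signal(self):
        with self._cond:
            self._value += 1               # S := S + 1
            self._cond.notify()            # resume one waiting process

s = QueueSemaphore(1)
s.wait()        # S becomes 0
s.signal()      # S back to 1, nobody was waiting
```

The `while` loop in `wait` corresponds to the suspended queue: an awakened process rechecks S before proceeding.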

Livelock: A condition that occurs when two or more processes continually change their state in
response to changes in the other processes. The result is that none of the processes will complete. An
analogy is when two people meet in a hallway and each tries to step around the other, but they end up
swaying from side to side, getting in each other's way as they try to get out of the way.

Scheduling is a key concept in computer multitasking, multiprocessing operating system
and real-time operating system designs. Scheduling refers to the way processes are assigned to run on
the available CPUs, since there are typically many more processes running than there are available CPUs.
This assignment is carried out by software known as a scheduler and dispatcher.

Example (implementation of mutual exclusion through semaphores): The following is the
algorithm used for the implementation of mutual exclusion through semaphores

>? Program / Module $mutex;
>? Var mutex : Semaphore; {binary}
>? Process p1;
>? Begin;
>? While true do;
>? Begin;
>? Wait(mutex);
>? Critical section;
>? Signal(mutex);
>? Other-p1-processing;
>? End;
>? End;
>? Process p2;
>? Begin;
>? While true do;
>? Begin;
>? Wait(mutex);
>? Critical section;
>? Signal(mutex);
>? Other-p2-processing;
>? End;
>? End;
>? Process p3;
>? Begin;
>? While true do;
>? Begin;
>? Wait(mutex);
>? Critical section;
>? Signal(mutex);
>? Other-p3-processing;
>? End;
>? End;
>? {Parent.process}
>? Begin;
>? Mutex := 1; {free}
>? Initiate p1, p2, p3;
>? End;

This program shows how a resource is used by three processes in a mutually exclusive way using a
semaphore
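The same three-process scheme can be sketched with a real binary semaphore; the recorded trace shows that enter/exit pairs never interleave. A Python sketch (process names and round counts are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)      # binary semaphore, initially free
trace = []

def process(name, rounds=3):
    for _ in range(rounds):
        mutex.acquire()             # wait(mutex)
        trace.append((name, "enter"))
        # ... critical section work would go here ...
        trace.append((name, "exit"))
        mutex.release()             # signal(mutex)

threads = [threading.Thread(target=process, args=(p,))
           for p in ("p1", "p2", "p3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because both appends happen while the semaphore is held, each "enter" in the trace is immediately followed by the same process's "exit".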

Scenario of Execution:

Step  P1                   P2                   P3           mutex (1 = free)  Entering CS  Interested in CS
M1    ----                 ----                 ----         1                 ----         ----
M2    Wait(mutex)          Wait(mutex)          Wait(mutex)  0                 ----         P1, P2, P3
M3    critical section     waiting              waiting      0                 P2           P2, P3
M4    Signal(mutex)        waiting              waiting      1                 ----         P2, P3
M5    Other-p1-processing  critical section     waiting      0                 ----         P3, P1
M6    Wait(mutex)          critical section     waiting      0                 ----         P3, P1
M7    waiting              Signal(mutex)        waiting      1                 ----         P3, P1
M8    critical section     Other-p2-processing  waiting      0                 P1           P3

Table: Scenario of Execution of the Program

Scheduling: A set of policies and mechanisms built into the ͞OS͟ that govern the order in which the work to be
done by a computer is completed

Scheduler: it selects the next job to be admitted into the system and the next process to be run

Objective: its objective is the optimization of system resource usage

In a general purpose computer system there may exist three types of scheduler

1.? Long Term Scheduler (LTS)
2.? Medium Term Scheduler (MTS)
3.? Short Term Scheduler (STS)

LTS: it works with the batch queue and selects the next job to be executed. Batch is reserved for resource
intensive (CPU time, special I/O devices), low priority jobs; the batch jobs are kept as files to keep the
CPU busy

Objective: to provide a balanced mix of jobs, such as CPU-bound and I/O-bound, to the STS

Swapping: A running process may become suspended by making an I/O request or issuing a system call. The
image of the suspended process is kept on the disk (SM), and the process of saving it is called swapping. If the
process is suspended for a long time then it may be swapped out and the vacated memory allocated to another
process

MTS: it is in charge of handling swapped-out processes. On removal of the suspended condition the
MTS makes the swapped-out process ready by enabling it to get allocated the requested memory, so that
it becomes resident in ͞MM͟

STS: it is in charge of handling the ready queue. When a process is completed it is invoked to select the
next job/process in the ready queue for assignment to the CPU; hence it is the most frequently invoked scheduler.
It is also invoked when the following events occur

a)? OS calls
b)? Clock ticks
c)? Time-based interrupts
d)? Sending and receiving of signals etc.

Priority Queues: there are three queues

1)? High priority queue
2)? Medium priority queue
3)? Low priority queue

Example: a general purpose computing system in a leading university serving

a)? A variety of devices and terminals
b)? Students͛ programs (interactive processes)
c)? Batch jobs (simultaneous runs)

In this system a process may be assigned to a specific queue on the basis of attributes (user or system
supplied). Each queue is served by a specific discipline used for scheduling within queues (ED, RR, FIFO).
For scheduling among queues, each queue may be given a percentage of time, or a priority-based discipline may
be used: first serve the HPQ, then the MPQ, then the LPQ

Multilevel Feedback Queues:

Each process starts at the top level queue. If the process is not completed within a given time slice, the
process is demoted to the next level queue by the ͞OS͟; if it is still not completed, it goes down to the lowest
level queue. On the contrary, if the process surrenders control to the ͞OS͟ before the expiry of its time slot,
this behaviour is recorded and it is given royal treatment next time. CPU-hungry processes, which use a lot of
CPU time every time they get the CPU and tend to require more time, will finally be served
in the lowest level queue. This system avoids abuse or misuse of the previous system


Conditional Critical Region (CCR): this synchronizing primitive uses the keyword await;
syntactically it is similar to a ͞CR͟

Notation: Var v : shared T; region v do begin await B; S1 end;

Working: the process waiting on a condition within the critical region is suspended on a suspended
queue (event queue); it does not prevent other processes from using the resource. When the
condition is satisfied, the suspended process is awakened and rejoins the main queue

Example:

>? ͞Consumer͟
>? Region v do
>? Begin await B;
>? S1; end;
>? ͞Producer͟ (P)
>? Region ͞v͟ do S2;

The two processes ͞C͟ and ͞P͟ work on a common task. The consumer wishes to enter a ͞CR͟ to
operate on shared variable ͞V͟ by executing statement S1 when a certain relationship ͞B͟ holds among the
components of V

The producer enters the ͞CR͟ unconditionally and changes V by statement S2 to make ͞B͟ hold

Summary of tools studied so far

>? Synchronizing problem ʹ synchronizing tool
>? Mutual exclusion ʹ critical region
>? Exchange of time signals and ME ʹ semaphore
>? Exchange of messages ʹ buffer
>? Priority timing delay ʹ conditional ͞CR͟

Example (bounded buffer with CCR): to explain it we take the following
program

>? Type B : shared record;
>? Buffer : array 0͙͙max-1 of T;
>? P, C : 0͙͙max-1;
>? Full : 0͙͙max;
>? End;
>? {Initially P = C = Full = 0}
>? Procedure produce(m : T; var b : B);
>? Begin;
>? Region b do
>? Begin
>? Await full < max;
>? Buffer(P) := m;
>? P := (P+1) mod max;
>? Full := full + 1;
>? End;
>? End;
>? Procedure consume(var m : T; var b : B);
>? Begin;
>? Region b do
>? Begin;
>? Await full > 0;
>? m := buffer(C);
>? C := (C+1) mod max;
>? Full := full - 1;
>? End;
>? End;
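The conditional-critical-region buffer can be sketched with a condition variable, whose wait-in-a-loop plays the role of await. A Python sketch (the class name and sizes are our own):

```python
import threading

class BoundedBuffer:
    """Bounded buffer mirroring the CCR code: 'region b do await full < max'
    becomes a condition-variable wait inside the buffer's monitor lock."""
    def __init__(self, max_size):
        self.buf = [None] * max_size
        self.max = max_size
        self.p = self.c = self.full = 0
        self.cond = threading.Condition()

    def produce(self, m):
        with self.cond:                      # region b do
            while not (self.full < self.max):
                self.cond.wait()             # await full < max
            self.buf[self.p] = m
            self.p = (self.p + 1) % self.max
            self.full += 1
            self.cond.notify_all()

    def consume(self):
        with self.cond:                      # region b do
            while not (self.full > 0):
                self.cond.wait()             # await full > 0
            m = self.buf[self.c]
            self.c = (self.c + 1) % self.max
            self.full -= 1
            self.cond.notify_all()
            return m
```

With one producer and one consumer, FIFO order of the items is preserved even though the buffer only holds `max_size` items at a time.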

Classical Problems in Concurrent Programming: the following are the two classical problems in
concurrent programming

1)? General producer-consumer problem
2)? Reader-writer problem

1) General Producer-Consumer Problem: we are given a set of cooperating processes, some of
which produce data (producers) to be consumed by others (consumers), with a possible disparity between
producer and consumer rates. Devise a synchronizing tool that allows both producers and consumers to
operate concurrently at their respective rates, using a ͞FIFO͟ discipline

Solution: unbounded buffer

>? Program / module producer ʹ consumer
>? Var produced : semaphore {general}
>? Process producer;
>? Begin;
>? While true do;
>? Begin;
>? Produce;
>? Place ʹ in ʹ buffer;
>? Signal (produced);
>? Other ʹ producer ʹ processing;
>? End;
>? End;
>? Process consumer;
>? Begin;
>? While true do;
>? Begin;
>? Wait (produced);
>? Take ʹ from ʹ buffer;
>? Consume;
>? Other ʹ consumer ʹ processing;
>? End;
>? End;
>? {parent.process}
>? Begin;
>? Produced := 0;
>? Initiate producer, consumer;
>? End;

Now what will happen in the case of multiple producers and consumers, as shown in the figure?

In it the management is done in such a way that only a single producer can send data to the buffer at a
time and only a single consumer can receive data at a time; this is done by using a critical section, so
this situation can be handled by using a semaphore

>? Program / module producer, consumer {multiple processes using the same buffer}
>? Produced : semaphore; {general}
>? Mutex : semaphore; {binary}
>? Process producer X;
>? Begin;
>? While true do;
>? Begin;
>? Produce;
>? Wait (mutex);
>? Place ʹ in ʹ buffer;
>? Signal (mutex);
>? Signal (produced);
>? Other ʹ X ʹ processing;
>? End;
>? End;
>? Process consumer Y;
>? Begin;
>? While true do;
>? Begin;
>? Wait (produced);
>? Wait (mutex);
>? Take ʹ from ʹ buffer;
>? Signal (mutex);
>? Consume;
>? Other ʹ Y ʹ processing;
>? End;
>? End;
>? {parent.process}
>? Begin;
>? Produced := 0;
>? Signal (mutex);
>? Initiate producer, consumer;
>? End;

Note that in the code, if the order of the sequence wait (produced); wait (mutex) is changed, the
consumer will take control of the buffer while waiting for data; the producer then cannot place
data in the buffer, so a deadlock occurs: the producer cannot enter the buffer and the consumer has nothing
to consume
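The multiple-producer/consumer scheme can be sketched with a counting semaphore for items plus a binary semaphore for the buffer, keeping the wait order described above. A Python sketch (item ranges and thread counts are arbitrary):

```python
import threading
from collections import deque

buffer = deque()
produced = threading.Semaphore(0)   # counts items available in the buffer
mutex = threading.Semaphore(1)      # binary semaphore protecting the buffer

def producer(items):
    for item in items:
        mutex.acquire()             # wait(mutex)
        buffer.append(item)         # place-in-buffer
        mutex.release()             # signal(mutex)
        produced.release()          # signal(produced)

def consumer(n, out):
    for _ in range(n):
        produced.acquire()          # wait(produced) FIRST ...
        mutex.acquire()             # ... THEN wait(mutex); swapping these
        out.append(buffer.popleft())  # would hold the buffer lock while
        mutex.release()             # waiting for data: deadlock

out = []
workers = [threading.Thread(target=producer, args=(range(0, 10),)),
           threading.Thread(target=producer, args=(range(10, 20),)),
           threading.Thread(target=consumer, args=(10, out)),
           threading.Thread(target=consumer, args=(10, out))]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

All twenty items are consumed exactly once, regardless of how the two producers and two consumers interleave.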

2) Reader-Writer Problem: This pertains to simultaneous reading/writing operations over a
single database. The processes using a shared data structure (SDS) can be classified as reader and
writer processes

>? Reader: it does not modify the shared data structure but only accesses it
>? Writer: it may both read and write data in the shared data structure
>? A number of readers may use the data structure concurrently
>? A writer is given exclusive access to the data structure

Problem: given a universe of readers that read a common data structure, and a universe of writers that
modify the same common data structure, derive a synchronizing protocol among the readers and writers
that ensures consistency of the common data structure while maintaining as high a degree of concurrency as
possible. For it we will use the semaphore

>? Program/module readers ʹ writers
>? Var readcount : integer;
>? Mutex, write : semaphore; {binary}
>? Begin;
>? Process reader X;
>? While true do;
>? Begin;
>? {Obtain permission to enter}
>? Wait (mutex);
>? Readcount := readcount + 1;
>? If readcount = 1 then
>? Wait (write);
>? Signal (mutex);
>? --------
>? Read operations
>? ---------
>? Wait (mutex);
>? Readcount := readcount ʹ 1;
>? If readcount = 0 then
>? Signal (write);
>? Signal (mutex);
>? Other ʹ X ʹ processing;
>? End;
>? End;
>? Process writer Z;
>? Begin;
>? While true do
>? Begin;
>? Wait (write);
>? -----------------
>? Write operations
>? ------------------
>? Signal (write);
>? Other ʹ Z ʹ processing;
>? End;
>? End;
>? {parent.process}
>? Begin;
>? Readcount := 0;
>? Signal (mutex);
>? Signal (write);
>? Initiate readers, writers;
>? End;


Readers have priority over a waiting writer; in this policy there is starvation of writers, as shown in the
code

Hint (improved policy): A new reader should not start if there is a writer waiting (prevents starvation of
writers); all readers waiting at the end of a write should have priority over the next writer
(prevents starvation of readers)
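The readers-writers program above translates almost line for line into semaphores in Python. A sketch (thread counts and the shared datum are arbitrary; each process here runs its body once instead of forever):

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
write = threading.Semaphore(1)   # exclusive access for writers
data = {"value": 0}
seen = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        write.acquire()          # first reader locks out writers
    mutex.release()
    seen.append(data["value"])   # read operation (shared with other readers)
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        write.release()          # last reader lets writers back in
    mutex.release()

def writer():
    write.acquire()
    data["value"] += 1           # write operation, exclusive
    write.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every reader observes some consistent intermediate value between 0 and 3; writers never overlap with readers or with each other.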

Scheduling of Heavily Used Resources: in most cases resources are heavily used by a large group
of users. The operating system must be able to control the scheduling of resources explicitly among
competing processes. The scheduling of heavily used resources can be controlled by associating a
synchronizing variable with the process and maintaining a queue of requests

>? Declaration:
>? Var q : queue of T;
>? Operations: enter(t, q);
>? Remove(t, q);
>? Empty(q) : returns Boolean
>? Likewise: var available : sequence of resources
>? Operations:
>? Get(resource, available);
>? Put(resource, available);
>? Empty(available) : returns Boolean

Define available resources as a sequence of indices of type R; pending requests
are defined by a queue of indices of type P, etc.

Example: scheduling of heavily used resources with a conditional critical region (CCR)

>? Type
>? P = 1͙͙number of processes;
>? R = 1͙͙number of resources;
>? Var v : shared record;
>? Available : sequence of R;
>? Request : queue of P;
>? Turn : array P of Boolean;
>? End;
>? Procedure reserve (process : P; var resource : R);
>? Begin;
>? Region v do;
>? Begin;
>? While empty (available) do
>? Begin;
>? Enter (process, request);
>? Turn (process) := false;
>? Await turn (process);
>? End;
>? Get (resource, available);
>? End;
>? End;
>? Procedure release (resource : R);
>? Var process : P;
>? Begin;
>? Region v do;
>? Begin;
>? Put (resource, available);
>? If not empty (request) then
>? Begin;
>? Remove (process, request);
>? Turn (process) := true;
>? End;
>? End;
>? End;



" 
 # $ %&' (   )* &&+   ' 
 , !
c 
    c 
   !! 

Problem in the above algorithm: if a process ͞B͟ wants to use the resource held by process ͞A͟, it may
happen that another process ͞C͟ enters from the main queue; the resource released by process
͞A͟ may then be acquired directly by process ͞C͟ instead of by process ͞B͟

In order to avoid this we use another synchronizing tool called the ͞Event Queue͟

Event Queue: The event queue can be used by the following method given below

Declaration: var e : event v

This associates an event queue ͞e͟ with the shared variable v. A
process can leave a critical region associated with v and join the event queue ͞e͟ by
executing the standard procedure await(e). Another process can enable all processes in the event queue to
reenter the critical region by executing the standard procedure cause(e).

The await and cause procedures can only be called within the critical regions associated with v; they
exclude each other in time
Example (producer-consumer with event queue):

>? ͚Consumer͛
>? Region v do
>? Begin
>? While not B do await(e);
>? S1;
>? End;
>? ͚Producer͛
>? Region v do
>? Begin;
>? S2;
>? Cause(e);
>? End;
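The await/cause pair can be sketched with a condition variable whose notify_all plays the role of cause. A Python sketch (the class is our own illustration; `await_` is used because `await` is a reserved word in Python):

```python
import threading

class EventQueue:
    """Sketch of the await/cause primitive: await_() suspends the caller
    on the queue; cause() wakes every suspended process, which then
    rechecks its condition (as in 'while not B do await(e)')."""
    def __init__(self):
        self._cond = threading.Condition()
        self._generation = 0          # bumped on every cause()

    def await_(self):
        with self._cond:
            gen = self._generation
            while self._generation == gen:
                self._cond.wait()     # leave the region, join the event queue

    def cause(self):
        with self._cond:
            self._generation += 1
            self._cond.notify_all()   # wake ALL waiters, as cause() requires

e = EventQueue()
woken = []
t = threading.Thread(target=lambda: (e.await_(), woken.append(True)))
t.start()
while t.is_alive():                   # keep causing until the waiter wakes
    e.cause()
    t.join(timeout=0.01)
```

The generation counter ensures a waiter only wakes for a cause() issued after it joined the queue, mirroring the first-come ordering the text wants.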

Example: scheduling of heavily used resources with a simple CR and an event variable

>? Type
>? P = 1͙͙number of processes;
>? R = 1͙͙number of resources;
>? Var v : shared record;
>? Available : sequence of R;
>? Request : queue of P;
>? Turn : array P of event e;
>? End;
>? Procedure reserve (process : P; resource : R);
>? Begin;
>? Region v do
>? Begin;
>? While empty (available) do
>? Begin;
>? Enter (process, request);
>? Await (turn (process));
>? End;
>? Get (resource, available);
>? End;
>? End;
>? Procedure release (resource : R);
>? Var process : P;
>? Begin;
>? Region v do
>? Begin;
>? Put (resource, available);
>? If not empty (request) then
>? Begin;
>? Remove (process, request);
>? Cause (turn (process));
>? End;
>? End;
>? End;

Conclusion: summary of synchronizing tools

Synchronizing problem ʹ synchronizing tool

Mutual exclusion ʹ critical region
Mutual exclusion and exchange of time signals ʹ semaphore
Exchange of messages ʹ message buffer
Priority timing delay ʹ conditional ͞CR͟
Explicit process scheduling ʹ event queue


Deadlock: A high performance system having high resource utilization and parallel operation in a multi-
tasking or multiprogramming system sometimes causes a situation called deadlock, which can be
defined as: ͞A deadlock is a situation where a group of processes are permanently blocked as a result of
each process having acquired a subset of the resources needed for its completion and waiting for release of the
remaining resources held by other processes in the same group͟, i.e.

Example: Assume two concurrent processes P1 and P2, and suppose there are two disks and a printer available in a
computer

>? Cobegin;
>? Process P1;
>? Begin;
>? Wait (printer);
>? Wait (disk);
>? Wait (disk);
>? {2-disk-and-printer processing}
>? Signal (disk);
>? Signal (disk);
>? Signal (printer);
>? End;
>? Process P2;
>? Begin;
>? Wait (disk);
>? Wait (printer);
>? {disk-and-printer processing}
>? Signal (printer);
>? Signal (disk);
>? End;
>? Coend;

In it process P1 waits for a disk while it is already held by process P2, and process P2 waits for the printer while
it is already held by process P1, so this situation is called deadlock


Types of Resources: There are two types of resources stated by Holt, which are the following

1)? Permanent resources
2)? Temporary resources

Permanent Resources: These are the physical devices used repeatedly by processes, e.g. the CPU, main
memory, etc.

Temporary Resources: These are the resources (messages) produced by one process and consumed by
another process, e.g. routines, subroutines, etc.

Def: a deadlock is a situation in which two or more processes are waiting indefinitely for a condition
which will never hold

Necessary Conditions for Deadlock: there are four conditions
necessary for deadlock occurrence, which are the following

1)? Mutual exclusion
2)? Partial allocation
3)? Non-preemptive scheduling
4)? Circular waiting

Mutual Exclusion: The shared resources are acquired and used in a mutually exclusive manner; this means
that a resource can be used by only a single process at a time. When another process needs this resource
while it is held by the first process, it must wait for the resource to be released, which can
contribute to deadlock

Partial Allocation: A process may acquire its resources in a piecemeal manner, i.e. some resources are
allocated to the process initially; when another process needs such a resource it must wait for it to be
released by the first process, so the deadlock situation can arise

Non-preemptive Scheduling: A resource can only be released by the process which has acquired it, so
if another process needs the resource it must wait until it is released by the process
which acquired it

Circular Waiting: The deadlocked processes are involved in a circular chain such that each process holds
one or more resources requested by the next process in the chain; an example is the disk and printer in
the above algorithm

Deadlock Prevention: Hierarchical Resource Allocation (HRA): A deadlock is prevented by allocating the
resources in a hierarchical manner, as given below

L1, L2, ͙, Lmax (levels of the resource hierarchy)

Explanation: in an ͞HRA͟ system the requests and releases of resources of various types are subject to a
fixed order. A resource hierarchy consists of levels L1, L2, ͙, Lmax; each level in turn consists
of a finite number of resource types. When a process has acquired resources at a level Lj it can only
request resources at a level Lk (k > j)

The resources acquired at level Lk must be released before the resources acquired at a lower level Lj (k > j)

Example (Nested Critical Regions): Let a variable v be

V : a variable acquired by a critical region, i.e. v is a permanent resource. Deadlock in nested critical regions
can be prevented by a hierarchical ordering of the common variables v1, v2, ͙, vmax

>? Begin
>? Region v1 do s1;
>? Region v2 do s2;
>? --------------
>? --------------
>? Region vmax do smax;
>? End;
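The hierarchical rule (only request resources at higher levels, release in reverse order) can be sketched with ordered lock acquisition. A Python sketch (the resource names and level numbers are illustrative):

```python
import threading

# Each resource gets a fixed level; every thread must acquire in
# increasing level order, so no circular wait can ever form.
locks = {"v1": threading.Lock(), "v2": threading.Lock(), "v3": threading.Lock()}
LEVEL = {"v1": 1, "v2": 2, "v3": 3}

def acquire_in_order(*names):
    """Acquire the named resource locks in hierarchy order (the k > j rule)."""
    ordered = sorted(names, key=LEVEL.__getitem__)
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_in_reverse(ordered):
    for n in reversed(ordered):       # release higher levels first
        locks[n].release()

held = acquire_in_order("v3", "v1")   # request order does not matter ...
release_in_reverse(held)              # ... acquisition order does
```

If every process obeys `acquire_in_order`, the disk/printer deadlock above cannot happen: both processes would take the lower-level resource first.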

Deadlock in Message Communication:

Example: Let the following four processes ͞P͟, ͞Q͟, ͞R͟ and ͞S͟ be connected in a unidirectional manner

Explanation: Two chains of buffers lead from a master ͞P͟ to a servant ͞S͟. In it ͞P͟ is unable to send to
͞R͟ and ͞R͟ is unable to send to ͞S͟ because the buffers are full; ͞S͟ may be unable to receive from ͞Q͟
and ͞Q͟ is unable to receive from ͞P͟ because the buffers are empty. In it the deadlock situation is the
following

Deadlock situation:

1.? ͚P͛ waits for ͚R͛ to receive
2.? ͚R͛ waits for ͚S͛ to receive
3.? ͚S͛ waits for ͚Q͛ to send

4.? ͚Q͛ waits for ͚P͛ to send

Due to this situation the deadlock is produced

Cause: There is no agreement between master and servant on how the information flows