PAWAR
SUNBEAM, PUNE.
What is an Operating System:
An OS is a program that manages the computer
hardware.
It also provides a basis for application programs
and acts as an intermediary between the
computer user and the computer hardware.
The OS is a resource allocator: it allocates resources
such as CPU time, memory space, file storage space,
and I/O devices.
The OS is a control program: it manages the execution
of user programs to prevent errors and improper
use of the computer.
The one program running at all times on the
computer is the kernel; everything else is
either a system program (shipped with the OS)
or an application program.
Top-Level Computer Components:
1. Processor
2. I/O modules: Buffers
3. Main memory: Instructions, Data
4. System bus
When data are moved over longer distances, to or
from a remote device, the process is known as
communications.
Bus: A bus is a communication pathway connecting
two or more devices. A key characteristic of a bus is
that it is a shared transmission medium.
Multiple devices connect to the bus, and a signal
transmitted by any one device is available for
reception by all other devices attached to the bus.
Typically a bus consists of multiple communication
pathways or lines; each line is capable of
transmitting signals representing binary 1 and
binary 0.
System Bus: A bus that connects the major computer
components (processor, memory, I/O) is called the
system bus.
There are many different bus designs, but on any bus
the lines can be classified into three functional
groups:
1. Data lines (data bus)
2. Address lines(address bus)
3. Control lines
Typical control lines include:
Memory write, Memory read, I/O write, I/O read,
Transfer ACK, Bus request, Bus grant, Interrupt
request, Interrupt ACK, Clock and Reset.
Memory:
1. Internal memory: processor registers, cache, main memory
2. External memory: magnetic disk
Cooperating Process:
A process is cooperating if it can affect or be affected by
any other process in the system.
Any process that shares data with another process is
a cooperating process.
Cooperating processes require an interprocess
communication (IPC) mechanism that allows
them to exchange data and information.
There are two fundamental models of IPC:
1. Shared Memory Model: In this model, a region
of memory that is shared by cooperating processes
is established. Processes then can exchange
information by reading and writing data to the
shared region.
2. Message Passing Model: In this model,
communication takes place by means of messages
exchanged between the cooperating processes.
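As an illustration of the shared-memory model, here is a minimal sketch using Python's multiprocessing.shared_memory module (available from Python 3.8) in place of raw system calls such as shmget or mmap:

```python
from multiprocessing import Process, shared_memory

def writer(name):
    shm = shared_memory.SharedMemory(name=name)   # attach to the region by name
    shm.buf[:5] = b"hello"                        # write into the shared region
    shm.close()

shm = shared_memory.SharedMemory(create=True, size=16)
p = Process(target=writer, args=(shm.name,))
p.start()
p.join()
data = bytes(shm.buf[:5])   # the reader sees what the writer stored
shm.close()
shm.unlink()                # remove the region once both sides are done
print(data)
```

Both processes read and write the same region directly; no kernel call is needed per exchange once the region is mapped.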
Pipe:
A pipe is used to communicate between two processes.
It is a stream-based, unidirectional communication mechanism.
There are two types of pipe:
1. Unnamed Pipe: Used to communicate between
related processes.
2. Named Pipe/FIFO: Used to communicate between
unrelated processes.
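A minimal sketch of an unnamed pipe between related processes; Python's os.pipe and os.fork wrap the underlying POSIX calls (POSIX systems only):

```python
import os

r, w = os.pipe()            # r: read end, w: write end
pid = os.fork()
if pid == 0:
    # Child: writes into the pipe; data flows one way only.
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:
    # Parent: reads what the child wrote.
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode())
```

Because the child inherits the pipe's file descriptors from the parent, unnamed pipes only work between related processes, as noted above.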
Message Queue:
Used to transfer packets of data from one process to
another.
It is a bidirectional IPC mechanism.
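A sketch of message-style IPC, using Python's multiprocessing.Queue as a stand-in for a kernel message queue (msgget/msgsnd in System V terms):

```python
from multiprocessing import Process, Queue

def worker(q):
    # Each put() is a discrete packet (message), not part of a byte stream.
    q.put({"type": 1, "payload": "job done"})

q = Queue()
p = Process(target=worker, args=(q,))
p.start()
msg = q.get()    # blocks until a message arrives
p.join()
print(msg["payload"])
```

Unlike a pipe, the receiver gets whole messages with boundaries preserved, not an undifferentiated byte stream.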
Signals:
A process can send a signal to another process, or the OS
can send a signal to a process. E.g. SIGKILL, SIGSEGV
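A sketch of signal delivery, with a process sending SIGUSR1 to itself (SIGKILL cannot be caught, so a user-defined signal is used here; POSIX systems only):

```python
import os
import signal

received = []

def handler(signum, frame):
    received.append(signum)   # runs when the signal is delivered

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # send SIGUSR1 to this process
print(received)
```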
Sockets:
A socket is defined as a communication endpoint.
Using a socket, one process can communicate with another
process on the same machine or on a different machine in
the network.
Socket = IP Address + Port
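A sketch of socket communication on one machine: the server binds an (IP address, port) endpoint and a client connects to it (port 0 asks the OS to pick any free port):

```python
import socket
import threading

def serve(listener):
    conn, _addr = listener.accept()   # wait for one client
    conn.sendall(b"pong")
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # endpoint = IP address + port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(16)
client.close()
t.join()
listener.close()
print(reply)
```

The same client code works against a server on another machine; only the IP address in the endpoint changes.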
RPC (Remote Procedure Call)
Used to call a method in another process on the same
machine or on a different machine in the network.
Process Coordination/Process Synchronization:
Cooperating processes can either directly share
a logical address space (i.e., both code and data)
or be allowed to share data only through files or
messages.
Concurrent access to shared data may result in
data inconsistency.
Process synchronization provides mechanisms to
ensure the orderly execution of cooperating
processes that share a logical address space, so
that data consistency is maintained.
Critical-Section Problem:
Consider a system consisting of n processes { P0, P1, …,
Pn-1 }. Each process has a segment of code, called a critical
section, in which the process may be changing
common variables, updating a table, writing a file, and
so on. The important feature of this system is that
when one process is executing in its critical section, no
other process is allowed to execute in its critical
section. That is, no two processes are executing in their
critical sections at the same time.
The critical-section problem is to design a protocol that
the processes can use to cooperate. Each process must
request permission to enter its critical section. The
section of code implementing this request is the entry
section. The critical section may be followed by an exit
section. The remaining code is the remainder section.
Semaphores:
To overcome the critical-section problem we can
use a synchronization tool called a semaphore.
A semaphore S is an integer variable that, apart from
initialization, is accessed only through two
standard atomic operations: wait( ) and signal( ).
All modifications to the integer value of the
semaphore in the wait( ) and signal( ) operations
must be executed indivisibly. That is, when one
process modifies the semaphore value, no other
process can simultaneously modify that same
semaphore value.
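The wait()/signal() pair corresponds to acquire()/release() in most thread libraries; a sketch with a binary semaphore guarding a shared counter (threading.Semaphore here, standing in for the textbook wait/signal):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1
counter = 0

def increment(times):
    global counter
    for _ in range(times):
        mutex.acquire()          # wait(): blocks while the semaphore is 0
        counter += 1             # critical section
        mutex.release()          # signal(): lets one waiter proceed

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

The acquire/release pair plays the role of the entry and exit sections from the critical-section protocol above.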
Deadlock:
In a multiprogramming environment, several
processes may compete for a finite number of
resources.
A process requests resources; if the resources
are not available at that time, the process enters
a waiting state.
Sometimes a waiting process is never again able
to change state, because the resources it has
requested are held by other waiting processes.
This situation is called a deadlock.
Necessary conditions for deadlock to occur:
1. Mutual exclusion: at least one resource is non-sharable.
2. Hold and wait: a process holds one resource while waiting
for another resource.
3. No preemption: a resource cannot be forcibly taken away;
it is released only when the holding process completes its task.
4. Circular wait: a circular chain of processes exists, each
waiting for a resource held by the next.
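Breaking any one of these conditions prevents deadlock. For example, circular wait cannot arise if every process acquires resources in one agreed global order, sketched here with two locks:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def worker(name):
    # Every thread takes lock_a before lock_b, so no cycle of waits can form.
    with lock_a:
        with lock_b:
            done.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))
```

Had one thread taken lock_b first while the other took lock_a first, each could end up holding one lock and waiting forever for the other.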
Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a
directed graph called a system resource-allocation
graph.
This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types of
nodes: P = { P1, P2, …, Pn }, the set consisting of all the
active processes in the system, and R = { R1, R2, …, Rm }, the
set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is
denoted by Pi → Rj; it signifies that process Pi has requested
an instance of resource type Rj and is currently waiting for
that resource.
A directed edge from resource type Rj to process Pi is
denoted by Rj → Pi; it signifies that an instance of Rj has been
allocated to process Pi.
A directed edge Pi → Rj is called a request edge, and a
directed edge Rj → Pi is called an assignment edge.
Pictorially, we represent each process Pi as a circle and each
resource type Rj as a rectangle. Since resource type Rj may
have more than one instance, we represent each such instance
as a dot within the rectangle.
If the graph contains no cycle, then no process in the system
is deadlocked. If the graph does contain a cycle, then a
deadlock may exist.
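The cycle check can be sketched as a depth-first search over the graph's directed edges (the vertex names below are illustrative):

```python
def has_cycle(graph):
    # graph: vertex -> list of successors
    # (request edges P -> R, assignment edges R -> P)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color[w] == GRAY:                 # back edge: a cycle exists
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# P1 requests R2; R2 is allocated to P2; P2 requests R1;
# R1 is allocated to P1: a cycle, so possibly a deadlock.
cyclic = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
acyclic = {"P1": ["R1"], "R1": []}
print(has_cycle(cyclic), has_cycle(acyclic))
```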
Methods of Handling Deadlocks:
1. Deadlock Prevention: For a deadlock to occur,
each of the four necessary conditions must hold.
By ensuring that at least one of these conditions
cannot hold, we can prevent the occurrence of
deadlock.
2. Deadlock Avoidance:
1. Resource-Allocation-Graph Algorithm
2. Banker’s Algorithm
3. Recovery from the deadlock:
1. Process termination
2. Resource Preemption
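The safety check at the heart of the Banker's algorithm can be sketched as follows; the matrices are the well-known textbook example with 5 processes and 3 resource types, and `is_safe` is an illustrative name:

```python
def is_safe(available, max_need, allocation):
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finish = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes; reclaim resources
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None          # no process can proceed: the state is unsafe
    return order                 # a safe sequence of process indices

available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))
```

A request is granted only if the state that would result still passes this safety check.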
An address generated by the CPU is commonly
referred to as a logical address, whereas an
address seen by the memory unit - that is, one
loaded into the memory address register of the
memory- is commonly referred to as a
physical address.
The runtime mapping from virtual/logical to
physical memory addresses is done by a
hardware device called Memory Management
Unit (MMU).
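In the simplest scheme, the MMU adds a relocation (base) register to every logical address and traps if the address exceeds the limit register; a minimal sketch (the function and register names are illustrative):

```python
def translate(logical, base, limit):
    # The MMU checks the limit register first, then adds the base register.
    if logical >= limit:
        raise MemoryError("trap: logical address beyond limit")
    return base + logical   # resulting physical address

print(translate(346, base=14000, limit=16384))
```

The user program only ever sees logical addresses in [0, limit); the base register relocates them at run time.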
SWAPPING:
A process must be in memory to be executed. A
process, however, can be swapped temporarily out
of memory to a backing store and then brought
back into memory for continued execution.
If a higher priority process arrives and wants
service, the memory manager can swap out the
lower priority process and then load and execute
the higher priority process.
When the high priority process finishes, the lower
priority process can be swapped back in and
continued. This variant of swapping is sometimes
called roll out, roll in.
Memory Allocation:
1. Contiguous memory allocation: each process is contained in a
single contiguous section of memory.
2. Partitioned allocation: memory is divided into partitions,
each holding one process.
Dynamic Memory Allocation:
1. First fit: allocate the first hole that is big enough.
2. Best fit: allocate the smallest hole that is big enough.
3. Worst fit: allocate the largest hole.
Generally, first fit is faster than the other two algorithms.
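The three strategies can be sketched over a list of free hole sizes, each returning the index of the chosen hole, or None if the request cannot be satisfied:

```python
def first_fit(holes, size):
    # Scan from the front; stop at the first hole that is big enough.
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    # Among the holes that are big enough, pick the smallest.
    fits = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Among the holes that are big enough, pick the largest.
    fits = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
```

First fit stops at the first match, which is why it is usually faster; best fit and worst fit must examine every hole.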
External Fragmentation:
Both the first-fit and best-fit strategies for memory allocation
suffer from external fragmentation. As processes are loaded into
and removed from memory, the free memory space is broken into
little pieces. External fragmentation exists when there is enough
total memory space to satisfy a request but the available spaces
are not contiguous; storage is fragmented into a large number of
small holes.
Internal Fragmentation:
Unused memory that is internal to a partition
5. LOOK:
A practical version of SCAN (similarly, C-LOOK of C-SCAN):
the head travels only as far as the final request in each
direction, then reverses immediately.
If no requests are pending, the head stops.
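A sketch of the LOOK service order: the head sweeps toward the last pending request in its current direction, then reverses (the request numbers are illustrative cylinder positions):

```python
def look(requests, head, direction="up"):
    # Requests at or beyond the head in the sweep direction are served
    # in order; the head reverses at the last request, not the disk edge.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

print(look([98, 183, 37, 122, 14, 124, 65, 67], head=53))
```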