This unit is organized into 4 chapters, and each chapter is organized into a
number of lessons as outlined below.
Learning Objectives
CHAPTER 1
INTRODUCTION
In this chapter we identify the need for a network, and the complexity in building
networks. Lessons in this chapter:
Let us ponder over these questions. Even a child/novice who has some exposure
to computers would know “what” a network is. A simplistic definition of “an
interconnection of different machines” would suffice to start with.
It is interesting to take a look at some of the developments that have led to the
current situation, and at some of the present and proposed applications. That is,
how things have been happening, what is actually being done, and what is likely
to happen in the future. This is more or less the goal of this course.
In general, one can see that the enhancement of network facilities and the
demand of applications have had a mutually contributing effect. On the one hand,
facilities are enhanced to meet the demand of applications; on the other,
enhanced facilities enable one to envisage applications that make use of the
available facilities. For instance, initial networks were used only by universities to
share information among them. Then, these networks were made accessible to
people outside the universities, and with that came a requirement of sending and
sharing data and voice among users. And also more data, which meant more
speed. An associated development therefore was the design of special networks
such as integrated services networks and high-speed ATM networks that can
handle more data and voice at higher speeds. With the kind of bandwidths that
these networks can support, demanding applications like video on demand,
streaming of audio and video, interactive gaming etc., are being pursued.
But, before trying to understand how these networks operate to support the
different applications, it is necessary and useful to understand the requirements
or specifications, both mandatory and desired, that the network should meet.
Further, this list of requirements should incorporate the perspectives of the user
and the designer. In addition, it should also take into account the needs of the
network provider, the one who provides the services to the users.
Let us expand this network a little further. Let us connect 3 machines to form a
network. At the physical level, we need to look at how three machines may be
interconnected. It's simple! Every machine connects to the other two using two
separate links. All machines are fully connected to one another (fig. 1.1). The
rest of the communication is like a two-machine set-up.
Now expand to more than 3 machines, some value 'n'. Look at the
interconnection options:
(i) A fully connected network as in the 3-machine set-up, where each
machine needs n-1 connections (n(n-1)/2 links in all).
(ii) A ring structure, where each node has two connections, to its left and
right neighbours, and they join hands to form a ring (fig. 1.2a).
(iii) A star structure, where there is one central node to which all
machines connect, and they communicate with one another through that
node (fig. 1.2b).
These structures are called topologies in the networking world.
Fig. 1.2 Topologies - a) Ring b) Star
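To see how the wiring cost differs across these topologies, here is a small sketch (the function name is ours, and we assume the star's central hub is a separate device, so every one of the n machines needs a link to it):

```python
def links_needed(n: int, topology: str) -> int:
    """Point-to-point links needed to connect n machines (n >= 3)."""
    if topology == "full_mesh":
        return n * (n - 1) // 2   # every pair of machines gets its own link
    if topology == "ring":
        return n                  # each node joins hands with two neighbours
    if topology == "star":
        return n                  # every machine links to the central hub
    raise ValueError(topology)

for n in (3, 10, 100):
    print(n, links_needed(n, "full_mesh"), links_needed(n, "ring"))
```

For 100 machines a full mesh already needs 4950 links, while a ring or star needs only about 100; this is why the fully connected option stops being attractive as n grows.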
A ring, star or bus network (we will look at these in detail later), on the other
hand, would require less wire, but requires more than one node to share the
cable, and sometimes needs the cooperation of other nodes to send data. This
implies that there should be some mechanism for such sharing and
coordination. We need access mechanisms to determine who gets access to the
cable at any given point of time, what we refer to as "media access control".
Let us pause this discussion here, and consolidate the requirements for our
network. We need
(i) some physical media that determine the basic rate at which
data can be transferred, to what distance, etc.;
(ii) a suitable topology for the network, along with any
additional hardware devices;
(iii) media access control techniques; and
(iv) applications that will use this infrastructure to communicate.
If we want to expand our network even further, we may add more machines to
the same network if they are to be located near-by (say same department or
floor), or have separate networks at different locations and interconnect them.
This implies that we first need some interconnection mechanism to connect
various networks, and then a routing mechanism to determine how to send data
from a node on one network to another node on another network. This will
require what are called switching mechanisms and addressing mechanisms.
Switching mechanisms commonly used are circuit-switching or packet switching
or some variation of these. Circuit switching is something we use in telephone
networks – where an end-to-end path is established before communication takes
place. Packet switching is like your SMS: you send a message that is stored and
forwarded until it is delivered to the destination. An addressing mechanism is
needed to identify 'who' is sending, 'from where', and 'to whom'! That is, we will
need additional devices, called switches or routers, that allow such extensions
or interconnections.
Once we have all this infrastructure, there can be many applications on one
machine that need to share this infrastructure. So, once again, we need some
mux-demux mechanisms on each node that allow such sharing to happen (fig.
1.3).
Fig.1.3 Mux – Demux Structure
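The mux-demux idea can be sketched as a toy dispatcher. The port numbers and handler names below are purely illustrative; the point is just that a key carried with each message lets one shared infrastructure serve many applications:

```python
handlers = {}                      # port number -> application callback

def register(port, callback):
    """An application asks to receive data arriving for its port."""
    handlers[port] = callback

def demux(port, data):
    """Deliver incoming data to whichever application owns the port."""
    handlers[port](data)

inbox = []
register(80, lambda d: inbox.append(("web", d)))
register(25, lambda d: inbox.append(("mail", d)))
demux(80, "GET /index.html")
demux(25, "HELO example")
print(inbox)   # each message reached the right application
```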
And while we construct a network with all these features, two main questions will
arise: what is the cost? And what is the performance? These are two very
pertinent questions. Ideally, we would like to minimize the former and maximize
the latter. These two contradicting goals lead to some interesting
challenges and trade-offs in the design of networks. Yet another question is how
scalable the network is, meaning how many more machines/networks can be
connected without affecting performance.
In summary, the requirements of a network are:
(i) Connectivity
(ii) Scalability
(iii) Cost-effectiveness
(iv) Functionality
(v) Reliability
(vi) Performance
Try these :
1. How would you calculate the cost of setting up a network?
2. Name a few components that you need to purchase to setup a LAN for a
small office.
CHAPTER 2
NETWORK ARCHITECTURE
To handle the complex requirements outlined in the previous lesson, network
designers have developed a ‘Network architecture’ that is used to design and
implement networks.
There are two models that are prevalent- the OSI model and the TCP/IP model.
We’ll first look at some of the general fundamentals of these, before addressing
each separately.
Referring to the discussion in the previous section, our simple two-machine
network can be composed of two layers on each machine: an application layer
on top of a physical connectivity layer provided by the serial interface. The 'n'-
machine network can be seen as a composition of an application layer on top of
a media access layer, which in turn sits on top of a physical layer.
2.1.1 Advantages of layering:
• A protocol gives the how, what and when of the communication that takes
place between two entities. The two entities here could be two adjacent
layers in the tiered/layered approach, or the two peer layers in each node
that are communicating with one another.
• In essence, it gives the
Syntax → How?
Semantics → What? and
Timing → When?
of the data being exchanged between the two entities.
By combining these two concepts – of layering and the use of protocols, the
complex task of networking is broken down into more manageable pieces that
work together giving rise to the idea of a layered network architecture.
Two such network architectures are in vogue - the OSI Architecture and the
TCP/IP Architecture. We next look at the salient features of these two
architectures.
The OSI Architecture is a standard proposed by ISO, many years ago – in 1983.
This network architecture divides the network functionality into 7 layers as shown
below (fig. 2.1).
Application Layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
Fig. 2.1 Architecture as per OSI Model
The physical layer – the bottom-most layer in the architecture, deals with (relates
to) the physical medium, that is actually responsible for transferring the data from
one side to the other by means of signals. The signals may be electrical, electro-
magnetic, optical etc., and accordingly the media used to carry these signals
would be of different characteristics. This layer therefore deals with questions like:
what kind of signals, what kind of media, how much data can be represented
by the signals, what data rate can be achieved for a given signalling rate, how to
transmit the signal, how to recover the signal in the presence of noise, and what
the delays in transmitting and propagating the signal are.
The data link layer which sits just above the physical layer is responsible for
making some sense out of the raw bits that are sent to and received from the
physical layer. This layer groups the bits into some manageable size. A group of
bits at this layer is referred to as a frame. This grouping is done so that we can
give some identity to the data being transmitted, and keep track of how much
data has been sent or received correctly on either side. If a frame of data does
not reach the other end due to some noise or error at the physical layer, then this
needs to be identified and some action needs to be taken. This is referred to as
error handling / control. Similarly, we need to check if the receiver is able to
accept the data, and store it into its buffer at the rate sent by the sender, else we
may again lose the data. A fast sender should not overwhelm a slow receiver.
Ensuring this goes by the name of flow control. Thus, framing, error control, flow
control etc. are functions to be handled by the data link layer. Essentially, this
layer is responsible for making sure that data is sent reliably across a physical
network.
If we are dealing with a two machine network that we discussed earlier, the
physical layer and the data link layer are the two layers that would suffice. But if
we are to deal with more machines interconnected to form a network, we need a
network layer to handle network-related functions. This layer located just above
the DLL, is responsible for routing packets across many networks/nodes.
Mechanisms to identify a path/route to be taken from source to destination, are
part of this layer. All devices such as routers/ switches must have all the
functionalities up to this layer (i.e., PL, DLL and NL).
Now, once the data reaches the destination machine with the help of the network
layer, it has to be sent to the correct recipient application on that machine. That
is, we need a demultiplexing function above the network layer at the destination
machine. Correspondingly, we also need a multiplexing function above the
network layer at the source machine, which allows many programs to share the
services provided by the network layer. Thus, all end machines need to have one
more layer of functionality which allows data from many applications to be
transported across the network, and reach the correct application at the other
end – what we call as the transport layer.
Other than this transport function, there are two other distinct functions which may be
required by all applications. Instead of each application individually building in
these functions, it may be wise to group them and provide it as a service, that
can be used by the applications if necessary. These two functions deal with
establishing a session between the two ends, and taking care of presentation
aspects.
Of these, the bottom three layers are required at all the end machines and the
intermediate routers, whereas all seven layers would have to be present at the
end machines. This is referred to as the 7-layered OSI model. There is also a 5-
layered TCP/IP model (discussed in the next section), which does not have the
presentation and session layers. It assumes that the functions of these two
layers can be taken care of by the applications themselves.
The reverse process takes place on the Rx side – as the data moves from the
lower level layers to the application layer – and finally to the user. Each layer
removes the headers put in by its peer layer, does the necessary processing,
and passes the ‘data’ to the higher layer.
Data link layer : Groups the raw bits into frames and sends them. Issues include:
o Framing and acknowledgement handling
o Error control - Retransmission in case of error.
o Flow Control - making sure that a slow receiver is not flooded by a fast
transmitter.
o Accessing the physical media - Media Access control (MAC) function.
o Normally, implemented on the network adapter.
These 3 lower level layers are implemented on all network nodes & switches.
Applications Layer: The actual applications that really need to communicate from
one system to another. Examples are : FTP, HTTP, etc.
This is also referred to as the Internet architecture, as it is the basis of the all-
too-familiar 'Internet', or as the DoD model, as its development was funded by
the Department of Defense. This model has almost replaced the OSI model.
OSI              TCP/IP
Application      Application
Presentation     (absent)
Session          (absent)
Transport        Transport
Network          Internet
Data Link        Host-to-Network
Physical         (a single layer, or two layers)
The 2 key constituents of this model are the TCP and IP protocols used at the
transport and network layers.
The Transport layer also supports another protocol – the UDP. The application
directly talks to either the TCP or the UDP which in turn use IP.
Among these 2 models, (OSI vs TCP/IP), it is the TCP/IP model that has
flourished. Hence the discussion in the following chapters focuses more on the
TCP/IP model.
Here, we take a look at the bottom-most layer in the network architecture- the
one that is actually responsible for moving /transmitting the bits from one place to
another.
Actually a ‘physical’ connection may or may not exist- in the sense that it could
be a wired or wireless connection. Nevertheless, the job of transferring the bits is
handled at this layer. We will first take a look at wired networks before moving
into the realm of wireless networks.
• Twisted pair - shielded twisted pair (STP) and unshielded twisted pair
(UTP).
• Coaxial Cable.
• Optical Fiber.
Depending on the distance, and the bandwidth required (or in other words,
the data rate required), the physical media is chosen.
Some of the factors that determine these two essential parameters are:
(i) Bandwidth - Higher the bandwidth, higher is the data rate that can be
achieved.
(ii) Attenuation - This limits the distance. Different media have different
attenuation values.
Each receiver could cause some attenuation thereby limiting the distance and/or
data rate.
The data rates and the distances that different media support, subject to these
factors, are discussed below.
Twisted Pair
Coaxial cable
Optical fibre
Wireless Transmission
Terrestrial Microwave
Satellite Microwave
• Use of a satellite to link two or more ground stations: uplink to the satellite and
downlink to the station (at different frequencies), with the satellite acting as
an amplifier/repeater.
• Used for T.V. and long-distance telephone transmission.
• Data rates - up to 100 Mbps.
• Problems due to interference etc.
Infrared
1. What are the different types of guided media available for building
networks ?
2. What is the advantage of optical fibre ?
3. What data rates can be achieved using the various types of media ?
CHAPTER 4
• Framing,
• Flow Control,
• Error Control,
• Addressing and
• Link management.
A frame is a group of bits exchanged between two nodes at the data link layer.
The issue in framing is in determining which set of bits constitute a frame, so that
the sender and receiver are able to identify the frames uniquely. There are
several approaches to this. We look at these in this lesson.
In this set of protocols, a frame is a collection of bytes. This approach was used
on the earliest systems. Specific byte values indicate start of frame, end of frame,
synchronization information and so on. A typical example would be the BISYNC
protocol, whose frame format is given below.
Other than the CRC, which is 2 bytes, all the other control characters are 1 byte each.
This system works fine as long as an ETX character pattern does not appear in
the data stream. If it does, it would indicate a premature end-of-frame signal to
the receiver.
To avoid this, if this pattern occurs in the data stream, it is preceded by another
control character, the DLE character. Now if the DLE character itself appears in
the data stream, it is preceded by one more DLE!
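The escaping idea can be sketched as follows. This is a minimal character-stuffing model, not the full BISYNC format: we assume frames delimited by DLE STX and DLE ETX, with any DLE in the data doubled, which is one common variant of the scheme described above:

```python
DLE, STX, ETX = 0x10, 0x02, 0x03   # standard ASCII control codes

def stuff(payload: bytes) -> bytes:
    """Frame a payload as DLE STX <escaped payload> DLE ETX."""
    body = bytearray()
    for b in payload:
        if b == DLE:
            body.append(DLE)       # a DLE in the data is doubled
        body.append(b)
    return bytes([DLE, STX]) + bytes(body) + bytes([DLE, ETX])

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and undo the DLE doubling."""
    assert frame[:2] == bytes([DLE, STX]) and frame[-2:] == bytes([DLE, ETX])
    body, out, i = frame[2:-2], bytearray(), 0
    while i < len(body):
        if body[i] == DLE:         # escape byte: take the byte that follows
            i += 1
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, DLE, ETX, 0x42])   # payload containing both DLE and ETX
assert unstuff(stuff(data)) == data    # round-trips safely
```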
Yet another approach that can be adopted to specify the end of frame is to have
a 'byte-count' field in the header and read in that many bytes. The danger here
is: what happens if that field gets corrupted? The receiver would go on reading
that many bytes, but would determine at the end (using the error-check field) that
an erroneous frame has been received.
Similarly, think about what would happen if the ETX field in the previous approach
got corrupted!!
A disadvantage of byte-oriented protocols is that while they are very natural for
transmission of text, they are not really suited for arbitrary data, like, say, pixel
values in an image. Bit-oriented protocols are better suited for that.
The common standard for bit-oriented protocol is the HDLC protocol, standing for
High-level Data Link Control protocol.
In this protocol, both the start of frame and the end of frame are denoted by a
unique pattern of bits 01111110, called the flag. This flag pattern is transmitted
even when the link is idle to keep the clocks of the transmitter and the receiver in
synchronization. When two frames are sent back-to-back, a single flag serves as
the end of the first frame and the start flag of the second.
Here again it is possible that this sequence appears amidst data, thus signaling a
premature end-of-frame. To avoid this, a technique called bit-stuffing is used.
This technique works as follows: whenever the transmitter encounters five
consecutive 1s in the data, it inserts (stuffs) a 0 bit after them; the receiver, on
seeing five consecutive 1s followed by a 0, removes that 0.
(Note that this is similar to adding a DLE character in the byte-oriented protocol.
If this is bit-stuffing, that is character-stuffing.)
Look at the errors that can occur in this framing scheme. One frame could get
split into two frames due to an error, or two frames could be combined into one.
Can you identify the situation when these would occur?
The basic HDLC frame format is as below (fig 4.1) :
This is a third approach to framing. The major difference from the previous two
approaches is that here we are talking about fixed-length frames, and the clocks
at the two ends are synchronized. Time-division multiplexing (TDM) may also be
viewed under this category. We'll briefly look at two standards, the T1
carrier (using TDM) and SONET, to understand this.
TDM:
TDM basically refers to multiple data sources sharing a common link on a time-
bound basis. We’ll focus on the framing aspects alone.
In TDM, each frame contains a set of time slots. In each frame, one or more slots
is dedicated to a data source. Each time-slot has a fixed number of bits, say one
byte. Data from each of the sources are interleaved to form the TDM frame. At
the receiver, the frame is demultiplexed and routed to the appropriate destination.
An example of TDM framing is the T1 carrier.
The T1 carrier uses a frame with 193 bits (24 channels of 8 bits each + 1 framing
bit), repeated every 125 µs to give a data rate of 1.544 Mbps.
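The T1 numbers can be checked directly: 193 bits per frame, with one frame sent every 125 µs (that is, 8000 frames per second, the 8 kHz voice sampling rate), gives the quoted rate:

```python
# T1 framing arithmetic: 24 channels x 8 bits + 1 framing bit per frame
bits_per_frame = 24 * 8 + 1          # 193 bits
frames_per_second = 8000             # one frame every 125 microseconds
rate = bits_per_frame * frames_per_second
print(rate)                          # 1544000 bits/s, i.e. 1.544 Mbps
```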
SONET
This is a standard which specifies how data is transmitted over optical networks.
Here again, we’ll focus on the framing related aspects alone.
There are different SONET links used for different data rates. Let us look at the
lowest-speed link, known as STS-1, which runs at 51.84 Mbps. The STS-1
SONET frame is of a fixed length of 810 bytes, normally depicted as 9 rows x 90
columns as shown in the figure (fig 4.3) below.
So that fixes the length of the frame. But how is the start of frame identified? By
the first 2 bytes of the frame. These contain a special bit pattern, and it is this
pattern that is used to detect the beginning of the frame. Again, this pattern
could occur as part of the data payload, so the Rx keeps checking at the end of
every 810 bytes for the desired pattern, to make sure it is in sync.
Flow control is to ensure that the sending station does not send frames at a rate
faster than what can be handled by the receiving station. Normally, the receiver
allocates some buffer of a certain maximum length for the data. It also needs to
do some processing of the data received. So it is possible that the buffer gets full.
And if the transmitter continues to send data, it will have no place at the Rx
and will have to be dropped.
Hence, some kind of flow control mechanism is required to handle this. Two
simple schemes are commonly used-
• Stop and wait flow control and
• Sliding window flow control.
Stop-and-wait is the simplest form of flow control. Let us see how this works. A source
transmits a frame. After the destination receives it, it sends an
acknowledgement (Ack). The source waits for the acknowledgement before it
sends the next frame. The destination can thus control the flow by withholding or
delaying the ack.
Sliding window flow control works as follows:
o Assume that the Rx station has a buffer space for n frames. The Tx station
can send up to n frames without waiting for an Ack. As and when it receives acks
from the Rx, it can send additional frames.
o Again, an Ack need not be sent for every frame. If a sequence number is
given to the frames, an Ack will specify the next sequence number that the Rx is
ready to accept. The Tx can then send up to n frames starting from that number.
Both the Tx and the Rx keep track of a window of frames that need to be sent and
received. And this window keeps sliding as Acks are received – giving it its name
(Fig. 4.4).
Assume window size n = 7, and sequence numbers from 0 to 7. See how the
windows shrink and expand (shown by the arrows) as data is sent and Acks are
received. Window size shrinks from 7 to 4, and then back to 7 and so on.
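The shrink-and-slide behaviour can be traced with two counters. This is a simplification: the sequence numbers are kept unbounded here for clarity, whereas the protocol above wraps them modulo 8:

```python
window_size = 7
base = 0        # oldest unacknowledged frame
next_seq = 0    # next frame the sender may transmit

def usable():
    """How many more frames may be sent before an Ack must arrive."""
    return window_size - (next_seq - base)

for _ in range(3):          # send frames 0, 1 and 2
    assert usable() > 0
    next_seq += 1
print(usable())             # the window has shrunk from 7 to 4

base = 3                    # Ack arrives: "ready for 3", frames 0-2 confirmed
print(usable())             # the window slides: back to 7
```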
• In addition to a positive Ack, the Rx may also send a RNR – receiver not
ready signal, to stop the Tx. When ready, it can again send an Ack or Receiver
Ready Signal to resume the transmission.
• When both sides are transmitting and receiving, the Ack field is normally
piggybacked (literally sent on its back) on the data frame being sent in the
opposite direction.
Understand this protocol well - It’s the basis for reliable transmission in TCP.
Lesson 4.3 Error Control
This refers to the detection of errors that occur during transmission, and their
correction.
Is it possible to detect all errors and correct them? – Difficult to give an absolute
answer, because it depends on many factors - How noisy is the channel? What
is the bit-error rate (BER)? How good is the error detection technique? Etc. What
if the BER is very low? Do we still need error detection? Given the bit error rate, it
is possible to find the probability of a frame being in error, and it often turns out
that even for a low BER, the probability of a frame being in error is considerably
higher. So some error control mechanism is required.
Error correction is a more tedious process, and has higher overheads. Also, as
long as we detect that a frame is in error, we can always have it retransmitted.
Hence we'll focus on error detection techniques, two of them: parity check and
CRC.
This is the simplest of all schemes; you must be familiar with it. Just XOR all
the bits to generate the parity bit and send that along. The Rx checks for a valid
parity bit.
This can detect cases where an odd number of bits are in error.
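A sketch of the scheme (even parity over an arbitrary 7-bit word; the data value is our own choice):

```python
from functools import reduce

def parity_bit(bits: str) -> str:
    """Even parity: the XOR of all data bits."""
    return str(reduce(lambda a, b: a ^ b, (int(b) for b in bits)))

def parity_ok(word: str) -> bool:
    """Rx check: the XOR over data + parity bit must come out 0."""
    return reduce(lambda a, b: a ^ b, (int(b) for b in word)) == 0

code = "1011001" + parity_bit("1011001")     # 7 data bits + parity
assert parity_ok(code)                       # a clean word passes
assert not parity_ok("0" + code[1:])         # one flipped bit is caught
assert parity_ok("01" + code[2:])            # two flipped bits slip through!
```

The last line shows exactly the limitation stated above: an even number of bit errors goes undetected.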
CRC works by modulo-2 division: the Tx appends r zero bits to the message (r
being the degree of the generator polynomial), divides the result by the
generator, and replaces the appended zeros with the remainder R. Here the
generator is 1101 and the message is 1001001.
At Tx:
1111011
1101) 1001001000
1101
1000
1101
1010
1101
1111
1101
1000
1101
1010
1101
111 = R
The new message is 1001001111.
At Rx:
1111011
1101) 1001001111
1101
1000
1101
1010
1101
1111
1101
1011
1101
1101
1101
0 = R.
Hence no error!!
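The long division above can be mechanized. This sketch reproduces the worked example (generator 1101, message 1001001):

```python
def mod2div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division over bit strings; returns the remainder."""
    work = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if work[i] == "1":                   # quotient bit is 1: XOR in divisor
            for j, g in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-(len(divisor) - 1):])

G, M = "1101", "1001001"
R = mod2div(M + "000", G)        # Tx: append 3 zeros, divide, keep remainder
print(R)                         # 111, as in the worked example
print(mod2div(M + R, G))         # Rx divides 1001001111: 000, hence no error!
```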
What is the trick here? It is in choosing the polynomial. The polynomial should
have certain characteristics so that certain kinds of errors can be detected. An
important point to note is that the coefficients of x^n and x^0 should always be 1.
If (x+1) is a factor of the polynomial, then all odd-bit errors can be detected. The
mathematical basis for this comes from the topic of fields and groups. Don't
worry too much about it. Just remember where it comes from, so that if you need
it at some point of time, you can refer to it. Also, our life has been made much
simpler by researchers who have already identified some standard polynomials
which are widely used today. Some of these are:
CRC-16      x^16 + x^15 + x^2 + 1
CRC-CCITT   x^16 + x^12 + x^5 + 1
CRC-32      x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Given that we have a mechanism for detecting errors, what does error control
involve? We’ll look at that now.
4.3.3 ARQ
(ii) The frame could have reached correctly. But the Ack sent by the
receiver may be in error or be lost.
Now, again the transmitter will time out, and resend the frame. Unless
frames carry sequence numbers, the duplicate frame will be accepted by
the receiver as a separate frame.
Go-back-N ARQ:-
Fig 4.7 Go-back N ARQ
Station A sends frames 0 to 5. Frame 3 is lost. Receiver replies are sent for
frames 0-2. When frame 4 arrives at B, it is out of sequence, hence a REJ3
frame is sent. On receiving the REJ3 frame, station A resends all frames
starting from frame 3. On receiving frame 3, B sends an ACK (RR) but it is
lost. However, subsequent acks for frames 4 and 5 arrive. Since ACKs are
cumulative, frames 4 and 5 are taken as ACKed. Meanwhile, frame 6
reaches B, but B does not respond with an ACK for some reason. A timeout
occurs at A for frame 6, and it sends an RR frame with the P-bit set. On
receiving that, B responds with an RR for 5. A sends frame 6 again, and B
ACKs it.
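The retransmission pattern of fig. 4.7 can be traced with a toy model. Timers, the RR/REJ frame formats and the lost-ACK details are all abstracted away; only the go-back step is modelled, and the function name is ours:

```python
def go_back_n(n_frames, lose_once):
    """Send frames 0..n_frames-1 in order. Frames in lose_once are lost on
    their first transmission; an out-of-sequence arrival triggers a REJ,
    making the sender go back and resend from the rejected frame."""
    lose_once = set(lose_once)
    sent_log, expected, i = [], 0, 0
    while expected < n_frames:
        if i >= n_frames:                # nothing new to send: timeout fires
            i = expected                 # and the sender goes back
        sent_log.append(i)               # transmit frame i
        if i in lose_once:
            lose_once.discard(i)         # lost in transit, this time only
            i += 1                       # sender keeps going until the REJ
        elif i == expected:
            expected += 1                # in sequence: receiver accepts it
            i += 1
        else:
            i = expected                 # out of sequence: REJ, go back
    return sent_log

print(go_back_n(7, lose_once=[3]))       # [0, 1, 2, 3, 4, 3, 4, 5, 6]
```

The trace matches the narrative: frame 3 is lost, frame 4 arrives out of sequence, and everything from frame 3 onwards is resent.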
Unit 1 Summary
1. Which of the following best describes encapsulation in the OSI model?
a. Each layer of the OSI model uses encryption to put the PDU from the
upper layer into its data field. It adds header and trailer information that is
available to its counterpart on the system that will receive it.
b. Data always needs to be tunneled to its destination, so encapsulation
must be used.
c. Each layer of the OSI model uses compression to put the PDU from the
upper layer into its data field. It adds header and trailer information that is
available to its counterpart on the system that will receive it.
d. Each layer of the OSI model puts the PDU from the upper layer into its
data field. It adds header and trailer information so that its counterpart on
the system that will receive it can process it correctly.
2. A protocol stack is
a. the way in which the data are passed between layers in the TCP/IP
architecture
b. a set of rules for making a sandwich
c. never implemented because of the lack of ISO standards
d. software that implements the layers of a protocol.
4. The Internet
a. is implemented using the TCP/IP protocol
b. has the attribute of service generality
c. allows multiplexing
d. all of the above
8. From the standpoint of connection, there are two basic types of data
transmission. They are
a. Unconnected
b. Connection-oriented
c. Connectionless
d. a & b
e. b & c
9. The term used to describe the ability of the receiving end to limit the
amount or rate at which data is sent by the transmitting end is
a. Transmit control
b. Flow control
c. Check damming
d. Flow limiting
11. In HDLC, the technique whereby a 0 is inserted after every string of five
consecutive 1s is called
a. Zeroing
b. Synchronizing
c. String breaking
d. Bit stuffing
13. When a receiver must acknowledge every block of data before the next
block is sent, the type of flow control being used is
a. Sliding window
b. Stop and hop
c. Stop and go
d. Stop and wait
14. The name of the flow control protocol in which the sending station resends
the damaged or out-of-sequence frame, and all frames after it, on receipt of
a NAK is
a. Selective reject
b. Selective repeat
c. Go-back-n
d. Sliding window
17. Techniques to ensure that a fast transmitting node does not send data
faster than the receiving node can receive and process are called
a. Parity checking
b. CRC
c. Flow control
d. Error control
19. CRC
a. Is a particular implementation of a more general class of error
detection techniques called polynomial error checking
b. Provides additional bits so that errors can be corrected at the
receiving end
c. Requires a math coprocessor to calculate its value
d. Uses Hamming code to improve accuracy of data
20. A device that operates at the physical layer and is used to regenerate
signals is called
a. Gateway
b. Repeater
c. Switch
d. Bridge
Exercises :
4. List the functions to be performed by a data link control protocol. Why are
they needed?
5. How would you determine the number of bits for the sequence number
field in a sliding window protocol ? Calculate the number of bits in the
sequence number field for a 1 Mbps link with a one-way latency of 1.25
secs, assuming that each frame carries 1KB of data.
(b) With the selective repeat protocol, it is possible for the sender to
receive an ACK for a packet that falls outside its current window.
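As a sanity check on exercise 5, here is one way to size the window. This is a sketch under our own assumptions: 1 KB means 1024 bytes, and the window must cover one frame's transmission time plus a full round trip:

```python
import math

bandwidth = 1_000_000                  # link rate, bits per second
one_way = 1.25                         # one-way latency, seconds
frame_bits = 1024 * 8                  # 1 KB frames

t_frame = frame_bits / bandwidth       # ~8.2 ms to clock one frame out
window = math.ceil((2 * one_way + t_frame) / t_frame)
seq_bits = math.ceil(math.log2(window + 1))
print(window, seq_bits)                # 307 frames in flight; a 9-bit field
```

With different assumptions (1 KB = 1000 bytes, or a window covering only the round trip), the frame count shifts slightly, but the sequence number field still works out to 9 bits.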