
UNIT 1

Introduction – Layers 1 and 2


Building a network – requirements – network architecture – OSI – Internet –
Direct link networks – hardware building blocks – framing – error detection
– reliable transmission

The purpose of this course is to understand the behavior or working of networks


as they are today, and use this knowledge to build or design computer networks.
To this end, we study the existing network architectures, their requirements, their
performance & their issues - solved and yet to be solved. This unit introduces
some basic concepts of simple networking.

This unit is organized into 4 chapters, and each chapter is organized into a
number of lessons as outlined below.

Structure of the unit


1. Introduction
1.1 Networking – What ? Why? & How?
1.2 Requirements
2. Network Architecture
2.1 Layering and protocols
2.2 OSI Architecture
2.3 TCP/IP
3. The Physical connection
3.1 The links
4. The logical connection – Data Link
4.1 Framing
4.1.1 Bit-oriented framing
4.1.2 Byte-oriented framing
4.1.3 Clock-based framing
4.2 Flow control
4.2.1 Stop and wait flow control
4.2.2 Sliding window flow control
4.3. Error control
4.3.1 Parity check
4.3.2 CRC
4.3.3 ARQ mechanisms

Learning Objectives

On completion of this unit you should be able to


• Identify the requirements of a network.
• Understand the concept of layering.
• Specify the layers in the OSI and TCP/IP architecture.
• Identify the functionality of each layer.
• Get an idea of how networks are physically connected.
• Understand the functionalities of the logical link.
• Discuss various framing, flow control and error control schemes.

CHAPTER 1

INTRODUCTION
In this chapter we identify the need for a network, and the complexity in building
networks. Lessons in this chapter:

1.1 Networking – What? Why? & How?


1.2 Requirements

Lesson 1.1 Networking –What? Why? & How?

Let us ponder over these questions. Even a child/novice who has some exposure
to computers would know “what” a network is. A simplistic definition of “an
interconnection of different machines” would suffice to start with.

The question of “Why Networking” has been rendered meaningless in today’s


context, due to the ubiquitous ‘Internet’ and the world - wide web. It would
amount to stating the obvious. Everybody today needs the Internet and the
World-wide-web – to get information on any topic, to book tickets, to send mail to
one another, to watch movies, to listen to music, to play games, and the list goes
on. And, all this happens because of the underlying network.

It is interesting to take a look at some of the developments that have led to the
current situation and some of the present and proposed applications. That is,
how things have been happening, what is actually being done, and what is likely
to happen in the future. This is more or less the goal of this course.
In general, one can see that the enhancement of network facilities and the
demand of applications have had a mutually contributing effect. On the one hand,
facilities are enhanced to meet the demand of applications; on the other,
enhanced facilities enable one to envisage applications that make use of the
available facilities. For instance, initial networks were used only by universities to
share information among them. Then, these networks were made accessible to
people outside of these universities, and with that came a requirement of sending
/ sharing data and voice among users. And also more data, which meant more
speed. An associated development, therefore, was the design of special networks
such as integrated services network and the high-speed ATM networks that can
handle more data and voice at higher speeds. With the kind of bandwidths that
these networks can support, demanding applications like video on demand,
streaming of audio and video, interactive gaming etc., are being pursued.

Looking at the application scene of today, a number of popular applications exist.


FTP, or File Transfer Protocol, still continues to be one of the most useful
applications. The World Wide Web, of course, would win hands down. Audio
and video conferencing is another that has become popular. E-mail and
newsgroups have become part of day-to-day activities, and e-commerce and its
cousin – web services is the mantra today. And all these depend on a robust
network to be really meaningful.
So we’ll try to take a look at what all these things mean from a networking
perspective.

As we go through this course, we will look at the technology, techniques,


architecture, protocols - hardware and software that should give you an
understanding of how all of this really works.

Lesson 1.2 Requirements

But, before trying to understand how these networks operate to support the
different applications, it is necessary and useful to understand the requirements
or specifications - both mandatory and desired, that it should meet. Further, this
list of requirements should incorporate the perspectives of the user and the
designer. In addition, it should also take into account the needs of the network
provider- the one who provides the services to the users.

In order to understand the specifications/ requirements of a network, let us start


with building a small network. The smallest network that we can think of is a two-
machine network. Think of the many ways in which you can connect up two
machines to form a network. What do we need? We need a physical medium – a
copper wire, an optical cable, or a wireless connection – to physically transfer the
data from one machine to another.
We need to have a transmitter on one machine that will send data and a receiver
on the other that will receive the data. The transmitter and receiver should talk
the same language – in terms of how or in what format they are actually
exchanging data. That is, the data must first be put in some format that the
transmitter and receiver have agreed upon, and then sent, again at a mutually
agreed-upon rate, using the physical medium.

A very simple example of such a network would be an interconnection of two


machines through a serial port by means of a serial cable. An application
program running on one machine can then talk to a program running on the other
by writing to the serial port and reading from it. The only requirement would be
that both the serial ports be configured to support the same data format, and data
rate.
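
As a concrete illustration, here is a minimal sketch of such a two-machine serial "network", assuming the third-party pyserial library is installed; the port name and settings are illustrative and depend on your machine.

    # A minimal sketch of the two-machine serial "network" described above.
    # Assumes the third-party 'pyserial' package is installed; the port name
    # and settings below are illustrative and machine-dependent.
    import serial

    # Both ends must agree on the data format and rate: 9600 baud, 8 data bits,
    # no parity, 1 stop bit ("8N1") in this example.
    port = serial.Serial("/dev/ttyS0", baudrate=9600, bytesize=8,
                         parity=serial.PARITY_NONE, stopbits=1, timeout=1)

    port.write(b"hello, peer\n")      # one machine: send a line of data
    reply = port.readline()           # the other machine: read a line (or time out)
    print(reply)
    port.close()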

Let us expand this network a little further. Let us connect 3 machines to form a
network. At the physical level, we need to look at how three machines may be
interconnected. It's simple! Every machine connects to the other two using two
separate links. All machines are fully connected to one another (fig. 1.1). The
rest of the communication is like a two machine set up.

Fig.1.1 Three machines connected to each other through separate links

Now expand to more than 3 machines, some value ‘n’. Look at the
interconnection options –
(i) A fully connected network, as in the 3-machine set-up; we need n-1
connections at each machine (n(n-1)/2 links in total).
(ii) A ring structure, where each node has two connections to its left and
right neighbours – and they join hands to form a ring (fig.1.2a).
(iii) A star structure – where there is one central node to which all
machines connect, and they communicate to one another through that
node etc (fig 1.2b).
These structures are called topologies in the networking world.
Fig. 1.2 Topologies - a) Ring b) Star

A fully connected network would work in a manner similar to our 2-machine


network, since there is a point-to-point (dedicated) link between every pair of
nodes. The main disadvantage of this network is that a lot of wire is required to
establish the network. It can become costly.

A ring, star or bus network (we will look at these in detail later), on the other
hand, would require less wire, but would require more than one node to share the
cable, and sometimes the cooperation of other nodes to send data. This
implies that there should be some mechanism for such sharing and
coordination. We need access mechanisms to determine who gets access to the
cable at any given point of time – what we refer to as “media access control”.

This access control mechanism can actually be viewed as some kind of


multiplexing. The concept of multiplexing is something that we are already
familiar with – many inputs sharing a common output. Here, many inputs are the
nodes that need to send data, and the output is the channel itself. At the other
end of the channel, we need the reverse function – demultiplexing, where, the
data coming on the channel (input) is sent to one of many machines (output).
Understand the concept of mux and demux intuitively. We will keep referring to
this idea again and again at various places. This mux-demux operation can be
done by sharing the channel in terms of time – time-sharing or time-division
multiplexing; in terms of frequency, wavelength etc., giving rise to frequency
division multiplexing (FDM), wavelength division multiplexing (WDM) and so on.
It can also be done on an ad hoc or on-demand basis. We will not worry about
the techniques now. We will understand them better when we come to the actual
places where they are used.
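
To make the mux/demux idea concrete, here is a small sketch (with made-up data and function names) of time-division style multiplexing: data from several inputs is interleaved onto one channel in fixed turns, and the receiver de-interleaves it back.

    # A toy sketch of multiplexing/demultiplexing by time slots.
    # Three "input" byte streams share one "channel" by taking fixed turns
    # (one byte per slot); the receiver splits the stream back by slot position.

    def tdm_mux(streams):
        """Interleave equal-length byte streams, one byte per time slot."""
        return bytes(b for slot in zip(*streams) for b in slot)

    def tdm_demux(channel, n_streams):
        """Reverse operation: slot i of every frame belongs to stream i."""
        return [channel[i::n_streams] for i in range(n_streams)]

    inputs = [b"AAAA", b"BBBB", b"CCCC"]
    muxed = tdm_mux(inputs)            # b"ABCABCABCABC"
    print(muxed)
    print(tdm_demux(muxed, 3))         # [b'AAAA', b'BBBB', b'CCCC']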

Let us pause this discussion here, and consolidate the requirements for our
network. We need
(i) some physical medium, which determines the basic rate at which
data can be transferred, over what distance, etc.;
(ii) a suitable topology for the network, along with any
additional hardware devices;
(iii) media access control techniques; and
(iv) applications that will use this infrastructure to communicate.

If we want to expand our network even further, we may add more machines to
the same network if they are to be located nearby (say, the same department or
floor), or have separate networks at different locations and interconnect them.
This implies that we first need some interconnection mechanism to connect
various networks, and then a routing mechanism to determine how to send data
from a node on one network to another node on another network. This will
require what are called switching mechanisms and addressing mechanisms.
Switching mechanisms commonly used are circuit-switching or packet switching
or some variation of these. Circuit switching is something we use in telephone
networks – where an end-to-end path is established before communication takes
place. Packet switching is like SMS: you send a message that is then
delivered, store-and-forward, to the destination. An addressing mechanism is needed
to identify 'who' is sending, 'from where' and 'to whom'! That is, we will need additional
devices called switches or routers that allow such extensions or interconnections.

Once we have all this infrastructure, there can be many applications on one
machine that need to share this infrastructure. So, once again, we need some
mux-demux mechanisms on each node that allow such sharing to happen (fig.
1.3).
Fig.1.3 Mux – Demux Structure

And while we construct a network with all these features, two main questions will
arise – what is the cost? and what is the performance? These are two very
pertinent questions. Ideally, we would like to minimize the former and maximize
the latter – two contradicting goals, which lead to some interesting
challenges and trade-offs in the design of networks. Yet another question is how
scalable is the network – meaning – how many more machines/ networks can be
connected without affecting performance.

Let us just spend a few minutes on what we actually mean by performance in


this context. The terms used commonly are throughput – how much data is
transferred per unit of time, delay – how long does it take, and reliability – how
reliable is the service, given that failures will occur. All three are important –
some more than others, depending on the application. Just think of the
network applications that you commonly use, and try to figure out which of these
parameters are critical for each of them.
Broadly, these requirements can be categorised into the following:

(i) Connectivity
(ii) Scalability
(iii) Cost-effectiveness
(iv) Functionality
(v) Reliability
(vi) Performance

So, the question is – how do we actually go about designing these networks ?


What has been done and why ? These are what we try to answer as we go on.

Have you understood ?

1. Why is networking important ?


2. How to build a small network ?
3. What are the requirements of a network ?
4. What is meant by topology of a network ?
5. What is meant by multiplexing / demultiplexing operation in the networking
domain?

Try these :
1. How would you calculate the cost of setting up a network?
2. Name a few components that you need to purchase to setup a LAN for a
small office.
CHAPTER 2
NETWORK ARCHITECTURE
To handle the complex requirements outlined in the previous lesson, network
designers have developed a ‘Network architecture’ that is used to design and
implement networks.

There are two models that are prevalent- the OSI model and the TCP/IP model.
We’ll first look at some of the general fundamentals of these, before addressing
each separately.

Lessons in this chapter


2.1 Layering and protocols.
2.2 OSI Architecture.
2.3 TCP/IP.

Lesson 2.1 LAYERING AND PROTOCOLS

Layering is a concept that is common to both the models to be discussed here.


So we’ll look at that first.

Layering can be viewed as different levels of abstraction of the network’s


services; i.e., the different services provided by the network can be abstracted at
different levels. Each abstraction would roughly correspond to a layer. Each
layer performs its defined set of functions and provides certain services to the
layer above it. The idea is that an application need not worry about all the details
of all the functions performed by the network. It needs to interface only with the
topmost layer. Similarly, each layer needs to interface only with two layers – the
one above it, to which it provides services, and the one below it, whose services
it uses. Each layer will have a service interface, through which the layer above it
can access its services. Also, each layer will have a peer interface, which defines
how this layer interacts with the corresponding layer on a remote machine.

Referring to the discussion in the previous section, our simple two machine
network can be composed of two layers on each machine – an application layer
on top of a physical connectivity layer provided by the serial interface. The
'n'-machine network can be seen as a composition of an application layer on top
of a media access layer, which in turn sits on top of a physical layer.
2.1.1 Advantages of layering:

(i) The problem is decomposed into manageable components.
    - There is no need to write one huge monolithic piece of software.
(ii) Modular approach.
    - Any layer or its functions can be changed without affecting the others.
(iii) Flexibility.

Now let us look at what we mean by a protocol.

2.1.2 What is a protocol?

• A protocol gives the how, what & when of the communication that takes
place between 2 entities. The two entities here could be two adjacent
layers in the tiered/layered approach or the two peer layers in each node
that are communicating with one another.
• In essence, it gives the
    - Syntax → How?
    - Semantics → What? and
    - Timing → When?
  of the data being exchanged between the two entities.

By combining these two concepts – of layering and the use of protocols, the
complex task of networking is broken down into more manageable pieces that
work together giving rise to the idea of a layered network architecture.

Two such network architectures are in vogue - the OSI Architecture and the
TCP/IP Architecture. We next look at the salient features of these two
architectures.

Lesson 2.2 OSI Model

The OSI Architecture is a standard proposed by ISO, many years ago – in 1983.
This network architecture divides the network functionality into 7 layers as shown
below (fig. 2.1).

Application Layer
Presentation Layer
Session Layer
Transport Layer
Network Layer
Data Link Layer
Physical Layer
Fig. 2.1 Architecture as per OSI Model

A few lines to consolidate the functions of each of these layers.

The physical layer – the bottom-most layer in the architecture, deals with (relates
to) the physical medium, that is actually responsible for transferring the data from
one side to the other by means of signals. The signals may be electrical, electro-
magnetic, optical etc., and accordingly the media used to carry these signals
would be of different characteristics. This layer therefore deals with questions like
– what kind of signals, what kind of media, how much data can be represented
by the signals, what data rate can be achieved for a given signalling rate, how to
transmit the signal, how to
recover the signal in the presence of noise, what is the delay in transmitting and
propagating the signal etc.

The data link layer which sits just above the physical layer is responsible for
making some sense out of the raw bits that are sent to and received from the
physical layer. This layer groups the bits into some manageable size. A group of
bits at this layer is referred to as a frame. This grouping is done so that we can
give some identity to the data being transmitted, and keep track of how much
data has been sent or received correctly on either side. If a frame of data does
not reach the other end due to some noise or error at the physical layer, then this
needs to be identified and some action needs to be taken. This is referred to as
error handling / control. Similarly, we need to check if the receiver is able to
accept the data, and store it into its buffer at the rate sent by the sender, else we
may again lose the data. A fast sender should not overwhelm a slow receiver.
Ensuring this goes by the name of flow control. Thus, framing, error control, flow
control etc. are functions to be handled by the data link layer. Essentially, this
layer is responsible for making sure that data is sent reliably across a physical
network.

If we are dealing with a two machine network that we discussed earlier, the
physical layer and the data link layer are the two layers that would suffice. But if
we are to deal with more machines interconnected to form a network, we need a
network layer to handle network-related functions. This layer, located just above
the DLL, is responsible for routing packets across many networks/nodes.
Mechanisms to identify a path/route to be taken from source to destination, are
part of this layer. All devices such as routers/ switches must have all the
functionalities up to this layer (i.e., PL, DLL and NL).

Now, once the data reaches the destination machine with the help of the network
layer, it has to be sent to the correct recipient application on that machine. That
is, we need a demultiplexing function above the network layer at the destination
machine. Correspondingly, we also need a multiplexing function above the
network layer at the source machine, which allows many programs to share the
services provided by the network layer. Thus, all end machines need to have one
more layer of functionality which allows data from many applications to be
transported across the network, and reach the correct application at the other
end – what we call as the transport layer.

Other than this transport function, there are two other distinct functions which may be
required by all applications. Instead of each application individually building in
these functions, it may be wise to group them and provide it as a service, that
can be used by the applications if necessary. These two functions deal with
establishing a session between the two ends, and taking care of presentation
aspects.

A few words on what we mean by these terms : Establishing a session helps to


keep track of data being transported for a given application, and prevent
intermittent network failure from affecting the application. Presentation deals with
“representing” the data in a format that is understood correctly by the end
machines. Remember that two machines may represent the same data in
different formats. The job of the presentation layer is to iron out such differences.
Thus we can visualize the seven layers required for the network to function.

Of these, the bottom three layers are required at all the end machines and the
intermediate routers, whereas all seven layers would have to be present at the
end machines. This is referred to as the 7-layered OSI model. There is also a 5-
layered TCP/IP model (discussed in the next section), which does not have the
presentation and session layers. It assumes that the functions of these two
layers can be taken care of by the applications themselves.

Data flow in layered models - Encapsulation:


The data is presented by the user to a specific application layer protocol. This
protocol adds its header to it and passes this on to the next layer. The header
contains control information that the layer wants to convey to its counterpart on
the receiving side – to provide some functionality. The presentation layer treats
this whole package as its data and processes it. It adds its own header and
passes it to the next layer. This way the data travels down to the lowest layer –
the physical layer on one side, gets converted into raw bits and is transmitted.
Each layer is said to encapsulate the packet it receives from the higher layer.

The reverse process takes place on the Rx side – as the data moves from the
lower level layers to the application layer – and finally to the user. Each layer
removes the headers put in by its peer layer, does the necessary processing,
and passes the ‘data’ to the higher layer.

Summary of issues in each layer

Physical layer : is concerned with transmitting raw bits over a communication


link. Issues include:
o How are the bits represented?
o How are they transmitted?
o Speed of transmission.
o Physical media for transmission, etc.

Data link layer : Groups the raw bits into frames and sends them. Issues include:
o Framing and acknowledgement handling
o Error control - Retransmission in case of error.
o Flow Control - making sure that a slow receiver is not flooded by a fast
transmitter.
o Accessing the physical media - Media Access control (MAC) function.
o Normally, implemented on the network adapter.

Network Layer: concerned with routing packets from source to destination.


Issues include:
o ‘Packets’ of data are exchanged between hosts
o Static or dynamic routing.
o Should handle heterogeneous networks.

These 3 lower level layers are implemented on all network nodes & switches.

Transport Layer: An end to end protocol layer. A message is exchanged from


host to host. Issues include:
o Multiplexing of different message channels on the network.
o Error-free point-to-point delivery.
o In-order delivery.
o Flow control between hosts.
Session Layer: Allows users on different hosts to establish a session between
them. Issues include:
o Dialogue control.
o Synchronization - check pointing to resume in case of failure.

Presentation Layer : Concerned with the format of the data exchanged


between the end systems. Issues include:
o Data representation standards.
o Abstract data structure. 

Application Layer: The actual applications that really need to communicate from
one system to another. Examples are : FTP, HTTP, etc.

Lesson 2.3. The TCP/IP model

This is also referred to as the Internet architecture - as this is the basis of the all
too familiar ‘Internet’. Or as the DOD model – as it was funded by the
Department of Defense. This model has almost replaced the OSI model.

This is a 4 or 5 layer model – the 7 layers of OSI are collapsed into 4 or 5 as


shown below (Fig 2.2):

OSI                TCP/IP
-------------      -------------------------------
Application        Application
Presentation       (no separate layer)
Session            (no separate layer)
Transport          Transport
Network            Internet
Data Link          Host-to-Network
Physical           (a single layer, or two layers)

Fig. 2.2 TCP/IP Model


The Presentation and Session layers are done away with – the application layer
is expected to handle these issues. The lowest 2 layers are combined into one in
the 4 layered model, and they are retained as such in the 5 layer model. This
makes sense – as these 2 are normally handled together by the network adaptor
card.

The 2 key constituents of this model are the TCP and IP protocols used at the
transport and network layers.
The Transport layer also supports another protocol – the UDP. The application
directly talks to either the TCP or the UDP which in turn use IP.

Among these two models (OSI vs TCP/IP), it is the TCP/IP model that has
flourished. Hence the discussion in the following chapters focuses more on the
TCP/IP model.

Have you understood ?

1. Name 2 reasons for using layered protocols.


2. List the seven layers of the OSI architecture.
3. What is the primary function of each layer ?
4. Which are the missing layers in the TCP/IP model?
5. How does the data flow from one application on one machine to another
application on some other machine ?
CHAPTER – 3
THE PHYSICAL CONNECTION IN A NETWORK
This chapter provides a quick brush up of the physical layer characteristics and
functions.

Lesson 3.1 The Links

Here, we take a look at the bottom-most layer in the network architecture- the
one that is actually responsible for moving /transmitting the bits from one place to
another.

Actually a ‘physical’ connection may or may not exist- in the sense that it could
be a wired or wireless connection. Nevertheless, the job of transferring the bits is
handled at this layer. We will first take a look at wired networks before moving
into the realm of wireless networks.

The different kinds of guided media commonly used are:

• Twisted pair - shielded twisted pair (STP) & unshielded twisted pair
(UTP).
• Coaxial Cable.
• Optical Fiber.

Depending on the distance, and the bandwidth required (or in other words,
the data rate required), the physical media is chosen.

Some of the factors that determine these two essential parameters are:

(i) Bandwidth - Higher the bandwidth, higher is the data rate that can be
achieved.

(ii) Attenuation - This limits the distance. Different media have different
attenuation values.

(iii) Interference - Susceptibility to interference can totally wipe out the signal


being transmitted.

(iv) Number of receivers - This applies especially in the case of guided media.

Each receiver could cause some attenuation thereby limiting the distance and/or
data rate.

The data rates and the distances that different media support, subject to these
factors, are discussed below.
Twisted Pair

This consists of 2 insulated copper wires twisted together in a regular pattern.


• A number of such pairs are bundled together and wrapped in a tough
protective sheath.
• The twist helps to decrease the cross-talk interference between adjacent
pairs in a cable. Adjacent pairs have different twist lengths.
• Can be used to transmit both analog and digital signals.
• Limited in distance, bandwidth & data-rate.
o For analog - distance of up to 5 to 6 km between amplifiers; bandwidth
up to 250 kHz.
o For digital - distance of 2-3 km between repeaters; bandwidth such that
data rates of up to a few Mbps are possible, and even 100 Mbps over very
short distances.
o 2 types
Unshielded (UTP) → this is the commonly used type.
Shielded (STP).
UTP:
o eg. Telephone cable
o Inexpensive
o 2 categories – CAT 3 → up to 16 MHz (voice grade), and CAT 5 →
up to 100 MHz → more twists per inch, more expensive.
o Used extensively in LANs.

Coaxial cable

• 2 conductors – 1 inner wire surrounded by an outer conductor connected


to GND with insulation / dielectric material between them.
• Can be used over longer distances than twisted pair.
o Baseband cable – used for digital signalling.
o Broadband cable – used for analog signalling, up to about 400 MHz.
o Repeater/amplifier spacing of roughly 1 to 10 km.
o e.g. T.V. cable
- used in LANs.

Optical fibre

• A very thin flexible medium (made of glass/plastic) capable of conducting


an optical ray.
• Optical transmission of data.
• Has an inner core, a cladding on top and a jacket.
o Laser or LED light source is used.
o Light travels due to principle of total internal reflection.
• Many advantages.
o Very little loss.
o High data rates over very long distances (2 Gbps).
o Light weight.
o Used in long haul trunks (to carry tens of thousands of voice channels),
and MANs.
o Becoming increasingly popular.

Wireless Transmission

Also referred to as unguided media transmission.


Transmission takes place in the microwave spectrum or infrared spectrum.

Microwave spectrum (GHz range) →

(i) Terrestrial microwave (point-to-point transmission, 2-40 GHz).
(ii) Satellite microwave (point-to-point and broadcast transmission).

Terrestrial Microwave

• Focusses a narrow beam of signal to achieve line-of-sight Tx to receiving


antenna.
• Usually microwave antennae are located at substantial height above
ground level to enable this.
• Used in long-haul telecommunications and in short point-to-point links
between buildings.
• Data rates of up to 100 Mbps.

Satellite Microwave

• Use of satellite to link two or more ground stations. Uplink to satellite and
down link to station (at different frequencies) with satellite acting as
amplifier /repeater.
• Used for T.V. and long distance telephone Tx.
• Data rates – up to 100 Mbps.
• Problems due to interference etc.

Infrared

• Using transmitters / receivers that modulate non-coherent infrared light


(LEDs or Lasers)
• Line-of-sight communication – does not cross walls → a kind of security!
• Useful in LAN scenario.

Have you understood ?

1. What are the different types of guided media available for building
networks ?
2. What is the advantage of optical fibre ?
3. What data rates can be achieved using the various types of media ?
Chapter 4

THE LOGICAL CONNECTION – DATA LINK


The physical layer transports a stream of bits. But how does one manage or
control this flow of bits? This is handled by the data link layer that sits just above
the physical layer. At this layer, the focus shifts to organising the stream of bits
into blocks, and making sure that these blocks reach the other end – or at least
knowing if they have or not. The issues here are :

• Framing,
• Flow Control,
• Error Control,
• Addressing and
• Link management.

We take a look at these in this chapter.

Lesson 4.1 FRAMING

A frame is a group of bits exchanged between two nodes at the data link layer.
The issue in framing is in determining which set of bits constitute a frame, so that
the sender and receiver are able to identify the frames uniquely. There are
several approaches to this. We look at these in this lesson.

4.1.1 Byte – oriented protocols

In this set of protocols, a frame is a collection of bytes. This approach was used
on the earliest systems. Specific byte values indicate start of frame, end of frame,
synchronization information and so on. A typical example would be the BISYNC
protocol whose frame format is given below,

SYN | SYN | SOH | Header | STX | DATA | ETX | CRC

SYN - Indicates synchronization information.

SOH - Indicates the start-of-header character.


STX - Indicates start of text.
ETX - Indicates end of text.
CRC - Cyclic redundancy check (A code used for error detection.)

Other than CRC, which is 2 bytes, all other control characters are of 1 byte.
This system works fine as long as an ETX character pattern does not appear in
the data stream. If it does, it would indicate a premature end-of-frame signal to
the receiver.

To avoid this, if this pattern occurs in the data stream, it is preceded by another
control character – the DLE character. Now if the DLE character itself appears in
the data stream, it is preceded by one more DLE!
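
A small sketch of this character-stuffing idea follows; it is simplified (it handles only the payload transparency, not the full BISYNC framing) and uses the standard ASCII codes for DLE and ETX.

    # Simplified character (byte) stuffing, following the DLE idea described above.
    # Any DLE or ETX byte appearing inside the payload is preceded by a DLE so that
    # the receiver never mistakes payload bytes for the real end-of-frame marker.
    DLE, ETX = 0x10, 0x03   # standard ASCII codes for DLE and ETX

    def stuff(payload: bytes) -> bytes:
        out = bytearray()
        for b in payload:
            if b in (DLE, ETX):
                out.append(DLE)      # escape the special byte
            out.append(b)
        return bytes(out)

    def unstuff(data: bytes) -> bytes:
        out, i = bytearray(), 0
        while i < len(data):
            if data[i] == DLE:       # drop the escape, keep the next byte literally
                i += 1
            out.append(data[i])
            i += 1
        return bytes(out)

    msg = bytes([0x41, ETX, 0x42, DLE, 0x43])
    assert unstuff(stuff(msg)) == msg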

Yet another approach that can be adopted to specify the end of frame, is to have
a ‘byte-count’ field in the header and read-in that many bytes. The danger here
is-‘What happens if that field gets corrupted?’ The receiver would go on reading
as many bytes, but would determine at the end (Using the error- check field) that
an erroneous frame has been received.

Similarly, think about what would happen if the ETX field in the previous approach
got corrupted!!

A disadvantage of byte-oriented protocols is that while they are very natural for
transmission of text, they are not really suited for any arbitrary data, like, say, pixel
values in an image. The bit-oriented protocols are more suited for that.

4.1.2 Bit-Oriented protocols:

The common standard for bit-oriented protocols is HDLC, which stands for
High-level Data Link Control.

In this protocol, both the start of frame and the end of frame are denoted by a
unique pattern of bits 01111110, called the flag. This flag pattern is transmitted
even when the link is idle to keep the clocks of the transmitter and the receiver in
synchronization. When two frames are sent back-to-back, a single flag serves as
the end of the first frame and the start flag of the second.

Here again it is possible that this sequence appears amidst data, thus signaling a
premature end-of-frame. To avoid this, a technique called bit stuffing is used.
This technique works as follows :

Whenever 5 consecutive ones are transmitted, as part of the message, a 0 is


inserted. On the receiver side, a zero following 5 consecutive ones is removed.

(Note that this is similar to adding a DLE character in the byte-oriented protocol:
if this is bit stuffing, that is character stuffing.)
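
A minimal sketch of bit stuffing and de-stuffing, operating on strings of '0'/'1' characters for readability (the function names are made up for illustration):

    # HDLC-style bit stuffing, sketched on strings of '0'/'1' for readability.
    # After five consecutive 1s in the payload a 0 is inserted, so the flag
    # pattern 01111110 can never appear inside the data.

    def bit_stuff(bits: str) -> str:
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:             # five 1s in a row: insert a 0
                out.append("0")
                run = 0
        return "".join(out)

    def bit_unstuff(bits: str) -> str:
        out, run, i = [], 0, 0
        while i < len(bits):
            b = bits[i]
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:             # the bit following five 1s is a stuffed 0
                i += 1               # skip it
                run = 0
            i += 1
        return "".join(out)

    data = "011111101111101"
    print(bit_stuff(data))   # a 0 appears after each run of five 1s
    assert bit_unstuff(bit_stuff(data)) == data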

Look at the errors that can occur in this framing scheme. One frame could get
split into two frames due to an error, or two frames could be combined into one.
Can you identify the situation when these would occur?
The basic HDLC frame format is as below (fig 4.1) :

Flag     | Address       | Control      | Message  | FCS           | Flag
8 bits   | 8 bits,       | 8 or 16 bits | Variable | 16 or 32 bits | 8 bits
         | extendable    |              |          |               |

Address field – Identifies the transmitting/receiving station.


Control field – Identifies the type of the frame – as an information frame or a
control frame.
FCS – Frame check sequence-16 bit or 32 bit CRC.

Fig 4.1 HDLC frame format

We’ll look into other details of HDLC, later.

4.1.3 Clock- based framing

This is a third approach to framing. The major difference from the previous 2
approaches is that here we are talking about fixed-length frames, and the clocks
at the 2 ends are synchronized. Time-division multiplexing (TDM) may also be
viewed under this category. We'll briefly look at 2 standards – the T1
carrier (using TDM) and SONET – to understand this.

TDM:

TDM basically refers to multiple data sources sharing a common link on a time-
bound basis. We’ll focus on the framing aspects alone.

In TDM, each frame contains a set of time slots. In each frame, one or more slots
is dedicated to a data source. Each time-slot has a fixed number of bits, say one
byte. Data from each of the sources are interleaved to form the TDM frame. At
the receiver, the frame is de-multiplexed and routed to appropriate destination.

An example: TDM in the T1 carrier

The T1 carrier uses a frame of 193 bits – 24 channels of 8 bits each + 1 control
bit – repeated every 125 µs, to give a data rate of 1.544 Mbps.
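
As a quick check of these numbers: 8000 frames are sent per second (one every 125 µs), so

    193 bits/frame × 8000 frames/s = 1,544,000 bit/s = 1.544 Mbps,

and each voice channel gets 8 bits × 8000 frames/s = 64 kbps.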

The frame format would be as follows (fig 4.2):

Fig. 4.2 TDM format


No explicit data link control mechanisms are needed at the frame level. Whatever
link control is required by the individual stations is provided on a per-channel
basis. However, some basic synchronization has to be provided between the
multiplexer at the transmitter end and the demux at the Rx end. This is provided
by the control bit attached to each frame. A pattern of alternating 1s and 0s in
consecutive frames (101010…) is used as the control sequence. This pattern is
unlikely to be sustained at a single bit position on any data channel. So the
receiver looks for this sequence in consecutive frames to establish
synchronization.

SONET

This is a standard which specifies how data is transmitted over optical networks.
Here again, we’ll focus on the framing related aspects alone.

There are different SONET links used for different data rates. Let us look at the
lowest speed link, known as STS-1, which runs at 51.84 Mbps. The STS-1
SONET frame is of a fixed length of 810 bytes – normally depicted as 9 rows × 90
columns as shown in the figure (fig 4.3) below.

Fig. 4.3 Outline of SONET frame format

So that fixes the length of the frame. But how is the start of frame identified? – By
the first 2 bytes of the frame. The first 2 bytes contain a special bit pattern, and it
is this pattern that is used to detect the beginning of the frame. Again this pattern
could occur as part of the data payload. So the Rx keeps checking at the end of
every 810 bytes for the desired pattern, to make sure it is in sync.
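
A rough sketch of that hunting procedure is given below; the two framing byte values used (0xF6 and 0x28, the usual A1/A2 values) should be treated as illustrative, and the function is an assumption, not part of any SONET standard text.

    # A rough sketch of SONET frame alignment ("hunting"), as described above.
    # The receiver looks for the 2-byte framing pattern and only declares itself
    # in sync when the pattern repeats at the same offset every 810 bytes.
    FRAME_LEN = 810
    A1, A2 = 0xF6, 0x28      # conventional framing byte values; illustrative here

    def find_alignment(stream: bytes, frames_to_confirm: int = 3) -> int:
        """Return the offset of the start of frame, or -1 if no alignment found."""
        for offset in range(len(stream) - FRAME_LEN * frames_to_confirm):
            if all(stream[offset + k * FRAME_LEN] == A1 and
                   stream[offset + k * FRAME_LEN + 1] == A2
                   for k in range(frames_to_confirm)):
                return offset
        return -1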

Lesson 4.2 Flow Control

Flow control is to ensure that the sending station does not send frames at a rate
faster than what can be handled by the receiving station. Normally, the receiver
allocates some buffer of a certain maximum length for the data. It also needs to
do some processing of the data received. So it is possible that the buffer gets full.
And if the transmitter continues to send, the data will have no place at the Rx
and will have to be dropped.

Hence, some kind of flow control mechanism is required to handle this. Two
simple schemes are commonly used-
• Stop and wait flow control and
• Sliding window flow control.

4.2.1 Stop and Wait:

It is the simplest form of flow control. Let us see how this works. A source
transmits a frame. After the destination receives it, it sends an
acknowledgement (Ack). The source waits for the acknowledgement before it
sends the next frame. The destination can thus control the flow by withholding or
delaying the ack.

While this is very simple to implement, it has major disadvantages.


• It is very slow, because we must wait for the ACK before sending the
next frame.
• It is wasteful of bandwidth, because we cannot send data even when
the line is free.
• That leads to poor link utilization, as the rough calculation below illustrates.
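
To get a feel for the numbers (purely illustrative figures): suppose a 1 Mbps link, frames of 1000 bytes (8000 bits), and a one-way propagation delay of 20 ms. One frame takes 8 ms to transmit, but the sender must then stay idle for about 40 ms (the frame's propagation plus the ACK's return, ignoring the ACK's own transmission time) before it can send again. The link is therefore busy only about 8 / (8 + 40) ≈ 17% of the time.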

4.2.2 Sliding Window:

In this scheme, multiple frames are allowed to be in transit, thereby overcoming


the inefficiency of the stop and wait protocol.

It works as follows :

o Assume that the Rx station has a buffer space for n frames. The Tx station
can send up to n frames without waiting for an Ack. As and when it receives acks
from the Rx, it can send additional frames.
o Again, ACK need not be sent for every frame. If a sequence number is
given to the frames, an Ack will specify the next sequence number that the Rx is
ready to accept. The Tx can then send up to n frames starting from that number.
Both the Tx and the Rx keep track of a window of frames that need to be sent and
received. And this window keeps sliding as Acks are received – giving it its name
(Fig. 4.4).

Fig 4.4 Sliding window flow control

• Every frame, therefore, has to have a sequence number field. This is
clearly an overhead, and we need to determine the number of bits to be
allocated to this field. It is interesting to note that a continuously running
sequence number need not be used. Instead, the sequence number can
wrap around to 0 after a fixed count related to the window size n.
• Thus, if n = 7, a 3-bit field can be used and the frames numbered from 0 to 7.
• Try to figure out why we need 8 sequence numbers for a window size of 7.

A typical sequence of transfers is shown in the figure (Fig. 4.5) below.


Fig 4.5 A sequence of data transfer using sliding window flow control

Assume window size n = 7, and sequence numbers from 0 to 7. See how the
windows shrink and expand (shown by the arrows) as data is sent and Acks are
received. Window size shrinks from 7 to 4, and then back to 7 and so on.

• In addition to a positive Ack, the Rx may also send an RNR (Receiver Not
Ready) signal to stop the Tx. When ready, it can again send an Ack or a Receiver
Ready signal to resume the transmission.
• When both sides are transmitting and receiving, the Ack field is normally
piggybacked (literally sent on its back) on the data frame being sent in the
opposite direction.
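
A small sketch of the sender-side window bookkeeping is shown below (window size 7, 3-bit sequence numbers; the class and method names are made up, and timers/retransmission are deliberately left out since this lesson is about flow control only).

    # Sender-side bookkeeping for sliding window flow control (simplified sketch).
    WINDOW = 7
    MODULUS = 8                      # sequence numbers 0..7

    class SlidingWindowSender:
        def __init__(self):
            self.base = 0            # oldest unacknowledged sequence number
            self.next_seq = 0        # sequence number for the next new frame
            self.in_flight = 0       # frames sent but not yet acknowledged

        def can_send(self) -> bool:
            return self.in_flight < WINDOW

        def send_frame(self) -> int:
            assert self.can_send(), "window closed - must wait for an ACK (RR)"
            seq = self.next_seq
            self.next_seq = (self.next_seq + 1) % MODULUS
            self.in_flight += 1
            return seq               # caller transmits the frame carrying this number

        def receive_rr(self, next_expected: int):
            """RR(next_expected) cumulatively acks everything before it."""
            acked = (next_expected - self.base) % MODULUS
            self.in_flight -= acked
            self.base = next_expected

    sender = SlidingWindowSender()
    sent = [sender.send_frame() for _ in range(4)]    # frames 0,1,2,3 go out
    sender.receive_rr(2)                              # receiver is ready for frame 2
    print(sent, sender.in_flight, sender.can_send())  # [0, 1, 2, 3] 2 True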

Understand this protocol well - It’s the basis for reliable transmission in TCP.
Lesson 4.3 Error Control

This refers to the detection of errors that occur during transmission, and their
correction.

Is it possible to detect all errors and correct them? – Difficult to give an absolute
answer, because it depends on many factors - How noisy is the channel? What
is the bit-error rate (BER)? How good is the error detection technique etc. What if
the BER is very low? Do we still need error detection? Given the bit error rate, it
is possible to find the probability of a frame being in error. And it often turns out
that even for low BER, the probability of a frame being in error is of a higher
magnitude. So some error control mechanism is required.

Error correction is a more tedious process, and has higher overheads. Also, as
long as we can detect that a frame is in error, we can always have it retransmitted.
Hence we'll focus on error detection techniques – two of them: parity check
and CRC.

4.3.1 Parity Check

This is the simplest of all schemes – you must be familiar with this. Just XOR all
the bits to generate the parity bit and send that along. Rx checks for the valid
parity bit.
This can detect cases where an odd number of bits are in error.
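
For instance, a small sketch over a string of bits (even parity):

    # Even parity: the parity bit is the XOR of all data bits, so the transmitted
    # block always carries an even number of 1s.
    def parity_bit(bits: str) -> str:
        p = 0
        for b in bits:
            p ^= int(b)              # XOR all the bits together
        return str(p)

    def check_even_parity(block: str) -> bool:
        return parity_bit(block) == "0"   # XOR of data + parity bit must be 0

    data = "1011001"
    block = data + parity_bit(data)       # append the parity bit
    assert check_even_parity(block)
    corrupted = "0" + block[1:]           # flip one bit -> detected
    assert not check_even_parity(corrupted)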

4.3.2 Cyclic Redundancy Check (CRC)

This is one of the powerful error-detecting codes. This is based on modulo-2


arithmetic. It basically works as follows :
• The message bits treated as a polynomial with binary coefficients is
divided by a CRC polynomial. The remainder obtained is the CRC, which
is appended to the message bits. At the Rx, the CRC appended message
is treated as a polynomial and divided by the same CRC polynomial.
A remainder of zero indicates no error; otherwise an error has occurred.
• If M is a message of k bits, and P is the CRC polynomial of n+1 bits, the
procedure is as follows:
Construct T = 2^n·M (i.e., append n zero bits to M).
Divide T by P (modulo-2 division).
The n-bit remainder F is the frame check sequence, or the CRC.
Transmit T + F, i.e. 2^n·M + F.
At the Rx,
divide 2^n·M + F by P.
Since 2^n·M leaves remainder F when divided by P, the remainder of
(2^n·M + F)/P is F + F = 0 (modulo-2); a zero remainder means no error,
else error!!
Let us look at an example. Consider a message M = 1001001, and a CRC
polynomial P = 1101.

At Tx:

          1111011
1101 ) 1001001000
       1101
        1000
        1101
         1010
         1101
          1111
          1101
            1000
            1101
             1010
             1101
              111  = R

(The step where the leading bit is 0 contributes a 0 to the quotient, with no
subtraction, and is not shown.)

The new message is 1001001111.

At Rx:

          1111011
1101 ) 1001001111
       1101
        1000
        1101
         1010
         1101
          1111
          1101
            1011
            1101
             1101
             1101
                0  = R

Hence no error!!
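
The same modulo-2 division is easy to express in a few lines of code. This sketch simply reproduces the worked example above on bit strings; it is not an efficient implementation, and the function name is made up for illustration.

    # Modulo-2 (XOR) long division used in CRC, on bit strings for readability.
    # Reproduces the worked example: M = 1001001, P = 1101 -> CRC = 111.

    def mod2_remainder(dividend: str, divisor: str) -> str:
        bits = list(dividend)
        n = len(divisor) - 1                      # number of CRC bits
        for i in range(len(bits) - n):
            if bits[i] == "1":                    # quotient bit is 1: XOR in the divisor
                for j, d in enumerate(divisor):
                    bits[i + j] = str(int(bits[i + j]) ^ int(d))
        return "".join(bits[-n:])                 # the last n bits are the remainder

    M, P = "1001001", "1101"
    crc = mod2_remainder(M + "000", P)            # append n zero bits, then divide
    print(crc)                                    # '111'
    transmitted = M + crc                         # '1001001111'
    print(mod2_remainder(transmitted, P))         # '000' -> no error detected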

What is the trick here – it is in choosing the polynomial. The polynomial should
have certain characteristics so that certain kinds of errors can be detected. An
important point to note is that the coefficients of x^n and x^0 should always be 1.
If (x+1) is a factor of the polynomial, then all odd-bit errors can be detected. The
basis (Mathematics) for this comes from the topic of Fields and Groups. Don’t
worry too much about it. Just remember where it comes from, so that if you need
it at some point of time, you can refer to it. Also, our life has been made much
simpler by researchers who have already identified some standard polynomials
which are widely used today. Some of these are :

CRC-16     : x^16 + x^15 + x^2 + 1
CRC-CCITT  : x^16 + x^12 + x^5 + 1
CRC-32     : x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Another interesting feature of this technique is that it can be easily implemented in


hardware using a feedback shift register, so that this calculation can actually be
done on-the-fly. Note that otherwise, this would be an additional overhead in
transmitting each frame of data.

Given that we have a mechanism for detecting errors, what does error control
involve? We’ll look at that now.

Actually there are 2 categories of errors. One is receiving an erroneous frame


which we detect using an FCS or some such scheme. Another is that the frame
might just not be received- lost frames!! Both these situations need to be
handled by the error control mechanism.

4.3.3 ARQ

Three common mechanisms exist. They are referred to as automatic repeat


request (ARQ) mechanisms.
They are :
o Stop and wait ARQ
o Go-back–N ARQ
o Selective reject ARQ.

Stop and wait ARQ:


• This is based on the stop and wait flow control technique.
• In stop and wait, a transmitter transmits a frame and then waits for an ack
to come from the receiver, before it transmits another frame.
• During this, 2 types of errors could occur.
(i) The frame sent may be in error which is detected at the destination.
Or the frame may be lost in transit. In either case, the Rx does not
send an Ack. The transmitter would be waiting forever!

To break this “Never-ending-wait”, a timer is associated with each


frame that is sent. If an Ack is not received within a certain time,
and the timer expires, that frame is resent.

(ii) The frame could have reached correctly. But the Ack sent by the
receiver may be in error or be lost.
Now, again the transmitter will time-out, and resend the frame. The
duplicate frame will be accepted by the receiver as a separate
frame.

To avoid this, frames may be alternately labeled with a 0 and 1, and


the corresponding Acks as ack1 and ack0. This simple scheme will
take care of the duplicate frame problem and provide a “slow but
steady” kind of solution. This mechanism is depicted in the following
figure (fig 4.6).
Fig 4.6 Stop and Wait ARQ
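
A compact sketch of this alternating 0/1 scheme is shown below. It is a toy simulation: the "channel" is a made-up function that randomly loses a frame or its ACK, and a lost reply simply stands in for a timer expiry (no real clock is used).

    # Stop-and-wait ARQ with alternating sequence numbers, as a toy simulation.
    # unreliable_send(frame) returns the ACK sequence number, or None when the
    # frame or its ACK is "lost" (which plays the role of a timeout here).
    import random

    def unreliable_send(frame, loss_prob=0.3):
        """Pretend channel: loses the frame or its ACK with some probability."""
        if random.random() < loss_prob:
            return None                       # nothing comes back -> timeout
        return 1 - frame["seq"]               # ACK carries the next expected seq

    def send_reliably(messages):
        seq = 0
        for data in messages:
            frame = {"seq": seq, "data": data}
            while True:
                ack = unreliable_send(frame)
                if ack == 1 - seq:
                    break                     # correct ACK received
                # timeout or wrong ACK: resend the same frame (same seq number)
            seq = 1 - seq                     # alternate 0/1 for the next frame

    send_reliably(["frame A", "frame B", "frame C"])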

Go-back-N ARQ:-

This goes with the sliding-window flow control mechanism.

Recollect that in the sliding window technique, the number of un-acknowledged


frames is determined by the window size N.
Basic Mechanism: Following cases occur.

• While no errors occur, the destination will acknowledge the incoming


frames by sending an RR frame (Receive Ready)
• If it detects an error in frame i, it will send a REJ i frame (Rejection for i),
and discard all frames that come after that.
• The source on receiving the REJ for i, will resend all the frames that it has
already sent, starting from the ith frame.
• If frame i is lost in transit, but frame i+1 is received by the
destination, the destination rejects the frame that has been received "out-of-order"
by sending a REJ i. The source will then resend frames starting from i.
• Suppose frame i is lost in transit, and there is no RR or REJ from the receiver
– or, let's say, the RR or REJ was lost. If the transmitter also had no further
frames to send, its timer for i would expire. Then it sends an RR frame
to the destination with what is known as a poll bit or P-bit set. This is a
kind of query to the destination. The destination will then respond with an
RR for the last frame that it received correctly, and then the source can
start resending from the next frame.
• If the acknowledgement RR(i+1) sent for a frame i is lost, but the source
has sent further frames which have been received by the destination, then
an RR for a subsequent frame may be received at the source before the
timer for i expires. In that case, the sequence just continues, because
acks are cumulative – an RR for j acks all frames up to j-1.
• However, what happens if the sender's timer expires, it sends an RR frame with the
P-bit set, and this frame or the response to it gets lost? To take care of
this, there is a P-bit timer associated with a P-bit frame. If this timer
expires, the P-bit frame is resent. This is done for a set maximum number
of times before the transmitter actually gives up and resets the entire
procedure or reports an error.

• An example showing a sequence of operations is depicted in the figure


below (fig 4.7).

Fig 4.7 Go-back N ARQ

Station A sends frames 0 to 5. Frame 3 is lost. Receiver replies are sent for
frames 0-2. When frame 4 arrives at B, it is out of sequence, hence a REJ3
frame is sent. On receiving the REJ3 frame, station A resends all frames
starting from frame 3. On receiving frame 3, B sends an ACK (RR) but it is
lost. However, subsequent acks for frames 4 and 5 arrive. Since ACKs are
cumulative, frames 4 and 5 are taken as ACKed. Meanwhile, frame 6
reaches B, but B does not respond with an ACK for some reason. A timeout
occurs at A for frame 6, and it sends a RR frame with a P-bit set. On
receiving that B responds with an RR for 5. A sends frame 6 again, and B
ACKs it.
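
The receiver side of Go-back-N can be summarised in a few lines: it only ever accepts the frame it expects next. The sketch below is simplified – the REJ and P-bit exchanges are omitted, and the receiver just repeats its cumulative RR for out-of-order frames.

    # Receiver side of Go-back-N, sketched without the REJ/P-bit details:
    # only the in-order frame is accepted; anything else is discarded and the
    # last cumulative RR is repeated, forcing the sender to go back.
    MODULUS = 8

    def gbn_receive(frames):
        expected = 0
        delivered, replies = [], []
        for seq, data in frames:
            if seq == expected:                  # in-order: accept and deliver
                delivered.append(data)
                expected = (expected + 1) % MODULUS
            # out-of-order frames are simply discarded
            replies.append(("RR", expected))     # cumulative ack: next expected seq
        return delivered, replies

    # Frame 2 is "lost"; frames 3 and 4 arrive out of order and are discarded.
    arrived = [(0, "a"), (1, "b"), (3, "d"), (4, "e")]
    print(gbn_receive(arrived))
    # (['a', 'b'], [('RR', 1), ('RR', 2), ('RR', 2), ('RR', 2)])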

Selective – reject ARQ:

o This is an improvement on the Go-back-N ARQ. Instead of retransmitting


all frames starting from the REJected frame, only the erroneous frame is
‘selectively’ retransmitted.
o An SREJ is sent for the frame in error, and only that is retransmitted. The
receiver accepts frames that have been received out-of-order. So it could
cumulatively ack the frames received up to that time after it receives the
retransmitted frame.
o For this to work, the maximum window size should be no more than half
the range of the sequence number. Can you say why?

Have you understood ?

1. What is meant by framing ?


2. What should go into a frame ?
3. What are the three primary schemes used to identify frames ?
4. Why is bit stuffing necessary ?
5. What is the basis of clock-based framing ?
6. What is meant by flow-control ?
7. What is the problem with stop-and-wait flow control ?
8. How does window-based flow control work ?
9. How are errors identified ?
10. What is meant by CRC ?
11. What is meant by ARQ ?
12. What is Go-back-n ARQ ?

Unit 1 Summary

• A network is formed when two or more machines are interconnected.


• The complexity of the network grows as more machines are added.
• The connectivity consists of not just the physical connection, but multiple
layers of logical connection that allow two applications to talk to each
other.
• The entire set of operations required for a network is often organized into
layers with distinct functions.
• As per the OSI standard, they are : physical, data link, network, transport,
session, presentation and application layers.
• There are protocols which define how the functionality is achieved.
• A collection of protocols operating at various layers is called a protocol
stack.
• TCP/IP is one such stack; it has become a more popular alternative to the
OSI model.
• The physical layer determines how and the rate at which the raw bits are
actually transmitted.
• The physical media may be guided (wired) or unguided (wireless). Among
the guided, twisted pair and optical fibre cables are popular for their cost
and speed respectively.
• The data link layer deals with making sense out of the raw bits
transmitted.
• Framing, flow control and error control are three important functions of the
data link layer.
• Framing deals with organizing raw bits into a structure that is convenient
for processing, transmission etc.
• A bit pattern, a byte pattern, or a known format may be used to identify
beginning and end of frames.
• Flow control is to regulate flow of data between two machines with
different capacity – to prevent a fast sender from overwhelming a slow
receiver.
• Acknowledgements are used by receiver to let the sender know that it has
received the data.
• The same acknowledgements may also be used to regulate the flow of
data, as in stop and wait flow control where one frame is sent only after
the previous is acknowledged.
• In sliding window flow control, a window of n frames may be continuously
transmitted without waiting for ACKs, but you can’t go beyond that until
you get the ACKs for the previous frames transmitted.
• Errors will occur during transmission.
• We need mechanisms to identify errors and take corrective action.
• CRC is a commonly used error detection scheme.
• On detection of errors, the erroneous frames are dropped and retransmission
of the frames is requested by the receiver.
• The retransmission request mechanism is closely tied to the flow control
mechanism adopted.
Objective type questions

1. Which is the best definition of encapsulation?

a. Each layer of the OSI model uses encryption to put the PDU from the
upper layer into its data field. It adds header and trailer information that is
available to its counterpart on the system that will receive it.
b. Data always needs to be tunneled to its destination so encapsulation
must be used.
c. Each layer of the OSI model uses compression to put the PDU from the
upper layer into its data field. It adds header and trailer information that is
available to its counterpart on the system that will receive it.
d. Each layer of the OSI model puts the PDU from the upper layer into its
data field. It adds header and trailer information so that its counterpart on
the system that will receive it can process it correctly.

2. A protocol stack is
a. the way in which the data are passed between layers in the TCP/IP
architecture
b. a set of rules for making a sandwich
c. never implemented because of the lack of ISO standards
d. software that implements the layers of a protocol.

4. The Internet
a. is implemented using the TCP/IP protocol
b. has the attribute of service generality
c. allows multiplexing
d. all of the above

5. When a computer uses a single communication circuit/channel to


establish multiple connections to different applications, it is called
a. Multiplexing
b. Multiprogramming
c. Multitasking
d. Multicasting

6. Protocols are most often implemented in


a. Software
b. Layers
c. Hardware
d. a & b
e. a, b, and c

7. The header in a PDU contains information to be used by


a. The user at the receiving end
b. The workstation at the transmitting end
c. The owner of the communication link
d. The peer layer in the receiving machine

8. From the standpoint of connection, there are two basic types of data
transmission. They are
a. Unconnected
b. Connection-oriented
c. Connectionless
d. a & b
e. b& c

9. The term used to describe the ability of the receiving end to limit the
amount or rate at which data is sent by the transmitting end is
a. Transmit control
b. Flow control
c. Check damming
d. Flow limiting

10. When an ESC character is sent as part of a message, it means that


a. The transmission is being aborted
b. This is the end of transmission
c. The transmission should be ignored
d. The characters that follow are to be interpreted as having an
alternate meaning

11. In HDLC, when a 0 is inserted after all strings of five consecutive 1s, the
term is called
a. Zeroing
b. Synchronizing
c. String breaking
d. Bit stuffing

12. When two stations transmit at the same time


a. An altercation occurs
b. A division occurs
c. A collision occurs
d. Polling occurs

13. When a receiver must acknowledge every block of data before the next
block is sent, the type of flow control being used is
a. Sliding window
b. Stop and hop
c. Stop and go
d. Stop and wait
14. The name of the flow control protocol in which the sending station resends
the damaged or out of sequence frame and all frame after it, on receipt of
a NAK is
a. Selective reject
b. Selective repeat
c. Go-back-n
d. Sliding window

15. HDLC is an example of


a. Sliding window flow control
b. A serial line interface protocol
c. An asynchronous protocol
d. All of the above
e. None of the above

16. When a HDLC node receives a flag character, it knows that


a. An error has occurred
b. A frame is beginning or ending
c. It should signal the sender to stop sending
d. It should switch modes

17. Techniques to ensure that a fast transmitting node does not send data
faster than the receiving node can receive and process are called
a. Parity checking
b. CRC
c. Flow control
d. Error control

18. Parity checking


a. Can detect single bit error in transmission
b. Can detect an even number of bit errors
c. Adds an odd bit, even if no errors occur
d. Is not used if the circuits are at parity with one another

19. CRC
a. Is a particular implementation of a more general class of error
detection techniques called polynomial error checking
b. Provides additional bits so that errors can be corrected at the
receiving end
c. Requires a math coprocessor to calculate its value
d. Uses Hamming code to improve accuracy of data

20. A device that operates at the physical layer and is used to regenerate
signals is called
a. Gateway
b. Repeater
c. Switch
d. Bridge

Exercises :

1. A system has an n-layer protocol hierarchy. Applications generate


messages of length M bytes. At each of the layers, an h-byte header is
added. What fraction of the network bandwidth is filled with headers?
2. What are the 5 layers in an internet protocol stack ? Which of these layers
does a router possess ?
3. Answer true or false and justify your answer for the following :
a. With a selective repeat protocol, it is possible for the sender to
receive an ACK for a packet that falls outside of its current
window.
b. With Go-back-N, it is possible for the sender to receive an ACK
for a packet that falls outside of its current window.

4. List the functions to be performed by a data link control protocol. Why are
they needed?

5. How would you determine the number of bits for the sequence number
field in a sliding window protocol ? Calculate the number of bits in the
sequence number field for a 1 Mbps link with a one-way latency of 1.25
secs, assuming that each frame carries 1KB of data.

6. State true or false. Justify your answer.


(a) With Go-back N it is possible for the sender to receive an ACK for a
packet that falls outside its current window.

(b) With the selective repeat protocol, it is possible for the sender to
receive an ACK for a packet that falls outside its current window.

7. A world-wide web server receives relatively small messages from its


clients, but transmits very large messages to them. Explain, which type of
ARQ protocol (selective reject or go-back-n ) would be less of a burden to
the server.

8. Explain the principle behind CRC.

9. Show the CRC calculation involved in sending and receiving a message


1101001001 using a CRC polynomial 1001.

10. Show by example that Go-back-n requires a sequence number of 2n for a


window size of n.
