In the 1980s, the European-dominated International Organization for Standardization (ISO)
began to develop its Open Systems Interconnection (OSI) networking suite. OSI has two major
components: an abstract model of networking (the Basic Reference Model, or seven-layer
model), and a set of concrete protocols. The standard documents that describe OSI are for
sale and not currently available online.

Parts of OSI have influenced Internet protocol development, but none more than the abstract
model itself, documented in ISO 7498 and its various addenda. In this model, a networking
system is divided into layers. Within each layer, one or more entities implement its
functionality. Each entity interacts directly only with the layer immediately beneath it, and
provides facilities for use by the layer above it. Protocols enable an entity in one host to
interact with a corresponding entity at the same layer in a remote host.
The seven layers of the OSI Basic Reference Model are (from bottom to top):

1. The Physical Layer describes the physical properties of the various communications
media, as well as the electrical properties and interpretation of the exchanged signals.
Ex: this layer defines the size of Ethernet coaxial cable, the type of BNC connector
used, and the termination method. In short, the physical layer refers to the actual
hardware specifications, defining characteristics such as timing and voltage.
2. The Data Link Layer describes the logical organization of data bits transmitted on a
particular medium. Ex: this layer defines the framing, addressing and checksumming
of Ethernet packets. The data link layer can be subdivided into two sublayers: the Media
Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC layer
establishes the computer's identity on the network via its MAC address.
3. The Network Layer describes how a series of exchanges over various data links can
deliver data between any two nodes in a network. Ex: this layer defines the addressing
and routing structure of the Internet. The network layer is responsible for determining
how the data will reach the recipient, and handles things like addressing, routing, and
logical protocols.
4. The Transport Layer describes the quality and nature of the data delivery. Ex: this
layer defines if and how retransmissions will be used to ensure data delivery. This
layer is also responsible for maintaining flow control; it takes the data from each
application and converts it into a single stream.
5. The Session Layer describes the organization of data sequences larger than the
packets handled by lower layers. Ex: this layer describes how request and reply
packets are paired in a remote procedure call. Once the data has been put into the correct
format, the sending host must establish a session with the receiving host; this is where the
session layer comes into play. It is responsible for establishing, maintaining, and eventually
terminating the session with the remote host.
6. The Presentation Layer describes the syntax of data being transferred. Ex: this layer
describes how floating-point numbers can be exchanged between hosts with different
math formats. What the presentation layer does can be summed up in one sentence: it takes
the data provided by the application layer and converts it into a standard format that
the other layers can understand. Likewise, it converts the inbound data received from the
session layer into something that the application layer can understand. This layer is
necessary because applications handle data differently from one another; for network
communications to function properly, the data needs to be structured in a standard way.
7. The Application Layer describes how real work actually gets done. Ex: this layer
would implement file system operations. To understand what the application layer does,
suppose for a moment that a user wanted to use Internet Explorer to open an FTP session and
transfer a file. In this case, the application layer would define the file transfer
protocol. This protocol is not directly accessible to the end user; the end user must still
use an application that is designed to interact with the file transfer protocol, and in this
case Internet Explorer is that application.

The original Internet protocol specifications defined a four-level model, and protocols
designed around it (like TCP) have difficulty fitting neatly into the seven-layer model. Most
newer designs use the seven-layer model.

The OSI Basic Reference Model has enjoyed far greater acceptance than the OSI protocols
themselves. There are several reasons for this. OSI's committee-based design process bred
overgrown, unimaginative protocols that nobody ever accused of efficiency. Heavy European
dominance helped protect existing investments in X.25 (CONS is basically X.25 for datagram
networks). Perhaps most importantly, X.25 data networks never caught people's imagination
like the Internet, and the Internet community, with its strong history of free, downloadable
protocol specifications, has been loath to embrace yet another networking scheme where you
have to pay to find out how things work.

And why should we? OSI's biggest problem is that it doesn't really offer anything new. The
strongest case for its adoption comes from its status as an "international standard", but
we already have a de facto international standard - the Internet. The OSI protocols will be
around, but their most significant contribution is the philosophy of networking represented
by the layered model.

If the Internet community has to worry about anything, it's the danger of IETF turning into
another ISO - a big, overgrown standards organization run by committees, churning out
thousands of pages of rubbish, and dominated by big business players more interested in
preserving investments than advancing the state of the art.

The role of Network Drivers

The role of a network interface within the system is similar to that of a mounted block device.
A block device registers its features in the blk_dev array and other kernel structures, and it
then "transmits" and "receives" blocks on request, by means of its request function. Similarly,
a network interface must register itself in specific data structures in order to be invoked when
packets are exchanged with the outside world.

There are a few important differences between mounted disks and packet-delivery interfaces.
To begin with, a disk exists as a special file in the /dev directory, whereas a network interface
has no such entry point. The normal file operations (read, write, and so on) do not make sense
when applied to network interfaces, so it is not possible to apply the Unix "everything is a
file" approach to them. Thus, network interfaces exist in their own namespace and export a
different set of operations.

Although you may object that applications use the read and write system calls when using
sockets, those calls act on a software object that is distinct from the interface. Several
hundred sockets can be multiplexed on the same physical interface.

But the most important difference between the two is that block drivers operate only in
response to requests from the kernel, whereas network drivers receive packets
asynchronously from the outside. Thus, while a block driver is asked to send a buffer toward
the kernel, the network device asks to push incoming packets toward the kernel. The kernel
interface for network drivers is designed for this different mode of operation.

Network drivers also have to be prepared to support a number of administrative tasks, such as
setting addresses, modifying transmission parameters, and maintaining traffic and error
statistics. The API for network drivers reflects this need, and thus looks somewhat different
from the interfaces we have seen so far.
The network subsystem of the Linux kernel is designed to be completely protocol
independent. This applies to both networking protocols (IP versus IPX or other protocols) and
hardware protocols (Ethernet versus token ring, etc.). Interaction between a network driver
and the kernel proper deals with one network packet at a time; this allows protocol issues to
be hidden neatly from the driver and the physical transmission to be hidden from the
protocol.

When a driver module is loaded into a running kernel, it requests resources and offers
facilities; there's nothing new in that. And there's also nothing new in the way resources are
requested: the driver should probe for its device and its hardware location (I/O ports and IRQ
line). What is different is the way a network driver is registered by its module initialization
function. Since there is no equivalent of major and minor numbers for network interfaces, a
network driver does not request such a number. Instead, the driver inserts a data structure
for each newly detected interface into a global list of network devices.
