
4.11 PERFORMANCE EVALUATION
LOS Detection time
The LOS detection time is the average LOS processing time between the switch and the
controller. We consider multiple levels of connection between the switch and the
controller; the detection time increases with the number of hops between them. Across
the topologies tested, it remained almost constant for switches and hosts separated by a
similar number of hops, as shown in Figure 4.14.
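In practice this measurement reduces to averaging, over repeated trials, the gap between
the switch-side LOS event and the moment the controller finishes handling it. The
following is a minimal Python sketch of that averaging; the (t_los, t_handled) timestamp
pairs, the values, and the function name are illustrative assumptions, not part of the
experiment code.

def average_los_detection_time(trials):
    """trials: list of (t_los, t_handled) timestamp pairs in seconds.

    t_los is when the switch raises the loss-of-signal event;
    t_handled is when the controller finishes processing it.
    """
    return sum(t_handled - t_los for t_los, t_handled in trials) / len(trials)

# Example: three hypothetical trials at the same hop distance.
print(average_los_detection_time([(0.0, 9.8), (0.0, 10.1), (0.0, 10.0)]))  # ~10 s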

[Figure: LOS processing time in seconds versus number of tree levels, for star, linear,
and torus topologies]

Figure 4.14 LOS Detection time

Table 4.1 compares the LOS detection time in seconds against the number of tree
levels. A standard 16-node topology is considered for this comparison.

Tree levels    0    1    2    3
Star           2    4   10   16
Linear         2    4   11   18
Torus          2    5    9   16
Table 4.1 LOS Detection time (seconds)

Restoration Time
The restoration time is the lapse between LOS detection and the start of traffic flow on
the restored path. It includes the LOS detection time, the restoration path computation
time, and the rerouting time. It depends on the spanning tree parameters: if we change
the spanning tree default parameters, we can obtain a better restoration time, but at
the expense of more BPDU exchanges and false link timeout errors, since the same
bandwidth is shared between control and data packets. The restoration time can also be
improved by the existence of link-level backup paths. We measure it as the interval
between the spanning tree topology-change timestamp and the first successful ping on
the restored path.
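The measurement just described can be expressed directly as a timestamp difference.
Below is a minimal Python sketch; the timestamp format and the example values are
illustrative assumptions, not output from the actual test bed.

from datetime import datetime

FMT = "%H:%M:%S.%f"

def restoration_time(topology_change_ts, first_ping_ok_ts):
    """Seconds between the spanning tree topology change and first ping success."""
    t0 = datetime.strptime(topology_change_ts, FMT)
    t1 = datetime.strptime(first_ping_ok_ts, FMT)
    return (t1 - t0).total_seconds()

# Example with illustrative timestamps (not test-bed output):
print(restoration_time("12:00:01.250", "12:00:09.250"))  # 8.0 s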
[Figure: restoration time in seconds versus number of tree levels, for star, linear,
and torus topologies]

Figure 4.15 Restoration Time

A standard 16-node topology is considered for this comparison. The default RSTP and
MSTP parameters, such as the forward delay and topology-change timer values, are used.

Tree levels    0    1    2    3
Star           4    8   20   24
Linear         4    8   22   26
Torus          4    8   22   27
Table 4.2 Restoration time (seconds)
BFD Loss interval
The BFD loss interval is the duration of lost hello messages that triggers a
session-down indication. The BFD hello timer value indicates the time between two hello
packet exchanges. In real networking scenarios, the BFD hello interval is maintained in
milliseconds. Since the simulation environment does not support such high-resolution
timers, we have used 1 second as the hello timer value and 3 missed hellos for the loss
interval. Each BFD session between a switch and the controller consumes CPU cycles, as
the implementation is timer-interrupt based. We have conducted experiments varying the
hello timer values and the number of hello misses. In the scaled 32-node setup, the
reroute time shows a clear proportional improvement as the hello timer is reduced.
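The session-down logic this describes is simple to state in code: a session is declared
down once no hello has arrived within hello timer × miss count. The sketch below
illustrates that logic under our 1-second/3-miss settings; the class and method names
are hypothetical, not from any particular BFD implementation.

import time

class BfdSession:
    """Sketch of timer-based BFD loss detection (names are hypothetical)."""

    def __init__(self, hello_interval_s=1.0, miss_count=3):
        self.hello_interval_s = hello_interval_s  # time between hellos (1 s here)
        self.miss_count = miss_count              # misses before session-down (3 here)
        self.last_hello = time.monotonic()

    def on_hello_received(self):
        # Every received hello resets the detection window.
        self.last_hello = time.monotonic()

    def is_down(self):
        # Loss interval = hello interval * miss count = 3 s in our setup.
        loss_interval = self.hello_interval_s * self.miss_count
        return time.monotonic() - self.last_hello > loss_interval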

Reroute time
The reroute time is the total time taken to reroute traffic from the active to the
backup path. It includes the BFD-level loss detection and the flow modification
messages propagated by the controller. Running a finer-grained BFD hello timer gives
better performance, since the remaining component is only the message processing
between the switch and the controller. We have used a standard 16-node topology with a
BFD loss interval of 3 seconds (3 × 1-second hello timer). We considered multiple
levels of topologies, but there was not much impact on the reroute time. We also
maintained a standard traffic flow between the switches.
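Since the reroute time decomposes into the BFD loss interval plus the controller's
flow-modification latency, the effect of a finer hello timer can be estimated directly.
A back-of-the-envelope sketch, where the flow-modification latency value is an assumed
placeholder rather than a measured figure:

def reroute_time(hello_interval_s, miss_count, flowmod_latency_s):
    # BFD loss detection dominates; flow-mod propagation is the remainder.
    bfd_loss_interval = hello_interval_s * miss_count
    return bfd_loss_interval + flowmod_latency_s

# With the 1 s hello timer and 3 misses used here, versus a finer 0.1 s timer:
print(reroute_time(1.0, 3, 0.5))  # 3.5 s
print(reroute_time(0.1, 3, 0.5))  # 0.8 s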
[Figure: reroute time in seconds versus number of tree levels, for star, linear, and
torus topologies]

Figure 4.16 Reroute time

Table 4.3 compares the reroute time in seconds against the number of tree levels. A
standard 16-node topology is considered for this comparison.

Tree levels    0    1    2    3
Star           2    4   10   16
Linear         2    4   11   18
Torus          2    5    9   16
Table 4.3 Reroute time (seconds)

Percentage of Data Traffic Loss

Multiple traffic-load experiments were conducted under high load and the results
gathered. Multiple traffic streams from different hosts are initiated and allowed to
pass through the network. This metric is considered only for the restoration case, as
that is the only case with measurable data traffic loss; in the backup reroute case we
see loss only in the high-load 32-node topologies. In the case of TCP, most of the loss
is handled by the upper-layer protocols. We also considered UDP streams and, based on
port-level statistics, we could observe a small percentage of drops. The traffic
resumes after the restoration time.
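The port-level loss figure referred to above is simply the difference between packets
transmitted and packets received, expressed as a percentage. A minimal sketch, with
illustrative counter values:

def loss_percentage(tx_packets, rx_packets):
    """Percentage of packets lost between sender and receiver ports."""
    if tx_packets == 0:
        return 0.0
    return 100.0 * (tx_packets - rx_packets) / tx_packets

print(loss_percentage(10000, 5000))  # 50.0, cf. the 2-stream case in Table 4.4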

[Figure: traffic loss percentage versus number of maximum-sized packet streams (2, 4,
8), for linear, torus, and star topologies]

Figure 4.17 Loss of data traffic


A standard 32-node topology is considered for the comparison, as shown in Table 4.4.
The streams use maximum-sized packets.

             2 streams   4 streams   8 streams
Star             50          60         100
Torus            50          60         100
Linear           50          60         100
Table 4.4 Loss in data traffic (percentage)
Percentage of Control Traffic Loss (No Queuing Case)

With the queuing feature enabled, the control packets have higher priority than the
data packets, and the loss percentage of control packets was similar to that of data
packets. We disabled the priority queuing for control packets to identify the specific
dependencies of protocols during restoration. Time-sensitive applications such as
spanning tree, LLDP, and OSPF are impacted very heavily in this scenario. It also
causes false topology-change messages and MAC address table flushes.
[Figure: control traffic loss percentage versus number of maximum-sized packet streams,
for linear, torus, and star topologies]

Figure 4.18 Loss of control traffic

A standard 32-node topology is considered for the comparison, as shown in Table 4.5.
The streams use maximum-sized packets. We send a standard number of OpenFlow, spanning
tree, LLDP, and OSPF packets for the measurement. The amount of control traffic
originally sent is kept small so that an exact comparison can be made.

             2 streams   4 streams   8 streams
Star             50          66         100
Torus            50          66         100
Linear           50          66         100
Table 4.5 Loss in control traffic (percentage)
