
Nexus Technology Labs Virtual Port Channels (vPC)

Active Active NIC Teaming with vPC


Last updated: April 11, 2013

Active Active NIC Teaming with vPC Diagram (/uploads/workbooks/images/diagrams/VQC9DEWKRSFGAxmOqpLE.png)


Task
Configure a vPC Domain between N5K1 and N5K2 as follows:

- N5K1 and N5K2 are the vPC Peers. Create vPC Domain 1 on the peers, and use the mgmt0 ports for the vPC Peer Keepalive Link.
- Configure all links between the vPC peers as Port-Channel 1, and use this as the vPC Peer Link. The vPC Peer Link should use LACP negotiation, be an 802.1q trunk link, and be an STP Network Port.

Configure vPCs from N5K1 and N5K2 to Server 1 and Server 2 as follows:

- Configure N5K1 and N5K2's links to Server 1 as Port-Channel 101. Port-Channel 101 should be configured as an access port in VLAN 10, an STP Edge Port, and as vPC 101.
- Configure N5K1 and N5K2's links to Server 2 as Port-Channel 102. Port-Channel 102 should be configured as an access port in VLAN 10, an STP Edge Port, and as vPC 102.

Configure Active/Active NIC Teaming on Server 1 and Server 2 as follows:

- Configure a NIC Team on Server 1 using 802.3ad (LACP); both links to N5K1 and N5K2 should be in this team, and it should use the IP address 10.0.0.1/24.
- Configure a NIC Team on Server 2 using 802.3ad (LACP); both links to N5K1 and N5K2 should be in this team, and it should use the IP address 10.0.0.2/24.

When complete, ensure that Server 1 and Server 2 have IP connectivity to each other, and that traffic between them uses both uplinks to N5K1 and N5K2 simultaneously.

Configuration
N5K1:
feature lacp
feature vpc
!
vlan 10
!
vpc domain 1
  peer-keepalive destination 192.168.0.52
!
interface port-channel1
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
interface port-channel101
  switchport mode access
  switchport access vlan 10
  spanning-tree port type edge
  vpc 101
!
interface port-channel102
  switchport mode access
  switchport access vlan 10
  spanning-tree port type edge
  vpc 102
!
interface Ethernet1/1
  switchport mode access
  switchport access vlan 10
  channel-group 101 mode active
  speed 1000
!
interface Ethernet1/2
  switchport mode access
  switchport access vlan 10
  channel-group 102 mode active
  speed 1000
!
interface Ethernet1/3-5
  switchport mode trunk
  spanning-tree port type network
  channel-group 1 mode active

N5K2:
feature lacp
feature vpc
!
vlan 10
!
vpc domain 1
  peer-keepalive destination 192.168.0.51
!
interface port-channel1
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
interface port-channel101
  switchport mode access
  switchport access vlan 10
  spanning-tree port type edge
  vpc 101
!
interface port-channel102
  switchport mode access
  switchport access vlan 10
  spanning-tree port type edge
  vpc 102
!
interface Ethernet1/1
  switchport mode access
  switchport access vlan 10
  channel-group 101 mode active
  speed 1000
!
interface Ethernet1/2
  switchport mode access
  switchport access vlan 10
  channel-group 102 mode active
  speed 1000
!
interface Ethernet1/3-5
  switchport mode trunk
  spanning-tree port type network
  channel-group 1 mode active

Verification
In this design, the end servers are dual-attached to separate access switches, N5K1 and N5K2. Additionally, N5K1 and N5K2 are configured for Virtual Port Channel (vPC), which is a type of Multi-Chassis EtherChannel (MEC). With vPC, the downstream devices, Server 1 and Server 2 in this case, see the upstream switches (the vPC Peers) as a single switch. In other words, while the physical topology is a triangle, the logical topology is a point-to-point port channel.

vPC configuration is made up of three main components: the vPC Peer Keepalive Link, the vPC Peer Link, and the vPC Member Ports. The vPC Peer Keepalive Link is any layer 3 interface, including the mgmt0 port, that is used to send UDP pings between the vPC peers. If the UDP ping is successful over the keepalive link, the peers are considered to be reachable. The second component, the vPC Peer Link, is used to synchronize the control plane between the vPC Peers. The Peer Link is used for operations such as MAC address table synchronization, ARP table synchronization, IGMP Snooping synchronization, and so on. The Peer Link is a port channel made up of at least two 10Gbps links, and it should be configured as a layer 2 trunk link that runs as STP port type network. The final component, the vPC Member Ports, are the port channel interfaces that go down to the end hosts or downstream devices.

The first step in vPC verification is to ensure that the vPC Peer Keepalive is up and that the vPC Peer Link is up, as shown below.

N5K1# show vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 2
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po1    up     1,10

<snip>
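The keepalive itself can be examined in more detail with show vpc peer-keepalive, which is not captured in the output above. A minimal sketch of the check, assuming the mgmt0 addressing from the Configuration section:

N5K1# show vpc peer-keepalive
N5K1# ping 192.168.0.52 vrf management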

Next, the vPC Member Ports are configured to the end hosts. In the output below, Port-Channel 101 to Server 1 shows its vPC as down, because the vPC has been configured on the switch side but not yet on the server side. The end result is that the link runs as a normal access port, as indicated by the Individual (I) flag in the output of show port-channel summary.

N5K1# show vpc 101

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- ------                     ------------
101    Po101       down*  success     success                    -

N5K1# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      LACP      Eth1/3(P)    Eth1/4(P)    Eth1/5(P)
101   Po101(SD)   Eth      LACP      Eth1/1(I)
102   Po102(SU)   Eth      LACP      Eth1/2(P)

Next, the end server is configured for NIC Teaming. In the case of the Intel ANS software, an LACP-based channel is called 802.3ad Dynamic Link Aggregation.
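The original lab configures this through the Intel ANS GUI, captured as a screenshot in the source. Purely as a point of comparison, a minimal sketch of an equivalent 802.3ad team on a Linux host using iproute2, where the interface names eth0 and eth1 are assumptions:

# Create an 802.3ad (LACP) bond; eth0 and eth1 are assumed to be
# the links to N5K1 and N5K2 respectively.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
# Per the task, the IP address lives on the logical team interface.
ip addr add 10.0.0.1/24 dev bond0
ip link set bond0 up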

After the server signals the switch with LACP, the channel can form and the vPC comes up, as shown below.

N5K1#
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/1 is down (Initializing)
2013 Mar 3 18:58:39 N5K1 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/1 is down
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-SPEED: Interface port-channel101, operational speed changed to 1 Gbps
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_DUPLEX: Interface port-channel101, operational duplex mode changed to Full
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel101, operational Receive Flow Control state changed to off
2013 Mar 3 18:58:39 N5K1 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel101, operational Transmit Flow Control state changed to off
2013 Mar 3 18:58:42 N5K1 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel101: Ethernet1/1 is up
N5K1# 2013 Mar 3 18:58:51 N5K1 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel101: first operational port changed from none to Ethernet1/1
2013 Mar 3 18:58:51 N5K1 %ETHPORT-5-IF_UP: Interface Ethernet1/1 is up in mode access
2013 Mar 3 18:58:51 N5K1 %ETHPORT-5-IF_UP: Interface port-channel101 is up in mode access
N5K1# show vpc 101

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- ------                     ------------
101    Po101       up     success     success                    10

The IP configuration of the server goes on the logical NIC Team interface, similar to how NX-OS and IOS use the logical Port-Channel interface to reference the physical members of the channel.

Testing how traffic flows over the vPC in the data plane is a little more difficult in this case. Each device that has a port channel configured ultimately controls the decision of how its own outbound traffic flows. For example, for a flow moving from Server 1 to Server 2, Server 1 first determines which of its links to hash the flow out on, and then the upstream switch chooses which of its outbound links to use, until the final destination is reached. This matters because you will not see an even distribution of traffic among the NIC Team and vPC Member Ports unless there is a sufficiently large number of flows from diverse source and destination addresses. Although the port-channel load-balancing method can be changed on the Nexus switches, it can't be changed in the Intel NIC drivers in this design. Therefore, to fully verify that Active/Active forwarding is working, we need more than one destination address to send to. This is achieved below by configuring a secondary IP address on the NIC Team of Server 1.
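The secondary address itself was added through the Windows adapter settings in the original lab (the screenshot is not reproduced here). A minimal sketch of the same change from the command line, where the team interface name "NIC Team" and the secondary address 10.0.0.3 are assumptions:

C:\> netsh interface ipv4 add address "NIC Team" 10.0.0.3 255.255.255.0

For reference, the switch-side hashing method mentioned above is controlled globally on the Nexus with a command of the form port-channel load-balance ethernet source-dest-ip.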

Next, Server 2 is configured to send separate UDP flows to each of the addresses on Server 1 with the iPerf app, as shown below.
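The original output here was a screenshot of the iPerf client. A sketch of equivalent invocations, assuming iPerf version 2 on a Windows host; the secondary address 10.0.0.3, the bandwidth, and the duration are assumptions:

C:\> iperf -c 10.0.0.1 -u -b 6M -t 300
C:\> iperf -c 10.0.0.3 -u -b 6M -t 300

Because the two flows have different destination IP addresses, the server's 802.3ad transmit hash can place them on different team members.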

On the network side, the traffic flows in the data plane can be verified by looking at the interface counters of the vPC Member Ports. If the input bandwidth counter from Server 2 is split between both N5K1 and N5K2, we know that Server 2 is distributing the load between both members of its NIC Team in an Active/Active manner. Furthermore, if the output bandwidth counters from N5K1 and N5K2 to Server 1 are split between them, we also know that the switches are doing Active/Active forwarding to the destination. This can be seen in the output below.

N5K1# show interface e1/1-2 | in rate|Ethernet
Ethernet1/1 is up
  Hardware: 1000/10000 Ethernet, address: 000d.eca2.ed88 (bia 000d.eca2.ed88)
  30 seconds input rate 946992 bits/sec, 198 packets/sec
  30 seconds output rate 5899400 bits/sec, 926 packets/sec
    input rate 946.99 Kbps, 198 pps; output rate 5.90 Mbps, 926 pps
Ethernet1/2 is up
  Hardware: 1000/10000 Ethernet, address: 000d.eca2.ed89 (bia 000d.eca2.ed89)
  30 seconds input rate 5899032 bits/sec, 926 packets/sec
  30 seconds output rate 947384 bits/sec, 199 packets/sec
    input rate 5.90 Mbps, 926 pps; output rate 947.38 Kbps, 199 pps

N5K2# show interface e1/1-2 | in rate|Ethernet
Ethernet1/1 is up
  Hardware: 1000/10000 Ethernet, address: 000d.eca4.7408 (bia 000d.eca4.7408)
  30 seconds input rate 40 bits/sec, 0 packets/sec
  30 seconds output rate 6211424 bits/sec, 975 packets/sec
    input rate 40 bps, 0 pps; output rate 6.21 Mbps, 975 pps
Ethernet1/2 is up
  Hardware: 1000/10000 Ethernet, address: 000d.eca4.7409 (bia 000d.eca4.7409)
  30 seconds input rate 6211216 bits/sec, 975 packets/sec
  30 seconds output rate 144 bits/sec, 0 packets/sec
    input rate 6.21 Mbps, 975 pps; output rate 144 bps, 0 pps

Note that on N5K1 the input rate of E1/2, which connects to Server 2, matches the output rate of E1/1, which connects to Server 1. Likewise, on N5K2 the input rate of E1/2, which connects to Server 2, matches the output rate of E1/1, which connects to Server 1. Also note that these traffic flows do not cross the vPC Peer Link between the Nexus 5Ks, because this link is excluded from the data plane under normal operation. Verification of the counters of Port-Channel 1, the vPC Peer Link, shows little to no traffic being sent or received on the port.

N5K1# show interface port-channel 1 | include rate
  30 seconds input rate 944 bits/sec, 1 packets/sec
  30 seconds output rate 1168 bits/sec, 1 packets/sec
    input rate 976 bps, 1 pps; output rate 1.07 Kbps, 1 pps

The output shown above illustrates the normal forwarding logic of vPC, which is that a vPC Peer will always attempt to forward traffic out a local vPC Member Port rather than crossing the vPC Peer Link. The only time this rule is normally broken for known unicast traffic is when the local vPC Member Port is down. For example, if a failure occurs between N5K1 and Server 1, traffic that N5K1 receives from Server 2 destined for Server 1 must be sent over the vPC Peer Link; otherwise it would be blackholed. This can be seen below.

Normally this detection is immediate, based on link failure, but in this topology Server 1 is a Virtual Machine that is not directly physically connected to N5K1, so the link itself stays up. When the LACP timeout expires, N5K1 detects that its LACP partner is gone, and the vPC Member Port goes down.

N5K1#
2013 Mar 3 22:54:34 N5K1 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel101: Ethernet1/1 is down
2013 Mar 3 22:54:34 N5K1 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel101: port-channel101 is down
<snip>
N5K1# show vpc
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 2
Peer Gateway                      : Disabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ --------------------------------------------------
1    Po1    up     1,10

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- ------                     ------------
101    Po101       down*  success     success                    -
102    Po102       up     success     success                    10
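The loss of the LACP partner can also be confirmed from the switch side. A quick sketch of commands that display the partner state and LACP PDU counters for the failed member (not captured in the original output):

N5K1# show lacp interface ethernet 1/1
N5K1# show lacp counters interface port-channel 101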

Now any traffic that comes in on N5K1 from Server 2 that is going toward Server 1 must transit the vPC Peer Link.

N5K1# show interface port-channel 1 | include rate
  30 seconds input rate 1784 bits/sec, 1 packets/sec
  30 seconds output rate 5520864 bits/sec, 862 packets/sec
    input rate 992 bps, 1 pps; output rate 5.67 Mbps, 856 pps

This situation normally only occurs during a failure event. It is highly undesirable in vPC because the vPC Peer Link usually has much lower bandwidth (such as 20Gbps) than the aggregate of the vPC Member Ports (such as 400Gbps or more, depending on port density), so the Peer Link can quickly become overwhelmed if it must be used in the data plane.
