BRKCOM-2986 (marregoc@cisco.com)
Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios
UCS 5100 Series Blade Server Chassis: flexible bay configurations
UCS B-Series Blade Servers: industry-standard architecture
UCS Virtual Adapters: choice of multiple adapters
Legend
Catalyst 6500 Multilayer Switch
Catalyst 6500 L2 Switch
Virtual Switching System
Generic Cisco Multilayer Switch
Generic Virtual Switch
Nexus Multilayer Switch
Nexus L2 Switch
Nexus Virtual DC Switch (Multilayer)
Nexus Virtual DC Switch (L2)
MDS 9500 Director Switch
MDS Fabric Switch
Nexus 1000V (VEM): embedded VEM with VMs
Nexus 1000V (VSM)
Nexus 2000 (Fabric Extender)
Nexus 5K with VSM
UCS Fabric Interconnect
UCS Blade Chassis
ASA
ACE Service Module
Virtual Blade Switch
IP Storage
Agenda
Overview
Design Considerations
  Blade Chassis
  Rack Capacity & Server Density
  System Capacity & Density
  Chassis External Connectivity
  Chassis Internal Connectivity
(Figure: UCS 5100 chassis front view showing the blade slots; rear view showing the fabric extenders)
8 Blade Slots
  8 half-width servers or 4 full-width servers
FEX port count must match per enclosure; any port on the FEX could be utilized
  Selected when blades are inserted; slot assignment occurs at power-up
  All slot traffic is assigned to a single uplink
Other Logic
  Monitoring and control of environmentals
  Blade insertion/removal events
UCS-6140XP (2U)
Fabric Extenders
Ethernet: 6 x 10GE SFP+
Fixed Ports: FEX or uplink connectivity
Expansion Slot Ports: uplink connectivity only
(Figure: 7-foot rack dimensions)
7 Foot Rack capacity:

Chassis per rack | Half-width blades | Full-width blades
3 Chassis        | 24                | 12
4 Chassis        | 32                | 16
5 Chassis        | 40                | 20
6 Chassis        | 48                | 24
Mezz Cards: 11W to 21W
Chassis: 210W to 710W
Operational blade: ~60W to ~395W
Nominal power range depends on: processor, memory, mezz cards, HD, Turbo Mode, workload
Power Supply
Bandwidth vs Oversubscription
Bandwidth:
  Traffic load a server needs to support
  Specified by the server and/or application engineer
Oversubscription:
  A measure of network capacity
  Designed by the network engineer (worked example below)
Multi-homed servers:
  More IO ports may not mean more bandwidth
  Depends on active-active vs. active-standby
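To make the distinction concrete, a minimal sketch of the arithmetic: oversubscription compares the bandwidth servers could offer against the uplink capacity the network engineer provisions. The server counts and speeds below are illustrative assumptions, not figures from this session.

```python
# Illustrative sketch: oversubscription = server-facing bandwidth / uplink bandwidth.
# All figures below are example assumptions, not values from this session.

def oversubscription(servers: int, gbps_per_server: float,
                     uplinks: int, gbps_per_uplink: float) -> float:
    """Ratio of aggregate server bandwidth demand to aggregate uplink capacity."""
    return (servers * gbps_per_server) / (uplinks * gbps_per_uplink)

# Example: 8 half-width blades at 10 Gbps each behind 2 x 10GE uplinks -> 4:1
print(oversubscription(servers=8, gbps_per_server=10, uplinks=2, gbps_per_uplink=10))  # 4.0
```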
Fabric Extenders
Fabric Extenders
FEX uplinks per fabric extender | UCS 6120: chassis / blades | UCS 6140: chassis / blades
1 FEX Uplink                    | 20 / 80-160                | 40 / 160-320
2 FEX Uplinks                   | 10 / 40-80                 | 20 / 80-160
4 FEX Uplinks                   | 5 / 20-40                  | 10 / 40-80
Influenced by oversubscription:
East-West traffic subscription is 1:1
North-South traffic subscription (Fabric Interconnect uplinks) needs to be engineered
20 chassis, 20G per chassis: 160 blades at 2.5 Gbps or 80 blades at 5 Gbps
40 chassis, 20G per chassis: 320 blades at 2.5 Gbps or 160 blades at 5 Gbps
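A small sketch reproducing the per-blade numbers above, assuming 10GE FEX uplinks and that both fabric extenders in a chassis carry traffic; the function name and structure are illustrative only.

```python
# Sketch: per-blade bandwidth as a function of FEX uplinks per chassis.
# Assumes 10GE uplinks and that both fabric extenders carry traffic (active-active).

UPLINK_GBPS = 10

def per_blade_gbps(uplinks_per_fex: int, fex_per_chassis: int, blades_per_chassis: int) -> float:
    """Aggregate chassis uplink bandwidth divided evenly across its blades."""
    return uplinks_per_fex * fex_per_chassis * UPLINK_GBPS / blades_per_chassis

print(per_blade_gbps(1, 2, 8))  # 2.5 Gbps per half-width blade (8 per chassis)
print(per_blade_gbps(1, 2, 4))  # 5.0 Gbps per full-width blade (4 per chassis)
```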
Full-width: 40Gbps
Interconnect location:
Based on the number of racks and chassis density per rack
Dependent on cabling and on interconnect density
(Figure: rack layouts with 3 to 6 chassis per rack and 2 x 10GE or 4 x 10GE FEX uplinks)
LAN access and SAN edge A / SAN edge B (Fabric A and Fabric B)
Ethernet Fabric: single fabric; Fabric Interconnect 10GE attached; Switch Mode or End-Host Mode
SAN Fabric: dual fabrics; Fabric Interconnect 4G FC attached; NPV Mode
Internal Connectivity
Mezz Card Ports and Blades Connectivity
Fabric Extender Connectivity
Fabric Extender Ports
(Figure: blade slots 1-8 and their mezz card ports mapped to fabric extender ports)
Connect to a single Fabric Interconnect
Each port is independently used: ports do not form a port channel; each port is a trunk
Traffic distribution is based on slot at bring-up time
Slot mezz ports map to the connected Fabric Extender
FCS: port pinning in a sequential fashion; post-FCS: a port could have a dedicated uplink (see the sketch below)
Each slot connects to each fabric extender
Each slot supports a dual-port mezz card
Slots and blades:
  One slot, one mezz card: one half-width blade
  Two slots, two mezz cards: one full-width blade
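A minimal sketch of the sequential slot pinning described above; the round-robin rule and function are illustrative assumptions, since the session only states that pinning is sequential at FCS and that all of a slot's traffic rides a single uplink.

```python
# Sketch: sequential pinning of chassis slots to FEX uplinks at bring-up time.
# The round-robin rule is an assumption for illustration; the source only states
# that pinning is sequential at FCS and that a slot's traffic uses a single uplink.

def pin_slots(num_slots: int = 8, active_uplinks: int = 2) -> dict:
    """Assign each blade slot to one FEX uplink in a round-robin (sequential) fashion."""
    return {slot: (slot - 1) % active_uplinks + 1 for slot in range(1, num_slots + 1)}

print(pin_slots(8, 2))  # {1: 1, 2: 2, 3: 1, 4: 2, 5: 1, 6: 2, 7: 1, 8: 2}
```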
Internal Connectivity
Mezz Cards and Virtual Interfaces Connectivity
(Figure: mezz cards mezz1-mezz16 with ports eth0/eth1 in each chassis mapping to vEth virtual interfaces on the Fabric Interconnects)
General Connectivity
Port channels:
  Are not formed between mezz ports
  Are not formed across mezz cards
Blade connectivity:
  Full-width: 2 mezz cards, 4 ports
  vNICs are mapped to any port; vHBAs are round-robin mapped to a fabric
Backup interfaces:
  Mezz port backup is within the mezz card; redundancy depends on the mezz card
Interface redundancy:
  vNIC redundancy is done across mezz card ports
Agenda
Overview
Design Considerations
A Unified DC Architecture
  The Unified DC Architecture
  The Virtual Access Layer
  Unified Compute Pods
  Distributed Access Fabric Model
Deployment Scenarios
Core: L3 boundary to the DC network. Functional point for route summarization, the injection of default routes, and termination of segmented virtual transport networks.
(Figure: Catalyst 6500 with service modules, service appliances, and the Unified Compute System at the L3/L2 boundary)
Aggregation: Typical L3/L2 boundary and DC aggregation point for uplinks and DC services, offering key features: vPC, VDC, 10GE density, and the first point of migration to 40GE and 100GE.
Access: Classic network layer providing nonblocking paths to servers and IP storage devices through vPC. It leverages the Distributed Access Fabric (DAF) model to centralize configuration and management and to ease horizontal cabling demands related to 1GE and 10GE server environments.
Virtual Access: A virtual layer of network intelligence offering access-layer-like controls to extend traditional visibility, flexibility, and management into virtual server environments. Virtual network switches bring access layer switching capabilities to virtual servers without the burden of topology control plane protocols. Virtual adapters provide granular control over virtual and physical server IO resources.
(Figure: end-to-end topology with Core, Aggregation, Access (Nexus 7000/5000/2000), and Virtual Access (Nexus 1000V) layers; PODs of UCS racks with LAN access and SAN edge fabrics A and B)
The POD concept applies to distinct application environments through a modular approach to building the physical, network, and compute infrastructure in a predictable and repeatable manner. It allows organizations to plan the rollout of distinct compute environments as needed in a shared physical data center using a pay-as-you-go model.
(Figure: DC zone floor plan with rows of PODs, hot aisles, and ToR PODs)
What is emerging
What influences physical layout:
Primarily: power, cooling, cabling
Secondarily: access model, port density
Fabric Interconnect
Network Rack
Why Distributed Access Fabric?
All chassis are managed by the Fabric Interconnect: single config point, single monitoring point
Fabric instances per chassis are present at rack level: reduced management points
Fabric instances are extensions of the Fabric Interconnect: they are Fabric Extenders
Simplifies the cabling infrastructure:
  Horizontal cabling choice: whatever is available, fiber or copper
  CX1 cabling for brownfield installations: Fabric Interconnect centrally located
  USR for greenfield installations: Fabric Interconnect at end of row near the cross connect
  Vertical cabling: just an in-rack patch cable
DAF: access fabric on top of rack and access switch at end of row

EoR: modular switch at the end of a row of server racks
  Cabling: copper from server to access; fiber from access to aggregation
  Port density: 240-336 ports (C6500), 288-672 ports (N7000)
  Servers per rack: 6-12 multi-RU servers
  Typically 2 tiers: access and aggregation
ToR: low-RU, lower port density switch per server rack
  Cabling: copper from server to ToR switch; fiber from ToR to aggregation
  Port density: 40-48 ports (C49xx GE)
  Servers per rack: 12-48 1RU servers
  Typically 2 tiers: access and aggregation
DAF: access fabric per rack with the access switch at end of row
  Cabling: copper or fiber in-rack patch; fiber from access fabric to fabric switch
  Port density: 20-4000 ports (N2K-5K GE)
  Servers per rack: 8-30 1RU servers, 3 blade enclosures, or 14-16 dual-homed servers
  Applicable to low and high density server racks; ranges from classic EoR to ToR designs; most flexible
Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios
  Overall Architecture
  Ethernet Environments: Switch Mode, EHM
  UIO Environments
(Figure: POD with Fabric A and Fabric B, Fabric Interconnects, Fabric Extenders, and Unified Compute Systems uplinked to a VSS or vPC pair)
Switch Mode
Fabric Interconnect
Behaves like any other Ethernet switch: it participates in the STP topology; follow STP design best practices
End-Host Mode
The switch looks like a host but still performs L2 switching
MAC addresses are active on one link at a time
MAC addresses are pinned to uplinks; MAC learning does not take place (see the sketch below)
Forms a loop-free topology
Simplified Topologies
Upstream switches provide the nonblocking path mechanism: VSS or vPC
Interaction with VSS or vPC: no special feature needed; the topology is loop-free; less reliance on STP
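A conceptual sketch of End-Host Mode pinning; the hash-based uplink choice is an assumption for illustration, the point being that each MAC is active on exactly one uplink and no MAC learning is required upstream.

```python
# Conceptual sketch of End-Host Mode: every server MAC is pinned to exactly one
# uplink, so upstream switches never see a MAC move between links and no loop
# can form. The hash-based selection below is an illustrative assumption.

def pin_mac(mac: str, uplinks: list) -> str:
    """Pin a server MAC address to a single uplink (active on one link at a time)."""
    return uplinks[hash(mac) % len(uplinks)]

uplinks = ["Eth1/1", "Eth1/2"]
for mac in ["00:25:b5:00:00:01", "00:25:b5:00:00:02"]:
    print(mac, "->", pin_mac(mac, uplinks))
```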
(Figure: POD topologies with UCS-6140 Fabric Interconnects in End-Host Mode at the access layer, uplinked to the L3/L2 aggregation boundary; 112 x 10GE ports at 3.3:1 oversubscription)
Oversubscription Rates
Scalability
6140 pair: 10 chassis = 80 compute nodes, 3.3:1 subscription
5040 pair: 6 UCS systems = 480 compute nodes, 15:1 subscription
7018 pair: 13 5040 pairs = 6,240 compute nodes
Enclosure: 80 GE; compute node: 10GE attached
Scalability
6140 pair: 10 chassis = 80 compute nodes, 5:1 subscription
7018 pair: 14 6140 pairs = 1,120 compute nodes
Enclosure: 80GE; compute node: 10GE attached
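A back-of-the-envelope sketch of how such subscription ratios are derived; the FEX and Fabric Interconnect uplink counts in the example are assumptions chosen to reproduce the 5:1 figure, not values stated on the slide.

```python
# Sketch: deriving a north-south subscription ratio for a Fabric Interconnect.
# Downstream = FEX-facing bandwidth, upstream = uplinks toward the aggregation layer.
# The uplink counts below are assumptions chosen to reproduce the 5:1 example.

def subscription(chassis: int, fex_uplinks_per_chassis: int,
                 fi_uplinks: int, link_gbps: int = 10) -> float:
    """Ratio of chassis-facing bandwidth to aggregation-facing uplink bandwidth."""
    return (chassis * fex_uplinks_per_chassis * link_gbps) / (fi_uplinks * link_gbps)

# 10 chassis, 4 x 10GE FEX uplinks each, 8 x 10GE uplinks from the 6140 -> 5:1
print(subscription(chassis=10, fex_uplinks_per_chassis=4, fi_uplinks=8))  # 5.0
```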
(Figure: POD topologies with UCS 6120 and UCS 6140 Fabric Interconnect pairs at the access and virtual access layers, uplinked to a Nexus 7018 aggregation layer across racks)
2 x 10GE FEX uplinks: 5G per slot; 4 x 10GE FEX uplinks: 10G per slot
UCS-6140: 8 uplinks, 40 downlinks
~18 UCS-6120 pairs: 2,880, 1,440 & 720; ~14 UCS-6140 pairs: 4,480, 2,240 & 1,120
20 chassis / 160 blades; 40 chassis / 320 blades
10 chassis / 80 blades; 20 chassis / 160 blades
~8 UCS-6120 pairs: 1,280, 640 & 320; ~6 UCS-6140 pairs: 1,920, 960 & 480
SAN Environments
NPV Mode
NPV Mode: useful for addressing domain scalability (no domain ID needed on the fabric switch); SAN fabric agnostic; N-port on the Fabric Interconnect side
Connecting Point:
Based on scalability and port density; NPIV could add significant load (FLOGI, zoning, other services)
Core Layer:
Tends to be scalable due to the limited port count
Edge Layer:
Is more scalable because load is spread across multiple switches
Core: MDS9509
12 x 4G FC x 7 = 84 x 4G FC ports = 21 UCS = 1,680 servers
Core: MDS9509
12 x 4G FC x 7 = 84 x 4G FC ports = 10 UCS = 1,600 servers
Edge: 6120 x 2
40 x 10GE 1:1 ports; 8 x 4G FC uplinks; 10 chassis = 80 half-width servers
Chassis bandwidth: 4 x 10GE = 5G per blade
6120 FC subscription: 8 x 4G FC against 200G = 6.25:1
Edge: 6140 x 2
80 x 10GE 1:1 ports; 16 x 4G FC uplinks; 20 chassis = 160 half-width servers
Chassis bandwidth: 4 x 10GE = 5G per blade
6140 FC subscription: 16 x 4G FC against 400G = 6.25:1
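A sketch of the FC subscription arithmetic, assuming each chassis presents 2 x 10GE (20G) of server-facing bandwidth toward each Fabric Interconnect so that the aggregate is compared against the FC uplinks; the pairing of figures is inferred from the 6.25:1 result.

```python
# Sketch: FC uplink subscription on a Fabric Interconnect.
# Assumes each chassis presents 20G of server-facing bandwidth to the interconnect;
# the pairing of figures is inferred so that the slide's 6.25:1 result is reproduced.

def fc_subscription(chassis: int, gbps_per_chassis: int,
                    fc_uplinks: int, fc_gbps: int = 4) -> float:
    """Ratio of Ethernet-facing bandwidth to aggregate FC uplink bandwidth."""
    return (chassis * gbps_per_chassis) / (fc_uplinks * fc_gbps)

print(fc_subscription(chassis=10, gbps_per_chassis=20, fc_uplinks=8))   # 6.25 (6120, 8 x 4G FC)
print(fc_subscription(chassis=20, gbps_per_chassis=20, fc_uplinks=16))  # 6.25 (6140, 16 x 4G FC)
```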
LAN Environment
Using 7018 at Aggregation Layer
28 UCS based on 6120s; 14 UCS based on 6140s
Using 6120s in the access: 5 chassis (4 ports per FEX), 40 blades
SAN Environment
Using 9509 at Aggregation Layer
21 UCS based on 6120s; 10 UCS based on 6140s
Using 6120s in the access: 5 chassis (4 ports per FEX), 40 blades
Conclusion
End-host-mode vs Switch mode
EHM to an access switch is possible; switch mode to an L2/L3 boundary switch is appropriate; if aggregating to a ToR switch is desired, use EHM
Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.