
Unified Data Center Architecture: Integrating Unified Compute System Technology

BRKCOM-2986 marregoc@cisco.com


Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios


UCS Building Blocks


UCS Manager: embedded, manages the entire system
UCS 6100 Series Fabric Interconnect: 20-port 10Gb FCoE (UCS-6120), 40-port 10Gb FCoE (UCS-6140)
UCS 2100 Series Fabric Extender: remote line card
UCS 5100 Series Blade Server Chassis: flexible bay configurations
UCS B-Series Blade Server: industry-standard architecture
UCS Virtual Adapters: choice of multiple adapters

Legend
Catalyst 6500 Multilayer Switch
Catalyst 6500 L2 Switch
Virtual Switching System
Generic Cisco Multilayer Switch
Generic Virtual Switch
Nexus Multilayer Switch
Nexus L2 Switch
Nexus Virtual DC Switch (Multilayer)
Nexus Virtual DC Switch (L2)
MDS 9500 Director Switch
MDS Fabric Switch

Nexus 1000V (VEM)
Nexus 1000V (VSM)
Embedded VEM with VMs
Nexus 2000 (Fabric Extender)
Nexus 5K with VSM
UCS Fabric Interconnect
UCS Blade Chassis
ASA
ACE Service Module
Virtual Blade Switch
IP
IP Storage

Agenda
Overview
Design Considerations
  Blade Chassis
  Rack Capacity & Server Density
  System Capacity & Density
  Chassis External Connectivity
  Chassis Internal Connectivity
A Unified DC Architecture
Deployment Scenarios


Blade Chassis UCS 5108 Details


Front view: 8 blade slots (slots 1-8)
Rear view: two Fabric Extenders
Dimensions: 10.5 in (26.7 cm) x 17.5 in (44.5 cm) x 32 in (81.2 cm)
Airflow: front to back


Blade Chassis UCS 5108 Details


UCS 5108 Chassis Characteristics
  8 blade slots: 8 half-width servers or 4 full-width servers
  Up to two Fabric Extenders, both concurrently active
  Power supplies and fan modules: redundant and hot-swappable

Additional chassis details:
  Size: 10.5 in (6U) x 17.5 in x 32 in
  Total power consumption, nominal estimates: half-width servers 1.5-3.5 kW; full-width servers 1.5-3.5 kW
  Cabling (FEX to Fabric Interconnect): SFP+ connectors
    CX1: 3 & 5 meters
    USR: 100 meters (late summer)
    SR: 300 meters
  Airflow: front to back



Blades, Slots, and Mezz Cards


B-Series Blades
  Half-width blade B200-M1: each blade uses one slot
  Full-width blade B250-M1: each blade uses two slots, always 1-2, 3-4, 5-6, or 7-8

Slots and Mezz Cards
  Each slot takes a single mezz card; each mezz card has 2 ports
  Each port connects to one FEX
  Total of 8 mezz cards per chassis

Blades, Slots, and Mezz Cards
  Half-width blade B200-M1: one mezz card, two ports per server; each port goes to a different FEX
  Full-width blade B250-M1: two mezz cards, four ports per server; two ports to each FEX, one from each mezz card
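As a rough illustration of the slot and mezz card rules above, here is a small hypothetical Python model (the names Blade, mezz_ports, and the tuple layout are this sketch's, not UCS terminology) that checks slot placement and lists the ports a blade presents:

```python
# Illustrative model of the UCS 5108 slot/mezz rules described above (not an official API).

FULL_WIDTH_PAIRS = {(1, 2), (3, 4), (5, 6), (7, 8)}  # legal slot pairs for a B250-M1

def mezz_ports(blade_type, slots):
    """Return (slot, mezz_port, fex) tuples for a blade.

    Half-width (B200-M1): one slot, one mezz card, two ports (one to each FEX).
    Full-width (B250-M1): two adjacent slots, two mezz cards, four ports
    (each mezz card sends one port to FEX A and one to FEX B).
    """
    if blade_type == "half-width":
        assert len(slots) == 1 and 1 <= slots[0] <= 8
    elif blade_type == "full-width":
        assert tuple(sorted(slots)) in FULL_WIDTH_PAIRS, "full-width blades use slots 1-2, 3-4, 5-6 or 7-8"
    else:
        raise ValueError(blade_type)
    return [(slot, port, fex) for slot in slots for port, fex in ((1, "A"), (2, "B"))]

print(mezz_ports("half-width", [3]))      # 2 ports, one per FEX
print(mezz_ports("full-width", [5, 6]))   # 4 ports, two per FEX
```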

UCS 2100 Series Fabric Extenders Details


UCS 2104XP Fabric Extender Port Usage
  Two Fabric Extenders per chassis; 4 x 10GE uplinks per Fabric Extender
  Fabric Extender connectivity: 1, 2, or 4 uplinks per Fabric Extender; all uplinks connect to one Fabric Interconnect
  Port combination across Fabric Extenders: the uplink port count must match per enclosure; any port on the FEX can be used

Fabric Extender Capabilities
  Managed as part of the Unified Compute System
  802.1Q trunking and FCoE capabilities
  Uplink traffic distribution: uplinks are selected when blades are inserted; slot assignment occurs at power up; all of a slot's traffic is assigned to a single uplink (see the sketch below)
  Other logic: monitoring and control of environmentals; blade insertion/removal events
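The slide says slot traffic is pinned to a single FEX uplink at bring-up; the sketch below is only a simplified model of that sequential pinning idea, not the FEX's actual assignment logic:

```python
# Simplified illustration of sequential slot-to-uplink pinning on a UCS 2104 FEX.
# The real assignment is made by the system at blade power-up; this only mirrors
# the "sequential" behaviour described on the slide.

def pin_slots(active_uplinks, slots=range(1, 9)):
    """Pin each chassis slot to one of the FEX's active uplinks (1, 2 or 4)."""
    assert active_uplinks in (1, 2, 4), "a 2104 FEX uses 1, 2 or 4 uplinks"
    return {slot: (slot - 1) % active_uplinks + 1 for slot in slots}

for uplinks in (1, 2, 4):
    print(uplinks, "uplink(s):", pin_slots(uplinks))
# With 2 uplinks, slots 1,3,5,7 land on uplink 1 and slots 2,4,6,8 on uplink 2;
# all traffic from a given slot always uses its single pinned uplink.
```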

UCS 6100 Series Fabric Interconnect Details


Fabric Interconnect Overview
  Management of the Compute System
  Network connectivity: to/from compute nodes, to/from LAN/SAN environments

Types of Fabric Interconnect
  UCS-6120XP (1U): 20 fixed 10GE/FCoE ports & 1 expansion slot
  UCS-6140XP (2U): 40 fixed 10GE/FCoE ports & 2 expansion slots

Fabric Interconnect Details
  Expansion modules:
    Fibre Channel: 8 x 1/2/4G FC
    Ethernet: 6 x 10GE SFP+
    Fibre Channel & Ethernet: 4 x 1/2/4G FC & 4 x 10GE SFP+
  Fixed ports: FEX or uplink connectivity
  Expansion slot ports: uplink connectivity only

Rack Capacity and Cabling Density


Space - what is possible (1RU = 1.75 in = 44.45 mm)
  6-foot rack, 42U usable: up to 7 chassis
  7-foot rack, 44U usable: up to 7 chassis

Power - what is realistic (determined by power available per rack)
  2 chassis = 3-6 kW
  3 chassis = 4.5-9 kW
  4 chassis = 6-12 kW
  5 chassis = 7.5-15 kW
  6 chassis = 9-18 kW

Cabling - what is realistic (determined by bandwidth requirements)
  2 chassis = 4-16 uplinks
  3 chassis = 6-24 uplinks
  4 chassis = 8-32 uplinks
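The ranges above follow from per-chassis figures implied by the slide (roughly 1.5-3 kW per chassis, and 2 to 8 uplinks per chassis, i.e. 2 FEX x 1-4 uplinks). A quick sanity-check sketch, which also extends the same cabling arithmetic to 5 and 6 chassis:

```python
# Reproduce the per-rack planning ranges quoted above.
CHASSIS_KW = (1.5, 3.0)        # nominal power per chassis (kW), per the slide's rack figures
CHASSIS_UPLINKS = (2, 8)       # 2 FEX x 1 uplink ... 2 FEX x 4 uplinks

for chassis in range(2, 7):
    kw = (chassis * CHASSIS_KW[0], chassis * CHASSIS_KW[1])
    up = (chassis * CHASSIS_UPLINKS[0], chassis * CHASSIS_UPLINKS[1])
    print(f"{chassis} chassis: {kw[0]:.1f}-{kw[1]:.1f} kW, {up[0]}-{up[1]} uplinks")
# 2 chassis: 3.0-6.0 kW, 4-16 uplinks ... 6 chassis: 9.0-18.0 kW, 12-48 uplinks
```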

Server Density per Rack


Server density (determined by power per rack):
  2 chassis = 8-16 servers
  3 chassis = 12-24 servers
  4 chassis = 16-32 servers
  5 chassis = 20-40 servers
  6 chassis = 24-48 servers

Bandwidth per chassis (determined by application requirements):
  1 uplink per FEX = 20GE
  2 uplinks per FEX = 40GE
  4 uplinks per FEX = 80GE
(both sets of figures are reproduced by the sketch after the table below)

Blades per Rack      3 Chassis   4 Chassis   5 Chassis   6 Chassis
Half-width               24          32          40          48
Full-width               12          16          20          24
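The per-rack blade counts and per-chassis bandwidth figures above are simple products of the chassis characteristics (8 half-width or 4 full-width blades per chassis, 2 FEX per chassis, 10GE per FEX uplink). A small sketch that regenerates them:

```python
# Regenerate the per-rack density and per-chassis bandwidth numbers shown above.
HALF_PER_CHASSIS, FULL_PER_CHASSIS = 8, 4
FEX_PER_CHASSIS, GE_PER_UPLINK = 2, 10

print("Blades per rack:")
for chassis in range(2, 7):
    print(f"  {chassis} chassis: {chassis * HALF_PER_CHASSIS} half-width / "
          f"{chassis * FULL_PER_CHASSIS} full-width")

print("Bandwidth per chassis:")
for uplinks_per_fex in (1, 2, 4):
    bw = uplinks_per_fex * FEX_PER_CHASSIS * GE_PER_UPLINK
    print(f"  {uplinks_per_fex} uplink(s) per FEX: {bw}GE")
```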

UCS Power - Nominal Estimates


UCS Power Specifics
  Fabric Interconnect: UCS-6120XP: 350-450W; UCS-6140XP: 480-750W
  Processors (max consumption): Xeon L5520: 60W; Xeon E5520: 80W; Xeon E5540: 80W; Xeon X5570: 95W
  Mezz cards: 11W to 21W
  Chassis: 210W to 710W
  Operational blade: ~60W to ~395W
  Nominal power range depends on: processor, memory, mezz cards, HD, Turbo mode, workload


UCS Compute Node Density


Unified Compute System density factors:
  Blade server type
  Chassis server density
  Fabric Interconnect density
  Uplinks from Fabric Extenders
  Bandwidth per compute node
  Network oversubscription

Bandwidth vs. Oversubscription
  Bandwidth: traffic load a server needs to support; specified by the server and/or application engineer
  Oversubscription: a measure of network capacity; designed by the network engineer

Multi-homed servers
  More IO ports may not mean more bandwidth; it depends on active-active vs. active-standby

UCS Compute Node Density


Total number of blades per Unified Compute System

Chassis uplink capacity
  Influenced by blade bandwidth requirements and by bandwidth per blade type:
    Half-width: 10GE per blade with 4 uplinks per FEX
    Full-width: 20GE per blade with 4 uplinks per FEX

Uplinks per FEX     UCS-6120 (chassis / blades)    UCS-6140 (chassis / blades)
1 FEX uplink        20 / 80-160                    40 / 160-320
2 FEX uplinks       10 / 40-80                     20 / 80-160
4 FEX uplinks       5 / 20-40                      10 / 40-80
(blade ranges run from full-width to half-width counts)

Influenced by oversubscription
  East-west traffic subscription is 1:1
  North-south traffic subscription needs to be engineered (Fabric Interconnect uplinks)

UCS Compute Density and Bandwidth Capacity


# of servers determined by Fabric Interconnect density and # of uplinks per chassis

Server density range: 20-320
  Half-width: 40-320
  Full-width: 20-160
Bandwidth range per blade: 2.5-20 Gbps
  Half-width: 2.5-10 Gbps
  Full-width: 5-20 Gbps
Max chassis per UCS: 5-40
Max bandwidth per blade: half-width 20 Gbps, full-width 40 Gbps

Uplinks per FEX              1 x 10GE          2 x 10GE          4 x 10GE
UCS-6120  # of chassis           20                10                 5
          B/W per chassis        20G               40G                80G
          Half-width blades      160 @ 2.5 Gbps    80 @ 5 Gbps        40 @ 10 Gbps
          Full-width blades      80 @ 5 Gbps       40 @ 10 Gbps       20 @ 20 Gbps
UCS-6140  # of chassis           40                20                 10
          B/W per chassis        20G               40G                80G
          Half-width blades      320 @ 2.5 Gbps    160 @ 5 Gbps       80 @ 10 Gbps
          Full-width blades      160 @ 5 Gbps      80 @ 10 Gbps       40 @ 20 Gbps
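The table above can be derived from the interconnect's fixed port count (20 on a UCS-6120, 40 on a UCS-6140) and the number of uplinks per FEX. The sketch below reproduces it under the assumption (this sketch's, not stated on the slide) that all fixed ports are used as server-facing chassis ports:

```python
# Derive the chassis / blade / bandwidth table above from first principles.
# Assumes all fixed ports on the interconnect are used as server-facing (chassis) ports.
FI_PORTS = {"UCS-6120": 20, "UCS-6140": 40}   # fixed 10GE ports per fabric interconnect

def density(fi, uplinks_per_fex):
    chassis = FI_PORTS[fi] // uplinks_per_fex      # each chassis lands one FEX on each interconnect
    chassis_bw = 2 * uplinks_per_fex * 10          # both FEX, 10GE per uplink
    half, full = chassis * 8, chassis * 4
    return chassis, chassis_bw, half, chassis_bw / 8, full, chassis_bw / 4

for fi in FI_PORTS:
    for uplinks in (1, 2, 4):
        c, bw, h, bwh, f, bwf = density(fi, uplinks)
        print(f"{fi}, {uplinks} uplink(s)/FEX: {c} chassis @ {bw}G, "
              f"{h} half-width @ {bwh:.1f}G, {f} full-width @ {bwf:.1f}G")
```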

Unified Compute System and Rack Capacity


Higher-density racks: more power per rack, fewer racks per system
Lower-density racks: less power per rack, more racks per system
Racks per UCS range: 1 to 14 racks
Interconnect location: based on the number of racks and the chassis density per rack; dependent on cabling and on Interconnect density

Chassis per Rack        3 Chassis                  4 Chassis                  5 Chassis                  6 Chassis
FEX uplinks 1 x 10GE    6 uplinks, 7 or 14 racks   8 uplinks, 5 or 10 racks   10 uplinks, 4 or 8 racks   12 uplinks, 4 or 7 racks
FEX uplinks 2 x 10GE    12 uplinks, 4 or 7 racks   16 uplinks, 3 or 5 racks   20 uplinks, 2 or 4 racks   24 uplinks, 2 or 4 racks
FEX uplinks 4 x 10GE    24 uplinks, 2 or 4 racks   32 uplinks, 2 or 3 racks   40 uplinks, 1 or 2 racks   48 uplinks, 1 or 2 racks
(uplinks are per rack; racks per system are for a UCS-6120 or UCS-6140 based system respectively)


UCS Fabric Interconnect Location


Horizontal Cabling
Alternatives for Interconnect placement and cabling:
  CX1 (1-5 meters): the Interconnect may be placed in a centralized location mid-row*
  USR (100 meters): the Interconnect may be placed at the end of the row*, near the cross connect
  SR (300 meters): the Interconnect may be placed near a multi-row* interconnect
* Row length calculations are based on 24 in wide racks/cabinets

CX1 cabling, 5 meters (~16 feet)
  2 uplinks per Fabric Extender, 3 chassis per rack
  4 Fabric Interconnects (UCS-6120): 2 x 20 10GE ports for chassis connectivity; 4x10GE & 4x4G-FC north facing
  Server count: 20 chassis x 8 = 160 half-width servers

USR & fiber, 100 meters (~330 feet)
  2 uplinks per Fabric Extender, 3 chassis per rack
  4 Fabric Interconnects (UCS-6120): 2 x 20 10GE ports for chassis connectivity; 4x10GE & 4x4G-FC north facing
  Server count: 20 chassis x 8 = 160 half-width servers



Unified Compute System External Connectivity


(Diagram: Unified Compute Systems in a POD - Fabric Interconnects uplinked to the LAN access/aggregation/core and, via dual fabrics A and B, to SAN edge/core switches and storage arrays.)

Ethernet fabric
  Single fabric; Fabric Interconnect: 10GE attached
  Switch mode or end-host mode
  Interconnect connectivity point: the L3/L2 boundary in all cases (Nexus 7000 & Catalyst 6500)

SAN fabric
  Dual fabrics; Fabric Interconnect: 4G FC attached
  NPV mode
  Interconnect connectivity point: SAN core, or SAN edge for more scalability

Internal Connectivity
Mezz Card Ports and Blades Connectivity
Fabric Extender connectivity (FEX uplinks)
  Connect to a single Fabric Interconnect
  Each port is independently used: ports do not form a port channel; each port is a trunk
  Traffic distribution is based on slot, at bring-up time
  Slot mezz ports map to the connected Fabric Extender:
    FCS: port pinning in a sequential fashion
    Post-FCS: a port could have a dedicated uplink

Mezz card port usage
  Each slot connects to each Fabric Extender
  Each slot supports a dual-port mezz card
  Slots and blades: one slot, one mezz card = one half-width blade; two slots, two mezz cards = one full-width blade
  Slot mezz ports are assigned Fabric Extender uplinks based on available uplinks, in sequence
  Port redundancy is based on the least physical connections

Internal Connectivity
Mezz Cards and Virtual Interfaces Connectivity
(Diagram: vEth interfaces on the Fabric Interconnects mapping to the vnics and vhbas presented by the mezz cards of half-width and full-width blades in each chassis.)

General connectivity
  Port channels are not formed between mezz ports or across mezz cards

Blade connectivity
  Half-width (1 mezz card, 2 ports): vnics mapped to any port; vhbas mapped to 1 port, not redundant*; vhba mapping is round-robin
  Full-width (2 mezz cards, 4 ports): vnics mapped to any port; vhbas are round-robin mapped to the fabrics*

Interface redundancy
  vnic redundancy is done across mezz card ports
  Backup interfaces: mezz port backup within the mezz card; redundancy depends on the mezz card

* Host multipathing software required for redundancy
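A toy model of the vhba placement rule above (illustrative only; UCS Manager does the real mapping): each vhba is pinned to a single fabric, assignments rotate round-robin across fabrics, and because an individual vhba has no port-level failover, storage redundancy needs two vhbas plus host multipathing.

```python
# Toy model of vhba-to-fabric placement described above (illustrative only).

def place_vhbas(vhba_names, fabrics=("A", "B")):
    """Pin each vhba to exactly one fabric, assigned round-robin across fabrics.

    A single vhba has no port-level failover, so storage redundancy comes from
    defining two vhbas and running host multipathing software on top.
    """
    return {name: fabrics[i % len(fabrics)] for i, name in enumerate(vhba_names)}

print(place_vhbas(["vhba0", "vhba1"]))  # {'vhba0': 'A', 'vhba1': 'B'}
```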

Agenda
Overview
Design Considerations
A Unified DC Architecture
  The Unified DC Architecture
  The Virtual Access Layer
  Unified Compute Pods
  Distributed Access Fabric Model
Deployment Scenarios


The Unified DC Architecture


Core (Nexus 7000): L3 boundary to the DC network. Functional point for route summarization, the injection of default routes, and termination of segmented virtual transport networks.

Aggregation (Nexus 7000 with vPC; services via Catalyst 6500 service modules and service appliances): typical L3/L2 boundary. DC aggregation point for uplinks and DC services, offering key features: vPC, VDC, 10GE density, and the first point of migration to 40GE and 100GE.

Access (Nexus 5000 / Nexus 2000): classic network layer providing nonblocking paths to servers & IP storage devices through vPC. It leverages the Distributed Access Fabric (DAF) model to centralize configuration & management and to ease the horizontal cabling demands of 1G and 10GE server environments.

Virtual Access (Nexus 1000V, UCS virtual adapters): a virtual layer of network intelligence offering access-layer-like controls that extend traditional visibility, flexibility, and management into virtual server environments. Virtual network switches bring access-layer switching capabilities to virtual servers without the burden of topology control plane protocols. Virtual adapters provide granular control over virtual and physical server IO resources.


A Unified Compute Pod


A modular, predictable, virtualized compute environment
POD: a modular, repeatable compute environment with predictable scalability & deterministic functions

General Purpose POD: typical enterprise application environment
  Classic client-server applications
  Multi-tier applications: web, app, DB
  Low- to high-density compute environments
  Includes stateful services

Unified Compute POD: a collection of Unified Compute Systems
  IO consolidation
  Workload mobility
  Application flexibility
  Virtualization

The POD concept applies to distinct application environments through a modular approach to building the physical, network, and compute infrastructure in a predictable and repeatable manner. It allows organizations to plan the rollout of distinct compute environments as needed in a shared physical data center, using a pay-as-you-go model.

Physical Infrastructure and Network Topology


Mapping the Physical to the Logical
(Diagram: a DC zone built from 4 PODs laid out across hot and cold aisles; each POD combines network, DAF, storage, and UCS blade/server racks, with ToR, EoR access, and DAF UCS POD variants shown.)

Access Layer Network Model


End of Row, Top of Rack, and Blade Switches

What it used to be: GE access at the end of row, top of rack, or in blade switches
What is emerging: EoR & blades, ToR & blades, DAF & 1U servers, DAF & blades
What Cisco has done: Nexus 2K + Nexus 5K today (7K & 6K in the future); UCS Fabric Extender to Fabric Interconnect

What influences the physical layout
  Primarily: power, cooling, cabling
  Secondarily: access model, port density

Distributed Access Fabric in Blade Environment


Distributed Access Fabric (DAF)

Why Distributed Access Fabric?
  All chassis are managed by the Fabric Interconnect: a single configuration point and a single monitoring point
  Fabric instances per chassis are present at the rack level: fewer management points
  Fabric instances are extensions of the Fabric Interconnect: they are Fabric Extenders
  Simplified cabling infrastructure:
    Horizontal cabling choice - whatever is available, fiber or copper:
      CX1 cabling for brownfield installations, with the Fabric Interconnect centrally located
      USR for greenfield installations, with the Fabric Interconnect at the end of the row near the cross connect
    Vertical cabling: just an in-rack patch cable


Network Equipment Distribution


EoR, ToR, Blade Switches, and Distributed Access Fabric

EoR
  Network fabric & location: modular switch at the end of a row of server racks
  Cabling: copper, server to access; fiber, access to aggregation
  Port density: 240-336 ports (Catalyst 6500); 288-672 ports (Nexus 7000)
  Rack server density: 6-12 multi-RU servers
  Tiers in the POD: typically 2 (access and aggregation)

ToR
  Network fabric & location: low-RU, lower port density switch per server rack
  Cabling: copper, server to ToR switch; fiber, ToR to aggregation
  Port density: 40-48 GE ports (Catalyst 49xx)
  Rack server density: 8-30 1RU servers
  Tiers in the POD: typically 2 (access and aggregation)

Blade Switches
  Network fabric & location: switches integrated into the blade enclosures in each server rack
  Cabling: copper or fiber, access to aggregation
  Port density: 14-16 servers (dual-homed)
  Rack server density: 12-48 blade servers (3 blade enclosures per rack)
  Tiers in the POD: typically 2 (access and aggregation)

DAF
  Network fabric & location: access fabric on top of rack & access switch at end of row; ranges from classic EoR to ToR designs
  Cabling: copper or fiber in-rack patch; fiber from the access fabric to the fabric switch
  Port density: 20-4000 GE ports (Nexus 2K-5K)
  Rack server density: applicable to low and high density server racks; most flexible
  Tiers in the POD: one or two - collapsed access/aggregation, or classic aggregation & access


Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios
  Overall Architecture
  Ethernet Environments: Switch Mode, EHM
  UIO Environments


UCS and Network Environments



LAN & SAN Deployment Considerations

Deployment options and related details:
  Uplinks per Fabric Extender: oversubscription & bandwidth
  Uplinks per Fabric Interconnect: flavor of expansion module; 2 or 4 FC uplinks per fabric, 2 or 3 10GE uplinks to each upstream switch
  Fabric Interconnect FC uplinks: 4G FC, 8 or 4 ports, and N-port channels
  Mezz cards (CNAs): up to 10G of FC (FCoE); Ethernet traffic could take the full capacity if needed
  Fabric Interconnect connectivity point: the Fabric Interconnect is the access layer and should connect to the L2/L3 boundary switch

UCS Fabric Interconnect Operation Modes


Switch Mode & End-Host Mode

Switch mode
  The Fabric Interconnect behaves like any other Ethernet switch and participates in the STP topology; follow STP design best practices
  Provides L2 switching functionality
  Uplink capacity is based on port channels: 6140 (2 expansion slots): 8-port port channels; 6120 (1 expansion slot): 6-port port channels

End-host mode
  The switch looks like a host to the upstream LAN but still performs L2 switching
  MAC addresses are active on one link at a time: MAC addresses are pinned to uplinks, and MAC learning does not take place on the uplinks
  Forms a loop-free topology
  UCS Manager syncs the Fabric Interconnects
  Uplink capacity: 6140: 12-way (12 uplinks); 6120: 6-way (6 uplinks)
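To make the end-host-mode behaviour concrete, here is a minimal sketch of the idea: each server MAC is pinned to exactly one uplink, so the upstream LAN only ever sees a given MAC on one link and no STP is needed toward it. The pinning function is illustrative, not the actual UCS algorithm.

```python
# Minimal illustration of end-host-mode uplink pinning (not the actual UCS algorithm).

def pin_macs(server_macs, uplinks):
    """Pin each server MAC to exactly one uplink, so it is active on one link at a time."""
    return {mac: uplinks[i % len(uplinks)] for i, mac in enumerate(sorted(server_macs))}

uplinks = [f"Eth-uplink-{n}" for n in range(1, 7)]          # e.g. a 6120 in 6-way EHM
macs = ["0025.b500.0001", "0025.b500.0002", "0025.b500.0003"]
print(pin_macs(macs, uplinks))
# Upstream switches never see the same MAC on two uplinks, so no loop forms and
# no STP is required between the fabric interconnect and the LAN.
```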

UCS on Ethernet Environments


Fabric Interconnect & Network Topologies

Classic STP topologies
  The Fabric Interconnect runs STP and participates in the STP topology
  Follow STP design best practices

Simplified topologies
  Upstream switches provide a nonblocking path mechanism: VSS or vPC
  Interaction with VSS or vPC: no special feature needed; the topology is loop-free; less reliance on STP
  Increased overall bandwidth capacity: 8-way multi-pathing

Upstream devices: any L3/L2 boundary switch (Catalyst 6500, Nexus 7000)

UCS on Ethernet Environments


Connectivity Point

(Diagram: two attachment options - Fabric Interconnects in end-host mode uplinked to ToR access switches, and Fabric Interconnects in switch mode uplinked directly to the L3/L2 aggregation layer - with wire-speed 10GE port counts and oversubscription rates shown per tier.)

Fabric Interconnect to ToR access switches
  Fabric Interconnect in end-host mode: no STP
  Leverages total uplink capacity: 60G or 120G
  The L2 topology remains 2-tier
  Scalability:
    6140 pair: 10 chassis = 80 compute nodes, 3.3:1 subscription
    5040 pair: 6 UCS systems = 480 compute nodes, 15:1 subscription
    7018 pair: 13 5040 pairs = 6,240 compute nodes
    Enclosure: 80GE; compute node: 10GE attached

Fabric Interconnect to aggregation switches
  Fabric Interconnect in switch mode
  Leverages total uplink capacity: 60G or 80G
  The L2 topology remains 2-tier
  Scalability:
    6140 pair: 10 chassis = 80 compute nodes, 5:1 subscription
    7018 pair: 14 6140 pairs = 1,120 compute nodes
    Enclosure: 80GE; compute node: 10GE attached

UCS on Ethernet Environments


Connectivity Point

What other factors need to be considered
  East-west traffic profiles
  Size of the L2 domain: VLANs, MACs, number of ports, servers, switches, etc.
  Tiers in the L2 topology: STP domain, diameter
  Oversubscription
  Latency
  Server virtualization load: MAC addresses, VLANs, etc.
  Switch mode vs. end-host mode

Unified Compute System POD Example


Ethernet Only - Switch Mode

Access layer: UCS-6120 x 2
  52 10GE 1:1 ports: 12 uplinks & 40 downlinks = 5 chassis
  5 chassis = 40 half-width servers
  Chassis subscription: 16:8 = 2:1; blade bandwidth: 10G
  Access-to-aggregation oversubscription: 20:6 ~ 3.3:1

Access layer: UCS-6140 x 2
  104 10GE 1:1 ports: 12 uplinks & 80 downlinks = 10 chassis
  10 chassis = 80 half-width servers
  Chassis subscription: 16:8 = 2:1; blade bandwidth: 10G
  Access-to-aggregation oversubscription: 40:8 ~ 5:1

Aggregation layer: 7010 + vPC
  128 10GE 1:1 ports = 8 UCS = 320 servers

Aggregation layer: 7018 + vPC
  256 10GE 1:1 ports = 14 UCS = 1,120 servers
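The subscription figures above are plain port-count ratios (16 mezz ports vs. 8 chassis uplinks, and server-facing ports vs. uplinks per interconnect); a quick sketch that recomputes them:

```python
# Recompute the oversubscription ratios used in the POD example above.

def ratio(down_gbps, up_gbps):
    return down_gbps / up_gbps

# Chassis subscription: 8 half-width blades x 2 x 10GE mezz ports vs. 8 x 10GE chassis uplinks
print(f"chassis: {ratio(16 * 10, 8 * 10):.1f}:1")           # 2.0:1

# Access-to-aggregation, per fabric interconnect (downlinks x 10GE vs. uplinks x 10GE)
print(f"UCS-6120: {ratio(20 * 10, 6 * 10):.1f}:1")          # ~3.3:1
print(f"UCS-6140: {ratio(40 * 10, 8 * 10):.1f}:1")          # 5.0:1
```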

UCS POD Density


Ethernet Only
  Nexus 7010 aggregation POD: 320 - 1,280 blades
  Nexus 7018 aggregation POD: 1,120 - 4,480 blades

Nexus 7018: 256 10GE 1:1 ports per pair
  ~18 UCS-6120 pairs (6 uplinks, 20 downlinks each): 2,880 / 1,440 / 720 blades at 1 / 2 / 4 FEX uplinks
  ~14 UCS-6140 pairs (8 uplinks, 40 downlinks each): 4,480 / 2,240 / 1,120 blades at 1 / 2 / 4 FEX uplinks

Nexus 7010: 128 10GE 1:1 ports per pair
  ~8 UCS-6120 pairs: 1,280 / 640 / 320 blades
  ~6 UCS-6140 pairs: 1,920 / 960 / 480 blades

Per UCS pair, by FEX uplinks (1 x 10GE = 2.5G per slot, 2 x 10GE = 5G per slot, 4 x 10GE = 10G per slot):
  UCS-6120: 20 chassis / 160 blades, 10 chassis / 80 blades, 5 chassis / 40 blades
  UCS-6140: 40 chassis / 320 blades, 20 chassis / 160 blades, 10 chassis / 80 blades


SAN Environments
NPV Mode
  Useful for addressing domain scalability (no domain ID needed on the fabric switch)
  SAN-fabric agnostic
  N-port on the Fabric Interconnect side

Connecting point:
  Based on scalability and port density
  NPIV could add significant load: FLOGI, zoning, other services

Core layer: scalability tends to be constrained by its limited port count
Edge layer: more scalable, because the load is spread across multiple switches

UCS on a Storage Network


FC / FCoE Only - NPV Mode

Core: MDS 9509
  12 x 4G FC x 7 = 84 4G FC ports = 21 UCS = 1,680 servers (6120-based edge)
  12 x 4G FC x 7 = 84 4G FC ports = 10 UCS = 1,600 servers (6140-based edge)

Edge: UCS-6120 x 2
  40 10GE 1:1 ports, 8 x 4G FC uplinks = 10 chassis
  10 chassis = 80 half-width servers
  Chassis bandwidth: 4 x 10GE = 5G per blade
  6120 FC subscription: 20 x 10GE downstream (200G) vs. 8 x 4G FC (32G) = 6.25:1

Edge: UCS-6140 x 2
  80 10GE 1:1 ports, 16 x 4G FC uplinks = 20 chassis
  20 chassis = 160 half-width servers
  Chassis bandwidth: 4 x 10GE = 5G per blade
  6140 FC subscription: 40 x 10GE downstream (400G) vs. 16 x 4G FC (64G) = 6.25:1
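Reading the 6.25:1 figures above as total 10GE server-facing capacity against total 4G FC uplink capacity per fabric interconnect - which is this sketch's assumption about the garbled original - the arithmetic works out as follows:

```python
# FC oversubscription at the SAN edge, per fabric interconnect.
# Assumes the 6.25:1 figures compare total 10GE server-facing capacity
# against total 4G FC uplink capacity (an interpretation of the slide).

def fc_subscription(ten_ge_downlinks, fc_uplinks, fc_speed_g=4):
    return (ten_ge_downlinks * 10) / (fc_uplinks * fc_speed_g)

print(f"UCS-6120: {fc_subscription(20, 8):.2f}:1")   # 200G : 32G = 6.25:1
print(f"UCS-6140: {fc_subscription(40, 16):.2f}:1")  # 400G : 64G = 6.25:1
```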

UCS Density on LAN/SAN Environments


(Diagram: Unified Compute Systems attached to the LAN core/aggregation and to SAN fabrics A and B; example ratios shown: 40 x 10GE : 8 x 10GE = 5:1 toward the LAN, and 40 x 10GE : 8 x 4G FC = 12.5:1 toward the SAN.)

LAN environment
  Using a 7018 pair at the aggregation layer: 28 UCS based on 6120s, or 14 UCS based on 6140s
  Using 6120s in the access: 5 chassis (4 ports per FEX), 40 blades
  Using 6140s in the access: 10 chassis (4 ports per FEX), 80 blades

SAN environment
  Using a 9509 at the aggregation layer: 21 UCS based on 6120s, or 10 UCS based on 6140s
  Using 6120s in the access: 5 chassis (4 ports per FEX), 40 blades
  Using 6140s in the access: 10 chassis (4 ports per FEX), 80 blades

Conclusion
End-host mode vs. switch mode
  EHM to an access switch is possible
  Switch mode to the L2/L3 boundary switch is appropriate
  If aggregating to a ToR switch is desired, use EHM

Oversubscription & bandwidth
  Understand both: network folks think in oversubscription; server folks think in bandwidth

Get trained early on
  Lots of new and useful technology; the architecture is rapidly evolving

Interested in Data Center?


Discover the Data Center of the Future
  Cisco booth #617: see a simulated data center and discover the benefits, including investing to save, energy efficiency, and innovation.

Data Center Booth
  Come by and see what's happening in the world of Data Center: demos, social media activities, bloggers, author signings.
  Demos include: Unified Computing System, Cisco on Cisco, Data Center Interactive Tour, Unified Service Delivery for Service Providers, Advanced Services.


Interested in Data Center?


Data Center Super Session
  Data Center Virtualization Architectures, Road to Cloud Computing (UCS)
  Wednesday, July 1, 2:30-3:30 pm, Hall D
  Speakers: John McCool and Ed Bugnion

Panel: 10 Gig LOM - Wednesday 08:00 AM, Moscone S303
Panel: Next Generation Data Center - Wednesday 04:00 PM, Moscone S303
Panel: Mobility in the Data Center - Thursday 08:00 AM, Moscone S303


Please Visit the Cisco Booth in the World of Solutions


See the technology in action
Data Center and Virtualization
DC1: Cisco Unified Computing System
DC2: Data Center Switching: Cisco Nexus and Catalyst
DC3: Unified Fabric Solutions
DC4: Data Center Switching: Cisco Nexus and Catalyst
DC5: Data Center 3.0: Accelerate Your Business, Optimize Your Future
DC6: Storage Area Networking: MDS
DC7: Application Networking Systems: WAAS and ACE


Complete Your Online Session Evaluation


Give us your feedback and you could win fabulous prizes. Winners announced daily. Receive 20 Passport points for each session evaluation you complete. Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.

