About This eBook
ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many
features varies across reading devices and applications. Use your device or app settings to customize
the presentation to your liking. Settings that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For
additional information about the settings and features on your reading device or app, visit the device
manufacturer’s Web site.
Many titles include programming code or configuration examples. To optimize the presentation of
these elements, view the eBook in single-column, landscape mode and adjust the font size to the
smallest setting. In addition to presenting code and configurations in the reflowable text format, we
have included images of the code that mimic the presentation found in the print book; therefore, where
the reflowable format may compromise the presentation of the code listing, you will see a “Click here
to view code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.

CCNA Data Center
DCICT 640-916 Official Cert Guide

NAVAID SHAMSEE, CCIE No. 12625
DAVID KLEBANOV, CCIE No. 13791
HESHAM FAYED, CCIE No. 9303
AHMED AFROSE
OZDEN KARAKOK, CCIE No. 6331

Cisco Press
800 East 96th Street
Indianapolis, IN 46240

CCNA Data Center DCICT 640-916 Official Cert Guide
Navaid Shamsee, David Klebanov, Hesham Fayed, Ahmed Afrose, Ozden Karakok
Copyright © 2015 Pearson Education, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, recording, or by any information storage
and retrieval system, without written permission from the publisher, except for the inclusion of brief
quotations in a review.
Printed in the United States of America
First Printing February 2015
Library of Congress Control Number: 2015930189
ISBN-13: 978-1-58714-422-6
ISBN-10: 1-58714-422-0

Warning and Disclaimer
This book is designed to provide information about the 640-916 DCICT exam for CCNA Data Center
certification. Every effort has been made to make this book as complete and as accurate as possible,
but no warranty or fitness is implied.
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc.
shall have neither liability nor responsibility to any person or entity with respect to any loss or
damages arising from the information contained in this book or from the use of the discs or programs
that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco
Systems, Inc.

Trademark Acknowledgements
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this
information. Use of a term in this book should not be regarded as affecting the validity of any
trademark or service mark.

Special Sales
For information about buying this title in bulk quantities, or for special sales opportunities (which
may include electronic versions; custom cover designs; and content particular to your business,
training goals, marketing focus, or branding interests), please contact our corporate sales department at corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact international@pearsoned.com.

Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each
book is crafted with care and precision, undergoing rigorous development that involves the unique
expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how
we could improve the quality of this book, or otherwise alter it to better suit your needs, you can
contact us through email at feedback@ciscopress.com. Please make sure to include the book title and
ISBN in your message.
We greatly appreciate your assistance.
Publisher: Paul Boger
Associate Publisher: Dave Dusthimer
Business Operation Manager, Cisco Press: Jan Cornelssen
Executive Editor: Mary Beth Ray
Managing Editor: Sandra Schroeder
Development Editor: Ellie Bru
Senior Project Editor: Tonya Simpson
Copy Editor: Barbara Hacha
Technical Editor: Frank Dagenhardt
Editorial Assistant: Vanessa Evans
Cover Designer: Mark Shirar
Composition: Mary Sudul
Indexer: Ken Johnson
Proofreader: Debbie Williams

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.

Singapore
Europe Headquarters
Cisco Systems International BV
Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed
on the Cisco Website at www.cisco.com/go/offices.
CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco Nexus, Cisco
StadiumVision, Cisco TelePresence, Cisco WebEx, DCE, and Welcome to the Human Network are
trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks;
and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP,
CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo,
Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me
Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort,
the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,
MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective
owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company. (0812R)

About the Authors
Navaid Shamsee, CCIE No. 12625, is a senior solutions architect in the Cisco Services organization.
He holds a master’s degree in telecommunication and a bachelor’s degree in electrical engineering.
He also holds a triple CCIE in routing and switching, service provider, and data center technologies.
Navaid has extensive experience in designing and implementing many large-scale enterprise and
service provider data centers. At Cisco, Navaid focuses on the security of data center, cloud, and
software-defined networking technologies. You can reach Navaid on Twitter: @NavaidShamsee.
David Klebanov, CCIE No. 13791 (Routing and Switching), is a technical solutions architect with
Cisco Systems. David has more than 15 years of diverse industry experience architecting and
deploying complex network environments. In his work, David influences strategic development of the
industry-leading data center switching platforms, which lay the foundation for the next generation of
data center fabrics. David also takes great pride in speaking at industry events, releasing
publications, and working on patents. You can reach David on Twitter: @DavidKlebanov.
Hesham Fayed, CCIE No. 9303 (Routing and Switching/Data Center), is a consulting systems
engineer for data center and virtualization based in California. Hesham has been with Cisco for more
than 9 years and has 18 years of experience in the computer industry, working with service providers
and large enterprises. His main focus is working with customers in the western region of the United
States to address their challenges by designing end-to-end data center architectures.
Ahmed Afrose is a solutions architect with the Cisco Cloud and IT Transformation (CITT) services
organization. He is responsible for providing architectural design guidance and leading complex
multitech service deliveries. Furthermore, he is involved in demonstrating the Cisco value
propositions in cloud software, application automation, software-defined data centers, and Cisco
Unified Computing System (UCS). He is experienced in server operating systems and virtualization
technologies and holds various certifications. Ahmed has a bachelor’s degree in Information Systems.
He started his career with Sun Microsystems-based technologies and has 15 years of diverse
experience in the industry. He was also directly responsible for setting up Cisco UCS Advanced
Services delivery capabilities while evangelizing the product in the region.
Ozden Karakok, CCIE No. 6331, is a technical leader on the data center products and
technologies team in the Technical Assistance Center (TAC). Ozden has been with Cisco Systems for
15 years and specializes in storage area and data center networks. Prior to joining Cisco, Ozden spent
five years working for a number of Cisco’s large customers in various telecommunication roles.
Ozden is a Cisco Certified Internetwork Expert in routing and switching, SNA/IP, and storage. She
holds VCP and ITIL certifications and is a frequent speaker at Cisco and data center events. She
holds a degree in computer engineering from Istanbul Bogazici University. Currently, she works on
Application Centric Infrastructure (ACI) and enjoys being a mother of two wonderful kids.

About the Technical Reviewer
Frank Dagenhardt, CCIE No. 42081, is a systems engineer for Cisco, focusing primarily on data
center architectures in commercial select accounts. Frank has more than 19 years of experience in
information technology and holds certifications from HP, Microsoft, Citrix, and Cisco. A Cisco
veteran of more than eight years, he works with customers daily, designing, implementing, and
supporting end-to-end architectures and solutions, such as workload mobility, business continuity,
and cloud/orchestration services. In recent months, he has been focusing on Tier 1 applications,
including VDI, Oracle, and SAP. He presented on the topic of UCS/VDI at Cisco Live 2013. When
not thinking about data center topics, Frank can be found backpacking, kayaking, or at home spending
time with his wife Sansi and sons Frank and Everett.

Dedications
Navaid Shamsee: To my parents, for their guidance and prayers. To my wife, Hareem, for her love
and support, and to my children, Ahsan, Rida, and Maria.
David Klebanov: This book is dedicated to my beautiful wife, Tanya, and our two wonderful
daughters, Lia and Maya. You fill my life with great joy and make everything worth doing! In the
words of a song: “Everything I do, I do it for you.” This book is also dedicated to all of you who
relentlessly pursue the path of becoming Cisco-certified professionals. Through your dedication you
build your own success. You are the industry leaders!
Hesham Fayed: This book is dedicated to my lovely wife, Eman, and my beautiful children, Ali,
Laila, Hamza, Fayrouz, and Farouk. Thank you for your understanding and support, especially when I
was working on the book during our vacation to meet deadlines. Without your support and
encouragement I would never have been able to finish this book. I can’t forget to acknowledge my
parents; your guidance and education, and the way you always made me strive to be better, are what
helped me in my journey.
Ahmed Afrose: I dedicate this book to my parents, Hilal and Faiziya. Without them, none of this
would have been possible, and also to those people who are deeply curious and advocate self-advancement.
Ozden Karakok: To my loving husband, Tom, for his endless support, encouragement, and love.
Merci beaucoup, Askim.
To Remi and Mira, for being the most incredible miracles of my life and being my number one source
of happiness.
To my wonderful parents, who supported me in every chapter of my life and are an inspiration for
life.
To my awesome sisters, Gulden and Cigdem, for their unconditional support and loving me just the
way I am.
To the memory of Steve Ritchie, for being the best mentor ever.

Acknowledgements
To my co-authors, Afrose, David, Hesham, and Ozden: Thank you for your hard work and dedication.
It was my pleasure and honor to work with such a great team of co-authors. Without your support, this
book would not have been possible.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback and suggestions. Your input helped us improve the quality and accuracy of the content.
To Mary Beth Ray, Ellie Bru, Tonya Simpson, and the Cisco Press team: A big thank you to the entire
Cisco Press team for all their support in getting this book published. Special thanks to Mary Beth and
Ellie for keeping us on track and guiding us in the journey of writing this book.
—Navaid Shamsee
My deepest gratitude goes to all the people who have shared my journey over the years and who have
inspired me to always strive for more. I have been truly blessed to have worked with the most
talented group of individuals.
—David Klebanov
I’d like to thank my co-authors Navaid, Afrose, Ozden, and David for working as a team to complete
this project. A special thanks goes to Navaid for asking me to be a part of this book. Afrose, thank
you for stepping in to help me complete Chapter 18 so we could meet the deadline; really, thank you.
Mary Beth and Ellie, thank you both for your patience and support through my first publication. It has
been an honor working with you both, and I have learned a lot during this process.
I want to thank my family for their support and patience while I was working on this book. I know it
was not easy, especially on holidays and weekends, when I had to stay in to work on the book rather
than take you out.
—Hesham Fayed
The creation of this book, my first publication, has certainly been a tremendous experience, a source
of inspiration, and a whirlwind of activity. I do hope to impart more knowledge and experience in the
future on different pertinent topics in this ever-changing world of data center technologies and
practices.
Navaid Shamsee has been a good friend and colleague. I am so incredibly thankful to him for this
awesome opportunity. He has helped me achieve a milestone in my career by offering me the
opportunity to be associated with this publication and the world-renowned Cisco Press.
I’m humbled by this experienced and talented team of co-authors: Navaid, David, Hesham, and
Ozden. I enjoyed working with a team of like-minded professionals.
I am also thankful to all our professional editors, especially Mary Beth Ray and Ellie Bru for their
patience and guidance every step of the way. A big thank you to all the folks involved in production,
publishing, and bringing this book to the shelves.
I would also like to thank our technical reviewer, Frank Dagenhardt, for his keenness and attention to
detail; his input helped gauge depth and consistency and improved the overall quality of the material
for readers.
And last, but not least, a special thanks to my girlfriend, Anna, for understanding all the hours I
needed to spend over my keyboard, almost ignoring her. You still cared, kept me energized, and
encouraged me with a smile. You were always in my mind; thanks for the support, honey.
—Ahmed Afrose
This book would never have become a reality without the help, support, and advice of a great number
of people.
I would like to thank my great co-authors Navaid, David, Afrose, and Hesham. Thank you for your
excellent collaboration, hard work, and priceless time. I truly enjoyed working with each one of you. I
really appreciate Navaid taking the lead and being the glue of this diverse team. It was a great
pleasure and honor working with such professional engineers. It would have been impossible to
finish this book without your support.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback, suggestions, hard work, and quick turnaround. Your excellent input helped us improve the
quality and accuracy of the content. It was a great pleasure for all of us to work with you.
To Mary Beth, Ellie, and the Cisco Press team: A big thank you to the entire Cisco Press team for all
their support in getting this book published. Special thanks to Mary Beth and Ellie for keeping us on
track and guiding us in the journey of writing this book.
To the extended teams at Cisco: Thank you for responding to our emails and phone calls even during
the weekends. Thank you for being patient while our minds were in the book. Thank you for covering
the shifts. Thank you for believing in and supporting us on this journey. Thank you for the world-class
organization at Cisco.
To my colleagues Mike Frase, Paul Raytick, Mike Brown, Robert Hurst, Robert Burns, Ed Mazurek,
Matthew Winnett, Matthias Binzer, Stanislav Markovic, Serhiy Lobov, Barbara Ledda, Levent
Irkilmez, Dmitry Goloubev, Roland Ducomble, Gilles Checkroun, Walter Dey, Tom Debruyne, and
many wonderful engineers: Thank you for being supportive in different stages of my career. I learned
a lot from you.
To our Cisco customers and partners: Thank you for challenging us to be the number one IT company.
To all extended family and friends: Thank you for your patience and giving us moral support in this
long journey.
—Ozden Karakok

Contents at a Glance
Introduction
Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
Chapter 2 Cisco Nexus Product Family
Chapter 3 Virtualizing Cisco Network Devices
Chapter 4 Data Center Interconnect
Chapter 5 Management and Monitoring of Cisco Nexus Devices
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
Chapter 7 Cisco MDS Product Family
Chapter 8 Virtualizing Storage
Chapter 9 Fibre Channel Storage-Area Networking
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
Chapter 11 Data Center Bridging and FCoE
Chapter 12 Multihop Unified Fabric
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
Chapter 14 Cisco Unified Computing System Manager
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
Chapter 18 Cisco Nexus 1000V and Virtual Switching
Part V Review

Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
Chapter 20 Cisco WAAS and Application Acceleration
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Contents
Introduction
Getting Started
A Brief Perspective on the CCNA Data Center Certification Exam
Suggestions for How to Approach Your Study with This Book
This Book Is Compiled from 20 Short Read-and-Review Sessions
Practice, Practice, Practice—Did I Mention: Practice?
In Each Part of the Book You Will Hit a New Milestone
Use the Final Review Chapter to Refine Skills
Set Goals and Track Your Progress
Other Small Tasks Before Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Modular Approach in Network Design
The Multilayer Network Design
Access Layer
Aggregation Layer
Core Layer
Collapse Core Design
Spine and Leaf Design
Port Channel
What Is Port Channel?
Benefits of Using Port Channels
Port Channel Compatibility Requirements
Link Aggregation Control Protocol
Port Channel Modes
Configuring Port Channel
Port Channel Load Balancing
Verifying Port Channel Configuration
Virtual Port Channel
What Is Virtual Port Channel?
Benefits of Using Virtual Port Channel
Components of vPC

vPC Data Plane Operation
vPC Control Plane Operation
vPC Limitations
Configuration Steps of vPC
Verification of vPC
FabricPath
Spanning Tree Protocol
What Is FabricPath?
Benefits of FabricPath
Components of FabricPath
FabricPath Frame Format
FabricPath Control Plane
FabricPath Data Plane
Conversational Learning
FabricPath Packet Flow Example
Unknown Unicast Packet Flow
Known Unicast Packet Flow
Virtual Port Channel Plus
FabricPath Interaction with Spanning Tree
Configuring FabricPath
Verifying FabricPath
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 2 Cisco Nexus Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Describe the Cisco Nexus Product Family
Cisco Nexus 9000 Family
Cisco Nexus 9500 Family
Chassis
Supervisor Engine
System Controller
Fabric Modules
Line Cards
Power Supplies

Fan Trays
Cisco QSFP Bi-Di Technology for 40 Gbps Migration
Cisco Nexus 9300 Family
Cisco Nexus 7000 and Nexus 7700 Product Family
Cisco Nexus 7004 Series Switch Chassis
Cisco Nexus 7009 Series Switch Chassis
Cisco Nexus 7010 Series Switch Chassis
Cisco Nexus 7018 Series Switch Chassis
Cisco Nexus 7706 Series Switch Chassis
Cisco Nexus 7710 Series Switch Chassis
Cisco Nexus 7718 Series Switch Chassis
Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Cisco Nexus 7000 Series Supervisor 1 Module
Cisco Nexus 7000 Series Supervisor 2 Module
Cisco Nexus 7000 and Nexus 7700 Fabric Modules
Cisco Nexus 7000 and Nexus 7700 Licensing
Cisco Nexus 7000 and Nexus 7700 Line Cards
Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW DC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW DC Power Supply Module
Cisco Nexus 6000 Product Family
Cisco Nexus 6001P and Nexus 6001T Switches and Features
Cisco Nexus 6004 Switches Features
Cisco Nexus 6000 Switches Licensing Options
Cisco Nexus 5500 and Nexus 5600 Product Family
Cisco Nexus 5548P and 5548UP Switches Features
Cisco Nexus 5596UP and 5596T Switches Features
Cisco Nexus 5500 Products Expansion Modules
Cisco Nexus 5600 Product Family
Cisco Nexus 5672UP Switch Features
Cisco Nexus 56128P Switch Features
Cisco Nexus 5600 Expansion Modules
Cisco Nexus 5500 and Nexus 5600 Licensing Options
Cisco Nexus 3000 Product Family
Cisco Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
Cisco Dynamic Fabric Automation
Summary
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 3 Virtualizing Cisco Network Devices
“Do I Know This Already?” Quiz
Foundation Topics
Describing VDCs on the Cisco Nexus 7000 Series Switch
VDC Deployment Scenarios
Horizontal Consolidation Scenarios
Vertical Consolidation Scenarios
VDCs for Service Insertion
Understanding Different Types of VDCs
Interface Allocation
VDC Administration
VDC Requirements
Verifying VDCs on the Cisco Nexus 7000 Series Switch
Describing Network Interface Virtualization
Cisco Nexus 2000 FEX Terminology
Nexus 2000 Series Fabric Extender Connectivity
VN-Tag Overview
Cisco Nexus 2000 FEX Packet Flow
Cisco Nexus 2000 FEX Port Connectivity
Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 4 Data Center Interconnect
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to Overlay Transport Virtualization
OTV Terminology

OTV Control Plane
Multicast Enabled Transport Infrastructure
Unicast-Enabled Transport Infrastructure
Data Plane for Unicast Traffic
Data Plane for Multicast Traffic
Failure Isolation
STP Isolation
Unknown Unicast Handling
ARP Optimization
OTV Multihoming
First Hop Redundancy Protocol Isolation
OTV Configuration Example with Multicast Transport
OTV Configuration Example with Unicast Transport
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 5 Management and Monitoring of Cisco Nexus Devices
“Do I Know This Already?” Quiz
Foundation Topics
Operational Planes of a Nexus Switch
Data Plane
Store-and-Forward Switching
Cut-Through Switching
Nexus 5500 Data Plane Architecture
Control Plane
Nexus 5500 Control Plane Architecture
Control Plane Policing
Control Plane Analyzer
Management Plane
Nexus Management and Monitoring Features
Out-of-Band Management
Console Port
Connectivity Management Processor
Management Port (mgmt0)
In-band Management

Role-Based Access Control
User Roles
Rules
User Role Policies
RBAC Characteristics and Guidelines
Privilege Levels
Simple Network Management Protocol
SNMP Notifications
SNMPv3
Remote Monitoring
RMON Alarms
RMON Events
Syslog
Embedded Event Manager
Event Statements
Action Statements
Policies
Generic Online Diagnostics (GOLD)
Smart Call Home
Nexus Device Configuration and Verification
Perform Initial Setup
In-Service Software Upgrade
ISSU on the Cisco Nexus 5000
ISSU on the Cisco Nexus 7000
Important CLI Commands
Command-Line Help
Automatic Command Completion
Entering and Exiting Configuration Context
Display Current Configuration
Command Piping and Parsing
Feature Command
Verifying Modules
Verifying Interfaces
Viewing Logs
Viewing NVRAM logs
Exam Preparation Tasks
Review All Key Topics

Complete Tables and Lists from Memory
Define Key Terms
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
“Do I Know This Already?” Quiz
Foundation Topics
What Is a Storage Device?
What Is a Storage-Area Network?
How to Access a Storage Device
Block-Level Protocols
File-Level Protocols
Putting It All Together
Storage Architectures
Scale-up and Scale-out Storage
Tiered Storage
SAN Design
SAN Design Considerations
1. Port Density and Topology Requirements
2. Device Performance and Oversubscription Ratios
3. Traffic Management
4. Fault Isolation
5. Control Plane Scalability
SAN Topologies
Fibre Channel
FC-0: Physical Interface
FC-1: Encode/Decode
Encoding and Decoding
Ordered Sets
FC-2: Framing Protocol
Fibre Channel Service Classes
Fibre Channel Flow Control
FC-3: Common Services
Fibre Channel Addressing
Switched Fabric Address Space
Fibre Channel Link Services

Fibre Channel Fabric Services
FC-4: ULPs—Application Protocols
Fibre Channel—Standard Port Types
Virtual Storage-Area Network
Dynamic Port VSAN Membership
Inter-VSAN Routing
Fibre Channel Zoning and LUN Masking
Device Aliases Versus Zone-Based (FC) Aliases
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 7 Cisco MDS Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Once Upon a Time There Was a Company Called Andiamo
The Secret Sauce of Cisco MDS: Architecture
A Day in the Life of a Fibre Channel Frame in MDS 9000
Lossless Networkwide In-Order Delivery Guarantee
Cisco MDS Software and Storage Services
Flexibility and Scalability
Common Software Across All Platforms
Multiprotocol Support
Virtual SANs
Intelligent Fabric Applications
Network-Assisted Applications
Cisco Data Mobility Manager
I/O Accelerator
XRC Acceleration
Network Security
Switch and Host Authentication
IP Security for FCIP and iSCSI
Cisco TrustSec Fibre Channel Link Encryption
Role-Based Access Control
Port Security and Fabric Binding
Zoning

Availability
Nondisruptive Software Upgrades
Stateful Process Failover
ISL Resiliency Using Port Channels
iSCSI, FCIP, and Management-Interface High Availability
Port Tracking for Resilient SAN Extension
Manageability
Open APIs
Configuration and Software-Image Management
N-Port Virtualization
Autolearn for Network Security Configuration
FlexAttach
Cisco Data Center Network Manager Server Federation
Network Boot for iSCSI Hosts
Internet Storage Name Service
Proxy iSCSI Initiator
iSCSI Server Load Balancing
IPv6
Traffic Management
Quality of Service
Extended Credits
Virtual Output Queuing
Fibre Channel Port Rate Limiting
Load Balancing of Port Channel Traffic
iSCSI and SAN Extension Performance Enhancements
FCIP Compression
FCIP Tape Acceleration
Serviceability, Troubleshooting, and Diagnostics
Cisco Switched Port Analyzer and Cisco Fabric Analyzer
SCSI Flow Statistics
Fibre Channel Ping and Fibre Channel Traceroute
Call Home
System Log
Other Serviceability Features
Cisco MDS 9000 Family Software License Packages
Cisco MDS Multilayer Directors
Cisco MDS 9700

Cisco MDS 9500
Cisco MDS 9000 Series Multilayer Components
Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9000 Family Pluggable Transceivers
Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9148
Cisco MDS 9148S
Cisco MDS 9222i
Cisco MDS 9250i
Cisco MDS Fibre Channel Blade Switches
Cisco Prime Data Center Network Manager
Cisco DCNM SAN Fundamentals Overview
DCNM-SAN Server
Licensing Requirements for Cisco DCNM-SAN Server
Device Manager
DCNM-SAN Web Client
Performance Manager
Authentication in DCNM-SAN Client
Cisco Traffic Analyzer
Network Monitoring
Performance Monitoring
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 8 Virtualizing Storage
“Do I Know This Already?” Quiz
Foundation Topics

What Is Storage Virtualization?
How Storage Virtualization Works
Why Storage Virtualization?
What Is Being Virtualized?
Block Virtualization—RAID
Virtualizing Logical Unit Numbers
Tape Storage Virtualization
Virtualizing Storage-Area Networks
N-Port ID Virtualization (NPIV)
N-Port Virtualizer (NPV)
Virtualizing File Systems
File/Record Virtualization
Where Does the Storage Virtualization Occur?
Host-Based Virtualization
Network-Based (Fabric-Based) Virtualization
Array-Based Virtualization
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 9 Fibre Channel Storage-Area Networking
“Do I Know This Already?” Quiz
Foundation Topics
Where Do I Start?
The Cisco MDS NX-OS Setup Utility—Back to Basics
The Power On Auto Provisioning
Licensing Cisco MDS 9000 Family NX-OS Software Features
License Installation
Verifying the License Configuration
On-Demand Port Activation Licensing
Making a Port Eligible for a License
Acquiring a License for a Port
Moving Licenses Among Ports
Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
Configuring Interfaces
Graceful Shutdown
Port Administrative Speeds
Frame Encapsulation
Bit Error Thresholds
Local Switching
Dedicated and Shared Rate Modes
Slow Drain Device Detection and Congestion Avoidance
Management Interfaces
VSAN Interfaces
Verify Initiator and Target Fabric Login
VSAN Configuration
Port VSAN Membership
Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch
About Smart Zoning
About LUN Zoning and Assigning LUNs to Storage Subsystems
About Enhanced Zoning
VSAN Trunking and Setting Up ISL Port
Trunk Modes
Port Channel Modes
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
“Do I Know This Already?” Quiz
Foundation Topics
Challenges of Today’s Data Center Networks
Cisco Unified Fabric Principles
Convergence of Network and Storage
Scalability and Growth
Security and Intelligence

Inter-Data Center Unified Fabric
Fibre Channel over IP
Overlay Transport Virtualization
Locator ID Separation Protocol
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 11 Data Center Bridging and FCoE
“Do I Know This Already?” Quiz
Foundation Topics
Data Center Bridging
Priority Flow Control
Enhanced Transmission Selection
Data Center Bridging Exchange
Quantized Congestion Notification
Fibre Channel Over Ethernet
FCoE Logical Endpoint
FCoE Port Types
FCoE ENodes
Fibre Channel Forwarder
FCoE Encapsulation
FCoE Initialization Protocol
Converged Network Adapter
FCoE Speed
FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
Optical Cabling
Transceiver Modules
SFP Transceiver Modules
SFP+ Transceiver Modules
QSFP Transceiver Modules
Unsupported Transceiver Modules
Delivering FCoE Using Cisco Fabric Extender Architecture
FCoE and Cisco Nexus 2000 Series Fabric Extenders
FCoE and Straight-Through Fabric Extender Attachment Topology
FCoE and vPC Fabric Extender Attachment Topology
FCoE and Server Fabric Extenders

Adapter-FEX
Virtual Ethernet Interfaces
FCoE and Virtual Fibre Channel Interfaces
Virtual Interface Redundancy
FCoE and Cascading Fabric Extender Topologies
Adapter-FEX and FCoE Fabric Extenders in vPC Topology
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 12 Multihop Unified Fabric
“Do I Know This Already?” Quiz
Foundation Topics
First Hop Access Layer Consolidation
Aggregation Layer FCoE Multihop
FIP Snooping
FCoE-NPV
Full Fibre Channel Switching
Dedicated Wire Versus Unified Wire
Cisco FabricPath FCoE Multihop
Dynamic FCoE FabricPath Topology
Dynamic FCoE Virtual Links
Dynamic Virtual Fibre Channel Interfaces
Redundancy and High Availability
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Server Computing in a Nutshell

What Is a Socket?
What Is a Core?
What Is Hyperthreading?
Understanding Server Processor Numbering
Value of Cisco UCS in the Data Center
One Unified System
Unified Fabric
Unified Management and Operations
Cisco UCS Stateless Computing
Intelligent Automation, Cloud-Ready Infrastructure
Describing the Cisco UCS Product Family
Cisco UCS Computer Hardware Naming
Cisco UCS Fabric Interconnects
What Is a Unified Port?
Cisco UCS Fabric Interconnect—Expansion Modules
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 5108 Blade Server Chassis—Power Supply and Redundancy Modes
Cisco UCS I/O Modules (FEX)
Cisco UCS B-series Blade Servers
Cisco UCS B-series Best Practices for Populating DRAM
Cisco Adapters (Mezzanine Cards) for UCS B-series Blade Servers
Cisco Converged Network Adapters (CNA)
Cisco Virtual Interface Cards
Cisco UCS Port Expander for VIC 1240
What Is Cisco Adapter FEX?
What Is Cisco Virtual Machine-Fabric Extender (VM-FEX)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS C-series Rackmount Servers
Cisco UCS C-series Best Practices for Populating DRAM
Cisco Adapters for UCS C-series Rackmount Servers
Cisco UCS C-series RAID Adapter Options
Cisco UCS C-series Virtual Interface Cards and OEM CNA Adapters
What Is SR-IOV (Single Root-IO Virtualization)?
Cisco UCS C-series Network Interface Card
Cisco UCS C-series Host Bus Adapter
What Is N_Port ID Virtualization (NPIV)?
Cisco UCS Storage Accelerator Adapters

Cisco UCS E-series Servers
Cisco UCS “All-in-One”
Cisco UCS Invicta Series—Solid State Storage Systems
Cisco UCS Software
Cisco UCS Manager
Cisco UCS Central
Cisco UCS Director
Cisco UCS Platform Emulator
Cisco goUCS Automation Tool
Cisco UCS Connectivity Architecture
Cisco UCS 5108 Chassis to Fabric Interconnect Physical Connectivity
Cisco UCS C-series Rackmount Server Physical Connectivity
Cisco UCS 6200 Fabric Interconnect to LAN, SAN Connectivity
Ethernet Switching Mode
Fibre Channel Switching Mode
Cisco UCS I/O Module and Fabric Interconnect Connectivity
Cisco UCS I/O Module Architecture
Fabric and Adapter Port Channel Architecture
Cisco Integrated Management Controller Architecture
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 14 Cisco Unified Computing System Manager
“Do I Know This Already?” Quiz
Foundation Topics
Cabling a Cisco UCS Fabric Interconnect HA Cluster
Cisco UCS Fabric Interconnect HA Architecture
Cisco UCS Fabric Interconnect Cluster Connectivity
Initial Setup Script for Primary Fabric Interconnect
Initial Setup Script for Subordinate Secondary Fabric Interconnect
Verify Cisco UCS Fabric Interconnect Cluster Setup
Changing Cluster Addressing via Command-Line Interface
Command Modes
Changing Cluster IP Addresses via Command-Line Interface
Changing Static IP Addresses via Command-Line Interface

Cisco UCS Manager Operations
Cisco UCS Manager Functions
Cisco UCS Manager GUI Layout
Cisco UCS Manager: Navigation Pane—Tabs
Equipment Tab
Servers Tab
LAN Tab
SAN Tab
VM Tab
Admin Tab
Device Discovery in Cisco UCS Manager
Basic Port Personalities in the Cisco UCS Fabric Interconnect
Verifying Device Discovery in Cisco UCS Manager
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Service Profiles
Cisco UCS Hardware Abstraction and Stateless Computing
Contents of a Cisco UCS Service Profile
Other Benefits of Cisco UCS Service Profiles
Ease of Incremental Deployments
Plan and Preconfigure Cisco UCS Solutions
Ease of Server Replacement
Right-Sized Server Deployments
Cisco UCS Templates
Organizational Awareness with Service Profile Templates
Cisco UCS Service Profile Templates
Cisco UCS vNIC Templates
Cisco UCS vHBA Templates
Cisco UCS Logical Resource Pools
UUID Identity Pools
MAC Address Identity Pools
WWN Address Identity Pools

WWNN Pools
WWPN Pools
Cisco UCS Physical Resource Pools
Server Pools
Cisco UCS Server Qualification and Pool Policies
Server Auto-Configuration Policies
Creation of Policies for Cisco UCS Service Profiles and Service Profile Templates
Commonly Used Policies Explained
Equipment Tab—Global Policies
BIOS Policy
Boot Policy
Local Disk Configuration Policy
Maintenance Policy
Scrub Policy
Host Firmware Packages
Adapter Policy
Cisco UCS Chassis and Blade Power Capping
What Is Power Capping?
Cisco UCS Power Capping Strategies
Static Power Capping (Explicit Blade Power Cap)
Dynamic Power Capping (Implicit Chassis Power Cap)
Power Cap Groups (Explicit Group)
Utilizing Cisco UCS Service Profile Templates
Creating Cisco UCS Service Profile Templates
Modifying a Cisco UCS Service Profile Template
Utilizing Cisco UCS Service Profile Templates for Different Workloads
Creating Service Profiles from Templates
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Administration
Organization and Locales

Role-Based Access Control
Authentication Methods
LDAP Providers
LDAP Provider Groups
Domain, Realm, and Default Authentication Settings
Communication Management
UCS Firmware, Backup, and License Management
Firmware Terminology
Firmware Definitions
Firmware Update Guidelines
Host Firmware Packages
Cross-Version Firmware Support
Firmware Auto Install
UCS Backups
Backup Automation
License Management
Cisco UCS Management
Operational Planes
In-Band Versus Out-of-Band Management
Remote Accessibility
Direct KVM Access
Advanced UCS Management
Cisco UCS Manager XML API
goUCS Automation Toolkit
Using the Python SDK
Multi-UCS Management
Cisco UCS Monitoring
Cisco UCS System Logs
Fault, Event, and Audit Logs (Syslogs)
System Event Logs
UCS SNMP
UCS Fault Suppression
Collection and Threshold Policies
Call Home and Smart Call Home
Reference List
Exam Preparation Tasks
Review All Key Topics

Define Key Terms
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
“Do I Know This Already?” Quiz
Foundation Topics
Brief History of Server Virtualization
Server Virtualization Components
Hypervisor
Virtual Machines
Virtual Switching
Management Tools
Types of Server Virtualization
Full Virtualization
Paravirtualization
Operating System Virtualization
Server Virtualization Benefits and Challenges
Server Virtualization Benefits
Server Virtualization Challenges
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 18 Cisco Nexus 1000V and Virtual Switching
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Virtual Switching
Before Server Virtualization
Server Virtualization with Static VMware vSwitch
Virtual Network Components
Virtual Access Layer
Standard VMware vSwitch Overview
Standard VMware vSwitch Operations
Standard VMware vSwitch Configuration
VMware vDS Overview
VMware vDS Configuration

VMware vDS Enhancements
VMware vSwitch and vDS
Cisco Nexus 1000V Virtual Networking Solution
Cisco Nexus 1000V System Overview
Cisco Nexus 1000V Salient Features and Benefits
Cisco Nexus 1000V Series Virtual Switch Architecture
Cisco Nexus 1000V Virtual Supervisory Module
Cisco Nexus 1000V Virtual Ethernet Module
Cisco Nexus 1000V Component Communication
Cisco Nexus 1000V Management Communication
Cisco Nexus 1000V Port Profiles
Types of Port Profiles in Cisco Nexus 1000V
Cisco Nexus 1000V Administrator View and Roles
Cisco Nexus 1000V Verifying Initial Configuration
Cisco Nexus 1000V VSM Installation Methods
Initial VSM Configuration Verification
Verifying VMware vCenter Connectivity
Verify Nexus 1000V Module Status
Cisco Nexus 1000V VEM Installation Methods
Initial VEM Status on ESX or ESXi Host
Verifying VSM Connectivity from vCenter Server
Verifying VEM Agent Running
Verifying VEM Uplink Interface Status
Verifying VEM Parameters with VSM Configuration
Validating VM Port Groups and Port Profiles
Verifying Port Profile and Groups
Key New Technologies Integrated with Cisco Nexus 1000V
What Is the Difference Between VN-Link and VN-Tag?
What Is VXLAN?
What Is vPath Technology?
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part V Review

Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
“Do I Know This Already?” Quiz
Foundation Topics
Server Load-Balancing Concepts
What Is Server Load Balancing?
Benefits of Server Load Balancing
Methods of Server Load Balancing
ACE Family Products
ACE Module
Control Plane
Data Plane
ACE 4710 Appliances
Global Site Selector
Understanding the Load-Balancing Solution Using ACE
Layer 3 and Layer 4 Load Balancing
Layer 7 Load Balancing
TCP Server Reuse
Real Server
Server Farm
Server Health Monitoring
In-band Health Monitoring
Out-of-band Health Monitoring (Probes)
Load-Balancing Algorithms (Predictor)
Stickiness
Sticky Groups
Sticky Methods
Sticky Table
Traffic Policies for Server Load Balancing
Class Maps
Policy Map
Configuration Steps
ACE Deployment Modes
Bridge Mode
Routed Mode
One-Arm Mode
ACE Virtualization

Contexts
Domains
Role-Based Access Control
Resource Classes
ACE High Availability
Management of ACE
Remote Management
Simple Network Management Protocol
XML Interface
ACE Device Manager
ANM
Global Load Balancing Using GSS and ACE
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 20 Cisco WAAS and Application Acceleration
“Do I Know This Already?” Quiz
Foundation Topics
Wide-Area Network Optimization
Characteristics of Wide-Area Networks
Latency
Bandwidth
Packet Loss
Poor Link Utilization
What Is WAAS?
WAAS Product Family
WAVE Appliances
Router-Integrated Form Factors
Virtual Appliances (vWAAS)
WAAS Express (WAASx)
WAAS Mobile
WAAS Central Manager
WAAS Architecture
WAAS Solution Overview
Key Features and Benefits of Cisco WAAS

Cisco WAAS WAN Optimization Techniques
Transport Flow Optimization
TCP Window Scaling
TCP Initial Window Size Maximization
Increased Buffering
Selective Acknowledgement
Binary Increase Congestion TCP
Compression
Data Redundancy Elimination
LZ Compression
Application-Specific Acceleration
Virtualization
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Advice About the Exam Event
Learn the Question Types Using the Cisco Certification Exam Tutorial
Think About Your Time Budget Versus the Number of Questions
A Suggested Time-Check Method
Miscellaneous Pre-Exam Suggestions
Exam-Day Advice
Exam Review
Take Practice Exams
Practicing Taking the DCICT Exam
Advice on How to Answer Exam Questions
Taking Other Practice Exams
Find Knowledge Gaps Through Question Review
Practice Hands-On CLI Skills
Other Study Tasks
Final Thoughts
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions

Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Icons Used in This Book

Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:
Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
Italic indicates arguments for which you supply actual values.
Vertical bars (|) separate alternative, mutually exclusive elements.
Square brackets ([ ]) indicate an optional element.
Braces ({ }) indicate a required choice.
Braces within brackets ([{ }]) indicate a required choice within an optional element.
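To see how these conventions combine, consider a short hypothetical syntax line (an illustrative sketch assembled for this explanation, not an entry copied from an actual command reference):

show interface {ethernet slot/port | port-channel channel-number} [brief]

Read with the conventions above: show, interface, ethernet, port-channel, and brief are keywords entered literally as shown; slot/port and channel-number are arguments for which you supply actual values; the braces and vertical bar mark a required choice between mutually exclusive elements; and the square brackets mark an optional element.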

Introduction

About the Exam

Congratulations! If you are reading far enough to look at this book’s Introduction, you’ve probably already decided to pursue your Cisco CCNA Data Center certification. Cisco dominates the networking marketplace, and after a few short years of entering the server marketplace, Cisco has achieved significant market share and has become one of the primary vendors for server hardware. If you want to succeed as a technical person in the networking industry in general, and in data centers in particular, you need to know Cisco. Getting your CCNA Data Center certification is a great first step in building your skills and becoming a recognized authority in the data center field.

Exams That Help You Achieve CCNA Data Center Certification

Cisco CCNA Data Center is an entry-level Cisco data center certification that is also a prerequisite for other Cisco Data Center certifications. CCNA Data Center itself has no other prerequisites. To achieve the CCNA Data Center certification, you must pass two exams: 640-911 Introduction to Cisco Data Center Networking (DCICN) and 640-916 Introduction to Cisco Data Center Technologies (DCICT), as shown in Figure I-1.

Figure I-1 Path to Cisco CCNA Data Center Certification

The DCICN and DCICT exams differ quite a bit in terms of the topics covered. DCICN focuses on networking technology. In fact, it overlaps quite a bit with the topics in the ICND1 100-101 exam, which leads to the Cisco Certified Entry Network Technician (CCENT) certification. DCICN explains the basics of networking, focusing on Ethernet switching and IP routing. The only data center focus in the DCICN exam is that all the configuration and verification examples use Cisco Nexus data center switches.

The DCICT exam instead focuses on technologies specific to the data center. These technologies include storage networking, Unified Fabric, Unified Computing (the term used by Cisco for its server products and services), network services, and data networking features unique to the Cisco Nexus series of switches.

Types of Questions on the Exams

Cisco certification exams follow the same general format. At the testing center, you sit in a quiet room in front of the PC. Before the exam timer begins, you can complete a few other tasks on the PC; for example, you can take a sample quiz just to get accustomed to the PC and the testing engine. Anyone who has basic skills in getting around a PC should have no problems with the testing environment.

After the exam starts, you will be presented with a series of questions, one at a time, on the PC screen. The questions typically fall into one of the following categories:

Multiple choice, single answer
Multiple choice, multiple answers
Testlet
Drag-and-drop (DND)
Simulated lab (sim)
Simlet

The first three items in the list are all multiple-choice questions. The multiple-choice format requires you to point to and click a circle or square beside the correct answer(s). Cisco traditionally tells you how many answers you need to choose, and the testing software prevents you from choosing too many answers.

The testlet asks you several multiple-choice questions, all based on the same larger scenario. DND questions require you to move some items around in the graphical user interface (GUI). You left-click the mouse to hold the item, move it to another area, and release the mouse button to place the item in its destination (usually into a list). For some questions, to get the question correct, you might need to put a list of items in the proper order or sequence.

The last two types, sim and simlet questions, both use a network simulator to ask questions. Interestingly, the two types enable Cisco to assess two very different skills. First, sim questions generally describe a problem, while your task is to configure one or more routers and switches to fix it. The exam then grades the question based on the configuration you changed or added. Basically, these questions begin with a broken configuration, and you must fix it to answer the question correctly.

Simlet questions also use a network simulator, but instead of answering the question by changing or adding the configuration, they include one or more multiple-choice questions. These questions require you to use the simulator to examine network behavior by interpreting the output of show commands you decide to leverage to answer the question. Whereas sim questions require you to troubleshoot problems related to configuration, simlets require you to analyze both working and broken networks, correlating show command output with your knowledge of networking theory and configuration commands.

You can watch and even experiment with these command types using the Cisco Exam Tutorial. To find the Cisco Certification Exam Tutorial, go to www.cisco.com and search for “exam tutorial.”

What’s on the DCICT Exam?

Everyone wants to know what is on the test, for any test, ever since the early days of school. Cisco openly publishes the topics of each of its certification exams. Cisco wants the candidates to know the variety of topics and get an idea about the kinds of knowledge and skills required for each topic.

Cisco’s posted exam topics, however, are only guidelines. Cisco’s disclaimer language mentions that fact. Cisco makes an effort to keep the exam questions within the confines of the stated exam topics, and we know from talking to those involved that every question is analyzed for whether it fits the stated exam topic.

Exam topics are very specific, and the verb used in their description is very important. The verb tells us to what degree the topic must be understood and what skills are required. For example, one topic might begin with “Describe...”, another with “Configure...”, another with “Verify...”, and another with “Troubleshoot...”. Questions beginning with “Troubleshoot” require the highest skills level, because to troubleshoot, you must understand the topic, be able to configure it (to see what’s wrong with the configuration), and be able to verify it (to find the root cause of the problem). Pay attention to the question verbiage.

Over time, Cisco has begun making two stylistic improvements to the posted exam topics. In the past, the topics were simply listed as bullets with indentation to imply subtopics under a major topic. More often today, Cisco also numbers the exam topics, making it easier to refer to specific ones. Additionally, Cisco lists the weighting for each of the major topic headings. The weighting tells the percentage of points from your exam, which should come from each major topic area.

DCICT 640-916 Exam Topics

The exam topics for both the DCICN and DCICT exams can be easily found at Cisco.com by searching. Alternatively, you can go to www.cisco.com/go/ccna, which gets you to the page for CCNA Routing and Switching, where you can easily navigate to the nearby CCNA Data Center page.

The DCICT contains six major headings with their respective weighting, shown in Table I-1. All six topic areas hold enough weighting so that if you completely ignore an individual topic, you probably will not pass. Furthermore, data center technologies require you to put many concepts together, so you need all the pieces before you can understand the holistic view. Note that while the weighting of each topic area tells you something about the exam, in the authors’ opinion, the weighting probably does not change how you study. The weighting might indicate where you should spend a little more time during the last days before taking the exam, but otherwise, plan to study all the exam topics.

Table I-1 Six Major Topic Areas in the DCICT 640-916 Exam

Tables I-2 through I-7 list the details of the exam topics, with one table for each of the major topic areas listed in Table I-1. Note that these tables also list the book chapters that discuss each of the exam topics.

Table I-2 Exam Topics in the First Major DCICT Exam Topic Area

Table I-3 Exam Topics in the Second Major DCICT Exam Topic Area
Table I-4 Exam Topics in the Third Major DCICT Exam Topic Area
Table I-5 Exam Topics in the Fourth Major DCICT Exam Topic Area
Table I-6 Exam Topics in the Fifth Major DCICT Exam Topic Area

Table I-7 Exam Topics in the Sixth Major DCICT Exam Topic Area

Note: Because it is possible that the exam topics may change over time, it might be worth the time to double-check the exam topics as listed on the Cisco website (www.cisco.com/go/certifications and navigate to the CCNA Data Center page).

About the Book

This book discusses the content and skills needed to pass the 640-916 DCICT certification exam, which is the second and final exam to achieve CCNA Data Center certification. This book’s companion title, CCNA Data Center DCICN 640-911 Official Cert Guide, discusses the content needed to pass the 640-911 DCICN certification exam. We strongly recommend that you plan and structure your learning to align with both exam requirements.

Book Features

The most important and somewhat obvious objective of this book is to help you pass the DCICT exam and help you achieve the CCNA Data Center certification. In fact, if the primary objective of this book were different, the book’s title would be misleading! At the same time, the methods used in this book to help you pass the exam are also designed to make you much more knowledgeable in the general field of the data center and help you in your daily job responsibilities. CCNA entry-level certification is the foundation for many of the Cisco professional-level certifications.

Importantly, this book does not try to help you pass the exams only by memorization, but by truly learning and understanding the topics, and it would be a disservice to you if this book did not help you truly learn the material. This book uses several tools to help you discover your weak topic areas, to help you improve your knowledge and skills with those topics, and to prove that you have retained your knowledge of those topics. This book helps you pass the CCNA exam by using the following methods:

Helping you discover which exam topics you have not mastered
Providing explanations and information to fill in your knowledge gaps
Supplying exercises that enhance your ability to grasp topics and deduce the answers to subjects related to the exam
Providing practice exercises on the topics and the testing process via test questions on the CD

Chapter Features

To help you customize study time using this book, the core chapters have several features that help you make the best use of your time:

“Do I Know This Already?” Quizzes: Each chapter begins with a quiz that helps you determine the amount of time you need to spend studying the chapter.

Foundation Topics: These are the core sections of each chapter. They explain the protocols, concepts, and configuration for the topics in the chapter.

Exam Preparation Tasks: At the end of the “Foundation Topics” section of each chapter, the “Exam Preparation Tasks” section lists a series of study activities that should be completed at the end of the chapter. Each chapter includes the activities that make the most sense for studying the topics in that chapter. The activities include the following:

Review Key Topics: The Key Topic icon is shown next to the most important items in the “Foundation Topics” section of the chapter. The “Review Key Topics” activity lists the key topics from the chapter and their corresponding page numbers. Although the content of the entire chapter could appear on the exam, you should definitely know the information listed in each key topic.

Complete Tables and Lists from Memory: To help you exercise your memory and memorize certain lists of facts, many of the more important lists and tables from the chapter are included in a document on the CD. This document lists only partial information, allowing you to complete the table or list.

Define Key Terms: Although the exam might be unlikely to ask a question such as, “Define this term,” the CCNA exams require that you learn and know a lot of networking terminology. This section lists the most important terms from the chapter, asking you to write a short definition and compare your answer to the Glossary at the end of this book.

References: Some chapters contain a list of reference links for additional information and details on the topics discussed in that particular chapter.

Part Review

The part review tasks help you prepare to apply all the concepts you learned in that part of the book. Each book part contains several related chapters. The part reviews list tasks, along with checklists, so that you can track your progress. The following list explains the most common tasks you will see in the part review:

Review “Do I Know This Already?” (DIKTA) Questions: Although you have already seen the DIKTA questions from the chapters in a part, answering those questions again can be a useful way to review facts. The “Part Review” section suggests that you repeat the DIKTA questions, but use the Pearson IT Certification Practice Test (PCPT) exam software that comes with the book, for extra practice in answering multiple-choice questions on a computer.

with the book, for extra practice in answering multiple-choice questions on a computer.

Answer Part Review Questions: The PCPT exam software includes several exam databases. One exam database holds part review questions, written specifically for part reviews. These questions purposefully include multiple concepts in each question, sometimes from multiple chapters, to help build the skills needed for the more challenging analysis questions on the exams.

Review Key Topics: Yes, again! They are indeed the most important topics in each chapter.

Open Self-Assessment Questions: The exam is unlikely to ask a question such as, “Define this term,” but the CCNA exams require that you learn and know a lot of technology concepts and architectures. This section asks you some open questions that you should try to describe or explain in your own words. This will help you develop a thorough understanding of important exam topics pertaining to that part.

Final Prep Tasks

Chapter 21, “Final Review,” lists a series of tasks that you can use for your final preparation before taking the exam.

Other Features

In addition to the features in each of the core chapters, this book, as a whole, has additional study resources, including the following:

CD-based practice exam: The companion CD contains the powerful PCPT exam engine. You can answer the questions in study mode or take simulated DCICT exams with the CD and activation code included in this book.

Companion website: The website www.ciscopress.com/title/9781587144226 posts up-to-the-minute material that further clarifies complex exam topics. Check this site regularly for new and updated postings written by the authors that provide further insight into the more troublesome topics on the exam.

eBook: If you are interested in obtaining an e-book version of this title, we have included a special offer on a coupon card inserted in the CD sleeve in the back of the book. This offer enables you to purchase the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition e-book and practice test at a 70 percent discount off the list price. In addition to three versions of the e-book (PDF for reading on your computer; EPUB for reading on your tablet, mobile device, or Nook or other e-reader; and Mobi, the native Kindle version), you will receive additional practice test questions and enhanced practice test features.

PearsonITCertification.com: The website www.pearsonitcertification.com is a great resource for all things IT-certification related. Check out the great CCNA articles, videos, blogs, and other certification preparation tools from the industry’s best authors and trainers.

Book Organization, Chapters, and Appendixes

This book contains 20 core chapters, Chapters 1 through 20, with Chapter 21 including some suggestions for how to approach the actual exams. Each core chapter covers a subset of the topics on the 640-916 DCICT exam. The core chapters are organized into sections. The core chapters cover the following topics:

Part I: Data Center Networking

Chapter 1, “Data Center Network Architecture”: This chapter provides an overview of the data center networking architecture and design practices relevant to the exam. It goes into the detail of multilayer data center network design and technologies, such as port channel, virtual port channel, and Cisco FabricPath. Basic configuration and verification commands for these technologies are also included in this chapter.

Chapter 2, “Cisco Nexus Product Family”: This chapter provides an overview of the Nexus switches product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. The chapter explains the different specifications of each product.

Chapter 3, “Virtualizing Cisco Network Devices”: This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000, using Virtual Device Contexts (VDC) and Network Interface Virtualization (NIV). It details the VDC concept and the VDC deployment scenarios. The different VDC types and commands used to configure and verify the setup are also included in the chapter. Also covered in this chapter is the NIV: what it is, how it works, and how it is configured.

Chapter 4, “Data Center Interconnect”: This chapter covers the latest Cisco innovations in the data center extension solutions and in LAN extension in particular. It goes into detail about the LAN extension solution called Overlay Transport Virtualization (OTV): what it is, how it works, and how it is deployed. Commands to configure and how to verify the operation of OTV are also covered in this chapter.

Chapter 5, “Management and Monitoring of Cisco Nexus Devices”: This chapter provides an overview of operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. This chapter also identifies the mechanism available in the Nexus platform to protect the control plane of the switch. The Nexus platform provides several methods for device configuration and management. These methods are discussed with some important commands for initial setup, configuration, and verification. It also provides an overview of out-of-band and in-band management interfaces.

Part II: Data Center Storage-Area Networking

Chapter 6, “Data Center Storage Architecture”: This chapter provides an overview of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, network-attached storage (NAS) connectivity for remote server storage, and storage-area network (SAN). It covers Fibre Channel protocol and operations in detail. The edge/core layer of the SAN design is also included.

Chapter 7, “Cisco MDS Product Family”: This chapter provides an overview of the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. This chapter goes directly into the key concepts of the Cisco Storage Networking product family, and the focus is on the current generation of modules and storage services solutions.

Chapter 8, “Virtualizing Storage”: Storage virtualization is the process of combining multiple

network storage devices into what appears to be a single storage unit. Server virtualization is partitioning a physical server into smaller virtual servers. Network virtualization is using network resources through a logical segmentation of a single physical network. Application virtualization is decoupling the application and its data from the operating system. This chapter goes directly into the key concepts of storage virtualization, guiding you through storage virtualization concepts and answering basic what, why, and how questions.

Chapter 9, “Fibre Channel Storage-Area Networking”: This chapter provides an overview of how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. It also covers the power-on auto provisioning (POAP) and setting up an ISL port using the command-line interface. It also describes how to verify virtual storage-area networks (VSAN), the fabric login, fabric domain, VSAN trunking, and zoning, and it explains how to monitor and verify this process. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

Part III: Data Center Unified Fabric

Chapter 10, “Unified Fabric Overview”: This chapter provides an overview of challenges faced by today’s data centers. It focuses on how Cisco Unified Fabric architecture addresses those challenges by converging traditionally disparate network and storage environments while providing a platform for scalable, secure, and intelligent services. It also takes a look at some of the technologies allowing extension of the Unified Fabric environment beyond the boundary of a single data center.

Chapter 11, “Data Center Bridging and FCoE”: This chapter introduces principles behind IEEE data center bridging standards and explains how they enhance traditional Ethernet environments to support convergence of network and storage traffic over a single Unified Fabric. It also takes a look at different cabling options to support a converged Unified Fabric environment.

Chapter 12, “Multihop Unified Fabric”: This chapter discusses various options for multihop FCoE topologies to extend the reach of Unified Fabric beyond the single-hop boundary. It discusses FCoE protocol and its operation over numerous switching topologies leveraging a variety of Cisco Fabric Extension technologies. It also discusses the use of converged Unified Fabric leveraging Cisco FabricPath technology. It takes a look at characteristics of each option, as well as their benefits and challenges.

Part IV: Cisco Unified Computing

Chapter 13, “Cisco Unified Computing System Architecture”: This chapter begins with a quick fly-by on the evolution of server computing, followed by an introduction to the Cisco UCS value proposition, hardware, and software portfolio. Then the chapter explains UCS architecture in terms of component connectivity options and unification of blade and rackmount server connectivity. It also details the Cisco Integrated Management Controller architecture and purpose.

Chapter 14, “Cisco Unified Computing System Manager”: This chapter starts by describing how to set up, configure, and verify the Cisco UCS Fabric Interconnect cluster. It then describes the process of hardware and software discovery in Cisco UCS.

Chapter 15,

“Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles”: The chapter starts by explaining the hardware abstraction layer in more detail and how it relates to stateless computing, followed by explanations of logical and physical resource pools and the essentials to create templates and aid rapid deployment of service profiles.

Chapter 16, “Administration, Management, and Monitoring of Cisco Unified Computing System”: This chapter starts by explaining some of the important features used when administering and monitoring Cisco UCS. It also gives you an introduction to Cisco UCS XML, the goUCS automation toolkit, and the well-documented UCS XML API using the Python SDK. As a bonus, you also see notes, tips, and most relevant features.

Part V: Data Center Server Virtualization

Chapter 17, “Server Virtualization Solutions”: This chapter takes a peek into the history of server virtualization and discusses fundamental principles behind different types of server virtualization technologies. It evaluates the benefits and challenges of server virtualization while offering approaches to mitigate performance and security concerns.

Chapter 18, “Cisco Nexus 1000V and Virtual Switching”: The chapter starts by describing the challenges of current virtual switching layers in data centers, and then introduces the distributed virtual switches and, in particular, the Cisco vision: the Cisco Nexus 1000V. The chapter explains installation options, commands to verify initial configuration of virtual Ethernet modules, virtual supervisory modules, and integration with VMware vCenter Server.

Part VI: Data Center Network Services

Chapter 19, “Cisco WAAS and Application Acceleration”: This chapter gives you an overview of the challenges associated with a wide-area network. The methods used to optimize the WAN traffic are also discussed in detail. It explains the Cisco WAAS solution and describes the concepts and features that are important for the DCICT exam. Cisco supports a range of hardware and software platforms that are suitable for various wide-area network optimization and application acceleration requirements. An overview of these products is also provided in this chapter.

Chapter 20, “Cisco ACE and GSS”: This chapter gives you an overview of server load balancing concepts required for the DCICT exam, which helps in choosing the correct solution for the load balancing requirements in the data center. It describes Cisco Application Control Engine (ACE) and Global Site Selector (GSS) products and their key features. It also details different deployment modes of ACE and their application within the data center design. GSS and ACE integration is also explained using an example.

Part VII: Final Preparation

Chapter 21, “Final Review”: This chapter suggests a plan for exam preparation after you have finished the core parts of the book, in particular explaining the many study options available in the book.

Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes”: Includes the answers to all the questions from Chapters 1 through 20.

The Glossary contains definitions for all the terms listed in the “Definitions of Key Terms”

section at the conclusion of Chapters 1 through 20.

Appendixes On the CD

Appendix B, “Memory Tables”: Holds the key tables and lists from each chapter, with some of the content removed. You can print this appendix and, as a memory exercise, complete the tables and lists. The goal is to help you memorize facts that can be useful on the exams.

Appendix C, “Memory Tables Answer Key”: Contains the answer key for the exercises in Appendix B.

Appendix D, “Study Planner”: A spreadsheet with major study milestones enabling you to track your progress through your study.

Reference Information

This short section contains a few topics available for reference elsewhere in the book. You may read these when you first use the book, but you may also skip these topics and refer to them later. In particular, make sure to note the final page of this introduction, which lists several contact details, including how to get in touch with Cisco Press.

Install the Pearson IT Certification Practice Test Engine and Questions

The CD in the book includes the Pearson IT Certification Practice Test (PCPT) engine (software that displays and grades a set of exam-realistic multiple-choice, fill-in-the-blank, drag-and-drop, and testlet questions). Using the PCPT engine, you can either study by going through the questions in study mode, or you can take a simulated DCICT exam that mimics real exam conditions.

The installation process requires two major steps. The CD in the back of this book has a recent copy of the PCPT engine, including the exam engine. The practice exam (the database of DCICT exam questions) is not on the CD. After you install the software, the PCPT software will use your Internet connection and download the latest versions of both the software and the question databases for this book.

Note
The cardboard CD case in the back of this book includes both the CD and a piece of thick paper. The paper lists the activation code for the practice exam associated with this book. Do not lose the activation code.

Note
Also on this same piece of paper, on the opposite side from the exam activation code, you will find a one-time-use coupon code that will give you 70% off the purchase of the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition eBook and Practice Test.

Install the Software from the CD

The software installation process is routine as compared with other software installation processes. If you have already installed the PCPT software from another Pearson product, you do not need to reinstall the software. Simply launch the software on your desktop and proceed to activate the practice exam from this book by using the activation code included in the CD sleeve. The following steps outline the installation process:

Step 1. Insert the CD into your PC.
Step 2. The software that automatically runs is the Cisco Press software to access and use all CD-based features, including the exam engine. From the main menu, click the option to Install the Exam Engine.
Step 3. Respond to windows prompts as with any typical software installation process. The installation process gives you the option to activate your exam with the activation code supplied

on the paper in the CD sleeve.

Activate and Download the Practice Exam

When the exam engine is installed, you should then activate the exam associated with this book (if you did not do so during the installation process) as follows:

Step 1. Start the PCPT software from the Windows Start menu or from your desktop shortcut icon.
Step 2. To activate and download the exam associated with this book, from the My Products or Tools tab, click the Activate button.
Step 3. At the next screen, enter the activation key from the paper inside the cardboard CD holder in the back of the book. Click Next.
Step 4. The activation process will download the practice exam. Click Next, and then click Finish.

This process requires that you establish a Pearson website login. You will need this login to activate the exam, so please do register when prompted. If you already have a Pearson website login, there is no need to register again. Just use your existing login.

When the activation process completes, the My Products tab should list your new exam. If you do not see the exam, make sure that you have selected the My Products tab on the menu. At this point, the software and practice exam are ready to use. Just select the exam and click the Open Exam button.

To update a particular product’s exams that you have already activated and downloaded, select the Tools tab and click the Update Products button. Updating your exams will ensure that you have the latest changes and updates to the exam data. If you want to check for updates to the PCPT software, select the Tools tab and click the Update Application button. This will ensure that you are running the latest version of the software engine.

Activating Other Products

The exam software installation process and the registration process have to happen only once. Then for each new product, only a few steps are required. For instance, if you buy another new Cisco Press Official Cert Guide or Pearson IT Certification Cert Guide, extract the activation code from the CD sleeve in the back of that book; you do not even need the CD at this point. From there, all you have to do is start PCPT (if not still up and running) and perform Steps 2 through 4 from the previous list.

PCPT Exam Databases with This Book

This book includes an activation code that allows you to load a set of practice questions. The questions come in different exams or exam databases. When you install the PCPT software and type in the activation code, the PCPT software downloads the latest version of all these exam databases. With the DCICT book alone, you get four different “exams,” or four different sets of questions. You can choose to use any of these exam databases at any time, both in study mode and practice exam mode. However, many people find it best to save some of the exams until exam review time, after they have finished reading the entire book. Here is a suggested study plan:

During part review, use PCPT to review the DIKTA questions for that part, using study mode.
During part review, use the questions built specifically for part review (the part review questions) for that part of the book, using study mode.
Save the remaining exams to use with Chapter 21, “Final Review,” using practice exam mode.

The two modes inside PCPT give you better options for study versus practicing a timed exam event.

In study mode, you can see the answers immediately, and you can choose a subset of the questions in an exam database; for instance, you can view questions from the chapters in only one part of the book, so you can study the topics more easily. Practice exam mode creates an event somewhat like the actual exam. It gives you a preset number of questions, from all chapters, with a timer event. Practice exam mode also gives you a score for that timed event.

How to View Only DIKTA Questions by Part

Each “Part Review” section asks you to repeat the Do I Know This Already (DIKTA) quiz questions from the chapters in that part. Although you could simply scan the book pages to review these questions, it is slightly better to review these questions from inside the PCPT software, just to get a little more practice in how to read questions from the testing software. To view these DIKTA (book) questions inside the PCPT software, follow these steps:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window should list some exams. Select the box beside DCICT Book Questions, and deselect the other boxes. This selects the “book” questions (that is, the DIKTA questions from the beginning of each chapter).
Step 4. On this same window, click at the bottom of the screen to deselect all objectives (chapters), and then select the box beside each chapter in the part of the book you are reviewing.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

How to View Only Part Review Questions by Part

The exam databases you get with this book include a database of questions created solely for study during the part review process. DIKTA questions focus more on facts, with basic application. The part review questions instead focus more on application and look more like real exam questions. To view these questions, follow the same process as you did with DIKTA/book questions, but select the part review database instead of the book database, as follows:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window that appears should list some exams. Select the box beside Part Review Questions, and deselect the other boxes. This selects the questions intended for part-ending review.
Step 4. On this same window, click at the bottom of the screen to deselect all objectives, and then select (check) the box beside the book part you want to review. This tells the PCPT software to give you part review questions from the selected part.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

For More Information

If you have any comments about the book, submit them via www.ciscopress.com. Just go to the website, select Contact Us, and type your message.

Cisco might make changes that affect the CCNA data center certification from time to time. You should always check www.cisco.com/go/certification for the latest details.

The CCNA Data Center DCICT 640-916 Official Cert Guide helps you attain the CCNA data center certification. This is the DCICT exam prep book from the only Cisco-authorized publisher. We at Cisco Press believe that this book certainly can help you achieve CCNA data center certification, but the real work is up to you! We trust that your time will be well spent.

Getting Started

It might look like this is yet another certification exam preparation book; indeed, it is. We welcome your decision to become a Cisco Certified Network Associate Data Center professional! You are probably wondering how to get started, and today you are making a very significant step in achieving your goal. A team of five data center professionals came together to carefully pick the content for this book to send you on your CCNA data center certification journey. This “Getting Started” chapter guides you through building a strategy toward success. Take time to review it before starting your CCNA data center certification journey.

A Brief Perspective on the CCNA Data Center Certification Exam

Cisco sets the bar somewhat high for passing its certification exams, including all the exams related to CCNA certifications, such as CCNA data center. These exams challenge you from many angles. You will need certain analytical problem-solving skills, such as your ability to analyze, understand the relevant technology landscape, and analyze emerging technology trends. The CCNA data center exam puts your knowledge in network, storage, compute, virtualization, and security to the test. Many questions in the CCNA data center certification exam apply to connectivity between the data center devices. For example, a question might give you a data center topology diagram and then ask what needs to be configured to make that setup work, what needs to be considered when designing that topology, what is the missing ingredient of the solution, and so on. Such skills require you to prepare by doing more than just reading and memorizing this book’s material.

Suggestions for How to Approach Your Study with This Book

Although Cisco certification exams are challenging, many people pass them every day. So, what do you need to do to be ready to pass the exam? We encourage you to build your theoretical knowledge by thoroughly reading this book and taking time to remember the most critical facts. At the same time, we also encourage you to develop your practical knowledge by having hands-on experience in the topics covered in the CCNA data center certification exam. This book offers a solid foundation for the knowledge required to pass the CCNA data center certification exam.

Strategizing about your studying is a key element to your success. Spend time thinking about key milestones you want to reach, and plan accordingly to review material for exam topics you are less comfortable with. Think about studying for the CCNA data center certification exam as preparing for a marathon.
What I Talk About When I Talk About Running is a memoir by Haruki Murakami in which he writes about his interest and participation in long-distance running. Murakami started running in the early 1980s; since then, he has competed in more than 20 marathons and an ultramarathon. On running as a metaphor, he wrote in his book: “For me, running is both exercise and a metaphor. Running day after day, piling up the races, bit-by-bit I raise the bar, and by clearing each level I elevate myself. At least that’s why I’ve put in the effort day after day: to raise my own level. I’m no great runner, by any means. I’m at an ordinary—or perhaps more like mediocre—level. But that’s not the point. The point is whether or not I improved over yesterday. In long-distance running the only opponent you have to

beat is yourself, the way you used to be.” Similar principles apply here. It requires a bit of practice every day. Plan your time accordingly and make progress every day. Remember, the only opponent you have to beat is yourself.

This Book Is Compiled from 20 Short Read-and-Review Sessions

First, look at your study as a series of read-and-review tasks, each on a relatively small set of related topics. This book has 20 content chapters covering the topics you need to know to pass the exam. These topics are structured to cover the main data center infrastructure technologies, such as Network, Compute, Server Virtualization, and Storage Area Networks in a Data Center. This book organizes the content into topics of a more manageable size to give you something more digestible to build your knowledge. Each chapter consists of numerous “Foundation Topics” sections and “Exam Preparation Tasks” at the end. Take time to review this book chapter by chapter and topic by topic. Each chapter marks key topics you should pay extra attention to. Review those sections carefully and make sure you are thoroughly familiar with their content.

Practice, Practice, Practice—Did I Mention: Practice?

Second, plan to use the practice tasks at the end of each chapter. Do not put off using these tasks until later! The chapter-ending “Exam Preparation Tasks” section helps you with the first phase of deepening your knowledge and skills of the key topics, remembering terms, and linking the concepts in your brain so that you can remember how it all fits together. Doing these tasks, and doing them at the end of the chapter, really helps you get ready. The following list describes the majority of the activities you will find in “Exam Preparation Tasks” sections:

Review key topics
Complete memory tables
Define key terms

Approach each chapter with the same plan. DIKTA is a self-assessment quiz appearing at the beginning of each content chapter. Based on your score in the “Do I Know This Already?” (DIKTA) quiz, you can choose to read the entire core (Foundation Topics) section of the chapter or just skim the chapter. Remember, regardless of whether you skim or read thoroughly, do the study tasks in the “Exam Preparation Tasks” section at the end of the chapter. Table 1 shows the suggested study approach for each content chapter.

Table 1 Suggested Study Approach for the Content Chapter

In Each Part of the Book You Will Hit a New Milestone

Third, view the book as having six major milestones, one for each major topic.

Beyond the more obvious organization into chapters, this book also organizes the chapters into six major topic areas called book parts. Completing each part means that you have completed a major area of study. At the end of each part, take extra time to complete the “Part Review” tasks and ask yourself where your weak and strong areas are. Consider setting a goal date for finishing each part of the book and reward yourself for achieving this goal! Plan breaks (some personal time out) to get refreshed and motivated for the next part. Acknowledge yourself for completing major milestones. Table 2 lists the six parts in this book.

Table 2 Six Major Milestones: Book Parts

Note that the “Part Review” directs you to use the Pearson Certification Practice Test (PCPT) software to access the practice questions. Each “Part Review” instructs you to repeat the DIKTA questions while using the PCPT software. It also instructs you how to access a specific set of questions reserved for reviewing concepts at the “Part Review.” Note that the PCPT software and exam databases included with this book also give you the rights to additional questions. Chapter 21, “Final Review,” gives some recommendations on how to best use those questions for your final exam preparation.

Use the Final Review Chapter to Refine Skills

Fourth, do the tasks outlined at the end of the book in Chapter 21. Chapter 21 has two major goals. First, it helps you further develop the analytical skills you need to answer the more complicated questions in the exam. Many questions require that you cross-connect ideas about design, configuration, verification, and troubleshooting. More reading will not necessarily develop these skills; this chapter’s tasks give you activities to make further progress. Second, this final element gives you repetition with high-challenge exam questions, uncovering any gaps in your knowledge. Many of the questions are purposely designed to test your knowledge in areas where the most common mistakes and misconceptions occur, helping you avoid some of the pitfalls people experience with the actual exam. The tasks in the chapter also help you find your weak areas.

Set Goals and Track Your Progress

Finally, before you start reading the book, take the time to make a plan, set some goals, and be ready to track your progress. Although making lists of tasks might or might not appeal to you, depending on your personality, goal setting can help everyone studying for these exams. To set the goals, you need to know what tasks you plan to do. The list of tasks does not have to be very detailed, such as putting down every single task in the “Exam Preparation Tasks” section, the “Part Review” tasks section, or the “Final Review

” chapter. Instead, listing only the major tasks can be sufficient. You should track at least two tasks for each typical chapter: reading the “Foundation Topics” section and doing the “Exam Preparation Tasks” section at the end of the chapter. Of course, do not forget to list tasks for “Part Reviews” and the “Final Review.” When setting your goals, think about how fast you read and the length of each chapter’s “Foundation Topics” section, as listed in the Table of Contents. Pick reasonable dates that you can meet. Table 3 shows a sample of a task planning table for Part I of this book.

Table 3 Sample Excerpt from a Planning Table

Use your goal dates as a way to manage your study, and do not get discouraged if you miss a date. If you finish a task sooner than planned, move up the next few goal dates. If you miss a few dates, do not start skipping the tasks listed at the ends of the chapters! Instead, think about what is impacting your schedule (family commitments, work commitments, and so on) and either adjust your goals or work a little harder on your study.

Other Small Tasks Before Getting Started

You will need to complete a few overhead tasks, such as install software, find some PDFs, and so on. You can complete these tasks now or do them in your spare time when you need a study break during the first few chapters of the book. Do not delay, because should you encounter software installation problems, you have sufficient time to resolve them before you need the tool.

Register (for free) at the Cisco Learning Network (CLN, http://learningnetwork.cisco.com) and join the CCNA data center study group. This mailing list allows you to lurk and participate in discussions about CCNA data center topics, for both the DCICN and DCICT exams. Register, join the group, and set up an e-mail filter to redirect the messages to a separate folder. Even if you do not spend time reading all the posts yet, later, when you have time to read, you can browse through the posts to find

relevant topics. You can also search the posts from the CLN website. There are many recorded CCNA data center sessions from the technology experts, which you can listen to as much as you want. All the recorded sessions last a maximum of one hour.

Install the PCPT exam software and activate the exams. For more details on how to load the software, refer to the “Introduction,” under the heading “Install the Pearson IT Certification Practice Test Engine and Questions.”

Keep calm, and enjoy the ride!

Part I: Data Center Networking

Exam topics covered in Part I:
Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath
Describe the Cisco Nexus Product Family
Describe FEX Products
Describe Device Virtualization
Identify Key Differentiator Between DCI and Network Interconnectivity
Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Chapter 1: Data Center Network Architecture
Chapter 2: Cisco Nexus Product Family
Chapter 3: Virtualizing Cisco Network Devices
Chapter 4: Data Center Interconnect
Chapter 5: Management and Monitoring of Cisco Nexus Devices

Chapter 1. Data Center Network Architecture

This chapter covers the following exam topics:
Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath

Data centers are designed to host critical computing resources in a centralized place. These resources can be utilized from a campus building of an enterprise network, from a branch office over the wide-area network (WAN), and from remote locations over the public Internet. Therefore, the network plays an important role in the availability of data center resources. Network architecture is an important subject that addresses many functional areas of the data center to ensure that data center services are running around the clock.

This chapter discusses the data center networking architecture and design practices relevant to the exam. It goes directly into the key concepts of data center networking and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification. Basic network configuration and verification commands are also included in this chapter. It is assumed that you are familiar with the basics of networking concepts outlined in Introducing Cisco Data Center Networking (DCICN).

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 1-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 1-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. What are the benefits of modular approach in network design?
a. Scalability
b. High availability
c. Security
d. Simplicity

2. Which of the following are key characteristics of data center core layer?
a. Provides connectivity to aggregation layer
b. Provides connectivity to servers
c. Quality of Service
d. Used to enforce network security

3. Which of the following statements about the data center aggregation layer is correct?
a. The purpose of the aggregation layer is to provide high-speed packet forwarding.
b. It provides policy-based connectivity.
c. It provides connectivity to network-based services (load balancer and firewall).
d. This layer is optional in data center design.

4. Where will you connect a server in a data center?
a. Access layer
b. Aggregation layer
c. Core layer
d. Services layer

5. What is a Collapse Core data center network design?
a. Collapse Core is a design where only one core switch is used.
b. In Collapse Core design, distribution and core layers are collapsed.
c. In Collapse Core design, access layer, distribution layer, and core layers are collapsed.
d. Collapse Core design is a nonredundant design.

6. Which of the following are benefits of port channels?
a. Increased capacity
b. Load balancing
c. High availability
d. Simplified network topology

7. Which of the following attributes must match to bundle ports into a port channel?
a. Speed
b. Duplex
c. Flow control

8. LACP is enabled on a port. The port responds to LACP packets that it receives but does not initiate LACP negotiation. Which LACP mode is used?
a. Progressive
b. Active
c. Passive
d. ON

9. Which of the following are valid port channel modes?
a. Progressive
b. Standby
c. Active
d. ON
e. OFF

10. LACP is enabled on a port. The port initiates negotiations with a peer switch by sending LACP packets. Which LACP mode is used?
a. ON
b. Passive
c. Progressive
d. Active

11. Cisco Nexus switches support which of the following port channel load-balancing methods?
a. Source and destination IP address
b. Source and destination MAC address
c. Source and destination TCP/UDP port number
d. LACP port priority
e. None of the above

12. Identify the Cisco switching platform that supports vPC.
a. Catalyst 6500 switches
b. Nexus 7000 Series switches
c. Nexus 5000 Series switches
d. All Cisco switches

13. Which of the following are benefits of using vPC?
a. It eliminates Spanning Tree Protocol (STP) blocked ports.
b. It uses all available uplink bandwidth.

c. It provides fast convergence if either the link or the device fails.
d. It is supported on all Cisco switches.

14. Cisco Fabric Services over Ethernet (CFSoE) performs which of the following operations?
a. It performs load balancing of traffic between two vPC switches.
b. It performs consistency and compatibility checks.
c. It monitors the status of vPC member ports.
d. It is used to exchange Layer 2 forwarding tables between vPC peers.

15. In vPC setup, Cisco Fabric Service over Ethernet (CFSoE) uses which of the following links?
a. vPC peer keepalive link
b. vPC peer link
c. vPC ports
d. vPC peer link and vPC peer keepalive link

16. How many vPC domains can be configured on a switch or VDC?
a. Only one
b. Only two
c. 255
d. There is no limit; you can configure as many as you want.

17. In a FabricPath network, which protocol is used to automatically assign a unique switch ID to each FabricPath switch?
a. Dynamic Host Configuration Protocol (DHCP)
b. Cisco Fabric Services over Ethernet (CFSoE)
c. Dynamic Resource Allocation Protocol (DRAP)
d. In FabricPath, the switch ID must be allocated manually.

18. FabricPath routes the frames based on which of the following parameters?
a. Source switch ID in outer source address (OSA)
b. Destination switch ID in outer destination address (ODA)
c. Destination MAC address
d. IP address of packet

19. In a FabricPath network, the root of the first multidestination tree is elected based on which of the following parameters?
a. Root priority
b. System ID
c. Switch ID
d. MAC address

20. What is the name of the FabricPath control plane protocol?
a. IS-IS

b. OSPF
c. BGP
d. EIGRP

Foundation Topics

Modular Approach in Network Design

In a modular design, the network is divided into distinct building blocks based on features and behaviors. These building blocks are used to construct a network topology within the data center. These blocks or modules can be added to and removed from the network without redesigning the entire data center network; in other words, these blocks are independent from each other.

This approach has several advantages. Modular networks are scalable: modularity allows for considerable growth or reduction in the size of a network without making drastic changes or needing any redesign. Scalable data center network design is achieved by using the principle of hierarchy and modularity. A network must always be kept as simple as possible; the concept seems obvious, but it is very often forgotten. Modular designs are simple to design, configure, and troubleshoot.

Modularity also implies isolation of building blocks, so these blocks are separated from each other and communicate through specific network connections between the blocks. The main advantage occurs during changes in the network: a change in one block does not affect other blocks. Modularity also enables faster moves, adds, and changes (MACs) and incremental changes in the network. Because of the hierarchical topology of a modular design, the addressing is also simplified within the data center network. Therefore, modular design provides easy control of traffic flow and improved security.

The Multilayer Network Design

When you design a data center using a modular approach, the network is divided into three functional layers: Core, Aggregation, and Access. These layers can be physical or logical. This section describes the function of each layer in the data center. Figure 1-1 shows functional layers of a multilayer network design.

Figure 1-1 Functional Layers in the Hierarchical Model

The hierarchical network model provides a modular framework that enables flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks or their modular blocks into three layers. Table 1-2 describes the key features of each layer within the data center.

Table 1-2 Data Center Functional Layers

The multilayer network design consists of several building blocks connected across a network backbone. It takes maximum advantage of many Layer 3 services, including segmentation, load balancing, and failure recovery. In the most general model, Layer 2 switching is used in the access layer, Layer 2 and Layer 3 switching in the distribution layer, and Layer 3 switching in the core.

An advantage of the multilayer network design is scalability. New buildings and server farms can be easily added or removed without changing the design. The redundancy of the building block is extended with redundancy in the backbone. Figure 1-2 shows a sample topology of a multilayer network design.

Figure 1-2 Multilayer Network Design

Access Layer

The access layer is the first point of entry into the network for edge devices, end stations, and servers. The data center access layer provides Layer 2, Layer 3, and mainframe connectivity. The design of the access layer varies, depending on whether you use Layer 2 or Layer 3 access. A mix of both Layer 2 and Layer 3 access models permits a flexible solution and enables application environments to be optimally positioned. The access layer in the data center is typically built at Layer 2, which allows better sharing of service devices across multiple servers. This design also enables the use of Layer 2 clustering, which requires the servers to be Layer 2 adjacent. With Layer 2 access, the default gateway for the servers can be configured at the aggregation layer.

The switches in the access layer are connected to two separate distribution layer switches for redundancy. This topology provides access layer high availability. With a dual-homing network interface card (NIC), you need two access layer switches with the same VLAN extended across both access switches using a trunk port. The VLAN or trunk supports the same IP address allocation on the two server links to the two separate switches. The default gateway would be implemented at the aggregation layer switch. With Layer 2 at the aggregation layer, there are physical loops in the topology that must be managed by the STP. Rapid Per-VLAN Spanning Tree Plus (Rapid PVST+) is recommended to ensure a logically loop-free topology over the physical topology.

Aggregation Layer

The aggregation (or distribution) layer aggregates the uplinks from the access layer to the data center core. This layer is the critical point for control and application services. Security and application service devices (such as load-balancing devices, SSL offloading devices, firewalls, and IPS devices) are often deployed as a module in the aggregation layer.
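As a rough illustration of these two design points, the following NX-OS snippet is a minimal sketch only; the VLAN number, addresses, and HSRP group are hypothetical, and HSRP is shown purely as one common way to make the aggregation-layer gateway redundant (the chapter itself does not cover first-hop redundancy protocols). It confirms Rapid PVST+ as the spanning-tree mode and defines a server default gateway as an SVI on an aggregation switch:

! Rapid PVST+ is the default spanning-tree mode on NX-OS
spanning-tree mode rapid-pvst
! Server VLAN 10 gateway hosted at the aggregation layer
feature interface-vlan
feature hsrp
interface vlan 10
  no shutdown
  ip address 10.1.10.2/24
  hsrp 10
    ip 10.1.10.1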

Service devices that are deployed at the aggregation layer are shared among all the servers. Service devices that are deployed at the access layer provide benefit only to the servers that are directly attached to the specific access switch. If the same VLAN is required across multiple access layer devices, the aggregation layer might also need to support a large Spanning Tree Protocol (STP) processing load. The aggregation layer typically provides Layer 3 connectivity to the core.

Note
In traditional multilayer data center design, the boundary between Layer 2 and Layer 3 at the aggregation layer can be in the multilayer switches, the firewalls, or the content switching devices.

Core Layer

Implementing a data center core is a best practice for large data centers. When the core is implemented in an initial data center design, it helps ease network expansion and avoid disruption to the data center environment. The data center typically connects to the campus core using Layer 3 links. The data center network is summarized, and the core injects a default route into the data center network. The following drivers are used to determine whether a core solution is appropriate:

40 and 100 Gigabit Ethernet density: Are there requirements for higher bandwidth connectivity, such as 40 or 100 Gigabit Ethernet? With the introduction of the Cisco Nexus 7000 M-2 Series of modules, the Cisco Nexus 7000 Series switches can now support much higher bandwidth densities.

10 Gigabit Ethernet density: Without a data center core, will there be enough 10 Gigabit Ethernet ports on the campus core switch pair to support both the campus distribution and the data center aggregation modules?

Administrative domains and policies: Separate cores help to isolate campus distribution layers from data center aggregation layers for troubleshooting, administration, and implementation of policies (such as quality of service, ACLs, and maintenance).

Anticipation of future development: The impact that can result from implementing a separate data center core layer at a later date might make it worthwhile to install the core layer at the beginning.

Key core characteristics include the following:
Distributed forwarding architecture
Low-latency switching
10 Gigabit Ethernet scalability
Scalable IP multicast support

Collapse Core Design

Although there are three functional layers in a network design, these layers do not have to be physical layers, but can be logical layers. This topic describes the reasons for combining the core and

aggregation layers.

For the campus of a small or medium-sized business, the three-tier architecture might be excessive. An alternative would be to use a two-tier architecture in which the distribution and the core layer are combined. With this option, the core switches would provide direct connections to the access-layer switches. In the collapse core design, the small or medium-sized business can reduce costs because the same switch performs both distribution and core functions. Figure 1-3 shows a sample topology of a collapse core network design.

Figure 1-3 Collapse Core Design

The three-tier architecture previously defined is traditionally used for a larger enterprise campus, with separate building blocks such as the campus, server farm, and edge modules. One disadvantage of two-tier architecture is scalability. As the small campus begins to scale, a three-tier architecture should be considered.

Spine and Leaf Design

Most large-scale modern data centers are now using spine and leaf design. This design can have only two layers, which are spine and leaf. In the spine and leaf design, a leaf switch acts as the access layer. The end hosts, such as servers and other edge devices, connect to leaf switches, and a leaf switch connects to all the spine switches. The spine provides connectivity between the leaf switches. Therefore, adding more spine switches can increase the capacity of the network, and adding more leaf switches can increase the port density on the access layer. Examples of spine and leaf architectures are FabricPath and Application Centric Infrastructure (ACI).

Spine and leaf design can also be mixed with traditional multilayer data center design. In this mixed approach, you can build a scalable access layer using spine and leaf architecture, while keeping other layers of the data center intact. Figure 1-4 shows a sample spine and leaf topology.

Figure 1-4 Spine and Leaf Architecture

The following are key characteristics of spine and leaf design:

Multiple design and deployment options enabled using a variable-length spine with ECMP to leverage multiple available paths between leaf and spine.

Reduced network congestion using a large, shared buffer-switching platform for the leaf node. Network congestion results when multiple packets inside a switch request the same outbound port because of application architecture.

Lower power consumption using multiple 10-Gbps links between spine and leaf instead of a single 40-Gbps link. The current power consumption of a 40-Gbps optic is more than ten times that of a single 10-Gbps optic.

Future proofing for a higher-performance network through the selection of higher port count switches at the spine.

Note
At the time of writing this book, spine and leaf topology is supported only by Cisco Nexus switches.

Port Channel

In this section, you learn about the important concepts of a port channel.

What Is Port Channel?

As you learned in the previous section, data center network topology is built using multiple switches. These switches are connected to each other using physical network links. When you connect two devices using multiple physical links, you can group these links to form a logical bundle. This logical group is called EtherChannel, or port channel, and the physical interfaces participating in this group are called member ports of the group. In other words, a port channel is an aggregation of multiple physical interfaces that creates a logical interface. The two devices connected via this logical group of ports see it as a single, big network pipe between the two devices. Figure 1-5 shows how multiple physical interfaces are combined to create a logical port channel between two switches.

Figure 1-5 Port Channel

When a port channel is created, you will see a new interface in the switch configuration. This new interface is a logical representation of all the member ports of the port channel. The port channel interface can be configured with its own speed, bandwidth, delay, IP address, duplex, flow control, maximum transmission unit (MTU), and so on. On the Nexus 5000 Series switch, you can bundle up to 16 links into a port channel. On the Nexus 7000 Series switch, you can bundle eight active ports on an M-Series module and up to 16 ports on the F-Series module.
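To make the idea concrete, the following minimal NX-OS sketch (the interface range, channel number, description, and MTU are hypothetical) bundles two physical ports; the logical interface port-channel 10 is created automatically and can then carry its own configuration:

interface ethernet 1/1-2
  channel-group 10
! The logical interface now exists and accepts its own settings
interface port-channel 10
  description Uplink to aggregation switch
  mtu 9216

Whatever is configured under interface port-channel 10 applies to all of its member ports.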

Benefits of Using Port Channels

There are several benefits of using port channels, and because of these benefits you will find them commonly used within data center networks. Some of these benefits are as follows:

Increased capacity: By combining multiple Ethernet links into one logical link, the capacity of the link can be increased. For example, eight 10-Gbps links can be combined to create a logical link with bandwidth of 80 Gbps.

High availability: In a port channel, multiple Ethernet links are bundled together to create a logical link. In case of a physical link failure, the port channel continues to operate even if a single member link is alive; therefore, it automatically increases availability of the network. For this reason, when creating a port channel on a modular network switch, it is recommended that you use member ports from different line cards. This will help you increase network availability in the case of line card failure. You can also shut down the port channel interface, which will result in shutting down all member ports.

Load balancing: The switch distributes traffic across all operational interfaces in the port channel. This enables you to distribute traffic across multiple physical interfaces, increasing the efficiency of your network. Port channel load balancing configuration is discussed later in this chapter.

Simplified network topology: In some cases, port channels can be used to simplify network topology by aggregating multiple links into one logical link. For example, if you have multiple links between a pair of switches, STP blocks all the links except one, to avoid network loops. In the case of port channels, multiple links are combined into one logical link; therefore, it simplifies the network topology by avoiding the STP calculation and reducing network complexity by reducing the number of links between switches.

Port Channel Compatibility Requirements

To bundle multiple switch interfaces into a port channel, these interfaces must meet the compatibility requirements. Some of these attributes are Speed, Duplex, Flow-control, Port mode, VLANs, Media type, MTU, and interface description. For example, when you add an interface to a port channel, its speed must match with the other member interfaces in this port channel; therefore, you cannot put a 1 G interface and a 10 G interface into the same port channel. Other interface attributes are checked by switch software to ensure that the interface is compatible with the channel group. In case of an incompatible attribute, port channel creation will fail. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel. On the Nexus platform, you can use the show port-channel compatibility-parameters command to see the full list of compatibility checks. You can force ports with incompatible parameters to join the port channel if the following parameters are the same: Speed, Duplex, and Flow-control.
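A brief sketch of both operations follows (the interface and channel numbers are hypothetical):

! List every attribute the switch compares before bundling a port
show port-channel compatibility-parameters
! Force a port into channel-group 10 despite minor mismatches;
! speed, duplex, and flow control must still match
interface ethernet 1/3
  channel-group 10 force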

Link Aggregation Control Protocol

The Link Aggregation Control Protocol (LACP) provides a standard mechanism to ensure that peer switches exchange information and agree on the necessary details to bundle ports into a port channel. This protocol ensures that both sides of the link have a compatible configuration (Speed, Duplex, allowed VLAN) to form a bundle. Network devices use LACP to negotiate an automatic bundling of links by sending LACP packets to the peer. LACP was initially part of the IEEE 802.3ad standard and later moved to the IEEE 802.1AX standard.

LACP enables you to configure up to 16 interfaces into a port channel. On Nexus 7000 switches, on M Series modules, a maximum of 8 interfaces can be in active state, and a maximum of 8 interfaces can be placed in a standby state. Starting from NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F Series module.

To run LACP, LACP must be enabled on switches at both ends of the link. You first enable the LACP feature globally on the device. Then you enable LACP for each channel by setting the channel mode for each interface to active or passive. You can configure channel mode active or passive for individual links in the LACP channel group when you are adding the links to the channel group.

Note
Nexus switches do not support the Cisco proprietary Port Aggregation Protocol (PAgP) to create port channels.

Port Channel Modes

Member ports in a port channel are configured with a channel mode. Nexus switches support three port channel modes: on, active, and passive. When you configure a static port channel without specifying a channel mode, the channel mode is set to on. Table 1-3 shows the port channel modes supported by Nexus switches.

Table 1-3 shows the port channel modes supported by Nexus switches.

Table 1-3 Port Channel Modes

If you attempt to change the channel mode to active or passive before enabling the LACP feature on the switch, the device returns an error message. After LACP is enabled globally, you can enable LACP on ports by configuring each interface in the channel for the channel mode as either active or passive. When an LACP port attempts to negotiate with an interface in the on state, it does not receive any LACP packets; therefore, it does not join the LACP channel group, and it becomes an individual link for that interface.

Both the passive and active modes enable LACP to negotiate between ports to determine whether they can form a port channel, based on criteria such as the port speed and the trunking state. The passive mode is useful when you do not know whether the remote system, or partner, supports LACP. Ports can form an LACP port channel when they are in different LACP modes, as long as the modes are compatible, as shown in Figure 1-6.

Figure 1-6 Port Channel Mode Compatibility

Configuring Port Channel

Port channel configuration on the Cisco Nexus switches includes the following steps (a configuration sketch follows the list):

1. Enable the LACP feature. This step is required only if you are using active mode or passive mode.

2. Configure the physical interface of the switch with the channel-group command and specify the channel number. You can also specify the channel mode on, active, or passive within the channel-group command. This command automatically creates an interface port channel with the number that you specified in the command.

3. Configure the newly created interface port channel with the appropriate configuration, such as description, trunk configuration, allowed VLANs, and so on.
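A minimal sketch of the three steps might look like the following (the interfaces, channel number, description, and VLAN range are hypothetical):

feature lacp
interface ethernet 1/1-2
  channel-group 20 mode active
interface port-channel 20
  description Uplink to distribution switch
  switchport mode trunk
  switchport trunk allowed vlan 10-20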

Figure 1-7 shows configuration of a port channel between two switches.

Figure 1-7 Port Channel Configuration Example

Port Channel Load Balancing

Nexus switches distribute the traffic across all operational ports in a port channel. This load balancing is done using a hashing algorithm that takes addresses in the frame as input and generates a value that selects one of the links in the channel. This provides load balancing across all member ports of a port channel, but it also ensures that frames from a particular address are always forwarded on the same port; hence, they are always received in order by the remote device. You can configure the device to use one of the following methods to load balance across the port channel (a configuration sketch follows the list):

Destination MAC address
Source MAC address
Source and destination MAC address
Destination IP address
Source IP address
Source and destination IP address
Source TCP/UDP port number
Destination TCP/UDP port number
Source and destination TCP/UDP port number
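The load-balancing method is configured globally, and the exact keyword form varies by platform and release; the following lines are a sketch rather than universal syntax (the first form is typical of the Nexus 7000, the second of the Nexus 5000), and the current method can be displayed with the show command:

port-channel load-balance src-dst ip
port-channel load-balance ethernet source-dest-ip
show port-channel load-balance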

Verifying Port Channel Configuration

Several show commands are available on the Nexus switch to check the port channel configuration. These commands are helpful to verify the port channel configuration and troubleshoot port channel issues. Table 1-4 provides some useful commands.

Table 1-4 Verify Port Channel Configuration

Virtual Port Channel

In this section, you learn about important concepts of virtual port channel (vPC).

What Is Virtual Port Channel?

You learned in the previous section that a port channel bundles multiple physical links into a logical link. All the member ports of a port channel belong to the same network switch. A vPC enables the extension of a port channel across two physical switches. It means that member ports in a virtual port channel can be from two different network switches. Cisco vPC technology allows dual-homing of a downstream device to two upstream switches. These two switches work together to create a virtual domain, so a port channel can be extended across the two devices within this virtual domain. The upstream switches present themselves to the downstream device as one switch from the port channel and Spanning Tree Protocol (STP) perspective. Figure 1-8 shows a downstream switch connected to two upstream switches in a vPC configuration.

Figure 1-8 Virtual Port Channel

In a conventional network design, smaller Layer 2 segments are preferred. However, with the current data center virtualization trend, this preference is changing. A virtualized server running hypervisor software, such as VMware ESX Server, Microsoft Hyper-V, and Citrix XenServer, requires Layer 2 Ethernet connectivity between the physical servers to support seamless workload mobility, within and across the data center locations. There are some other applications, such as high availability clusters, that require member servers to be on the same Layer 2 Ethernet network to support private interprocess communication and public client-facing communication using a virtual IP of the cluster. These requirements lead to the use of large Layer 2 networks. As a result, data center network architecture is shifting from a highly scalable Layer 3 network model to a highly scalable Layer 2 model. The innovation of new technologies to manage large Layer 2 network environments is the result of this shift in the data center architecture.

A large Layer 2 network is built using many network switches connected to each other in a redundant topology using multiple network links. Building redundancy is an important design consideration to protect against device or link failure. However, for an Ethernet network, this redundant topology can result in a Layer 2 loop. A Layer 2 loop is a condition where packets are circulating within the redundant network topology. These loops can cause a meltdown of network resources, causing high CPU and link utilization; therefore, it was necessary to develop protocols and control mechanisms to limit the disastrous effects of a Layer 2 topology loop in the network. Spanning Tree Protocol (STP) was the primary solution to this problem because it provides loop detection and loop management capability for Layer 2 Ethernet networks. Over a period of time, and after going through a number of enhancements and extensions, STP became very mature and stable. Although STP can support very large network environments, it still has a fundamental issue that makes it suboptimal for the network. This issue is related to the STP principle that allows only one active network path from one device to another to break the topology loops in a network. It means that no matter how many connections exist between network devices, only one will be active at any given time. STP does not provide an efficient way to manage large Layer 2 networks; therefore, it is important to migrate away from STP as a primary loop management technology toward new technologies such as vPC and Cisco FabricPath. FabricPath technology is discussed later in this chapter.

Note A brief summary of Spanning Tree is provided later in the FabricPath section. Detailed discussion on STP is available in Introducing Cisco Data Center Networking (DCICN).

Port channel technology discussed earlier in this chapter was a key enhancement to Layer 2 Ethernet networks. In a port channel, Layer 2 loops are managed by bundling multiple physical links into one logical link. This configuration keeps the switches from forwarding broadcast and multicast traffic back to the same link, avoiding loops. This enhancement enables the use of multiple Ethernet links to forward traffic between the two network devices, resulting in efficient use of network capacity. It also provides the capability to equally balance traffic by sending packets to all available links using a load-balancing algorithm. Port channel technology has another great benefit: it can recover from a link loss in the bundle within a second, with minimal loss of traffic and no effect on the active network topology.

The limitation of the classic port channel is that it operates between only two devices. It means that if you configure a port channel between two network devices, there is only one logical path between the two switches. In large networks with redundant devices, the alternative path is often connected to a different network switch in a topology that would cause a loop. Virtual Port Channel (vPC) addresses this limitation by allowing a pair of switches to act as a virtual endpoint, so it looks like a single logical entity to port channel–attached devices. This environment combines the benefits of hardware redundancy with the benefits of port channel. The two switches that act as the logical port channel endpoint are still two separate devices; these switches collaborate only for the purpose of creating a vPC.

Figure 1-9 shows the comparison between STP topology and vPC topology using the same devices and network links. It is important to observe that there is no blocking port in the vPC topology.

Figure 1-9 Virtual Port Channel Bi-Section Bandwidth

Benefits of Using Virtual Port Channel

Virtual Port Channel provides the following benefits:

Enables a single device to use a port channel across two upstream switches

Eliminates STP blocked ports

Provides a loop-free topology

Uses all available uplink bandwidth

Provides fast convergence in the case of link or device failure

Provides link-level resiliency

Helps ensure high availability

Components of vPC

In a vPC configuration, a downstream network device that connects to a pair of upstream Nexus switches sees them as a single Layer 2 switch. These two upstream Nexus switches operate independently with their own control, data, and management planes. However, the control planes of these switches exchange information so that they appear as a single logical Layer 2 switch to a downstream device, and the data planes of these switches are modified to ensure optimal packet forwarding in a vPC configuration. Before going into the detail of vPC control and data plane operation, it is important to understand the components of vPC and the function of these components. Figure 1-10 shows the components of a virtual port channel.

Note The operational plane of a switch includes control, data, and management planes. This subject is covered in Chapter 5, “Management and Monitoring of Cisco Nexus Devices.”

Figure 1-10 Virtual Port Channel Components

The vPC architecture consists of the following components:

vPC peers: The two Cisco Nexus switches combined to build a vPC architecture are referred to as vPC peers. This pair of switches acts as a single logical switch, which enables other devices to connect to the two chassis using vPC.

vPC peer link: This is a link between the two vPC peers to synchronize state. The vPC peer link is the most important connectivity element in the vPC system. Typically, this link is built using two physical interfaces in port channel configuration. This link creates the illusion of a single control plane by forwarding bridge protocol data units (BPDUs) and Link Aggregation Control Protocol (LACP) packets to the primary vPC switch from the secondary vPC switch. The peer link is also used to synchronize MAC address tables between the vPC peers and to synchronize Internet Group Management Protocol (IGMP) entries for IGMP snooping. The peer link provides the necessary transport for data plane traffic, such as multicast traffic, and for the traffic of orphaned ports. If a vPC device is also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) packets.

Cisco Fabric Services: Cisco Fabric Services is a reliable messaging protocol used between the vPC peer switches to synchronize control and data plane information. It uses the vPC peer link to exchange information between peer switches. When you enable the vPC feature, the device automatically enables Cisco Fabric Services over Ethernet (CFSoE). CFSoE carries messages and packets for many features linked with vPC, such as STP and IGMP. No CFSoE configuration is required for vPC to function properly. To help ensure that the vPC peer link communication for the CFSoE protocol is always available, Spanning Tree has been modified to keep the peer-link ports always forwarding.

vPC peer keepalive link: This is a Layer 3 communication path between the vPC peer switches. The vPC peer keepalive link is not used for any data or synchronization traffic; it only sends IP packets to make sure that the originating switch is operating and running vPC. Peer keepalive is used as a secondary test to make sure that the remote peer switch is operating properly. The peer keepalive link is used to monitor the status of the vPC peer when the vPC peer link goes down; it helps the vPC switch to determine whether only the peer link has failed or whether the vPC peer switch has failed entirely. Peer keepalive can also use the management port of the switch and often runs over an out-of-band (OOB) network.

vPC domain: This is a pair of vPC peer switches combined using the vPC configuration. A vPC domain is created using two vPC peer devices that are connected together using a vPC peer keepalive link and a vPC peer link. A numerical domain ID is assigned to the vPC domain. A peer switch can join only one vPC domain.

vPC member port: This port is a member of a virtual port channel on the vPC peer switch.

vPC: This is a Layer 2 port channel that spans the two vPC peer switches. The device on the other end of the vPC sees the vPC peer switches as a single logical switch. There is no need for this device to support vPC itself; it connects to the vPC peer switches using a regular port channel. This device can be configured to use LACP or a static port channel configuration.

Orphan device: The term orphan device is used for any device that is connected to only one switch within a vPC domain. It means that the orphan device is not using a port channel configuration, and for this device there is no vPC configured on the peer switches.

Orphan port: This is a switch port that is connected to an orphan device. This term is also used for the vPC member ports that are connected to only one vPC switch. This situation can happen in a failure condition where a device connected to vPC loses all its connections to one of the vPC peer switches.

vPC VLANs: The VLANs that are allowed on the vPC are called vPC VLANs. These VLANs must also be allowed on the vPC peer link.

Non-vPC VLANs: These VLANs are not part of any vPC and are not present on the vPC peer link.

vPC Data Plane Operation

vPC technology is designed to always forward traffic locally when possible. In a normal network condition, the vPC peer link is not used for the regular vPC traffic and is considered to be an extension of

the control plane between the vPC peer switches. It means that the vPC peer link does not typically forward data packets; it is used specifically for switch management traffic and occasionally for the data packets from failed network ports. The vPC peer link carries the following types of traffic:

vPC control traffic, such as CFSoE, BPDUs, and LACP messages

Flooding traffic, such as broadcast, multicast, and unknown unicast traffic

Traffic from orphan ports

This behavior of vPC enables the solution to scale, because the bandwidth requirement for the vPC peer link is not directly related to the total bandwidth of all vPC ports.

To implement the specific vPC forwarding behavior, Cisco Fabric Services over Ethernet (CFSoE) is used to synchronize the Layer 2 forwarding tables between the vPC peer switches. Therefore, there is no dependency on the regular MAC address learning between the vPC peer switches. CFSoE-based MAC address learning is applicable only to the vPC ports. This method of learning is not used for the ports that are not part of a vPC configuration.

vPC performs loop avoidance at the data plane by implementing forwarding rules in the hardware. One of the most important forwarding rules for vPC is that a packet that enters the vPC peer switch via a vPC member port, and then goes to the other peer switch via the peer link, is not allowed to exit the switch on a vPC member port. This packet can exit on any other type of port, such as an L3 port or an orphan port. This rule prevents the packets that are received on a vPC from being flooded back onto the same vPC by the other peer switch. The vPC loop avoidance rule is depicted in Figure 1-11.

Figure 1-11 vPC Loop Avoidance

It is important to understand the vPC data plane operation for the orphan ports. The first type of orphan port is one that is connected to an orphan device and is not part of any vPC configuration. For this type of orphan port, normal switch forwarding rules are applied. The traffic for this type of orphan port can use the vPC peer link as a transit link to reach the devices on the other vPC peer switch. The second type of orphan port is one that is a member of a vPC configuration, but the other peer switch has lost all the associated vPC member ports. In this special case, the vPC loop avoidance rule is disabled, and the vPC peer switch is allowed to forward the traffic that is received on the peer link to one of the remaining active vPC member ports.

vPC Control Plane Operation

As discussed earlier, the vPC peer link is used for control plane messaging, such as CFSoE, BPDUs, and LACP messages. CFSoE is used as the primary control plane protocol for vPC. CFSoE performs the following vPC control plane operations:

Exchange the Layer 2 forwarding table between the vPC peers: If one vPC peer learns a new MAC address, both vPC peer switches know that MAC address, and this MAC is programmed on the Layer 2 Forwarding (L2F) table of both peer devices. This new method of MAC address learning using CFSoE replaces the conventional method of MAC address learning for vPC.

Perform consistency and compatibility checks: The peer switches exchange information using CFSoE to ensure that the vPC configuration is consistent across both switches. It checks the switch configuration for vPC as well as switch-wide parameters that need to be configured consistently on both peer switches. Similar to the conventional port channel, vPC is also subject to the port compatibility checks; CFSoE is used to validate that vPC member ports are compatible and can be combined to form a port channel.

Synchronize the IGMP snooping status: In a vPC configuration, IGMP snooping behavior is modified to synchronize the IGMP entries between the peer switches. When IGMP traffic enters a vPC peer switch, it triggers the synchronization of the multicast entry on both vPC peer switches. It is helpful in reducing data traffic on the vPC peer link by forwarding the traffic to the local vPC member ports.

Monitor the status of the vPC member ports: For a smooth vPC operation during failure conditions, CFSoE monitors the status of the vPC member ports. When all member ports of a vPC go down on one of the peer switches, the other peer switch is notified that its ports are now orphan ports. The switch modifies the forwarding behavior so that all traffic received on the peer link for that vPC is now forwarded locally on the vPC port.

Synchronize the Address Resolution Protocol (ARP) table: This feature enables faster convergence time upon reload of a vPC switch. The ARP table is synchronized between the vPC peer devices using CFSoE. On the Nexus 5000 Series switch, this feature is implemented in Cisco NX-OS Software Release 5.0(2), and for Nexus 7000 it is implemented in Cisco NX-OS Software Version 4.2(6).

Determine primary and secondary vPC devices: When you combine the vPC peer switches in a vPC domain, an election is held to determine the primary and secondary role of the peer switches. This role is a control plane function, and it defines which switch will be primarily responsible for the generation and processing of spanning tree BPDUs. This vPC role is nonpreemptive; if the primary vPC peer device fails, the secondary vPC peer device takes the primary role, and it remains primary even if the original device is restored. In the case of peer link failure, the peer keepalive mechanism is used to find out whether the peer switch is still operational. If both switches are operational and the peer link is down, the secondary switch shuts down all vPC member ports and all switch virtual interfaces (SVI) that are associated with the vPC VLANs. This mechanism protects from split brain between the vPC peer switches. It also helps to stabilize the vPC operation during failure conditions.

Agree on LACP and STP parameters: In a vPC domain, the two peer switches agree on LACP

For STP. both primary and secondary switches use the same bridge ID to present themselves as a single switch. vPC Limitations It is important to know the limitations of vPC technology. and all ports for a given vPC must be in the same VDC. and it uses its own bridge ID.and STP parameters to present themselves as a single logical switch to the device that is connected on a vPC. This domain ID must be unique . Dynamic routing from the vPC peers to routers connected on a vPC is not supported. In this case. Before you start vPC configuration. make sure that peer switches are connected to each other via physical links that can support the requirements of a vPC peer link. It is recommended that routing adjacencies be established on separate routed links. The vPC technology does not support the configuration of Layer 3 port channels. and that a Layer 3 path is available between the two peer switches. the behavior depends on the peer-switch option. which must be enabled on both peer switches. which are combined with the vPC domain ID. only the primary vPC switch sends and processes BPDUs. a common LACP system ID is generated from a reserved pool of MAC addresses. For LACP. vPC is a Layer 2 port channel: A vPC is a Layer 2 port channel. the secondary switch only relays BPDUs and does not generate any BPDU. vPCs can be configured in multiple virtual device contexts (VDC). Peer-link is always 10 Gbps or more: Only 10 Gigabit Ethernet ports can be used for the vPC peer link. It is recommended that you use at least two 10 Gigabit Ethernet ports in dedicated mode on two different I/O modules. In this case. vPC domains cannot be stretched across multiple VDCs on the same switch. but the configuration is entirely independent. Configuration Steps of vPC When you are configuring a vPC. Use the following command to enable the vPC feature: feature vpc 2. When the peer-switch option is used. Enable the vPC feature. It is not possible to add more than two switches or Virtual Device Context (VDC) to a vPC domain. It is not possible for a switch or VDC to participate in more than one vPC domain. Create the vPC domain and assign a domain ID to this domain. If the peer-switch option is not used. Only one vPC domain ID per switch: Only one vPC domain ID can be configured on a single switch or VDC. both switches send and process BPDU. vPC configuration on the Cisco Nexus switches includes the following steps: 1. A separate vPC peer link and vPC peer keepalive link is required for each of the VDCs. the configuration must be done on both peer switches. Some of the key points about vPC limitations are outlined next: Only two switches per vPC domain: A vPC domain by definition consists of a pair of switches identified by a shared vPC domain ID. Each VDC is a separate switch: vPC is a per-VDC function on the Cisco Nexus 7000 Series switches.

and should be configured on both vPC switches. Use the following command to create the vPC domain:

vpc domain <domain-id>

3. Establish peer keepalive connectivity. A peer keepalive link must be configured before you set up the vPC peer link. The peer keepalive link can be in any VRF, including the management VRF. Use the following command under the vPC domain to configure the peer keepalive link:

peer-keepalive destination <remote peer IP> source <local IP> vrf mgmt

4. Create a peer link. The vPC peer link is configured in a port channel configuration. This port channel must be trunked to enable multiple VLANs over the vPC. This configuration must be the same on both peer switches. Use the following commands under the port channel to configure the peer link:

interface port-channel <peer link number>
switchport mode trunk
vpc peer-link

5. Create the vPC itself. Create a regular port channel and add the vpc command under the port channel configuration to create a vPC:

interface port-channel <vPC number>
vpc <vPC number>
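A consolidated sketch of these steps on one peer might look like the following (the domain ID, port channel numbers, VRF name, and IP addresses are hypothetical; the mirror-image configuration is applied on the other peer switch):

feature vpc
vpc domain 1
  peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  vpc 20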

Figure 1-12 shows configuration of vPC to connect an access layer switch to a pair of distribution layer switches.

Figure 1-12 vPC Configuration Example

Verification of vPC

To verify the status of vPC, you can use all the port channel show commands discussed earlier. In addition, several show commands are available specifically for vPC. These commands are helpful in verifying the vPC configuration and troubleshooting vPC issues.
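For example, the following commands are commonly used to check the overall vPC status, the consistency parameters, the peer keepalive status, and the vPC role (availability can vary slightly by platform and NX-OS release):

show vpc brief
show vpc consistency-parameters global
show vpc peer-keepalive
show vpc role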

Table 1-5 shows some useful vPC commands.

Table 1-5 Verify vPC Configuration

FabricPath

As mentioned earlier in this chapter, server virtualization and seamless workload mobility is a leading trend in the data center. A large-scale, highly scalable, and flexible Layer 2 fabric is needed in most data centers to support distributed and virtualized application workloads. To understand FabricPath, it is important to first discuss the limitations of STP.

Spanning Tree Protocol

To support high availability within a Layer 2 domain, switches are normally interconnected using redundant links. STP is a Layer 2 control plane protocol that runs on switches to ensure that you do not create topology loops when you have these redundant paths in the network. To create a loop-free topology, STP builds a treelike structure by blocking certain ports in the network. Switches exchange information using bridge protocol data units (BPDUs), and this information is used to elect a root bridge and perform subsequent network configuration by selecting only the shortest path to the root bridge. A root bridge is elected, and STP ensures that only one network path to the root bridge exists; all network ports connecting to the root bridge via an alternative network path are blocked. This tree topology implies that certain links are unused, and traffic does not necessarily take the optimal path. Figure 1-13 shows the STP topology.

Figure 1-13 Spanning Tree Topology

Key limitations of STP are as follows:

Lack of multipathing: In STP, all the switches build only one shortest path to the root switch, and all the redundant links are blocked. Therefore, equal-cost multipath cannot be used to efficiently load balance traffic across multiple network links. This leads to inefficient utilization of network resources.

Inefficient path selection: STP does not always provide the shortest path between two switches. A simple example of inefficient path selection is shown in Figure 1-13, where host A and host B cannot use the direct link between the two switches because it is blocked by STP.

Slow convergence: A link failure or any other small change in the network topology can cause large changes in the spanning tree that must be propagated to all switches, and this could take about 30 seconds in older versions of STP. Rapid Spanning Tree Protocol (RSTP) has improved the STP convergence to a few seconds; however, certain failure scenarios are complex and can destabilize the network for longer periods of time.

Limited scalability: Several factors limit the scalability of classic Ethernet. A switch has limited space to store Layer 2 forwarding information. This limit is increasing with improvements in silicon technology; however, the demand to store more addresses is also growing with computing capacity and the introduction of server virtualization. Furthermore, the Layer 2 MAC addressing is nonhierarchical, and it is not possible to optimize Layer 2 forwarding tables by summarizing the Layer 2 addresses. In addition to the large forwarding table, packet flooding for broadcast and unknown unicast packets populates all end-host MAC addresses in all switches in the network. As you know, when there are changes in the spanning tree topology, the switches will clear and rebuild their forwarding tables. This results in more flooding to learn new MAC addresses.

Protocol safety: A Layer 2 packet header does not have a time to live (TTL), so in a Layer 2 loop condition, packets can continue circulating in the network with no time out. There is no safety against STP failure or misconfiguration. Such failures can be extremely dangerous for the switched network because a topology loop can consume all network bandwidth, causing a major network outage.

What Is FabricPath?

Cisco FabricPath is an innovation to build a simplified and scalable Layer 2 fabric. It combines the benefits of Layer 3 routing with the advantages of Layer 2 switching. Layer 3 routing provides scalability, multipathing, fast convergence, and shortest path selection from any host to any other host in the network. However, Layer 3 creates network segmentation, which makes it less flexible for workload mobility within a virtualized data center. FabricPath brings the stability and performance of Layer 3 routing to the Layer 2 switched network and builds a highly resilient, massively scalable, and flexible Layer 2 fabric.

A FabricPath network is created by connecting a group of switches into an arbitrary topology, combining them in a fabric using a few simple commands. There is no need to run Spanning Tree Protocol within the FabricPath network. Intermediate System-to-Intermediate System (IS-IS) Protocol is used to provide fabric-wide intelligence and ties the elements together. FabricPath can be deployed in any arbitrary topology; however, a typical spine and leaf topology is used in the data center. Figure 1-14 shows the FabricPath topology.

Figure 1-14 FabricPath Topology

Benefits of FabricPath

FabricPath converts the entire data center into one big switching fabric, offering a much more scalable and flexible network fabric. Benefits of Cisco FabricPath technology are as follows:

Operational simplicity: Cisco FabricPath is simple to configure and easy to operate. The switch control plane protocol for FabricPath starts automatically, as soon as the feature is enabled and ports are assigned to the FabricPath network. A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. This protocol requires less configuration than STP in the conventional Layer 2 network; hence, the overall complexity of the solution is reduced significantly.

Flexibility: FabricPath provides a single-domain Layer 2 fabric to connect all the servers. Because all servers within this fabric reside in one common Layer 2 network, they can be moved around within the data center in a nondisruptive manner, without any constraints on the design. This level of enhanced flexibility translates into lower operational costs.

High performance: High performance in a FabricPath network is achieved using the equal-cost multipath (ECMP) feature. The Ethernet fabric can use all the available paths in the network, and traffic can be load balanced across these paths. In addition to ECMP, FabricPath supports the shortest path to the destination; hence, the network latency is reduced and higher performance can be achieved.

Reliability based on proven technologies: Cisco FabricPath uses a reliable and proven

The FabricPath frames includes a time-to-live (TTL) field similar to the one used in IP. and a Reverse Path Forwarding (RPF) check is applied for multidestination frames.control plane mechanism. The role of . Spine switches act as the backbone of the switching fabric. In addition. It means incremental levels of resiliency and bandwidth capacity. you might require equipment upgrades. FabricPath learns MAC addresses selectively at the edge. driving scale requirements up in the data center. and it increases the cost of switches. Efficiency: Conventional Ethernet switches learn the MAC addresses from the frames they receive on the network. Components of FabricPath To understand FabricPath control and data plane operation. the Ethernet MAC address cannot be summarized to reduce the size of the forwarding table. spine switches are deployed to provide connectivity between leaf switches. which is required as computing and virtualization density continue to grow. High scalability and high availability: FabricPath delivers highly scalable bandwidth with capabilities such as ECMP and multi-topology forwarding. allowing scaling the network beyond the limits of the MAC address table of individual switches. enabling IT architects to start with a small Layer 2 network and build a much bigger data center fabric. which is based on the IS-IS routing protocol. This protocol provides fast convergence that has been proven in large service provider networks. FabricPath provides increased redundancy and high availability with the multiple paths between the end hosts. FabricPath data plan also provides robust controls for loop prevention and mitigation. it is important to understand the components of FabricPath. Therefore. As the network size grows. large networks with many hosts require a large forwarding table. Furthermore. Figure 1-15 Components of FabricPath Spine switches: In a two-tier architecture. Figure 1-15 shows components of a FabricPath network.

spine switches is to provide a transit network between the leaf switches.

Leaf switches: Leaf switches provide access layer connectivity for servers and other access layer devices. The hosts on two leaf switches use spine switches to communicate with each other. In a data center, a typical FabricPath network is built using a spine and leaf topology.

FabricPath network: A network that connects the FabricPath switches is called a FabricPath network. Nexus switches can be part of both a FabricPath network and a classic Ethernet network at the same time, with some ports participating in FabricPath and some ports in a classic Ethernet network.

Classic Ethernet (CE): Conventional Ethernet using transparent bridging and running STP is referred to as classic Ethernet.

Core port: Ports that are part of a FabricPath network are called core ports. FabricPath core ports always forward Ethernet frames encapsulated in a FabricPath header. In this FabricPath network, packets are forwarded based on a new header, which includes information such as switch ID and hop count. Ethernet frames transmitted on core ports always carry an IEEE 802.1Q VLAN tag and can be considered as a trunk port. Only FabricPath VLANs are allowed on core ports.

Edge port: Edge ports are switch interfaces that are part of classic Ethernet. These interfaces behave like normal Ethernet ports, and you can connect any Ethernet device to these ports. Because these ports are part of classic Ethernet, MAC learning is performed as usual, and packets are transmitted using standard IEEE 802.3 Ethernet frames. You can configure an edge port as an access port or as an IEEE 802.1Q trunk. An edge port can carry both CE as well as FabricPath VLANs. Devices connected on an edge port in a FabricPath VLAN can send traffic to a remote host over the FabricPath network.

FabricPath VLANs: The VLANs that are allowed on the FabricPath network are called FabricPath VLANs. When you create a VLAN, by default it operates in Classic Ethernet (CE) mode. To allow a VLAN over the FabricPath network, you must go to the VLAN configuration and change the mode to FabricPath.

FabricPath IS-IS: FabricPath switches run a Shortest Path First (SPF) routing protocol based on standard IS-IS to build their forwarding table, similar to a Layer 3 network. This link state protocol is the core of the FabricPath control plane.

Dynamic Resource Allocation Protocol: Dynamic Resource Allocation Protocol (DRAP) is an extension to FabricPath IS-IS that ensures networkwide unique and consistent switch IDs and FTAG values.

Switch table: The switch table provides Layer 2 routing information based on the switch ID. It contains switch IDs and next-hop interfaces to reach the switch.

MAC address table: This table contains MAC address entries and the source from where these MAC addresses are learned. The source could be a local switch interface or a remote switch. If a MAC address table entry points to a local switch interface, the packet is delivered using the classic Ethernet format. In a case where a MAC address table entry is pointing to a remote switch, a lookup is done in the switch table to find the next-hop interface. The packet is then encapsulated in the FabricPath header. The FabricPath header is added to the Layer 2 packet when a packet enters the FabricPath network at the ingress switch, and it is stripped when the packet leaves the FabricPath network at the egress switch.

FabricPath topology: Similar to STP, where a set of VLANs could belong to one or a different instance, in FabricPath a set of VLANs that have the same forwarding behavior will be mapped to a topology or forwarding instance, and different topologies could be constructed for different sets of VLANs. This could be used for traffic engineering, administrative purposes, or security.

FabricPath Frame Format

When an Ethernet frame enters the FabricPath network, the switch encapsulates the frame with a 16-byte FabricPath header. This header includes a 48-bit outer MAC destination address (ODA), a 48-bit outer MAC source address (OSA), and a 32-bit FabricPath tag. The 48-bit outer source and destination MAC addresses contain a 12-bit unique identifier called the switch ID that is used for forwarding in the FabricPath network. The FabricPath tag contains the new Ethertype for the FabricPath header, a forwarding tag called FTAG that uniquely identifies a forwarding graph, and the TTL field that has the same functionality as in the IP header. Figure 1-16 shows the FabricPath frame format.

Figure 1-16 FabricPath Frame Format

I/G bit: The I/G bit serves the same function in FabricPath as in standard Ethernet. Any multidestination addresses have this bit set.

U/L bit: FabricPath switches set this bit in all unicast OSA and ODA fields, indicating that the MAC address is a locally administered address. This is required because the OSA and ODA fields are not MAC addresses and do not uniquely identify a particular hardware component the way a standard MAC address would.

Endnode ID: This field is currently not in use. The purpose of this field is to provide future capabilities, where a virtual or physical end host located behind a single port can uniquely identify itself to the FabricPath network.

OOO/DL: This field is currently not in use. The function of the OOO (out of order)/don't learn (DL) bit varies depending on whether it is used in the outer source address (OSA) or the outer destination address (ODA). The purpose of this field is to provide future capabilities for per-packet load sharing when multiple equal-cost paths are available.

Switch ID: The switch ID is a 12-bit unique identifier for a switch within the FabricPath network. This switch ID can be assigned manually within the switch configuration or allocated automatically using the Dynamic Resource Allocation Protocol (DRAP). In the outer source address (OSA) of the FabricPath header, this field is used to identify the switch that originated the packet. In the outer destination address (ODA), this field identifies the destination switch for the FabricPath packet.

Sub-SwitchID: The Sub-SwitchID is a unique identifier assigned to an emulated switch within the FabricPath switch ID. This emulated switch represents a vPC+ port-channel interface associated with a pair of FabricPath switches. vPC+ is discussed later in this chapter. In the absence of vPC+, this field is always set to 0.

LID: The Local ID (LID) is also known as the port ID. The LID is used to identify the specific physical or logical edge port from which the packet originated or to which it is destined. The forwarding decision at the destination switch can use the LID to send the packet to the correct edge port without any additional lookups on the MAC table. The value of this field in the FabricPath header is locally significant to each switch.

Ethertype: An Ethertype value of 0x8903 is used for FabricPath.

FTAG: The FabricPath forwarding tag (FTAG) is a 10-bit traffic identification tag within the FabricPath header. For unicast traffic, it identifies the FabricPath topology the frame is traversing; the system selects a unique FTAG for each topology configured within FabricPath. In the case of multicast and broadcast traffic, the FTAG identifies a multidestination forwarding tree within a given topology; the system selects the FTAG value for each multidestination tree.

TTL: This field represents the Time to Live (TTL) of the FabricPath frame. Each FabricPath switch hop decrements the TTL by 1, and frames with an expired TTL are discarded. In case of temporary loops during protocol convergence, the TTL prevents Layer 2 frames from circulating endlessly within the network. TTL operates in the same manner within FabricPath as it does in traditional IP forwarding.

FabricPath Control Plane

Cisco FabricPath uses a single control plane that functions for unicast, broadcast, and multicast packets by leveraging the Layer 2 Intermediate System-to-Intermediate System (IS-IS) Protocol. There is no need to run STP within the FabricPath network. The Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 IS-IS. FabricPath IS-IS provides the following benefits:

No IP dependency: No need for IP reachability to form adjacency between devices.

Easily extensible: Using custom type, length, value (TLV) settings, IS-IS devices can exchange information about virtually anything.

Provides Shortest Path First (SPF) routing: Excellent topology building and reconvergence characteristics.

The FabricPath control plane uses the IS-IS Protocol to perform the following functions:

Switch ID allocation: FabricPath switches establish IS-IS adjacency to the directly connected neighboring switches on the core ports. These switches then start a negotiation process that

ensures all FabricPath switches have a unique switch ID. It also ensures that the type and number of FTAG values in use are consistent. Dynamic Resource Allocation Protocol (DRAP) automatically performs this task. While the initial negotiation is in progress, the core interfaces are brought up but not added to the FabricPath topology, and no data plane traffic is passed on these interfaces. However, a configuration command is also provided for the network administrator to statically assign a switch ID to a FabricPath switch.

Unicast routing: After IS-IS adjacencies are built and routing information is exchanged, FabricPath calculates its unicast routing table. This routing table stores the best path to the destination FabricPath switch, which includes the FabricPath switch ID and its next hop. If multiple equal-cost paths to a particular switch ID exist, by default, up to 16 next hops are stored in the FabricPath routing table. You can modify the number of next hops installed using the maximum paths command in FabricPath-domain configuration mode.

Multidestination trees: To ensure a loop-free topology for broadcast, multicast, and unknown unicast traffic, multidestination trees are built within the FabricPath network. FabricPath IS-IS automatically creates multiple multidestination trees connecting to all FabricPath switches and uses them to forward multidestination traffic. As of NX-OS release 6.2(2), two such trees are supported per topology. Within the FabricPath domain, first a root switch is elected for tree number 1 in a topology. After a switch becomes the root for the first tree, this root switch selects the root for each additional multidestination tree in the topology and assigns each multidestination tree a unique FTAG value. Figure 1-17 shows a network with two multidestination trees, where switch S1 is the root for tree number 1 and switch S4 is the root for tree number 2.

Figure 1-17 FabricPath Multidestination Trees

FabricPath switches use the following three parameters to elect a root for the first tree, and to select the root for any additional trees:

Root priority: An 8-bit value. The default root priority is 64.

System ID: A 48-bit value. It is composed of the VDC MAC address.

Switch ID: The unique 12-bit switch ID.

A higher value of these parameters is preferred. You can influence the root bridge election by using the root-priority command. It is recommended to configure a centrally connected switch as the root switch; therefore, spine switches are the best candidates for the root.

FabricPath uses IS-IS to propagate multicast receiver locations within the FabricPath network. For optimized IP multicast forwarding, these trees are further pruned based on IGMP snooping at the edges of the FabricPath network. This allows switches to prune off subtrees that do not have downstream receivers.

FabricPath Data Plane

The Layer 2 packets forwarded through a FabricPath network are of the following types: known unicast, unknown unicast, multicast, or broadcast packets. Known unicast forwarding is done based on the mapping between the unicast MAC address and the switch ID that is already learned by the network. Multiple equal-cost paths could exist to a destination switch ID; in this case, the packets are flow-based load balanced to distribute traffic over multiple equal-cost paths. Multicast forwarding is done based on multidestination trees computed by IS-IS. The FTAG uniquely identifies a topology because different topologies could have different broadcast trees. Broadcast and unknown unicast forwarding is also done based on a multidestination tree computed by IS-IS for each topology.

When a unicast packet enters the FabricPath network, it goes through different lookups and forwarding logic as it passes different switches. This forwarding logic is based on the role that a particular switch plays relative to the traffic flow within the FabricPath network. Following are the three general roles and the forwarding logic for FabricPath traffic:

Ingress FabricPath switch
Receives the classic Ethernet (CE) frame from a FabricPath edge port.
Performs a MAC table lookup to identify the FabricPath switch ID.
Performs a switch ID table lookup to determine the FabricPath next-hop interface.
Encapsulates the frame with the FabricPath header and sends it to the core port for the next-hop FabricPath switch.

Core FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs a switch ID table lookup to determine the next-hop interface to which the frame is destined.
Decrements the TTL and sends the frame to the core port for the next-hop FabricPath switch.

Egress FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs a switch ID table lookup to determine whether it is the egress FabricPath switch.
Uses the LID value in the outer destination address (ODA), or a MAC address table lookup, to determine on which FabricPath edge port the frame should be forwarded.

Conversational Learning

FabricPath switches learn MAC addresses selectively; hence, it is called conversational learning. This selective MAC address learning is based on the bidirectional conversation of end hosts. In a FabricPath switch, FabricPath VLANs use conversational MAC address learning, and classic Ethernet VLANs use traditional MAC address learning. The following are the MAC learning rules FabricPath uses:

Only FabricPath edge switches populate the MAC table and use MAC table lookups to forward frames. FabricPath core switches generally do not learn any MAC addresses, and frames are forwarded based on the switch ID in the outer destination address (ODA).

For Ethernet frames received from a directly connected access or trunk port, the switch unconditionally learns the source MAC address of the frame as a local MAC entry. This MAC learning rule is the same as for a classic Ethernet switch.

For unicast frames received with FabricPath encapsulation, the switch learns the source MAC address of the frame as a remote MAC entry only if the destination MAC address matches an already learned local MAC entry. In other words, the switch learns remote MAC addresses only if the remote device is having a bidirectional "conversation" with a locally connected device, allowing a significant reduction in the size of the MAC address table.

The unknown unicast frames being flooded in the FabricPath network do not necessarily trigger learning on edge switches.

Broadcast frames do not trigger learning on edge switches; however, broadcast frames are used to update any existing MAC entries already in the table. For example, if a host moves from one switch to another and sends a gratuitous ARP to update the Layer 2 forwarding tables, FabricPath switches receiving that broadcast will update an existing entry for the source MAC address.

Multicast frames trigger learning on FabricPath switches (both edge and core), because several key LAN protocols (such as HSRP) rely on source MAC learning from multicast frames to facilitate proper forwarding.

FabricPath Packet Flow Example

In this section, you review an example of packet flow within a FabricPath network. There are two spine switches and three leaf switches in this sample topology. Spine switches are S1 and S2, and leaf switches are S10, S14, and S15. Host A connected to the leaf switch S10 would like to communicate with Host B connected to the leaf switch S15.

Unknown Unicast Packet Flow

In the beginning, FabricPath is configured correctly and the network is fully converged; however, the MAC address tables of switches S10 and S14 are empty. The MAC address table of switch S15 has only one local entry for Host B, pointing to port 1/2. This flow is unknown unicast because switch S10 does not know how to reach destination Host B. The following steps explain

the unknown unicast packet flow originated from Host A:

1. Host A sends an unknown unicast packet to switch S10 on port 1/1.

2. S10 updates the classic Ethernet MAC address table with the entry for the source MAC address of Host A and the port 1/1 where the host is connected.

3. A lookup is performed for the destination MAC address of Host B. This lookup is missed because no entry exists for Host B in the switch S10 MAC address table.

4. S10 encapsulates the packet into the FabricPath header and floods it on the FabricPath multidestination tree. This packet has the switch S10 switch ID in its outer source address (OSA).

5. The FabricPath frame is received by switch S14 and switch S15.

6. S14 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup is missed; therefore, the Host A MAC address is not learned on switch S14.

7. S15 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup finds a hit, and the Host A MAC address is added to the MAC table on switch S15. This MAC address is learned on FabricPath, and the switch ID in the OSA is S10; the MAC address table on S15 shows Host A pointing to switch S10.

8. The packet is delivered to port 1/2 on switch S15.

Figure 1-18 illustrates these steps for an unknown unicast packet flow.

Figure 1-18 FabricPath Unknown Unicast

Known Unicast Packet Flow

After the unknown unicast packet flow from switch S10 to S14 and S15, some MAC address tables are populated. Switch S10 has only one entry for Host A connected on port 1/1. Switch S14 has no entry in its MAC address table. Switch S15 has two entries in its MAC address table: the first entry is local for Host B pointing toward port 1/2, and the second entry is remote for Host A pointing toward switch S10. This flow is known unicast because switch S15 knows how to reach the destination MAC address of Host A. The following steps explain the known unicast packet flow originated from Host B:

1. Host B sends a known unicast packet to Host A on switch S15 port 1/2.

2. Switch S15 performs the MAC address lookup and finds a hit pointing toward switch S10. Switch S15 then performs the switch ID lookup and selects a FabricPath core port to send the packet.

3. The packet is encapsulated in the FabricPath header, with S15 in the OSA and S10 in the ODA.

4. The packet traverses the FabricPath network to reach the destination switch, which is switch S10.

5. The packet is received by switch S10, and the MAC address of Host B is learned by switch S10. This MAC address is learned on FabricPath, and the switch ID in the OSA is S15; the MAC address table on S10 shows Host B pointing to switch S15. Switch S10 performs the MAC address table lookup for Host A, and the packet is delivered on port 1/1.

Figure 1-19 illustrates these steps for a known unicast packet flow.

Figure 1-19 FabricPath Known Unicast

Virtual Port Channel Plus

A virtual port channel plus (vPC+) enables a classical Ethernet (CE) vPC domain and Cisco FabricPath to interoperate. In a FabricPath network, the Layer 2 MAC addresses are learned only by the edge switches. The learning consists of mapping the Layer 2 MAC address to a switch ID, and it is performed in the data plane. In a vPC configuration, a dual-connected host can send a packet into the FabricPath network from two edge switches. When a packet is received at a remote FabricPath switch with the switch ID of the first vPC switch in OSA, the MAC address table of this remote FabricPath switch is updated with the switch ID of the first vPC switch. Because the virtual port channel performs load balancing, the next traffic flow can go via the second vPC switch. When the remote FabricPath switch receives a packet for the same MAC address with the switch ID of the second vPC switch in its OSA, it sees it as a MAC move. On this remote FabricPath switch, the MAC address table is updated with the switch ID of the second vPC switch. This behavior is called MAC flapping, and it can destabilize the network. To address this problem, FabricPath implements an emulated switch. This emulated switch is a logical way of combining two or more FabricPath switches to emulate a single switch to the rest of

the FabricPath network. The emulated switch implementation in FabricPath, where two FabricPath edge switches provide a vPC to a third-party device, is called vPC+. The FabricPath packets originated by the two emulating switches are sourced with the emulated switch ID. The other FabricPath switches are not aware of this emulation and simply see the emulated switch reachable through both of these emulating switches. To configure vPC+, simply configure the vPC peer link as a FabricPath core port, and assign an emulated switch ID to the vPC. It means that the two emulating switches must be directly connected via a peer link, and there should be a peer keepalive path between the two switches.

Figure 1-20 shows an example of vPC+ configuration. In this example, Host B is connected to switch S14 and S15 in a vPC configuration. The packets are load-balanced across the vPC link, coming from S14 as well as from S15, and these packets are seen at the remote switch S10. This results in MAC flapping at switch S10. To solve this problem, configure switch S14 and S15 with a FabricPath emulated switch using the vPC+ configuration. As a result of this configuration, remote switch S10 sees the Host B MAC address entry pointing to the emulated switch.

Figure 1-20 FabricPath with vPC+

FabricPath Interaction with Spanning Tree

FabricPath edge ports support direct Classic Ethernet host connections and connections to traditional Spanning Tree switches. By default, FabricPath switches transmit and process STP BPDUs on FabricPath edge ports. However, FabricPath isolates STP domains. BPDUs, including TCNs, are not transmitted on FabricPath core ports, and as a result, no BPDUs are forwarded or tunneled through the FabricPath network.

The entire FabricPath network appears as a single STP bridge to any connected STP domains. To appear as one STP bridge, all FabricPath switches share a common bridge ID, or BID (C84C.75FA.6000). This BID is statically defined and is not user configurable. If you are connecting STP devices to the FabricPath network, make sure you configure all edge switches as STP root; each FabricPath edge switch must be configured as root for all FabricPath VLANs. Additionally, if multiple FabricPath edge switches connect to the same STP domain, make sure that those edge switches use the same bridge priority value.
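Pulling the vPC+ requirements described above together, a minimal configuration sketch for one of the emulating switches might look like the following (the domain ID, emulated switch ID, port channel number, VRF name, and IP addresses are hypothetical; the same emulated switch ID must be configured on both peers, and exact syntax should be verified for your platform and release):

feature-set fabricpath
feature vpc
vpc domain 10
  fabricpath switch-id 100
  peer-keepalive destination 192.168.20.2 source 192.168.20.1 vrf management
interface port-channel 1
  switchport mode fabricpath
  vpc peer-link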

To ensure that the FabricPath network acts as STP root, all FabricPath edge ports have the STP rootguard function enabled implicitly. If a superior BPDU is received on a FabricPath edge port, the port is placed in the L2 Gateway Inconsistent state until the condition is cleared.

Configuring FabricPath

FabricPath can be configured using a few simple steps. You perform these steps on all FabricPath switches. These steps involve enabling the FabricPath feature, identifying FabricPath VLANs, and identifying FabricPath interfaces. Before you start configuring FabricPath on a switch, ensure the following:

You have Nexus devices that support FabricPath.

You have the appropriate license for FabricPath. The Enhanced Layer 2 license is required to run FabricPath. If required, install the license using the following command:

install license <filename>

The system is running a minimum of the NX-OS 5.1.1 (Nexus 7000) / NX-OS 5.1.3 (Nexus 5500) software release.

The FabricPath configuration steps are as follows (a consolidated example follows the steps):

1. Before you start the FabricPath configuration, the FabricPath feature set must be enabled. Install the FabricPath feature set by issuing install feature-set fabricpath. To enable the FabricPath feature set, use the following command in switch global configuration mode:

feature-set fabricpath

2. Define FabricPath VLANs: Identify the VLANs that will be extended across the FabricPath network. By default, all the VLANs are in Classic Ethernet mode. You can change the VLAN mode to FabricPath by going into the VLAN configuration. Use the following command to change the VLAN mode for a range of VLANs:

vlan <range>
mode fabricpath

3. Identify FabricPath interfaces: Identify the FabricPath core ports. These ports connect to other FabricPath switches to establish IS-IS adjacency. Configuring a port in FabricPath mode automatically enables the IS-IS protocol on the port; there is no need to configure the IS-IS protocol for FabricPath. The following command is used to configure an interface in FabricPath mode:

interface <name>
switchport mode fabricpath

FabricPath Dynamic Resource Allocation Protocol (DRAP) automatically assigns a switch ID to each switch within the FabricPath network. You can also configure a static switch ID using the fabricpath switch-id <ID number> command.

After performing this configuration on all FabricPath switches in the topology, FabricPath devices will form IS-IS adjacencies. The unicast and multicast routing information is exchanged, and switches will start forwarding traffic.
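Putting the steps together, a minimal end-to-end sketch for a single FabricPath switch might look like the following (the VLAN range, interface, and static switch ID are hypothetical; DRAP assigns a switch ID automatically if the last command is omitted):

install feature-set fabricpath
feature-set fabricpath
vlan 100-199
  mode fabricpath
interface ethernet 1/1
  switchport mode fabricpath
fabricpath switch-id 10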

For remote addresses. Table 1-6 lists some useful commands to verify FabricPath.Figure 1-21 FabricPath Configuration Example Verifying FabricPath The show mac address table command can be used to verify that MAC addresses are learned on Cisco FabricPath edge devices. The show fabricpath route command can be used to view the Cisco FabricPath routing table that results from the Cisco FabricPath IS-IS SPF calculations. If multiple equal paths are available between two switches. . all paths will be installed in the Cisco FabricPath routing table to provide ECMP. The Cisco FabricPath routing table shows the best paths to all the switches in the fabric. The command shows local addresses with a pointer to the interface on which the address was learned. it provides a pointer to the remote switch from which this address was learned.

Table 1-6 Verify FabricPath Configuration Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. . noted with the key topics icon in the outer margin of the page. Table 1-7 lists a reference for these key topics and the page numbers on which each is found.

includes completed tables and lists to check your work. and complete the tables and lists from memory. and check your answers in the glossary: Virtual LAN (VLAN) Virtual Port Channel (vPC) Network Interface Card (NIC) Application Centric Infrastructure (ACI) Maximum Transmit Unit (MTU) Link Aggregation Control Protocol (LACP) Spanning Tree Protocol (STP) . “Memory Tables Answer Key.Table 1-7 Key Topics for Chapter 1 Complete Tables and Lists from Memory Print a copy of Appendix B.” (found on the disc) or at least the section for this chapter. Define Key Terms Define the following key terms from this chapter.” also on the disc. “Memory Tables. Appendix C.

Virtual Device Context (VDC) Cisco Fabric Services over Ethernet (CFSoE) Hot Standby Router Protocol (HSRP) Virtual Router Redundancy Protocol (VRRP) Bridge Protocol Data Unit (BPDU) Internet Group Management Protocol (IGMP) Time To Live (TTL) Dynamic Resource Allocation Protocol (DRAP) .

you should mark that question as wrong for purposes of the self-assessment. it is covered in detail in Chapter 18. 1. Cisco Nexus products provide the performance.” Table 2-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. You can find the answers in Appendix A. Nexus 6000. FC. such as FCoE. read the entire chapter.” “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. Cisco Nexus Product Family This chapter covers the following exam topics: Describe the Cisco Nexus Product Family Describe FEX Products The Cisco Nexus product family offers a broad portfolio of LAN and SAN switching products spanning software switches that integrate with multiple types of hypervisors to hardware data center core switches and campus core switches. which includes Nexus 9000. including different types of data center protocols. and availability to accommodate diverse data center needs. If you do not know the answer to a question or are only partially sure of the answer. iSCSI.) . Nexus 5000. Note Nexus 1000V is not covered in this chapter. scalability. Which of the following Nexus 9000 hardwares can work as a spine in a Spine-Leaf topology? (Choose three. Table 2-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. “Cisco Nexus 1000V and Virtual Switching. “Answers to the ‘Do I Know This Already?’ Quizzes. Nexus 7000. Nexus 3000.Chapter 2. You learn the different specifications of each product. and so on. This chapter focuses on the Cisco Nexus product family. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. and Nexus 2000.

8 b. False 3. 20 b. 4+1 7. 64 . True or false? By upgrading SUP1 to the latest software and using the correct license. How many unified ports does the Nexus 6004 currently support? a. 2. How many strands do the 40F BiDi optics use? a. True b. 550 Gbps b. True b. False 6. What is the current bandwidth per slot for the Nexus 7700? a. Nexus 9336PQ 2.3 Tbps d. 2+1 b. 16+1 c. a.6 Tbps 5. 16 4. 2 c. Nexus 9508 b. 1. 8+1 d. True or false? The F3 modules can be interchanged between the Nexus 7000 and Nexus 7700. True b. you can use FCoE with F2 cards. False 8. True or false? The Nexus N9K-X9736PQ can be used in standalone mode when the Nexus 9000 modular switches use NX-OS. a. 48 d. 12 d. 24 c. Nexus 9396PX d. a. 220 Gbps c. Nexus 9516 c. Nexus 93128TX e.a. How many VDCs can a SUP2 have? a.

40 G. HP b. Which of the following statements are correct? (Choose two. which is the Unified Fabric. It offers high-density 10 G. How many unified ports does the Nexus 5672 have? a. 12 b. Nexus 2248TP-E Foundation Topics Describe the Cisco Nexus Product Family The Cisco Nexus product family is a key component of Cisco unified data center architecture. a. c. Fujitsu e. d. which are Ethernet ports only.9. and Nexus 5548UP has 32 fixed FCoE only. Nexus 2232TM d. and 100 G ports as well. Modern data center designs need the following properties: . IBM c.) a. b. Which models from the following list of switches support FCoE ? (Choose two. True or false? The 40-Gbps ports on the Nexus 6000 can be used as 10 Gbps. whereas the Nexus 5548UP has all ports unified. Nexus 2248PQ c. Which vendors support the B22 FEX? a. The Nexus 5548P has 32 fixed ports. and the Nexus 5548UP has 32 fixed unified ports.) a. highly secure network fabrics. 8 d. Nexus 2232PP b. The Nexus 5548P has 48 fixed ports and Nexus 5548UP has 32 fixed ports and 16 unified ports. Dell c. 16 c. 11. The Nexus 5548P has 32 fixed ports and can have 16 unified ports. Using the Cisco Nexus products. The objective of the Unified Fabric is to build highly available. you can build end-to-end data center designs based on three-tier architecture or based on Spine-Leaf architecture. True b. All the above 13. 24 12. False 10. which are Ethernet only. The Nexus 5548P has 32 fixed ports.

Cisco is helping customers utilize this concept using Intercloud Fabric. . which in turn reduces cooling requirements. Using the concept of a service profile and booting from SAN in the Cisco Unified Computing system will reduce the time to instantiate new servers. using Cisco virtual interface cards. and using technologies such as VM-FEX and Adapter-FEX. Ways to address this problem include using Unified Fabric (converged SAN and LAN). Rather than using. 8 × 10 G links. The concept of hybrid clouds can benefit your organization. That will make it easy to build and tear down test and development environments. for example. must move without the need to change the topology or require an address change. Improved reliability during software updates. which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. Hybrid clouds extend your existing data center to public clouds as needed. you can use 2 × 40 G links and so on. or adding components to the data center environment. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual port channels (vPC). Reducing cabling creates efficient airflow. Hosts. In this chapter you will understand the Cisco Nexus product family. Figure 2-1 shows the different product types available at the time this chapter was written. Power and cooling are key problems in the data center today. configuration changes. Computing resources must be optimized. especially virtual hosts. that should happen with minimum disruption.Effective use of available bandwidth in designs where multiple links exist between the source and destination and one path is active and the other is blocked by spanning tree. These products are explained and discussed in the following sections. or the design is limiting us to use Active/Standby NIC teaming. with consistent network and security policies. Doing capacity planning for all the workloads and identifying candidates to be virtualized help reduce the number of compute nodes in the data center.

Figure 2-2 Nexus 9000 Spine-Leaf Architecture Cisco Nexus 9500 Family The Nexus 9500 family consists of three types of modular chassis. In this case. they provide an application-centric infrastructure. the design follows the standard three-tier architecture. as shown in Figure 2-3: the 4-slot Nexus 9504. When they run in ACI mode and in combination with Cisco Application Policy Infrastructure Controller (APIC). the design follows the Spine-Leaf architecture shown in Figure 2-2.Figure 2-1 Cisco Nexus Product Family Cisco Nexus 9000 Family There are two types of switches in the Nexus 9000 Series: the Nexus 9500 modular switches and the Nexus 9300 fixed configuration switches. they function as a classical Nexus switch. . Therefore. and the 16-slot Nexus 9516. When they run in NX-OS mode and use the enhanced NX-OS software. the 8-slot Nexus 9508. They can run in two modes.

Table 2-2 shows the comparison between the different models of the Nexus 9500 switches.Figure 2-3 Nexus 9500 Chassis Options The Cisco Nexus 9500 Series switches have a modular architecture that consists of the following: Switch chassis Supervisor engine System controllers Fabric modules Line cards Power supplies Fan trays Optics Among these parts. supervisors. system controllers. and power supplies are common components that can be shared among the entire Nexus 9500 product family. . line cards.

Because there is no midplane with a precise alignment mechanism. Midplanes tend to block airflow.Table 2-2 Nexus 9500 Modular Platform Comparison Chassis The Nexus 9500 chassis doesn’t have a midplane. fabric cards and line cards align together. which results in reduced cooling efficiency. . as shown in Figure 2-4.

The supervisor has an external clock source. . pulse per second (PPS). The supervisor engine is responsible for the control plane function. The supervisor modules manage all switch operations. as shown in Figure 2-5.Figure 2-4 Nexus 9500 Chassis Supervisor Engine The Nexus 9500 modular switch supports two redundant half-width supervisor engines. and 16 GB RAM. an RS-232 Serial Port (RJ-45). Each supervisor module consists of a Romely 1.8 GHZ CPU. upgradable to 48 GB RAM and 64 GB SSD storage. 4 core. that is. There are multiple ports for management. and a 10/100/1000-MBps network port (RJ-45). including two USB ports.

and the EPC channel handles the intrasystem data plane protocol communication. The EOBC provides the intrasystem management communication across modules. and fabric modules. They offload chassis management functions from the supervisor modules. . They host two main control and management paths—the Ethernet Out-of-Band Channel (EOBC) and Ethernet Protocol Channel (EPC) —between supervisor engines. The system controller is responsible for managing power supplies and fan trays. line cards.Figure 2-5 Nexus 9500 Supervisor Engine System Controller There is a pair of redundant system controllers at the back of the Nexus 9500 chassis. as shown in Figure 2-6.

The Nexus 9504 has one NFE per fabric module. The NFE is a Broadcom trident two ASIC (T2). both contain multiple network forwarding engines (NFE). All fabric modules are active. as shown in Figure 2-7. and the T2 uses 24 40GE ports to guarantee the line rate.Figure 2-6 Nexus 9500 System Controllers Fabric Modules The platform supports up to six fabric modules. the Nexus 9508 has two. and the Nexus 9516 has four. The packet lookup and forwarding functions involve both the line cards and the fabric modules. . each fabric module consists of multiple NFEs.

When you use the 36-port 40GE line cards. the ACI-ready leaf line cards contain an additional ASIC called Application Leaf Engine (ALE). All line cards have multiple NFEs for packet lookup and forwarding. Figure 2-8 . Line Cards It is important to understand that there are multiple types of Nexus 9500 line cards.Figure 2-7 Nexus 9500 Fabric Module When you use the 1/10G + 4 40GE line cards. ALE performs the ACI leaf function when the Nexus 9500 is used as a leaf node when deployed in the ACI mode. There are also line cards that can be used in both modes: standalone mode using NX-OS and ACI mode. you need a minimum of three fabric modules to achieve line-rate speeds. There are cards that can be used in standalone mode when used with enhanced NX-OS. so to install them you must remove the fan trays. Note The fabric modules are behind the fan trays. in a classical design. There are line cards that can be used in application-centric infrastructure mode (ACI) only. In addition. The ACI-only line cards contain an additional ASIC called application spine engine (ASE). the ASE performs ACI spine functions when the Nexus 9500 is used as a spine in the ACI mode. you will need six fabric modules to achieve line-rate speeds.

and offloading BFD protocol handling from the supervisors. such as programming the hardware table resources. Figure 2-8 Nexus 9500 Line Cards Positioning Nexus 9500 line cards are also equipped with dual-core CPUs. . This CPU is used to speed up some control functions. Table 2-3 shows the different types of cards available for the Nexus 9500 Series switches and their specification. collecting and sending line card counters.shows the high-level positioning of the different cards available for the Nexus 9500 Series network switches. statistics.

Table 2-3 Nexus 9500 Modular Platform Line Card Comparison .

Power Supplies The Nexus 9500 platform supports up to 10 power supplies. however. as shown in Figure 2-10. . The 3000W AC power supply shown in Figure 2-9 is 80 Plus platinum rated and provides more than 90% efficiency. and optics. Fan Trays The Nexus 9500 consists of three fan trays. they are accessible from the front and are hot swappable. Figure 2-9 Nexus 9500 AC Power Supply Note The additional four power supply slots are not needed with existing line cards shown in Table 2-3. they support N+1 and N+N (grid redundancy). Dynamic speed is driven by temperature sensors and front-to-back air flow with N+1 redundancy per tray. each tray consists of three fans. they offer head room for future port densities. bandwidth. Two 3000W AC power supplies can operate a fully loaded chassis. Fan trays are installed after the fabric module installation.

you must replace the cabling. . the first is the cost barrier of the 40 G port itself. 10 G operates on what is referred to as two strands of fiber. when you migrate from 10 G to 40 G. however. the other two fan trays will speed up to compensate for the loss of cooling. and second. and they enable customers to take the current 10 G cabling plant and use it for 40 G connectivity without replacing the cabling. access to aggregation and SpineLeaf design at the spine layer will move to 40 G.Figure 2-10 Nexus 9500 Fan Tray Note To service the fabric modules. Figure 2. Cisco QSFP Bi-Di Technology for 40 Gbps Migration As data center designs evolve from 1 G to 10 G at the access layer. If one of the fan trays is removed. The 40 G adoption is slow today because of multiple barriers. 40 G operates on eight strands of fiber. Bi-Di optics are standard based. the fan tray must be removed first.11 shows the difference between the QSFP SR and the QSFP Bi-Di.

Table 2-4 Nexus 9500 Fixed-Platform Comparison .Figure 2-11 Cisco Bi-Di Optics Cisco Nexus 9300 Family The previous section discussed the Nexus 9500 Series modular switches. The Nexus 9300 is designed for top-of-rack (ToR) and mid-of-row (MoR) deployments. Table 2-4 summarizes the different specifications of each chassis. This section discusses details of the Cisco Nexus 9300 fixed configuration switches. There are currently four chassis-based models in the Nexus 9300 platform.

as shown in Table 2-5. and redundant fabric cards for high availability. The Nexus 9336PQ can operate in ACI mode only and act as as spine node. The Nexus 7000 and the Nexus 7700 switches offer a comprehensive set of features for the data center network. the Nexus 9396PX. The Nexus 7000 hardware architecture offers redundant supervisor engines. scalable way with a switching capacity beyond 15 Tbps for the Nexus 7000 Series and 83 Tbps for the Nexus 7700 Series. Figure 2-12 shows the different models available today from the Nexus 9300 switches. The uplink module is the same for all switches. There are multiple chassis options from the Nexus 7000 and Nexus 7700 product family. you can utilize NX-OS APIs to manage it. As shown in Table 2-4. If used with the Cisco Nexus 93128TX. . and 93128TX are provided on an uplink module that can be serviced and replaced by the user. It is very reliable by supporting in-service software upgrades (ISSU) with zero packet loss. redundant power supplies. 8 out of the 12 × 40 Gbps QSFP+ ports will be available.The 40-Gbps ports for Cisco Nexus 9396PX. 9396TX. The modular design of the Nexus 7000 and Nexus 7700 enables them to offer different types of network interfaces—1 G. Figure 2-12 Cisco Nexus 9300 Switches Cisco Nexus 7000 and Nexus 7700 Product Family The Nexus 7000 Series switches form the core data center networking fabric. 10 G. and 93128TX can operate in NX-OS mode and in ACI mode (acting as a leaf node). It is easy to manage using the command line or through data center network manager. 40 G and 100 G—in a high density. 9396TX.

power. the supervisor modules are not interchangeable between the Nexus 7000 and Nexus 7700 chassis. Note There are dedicated slots for the supervisor engines in all Nexus chassis. which provide seamless and stateful switchover in the event of a supervisor module failure. the Nexus 7000 supports up to five fabric modules and Nexus 7700 supports up to six fabric modules. State and configuration are in sync between the two supervisors. . this has been applied across the physical. With the current shipping fabric cards and current I/O modules the switches support N+1 redundancy. Switch fabric redundancy: The fabric modules support load sharing. environmental.Table 2-5 Nexus 7000 and Nexus 7700 Modular Platform Comparison The Nexus 7000 and Nexus 7700 product family is modular in design with great focus on the redundancy of all the critical components. and system software aspects of the chassis. You can have multiple fabric modules. Supervisor module redundancy: The chassis can have up to two supervisor modules operating in active and standby modes.

otherwise commonly called N+1 redundancy.) PSU redundancy. including multiple instances of a particular service that provide effective fault isolation between services and that make each service individually monitored and managed. Most of the services allow stateful restart. The transparent front door allows observation of cabling and module indicator and status lights. but not both at the same time. Each PSU has two supply inputs. where the total power available is the sum of all power supplies minus 1. fan. PSU Redundancy is the default. Each service running is an individual memory protected process. Cisco NX-OS In-Service Software Upgrade (ISSU): With the NX-OS modular architecture you can support ISSU. supervisor. which helps to address specific issues and minimize the system overall impact. which is the combination of PSU redundancy and grid redundancy. and I/O module status. Modular software upgrades: The NX-OS software is designed with a modular architecture. Cooling subsystem: The system has redundant fan trays. fabric. Full redundancy. Nexus 7000 and Nexus 7700 have multiple models with different specifications. These LEDs report the power supply. and any failure of one of the fans will not result in loss of service. allowing it to be connected to separate isolated A/C supplies. which enables you to do a complete system upgrade without disrupting the data plane and achieve zero packet loss. The LEDs alert operators to the need to conduct further investigation. and Table 2-5 shows their specifications. Power subsystem availability features: The system will support the following power redundancy options: Combined mode. . where the total power available is the sum of the outputs of all the power supplies installed. where the total power available is the sum of the power from only one input on each PSU. Cable management: The cable management cover and optional front module doors provide protection from accidental interference with both the cabling and modules that are installed in the system. Figure 2-13 shows these switching models. System-level LEDs: A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. this will be the same as grid mode but will assure customers that they are protected for either a PSU or a grid failure. Grid redundancy.Note The fabric modules between the Nexus 7000 and Nexus 7700 are not interchangeable. allowing maximum flexibility. In most cases. In the event of an A/C supply failure. enabling a service experiencing a failure to be restarted and resume operation without affecting other services. There are multiple fans on the fan trays. 50% of power is secure. (This is not redundant. Cable management: The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch.

so the calculation is [(550 Gbps/slot) × (8 payload slots) = 4400 Gbps.6 Tbps. if the bandwidth/slot is 2. Therefore.2 Tbps system bandwidth].6 Tbps. the Nexus 7010 has 8 line cards slots.8 Tbps. For example. the local I/O module fabrics are connected back-to-back to form a two-stage crossbar that interconnects the I/O modules and the supervisor engines. It has one fan tray and four 3KW power supplies. It is worth mentioning that the Nexus 7004 doesn’t have any fabric modules.9 Tbps system bandwidth]. Cisco Nexus 7004 Series Switch Chassis The Cisco Nexus 7004 switch chassis shown in Figure 2-14 has two supervisor module slots and two I/O modules.Figure 2-13 Cisco Nexus 7000 and Nexus 7700 Product Family Note When you go through Table 2-5 you might wonder how the switching capacity was calculated.6Tbps/slot) × (16 payload slots) = 41. we’ll use the Nexus 7718 as an example. and the 18-slot chassis will have 18.3 Tbps. the calculation is [(2. (4400Gbps) × (2 for full duplex operation) = 83. the supervisor slots are taken into consideration because each supervisor slot has a single channel of connectivity to each fabric card.8 Tbps system bandwidth]. Although the bandwidth/slot shown in the table give us 1.7 Tbps. Using this type of calculation. (4950 Gbps) × (2 for full duplex operation) = 9900 Gbps = 9. . for example. (4400 Gbps) × (2 for full duplex operation) = 8800 Gbps = 8. In some bandwidth calculations for the chassis. So. a 9-slot chassis will have 8. For the Nexus 7700 Series. so that makes a total of 5 crossbar channels. the Cisco Nexus 7010 bandwidth is being calculated like so: [(550 Gbps/slot) × (9 payload slots) = 4950 Gbps. the complete specification for the Nexus 7004 is shown in Table 2-5. the Nexus 7700 is capable of more than double its capacity. and that can be achieved by upgrading the fabric cards for the chassis.

The fan redundancy is two parts: individual fans in the fan tray and fan tray controllers. and the system will continue to function. there is a solution for hot aisle–cold aisle design with the 9-slot chassis. reducing the probability of a total fan tray failure. So. If an individual fan fails. The fan controllers are fully redundant. The fans in the fan trays are individually wired to isolate any failure and are fully redundant so that other fans in the tray can take over when one or more fans fail. other fans automatically run at higher speeds. the complete specification for the Nexus 7009 is shown in Table 2-5. giving you time to get a spare and replace the fan tray. Figure 2-15 Cisco Nexus 7009 Switch Although the Nexus 7009 has side-to-side airflow. . The Nexus 7009 switch has a single fan tray.Figure 2-14 Cisco Nexus 7004 Switch Cisco Nexus 7009 Series Switch Chassis The Cisco Nexus 7009 switch chassis shown in Figure 2-15 has two supervisor module slots and seven I/O module slots. there is redundancy within the cooling system due to the number of fans.

The fans on the trays are N+1 redundant. and the fan tray will need replacing to restore N+1. If either of the fan trays fails. There are two fabric fan modules in the 7010. the complete specification is shown in Table 2-5. the box does not overheat. the system will keep running without overheating as long as the operating environment is within specifications. For the 7018 system there are two system fan trays. and they will continue to cool the system. There are multiple fans on the 7010 system fan trays. The . both are required for normal operation. Cisco Nexus 7018 Series Switch Chassis The Cisco Nexus 7018 switch chassis shown in Figure 2-17 has two supervisor engine slots and 16 I/O module slots. If either fabric fan fails. restoring the N+1 redundancy. the complete specification is shown in Table 2-5. the remaining fabric fan will continue to cool the fabric modules until the fan is replaced. Each row cools three module slots (I/O and supervisor).Cisco Nexus 7010 Series Switch Chassis The Cisco Nexus 7010 switch chassis shown in Figure 2-16 has two supervisor engine slots and eight I/O modules slots. Fan tray replacement restores N+1 redundancy. Figure 2-16 Cisco Nexus 7010 Switch There are two system fan trays in the 7010. Both fan trays must be installed at all times (apart from maintenance). until the fan tray is replaced. Any single fan failure will not result in degradation in service. one for the upper half and one for the lower half of the system. The failure of a single fan will result in the other fans increasing speed to compensate. The system should not be operated with either the system fan tray or the fabric fan components removed apart from the hot swap period of up to 3 minutes. Each fan tray contains 12 fans that are in three rows or four.

the fabric fans operate only when installed in the upper fan tray position. . The fabric fans are at the rear of the system fan tray. There are 192 10-G ports. the remaining fan will cool the fabric modules. true front-to-back airflow. and redundant fabric cards.fan tray should be replaced to restore the N+1 fan resilience. Failure of a single fabric fan will not result in a failure. 48 100-G ports. Figure 2-17 Cisco Nexus 7018 Switch Note Although both system fan trays are identical. the complete specification is shown in Table 2-5. but when inserted in the lower position the fabric fans are inoperative. The two fans are in series so that the air passes through both to leave the switch and cool the fabric modules. Fan trays are fully interchangeable. 96 40G ports. Cisco Nexus 7706 Series Switch Chassis The Cisco Nexus 7706 switch chassis shown in Figure 2-18 has two supervisor module slots and four I/O module slots. redundant fans. Integrated into the system fan tray are the fabric fans.

the complete specification is shown in Table 2-5. Figure 2-19 Cisco Nexus 7710 Switch Cisco Nexus 7718 Series Switch Chassis The Cisco Nexus 7718 switch chassis shown in Figure 2-20 has two supervisor engine slots and 16 I/O module slots. true front-to-back airflow. redundant fans. There are 384 1-G ports. 96 100-G ports. and redundant fabric cards. true front-to-back airflow. the complete specification is shown in Table 2-5.Figure 2-18 Cisco Nexus 7706 Switch Cisco Nexus 7710 Series Switch Chassis The Cisco Nexus 7710 switch chassis shown in Figure 2-19 has two supervisor engine slots and eight I/O module slots. . 192 100-G ports. There are 768 10-G ports. 192 40G ports. redundant fans. 384 40-G ports. and redundant fabric cards.

. Redundancy is achieved by having both supervisor slots populated.Figure 2-20 Cisco Nexus 7718 Switch Cisco Nexus 7000 and Nexus 7700 Supervisor Module Nexus 7000 and Nexus 7700 Series switches have two slots that are available for supervisor modules. Table 2-6 describes different options and specifications of the supervisor modules.

There is dual redundant Ethernet out-of-band channels (EOBC) to each I/O and fabric modules to provide resiliency for the communication between control and line card processors. An embedded packet analyzer reduces the need for a dedicated packet analyzer to provide faster resolution for control plane problems.Table 2-6 Nexus 7000 and Nexus 7700 Supervisor Modules Comparison Cisco Nexus 7000 Series Supervisor 1 Module The Cisco Nexus 7000 supervisor module 1 shown in Figure 2-21 is the first-generation supervisor module for the Nexus 7000. Figure 2-21 Cisco Nexus 7000 Supervisor Module 1 The Connectivity Management Processor (CMP) provides an independent remote system management and monitoring capability. dual supervisor engines run in active-standby mode with stateful switch over (SSO) and configuration synchronization between both supervisors. It removes the need for separate terminal server devices for OOB . The USB ports allow access to USB flash memory devices to software image loading and recovery. the operating system runs on a dedicated dualcore Xeon processor. As shown in Table 2-6.

Supervisor module 2E is the enhanced version of supervisor module 2 with two quad-core CPUs and 32 G of memory. and it offers complete visibility during the entire boot process. Cisco Nexus 7000 Series Supervisor 2 Module The Cisco Nexus 7000 supervisor module 2 shown in Figure 2-22 is the next-generation supervisor module. Sup2 scale is the same as Sup1. it will support 4+1 VDCs.management. As shown in Table 2-6. it has a quad-core CPU and 12 G of memory compared to supervisor module 1. larger memory. The Power-on Self Test (POST) and Cisco Generic Online Diagnostics (GOLD) provide proactive health monitoring both at startup and during system operation. Figure 2-22 Cisco Nexus 7000 Supervisor 2 Module Supervisor module 2 and supervisor module 2E have more powerful CPUs. such as enhanced user experience. which will enable you to carve out CPU for higher priority VDCs. It has the capability to initiate a complete system restart and shutdown. such as higher VDC and FEX. The Cisco Nexus 7000 supervisor module 1 incorporates highly advanced analysis and debugging capabilities. Sup2E supports 8+1 VDCs. This is useful in detecting hardware faults. and a higher control plane scale. which has single-core CPU and 8 G of memory. If a fault is detected. corrective action can be taken to mitigate the fault and reduce the risk of a network outage. Both supervisor module 2 and supervisor module 2E support FCoE. faster boot and switchover times. and nextgeneration ASICs that together will result in improved performance. Note . Administrators must authenticate to get access to the system through CMP. and it also allows access to supervisor logs and full console control on the supervisor engine. when choosing the proper line card. they support CPU Shares.

This will be a nondisruptive migration. Figure 2-23 Cisco Nexus 7000 Fabric Module In the case of Nexus 7000.You cannot mix Sup1 and Sup2 in the same chassis.32 Tbps per slot. and the architecture supports lossless fabric failover. Figure 2-24 . by using Fabric Module 2. In Nexus 7700. the remaining fabric modules will load balance the remaining bandwidth to all the remaining line cards. adding fabric modules increases the available bandwidth per I/O slot because all fabric modules are connected to all slots. Note that this will be a disruptive migration requiring removal of supervisor 1. VOQ and credit-based arbitration allow fair sharing of resources when a speed mismatch exists to avoid head-of-line (HOL) blocking. In case of a failure or removal of one of the fabric modules. which is 220 Gbps per slot. you can deliver a maximum of 550 Gbps per slot. and the Nexus 7700 has six. when using Fabric Module 1. Sup2 and Sup2E can be mixed for migration only. you can deliver a maximum of 1. and stage 2 is implemented on the fabric module. Nexus 7000 supports virtual output queuing (VOQ) and credit-based arbitration to the crossbar to increase performance. which is 46 Gbps. The Nexus 7000 implements a three-stage crossbar switch. Cisco Nexus 7000 and Nexus 7700 Fabric Modules The Nexus 7000 and Nexus 7700 fabric modules provide interconnection between line cards and provide fabric channels to the supervisor modules. which is 110 Gbps. All fabric modules support load sharing. Fabric stage 1 and fabric stage 3 are implemented on the line card module. you can deliver a maximum of 230 Gbps per slot using five fabric modules. Figure 2-23 shows the different fabric modules for the Nexus 7000 and Nexus 7700 products. The Nexus 7000 has five fabric modules. When using Fabric Module 2.

Figure 2-24 Cisco Nexus 7700 Crossbar Fabric There are two connections from each fabric module to the supervisor module. . It provides an aggregate bandwidth of 1. There are four connections from each fabric module to the line cards. These connections are also 55 Gbps. there are 12 connections from the switch fabric to the supervisor module.shows how these stages are connected to each other. Each supervisor module has a single 23 Gbps trace to each fabric module. the total number of connections from the fabric cards to each line card is 24. providing an aggregate bandwidth of 275 Gbps. Table 2-7 describes each license and the features it enables. Cisco Nexus 7000 and Nexus 7700 Licensing Different types of licenses are required for the Nexus 7000 and the Nexus 7700. Note Cisco Nexus 7000 fabric 1 modules provide two 23 Gbps traces to each fabric module. When all the fabric modules are installed.32 Tbps per slot. providing 230 Gbps of switching capacity per I/O slot for a fully loaded chassis. When populating the chassis with six fabric modules. and each one of these connections is 55 Gbps.

.

The host ID is also referred to as the device serial number. as shown in Example 2-1. There is also an evaluation license. each of which is a separate license. disable the use of that feature. which is a temporary license. Note To manage the Nexus 7000 two types of licenses are needed: the DCNM LAN and DCNM SAN. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). If at the end of the 120-day grace period the device does not have a valid license key for the feature.Table 2-7 Nexus 7000 and Nexus 7700 Software Licensing Features It is worth mentioning that Nexus switches have a grace period. You then have 120 days to install the appropriate license keys. To get the license file you must obtain the serial number for your device by entering the show license host-id command. which is the amount of time the features in a license package can continue functioning without a license. Example 2-1 NX-OS Command to Obtain the Host ID . or disable the grace period feature. Enabling a licensed feature that does not have a license key starts a counter on the grace period. the Cisco NX-OS software automatically disables the feature and removes the configuration from the device.

Click here to view code image switch# show license host-id License hostid: VDH=FOX064317SQ Tip Use the entire ID that appears after the equal sign (=). Each has different performance metrics and features. After executing the copy licenses command from the default VDC. In this example. Perform the installation by using the install license command on the active supervisor module from the device console. as shown in Example 2-2. the usb1: device. . save your license file to one of four locations—the bootflash: directory. or the usb2: device. There are two types of I/O modules: the M-I/O modules and the F-I/O modules.done You can check what licenses are already installed by issuing the command shown in Example 2-3. the host ID is FOX064317SQ.lic Installing license . Example 2-2 Command Used to Install the License File Click here to view code image switch# install license bootflash:license_file.. the slot0: device. Example 2-3 Command Used to Obtain Installed Licenses Click here to view code image switch# show license usage Feature Ins Lic Status Expiry Date Comments Count -------------------------------------------------------------------------------LAN_ENTERPRISE_SERVICES_PKG Yes In use Never -------------------------------------------------------------------------------- Cisco Nexus 7000 and Nexus 7700 Line Cards Nexus 7000 and Nexus 7700 support various types of I/O modules. Table 2-8 shows the comparison between M-Series modules.

you see that the M1 32 port I/O module is the only card that supports .Table 2-8 Nexus 7000 and Nexus 7700 M-Series Modules Comparison Note From the table.

Table 2-9 shows the comparison between F-Series modules. .LISP.

Variable Fan Speed: Automatically adjusts to changing thermal characteristics. for the right sizing of power supplies. It is a single 20 Ampere (A) AC input power supply. The switches offer different types of redundancy modes. and environmental cooling. UPSs.0-KW AC power supply shown in Figure 2-25 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. and modules enabling accurate power consumption monitoring. Real-time power draw: Shows real-time power consumption. They offer visibility into the actual power consumption of the total system. lower fan speeds use lower power. Power Redundancy: Multiple system-level options for maximum data center availability. Cisco Nexus 7000 and Nexus 7700 Series 3. no downtime in replacing power supplies. reducing power wasted as heat and reducing associated data center cooling requirements. Variable-speed fans adjust dynamically to lower power consumption and optimize system cooling for true load. Temperature measurement: Prevents damage due to overheating (every ASIC on the board has a temperature sensor). When connecting to high line nominal voltage (220 VAC) it will produce a power output of 3000W.0-KW AC Power Supply Module The 3.Table 2-9 Nexus 7000 and Nexus 7700 F-Series Modules Comparison Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options The Nexus 7000 and Nexus 7700 use power supplies with +90% power supply efficiency. Fully Hot Swappable: Continuous system operations. . Internal fault monitoring: Detects component defect and shuts down unit. connecting to low line nominal voltage (110 VAC) will produce a power output of 1400W.

it is not officially supported by Cisco.0-kW DC power supply has two isolated input stages.0-KW DC power supply shown in Figure 2-26 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis.0-KW DC Power Supply Module The 3.0-KW AC Power Supply Note Although the Nexus 7700 chassis and the Nexus 7004 use a common power supply architecture.Figure 2-25 Cisco Nexus 7000 3. the system will log an error complaining about the wrong power supply in the system. Cisco Nexus 7000 and Nexus 7700 Series 3. different PIDs are used on each platform. The unit will deliver 1551W when only one input is active and 3051W when two inputs are active. Each stage uses a -48V DC connection. The Nexus 3. although technically this might work. each delivering up to 1500W of output power. Therefore. . if you interchange the power supplies.

7010. . and 7018.Figure 2-26 Cisco Nexus 7000 3. They allow mixed-mode AC and DC operation enabling migration without disruption and providing support for dual environments with unreliable AC power.5-KW power supplies shown in Figure 2-27 are common across Nexus 7009.0-KW and 7. with battery backup capability.0-KW and 7.5-KW AC Power Supply Module The 6.0-KW DC Power Supply Cisco Nexus 7000 Series 6.

It supports the same operational characteristics as the AC units: redundancy modes (N+1 and N+N). The power supply can be used in combination with AC units or as an all DC setup. . 7010. The 6-KW has four isolated input stages.0-KW DC Power Supply Module The 6-KW DC power supply shown in Figure 2-28 is common to the 7009.0-KW and 7.5-KW Power Supply Table 2-10 shows the specifications of both power supplies with different numbers of inputs and input types.5-KW Power Supply Specifications Cisco Nexus 7000 Series 6. Table 2-10 Nexus 7000 and Nexus 7700 6.0-KW and 7. and 7018 systems.Figure 2-27 Cisco Nexus 7000 6. each delivering up to 1500W of power (6000W total on full load) with peak efficiency of 91% (high for a DC power supply).

in most cases this will be the same as grid redundancy. You can lose one power supply or one grid. which is the combination of PSU redundancy and grid redundancy.Real-time power—actual power levels Single input mode (3000W) Online insertion and removal Integrated Lock and On/Off switch (for easy removal) Figure 2-28 Cisco Nexus 7000 6. where the total power available is the sum of all power supplies – 1.0-KW DC Power Supply Multiple power redundancy modes can be configured by the user: Combined mode. allowing them to be connected to separate isolated A/C supplies. where the total power available is the sum of the power from only one input on each PSU. An example of each mode is shown in Figure 2-29. otherwise commonly called N+1 redundancy. Full redundancy provides the highest level of redundancy. so it is recommended.) PSU redundancy. In the event of an A/C supply failure. However. Full redundancy. where the total power available is the sum of the outputs of all the power supplies installed. Grid redundancy. it is always better to choose the mode of power supply operation based on the requirements and needs. 50% of power is secure. Each PSU has two supply inputs. . (This is not redundant.

The power calculator can be located at http://www. Table 2-11 describes the difference between the Nexus 6000 models. low-latency 1G/10G/40G/FCoE Ethernet port. It is worth mentioning that the power calculator cannot be taken as a final power recommendation. Cisco Nexus 6000 Product Family The Cisco Nexus 6000 product family is a high-performance. When using the unified module on the Nexus 6004 you can get native FC as well. and Nexus 6004X. . Nexus 6001P. high-density.cisco. Cisco has made a power calculator that can be used as a starting point.Figure 2-29 Nexus 6. Nexus 6004EF.0-KW Power Redundancy Modes To help with planning for the power requirements. Multiple models are available: Nexus 6001T.com/go/powercalculator.

3 microseconds. Each 40 GE port can be split into 4 × 10GE ports. . 6a. It offers two choices of air flow. Each 40 GE port can be split into 4 × 10 GE ports. The split can be done by using a QSFP+ break-out cable (Twinax or Fiber). The Nexus 6001T is a fixed configuration 1RU Ethernet switch. Cisco Nexus 6001P and Nexus 6001T Switches and Features The Nexus 6001P is a fixed configuration 1RU Ethernet switch. front-to-back (port side exhaust) and back-to-front (port side intake). or 7 cables are used. Nexus 6001T supports FCOE on the RJ-45 ports for up to 30M when CAT 6. It offers two choices of airflow. It provides integrated Layer 2 and Layer 3 at wire speed.Table 2-11 Nexus 6000 Product Specification Note At the time this chapter was written. port-to-port latency is around 3. front-to-back (port side exhaust) and back-to-front (port side intake). It provides 48 1G/10G SFP+ ports and four 40-G Ethernet QSFP+ ports. Both switches are shown in Figure 2-30. It provides integrated Layer 2 and Layer 3 at wire speed. this can be done by using a QSFP+ break-out cable (Twinax or Fiber). It provides 48 1G/10G BASE-T ports and four 40 G Ethernet QSFP+ ports. Port-to-port latency is around 1 microsecond. the Nexus 6004X was renamed Nexus 5696Q.

The main difference is that the Nexus 6004X supports Virtual Extensible LAN (VXLAN). The Nexus 6004 offers integrated Layer 2 and Layer 3 at wire rate. There are two choices of LEMs: 12-port 10 G/40 G QSFP and 20-port unified ports offering either 1 G/10 G SFP+ or 2/4/8 G FC. Port-to-port latency is approximately 1 microsecond for any packet size.Figure 2-30 Cisco Nexus 6001 Switches Cisco Nexus 6004 Switches Features The Nexus 6004 is shown in Figure 2-31. Currently. the Nexus 6004EF and the Nexus 6004X. The Nexus 6004 is 4RU 10 G/40 G Ethernet switch. it offers eight line card expansion module (LEM) slots. It offers up to 96 40 G QSFP ports and up to 384 10 G SFP+ ports. . there are two models.

enabling a licensed feature that does not have a license key to start a counter on the grace period. disable the use of that feature. You then have 120 days to install the appropriate license keys. which is the amount of time the features in a license package can continue functioning without a license. The SFP+ can support both 10 G and 1 GE transceivers. Cisco Nexus 6000 Switches Licensing Options Different types of licenses are required for the Nexus 6000.Figure 2-31 Nexus 6004 Switch Note The Nexus 6004 can support 1 G ports using a QSA converter (QSFP to SFP adapter) on a QSFP interface. This converts each QSFP+ port to one SFP+ port. It is worth mentioning that Nexus switches have a grace period. Table 2-12 describes each license and the features it enables. or disable the grace .

period feature. . If at the end of the 120-day grace period the device does not have a valid license key for the feature. the Cisco NX-OS software automatically disables the feature and removes the configuration from the device.

Table 2-13 shows the comparison between different models. which is a temporary license. Cisco Nexus 5500 and Nexus 5600 Product Family The Cisco Nexus 5000 product family is a Layer 2 and Layer 3 1G/10G Ethernet with unified ports. it includes Cisco Nexus 5500 and Cisco Nexus 5600 platforms.Table 2-12 Nexus 6000 Licensing Options There is also an evaluation license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). Each of them is a separate license. Note To manage the Nexus 6000 two types of licenses are needed: the DCNM LAN and DCNM SAN. .

.

.com for more information. Figure 2-32 Nexus 5548P and Nexus 5548UP Switches Cisco Nexus 5596UP and 5596T Switches Features The Nexus 5596T shown in Figure 2-33 is a 2RU 1/10 Gbps Ethernet. Figure 2-32 shows the layout for them. Refer to www. the 10 G BASE-T ports support FCoE up to 30m with Category 6a and Category 7 cables. so they are not covered in this book. native Fibre Channel and FCoE switch. The switch has three expansion modules.cisco. Cisco Nexus 5548P and 5548UP Switches Features The Nexus 5548P and the Nexus 5548UP are 1/10-Gbps switches with one expansion module.Table 2-13 Nexus 5500 and Nexus 5600 Product Specification Note The Nexus 5010 and 5020 switches are considered end-of-sale. It has 32 fixed ports of 10 G BASE-T and 16 fixed ports of SFP+. The Nexus 5548UP has the 32 ports unified ports. The Nexus 5548P has all the 32 ports 1/10 Gbps Ethernet only. meaning that the ports can run 1/10 Gbps or they can run 8/4/2/1 Gbps native FC or mix between both. the switch supports unified ports on all SFP+ ports.

and the Nexus 5596UP/5596T has three. and 8 ports 8/4/2/1-Gbps Native Fibre Channel ports using . Figure 2-34 Cisco N55-M16P Expansion Module The Cisco N55-M8P8FP module shown in Figure 2-35 is a 16-port module. The Cisco N55-M16P module shown in Figure 2-34 has 16 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces.Figure 2-33 Nexus 5596UP and Nexus 5596T Switches Cisco Nexus 5500 Products Expansion Modules You can have additional Ethernet and FCoE ports or native Fibre Channel ports with the Nexus 5500 products by adding expansion modules. It has 8 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces. The Nexus 5548P/5548UP has one expansion module.

It has 16 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces or up to 16 ports 8/4/2/1-Gbps Native Fibre Channel ports using SFP+ and SFP interfaces. . Figure 2-36 Cisco N55-M16UP Expansion Module The Cisco N55-M4Q shown in Figure 2-37 is a 4-port 40-Gbps Ethernet module. Figure 2-35 Cisco N55-M8P8FP Expansion Module The Cisco N55-M16UP shown in Figure 2-36 is a 16 unified ports module.SFP+ and SFP interfaces. Each QSFP 40-GE port can only work in 4×10G mode and supports DCB and FCoE.

. it supports FCoE up to 30m on category 6a and category 7 cables. which is shared among all 48 ports. Figure 2-38 Cisco N55-M12T Expansion Module The Cisco 5500 Layer 3 daughter card shown in Figure 2-39 is used to enable Layer 3 on the Nexus 5548P and 5548UP. or it is field upgradable as a spare.Figure 2-37 Cisco N55-M4Q Expansion Module The Cisco N55-M12T shown in Figure 2-38 is a 12-port 10-Gbps BASE-T module. This module is supported only in the Nexus 5596T. which can be ordered with the system. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]).

Figure 2-40 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card Upgrade Procedure To enable Layer 3 on the Nexus 5596P and 5596UP. There is no need to remove the switch from the rack. and follow the steps as shown in Figure 2-40. Figure 241 shows the Layer 3 expansion module. . power off the switch. you must replace the Layer 2 I/O module. which is shared among all 48 ports. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]). you can have only one Layer 3 expansion module per Nexus 5596P and 5596UP. which can be ordered with the system or as a spare. you must have a Layer 3 expansion module.Figure 2-39 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card To install the Layer 3 module. currently.

1-microsecond port-toport latency with all frame sizes. For example. Nexus 5672UP and Nexus 56128P. The Nexus 5600 has two models.Figure 2-41 Cisco Nexus 5596P and 5596UP Layer 3 Daughter Card Upgrade Procedure Enabling Layer 3 affects the scalability limits for the Nexus 5500. cut-through switching for 10/40 Gbps. and 25 MB buffer per port ASIC. Table 2-14 shows the summary of the features. Verify the scalability limits based on the NX-OS you will be using before creating a design. Cisco Nexus 5600 Product Family The Nexus 5600 is the new generation of the Nexus 5000 switches. true 40-Gbps flow. Both models bring integrated L2 and L3. . the maximum FEXs per Cisco Nexus 5500 Series switches is 24 with Layer 2 only. Enabling Layer 3 makes the supported number per Nexus 5500 to be 16. 40-Gbps FCoE.

The switch has two redundant power supplies and three fan modules. meaning that the ports can run 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options.Table 2-14 Nexus 5600 Product Switches Feature Cisco Nexus 5672UP Switch Features The Nexus 5672UP shown in Figure 2-42 has 48 fixed 1/10-Gbps SFP+ ports. . The switch supports both port-side exhaust and port-side intake. True 40 Gbps ports use QSFP+ for Ethernet/FCOE. of which 16 ports are unified.

The Cisco Nexus 56128P has two expansion modules that support 24 unified ports. The 48 fixed SFP+ ports and four 40 Gbps QSFP+ ports support FCOE as well. .Figure 2-42 Cisco Nexus 5672UP Cisco Nexus 56128P Switch Features The Cisco Nexus 56128P shown in Figure 2-43 is a 2RU switch. It has 48 fixed 1/10-Gbps Ethernet SFP+ ports and four 40-Gbps QSFP+ ports.

Figure 2-43 Cisco Nexus 56128P The 24 unified ports provide 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. Cisco Nexus 5600 Expansion Modules Expansion modules enable the Cisco Nexus 5600 switches to support unified ports with native Fibre Channel connectivity. plus two 40 Gbps ports. or disable the grace period feature. and a management and console interface on the fan side of the switch. Table 2-15 describes each license and the features it enables. It has four N+N redundant. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. The Nexus 56128P currently supports one expansion module. Enabling a licensed feature that does not have a license key starts a counter on the grace period. the N56M24UP2Q expansion module. hot-swappable independent fans. four N+1 redundant. which is the amount of time the features in a license package can continue functioning without a license. That module provides 24 ports 10 G Ethernet/FCoE or 2/4/8 G Fibre Channel and 2 ports 40 Gigabit QSFP+ Ethernet/FCoE ports. There is also an evaluation license. hot-swappable power supplies. You then have 120 days to install the appropriate license keys. If at the end of the 120-day grace period the device does not have a valid license key for the feature. disable the use of that feature. which is a temporary license. Nexus switches have a grace period. Figure 2-44 Cisco Nexus 56128P Unified Port Expansion Module Cisco Nexus 5500 and Nexus 5600 Licensing Options Different types of licenses are required for the Nexus 5500 and Nexus 5600. . as shown in Figure 2-44.

.

Nexus 3064T.Table 2-15 Nexus 5500 Product Licensing Note To manage the Nexus 5500 and Nexus 5600. Each is a separate license. This product family consists of multiple switch models: the Nexus 3000 Series. All of them offer wire rate Layer 2 and Layer 3 on all ports. and 3048. they offer high performance and high density at ultra low latency. The Nexus 3000 switches are 1 RU server access switches. . two types of licenses are needed: the DCNM LAN and DCNM SAN. Nexus 3100 Series. and Nexus 3500 switches. These are compact 1RU 1/10/40-Gbps Ethernet switches. Nexus 3064-32T. The Nexus 3000 Series consists of five models: Nexus 3064X. Table 2-16 gives a comparison and specifications of these different models. and ultra low latency. Nexus 3016Q. Cisco Nexus 3000 Product Family The Nexus 3000 switches are part of the Cisco Unified Fabric architecture. they are shown in Figure 2-45.

Figure 2-45 Cisco Nexus 3000 Switches .

.

Figure 2-46 Cisco Nexus 3100 Switches . Nexus 3164Q. and Nexus 3172TQ. Shown in Figure 2-46. all of them offer wire rate Layer 2 and Layer 3 on all ports and ultra low latency. These are compact 1RU 1/10/40-Gbps Ethernet switches. Table 2-17 gives a comparison and specifications of different models. Nexus 3172Q.Table 2-16 Nexus 3000 Product Model Comparison The Nexus 3100 switches have four models: Nexus 3132Q.

They are shown in Figure 2-47. . These are compact 1RU 1/10-Gbps Ethernet switches. Both offer wire rate Layer 2 and Layer 3 on all ports and ultra low latency. Nexus 3524 and Nexus 3548. Table 2-18 gives a comparison and specifications of the different models.Table 2-17 Nexus 3100 Product Model Comparison The Nexus 3500 switches have two models.

Figure 2-47 Cisco Nexus 3500 Switches Table 2-18 Nexus 3500 Product Model Comparison Note The Nexus 3524 and the Nexus 3548 support a forwarding mode called warp mode. It is worth mentioning that the Nexus 3000 doesn’t support the grace period feature. They are very well suited for highperformance trading and high-performance computing environments. Table 2-19 shows the license options. . which uses a hardware technology called Algo boost. Algo boost provides ultra-low latency—as low as 190 nanoseconds (ns). Cisco Nexus 3000 Licensing Options There are different types of licenses for the Nexus 3000.

.

Nexus 7000. Rack-01 is using dual redundant Cisco Nexus 2000 Series Fabric Extender.Table 2-19 Nexus 3000 Licensing Options Cisco Nexus 2000 Fabric Extenders Product Family The Cisco Nexus 2000 Fabric Extenders behave as remote line cards. Figure 2-48 Cisco Nexus 2000 Top-of-Rack and End-of-Row Design As shown in Figure 2-48. The cabling between the servers and the Cisco Nexus 2000 Series Fabric Extenders is contained within the rack. Cisco Nexus 7000. This type of architecture provides the flexibility and benefit of both architectures: top-of-rack (ToR) and end-of-row (EoR). Nexus 6000. Figure 2-48 shows both types. It also enables highly scalable servers access design without the dependency on spanning tree. but from an operation point of view this design looks like an EoR design. which is placed at the top of the rack. This is a ToR design from a cabling point of view. reducing cabling between racks. ToR and EoR. Using Nexus 2000 gives you great flexibility when it comes to the type of connectivity and physical topology. So no configuration or software maintenance tasks need to be done with the FEXs. All Nexus 2000 Fabric Extenders connected to the same parent switch are managed from a single point. which is explained shortly. or Nexus 9000 Series switch that is installed in the EoR position as the parent switch. They appear as an extension to the parent switch to which they connect. which can be 10 Gbps or 40 Gbps. because all these Nexus 2000 Fabric Extenders will be managed from the parent switch. Multiple connectivity options can be used to connect the FEX to the parent switch. Only the cables between the Nexus 2000 and the parent switches will run between the racks. The uplink ports on the Cisco Nexus 2000 Series Fabric Extenders can be connected to a Cisco Nexus 5000. The parent switch can be Nexus 5000. as shown in Figure 2-49. and Nexus 9000 Series switches. . Nexus 6000.

You must use the pinning max-links command to create pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. The traffic is being distributed using a hashing algorithm. This method is used when the FEX is connected straight through to the parent switch. The server port will always use the same uplink port. The default value is maxlinks equals 1.Figure 2-49 Cisco Nexus 2000 Connectivity Options Straight-through. the host port will be statically pinned to one of the uplinks between the FEX and the parent switch. Note The Cisco Nexus 2248PQ Fabric Extender does not support the static pinning fabric interface connection. Table 2-20 shows the different models of the 1/10-Gbps fabric extenders. and Source IP Destination IP. for Layer 3 it uses Source MAC. This method bundles multiple uplink interfaces to one logical port channel. Straight-through. the FEX is dual homed using vPC to different parent switches. Destination MAC. using dynamic pinning (port channel): You can use this method to load balance between the down link ports connected to the server and the fabric ports connected to the parent switch. using static pinning: To achieve a deterministic relationship between the host port on the FEX to which the server connects and the fabric interfaces that connect to the parent switch. . Static pinning and active-active FEX are currently supported only on the Cisco Nexus 5000 Series switches. Note The Cisco Nexus 7000 Series switches currently support only straight-through deployment using dynamic pinning. Active-active FEX using vPC: In this scenario. The host interfaces are divided by the number of the max-links and distributed accordingly. for Layer 2 it uses Source MAC and Destination MAC.

Table 2-20 Nexus 2000 1/10-Gbps Model Comparison Table 2-21 shows the different models of the 100M and 1 Gbps fabric extenders. .

It extends the FEX to third-party blade centers from HP.Table 2-21 Nexus 2000 100 Mbps/1 Gbps Model Comparison As shown in Figure 2-50. They simplify the operational model by making the blade switch management part of the parent switch and make it appear as a remote line card to the parent switch. Fujitsu. the Cisco Nexus B22 Fabric Extender is part of the fabric extenders family. . similar to the Nexus 2000 fabric extenders. Dell. and IBM. There is a separate hardware model for each vendor.

The uplink ports are visible and are used for the connectivity to the upstream parent switch. host ports for blade server attachments and uplink ports. the B22 consists of two types of ports. It is similar to the Nexus 2000 Series management architecture. Similar to the Nexus 2000 Series. This architecture gives the benefit of centralized management through the Nexus parent switch. as shown in Figure 2-51. . which are called fabric ports.Figure 2-50 Cisco Nexus B22 Fabric Extender The B22 topology shown in Figure 2-51 is creating a highly scalable server access design with no spanning tree running.

as well as the L4 to L7 services. . The leaf nodes are at the edge. as shown in Figure 2-52. The DFA architecture includes fabric management using Central Point of Management (CPoM) and the automation of configuration of the fabric components. Using open application programming interfaces (API).Figure 2-51 Cisco Nexus B22 Access Topology The number of host ports and fabric ports for each model is shown in Table 2-22. The spines are at the core of the network. Table 2-22 Nexus B22 Specifications Cisco Dynamic Fabric Automation Besides using the Nexus 7000. Nexus 6000. and nothing connects to the spines except the leaf switches. you can build a Spine-Leaf architecture called Dynamic Fabric Automation (DFA). and Nexus 5000 to build standard three-tier data center architectures. you can integrate automation and orchestration tools to provide control for provisioning of the fabric components. and this is where the host is connected.

and the Nexus 9000 switches. and resource efficiency. It uses a 24-bit (16 million) segment identifier to support a large-scale virtual fabric that can scale beyond the traditional 4000 VLANs. the Nexus 7700 can scale up to 1. The Nexus product family offers different types of connectivity options that are needed in the data center. Following are some points to remember: With the Nexus product family you can have an infrastructure that can be scaled cost effectively. Nexus 7700. The Nexus family provides operational continuity to meet your needs for an environment where system availability is assumed and maintenance windows are rare. true . spanning from 100 Mbps per port and going up to 100 Gbps per port. It supports unified ports. if not totally extinct. The Nexus 7000 can scale up to 550 Gbps per slot. Multiple supervisor modules are available for the Nexus 7000 switches and the Nexus 7700 switches: supervisor module 1. Nexus modular switches are the Nexus 7000. Summary In this chapter. The Nexus 5600 Series is the latest addition to the Nexus 5000 family of switches. and the Nexus 9000 can scale up to 3.Figure 2-52 Cisco Nexus B22 Topology Cisco DFA enables creation of tenant-specific virtual fabrics and allows these virtual fabrics to be extended anywhere within the physical data center fabric. The Nexus 6004X is the latest addition to the Nexus 6000 family. Nexus modular switches go from 1 Gbps ports up to 100 Gbps ports.84 Tbps per slot. supervisor module 2. The Nexus product family is designed to meet the stringent requirements of the next-generation data center. budget. and supervisor module 2E.32 Tbps per slot. you learned about the different Cisco Nexus family of switches. One of the key features it supports is VXLAN and true 40 Gbps ports. It supports true 40 Gbps ports with 40 Gbps flows. and that helps you increase energy. If you want 100 Mbps ports you must use Nexus 2000 fabric extenders.

. Nexus 9000. noted with the key topic icon in the outer margin of the page.40 Gbps. Nexus 6000. It is an ultra-low latency switch that is well suited to the high-performance trading and high-performance computing environments. They act as the parent switch for the Nexus 2000. The Nexus 2000 can be connected to Nexus 7000. Nexus 3000 is part of the unified fabric architecture. and is the only switch in the Nexus 6000 family that supports VXLAN. and Nexus 5000 switches. Nexus 2000 fabric extenders must have a parent switch connected to them. Table 2-23 lists a reference for these key topics and the page numbers on which each is found. Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter.

” also on the disc. “Memory Tables Answer Key. and complete the tables and lists from memory. or at least the section for this chapter.Table 2-23 Key Topics for Chapter 2 Complete Tables and Lists from Memory Print a copy of Appendix B. includes completed tables and lists to check your work. “Memory Tables” (found on the disc). Appendix C. Define Key Terms Define the following key terms from this chapter. and check your answers in the glossary: Application Centric Infrastructure (ACI) port channel (PC) virtual port channel (vPC) Fabric Extender (FEX) Top of Rack (ToR) End of Row (EoR) Dynamic Fabric Automation (DFA) Virtual Extensible LAN (VXLAN) .

How many VDCs are supported on Nexus 7000 SUP 1 and SUP 2E (do not count admin VDC)? a. That approach has its own benefits and business requirements. 1. The other option is how to make multiple switches appear as if they are one big modular switch. (Choose three. “Answers to the ‘Do I Know This Already?’ Quizzes. Table 3-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000 switches using Virtual Device Context (VDC) and Network Interface Virtualization (NIV).) a. you should mark that question as wrong for purposes of the self-assessment. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. Admin VDC . Choose the different types of VDCs that can be created on the Nexus 7000. SUP1 support 2 * SUP 2E support 8 d.” Table 3-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. SUP1 support 4 * SUP 2E support 8 2. SUP1 support 4 * SUP 2E support 4 b. read the entire chapter.Chapter 3. too. If you do not know the answer to a question or are only partially sure of the answer. each with its own administrative domain and security policy. That approach has its own benefits and business requirements. SUP1 support 4 * SUP 2E support 6 c. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. The first is how to partition one physical switch and make it appear as multiple switches. Virtualizing Cisco Network Devices This chapter covers the following exam topic: Describe Device Virtualization There are two types of device virtualization when it comes to Nexus devices. Default VDC b.

as shown in Figure 3-1. 802. True or false? We can assign physical interfaces to an admin VDC. within this VDC are multiple processes and Layer 2 and Layer 3 services running. 802. switchto vdc{vdc name} c. the control plane runs a single VDC. a. You can assign a line card to a VDC or a set of ports to the VDC. you can have a unique set of VLANs and VRFs allowing the virtualization of the control plane. Without creating any VDC on the Nexus 7000. Within a VDC. switchto {vdc name} 5. what does the acronym VIF stand for? a. 802. OTV VDC 3. VN-Tag initial frame b. Each VDC will have its own administrative domain allowing the management plane as well to be virtualized. 802. This VDC is always active and cannot be deleted. The VN-Tag is being standardized under which IEEE standard? a. VN-Tag interface Foundation Topics Describing VDCs on the Cisco Nexus 7000 Series Switch The Virtual Device Context (VDC) is used to virtualize the Nexus 7000 switch.1BR d. meaning presenting the Nexus 7000 switch as multiple logical switches. . a.1qpg b. False 4. False 6. Which command is used to access a nondefault VDC from the default VDC? a. True or false? The Nexus 2000 Series switch can be used as a standalone access switch.Qbr 7. Storage VDC d. True b. Virtualized interface d. In the VN-Tag fields.1Qbh c. connectTo b. Virtual interface c.c. VDC {id} command and press Enter d. True b.

Figure 3-2 shows what the structure looks like when creating multiple VDCs. and hardware partitioning. separate administration per VDC. these processes are replicated for each VDC. meaning you can have VRF production in VDC1 and VRF production in VDC2. each VDC administrator will interface with the process for his or her own VDC. Within each VDC you can have duplicate VLAN names and VRF names. separate data traffic. You can have fault isolation. . When enabling Role-Based Access Control (RBAC) for each VDC.Figure 3-1 Single VDC Operation Mode Structure Virtualization using VLANS and VRFs within this VDC can still be used. When enabling multiple VDCs.

which enable the reduction of the physical footprint in the data center. which in turn saves space. and DMZ Creation of separate organizations on the same physical switch Creation of separate application environments Creation of separate departments for the same organization due to different administration domains VDC separation is industry certified. it has NSS labs for Payment Card Industry (PCI) compliant environments. and cooling. power. which can be used in the design of the data center: Creation of separate production. and Common Criteria Evaluation and Validation Scheme (CCEVS) 10349 certification with EAL4 conformance. and test environments Creation of intranet.Figure 3-2 Multiple VDC Operation Mode Structure The following are typical use cases for the Cisco Nexus 7000 VDC. Federal Information Processing Standards (FIPS 140-2) certification. Horizontal Consolidation Scenarios . development. extranet. VDC Deployment Scenarios There are multiple deployment scenarios using the VDCs.

In horizontal consolidation. Figure 3-4 shows multiple aggregation layers created on the same physical devices. you can consolidate core functions and aggregation functions. using a pair of Nexus 7000 switches you can build two redundant data center cores. which can be useful in facilitating migration scenarios.VDCs can be used to consolidate multiple physical devices that share the same function role. This can happen by creating two VDCs. as shown in Figure 3-3. VDC1 will accommodate Core A and VDC2 will accommodate Core B. and after the migration you can reallocate the old core ports to the new core. so rather than building a separate aggregation layer for test and development. For example. Aggregation 1 and Aggregation 2 are completely isolated with a separate role-based access and separate configuration file. you can create multiple VDCs called Aggregation 1 and Aggregation 2. . You can allocate ports or line cards to the VDCs. Figure 3-3 Horizontal Consolidations for Core Layer Aggregation layers can also be consolidated.

For example. . as shown in Figure 3-5. you can build a redundant core and aggregation layer. VDC2 accommodates Core B and Aggregation B. using a pair of Nexus 7000 switches. you can consolidate core functions and aggregation functions. Although the port count is the same. it can be useful because it reduces the number of physical switches. You can allocate ports or line cards to the VDCs. This can happen by creating two VDCs.Figure 3-4 Horizontal Consolidation for Aggregation Layers Vertical Consolidation Scenarios When using VDCs in vertical consolidation. VDC1 accommodates Core A and Aggregation A.

as you can see. This design can make service insertion more deterministic and secure. Aggregation. . and traffic must go out VDC-A to reach VDC-B. Figure 3-6 shows that scenario. for traffic to cross VDC-A to go to VDC-B it must pass through the firewall. Each VDC has its own administrative domain. By creating two separate VDCs for two logical areas inside and outside. we used a pair of Nexus 7000 switches to build a three-tier architecture.Figure 3-5 Vertical Consolidation for Core and Aggregation Layers Another scenario is consolidation of core aggregation and access while you maintain the same hierarchy. The only way is through the firewall. Figure 3-6 Vertical Consolidation for Core. as shown in Figure 3-7. and Access Layers VDCs for Service Insertion VDCs can be used for service insertion and policy enforcement.

memory NX-OS upgrade across all VDCs EPLD upgrade—as directed by TAC or to enable new features Ethanalyzer captures—control plane traffic Feature-set installation for Nexus 2000. There is a separate . Connecting two VDCs on the same physical switch must be done through external cables. the NX-OS on the Nexus 7000 supports Virtual Device Contexts (VDC). Nondefault VDC: The nondefault VDC is a fully functional VDC that can be created from the default VDC or the Admin VDC. VDCs simply partition the physical Nexus device into multiple logical devices with a separate control plane. The default VDC has certain characteristics and tasks that can only be done in the default VDC. you cannot connect VDCs together from inside the chassis. you log in to the default VDC. data plane.Figure 3-7 Using VDCs for Firewall Insertion Understanding Different Types of VDCs As explained earlier in the chapter. and FCoE Control Plane Policing (CoPP) Port channel load balancing Hardware IDS checks control ACL capture feature enabled Note With CPU shares. You must be in the default VDC or the Admin VDC (discussed later in the chapter) to do certain tasks. and management plane. which can be done only in these two VDC types. FabricPath. An independent process is started for each protocol per VDC created. The default VDC is a full-function VDC. Changes in the nondefault VDC affect only that VDC. you don’t dedicate a CPU to a VDC rather than giving access and assigning priorities to the CPU. which can be summarized in the following: VDC creation/deletion/suspend Resource allocation—interfaces. the admin always uses it to manage the physical device and other VDCs created. CPU shares are supported only with SUP2/2e. The following list describes the different types of VDCs that can be created on the Nexus 7000: Default VDC: When you first log in to the Nexus 7000.

the default VDC becomes the Admin VDC. Features and feature sets cannot be enabled in an Admin VDC. All nonglobal configuration on the default VDC will be migrated to the new VDC. . After the creation of the storage VDC. You cannot assign any interface to the Admin VDC.configuration file per VDC. you will be prompted to select an Admin VDC. When you enable the Admin VDC. The storage VDC relies on the FCoE license and doesn’t need a VDC license. ADMIN VDC: The Admin VDC is used for administration functions. you can create the admin VDC in different ways: During a fresh switch boot. To change back to the default VDC you must erase the configuration and do a fresh boot script. which can carry both Ethernet and FCoE traffic. and so on. dedicated FCoE interfaces or shared interfaces. and starting from the NX-OS 6. only mgmt0 can be assigned to the Admin VDC. we assign interfaces to it. Enter the system admin-vdc command after boot. You must be careful or you will lose the nonglobal configuration. You can enable starting from NX-OS 6. After the Admin VDC is created. Figure 3-8 shows the two types of ports. Enter the system admin-vdc migrate new vdc name command. SNMP. and all nonglobal configuration in the default VDC will be lost. We can create only one storage VDC on the physical device. In that case. The storage VDC counts toward the total number of VDCs. There are two types of interfaces. The Admin VDC has certain characteristics that are unique to the Admin VDC. Note Creating the Admin VDC doesn’t require the advanced package license or VDC license. it cannot be deleted or changed back to the default VDC.1 for SUP2/2E modules. and each VDC has its own RBAC. Storage VDC: The storage VDC is a nondefault VDC that helps maintain the operation model when the customer has a LAN admin and a SAN admin.2(2) for SUP1 module. it replaces the default VDC.

2. Supervisor engine 2 can have up to four VDCs and one Admin VDC. With Supervisor 1 you can have up to four VDCs and one Admin VDC. Interface Allocation The only thing you can assign to a VDC are the physical ports. When you allocate the physical interface to a VDC. A physical interface can belong to only one VDC at a given time. are created within the VDC. At the beginning and before creating a VDC. Logical interfaces. When you create a shared interface. all interfaces belong to the default VDC. For example. After you create a VDC. which requires 8 GB of RAM because you must upgrade to NX-OS 6. all configurations that existed on it get erased. in the N7K-M132XP-12 (4 interfaces × 8 port groups = 32 interfaces) and N7K-M132XP-12L (same as non-L M132) (1 interface × 8 port groups = . Beyond four VDCs on Supervisor 2E we require a license to increment the VDCs with an additional four. you start assigning interfaces to that VDC. The physical interface gets configured within the VDC. Sup2/2e is required. physical and logical interfaces can be assigned to VLANs or VRFs. Note All members of a port group are automatically allocated to the VDC when you allocate an interface. With Supervisor engine 2E you can have up to eight VDCs and one Admin VDC. the physical interface belongs to one Ethernet VDC and one storage VDC at the same time.Figure 3-8 Storage VDC Interface Types Note To run FCoE. such as tunnel interfaces and switch virtual interfaces. and within a VDC.

The role must be assigned specifically to a user by a user who has network-admin rights. and you can configure eight port groups. The VDC-admin role is automatically assigned to this admin user on a nondefault VDC. delete. This role includes the ability to create. The network-admin role is automatically assigned to this user. the first user account on that VDC is admin. The network-admin role gives a user complete read and write access to the device. By default. Network-operator: The second default role that exists on Cisco Nexus 7000 Series switches is the network-operator role. This role grants the user read-only rights in the default VDC. or change nondefault VDCs. and it is available only in the default VDC.8 interfaces). All M132 cards require allocation in groups of four ports. which can be used to access a nondefault VDC from the default VDC. When a network-admin or network-operator user accesses a nondefault VDC by using the switchto command. VDC-admin: When a new VDC is created. this user does not have any rights in any other VDC and cannot access other VDCs through the switchto command. See the example for this module in Figure 3-9. A user with the network-operator role is given the VDC-operator role in the nondefault VDC. Note You can create custom roles for a user. each VDC maintains its user role . However. but you cannot change the default user roles for a VDC. no users are assigned to this role. That means a user with the network-admin role is given the VDC-admin role in the nondefault VDC. Interfaces belonging to the same port group must belong to the same VDC. The network-operator role includes the right to issue the switchto command. Figure 3-9 VDC Interface Allocation Example VDC Administration The operations allowed for a user in VDC depend on the role assigned to that user. This role has no rights to any other VDC. This role gives a user complete control over the specific nondefault VDC. VDC-operator: The VDC-operator role has read-only rights for a specific VDC. and the NX-OS provides four default user roles: Network-admin: The first user account that is created on a Cisco Nexus 7000 Series switch in the default VDC is admin. User roles are not shared between VDCs. Users can have multiple roles assigned to them. that user is mapped to a role of the same level in the nondefault VDC.

When you enable a feature. the show feature-set command is used to verify and show which feature has been enabled or disabled in a VDC. but another feature was enabled. Physical interfaces also are assigned to nondefault VDCs from the default VDC. When the grace period expires. all the features of the license package must be stopped. so if you enable a feature to try it.database. any nondefault VDCs will be removed from the switch configuration. VDCs are created or deleted from the default VDC only. Example 3-2 Displaying Information About All VDCs (Running the Command from the Default VDC) Click here to view code image . You need network admin privileges to do that. VDC Requirements To create VDCs.-------. You can try the feature for the 120-day grace period. The storage VDC is enabled using the storage license associated with a specific line card. As shown in Example 3-1. the countdown for the 120 days doesn’t stop if a single feature of the license package is already enabled. an advanced license is required. for example. this command displays information about all VDCs on the physical device. As stated before. Verifying VDCs on the Cisco Nexus 7000 Series Switch The next few lines show certain commands to display useful information about the VDC. the license package can contain several features. The license is associated with the serial number of a Cisco Nexus 7000 switch chassis. Note The grace period operates across all features in a license package. 100 days before. you are left with 20 days to install the license. To stop the grace period. The show vdc command displays different outputs depending on where you are executing it from. Example 3-1 Displaying the Status of a Feature Set on a Switch Click here to view code image N7K-VDC-A# show feature-set Feature Set Name ID State -------------------. If you are executing the command from the default VDC.-------fabricpath 2 enabled fex 3 disabled N7K-VDC-A# The show feature-set command in Example 3-1 shows that the FabricPath feature is enabled and the FEX feature is disabled.

the output displays information about the current VDC.N7K-Core# show vdc vdc_id -----1 2 3 vdc_name -------prod dev MyVDC state ----active active active mac ---------00:18:ba:d8:3f:fd 00:18:ba:d8:3f:fe 00:18:ba:d8:3f:ff If you are executing the show vdc command within the nondefault VDC. which in our case is the N7K-Core-Prod. execute show vdc detail from the default VDC. Example 3-4 Displaying Detailed Information About All VDCs Click here to view code image N7K-Core# show vdc detail vdc id: 1 vdc name: N7K-Core vdc state: active vdc mac address: 00:22:55:79:a4:c1 vdc ha policy: RELOAD vdc dual-sup ha policy: SWITCHOVER vdc boot Order: 1 vdc create time: Thu May 14 08:14:39 2012 vdc restart count: 0 vdc vdc vdc vdc vdc vdc vdc vdc vdc id: 2 name: prod state: active mac address: 00:22:55:79:a4:c2 ha policy: RESTART dual-sup ha policy: SWITCHOVER boot Order: 1 create time: Thu May 14 08:15:22 2012 restart count: 0 vdc vdc vdc vdc vdc vdc vdc vdc id: 3 name: dev state: active mac address: 00:22:55:79:a4:c3 ha policy: RESTART dual-sup ha policy: SWITCHOVER boot Order: 1 create time: Thu May 14 08:15:29 2012 . as shown in Example 3-4. Example 3-3 Displaying Information About the Current VDC (Running the Command from the Nondefault VDC) Click here to view code image N7K-Core-Prod# show vdc vdc_id -----1 vdc_name -------prod state ----active mac ---------00:18:ba:d8:3f:fd To display the detailed information about all VDCs.

execute show vdc membership from the default VDC. as shown in Example 3-5. it will show the interface membership only for that VDC. which is executed from the . Use the copy running-config startup-config vdc-all command. as shown in Example 3-6. Example 3-6 Displaying Interface Allocation per VDC Click here to view code image N7K-Core# show vdc membership vdc_id: 1 vdc_name: N7K-Core interfaces: Ethernet2/1 Ethernet2/2 Ethernet2/4 Ethernet2/5 Ethernet2/7 Ethernet2/8 Ethernet2/10 Ethernet2/11 Ethernet2/13 Ethernet2/14 Ethernet2/16 Ethernet2/17 Ethernet2/19 Ethernet2/20 Ethernet2/22 Ethernet2/23 Ethernet2/25 Ethernet2/26 Ethernet2/28 Ethernet2/29 Ethernet2/31 Ethernet2/32 Ethernet2/34 Ethernet2/35 Ethernet2/37 Ethernet2/38 Ethernet2/40 Ethernet2/41 Ethernet2/43 Ethernet2/44 Ethernet2/48 vdc_id: 2 vdc_name: prod interfaces: Ethernet2/47 vdc_id: 3 vdc_name: dev interfaces: Ethernet2/46 Ethernet2/3 Ethernet2/6 Ethernet2/9 Ethernet2/12 Ethernet2/15 Ethernet2/18 Ethernet2/21 Ethernet2/24 Ethernet2/27 Ethernet2/30 Ethernet2/33 Ethernet2/36 Ethernet2/39 Ethernet2/42 Ethernet2/45 If you execute show VDC membership from the VDC you are currently in. You can save all the running configurations to the startup configuration for all VDCs using one command.vdc restart count: 0 You can display the detailed information about the current VDC by executing show vdc {vdc name} detail from the nondefault VDC. Example 3-5 Displaying Detailed Information About the Current VDC When You Are in the Nondefault VDC Click here to view code image N7K-Core-prod# show vdc prod detail vdc id: 2 vdc name: prod vdc state: active vdc mac address: 00:22:55:79:a4:c2 vdc ha policy: RESTART vdc dual-sup ha policy: SWITCHOVER vdc boot Order: 1 vdc create time: Thu May 14 08:15:22 2012 vdc restart count: 0 To verify the interface allocation per VDC.

You cannot see the configuration for other VDCs except from the default VDC. A copy of each such license is available at http://www. after that. Describing Network Interface Virtualization You can deliver an extensible and scalable fabric solution using the Cisco FEX technology.1.php N7K-Core-Prod# To switch back to the default VDC. The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.0 or the GNU Lesser General Public License (LGPL) Version 2. Virtual Interface (VIF): Logical construct consisting of a front panel port.org/licenses/lgpl-2.1. All rights reserved. The FEX technology can be extended all the way to the server hypervisor.php and http://www. Cisco Systems. use the show running-config vdc-all command. Example 3-7 Saving All Configurations for All VDCs Click here to view code image N7K-Core# copy running-config startup-config vdc-all [########################################] 100% To display the running configurations for all VDCs.0. when you create the user accounts and configure the IP connectivity. You cannot execute the switchto command from the nondefault VDC. Host Interface (HIF): Front panel port on FEX.com/tac Copyright (c) 2002-2008. Cisco Nexus 2000 FEX Terminology The following terminology and acronyms are used with the Nexus 2000 fabric extenders: Network Interface (NIF): Port on FEX. use the switchback command. You can navigate from the default VDC to the nondefault VDC using the switchto command. connecting to parent switch.opensource. VLAN. as shown in Example 3-8. you must do switchback to the default VDC first before going to another VDC. Use these commands for the initial setup.default VDC.opensource. connecting to server. executed from the default VDC. you use Secure Shell (SSH) and Telnet to connect to the desired VDC. Inc.cisco. Certain components of this software are licensed under the GNU General Public License (GPL) version 2. and several .org/licenses/gpl-2. Example 3-8 Navigating Between VDCs on the Nexus 7000 Switch Click here to view code image N7K-Core# switchto vdc Prod TAC support: http://www.

Figure 3-10 Nexus 2000 Interface Types and Acronyms Nexus 2000 Series Fabric Extender Connectivity The Cisco Nexus 2000 Series cannot run as a standalone switch. From the logical network deployment point of view. this design is an EoR design. Figure 3-10 shows the ports with reference to the physical hardware. Fabric Extender (FEX): NEXUS 2000 is the instantiation of FEX. This switch can be a Nexus 9000. actively forwarding traffic to two parent switches. Based on the type and the density of the access ports required. Nexus 5600. Fabric Port: N7K side of the link connected to FEX. FEX uplink: Network-facing port on the FEX side of the link connected to N7K. Only a small number of cables will run between the racks. This type of design combines the benefit of the Top-of-Rack (ToR) design with the benefit of the End-of-Row (EoR) design. FEX A/A or dual-attached FEX: Scenario in which the fabric links of FEX are connected. FEX port: Server-facing port on FEX. which will be installed at the EoR. Fabric Port Channel: Port channel between N7K and FEX. dual redundant Nexus 2000 fabric extenders are placed at the top of the rack.other parameters. as stated before. Nexus 7000. The uplink ports from the Nexus 2000 will be connected to the parent switch. The cabling between the servers and the Nexus 2000 Series fabric extenders is contained within the rack. Logical interface (LIF): Presentation of a front panel port in parent switch. FEX port is also called a Host Interface (HIF). Fabric links (Fabric Port Channel or FPC): Links that connect FEX NIF and FEX fabric ports on parent switch. The FEX acts as a . this design is a ToR design. also referred to as server port or host port in this presentation. From a cabling point of view. or Nexus 5000 Series. Nexus 6000. it needs a parent switch. The FEX uplink is also called a Network Interface (NIF). Parent switch: Switch (N7K/N5K) where FEX is connected. which will be 10/40 Gbps.

you need to turn off Bridge Protocol Data Units (BPDUs) from the access switch and turn on “bpdufilter” on the Nexus 2000.remote line card. Figure 3-11 Nexus 2000 Deployment Physical and Logical View VN-Tag Overview The Cisco Nexus 2000 Series fabric extender acts as a remote line card to a parent switch. At the time of writing this book there is no local switching happening on the Nexus 2000 fabric extender. The physical ports on the Nexus 2000 fabric extender appear as logical ports on the parent switch. A frame exchanged between the Nexus 2000 fabric extender and the parent switch will have an information tag inserted in it called VN-Tag. The host connected to the Nexus 2000 fabric extender is unaware of any tags. From an operation perspective. At the beginning. The Nexus 2000 host ports (HIFs) do not expect to receive BPDUs on the host ports and expect only end hosts like servers. Forwarding is performed by the parent switch. Note You can connect a switch to the FEX host interfaces. However. Figure 3- . The effort was moved to the IEEE 802. as stated earlier.1Qbh working group. “Cisco Nexus Product Family” in the section “Cisco Nexus 2000 Fabric Extenders Product Family” to see the different models and specifications. so all traffic must be sent to the parent switch. all configuration and maintenance tasks are done from the parent switch. but this is not a recommended solution. VN-Tag is a Network Interface Virtualization (NIV) technology.1Qbh was withdrawn and is no longer an approved IEEE project. which enables advanced functions and policies to be applied to it. All control and management functions are performed by the parent switch. Refer to Chapter 2. this design has the advantage of the EoR design. Figure 3-10 shows the physical and the logical architecture of the FEX implementation.1BR in July 2011. The server appears as if it is connected directly to the parent switch. and no operation tasks are required to be performed from the FEXs. VN-Tag was being standardized under the IEEE 802. To make this work. and the hosts connected to the Nexus 2000 fabric extender appear as if they are directly connected to the parent switch. Figure 3-11 shows the physical and logical view for the Nexus 2000 fabric extender deployment. which might create a loop in the network. 802.

and the packet is forwarded over a fabric link using a specific VN-Tag. The Cisco Nexus 2000 Series switch adds a unique VN-Tag for each Cisco Nexus 2000 Series host interface. The Cisco Nexus 2000 Series switch adds a VN-Tag. The frame arrives from the host. these events occur: 1.12 shows the VN-Tag fields in an Ethernet frame.1 is network-to-host) p: Pointer bit (set if this is a multicast frame and requires egress replication) L: Looped filter (set if sending back to source Cisco Nexus 2000 Series) VIF: Virtual interface Cisco Nexus 2000 FEX Packet Flow The packet processing on the Nexus 2000 happens in two parts: when the host send a packet to the network and when the network sends a packet to the host. 2. Figure 3-13 shows the two scenarios. Figure 3-13 Cisco Nexus 2000 Packet Flow When the host sends a packet to the network (diagrams A and B). Figure 3-12 VN-Tag Fields in an Ethernet Frame d: Direction bit (0 is host-to-network forwarding. These are the VN-Tag field values: .

The p (pointer). 3. and the frame is forwarded to the host interface. The Cisco Nexus switch applies an ingress policy that is based on the physical Cisco Nexus Series switch port and logical interface: Access control and forwarding are based on frame fields and virtual (logical) interface policy. l (looped). these events occur: 1. The Cisco Nexus switch strips the VN-Tag and sends the packet to the network. The Cisco Nexus switch inserts a VN-Tag with these characteristics: The direction is set to 1 (network to host). 12-port 40-Gigabit Ethernet F3-Series QSFP I/O module (N7K-F312FQ-25) for Cisco Nexus 7000 Series switches 24-port Cisco Nexus 7700 F3 Series 40-Gigabit Ethernet QSFP I/O module (N77-F324FQ-25) 48-port Cisco Nexus 7700 F3 Series 1/10-Gigabit Ethernet SFP+ I/O module (N77-F348XP23) 32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12) 32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12L) 24-port 10-Gigabit Ethernet I/O M2 Series module XL (N7K-M224XP-23L) 48-port 1/10-Gigabit Ethernet SFP+ I/O F2 Series module (N7K-F248XP-25) Enhanced 48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N7K-F248XP-25E) . and destination virtual interface are undefined (0). depending on the type and model of card installed in it. which identifies the logical interface that corresponds to the physical host interface on the Cisco Nexus 2000 Series. The packet is received over the fabric link using a specific VN-Tag. 3. The Cisco Nexus switch performs standard lookup and policy processing when the egress port is determined to be a logical interface (Cisco Nexus 2000 Series) port. The Cisco Nexus switch extracts the VN-Tag. The source virtual interface is set based on the ingress host interface. Cisco Nexus 2000 FEX Port Connectivity You can connect the Nexus 2000 Series fabric extender to one of the following Ethernet modules that are installed in the Cisco Nexus 7000 Series. The frame is received on the physical or logical interface. The destination virtual interface is set to be the Cisco Nexus 2000 Series port VN-Tag. indicating host-to network forwarding. When the network sends a packet to the host (diagram C). The l (looped) bit filter is set if sending back to a source Cisco Nexus 2000 Series switch. 4. The Cisco Nexus 2000 Series switch strips the VN-Tag. The source virtual interface is set if the packet was sourced from a Cisco Nexus 2000 Series port. The p bit is set if this frame is a multicast frame and requires egress replication. 2. Physical link-level properties are based on the Cisco Nexus Series switch port. The packet is forwarded over a fabric link using a specific VN-Tag. There is a best practice for connecting the Nexus 2000 fabric extender to the Nexus 7000.The direction bit is set to 0.

all the associated host interfaces go down and will remain down as long as the fabric port is down. the bandwidth that is dedicated to each end host toward the parent switch is never changed by the switch. Traffic is automatically redistributed across the remaining links in the port channel fabric interface. As a result. You must use the pinning max-links command to create several pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. . A fabric interface that fails in the port channel does not trigger a change to the host interfaces. but instead is always specified by you. you can configure the FEX to use a port channel fabric interface connection. The default value is max-links 1. its host interfaces are distributed equally among the available fabric interfaces. all host interfaces on the FEX are set to the down state. To provide load balancing between the host interfaces and the parent switch.48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N77-F248XP-23E) Figure 3-14 shows the connectivity between the Cisco Nexus 2000 fabric extender and the N7KM132XP-12 line card. The drawback here is that if the fabric link goes down. The host interfaces are divided by the number of the max-links and distributed accordingly. This is explained in more detail in the next paragraphs. Figure 3-14 Cisco Nexus 2000 Connectivity to N7K-M132XP-12 There are two ways to connect the Nexus 2000 to the parent switch: either use static pinning. when the FEX is powered up and connected to the parent switch. or use a port channel when connecting the fabric interfaces on the FEX to the parent switch. In static pinning. which makes the FEX use individual links connected to the parent switch. If all links in the fabric port channel go down. This connection bundles 10-Gigabit Ethernet fabric interfaces into a single logical channel.

and 5600 Series switches. You can disallow the installed fabric extender feature set in a specific VDC on the device. all FEX host ports belong to the same VDC. Use the following steps to connect the Cisco Nexus 2000 to the Cisco 7000 Series parent switch when connected to one of the supported line cards. Example 3-11 Entering the Chassis Mode for the Fabric Extender Click here to view code image N7K-Agg(config)# fex 101 . The difference comes from the VDC-based architecture of the Cisco Nexus 7000 Series switches. FEX IDs are unique across VDCs (the same FEX ID cannot be used across different VDCs). Figure 3-15 shows the FEX connectivity to the Nexus 7000 when you have multiple VDCs. it is allowed in all VDCs. All FEX fabric links belong to the same VDC. you must follow certain rules. Figure 3-15 Cisco Nexus 2000 and VDCs Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series How you configure the Cisco Nexus 2000 Series on the Cisco 7000 Series is different from the configuration on Cisco Nexus 5000.When you have VDCs created on the Nexus 7000. when you install the fabric extender feature set. 6000. Example 3-9 Installing the Fabric Extender Feature Set in Default VDC Click here to view code image N7K-Agg(config)# N7K-Agg(config)# install feature-set fex Example 3-10 Enabling the Fabric Extender Feature Set Click here to view code image N7K-Agg(config)# N7K-Agg(config)# feature-set fex By default.

so it might be different depending on which VDC you want to connect to. The next step is to associate the fabric extender to a port channel interface on the parent device.N7K-Agg(config-fex)# description Rack8-N2k N7K-Agg(config-fex)# type N2232P Example 3-12 Defining the Number of Uplinks Click here to view code image N7K-Agg(config-fex)# pinning max-links 1 You can use this command if the fabric extender is connected to its parent switch using one or more statically pinned fabric interfaces. There can be only one port channel connection. we create a port channel with four member ports. Example 3-13 Disallowing Enabling the Fabric Extender Feature Set Click here to view code image N7K-Agg# configure terminal N7K-Agg (config)# vdc 1 N7K-Agg (config-vdc)# no allow feature-set fex N7K-Agg (config-vdc)# end N7K-Agg # Note The N7k-agg# VDC1 is the VDC_ID. In Example 3-14. Example 3-14 Associate the Fabric Extender to a Port Channel Interface Click here to view code image N7K-Agg# configure terminal N7K-Agg (config)# interface ethernet 1/28 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/29 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/30 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/31 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface port-channel 4 .

1(1)] FEX Interim version: 5. Example 3-15 Verifying FEX Association Click here to view code image N7K-Agg# show fex FEX FEX FEX FEX Number Description State Model Serial -----------------------------------------------------------------------101 FEX0101 Online N2K-C2248TP-1GE JAF1418AARL Example 3-16 Display Detailed Information About a Specific FEX Click here to view code image N7K-Agg# show fex 101 detail FEX: 101 Description: FEX0101 state: Online FEX version: 5.Interface Up. State: Active Eth2/2 .1(1) [Switch version: 5. you must do some verification to make sure everything is configured properly. State: Active Eth2/1 .Interface Up. State: Active Fex Port State Fabric Port Primary Fabric Eth101/1/1 Up Po101 Po101 Eth101/1/2 Up Po101 Po101 Eth101/1/3 Down Po101 Po101 Eth101/1/4 Down Po101 Po101 Example 3-17 Display Which FEX Interfaces Are Pinned to Which Fabric Interfaces Click here to view code image N7K-Agg# show interface port-channel 101 fex-intf Fabric FEX Interface Interfaces --------------------------------------------------Po101 Eth101/1/2 Eth101/1/1 . State: Active Eth4/2 .1(1) Extender Model: N2K-C2248TP-1GE.Interface Up. Mac Addr: 54:75:d0:a9:49:42.N7K-Agg (config-if)# switchport N7K-Agg (config-if)# switchport mode fex-fabric N7K-Agg (config-if)# fex associate 101 After the FEX is associated successfully. Num Macs: 64 Module Sw Gen: 21 [Switch Sw Gen: 21] pinning-mode: static Max-links: 1 Fabric port for control traffic: Po101 Fabric interface state: Po101 . State: Active Eth4/1 .6) Switch Interim version: 5.Interface Up.Interface Up.159.1(0. Extender Serial: JAF1418AARL Part No: 73-12748-05 Card Id: 99.

allowing the management plane also to be virtualized. . Table 3-2 lists a reference for these key topics and the page numbers on which each is found. noted with the key topics icon in the outer margin of the page. each VDC will have its own management domain. The FEX technology can be extended all the way to the server hypervisor. Cisco Nexus 2000 cannot operate in a standalone mode. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. it needs a parent switch to connect to.Example 3-18 Display the Host Interfaces That Are Pinned to a Port Channel Fabric Interface Click here to view code image N7K-Agg# show interface port-channel 4 fex-intf Fabric FEX Interface Interfaces --------------------------------------------------Po4 Eth101/1/48 Eth101/1/47 Eth101/1/46 Eth101/1/44 Eth101/1/43 Eth101/1/42 Eth101/1/40 Eth101/1/39 Eth101/1/38 Eth101/1/36 Eth101/1/35 Eth101/1/34 Eth101/1/32 Eth101/1/31 Eth101/1/30 Eth101/1/28 Eth101/1/27 Eth101/1/26 Eth101/1/24 Eth101/1/23 Eth101/1/22 Eth101/1/20 Eth101/1/19 Eth101/1/18 Eth101/1/16 Eth101/1/15 Eth101/1/14 Eth101/1/12 Eth101/1/11 Eth101/1/10 Eth101/1/8 Eth101/1/7 Eth101/1/6 Eth101/1/4 Eth101/1/3 Eth101/1/2 Eth101/1/45 Eth101/1/41 Eth101/1/37 Eth101/1/33 Eth101/1/29 Eth101/1/25 Eth101/1/21 Eth101/1/17 Eth101/1/13 Eth101/1/9 Eth101/1/5 Eth101/1/1 Summary VDCs on the Nexus 7000 switches enable true device virtualization. To send traffic between VDCs you must use external physical cables that lead to greater security of user data.

and check your answers in the Glossary: Virtual Device Context (VDC) supervisor engine (SUP) demilitarized zone (DMZ) erasable programmable logic device (EPLD) Fabric Extender (FEX) role-based access control (RBAC) network interface (NIF) host interface (HIF) virtual interface (VIF) logical interface (LIF) End-of-Row (EoR) Top-of-Rack (ToR) VN-Tag .Table 3-2 Key Topics for Chapter 3 Define Key Terms Define the following key terms from this chapter.

1. we can set up static MAC-to-IP mappings in OTV. Table 4-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. True or false? For silent hosts. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. ASR1000 Series d. which enables you to have data replication. ASR9000 Series c. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics.Chapter 4. you should mark that question as wrong for purposes of the self-assessment. If you do not know the answer to a question or are only partially sure of the answer. read the entire chapter. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. False 3. “Answers to the ‘Do I Know This Already?’ Quizzes. Data Center Interconnect This chapter covers the following exam topics: Identify Key Differentiators Between DCI and Network Interconnectivity Cisco Data Center Interconnect (DCI) solutions enable IT organizations to meet two main objectives: business continuity in case of disaster events and corporate and regularity compliance to improve data security. Which version of IGMP is used on the OTV join interface? . True b. which is called Overlay Transport Virtualization (OTV).” Table 4-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. Which of the following Cisco products supports OTV? a. In this chapter. Nexus 7000 Series b. DCI solutions in general extend LAN and SAN connectivity between geographically dispersed data centers. You can find the answers in Appendix A. and workload mobility. Catalyst 6500 Series 2. server clustering. a. you learn about the latest Cisco innovations in the Data Center Extension solutions and in LAN Extension in particular.

it is mainly used to enable the benefit of geographically separated data centers. IGMPv2 c. No need to enable IGMP on the OTV join interface 4. Different types of technologies provide Layer 2 extension solutions: Ethernet over Multiprotocol Label Switching (EoMPLS): EoMPLS can be used to provision point-to-point Layer 2 Ethernet connections between two sites using MPLS as the transport. VPLS delivers a virtual multiaccess Ethernet network. and database layers. Using these technologies to extend the LAN across multiple data centers creates a series of challenges: Failure isolation: The extension of Layer 2 domains between multiple data center sites can cause the data centers to share protocols and failures that would normally have been isolated when interconnecting data centers over an IP network. Virtual Private LAN Services (VPLS): Similar to EoMPLS. instead of point-to-point Ethernet pseudowires. application. IGMPv1 b. 1 b. True or False? You need to do no shutdown for the overlay interface after adding it. 4 d. These technologies increase the total bandwidth over the same number of fibers. 8 5. However. a. dark fiber might be available to build private optical connections between data centers. Dark fiber: In some cases. Dense wavelength-division multiplexing (DWDM) or coarse wavelengthdivision multiplexing (CWDM) can increase the number of Layer 2 connections that can be run through the fibers. Layer 2 extensions might be required at different layers in the data center to enable the resiliency and clustering mechanisms offered by the different applications at the web. VPLS uses an MPLS network as the underlying transport network. A solution that provides Layer 2 connectivity. False Foundation Topics Introduction to Overlay Transport Virtualization Data Center Interconnect (DCI) solutions have been available for quite some time. True b. 2 c.a. yet restricts the reach of the flood domain. IGMPv3 d. is necessary to contain failures and preserve the resiliency achieved by the use of multiple data centers. Transport infrastructure nature: The nature of the transport between data centers varies depending on the location of the data centers and the availability and cost of services in the . How many join interfaces can you create per logical overlay? a.

different areas. but will usually involve a mix of complex protocols. therefore. this results in failure containment within a data center. which is independent from the other site even though all sites have a common Layer 2 domain. Multihoming with end-to-end loop prevention: LAN extension techniques should provide a high degree of resiliency. the use of available bandwidth between data centers must be optimized to obtain the best connectivity at the lowest cost. and no extra configuration is needed to make it work. Mechanisms must be provided to prevent loops that might be induced when connecting bridged networks that are multihomed. Multicast and broadcast traffic should also be replicated optimally to reduce bandwidth consumption. OTV provides Layer 2 extension between remote data centers by using a concept called MAC address routing. This has a huge benefit over traditional data center interconnect solutions. and convergence times continue to improve. scale. Bridge Protocol Data Units (BPDU) are not forwarded on the overlay. load balancing. Balancing the load across all available paths while providing resilient connectivity between the data center and the transport network requires added intelligence above and beyond that available in traditional Ethernet switching and Layer 2 VPNs. An IP-capable transport is the most generalized offering. following are the minimum requirements to run OTV on each platform: Cisco Nexus 7000 Series and Cisco Nexus 7700 platform: . A cost-effective solution for the interconnection of data centers must be transport agnostic and give the network designer the flexibility to choose any transport between data centers based on business and operational preferences. Complex operations: Layer 2 VPNs can provide extended Layer 2 connectivity across data centers. OTV must be deployed only at specific edge devices. and it provides flexibility and enables long-reach connectivity. addition or removal of an edge device does not affect other edge devices on the overlay. meaning a control plane protocol is used to exchange MAC address reachability information between network devices providing the LAN extension functionality. and failure doesn’t propagate into other data centers and stops at the edge device. There is no need to extend Spanning Tree Protocol (STP) across the transport infrastructure. it is built in natively in OTV using the same control plane protocol along with loop prevention mechanisms. Finally. and path diversity: When extending Layer 2 domains across data centers. Bandwidth utilization with replication. Cisco has two main products that currently support OTV: Cisco Nexus 7000 Series switches and Cisco ASR 1000 Series aggregation services routers. Although the features. The loop prevention mechanisms prevent loops on the overlay and prevent loops from the sites that are multihomed to the overlay. OTV doesn’t require additional configuration to offer multihoming. A simple overlay protocol with built-in capability and point-to-cloud provisioning is crucial to reducing the cost of providing this connectivity. distributed provisioning. Each Ethernet frame is encapsulated into an IP packet and delivered across the transport infrastructure. multihoming of the Layer 2 sites onto the VPN is required. A solution capable of using an IP transport is expected to provide the most flexibility. and an operationally intensive hierarchical scaling model. Each site will have its own spanning-tree domain. 
which typically rely on data plane learning and the use of flooding across the transport infrastructure to learn reachability. This is built-in functionality. Therefore. and there is no need to establish virtual circuits between the data center sites.

Any M-Series (Cisco Nexus 7000 Series) or F3 (Cisco Nexus 7000 Series or 7700 platform) line card for encapsulation
Cisco NX-OS Release 5.0(3) or later (Cisco Nexus 7000 Series) or Cisco NX-OS Release 6.2(2) or later (Cisco Nexus 7700 platform)
Transport Services license
Cisco ASR 1000 Series: Cisco IOS XE Software Release 3.5 or later and the Advanced IP Services or Advanced Enterprise Services license

Note
This chapter focuses on OTV implementation using the Nexus 7000 Series.

OTV Terminology
Before you learn how the OTV control plane and data plane work, you must know the OTV terminology, which is shown in Figure 4-1. These terms are used throughout the chapter.

Figure 4-1 OTV Terminology

OTV edge device: The edge device performs the OTV functions: it receives the Layer 2 traffic for all VLANs that need to be extended to remote sites, dynamically encapsulates the Ethernet frames into IP packets, and sends them across the transport infrastructure. Each site can have one or more edge devices. OTV configuration is done on the edge device and is completely transparent to the rest of the infrastructure. The OTV edge device can be deployed in many places; however, because the edge device needs to see the Layer 2 traffic for all the VLANs that need to be extended, it is recommended to place it at the Layer 3 boundary of the site so it can see all the VLANs. At the time of writing this book, one of the recommended designs is to have a separate VDC on each Nexus 7000 acting as the OTV edge device, so we create one VDC per Nexus 7000 to have redundancy.

OTV internal interface: The Layer 2 interface on the edge device that connects to the VLANs to be extended is called the internal interface. The internal interface is usually configured as an access or trunk interface and receives Layer 2 traffic. The internal interface doesn't need any specific OTV configuration; typical Layer 2 functions, such as local switching, spanning-tree, and flooding, are performed on the internal interfaces.

OTV join interface: The OTV join interface is a Layer 3 interface. It can be a physical interface or subinterface, or it can be a logical interface, such as a Layer 3 port channel or a Layer 3 port-channel subinterface. You can have only one join interface associated with a given overlay, and you can have one join interface shared across multiple overlays. The join interface "joins" the overlay network and discovers the other remote edge devices in the overlay. The IP address of the join interface is used to advertise reachability of the MAC addresses present in the site.

OTV overlay interface: The OTV overlay interface is a logical multiaccess and multicast-capable interface. It must be defined by the user, and all the OTV configuration is applied to the overlay interface. When the OTV edge device receives a Layer 2 frame that needs to be delivered to a remote data center site, the frame is logically forwarded to the overlay interface, where it gets encapsulated in an IP unicast or multicast packet and sent to the remote data center.

Overlay network: This is a logical network that interconnects the OTV edge devices.

Transport network: This is the network that connects the OTV sites. It can be owned by the customer, by the service provider, or both; the only requirement is to allow IP reachability between the edge devices.

Site: An OTV site is the same as a data center, which has VLANs that must be extended to another site—meaning another data center.

Site VLAN: OTV edge devices send hellos on the site VLAN to detect other OTV edge devices in the same site. OTV uses the site VLAN to elect the authoritative edge device for the VLANs. It is recommended to use a dedicated VLAN as a site VLAN that is not extended across the overlay.

Authoritative edge device: To provide loop-free multihoming, OTV elects a designated forwarding edge device per site for each VLAN. The forwarder is known as the Authoritative Edge Device (AED). The edge devices talk to each other on the internal interfaces to elect the AED.

Prior to NX-OS release 5.2(1), OTV used the site VLAN within a site to detect and establish adjacencies with other OTV edge devices. However, the mechanism of electing AEDs based only on the communication established on the site VLAN can create situations (resulting from connectivity issues or misconfiguration) where OTV edge devices belonging to the same site fail to detect one another, thereby ending up in an active/active mode (for the same data VLAN). This could ultimately result in the creation of a loop scenario. To address this concern, starting with the 5.2(1) NX-OS release, each OTV device maintains dual adjacencies with other OTV edge devices belonging to the same DC site. OTV edge devices continue to use the site VLAN to discover and establish adjacency with other OTV edge devices in a site. This adjacency is called site adjacency, and it is part of the OTV control plane and doesn't need additional configuration.

In addition to the site adjacency, OTV devices also maintain a second adjacency, named overlay adjacency, established via the join interfaces across the Layer 3 network domain. To enable this new functionality, starting from NX-OS 5.2(1), it is now mandatory to configure each OTV device also with a site-identifier value. All edge devices that are in the same site must be configured with the same site identifier. This site identifier is advertised in IS-IS hello packets sent over both the overlay and on the site VLAN. The combination of the site identifier and the IS-IS system ID is used to identify a neighbor edge device in the same site.

OTV Control Plane
One of the key differentiators of OTV is the use of the control plane protocol, which runs between OTV edge devices. The control protocol function is to advertise MAC address reachability information instead of using data plane learning, which relies on flooding of Layer 2 traffic across the transport infrastructure. To be able to exchange MAC reachability information, OTV edge devices must become adjacent to each other. This can happen in two ways, depending on the nature of the transport infrastructure:

Multicast Enabled Transport: If the transport infrastructure is enabled for multicast, all OTV edge devices should be configured to join a specific Any Source Multicast (ASM) group, which is used to carry control protocol exchanges. If the transport infrastructure is owned by the service provider, the enterprise must negotiate the use of this ASM group with the service provider.

Unicast Enabled Transport: If the transport infrastructure is not enabled for multicast, you can use one or more OTV edge devices as an adjacency server. All OTV edge devices belonging to a specific overlay register with the available list of the adjacency servers.

Multicast Enabled Transport Infrastructure
When the transport infrastructure is enabled for multicast, we use a multicast group to exchange control protocol messages between the OTV edge devices. The OTV control protocol running on each OTV edge device generates hello packets that need to be sent to all other OTV edge devices. This is required to communicate its existence and to trigger the establishment of control plane adjacencies within the same overlay. The following steps explain the sequence, which leads to the discovery of all OTV edge devices belonging to a specific overlay:
1. Each OTV edge device sends an IGMP report to join the specific ASM group. The edge devices join the group as hosts, leveraging the join interface. This happens without enabling PIM on this interface; you need only to specify the ASM group to be used and associate it with a given overlay interface. The entire edge device will be a source and a receiver on that ASM group.
2. The OTV control protocol running on each OTV edge device generates hello packets that need to be sent to all other OTV edge devices.
3. The OTV hello messages are sent across the logical overlay to reach all OTV remote edge devices. For this to happen, the original frames must be OTV encapsulated, adding an external IP header. The source IP address in the external header is set to the IP address of the join interface of the edge device, whereas the destination is the multicast address of the ASM group dedicated to carrying the control protocol.

4. The multicast frames are carried across the transport and optimally replicated to reach all the OTV edge devices that joined that multicast group.
5. The receiving OTV edge devices decapsulate the packets.
6. The hellos are passed to the control protocol process, and the end result is the creation of OTV control protocol adjacencies between all edge devices.
The same process occurs in the opposite direction. The use of the ASM group to transport the hello messages allows the edge devices to discover each other as if they were deployed on a shared LAN segment.

After the OTV devices discover each other, the next step is to exchange MAC address reachability information:
1. The OTV edge device learns new MAC addresses on its internal interface using traditional data plane learning.
2. An OTV update message is created containing information for the MAC address learned through the internal interface.
3. The message is OTV encapsulated and sent into the OTV transport infrastructure. The IP destination address of the packet in the outer header is the multicast group used for OTV control protocol exchanges, and the source IP address in the external header is set to the IP address of the join interface of the OTV edge device.
4. The OTV update is replicated in the transport infrastructure and delivered to all the remote OTV edge devices on the same overlay.
5. The OTV edge devices receiving the update decapsulate the message and deliver it to the OTV control process.
6. The MAC address reachability information is imported to the MAC address tables of the edge devices. The only difference with a traditional MAC address table entry is that for a MAC address in the remote data center, instead of pointing to a physical interface, it points to the join interface of the remote OTV edge device.

The routing protocol used to implement the OTV control plane is IS-IS. It was selected because it is a standards-based protocol, originally designed with the capability of carrying MAC address information in the TLV. For security, it is possible to leverage the IS-IS HMAC-MD5 authentication feature to add an HMAC-MD5 digest to each OTV control protocol message. This prevents unauthorized routing messages from being injected into the network routing domain.

Note
To enable OTV functionality, you must use the feature otv command; the device doesn't display any OTV commands until you enable the feature. After enabling the OTV overlay interface, the OTV control plane protocol is enabled; it doesn't require a specific configuration command to do it.

Unicast-Enabled Transport Infrastructure
OTV can be deployed with a unicast-only transport infrastructure. As shown in Figure 4-2, the unicast deployment of OTV is used when a multicast-enabled transport infrastructure is not an option.

The OTV control plane over a unicast transport infrastructure works the same way the multicast transport works. The only difference is that each OTV edge device must create a copy of the control plane packet and send it to each remote OTV edge device. To be able to send control protocol messages to all the OTV edge devices, each OTV edge device needs to know the list of the other devices that are part of the same overlay. This is achieved by having one or more OTV edge devices act as an adjacency server. Each OTV edge device belonging to a specific overlay must register with the adjacency server. When the adjacency server receives hello messages from the remote OTV edge devices, it builds the list of all the OTV edge devices that are part of the same overlay. The list is called a unicast-replication list, and it is periodically sent to all the listed OTV edge devices.

Data Plane for Unicast Traffic
After all OTV edge devices that are part of an overlay establish adjacency and exchange MAC address reachability information, traffic can start flowing across the overlay. There are two types of Layer 2 communication: intrasite (traffic within the same site) and intersite (traffic between two different sites on the same overlay). The following process details the communication:

Intrasite unicast communication: Intrasite means that both servers that need to talk to each other reside in the same data center. As shown in Figure 4-2, MAC 1 needs to talk to MAC 2. They both belong to Site A, and they are in the same VLAN. When a Layer 2 frame is received at the aggregation device, which is the OTV edge device in this case, a normal Layer 2 lookup determines how to reach MAC 2. The information in the MAC table points to Eth1, and the frame is delivered to the destination.

Figure 4-2 Intrasite Layer 2 Unicast Communication

Intersite unicast communication: Intersite means that the servers that need to talk to each other reside in different data centers but belong to the same overlay. As shown in Figure 4-3, MAC 1 needs to talk to MAC 3. MAC 1 belongs to Site A, and MAC 3 belongs to Site B. The following procedure details the process for MAC 1 to establish communication with MAC 3:
1. The Layer 2 frame is received at the aggregation device, which is the OTV edge device at Site A in this case. A normal Layer 2 lookup happens; MAC 3 in the MAC table in Site A points to the IP address of the remote OTV edge device in Site B.
2. The OTV edge device in Site A encapsulates the Layer 2 frame. The source IP of the outer header is the IP address of the join interface of the OTV edge device in Site A, and the destination IP address in the outer header is the IP address of the join interface of the remote OTV edge device in Site B. After that, a normal IP unicast packet is sent across the transport infrastructure.
3. When the remote OTV edge device receives the IP packet, it decapsulates it and exposes the original Layer 2 packet.
4. The OTV edge device at Site B performs another Layer 2 lookup on the original Ethernet frame. This time, MAC 3 points to a local physical interface, and the frame is delivered to the MAC 3 destination.

Figure 4-3 Intersite Layer 2 Unicast Communication

Figure 4-4 shows the OTV data plane encapsulation on the original Ethernet frame. The OTV edge device removes the CRC and the 802.1Q fields from the original Layer 2 frame and adds an OTV shim (containing the VLAN and the overlay information) and an external IP header. The OTV encapsulation increases the overall MTU size by 42 bytes.

Figure 4-4 OTV Data Plane Encapsulation

All OTV control and data plane packets originate from an OTV edge device with the Don't Fragment (DF) bit set. In a Layer 2 domain, the assumption is that all intermediate LAN segments support at least the configured interface MTU size of the host. This means that mechanisms like Path MTU Discovery (PMTUD) are not an option in this case. That means you need to accommodate the additional 42 bytes in the MTU size along the path between the source and destination OTV edge devices, including the join interface and any links between the data centers that are in the OTV transport. There are two ways to overcome the issue of the increased MTU size:
Configure a larger MTU on all interfaces where traffic will be encapsulated (see the sketch following the note).
Lower the MTU on all servers so that the total packet size does not exceed the MTU of the interfaces where traffic is encapsulated.

Note
Nexus 7000 doesn't support fragmentation and reassembly capabilities.
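The following is a minimal sketch of the first option, applied on an OTV edge device VDC and its upstream aggregation VDC; the interface numbers follow the examples later in this chapter, and the value 9216 is simply a common jumbo-frame maximum—any value that accommodates the extra 42 bytes of OTV overhead end to end would work:

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# mtu 9216
! repeat on every Layer 3 interface in the OTV transport path
AGG-VDC-A(config)# interface ethernet 2/1
AGG-VDC-A(config-if)# mtu 9216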

Data Plane for Multicast Traffic
You might be faced with situations in which there is a need to establish multicast communication between remote sites. This can happen when the multicast sources are deployed in one site and the multicast receivers are in another remote site. In the OTV control plane section, we distinguished between two scenarios where the transport infrastructure is multicast enabled or not. When the transport is multicast enabled, we use a set of Source Specific Multicast (SSM) groups in the transport to carry the Layer 2 multicast streams. These groups are different from the ASM groups used for the OTV control plane. The mapping happens between the site multicast groups and the SSM group range in the transport infrastructure.

Step 1 starts when a multicast source is activated in a given data center site, which is explained in the following steps and shown in Figure 4-5:
1. A multicast source is activated on the west side and starts streaming traffic to the multicast group Gs. The multicast traffic needs to flow across the OTV overlay.
2. The local OTV edge device receives the first multicast frame and creates a mapping between the group Gs and a specific SSM group Gd available in the transport infrastructure. The range of SSM groups to be used to carry Layer 2 multicast data streams is specified during the configuration of the overlay interface.
3. The OTV control protocol is used to communicate the Gs-to-Gd mapping to all remote OTV edge devices. The mapping information specifies the VLAN (VLAN A) to which the multicast source belongs and the IP address of the OTV edge device that created the mapping.

Figure 4-5 Mapping of the Site Multicast Group to the SSM Range

Step 2 starts when a receiver in the remote site, deployed in the same VLAN as the multicast source, decides to join the multicast group Gs, which is explained in the following steps and shown in Figure 4-6:
1. The client sends an IGMP report inside the east site to join the Gs group.
2. The OTV edge device snoops the IGMP message and realizes there is an active receiver in the site interested in group Gs, belonging to VLAN A.
3. The OTV device sends an OTV control protocol message to all the remote edge devices to communicate this information.
4. The remote edge device in the west side receives the GM-Update and updates its Outgoing Interface List (OIL) with the information that group Gs needs to be delivered across the OTV overlay.
5. The edge device in the east side finds the mapping information previously received from the OTV edge device in the west side, identified by the IP address IP A. The east edge device, in turn, sends an IGMPv3 report to the transport to join the (IP A, Gd) SSM group. This allows building an SSM tree (group Gd) across the transport infrastructure that can be used to deliver the multicast stream Gs.

Figure 4-6 Receiver Joining Multicast Group Gs

When the OTV transport infrastructure is not multicast enabled, Layer 2 multicast traffic can be sent across the overlay using head-end replication by the OTV edge device deployed in the DC where the multicast source is enabled. IGMP snooping is used to ensure that Layer 2 multicast traffic is sent to the remote DC only when an active receiver exists. This helps to reduce the amount of head-end replication.
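On an OTV edge device, one way to spot-check the resulting group mappings is the show otv data-group command, which lists the active site groups and the SSM delivery groups to which they are mapped; exact availability and output format depend on the NX-OS release:

OTV-VDC-A# show otv data-group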

Failure Isolation
Data center interconnect solutions should provide a Layer 2 extension between multiple data center sites without giving up resilience and stability. OTV achieves this goal by providing four main functions: Spanning Tree Protocol (STP) isolation, Unknown Unicast traffic suppression, ARP suppression, and broadcast policy control. This section explains each of these functions.

STP Isolation
OTV doesn't forward STP BPDUs across the overlay. This is native in OTV; you don't need to add any configuration to achieve it. This control plane feature makes each site's STP domain independent, so the STP root placement configuration and parameters are decided for each site separately.

Broadcast traffic is handled the same way as the OTV hello messages. When we have a multicast-enabled transport, we leverage the same ASM multicast groups in the transport already used for the OTV control protocol. For a unicast-only transport infrastructure, head-end replication performed on the OTV device in the site originating the broadcast ensures traffic delivery to all the remote OTV edge devices that are part of the unicast-only list.

Note
Cisco NX-OS 6.2(2) introduces a new optional feature called dedicated broadcast group. This feature enables the administrator to configure a dedicated broadcast group to be used by OTV in the multicast core network. By separating the two multicast addresses, you can now enforce different QoS treatments for data traffic and control traffic.

Unknown Unicast Handling
The OTV control protocol advertises MAC address reachability information between OTV edge devices. If the MAC address is in a remote data center, the OTV control protocol maps the MAC address to the destination IP address of the join interface of the remote data center. When the OTV edge device receives a frame destined to a MAC address that is not in the MAC address table, the Layer 2 traffic is flooded out the internal interfaces. This is the normal behavior of a classical Ethernet interface; however, that doesn't happen via the overlay: unknown unicast frames are not forwarded across the logical overlay. It is assumed that no silent hosts are available, and eventually the OTV edge device will learn the MAC address of the host and will advertise it to the other OTV edge devices on the overlay. As a workaround for silent hosts, you can statically add the MAC address of the silent host in the OTV edge device MAC address table. This behavior is important, for example, in the case of Microsoft Network Load Balancing Services (NLBS), which requires the flooding of Layer 2 traffic to function. Starting with NX-OS 6.2(2), selective unknown unicast flooding based on the MAC address is supported; the command otv flood mac mac-address vlan vlan-id must be added, as shown in the sketch after this section.

ARP Optimization
ARP optimization is one of the native features in OTV that helps to reduce the amount of traffic sent across the transport infrastructure. When a device in one of the data centers, such as Site A, sends out an ARP request, which is a broadcast message, this request is sent across the OTV overlay to Site B. The ARP request eventually reaches the destination machine, which in turn creates an ARP reply message and sends it back to the originating host. The OTV edge device in Site A can snoop the ARP reply and cache the contained mapping information. When a subsequent ARP request originates for the same IP address in Site A, it will not be forwarded across the OTV overlay but will be locally answered by the OTV edge device. This helps to reduce the amount of traffic sent across the transport infrastructure.

Note
For the ARP caching, it is important to consider the interaction between ARP and CAM table aging timers, because incorrect settings can lead to traffic black holing. The ARP aging timer on the OTV edge devices should always be set lower than the CAM table aging timer. By default on the Nexus 7000, the OTV ARP aging timer is 480 seconds and the MAC aging timer is 1800 seconds. If the host's default gateway is placed on a device different from the Nexus 7000, you should set the ARP aging timer of that device to a value lower than its MAC aging timer.
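The following is a minimal sketch of these two knobs; the MAC address and VLAN are hypothetical placeholders for a silent host, and the aging value shown is simply the Nexus 7000 default restated explicitly:

OTV-VDC-A(config)# otv flood mac 0000.1111.2222 vlan 10
! hypothetical silent host whose frames should be flooded across the overlay
OTV-VDC-A(config)# mac address-table aging-time 1800
! keep the MAC (CAM) aging timer above the ARP aging timer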

OTV Multihoming
Multihoming is a native function in OTV, where two or more OTV edge devices provide a LAN extension for a given site. Having more than one OTV edge device at a given site can lead to an end-to-end loop, because OTV doesn't send spanning-tree BPDUs across the overlay. To be able to support multihoming, and to avoid creating an end-to-end loop, an Authoritative Edge Device (AED) concept is introduced. The AED role is elected on a per-VLAN basis at a given site. OTV uses a VLAN called the site VLAN within a site to detect adjacency with other OTV edge devices. In addition to the site adjacency, OTV maintains a second adjacency, named overlay adjacency, which happens via the OTV join interface across the Layer 3 network domain. To enable this functionality, the OTV device must be configured with a site-identifier value. All OTV edge devices in the same site are configured with the same site-identifier value. The site identifier is advertised through IS-IS hello packets. The combination of the system ID and the site identifier is used to identify a neighbor edge device belonging to the same site.

First Hop Redundancy Protocol Isolation
Another capability introduced by OTV is the ability to filter First Hop Redundancy Protocol (FHRP—HSRP, VRRP, and so on) messages across the overlay. Because the same VLANs exist in multiple sites, it is important that you enable the filtering of FHRP messages across the overlay. This is necessary to localize the gateway at a given site and optimize the outbound traffic from that site. If we don't filter the hello messages between the sites, a single default gateway will be elected for a given VLAN, which will lead to a suboptimal path for the outbound traffic. Filtering allows the use of the same FHRP configuration in different sites. In this case, the same default gateway is available in each data center, characterized by the same virtual IP and virtual MAC addresses, and the outbound traffic is able to follow the optimal and shortest path, always using the local default gateway.

OTV Configuration Example with Multicast Transport
This configuration example addresses OTV deployment at the aggregation layer with a multicast transport infrastructure. Examples 4-1 through 4-5 use a dedicated VDC to do the OTV function. It is assumed the VDC is already created, so you will not see steps to create the Nexus 7000 VDC. Figure 4-7 shows the topology used. You will be using VLANs 5 to 10 as the VLANs to be extended and VLAN 15 as the site VLAN.

Figure 4-7 OTV Topology for Multicast Transport

Example 4-1 Sample Configuration for How to Enable OTV VDC VLANs
Click here to view code image
OTV-VDC-A(config)# vlan 5-10, 15

Example 4-2 Sample Configuration of the Join Interface
Click here to view code image
OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# description [ OTV Join-Interface ]
OTV-VDC-A(config-if)# ip address 172.26.255.98/30
OTV-VDC-A(config-if)# ip igmp version 3
OTV-VDC-A(config-if)# no shutdown

Example 4-3 Sample Configuration of the Point-to-Point Interface on the Aggregation Layer Facing the Join Interface
Click here to view code image
AGG-VDC-A(config)# interface ethernet 2/1
AGG-VDC-A(config-if)# description [ Connected to OTV Join-Interface ]
AGG-VDC-A(config-if)# ip address 172.26.255.99/30
AGG-VDC-A(config-if)# ip pim sparse-mode
AGG-VDC-A(config-if)# ip igmp version 3
AGG-VDC-A(config-if)# no shutdown

Example 4-4 Sample PIM Configuration on the Aggregation VDC
Click here to view code image
AGG-VDC-A# show running pim
feature pim
interface ethernet 2/1
  ip pim sparse-mode
interface ethernet 1/10
  ip pim sparse-mode
interface ethernet 1/20
  ip pim sparse-mode
ip pim rp-address 172.26.255.101
ip pim ssm range 232.0.0.0/8

Example 4-5 Sample Configuration of the Internal Interfaces
Click here to view code image
interface ethernet 2/10
  description [ OTV Internal Interface ]
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 5-10, 15
  no shutdown

Before creating and configuring the overlay interface, certain configurations must be done. The following shows a sample configuration for the overlay interface for a multicast-enabled transport infrastructure:
1. Enable the OTV feature; otherwise, you will not see any OTV commands:
Click here to view code image
OTV-VDC-A(config)# feature otv
2. Configure the OTV site identifier. (It should be the same for all the OTV edge devices belonging to the same site.)
Click here to view code image
OTV-VDC-A(config)# otv site-identifier 0x1
3. Configure the OTV site VLAN:
Click here to view code image
OTV-VDC-A(config)# otv site-vlan 15
4. Create and configure the overlay interface:
Click here to view code image
OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv join-interface ethernet 2/2
5. Configure the OTV control group:
Click here to view code image
OTV-VDC-A(config-if)# otv control-group 239.1.1.1
6. Configure the OTV data group:
Click here to view code image
OTV-VDC-A(config-if)# otv data-group 232.1.1.0/26
7. Configure the VLAN range to be extended across the overlay:
Click here to view code image
OTV-VDC-A(config-if)# otv extend-vlan 5-10
8. Create a static default route to point out of the join interface. This is required to allow communication from the OTV VDC to the Layer 3 network domain:
Click here to view code image
OTV-VDC-A(config)# ip route 0.0.0.0/0 172.26.255.99
9. You must bring up the overlay interface; it is shut down by default:
Click here to view code image
OTV-VDC-A(config-if)# no shutdown

Example 4-6 Verify That the OTV Edge Device Joined the Overlay
Click here to view code image
OTV-VDC-A# sh otv overlay 1
OTV Overlay Information
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Control group       : 239.1.1.1
 Data group range(s) : 232.1.1.0/26
 Join interface(s)   : Eth2/2 (172.26.255.98)
 Site vlan           : 15 (up)

OTV-VDC-A# show otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay1 :
Hostname    System-ID       Dest Addr        Up Time   Adj-State
OTV-VDC-C   0022.5579.36c2  172.26.255.90    1w0d      UP
OTV-VDC-D   0022.5579.7c42  172.26.255.94    1w0d      UP

Example 4-7 Check the MAC Addresses Learned Through the Overlay
Click here to view code image
OTV-VDC-A# show otv route
OTV Unicast MAC Routing Table For Overlay1
VLAN  MAC-Address     Metric  Uptime    Owner    Next-hop(s)
----  --------------  ------  --------  -------  -----------
10    0000.0c07.ac64  1       00:00:55  site     ethernet 2/10
10    001b.54c2.3dc1  1       00:00:55  overlay  OTV-VDC-C

OTV Configuration Example with Unicast Transport
Examples 4-8 through 4-13 show sample configurations for an OTV deployment in a unicast-only transport.

The first step is to define the role of the adjacency server on one of the OTV edge devices. It is always recommended to configure a secondary adjacency server in another site for redundancy. The second step is to do the required configuration on the OTV edge devices not acting as an adjacency server, meaning acting as a client. The client devices are configured with the address of the adjacency server. When a new site is added, only the OTV edge devices for the new site need to be configured with the adjacency server addresses. Figure 4-8 shows the topology that will be used in Examples 4-8 through 4-13.

Figure 4-8 OTV Topology for Unicast Transport

Example 4-8 Primary Adjacency Server Configuration
Click here to view code image
feature otv
otv site-identifier 0x1
otv site-vlan 15
interface Overlay1
  otv join-interface e2/2
  otv adjacency-server unicast-only
  otv extend-vlan 5-10

Example 4-9 Secondary Adjacency Server Configuration
Click here to view code image
feature otv
otv site-identifier 0x2
otv site-vlan 15
interface Overlay1
  otv join-interface e1/2
  otv adjacency-server unicast-only
  otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 5-10

Example 4-10 Client OTV Edge Device Configuration
Click here to view code image
feature otv
otv site-identifier 0x3
otv site-vlan 15
interface Overlay1
  otv join-interface e1/1
  otv use-adjacency-server 10.1.1.1 10.1.2.2 unicast-only
  otv extend-vlan 5-10

Example 4-11 Identifying the Role of the Primary Adjacency Server
Click here to view code image
Primary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0001
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E2/2 (10.1.1.1)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : [None] / [None]

Example 4-12 Identifying the Role of the Secondary Adjacency Server
Click here to view code image
Secondary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0002
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/2 (10.1.2.2)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : 10.1.1.1 / [None]

Example 4-13 Identifying the Role of the Client OTV Edge Device
Click here to view code image
Generic_OTV# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0003
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/1 (10.1.3.3)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : No
 Adjacency Server(s) : 10.1.1.1 / 10.1.2.2

Summary
In this chapter, we went through the fundamentals of OTV and how it can be used as a data center interconnect solution. When it comes to Layer 2 extension solutions, OTV is the latest innovation. OTV uses a concept called MAC address routing. It provides native multihoming and end-to-end loop prevention, and it is simple to configure. With a few command lines, you can extend Layer 2 between multiple sites. You can run OTV over any transport infrastructure; the only requirement is IP connectivity between the sites.

Exam Preparation Tasks
Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 4-2 lists a reference of these key topics and the page numbers on which each is found.

Table 4-2 Key Topics for Chapter 4

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:
overlay transport virtualization (OTV)
Data Center Interconnect (DCI)
Ethernet over Multiprotocol Label Switching (EoMPLS)
Multiprotocol Label Switching (MPLS)
Virtual Private LAN Services (VPLS)
coarse wavelength-division multiplexing (CWDM)
Spanning Tree Protocol (STP)
Authoritative Edge Device (AED)
virtual LAN (VLAN)
bridge protocol data unit (BPDU)

Media Access Control (MAC)
Address Resolution Protocol (ARP)
Virtual Router Redundancy Protocol (VRRP)
First Hop Redundancy Protocol (FHRP)
Hot Standby Router Protocol (HSRP)
Internet Group Management Protocol (IGMP)
Protocol Independent Multicast (PIM)

Chapter 5. Management and Monitoring of Cisco Nexus Devices

This chapter covers the following exam topics:
Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Management and monitoring of network devices is an important element of data center operations. This includes the initial configuration of devices and the ongoing maintenance of all the components. Operational efficiency is an important business priority because it helps in reducing the operational expenditure (OPEX) of the data center. The Cisco Nexus platform provides a rich set of management and monitoring mechanisms. An efficient and rich management and monitoring interface enables a network operations team to effectively operate and maintain the data center network.

This chapter provides an overview of the operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. It also provides an overview of out-of-band and in-band management interfaces, and it identifies the mechanisms available in the Nexus platform to protect the control plane of the switch. The Nexus platform provides several methods for device configuration and management. These methods are discussed with some important commands for initial setup, configuration, and verification.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 5-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 5-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following operational planes is responsible for maintaining routing and switching tables of a multilayer switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane
2. What is the function of the data plane in a switch?
a. It maintains the routing table.
b. It forwards user traffic.
c. It stores user data on the switch.
d. It maintains the mac-address-table.
3. Which of the following operational planes of a switch is responsible for configuring and monitoring the switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane
4. Select the protocols that belong to the control plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP
5. Select the protocols that belong to the management plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP
6. The initial setup script of a Nexus switch performs the startup configuration of a system. Which of the following parameters are configurable by the initial setup script?
a. Switch name
b. mgmt0 IP address and subnet mask
c. Admin user password
d. IP address of loopback 0
e. Default route in the global routing table
7. When you execute the write erase command, it deletes the startup configuration of the switch. Which of the following parameters are not deleted by this command?
a. Boot variable

b. IPv4 configuration on the mgmt0 interface
c. Switch name
d. Admin user password
8. Which of the following are examples of logical interfaces on a Nexus switch?
a. mgmt0
b. E1/1
c. Loopback
d. Port channel
e. SVI
9. To upgrade Nexus switch software, which of the following methods can be used?
a. From the CLI using the install all command
b. From a web browser using HTTP protocol
c. Using DCNM software
d. From an NMS server using SNMP
10. When using ISSU to upgrade the Nexus 7000 switch software, which of the following components can be upgraded? (Choose all that apply.)
a. Kickstart image
b. Supervisor BIOS
c. System image
d. FEX image
e. I/O module BIOS and image
11. Identify the conditions in which In Service Software Upgrade (ISSU) cannot be performed on a Nexus 5000 switch.
a. Link Aggregation Control Protocol (LACP) fast timers are configured.
b. vPC is configured on the switch.
c. FEX is connected to the switch.
d. An application on the switch has a Cisco Fabric Services (CFS) lock enabled.
12. You are configuring a Cisco Nexus switch and would like to create a VLAN interface for VLAN 10. However, when you type the interface vlan 10 command, you get an error message. What could be the possible reason for this error?
a. The command syntax is incorrect.
b. The appropriate software license is missing.
c. First, you must create VLAN 10 on the switch, and then create the SVI.
d. First, enable the feature interface-vlan, and then create the SVI.
13. You want to check the current configuration of a Nexus switch, with all the default configuration parameters. Which of the following commands is used to perform this task?
a. show running-config default

b. show startup-config
c. show running-config all
d. show running-config
14. You want to check the last 10 entries of the log file. Which command do you use to perform this task?
a. show logging
b. show logging last 10
c. show last 10 logging
d. show logging 10
15. You want to see the list of interfaces that are currently in the up state. Which command will show you a brief list of interfaces in the up state?
a. show interface up brief
b. show interface brief | include up
c. show interface status
d. show interface connected

Foundation Topics
Operational Planes of a Nexus Switch
The data center network is built using multiple network nodes or devices, connected to each other and configured in different topologies. These network nodes are switches and routers. Typically, these network devices talk to each other using different control protocols to find the best network path. The purpose of building such network controls is to provide a reliable data transport for network endpoints. These network endpoints are servers and workstations. When a network endpoint sends a packet (data), it is switched by these network devices based on the best path they learned using the control protocols. In a large network with hundreds of these devices, it is important to build a management framework to monitor and manage these devices. This simple explanation of a network node leads to the three operational components of a network node: the data plane, control plane, and management plane. Figure 5-1 shows the operational planes of a Nexus switch in a data center.

Figure 5-1 Operational Plane of Nexus

Data Plane
The data plane is also called a forwarding plane. This component of a network device receives endpoint traffic from a network interface and decides what to do with this traffic. It can process incoming traffic by looking at the destination address in its forwarding table and deciding on an appropriate action. This action could be to forward the traffic to an outgoing interface, drop the traffic, or send it to the control plane for further processing. In a typical Layer 2 switch, the forwarding decisions are based on the destination MAC address of the data packets. The advancements in integrated circuit technologies allowed network device vendors to move the Layer 2 forwarding decision from Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) processors to Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA). The earlier method requires the packet to be stored in memory before a forwarding decision can be made; however, with ASICs and FPGAs, packets can be switched without storing them in memory. This approach is helpful in reducing the packet-handling time within the network device, thereby improving the data plane latency of network switches to tens of microseconds.

Store-and-Forward Switching
In store-and-forward switches, data packets received from an incoming interface must be completely stored in switch memory or ASIC before being sent to an outgoing interface. The switch then performs the forwarding process. Following are a few characteristics of store-and-forward switching:
Error Checking: Store-and-forward switches receive a frame completely and analyze it using the frame-check-sequence (FCS) calculation to make sure the frame is free from data-link errors.

Buffering: The process of storing and then forwarding enables the switch to handle various networking conditions. The ingress buffering process provides the flexibility to support any mix of Ethernet speeds.
Access Control List: Because a store-and-forward switch stores the entire packet, the switch can examine the necessary portions to permit or deny that frame.
The store-and-forward switch architecture is simple because it is easier to forward a stored packet.

Cut-Through Switching
In the cut-through switching mode, switches start forwarding a frame before the frame has been completely received. The forwarding decision is made as soon as the switch sees the portion of the frame with the destination address. Following are a few characteristics of cut-through switching:
Low Latency Switching: A cut-through switch can make a forwarding decision as soon as it has looked up the DMAC address of the data packet. The switch does not have to wait for the rest of the packet to make its forwarding decision. This technique reduces the network latency.
Invalid Packets: A cut-through switch starts forwarding the packet before receiving it completely and making sure it is valid. Therefore, packets with errors are forwarded to other segments of the network; cut-through switching only flags but does not drop invalid packets.
Egress Port Congestion: If a cut-through forwarding decision is made, but the egress port is busy transmitting a frame coming in from another interface, the switch needs to buffer this packet. In this case, the frame is not forwarded in a cut-through fashion.

Nexus 5500 Data Plane Architecture
The data plane architecture of Nexus switches varies based on the hardware platform; however, the basic principles of data plane operations are the same. The Cisco Nexus 5500 switch data plane architecture is primarily based on two custom-built ASICs developed by Cisco:
Unified Port Controller (UPC): It provides data plane packet processing on ingress and egress interfaces. Each UPC manages eight ports, and each port has a dedicated data path. Each data path connects to the Unified Crossbar Fabric (UCF) via a dedicated fabric interface at 12 Gbps; for a 10 Gbps port, this provides a 20 percent over-speed rate, which helps ensure line-rate throughput regardless of the internal packet overhead imposed by the ASICs.
Unified Crossbar Fabric (UCF): It provides a switching fabric by cross-connecting the UPCs. It is a single-stage, high-performance, nonblocking crossbar with an integrated scheduler. The UCF is always used to schedule and switch packets between the ports of the UPCs. The scheduler coordinates the use of the crossbar for a contention-free match between input and output pairs. In the Nexus 5500, an enhanced iSLIP algorithm is used for scheduling, which ensures high throughput, low latency, and weighted fairness across all inputs.

Figure 5-2 Nexus 5500 Data Plane

Nexus 5500 utilizes both cut-through and store-and-forward switching. Cut-through switching can be performed only when the ingress data rate is equivalent to or faster than the egress data rate. The switch fabric is designed to forward 10 G packets in cut-through mode; 1 G to 1 G switching is performed in store-and-forward mode. Table 5-2 shows the interface types and modes of switching.

Table 5-2 Nexus 5500 Switching Modes

Control Plane

The control plane maintains the information necessary for the data plane to operate. This information is collected and computed using complex protocols and algorithms. The control plane of a network node can speak to the control plane of its neighbor to share routing and switching information. This information is then processed to create tables that help the data plane to operate. Routing protocols such as OSPF, EIGRP, or BGP are examples of control plane protocols. However, the control plane is not limited to only routing protocols; it performs several other functions, including the following:
Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
FabricPath
Figure 5-3 shows the Layer 2 and Layer 3 control plane protocols of a Nexus switch.

Figure 5-3 Control Plane Protocols

Nexus 5500 Control Plane Architecture
The control plane architecture has similar components for almost all Nexus switches; however, the configuration of these components differs based on the platform and hardware version. To understand the control plane of Nexus, let's look at the Cisco Nexus 5548P switch. The control plane of this switch runs Cisco NX-OS software on a dual-core 1.7-GHz Intel Xeon Processor C5500/C3500 Series with 8 GB of DRAM. The supervisor complex is connected to the data plane in-band through two internal ports running 1-Gbps Ethernet, and the system is managed in-band, or through the out-of-band 10/100/1000-Mbps management port. Table 5-3 summarizes the control plane specifications of a Nexus 5500 switch.

Table 5-3 Nexus 5500 Control Plane Specifications

Figure 5-4 shows the control plane of a Nexus 5500 switch.

Figure 5-4 Control Plane of Nexus 5500

Cisco NX-OS software provides isolation between the control and data forwarding planes within the device. This isolation means that a disruption within one plane does not disrupt the other.

Control Plane Policing

The Control Plane Policing (CoPP) feature prevents unnecessary traffic from overwhelming the control plane resources. The purpose of CoPP is to protect the control plane of the switch from anomalies within the data plane, such as a broadcast storm, or from a denial of service (DoS) attack. The supervisor module of the Nexus switch performs both the management plane and the control plane functions; therefore, it is a critical element for overall network stability and availability. Any disruption or attack on the supervisor module can bring down the control and management planes of the switch, resulting in serious network outages. For example, a high volume of traffic from the data plane to the supervisor module could overload and slow down the performance of the entire switch. Another example is a denial of service (DoS) attack on the supervisor module. The attacker can generate IP traffic streams to the control plane or management plane at a very high rate. In this case, the supervisor module spends a large amount of time handling these malicious packets and is prevented from processing genuine traffic. Some examples of DoS attacks are outlined next:
Internet Control Message Protocol (ICMP) echo requests
IP fragments
TCP SYN flooding
If the control plane of the switch is impacted by excessive traffic, you will observe the following symptoms on the network:
High CPU utilization
Route flaps due to loss of the routing protocol neighbor relationship, updates, or keepalives
Unstable Layer 2 topology
Reduced service quality, such as poor voice, video, or a slow response for applications
Slow or unresponsive interactive sessions with the CLI
Processor resource exhaustion, such as the memory and buffers
Indiscriminate drops of incoming packets
Different types of packets can reach the control plane:
Receive packets: Packets that have the destination address of a router. These packets include router updates and keepalive messages.
Exception packets: Packets that need special handling by the supervisor module. This includes an ICMP unreachable packet and a packet with IP options set.
Redirected packets: Packets that are redirected to the supervisor module. Features such as Dynamic Host Configuration Protocol (DHCP) snooping or dynamic Address Resolution Protocol (ARP) inspection redirect some packets to the supervisor module.
Glean packets: If a Layer 2 MAC address for a destination IP address is not present in the FIB, the supervisor module receives the packet and sends an ARP request to the host.
CoPP classifies the packets into different classes and provides a mechanism to individually control the rate at which the supervisor module receives these packets. The CoPP feature enables a policy map to be applied to the supervisor-bound traffic. This policy map is similar to a normal QoS policy and is applied to all traffic entering the switch from a nonmanagement port. Packet classification can be done using the following parameters:
Source and destination IP address
Source and destination MAC address
VLAN
Source and destination port
Exception cause

After the packets are classified, the rate at which packets arrive at the supervisor module can be controlled by two mechanisms. One is called policing and the other is called rate limiting. Policing is the monitoring of data rates and burst sizes for a particular class of traffic. A policer determines three colors or conditions for traffic: conform (green), exceed (yellow), and violate (red). You can define only one action for each condition. The actions can be to transmit the packet, mark down the packet, or drop the packet. Single-rate policers monitor the specified committed information rate (CIR) of traffic. Dual-rate policers monitor both the CIR and the peak information rate (PIR) of traffic. You can configure the following parameters for policing:
Committed information rate (CIR): This is the desired bandwidth.
Committed burst (BC): The size of a traffic burst that can exceed the CIR within a given unit of time.
Peak information rate (PIR): The rate above which data traffic is negatively affected.
Extended burst (BE): The size that a traffic burst can reach before all traffic exceeds the PIR.
The setup utility allows building an initial configuration file using the system configuration dialog, and the Cisco NX-OS software installs a default CoPP system policy to protect the supervisor module from DoS attacks. Choosing one of the following CoPP policy options from the initial setup sets the level of protection for the control plane of the switch:
Strict: This policy is 1 rate and 2 color and has a BC value of 250 ms (except for the important class, which has a value of 1000 ms).
Moderate: This policy is 1 rate and 2 color and has a BC value of 310 ms (except for the important class, which has a value of 1250 ms). These values are 25 percent greater than the strict policy.
Lenient: This policy is 1 rate and 2 color and has a BC value of 375 ms (except for the important class, which has a value of 1500 ms). These values are 50 percent greater than the strict policy.
Dense: This policy is 1 rate and 2 color. The class l2-unpoliced has a BC value of 5 MB. The classes critical, important, management, normal, normal-dhcp, normal-dhcp-relay-response, and monitoring have a BC value of 1000 ms. The classes exception, undesirable, l2-default, and default have a BC value of 250 ms.
Skip: No control plane policy is applied. (In Cisco NX-OS releases prior to 5.2, this option is named none.)
If you do not select a policy, NX-OS software applies the strict policy. After the initial setup, the Cisco Nexus device hardware performs CoPP on a per-forwarding-engine basis. CoPP does not support distributed policing; therefore, you should choose rates so that the aggregate traffic does not overwhelm the supervisor module.
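As a rough sketch of what this looks like in practice on a Nexus 7000 (assuming a release that supports the copp profile command; availability varies by platform and NX-OS version), you can apply one of the predefined profiles and then inspect the per-class policing counters:

switch(config)# copp profile strict
switch# show copp status
switch# show policy-map interface control-plane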

Control Plane Analyzer
Ethanalyzer is a Cisco NX-OS built-in protocol analyzer, based on the command-line version of Wireshark. You can use Ethanalyzer to troubleshoot your network by capturing and decoding the control-plane traffic. The packets captured by Ethanalyzer are only control plane traffic destined to the supervisor CPU; it does not capture hardware-switched traffic between the data ports of the switch. Network administrators can learn about the amount and nature of the control-plane traffic within the switch. This tool is useful for troubleshooting problems related to the switch itself. Following are the key benefits of Ethanalyzer:
It is part of the NX-OS integrated management tool set.
It preserves and improves the operational continuity of the network infrastructure.
It improves troubleshooting and reduces time to resolution.
Ethanalyzer enables you to perform the following functions:
Capture packets sent to and received by the switch supervisor CPU
Set the number of packets to be captured
Set the length of the packets to be captured
Display packets with either very detailed protocol information or a one-line summary
Open and save the packet data captured
Filter packet captures based on many criteria
Filter packet displays based on many criteria
Decode the internal header of the control packet
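For example, a capture session along the following lines limits the capture to 50 STP frames arriving at the supervisor on the in-band interface and prints detailed decodes; the filter string and frame count are arbitrary illustrations:

switch# ethanalyzer local interface inband capture-filter "stp" limit-captured-frames 50 detail
switch# ethanalyzer local interface mgmt limit-captured-frames 10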

Management Plane
The management plane of the network node deals with the configuration and monitoring of the control plane. The management plane is used in the day-to-day administration of the network node. It provides an interface to the network administrators (see Figure 5-5) and NMS tools to perform actions such as the following:
Configure the network node using the command-line interface (CLI).
Configure the network node using network management protocols such as Simple Network Management Protocol (SNMP) and Network Configuration Protocol (NETCONF).
Monitor network statistics by collecting health and utilization data from the network node using network management protocols such as SNMP.
Manage the network node using an element manager such as Data Center Network Manager (DCNM). DCNM leverages the XML APIs of the Nexus platform.
Program the network node using One Platform Kit (onePK), Application Program Interfaces (API), Representational State Transfer (REST), JavaScript Object Notation (JSON), and the guest shell with support for Python.

Figure 5-5 Management Plane of Nexus

Nexus Management and Monitoring Features
The Nexus platform provides a wide range of management and monitoring features. Figure 5-6 shows an overview of the management and monitoring tools. Some of these tools are discussed in the following section.

Figure 5-6 Nexus Management and Monitoring Tools

Out-of-Band Management
In a data center network, out-of-band (OOB) management means that management traffic traverses a dedicated path in the network. This network is dedicated to management traffic only and is completely isolated from the user traffic. The purpose of creating an out-of-band network is to increase the security and availability of management capabilities within the data center.

OOB management enables a network administrator to access the devices in the data center for routine changes, monitoring, and troubleshooting without any dependency on the network built for user traffic. The advantage of this approach is that during a network outage, network operators can still reach the devices using the OOB management network. In a data center, dedicated switches, routers, and firewalls are deployed to create an OOB network. This network provides connectivity to management tools, the management ports of network devices, and the ILO ports of servers. Because this network is built for the management of data center devices, it is important to secure it so only administrators can access it. In Figure 5-7, network administrator 1 is using OOB network management, and network administrator 2 is using in-band network management. In-band management is discussed later in this section.

Figure 5-7 Out-of-Band Management

Cisco Nexus switches can be connected to the OOB management network using the following methods:
Console port
Connectivity Management Processor (CMP)
Management port

Console Port
The console port is an asynchronous serial port used for initial switch configuration. It provides an RS-232 connection with an RJ-45 interface. It is recommended to connect all console ports of the devices to a terminal or comm server router, such as a Cisco 2900, for remotely connecting to the console ports of devices. The remote administrator connects to the terminal server router and performs a reverse telnet to connect to the console port of the device, as sketched in the following example. Figure 5-8 shows remote access to the console port via the OOB network.
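As a minimal illustration of the reverse telnet step, assume a terminal server reachable at the hypothetical address 10.10.10.100 with the Nexus console attached to async line 3; Cisco terminal servers conventionally map line N to TCP port 2000 + N, so line 3 answers on port 2003:

admin-pc$ telnet 10.10.10.100 2003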

Figure 5-8 Console Port

Connectivity Management Processor
For management high availability, the Cisco Nexus 7000 Series switches have a dedicated management processor known as the connectivity management processor (CMP). The CMP provides OOB management and monitoring capability independent from the primary operating system. The CMP enables lights-out remote monitoring and management of the Cisco Nexus 7000 Series system, its supervisor module, and all other modules without the need for separate terminal servers. Key features of the CMP include the following:
It provides a dedicated operating environment independent of the switch processor.
It provides monitoring of the supervisor status and initiation of resets.
It provides the capability to take complete console control of the supervisor.
It provides the capability to initiate a complete system restart.
It provides complete visibility of the system during the entire boot process.
It provides access to critical log information on the supervisor.
You can access the CMP from the active supervisor module. Before you begin, ensure that you are in the default virtual device context (VDC). Figure 5-9 shows how to connect to the CMP from the active supervisor module. Note that the CMP is available only on the Nexus 7000 Supervisor Engine 1.

Figure 5-9 Connectivity Management Processor

Management Port (mgmt0)
The management port of the Nexus switch is an Ethernet interface that provides OOB management connectivity. The management port is also known as the mgmt0 interface. It enables you to manage the device using an IPv4 or IPv6 address. This port is part of the management VRF on the switch and supports speeds of 10/100/1000 Ethernet. OOB connectivity via the mgmt0 port is shown earlier in Figure 5-7. The OOB management interface (mgmt0) can be used to manage all VDCs. Each VDC has its own representation of mgmt0, with a unique IP address from a common management subnet for the physical switch, that can be used to send syslog, SNMP, and other management information.

In-band Management
In-band management utilizes the same network as user data to manage network devices. In this method, switches are configured with an in-band IP address on a routed interface, SVI, or loopback interface that is reachable via the same path as user data. The network administrator manages the switch using this in-band IP address. There is no need to build a separate OOB management network; however, if the network is down, in-band management will also not work.

In-band management is done using the vty lines of a Cisco NX-OS device. A vty line is used for all remote network connections supported by the device, regardless of protocol (SSH, SCP, SNMP, and Telnet are examples). The administrator uses Secure Shell (SSH) and Telnet to the vty lines to create a virtual terminal session. To help ensure that a device can be accessed through a local or remote management session, proper controls must be enforced on the vty lines. Cisco NX-OS devices have a limited number of vty lines; by default, up to 16 concurrent vty sessions are allowed. When all vty lines are in use, new management sessions cannot be established. In Cisco NX-OS, the number of configured lines available can be determined by using the show run vshd command. By default, vty lines automatically accept connections using any configured transport protocols. To disable a specific protocol from accessing vty sessions, you must globally disable the specific protocol. For example, to prevent a Telnet session to the vty line, you must disable Telnet globally using the no feature telnet command. You can configure an inactive session timeout and a maximum session limit for virtual terminals, as sketched in the following example.
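The following is a minimal sketch of these settings; the IP addressing is hypothetical, and the timeout and session-limit values are arbitrary illustrations:

switch(config)# interface mgmt 0
switch(config-if)# ip address 192.0.2.10/24
! the mgmt0 interface lives in the management VRF; give that VRF its own default route
switch(config)# vrf context management
switch(config-vrf)# ip route 0.0.0.0/0 192.0.2.1
! allow SSH only by disabling Telnet globally
switch(config)# no feature telnet
switch(config)# line vty
switch(config-line)# exec-timeout 15
switch(config-line)# session-limit 5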

Role-Based Access Control

To manage a Nexus switch, users log in to the switch and perform tasks as per their role in the organization. Role-Based Access Control (RBAC) provides a mechanism to define the amount of access that each user has when the user logs in to the switch. With RBAC, you first define the required user roles and then specify the rules for each switch management operation that the user role is allowed to perform. The roles are defined to restrict user authorization to perform switch management operations. These roles are assigned to users so they have limited access to switch operations as per their role. For example, a network operations center (NOC) engineer might want to check the health of the network, but he is not permitted to make changes to the switch configuration. NX-OS enables you to create local users on the switch and assign a role to these users, which then determines what the individual user is allowed to do on the switch. When you create a user account on the switch, associate the appropriate role with the user account.

User Roles

User roles contain rules that define the operations allowed on the switch. Each user can have multiple roles, and each role can contain multiple rules. Each role has a list of rules that define the permission to perform a task on the switch. If a user belongs to multiple roles, the user can execute a combination of all the commands permitted by these roles. For example, if Role-A allows access to only configuration operations, and Role-B allows access to only debug operations, users who belong to both Role-A and Role-B can access configuration and debug operations. When roles are combined, a permit rule takes priority over deny. For example, a user has Role-A, which denies access to a configuration command; however, this user also has Role-B, which allows access to the configuration command. In this case, the user is allowed access to the configuration commands.

You can also define roles to limit access to specific VSANs, VLANs, and interfaces of a Nexus switch. NX-OS has some default user roles; however, you can define new roles and assign them to the users. Table 5-4 shows the default roles on a Nexus 5000 switch.

Table 5-4 Default Roles on Nexus 5000

A Nexus 7000 switch supports features such as VDC, which is a virtual switch within the Nexus chassis. Therefore, a Nexus 7000 supports additional roles to administer and operate the VDCs. Table 5-5 shows the default roles on a Nexus 7000 switch.

Table 5-5 Default Roles on Nexus 7000

Rules

A rule is the basic element of a role that defines the switch operations that are permitted or denied by a user role. 256 rules can be assigned to a single role. Rules are executed in descending order (for example, rule 2 is checked before rule 1). You can apply rules for the following parameters:

Command: NX-OS command or group of commands defined in a regular expression. The command is the most granular control parameter of the rule.
Feature: All the commands associated with a switch feature. The feature is the next level of control parameter, which represents all commands associated with the feature. You can check the available feature names for this command using the show role feature command.
Feature group: Default or user-defined group of features. The feature group is the last level of control parameter; it combines related features for the ease of management. Enter the show role feature group command to display the default feature groups available for this parameter.

User Role Policies

The user role policies are used to limit user access to the switch resources. They allow limiting access to the interfaces, VLANs, and VSANs. Rules defined for a user role override the user role policies. For example, you define a user role policy to access a specific interface, and the user does not have access to the interface unless you configure a command rule to permit the interface command.

RBAC Characteristics and Guidelines

Some characteristics and guidelines for RBAC are outlined next; a configuration sketch follows this list:

Roles are associated to user accounts in the local database or to the users in a remote Authentication Authorization and Accounting (AAA) database.
A user account can belong to 64 roles. If a user account belongs to multiple roles, that user can execute the most privileged union of all the rules in all assigned roles.
64 user-defined roles can be configured per VDC.
Role changes do not take effect until the user logs out and logs in again.
Users see only the CLI commands they are permitted to execute when using context-sensitive help.
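As an illustrative sketch (the role name, user name, password, and interface range are hypothetical), a custom role combining command rules and an interface policy could be built and assigned as follows:

N7K(config)# role name noc-operator
N7K(config-role)# rule 1 deny command configure terminal
N7K(config-role)# rule 2 permit command show *
N7K(config-role)# interface policy deny
N7K(config-role-interface)# permit interface ethernet 1/1-8
N7K(config-role-interface)# exit
N7K(config-role)# exit
N7K(config)# username jsmith password Str0ngPw! role noc-operator

Because rules are checked in descending order, rule 2 is evaluated before rule 1, so show commands are permitted before the broader deny on configuration mode is consulted.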

Role features and feature groups should be used to simplify RBAC configuration when applicable. Role features and feature groups can be referenced only in the local user account database; they cannot be referenced in an AAA server command authorization policy (for example, AAA/TACACS+).
RBAC roles can be distributed to multiple Nexus 7000 devices using Cisco Fabric Services (CFS).
A user account can be associated to a privilege level or to RBAC roles.

Privilege Levels

Privilege levels are introduced in NX-OS to provide IOS-compatible authentication and authorization functionality. This feature is useful in a network environment that uses a common AAA policy for managing Cisco IOS and NX-OS devices. By default, the privilege level feature is disabled on the Nexus switch. To enable privilege level support on the Nexus switch, use the feature privilege configuration command. When this feature is enabled, the predefined privilege levels are created as roles. These predefined privilege levels cannot be deleted. Similar to the RBAC roles, the privilege levels can also be modified to meet different security requirements. On Nexus switches, the privilege level and RBAC roles can be configured simultaneously. Table 5-6 shows the IOS privilege levels and their corresponding NX-OS system-defined roles.

Table 5-6 NX-OS Privilege Levels

Simple Network Management Protocol

Cisco NX-OS supports Simple Network Management Protocol (SNMP) to remotely manage the Nexus devices. SNMP works at the application layer and facilitates the exchange of management information between network management tools and the network devices. Network administrators use SNMP to remotely manage network performance, troubleshoot faults, and plan for network growth. To enable the SNMP agent, you must define the relationship between the manager and the agent. There are three components of the SNMP framework:

SNMP manager: This is a network management system (NMS) that uses the SNMP protocol to manage the network devices.
SNMP agent: A software component that resides on the device that is managed, such as switches and routers.

Management Information Base (MIB): The collection of managed objects on the SNMP agent.

The network management system (NMS) uses the Cisco MIB variables to perform set operations to configure a device or a get operation to poll devices for specific information. The information collected from polling is analyzed to help troubleshoot network problems, verify configuration, create network performance reports, and monitor traffic loads. Figure 5-10 shows components of SNMP.

Figure 5-10 Nexus SNMP

The Cisco Nexus devices support the agent and MIB.

SNMP Notifications

In case of a network fault or other important event, the SNMP agent can generate notifications without any polling request from the SNMP manager. You can configure Cisco NX-OS to send notifications to multiple host receivers. NX-OS can generate the following two types of SNMP notifications:

Traps: An unacknowledged message sent from the agent to the SNMP managers listed in the host receiver table. These messages are less reliable because there is no way for an agent to discover whether the SNMP manager has received the message.
Informs: A message sent from the SNMP agent to the SNMP manager, for which the manager must acknowledge the receipt of the message. These messages are more reliable because of the acknowledgement of the message from the SNMP manager. If the Cisco Nexus device never receives a response for a message, it can send the inform request again.

SNMPv3

Security was always a concern for SNMP. Earlier SNMP versions used a community string for communication between the SNMP manager and agent. The community string was sent in clear text and used for authentication between the SNMP manager and agent. SNMPv3 solves this problem by providing security features, such as authentication and encryption. The security features provided in SNMPv3 are as follows:

Message integrity: Ensures that a packet has not been tampered with in transit.
Authentication: Determines the message is from a valid source.
Encryption: Scrambles the packet contents to prevent it from being seen by unauthorized sources.
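A configuration sketch follows; the community string, receiver addresses, and SNMPv3 credentials are hypothetical:

N5K-A(config)# snmp-server community NetOpsRO group network-operator
N5K-A(config)# snmp-server host 192.168.50.10 traps version 2c NetOpsRO
N5K-A(config)# snmp-server user snmpadmin network-operator auth sha Auth4Snmp! priv aes-128 Priv4Snmp!
N5K-A(config)# snmp-server host 192.168.50.11 informs version 3 auth snmpadmin
N5K-A(config)# snmp-server enable traps

The second receiver uses SNMPv3 informs at the auth security level, so messages are authenticated but not encrypted; the priv level would add encryption.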

The Cisco Nexus device supports SNMPv1, SNMPv2c, and SNMPv3. Each SNMP version has different security models and levels. A security model is an authentication strategy that is set up for a user and the role in which the user resides. A security level is the permitted level of security within a security model. A combination of a security model and a security level determines which security mechanism is employed when handling an SNMP packet. Table 5-7 shows SNMP security models and levels supported by NX-OS.

Table 5-7 NX-OS SNMP Security Models and Levels

Remote Monitoring

Remote Monitoring (RMON) is an industry standard remote network monitoring specification, developed by the Internet Engineering Task Force (IETF). RMON allows various network console systems and agents to exchange network-monitoring data. The Cisco NX-OS supports RMON alarms, events, and logs to monitor a Cisco Nexus device. In a Cisco Nexus switch, RMON is disabled by default, and no events or alarms are configured. You can enable RMON and configure your alarms and events by using the CLI or an SNMP-based network management station.

RMON Alarms

An RMON alarm monitors a specific management information base (MIB) object for a specified interval, triggers an alarm at a specified threshold value, and resets the alarm at another threshold value. You can use alarms with RMON events to generate a log entry or an SNMP notification when the RMON alarm triggers. You can set an alarm on any MIB object that resolves into an SNMP INTEGER type. The specified object must be an existing SNMP MIB object in standard dot notation (for example, 1.3.6.1.2.1.2.2.1.17 represents ifOutOctets.17). When you create an alarm, you specify the following parameters:

MIB: The object to monitor.
Sampling interval: The interval to collect a sample value of the MIB object.
Sample type: Absolute sample is the value of the MIB object. Delta sample is the difference between two consecutive sample values.
Rising threshold: Device triggers a rising alarm or resets a falling alarm on this threshold.
Falling threshold: Device triggers a falling alarm or resets a rising alarm on this threshold.
Events: The action taken when an alarm (rising or falling) is triggered.
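For example, the following sketch (index numbers, thresholds, community, and owner string are illustrative) defines an RMON event and an alarm that takes a delta sample of the ifOutOctets counter every 30 seconds:

N5K-A(config)# rmon event 1 log trap public description HighTxRate owner netops
N5K-A(config)# rmon alarm 10 1.3.6.1.2.1.2.2.1.17.1 30 delta rising-threshold 150000 1 falling-threshold 10000 1 owner netops
N5K-A# show rmon alarms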

RMON Events

RMON events are generated by RMON alarms. You can associate an event to an alarm. Different events can be generated for a falling alarm and a rising alarm. RMON supports the following types of events:

SNMP notification: Sends an SNMP notification when a rising alarm or a falling alarm is triggered
Log: Adds an entry in the RMON log table when the associated alarm is triggered
Both: Sends an SNMP notification and adds an entry in the RMON log table when the associated alarm is triggered

Syslog

Syslog is a standard protocol defined in IETF RFC 5424 for logging system messages to a remote server. Nexus switches support logging of system messages. This log can be sent to the terminal sessions, a log file, and syslog servers on remote systems. By default, Cisco Nexus switches only log system messages to a log file and output them to the terminal sessions. When the switch first initializes, messages are sent to syslog servers only after the network is initialized. You can configure the Nexus switch to send syslog messages to a maximum of eight syslog servers. To support the same configuration of syslog servers on all switches in a fabric, you can use Cisco Fabric Services (CFS) to distribute the syslog server configuration. Table 5-8 describes the severity levels used in system messages. When you configure the severity level, the system outputs messages at that level and lower.

Table 5-8 System Message Severity Levels
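For instance, a switch could be pointed at two syslog servers reachable through the management VRF with a sketch like the following (server addresses are hypothetical); severity 6 sends informational and lower-level messages:

N5K-A(config)# logging server 10.10.10.5 6 use-vrf management
N5K-A(config)# logging server 10.10.10.6 6 use-vrf management
N5K-A(config)# logging logfile messages 6
N5K-A# show logging server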

Embedded Event Manager

Cisco NX-OS Embedded Event Manager (EEM) provides real-time network event detection, programmability, and automation within the device. The Embedded Event Manager monitors events that occur on the device and takes action to recover or troubleshoot these events, based on the configuration. In many cases, these events are related to faults in the device, such as when an interface or a fan malfunctions. Figure 5-11 shows a high-level overview of NX-OS Embedded Event Management. There are three major components of EEM:

Event statement
Action statement
Policies

Event Statements

An event is any device activity for which some action, such as a workaround or a notification, should be taken. Event statements specify the event that triggers a policy to run. The event statement defines the event to look for as well as the filtering characteristics for the event. EEM defines event filters so only critical events or multiple occurrences of an event within a specified time period trigger an associated action. In Cisco NX-OS releases prior to 5.2, you can configure only one event statement per policy. However, beginning in Cisco NX-OS Release 5.2, you can configure multiple event triggers.

Action Statements

Action statements describe the action triggered by a policy; the action statement defines the action EEM takes when the event occurs. These actions could be sending an e-mail or disabling an interface to recover from an event. Each policy can have multiple action statements. If no action is associated with a policy, EEM still observes events but takes no actions. NX-OS EEM supports the following actions in action statements:

Execute any CLI commands.
Update a counter.
Log an exception.
Force the shutdown of any module.
Reload the device.
Shut down specified modules because the power is over budget.
Generate a syslog message.
Generate a Call Home event.
Generate an SNMP notification.
Use the default action for the system policy.

Policies

NX-OS EEM policy consists of an event statement and one or more action statements.
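As a simple illustration (the applet name, syslog pattern, and messages are hypothetical, and the syslog event trigger is assumed to be available on the running release), an EEM applet pairs one event statement with one or more actions:

N7K(config)# event manager applet FAN-ALERT
N7K(config-applet)# event syslog pattern "FAN_FAILURE"
N7K(config-applet)# action 1.0 syslog priority critical msg "EEM detected a fan fault"
N7K(config-applet)# action 2.0 snmp-trap strdata "EEM fan policy triggered"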

Figure 5-11 Nexus Embedded Event Manager

Generic Online Diagnostics (GOLD)

Cisco generic online diagnostics (GOLD) is a suite of diagnostic functions that provide hardware testing and verification. It helps in detecting hardware faults and making sure that internal data paths are operating as designed. If a fault is detected, the switch takes corrective action to mitigate the fault and to reduce potential network outages. Figure 5-12 shows an overview of Nexus GOLD. GOLD provides the following tests as part of the feature set:

Boot diagnostics
Continuous health monitoring
On-demand tests
Scheduled tests
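For example, enabling complete boot diagnostics and launching an on-demand test might look like the following sketch (module 6 is illustrative); these commands are described in the text that follows:

N7K(config)# diagnostic bootup level complete
N7K# show diagnostic boot level
N7K# diagnostic start module 6 test all
N7K# show diagnostic result module 6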

Figure 5-12 Nexus GOLD

During the boot diagnostics, GOLD tests are executed when the chassis is powered up or during an online insertion and removal (OIR) event. Boot diagnostics include disruptive tests and nondisruptive tests that run during system boot and system reset. To enable boot diagnostics, use the diagnostic bootup level complete command. To display the diagnostic level that is currently in place, use show diagnostic boot level.

Runtime diagnostics (also known as health monitoring diagnostics) include nondisruptive tests that run in the background during the normal operation of the switch. Continuous health checks occur in the background, as per schedule, and on demand from CLI. You can also start or stop an on-demand diagnostic test. For example, to start an on-demand test on module 6, you can use the following command: diagnostic start module 6 test all. It is recommended to start a disruptive diagnostic test manually during a scheduled network maintenance window. If a failure condition is observed during the test, corrective actions are taken through Embedded Event Manager (EEM) policies.

Smart Call Home

The Call Home feature provides e-mail-based notification of critical system events. You can use this feature to notify any external entity when an important event occurs on your device. This feature can be used to page a network support engineer, e-mail a network operations center, or use Cisco Smart Call Home services to automatically generate a case with the technical assistance center (TAC).

Call Home includes a fixed set of predefined alerts on your switch. These alerts are grouped into alert groups, and CLI commands are assigned to execute when an alert in an alert group occurs. The switch includes the command output in the transmitted Call Home message. Call Home can deliver alerts to multiple recipients that you configure in destination profiles. You can configure up to 50 e-mail destination addresses for each destination profile. Following are advantages of using the Call Home feature:

Automatic execution and attachment of relevant CLI command output.
Multiple concurrent message destinations.
Multiple message format options, such as the following:

Short Text: Suitable for pagers or printed reports.
Full Text: Fully formatted message information suitable for human reading.
XML: Machine-readable format that uses the Extensible Markup Language (XML) and the Adaptive Messaging Language (AML) XML schema definition (XSD). The XML format enables communication with the Cisco Systems Technical Assistance Center (Cisco-TAC).

Nexus Device Configuration and Verification

This section reviews the process of initial setup of Nexus switches, basic switch configuration, and some important CLI commands to configure and verify network connectivity.

Perform Initial Setup

When Nexus switches are shipped from the factory, there is no default configuration on these switches. Therefore, when you power on the switch the first time and there is no configuration file found in the NVRAM of the switch, it goes into an initial setup mode. This initial setup mode indicates that the switch is unable to find the configuration file at the startup; therefore, this initial setup mode is also called startup configuration mode. The startup configuration mode of the switch is an interactive system configuration dialog that guides you through the basic configuration of the system. The startup configuration enables you to build an initial configuration file with enough information to provide basic system management connectivity. After the initial configuration file is created, you can use CLI to perform additional configurations of the switch. You can also run the initial setup script anytime to perform a basic device configuration by using the setup command from the CLI.

To start a new switch configuration, first you perform the startup configuration using the console port of the switch. The initial setup script for the startup configuration is executed automatically the first time the switch is started. This setup script can have slightly different steps based on the Nexus platform. Figure 5-13 shows the initial configuration process of a Nexus switch.

Figure 5-13 Initial Configuration Procedure

During the startup configuration, the switch creates an admin user as the only initial user. The only step you cannot skip is to provide a password for the administrator. You can press Ctrl+C at any prompt of the script to skip the remaining configuration steps and use what you have configured up to that point. To use the default answer for a question, just press Enter. If a default answer is not available, the setup utility uses what was previously configured on the switch and goes to the next question. For example, if you press Enter to skip the IP address configuration of the mgmt0 interface, and you have already configured the mgmt0 interface earlier, the setup utility does not change that configuration. However, if a default value is displayed on the prompt and you press Enter for that step, the setup utility changes to a configuration using that default, and not the configured value. The default

option is to use strong passwords. When you configure a strong password, you must use a minimum of eight characters with a combination of upper- and lowercase alphabetic and numerical characters. You can change the password strength check during the initial setup script by answering no to the enforce secure password standard question, or later using the CLI command no password strength-check. After the admin user password is configured, the startup script asks you to proceed with basic system configuration. Answer yes if you want to enter the basic system configuration dialog. During the basic system configuration, the default answers are mentioned in square brackets [xxx]; press Enter if you would like to accept the default parameter. After the initial parameters are provided to the startup script, the switch shows you the initial configuration based on your input, so you can verify it. If the configuration is correct, you can accept this configuration, and it is saved in the switch NVRAM. Figure 5-14 shows the initial configuration steps of Nexus switches.

Click here to view code image

Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin": C1sco123
Confirm the password for "admin": C1sco123

---- Basic System Configuration Dialog VDC: 2 ----

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

*Note: setup is mainly used for configuring the system initially, when no configuration is present. So setup always assumes system defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]: y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : N7K
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 172.16.1.20
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 172.16.1.1
Configure advanced IP options? (yes/no) [n]: n
Enable the telnet service? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure default interface layer (L3/L2) [L3]: L3
Configure default switchport interface state (shut/noshut) [shut]: shut

The following configuration will be applied:
password strength-check
switchname N7K
interface mgmt0
ip address 172.16.1.20 255.255.255.0
no shutdown
vrf context management
ip route 0.0.0.0/0 172.16.1.1
exit

no feature telnet
ssh key rsa 1024 force
feature ssh
no system default switchport
system default switchport shutdown

Would you like to edit the configuration? (yes/no) [n]: <enter>
Use this configuration and save it? (yes/no) [y]: y

Figure 5-14 Initial Configuration Dialogue

After the initial configuration is completed, you can disconnect from the console port and access the switch using the IP address configured on the mgmt0 port. It is important to note that the initial configuration script supports only the configuration of the IPv4 address on the mgmt0, and you can configure IPv6 later using CLI.

You can erase the configuration of a Nexus switch to return to the factory default. To delete the switch startup configuration from NVRAM, use the write erase command. This command will not delete the running configuration of the switch, and the switch will continue to operate normally. The write erase command erases the entire startup configuration, except for the following:

Boot variable definitions
The IPv4 configuration on the mgmt0 interface, including the IP address and subnet mask
The route address in the management VRF

To remove the boot variable definitions and the IPv4 configuration on the mgmt0 interface, you can use the write erase boot command. When you restart the switch after erasing its configuration from NVRAM, the switch will go into the startup configuration mode.

In-Service Software Upgrade

Each Nexus switch is shipped with the Cisco NX-OS software. The Cisco NX-OS software consists of two images: the kickstart image and the system image. After installing the Nexus switch and putting it into production, a time might come when you have to upgrade the NX-OS software to address the software defects or to add new features to the switch. The software upgrade of a switch in the data center should not have any impact on availability of services. In-Service Software Upgrades (ISSU) enable a network operator to upgrade NX-OS software without disrupting the operation of the switch. ISSU upgrades the software images without disrupting data-plane traffic; there is minimal disruption to control-plane traffic only. In the case where ISSU will cause disruption to the data-plane traffic, the Cisco NX-OS software warns you before proceeding with the software upgrade. This enables you to reschedule the software upgrade to a time less critical for your services to minimize the impact of outage.

To upgrade the switch to newer software, the network administrator initiates the ISSU manually using one of the following methods:

From CLI using a single command
From the administrative interface of Cisco Data Center Network Manager (DCNM)

On Nexus switches, when an ISSU is initiated, it updates the following components on the system as needed:

Kickstart image
Supervisor BIOS
System image
Fabric extender (FEX) image
I/O module BIOS and image

Cisco NX-OS software supports ISSU from release 4.2.1 and later. A well-designed data center considers service continuity by implementing switch-level redundancy at each layer within the data center and by dual-homing servers to redundant switches at the access layer. In a topology where Cisco Nexus 2000 Series fabric extenders are used, the ISSU performed on the parent switch automatically downloads and upgrades each of the attached fabric extenders; each fabric extender is upgraded to the Cisco NX-OS software release of the parent switch after the parent switch ISSU completes.

When a network administrator initiates ISSU, a multistage ISSU cycle is initiated by the installer service. This multistage ISSU cycle is designed to minimize overall system interruption with no impact on data traffic forwarding, and the process varies based on the Nexus platform. To explain ISSU, examples of Nexus 5000 and Nexus 7000 switches are described in the following sections.

ISSU on the Cisco Nexus 5000

The Nexus 5000 switch is a single supervisor system, and the ISSU on this switch causes the supervisor CPU to reset and load the new software image. During the upgrade, the control plane of the switch is inactive, but the data plane of the switch continues its operation and keeps forwarding packets. When the system recovers from the reset, it is running an updated version of NX-OS. At this time the runtime state of the control plane and data plane are synchronized, and supervisor state is restored to the previously known configuration; control plane operation is resumed. It takes fewer than 80 seconds to restore control plane operation. During this reload, the data plane keeps forwarding packets; therefore, existing flows for servers connected to the Cisco Nexus device should see no traffic disruption. This separation in the control and data planes leads to a software upgrade with no service disruption. Figure 5-15 shows the ISSU process on Nexus 5000 switch.

Figure 5-15 ISSU Process on Nexus 5000

To perform ISSU, download the required software image and copy it to your upgrade location. Typically, these images are copied to the switch bootflash. Make sure both the kickstart file and the system file have the same version. Use the dir command to verify that required NX-OS files are present on the system. After verifying the image on the bootflash, use the show install all impact command to determine which components of the system will be affected by the upgrade. The syntax of this command is show install all impact kickstart n5000-uk9-kickstart.5.0.3.N2.2.bin system n5000-uk9.5.0.3.N2.2.bin.

Some of the important considerations for Nexus 5000 ISSU are the following; a sample upgrade session follows this list:

The Nexus 5000 switch does not allow configuration changes during ISSU. All the configuration change requests from CLI and SNMP are denied.
Network topology changes, such as STP, and module insertion are not allowed during ISSU operation.
The Nexus 5000 switch undergoing ISSU must not be a root switch in the STP topology, and no interface should be in a spanning tree designated forwarding state.
ISSU cannot be performed if any application has Cisco Fabric Services (CFS) lock enabled; FC fabric changes that affect zoning, FSPF, and domain manager are examples.
Link Aggregation Control Protocol (LACP) fast timers are not supported. The ISSU process will abort if the system is configured with LACP fast timers.
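Tying the workflow together, a hypothetical Nexus 5000 upgrade session might look like the following; the file names match the release shown in this chapter's examples:

N5K-A# dir bootflash: | include n5000
N5K-A# show install all impact kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2.bin system bootflash:n5000-uk9.5.0.3.N2.2.bin
N5K-A# install all kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2.bin system bootflash:n5000-uk9.5.0.3.N2.2.bin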

ISSU is supported on Nexus 5000 in a vPC configuration. The vPC switches continue their control plane communication during the entire ISSU process, except when the ISSU process resets the CPU of the switch being upgraded. While upgrading the switches in a vPC topology, you should start with the primary vPC switch. The secondary vPC switch should be upgraded after the ISSU process completes successfully on the primary device. When one peer switch is undergoing an ISSU, the other peer switch locks the configuration until the ISSU is completed. The peer switch is responsible for holding on to its state until the ISSU process is completed successfully. In FEX virtual port channel (vPC) configurations, the primary switch is responsible for upgrading the FEX.

Bridge assurance must be disabled for nondisruptive ISSU; however, it is okay to have bridge assurance enabled on the vPC peer-link. The Cisco Nexus 5500 platform switches support Layer 3 functionality, but the nondisruptive software upgrade is not supported when the Layer 3 license is installed.

Note: Always refer to the latest release notes for ISSU guidelines.

ISSU on the Cisco Nexus 7000

The Nexus 7000 switch is a modular system with supervisor and I/O modules installed in a chassis. It is a fully distributed system with nondisruptive Stateful Switch Over (SSO) and ISSU capabilities. The Nexus 7000 ISSU leverages the SSO capabilities of the switch to perform a nondisruptive software upgrade. Please note that the SSO feature is available only when dual supervisor modules are installed in the system. On Nexus 7000, ISSU is achieved using a rolling update process. In this process, first the standby supervisor is upgraded to the new software, and a failover is done so the standby supervisor with the new image becomes active. After this step, the control plane of the switch is running on new software. The software is then updated on the supervisor that is now in standby mode (previously active). When both supervisors are upgraded successfully, the linecards or I/O modules are upgraded. Therefore, on Nexus 7000, both the data plane and the control plane are available during the entire software upgrade process. Figure 5-16 shows the ISSU process for the Nexus 7000 platform.

Figure 5-16 ISSU Process on Nexus 7000

It is important to note that the ISSU process results in a change in the state of active and standby supervisors. Figure 5-17 shows the state of supervisor modules before and after the software upgrade.

Figure 5-17 Nexus 7000 Supervisors Before and After ISSU

Before you start ISSU on a Nexus 7000 switch, make sure that the upgrade process will be nondisruptive. An ISSU might be disruptive if you have configured features that are not supported on the new software image. To determine ISSU compatibility, use the show incompatibility system command. It is recommended to use the show install all impact command to check whether the images are compatible for an upgrade. You can start the ISSU using the install all command. Following are some of the important considerations for a Nexus 7000 ISSU:

When you upgrade Cisco NX-OS software on Nexus 7000, it is upgraded for the entire system, that is, including all VDCs on the physical device. It is not possible to upgrade the software individually for each VDC.
Configuration mode is blocked during the Cisco ISSU to prevent any changes.
The software upgrade might be disruptive under the following conditions: a single-supervisor system with a kickstart or system image changes, or a dual-supervisor system with incompatible system software images.
VPC peers can only operate dissimilar versions of the Cisco NX-OS software during the upgrade or downgrade process. Operating VPC peers with dissimilar versions after the upgrade or downgrade process is complete is not supported.
Prior to NX-OS Release 5.2(1), ISSU performs the linecard software upgrade serially; one I/O module is upgraded at a time. This method was very time consuming; therefore, support for simultaneous line card upgrade is added into this release to reduce the ISSU time. To start a parallel upgrade of line cards, use the following ISSU command: install all kickstart <image> system <image> parallel.
Starting from NX-OS Release 6.1(1), a parallel upgrade on the FEX is supported by ISSU. To initiate parallel FEX software upgrades, use the parallel keyword in the command. For releases

prior to Cisco NX-OS Release 6.1(1), only a serial upgrade of FEX is supported. Even if a user types the parallel keyword in the command, the upgrade will be a serial upgrade.

Important CLI Commands

This section discusses important NX-OS CLI commands. These commands will help you in configuration and verification of basic switch connectivity. In NX-OS all CLI commands are organized hierarchically. The commands that perform similar functions are grouped together at the same level. For example, all the commands that display system, configuration, or hardware information are grouped under the show command, and all commands that are used to configure the switch are grouped under the configure terminal command. If you would like to use a command, you first must enter into the group hierarchy and then type the command. For example, to configure a system, you use the configure terminal command and then use the appropriate command to configure the system. Example 5-1 shows how to enter into switch configuration mode, the use of the where command to find current context, and command syntax help for the interface command.

Command-Line Help

To obtain a list of available commands, type a question mark (?) at the system prompt. Pressing the Tab key displays a brief list of the commands that are available at the current branch, one per line, without any detailed description. To list keywords or arguments of a command, enter a question mark (?) in place of a keyword or argument. This form of help is called command syntax help. In syntax help, you see full parser help and descriptions.

Example 5-1 Command-Line Help

Click here to view code image

N5K-A# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
N5K-A(config)# interface ?
  ethernet          Ethernet IEEE 802.3z
  fc                Fiber Channel interface
  loopback          Loopback interface
  mgmt              Management interface
  port-channel      Port Channel interface
  san-port-channel  SAN Port Channel interface
  vfc               Virtual FC interface
  vlan              Vlan interface
N5K-A(config)# interface <press-tab-key>
ethernet    fc      loopback    mgmt    port-channel    san-port-channel
vfc         vlan
N5K-A(config)# interface

Automatic Command Completion

In any command mode, you can type an incomplete command and then press the Tab key to complete the rest of the command. Pressing the Tab key provides command completion to a unique but partially typed command string. In the Cisco NX-OS software, command completion may also be used to complete a system-defined name and, in some cases, user-defined names. In Example 5-2, the Tab key

is used to complete the vrf context command and then the name of a VRF on the command line.

Example 5-2 Automatic Command Completion

Click here to view code image

N5K-A(config)# vrf con<press-tab-key>
N5K-A(config)# vrf context
N5K-A(config)# vrf context man<press-tab-key>
N5K-A(config)# vrf context management
N5K-A(config-vrf)#

Entering and Exiting Configuration Context

To perform a configuration task, first you must switch to the appropriate configuration context in the group hierarchy. For example, to configure an interface, you enter into the interface context by typing interface ethernet 1/1. All interface-related commands for interface Ethernet 1/1 will go under the interface. You can use the where command to display the current context and login credentials. When you type a command that does not belong to the current context, NX-OS will go out of the context to find the match; for example, you can run all show commands from the configuration context. The exit command will take you back one level in the hierarchy. The Ctrl+Z key combination exits configuration mode but retains the user input on the command line. Commands discussed in this section and their output are shown in Example 5-3.

Example 5-3 Configuration Context

Click here to view code image

N5K-A(config)# interface ethernet 1/1
N5K-A(config-if)# where
  conf; interface Ethernet1/1   admin@N5K-A
N5K-A(config-if)# exit
N5K-A(config)# show <Press CTRL-Z>
N5K-A# show

Display Current Configuration

The show running-config command displays the current running configuration of the switch. If you run this command with command-line parameter all, it displays all the configuration, including the default configuration of the switch. If you need to view a specific feature's portion of the configuration, you can do so by running the command with the feature specified. For example, show running-config interface ethernet 1/1 shows the configuration of interface Ethernet 1/1.

Example 5-4 show running-config

Click here to view code image

N5K-A# sh running-config ipqos

!Command: show running-config ipqos

!Time: Sun Sep 21 12:45:14 2014

version 5.0(3)N2(2)
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy

Command Piping and Parsing

To locate information in the output of a show command, the Cisco NX-OS software provides searching and filtering options. These searching and filtering options follow a pipe character (|) at the end of the show command. The pipe parameter can be placed at the end of all CLI commands to qualify, redirect, or filter the output of the command. Many advanced pipe options are available, including the following:

Get regular expression (grep) and extended get regular expression (egrep)
Less
No-more
Head and tail

The pipe option can also be used multiple times or recursively. The egrep parameter enables you to search for a word or expression and print a word count; it also has other options. In Example 5-5, the show spanning-tree | egrep ignore-case next 4 mstp command is executed. The show spanning-tree command's output is piped to egrep, where the qualifiers indicate to display the lines that match the regular expression mstp, and ignore-case in the command indicates that this match must not be case sensitive. When a match is found, the next 4 in the command means that 4 lines of output following the match of mstp are also to be displayed.

Example 5-5 Command Piping and Parsing

Click here to view code image

N5K-A# sh spanning-tree | egrep ?
  WORD          Search for the expression
  count         Print a total count of matching lines only
  ignore-case   Ignore case difference when comparing strings
  invert-match  Print only lines that contain no matches for <expr>
  line-exp      Print only lines where the match is a whole line
  line-number   Print each match preceded by its line number

  next          Print <num> lines of context after every matching line
  prev          Print <num> lines of context before every matching line
  word-exp      Print only lines where the match is a complete word
N5K-A# sh spanning-tree | egrep ignore-case next 4 mstp
Spanning tree enabled protocol mstp
  Root ID    Priority    32768
             Address     0005.73cb.6d01
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Feature Command

Cisco NX-OS supports many features and protocols; however, not all features are enabled by default. NX-OS only enables those features that are required for initial network connectivity. You can enable specific features, such as OSPF, PIM, vPC, vFC, and FCOE, by using the feature command, or disable them if they are no longer required. This approach optimizes the operating system and increases availability of the switch by running only required processes on the switch. It is recommended that you keep the features disabled until they are required.

Because of the feature command on the Nexus platform, if a feature is not enabled on the switch, the process for that feature is not running, and the configuration and verification commands for that feature are not available from the CLI; these commands and their related configuration contexts are not visible until that feature is enabled on the switch. After you enable the feature on the switch, the configuration and verification commands become available. For some features, the process will not start until you configure the features; an example of such a feature is OSPF. In the same manner other features can be enabled before configuring them on the switch.

Nexus switch supports physical interfaces, such as 1Gbps and 10Gbps Ethernet, and logical interfaces, such as SVI and tunnels. In Example 5-6, the show running-config | include feature command is used to determine the features that are currently enabled on the switch. In the default configuration, only feature telnet and feature lldp are enabled. You can also use the show feature command to verify the features that are currently enabled on the switch. When you use the interface command, followed by pressing the Tab key for help, you can see only Ethernet, mgmt, and Ethernet port channel interface types in the Cisco NX-OS software command output. After enabling the interface-vlan feature on the Nexus switch, the SVI interface type appears as a valid interface option.

Example 5-6 feature Command

Click here to view code image

N5K-A(config)# sh running-config | include feature
feature fcoe
feature telnet
no feature http-server
feature lldp
N5K-A(config)# interface <Press-Tab-Key>
ethernet      loopback      port-channel      vfc

fc            mgmt          san-port-channel
N5K-A(config)# feature interface-vlan
N5K-A(config)# interface <Press-Tab-Key>
ethernet      fc            loopback          mgmt
port-channel  san-port-channel      vfc       vlan
N5K-A(config)#

Verifying Modules

The show module command provides a hardware inventory of the interfaces and modules that are installed in the chassis. In Nexus 5000 Series switches, the main system is referred to as module 1. The number of ports associated with module 1 will vary based on the model of the switch; this number can be seen under the Module-Type column of the show module command. The number and type of expansion modules are dependent on the model of the system. If expansion modules are installed, they are also visible in this command. Modules that are inserted into the expansion module slots will be numbered according to the slot into which they are inserted. If fabric extenders (FEX) are attached, each FEX is also considered an expansion module of the main system. FEX slot numbers are assigned by the administrator during FEX configuration. Example 5-7 shows the output of the show module command for the Nexus 5548 switch.

Example 5-7 Nexus 5000 Modules

Click here to view code image

N5K-A# show module
Mod Ports  Module-Type                       Model            Status
--- -----  --------------------------------  ---------------  ----------
1   32     O2 32X10GE/Modular Universal Pla  N5K-C5548UP-SUP  active *
3   0      O2 Daughter Card with L3 ASIC     N55-D160L3       ok

Mod  Sw           Hw
---  -----------  ----
1    5.0(3)N2(2)  1.0
3    5.0(3)N2(2)  1.0

Mod  MAC-Address(es)                    Serial-Num
---  ---------------------------------  -----------
1    0005.73cb.6cc8 to 0005.73cb.6ce7   JAF1505BGTG
3    0000.0000.0000 to 0000.0000.000f   JAF1504CCAR

Mod  World-Wide-Name(s) (WWN)
---  --------------------------------------------------
1    20:01:00:05:73:cb:6c:c0 to 20:08:00:05:73:cb:6c:c0
3    --
N5K-A#

The show module command on the Nexus 7000 Series switches identifies which modules are inserted and operational on the switch. Software and hardware versions of each module, their serial numbers, and the range of MAC addresses for each module are also available via this command. Example 5-8 shows output of the show module command on the Nexus 7000 switch. This switch has a Supervisor 1 module in slot 5, a 32-port F1 module in slot 9, and a 32-port M1 module in slot 10.

Example 5-8 Nexus 7000 Modules

Click here to view code image

N7K-1# show module
Mod Ports  Module-Type                Model          Status
--- -----  -------------------------  -------------  ----------

5   0      Supervisor module-1X       N7K-SUP1       active *
9   32     1/10 Gbps Ethernet Module  N7K-F132XP-15  ok
10  32     10 Gbps Ethernet Module    N7K-M132XP-12  ok

Mod  Sw      Hw
---  ------  ----
5    5.2(1)  1.5
9    5.2(1)  1.1
10   5.2(1)  2.0
----- output removed -----

Verifying Interfaces

The Cisco Nexus switches running NX-OS software support the following types of interfaces:

Physical: Ethernet (10/100/1000/10G/40G)
Logical: Port channel, SVI, loopback, null, tunnels, subinterfaces
Management: Management

On the Nexus platform, all Ethernet interfaces are named Ethernet. Unlike other switching platforms from Cisco, where the name of the interface also indicates the speed, on Nexus there is no differentiation in the naming convention for the different speeds. The show interface brief command lists the most relevant interface parameters for all interfaces on a Cisco Nexus Series switch. For interfaces that are down, it also lists the reason why the interface is down. Information about the switch port mode, VLAN, speed, and rate mode (dedicated or shared) is also displayed. Example 5-9 shows the output of the show interface brief command.

Example 5-9 show interface brief Command

Click here to view code image

N5K-A# show interface brief
--------------------------------------------------------------------------------
Port     VRF     Status   IP Address     Speed   MTU
--------------------------------------------------------------------------------
mgmt0    --      up       172.16.1.1     1000    1500
--------------------------------------------------------------------------------
Ethernet   VLAN  Type  Mode    Status  Reason             Speed    Port Ch #
Interface
--------------------------------------------------------------------------------
Eth1/1     1     eth   trunk   up      none               10G(D)   --
Eth1/2     1     eth   trunk   down    SFP not inserted   10G(D)   60
Eth1/3     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/4     --    eth   routed  down    SFP not inserted   10G(D)   --
Eth1/5     1     eth   fabric  up      none               10G(D)   105
Eth1/6     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/7     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/8     1     eth   access  down    SFP not inserted   10G(D)   --
----- output removed -----

The show interface command displays the operational state of any interface, including the reason why that interface might be down. It also shows details about the port speed, description, MTU, MAC address, and traffic statistics of the interface. Example 5-10 shows the output of the show interface Ethernet 1/1 command.

Example 5-10 Displaying Interface Operational State

Click here to view code image

N5K-A# show interface ethernet 1/1
Ethernet1/1 is up
  Hardware: 1000/10000 Ethernet, address: 0005.73cb.6cc8 (bia 0005.73cb.6cc8)
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 3week(s) 6day(s)
  Last clearing of "show interface" counters never
  30 seconds input rate 1449184 bits/sec, 181148 bytes/sec, 237 packets/sec
  30 seconds output rate 1566080 bits/sec, 195760 bytes/sec, 252 packets/sec
----- output removed -----

Interfaces on the Cisco Nexus Series switches have many default settings that are not visible in the show running-config command. The default configuration parameters for any interface may be displayed using the show running-config interface all command. Example 5-11 shows the output of the show running-config interface Ethernet 1/1 all command.

Example 5-11 Showing Default Configuration

Click here to view code image

N5K-A# show running-config interface ethernet 1/1 all

!Command: show running-config interface Ethernet1/1 all
!Time: Sun Sep 21 17:39:33 2014

version 5.0(3)N2(2)

interface Ethernet1/1
  no description
  lacp port-priority 32768
  lacp rate normal
  priority-flow-control mode auto
  lldp transmit
  lldp receive
  no switchport block unicast
  no switchport block multicast
  no hardware multicast hw-hash
  no cdp enable
----- output removed -----

The highlighted portions of the CLI include default parameters.

Viewing Logs

By default, the device outputs messages to terminal sessions and to a log file. You can display or clear messages in the log file. Use the following commands:

The show logging last number-lines command displays the last number of lines in the logging file.
The show logging logfile [start-time yyyy mmm dd hh:mm:ss] [end-time yyyy mmm dd hh:mm:ss] command displays the messages in the log file that have a time stamp within the span entered.
The clear logging logfile command clears the contents of the log file.

Example 5-12 shows the output of the show logging logfile command.

Example 5-12 Viewing Log File

Click here to view code image

N5K-A# sh logging logfile
2014 Aug 24 22:40:45 N5K-A %CDP-6-CDPDUP: CDP Daemon Up.
2014 Aug 24 22:40:45 N5K-A ntpd[4744]: ntp:precision = 1.000 usec
2014 Aug 24 22:40:45 N5K-A %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 105 that is connected with Ethernet1/5 changed its status from Created to Configured
2014 Aug 24 22:40:45 N5K-A %STP-6-STATE_CREATED: Internal state created stateless
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: pss init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: conditional action being done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ARP lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ppf lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In global cfg db
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In vlan cfg db
2014 Aug 24 22:40:46 N5K-A last message repeated 12 times
2014 Aug 24 22:40:47 N5K-A %SYSLOG-3-SYSTEM_MSG: Syslog could not be sent to server(10.14.113.4) : No such file or directory
----- output removed -----

Viewing NVRAM Logs

To view the NVRAM logs, use the show logging nvram [last number-lines] command. The clear logging nvram command clears the logged messages in NVRAM. Example 5-13 shows the output of the show logging nvram command.

Example 5-13 Viewing NVRAM Logs

Click here to view code image

N5K-A# sh logging nvram
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_FEX_ONLINE: FEX-105 On-line
2012 Jul 8 08:30:26 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105

Define Key Terms . base_addr=0x0. noted with the key topics icon in the outer margin of the page.output removed ----- Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter.kernel 2013 Mar 13 12:21:59 N5K-A %$ VDC-1 %$ Mar 13 12:21:59 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled.kernel 2013 Mar 23 06:52:08 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_ERR_TEMPMINALRM: System minor temperature alarm on module 1. Table 5-9 lists a reference of these key topics and the page numbers on which each is found. Appendix C.” also on the disc. includes completed tables and lists to check your work. Sensor Outlet crossed minor threshold 50C ----. . base_addr=0x0.Module 1: Runtime diag detected major event: Voltage failure on power supply: 1 2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 System minor alarm on power supply 1: failed 2012 Jul 10 09:24:26 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 Recovered: System minor alarm on power supply 1: failed 2013 Jan 13 01:17:35 N5K-A %$ VDC-1 %$ Jan 13 01:17:35 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled. and complete the tables and lists from memory. Table 5-9 Key Topics for Chapter 5 Complete Tables and Lists from Memory Print a copy of Appendix B. . “Memory Tables Answer Key. “Memory Tables.” (found on the disc) or at least the section for this chapter.

Define the following key terms from this chapter, and check your answers in the glossary:

Data Center Network Manager (DCNM)
Application Specific Integrated Circuit (ASIC)
Frame Check Sequence (FCS)
Unified Port Controller (UPC)
Unified Crossbar Fabric (UCF)
Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
Dynamic Host Configuration Protocol (DHCP)
Connectivity Management Processor (CMP)
Virtual Device Context (VDC)
Role Based Access Control (RBAC)
Simple Network Management Protocol (SNMP)
Remote Network Monitoring (RMON)
Fabric Extender (FEX)
Management Information Base (MIB)
Generic Online Diagnostics (GOLD)
Extensible Markup Language (XML)
In Service Software Upgrade (ISSU)
Field-Programmable Gate Array (FPGA)
Complex Instruction Set Computing (CISC)
Reduced Instruction Set Computing (RISC)

Part I Review

Keep track of your part review progress with the checklist shown in Table PI-1. Details on each task follow the table.

Table PI-1 Part I Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics, which will help you remember the concepts and technologies of these key topics as you progress with the book.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items. This exercise lists a set of key self-assessment questions, which are relevant to the particular chapter. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability. There will be no written answers to these questions in this book. If you're not certain, you should go back and refresh on the key topics. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them.

For example. . number 1 is about Data Center Network Architecture. and so on. Note your answer and try to reference key words in the answer from the relevant chapter.Think of each of these open self-assessment questions as being about the chapters you’ve read. Virtual Port Channel (vPC) would apply to Chapter 1. You might or might not find the answer explicitly. number 2 is about Cisco Nexus Product Family. For example. but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions.

Part II: Data Center Storage-Area Networking

Exam topics covered in Part II:
Describe the Network Architecture for the Storage Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity
Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management
Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs
Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

Chapter 6: Data Center Storage Architecture
Chapter 7: Cisco MDS Product Family
Chapter 8: Virtualizing Storage
Chapter 9: Fibre Channel Storage-Area Networking

Chapter 6. Data Center Storage Architecture

This chapter covers the following exam topics:
Describe the Network Architecture for the Storage-Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity

Every day, thousands of devices are newly connected to the Internet. Devices previously only plugged into a power outlet are connected to the Internet and are sending and receiving tons of data at scale. Gartner predicts that worldwide consumer digital storage needs will grow from 329 exabytes in 2011 to 4.1 zettabytes in 2016. Companies are searching for more ways to efficiently manage expanding volumes of data and to make that data accessible throughout the enterprise data centers. This demand is pushing the move of storage into the network. Storage area networks (SAN) are the leading storage infrastructure of today. SANs offer simplified storage management, scalability, flexibility, and availability, and improved data access, movement, and backup. Powerful technology trends include the dramatic increase in processing power, storage, and bandwidth at ever-lower costs; the rapid growth of cloud, social media, and mobile computing; the ability to analyze Big Data and turn it into actionable information; and an improved capability to combine technologies (both hardware and software) in more powerful ways.

This chapter discusses the function and operation of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, and network-attached storage (NAS) connectivity for remote server storage. It covers Fibre Channel protocol and operations. This chapter goes directly into the edge/core layers of the SAN design and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 6-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following options describe advantages of storage-area network (SAN)? (Choose all the correct answers.)
a. Consolidation
b. Storage virtualization
c. Business continuity
d. Secure access to all hosts
e. None

2. Which of the following options describe advantages of block-level storage systems? (Choose all the correct answers.)
a. Block-level storage systems are very popular with storage area networks.
b. Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
c. Block-level storage systems are well suited for bulk file storage.
d. Block-level storage systems are generally inexpensive when compared to file-level storage systems.
e. They can support external boot of the systems connected to them.

3. Which of the following protocols are file based? (Choose all the correct answers.)
a. CIFS
b. SCSI
c. Fibre Channel
d. NFS

4. Which of the following options are correct for Fibre Channel addressing?
a. A dual-ported HBA has three WWNs: one nWWN, and one pWWN for each port.
b. Every HBA, array controller, gateway, switch, and Fibre Channel disk drive has a single unique nWWN.
c. The arbitrated loop physical address (AL-PA) is a 16-bit address.
d. The domain ID is an 8-bit field; only 239 domains are available to the fabric.

5. Which of the following options should be taken into consideration during SAN design?
a. Port density and topology requirements
b. Device performance and oversubscription ratios
c. Traffic management
d. Low latency

6. Which of the following options are correct for VSANs? (Choose all the correct answers.)
a. An HBA or storage device can belong to multiple VSANs.
b. An HBA or a storage device can belong to only a single VSAN—the VSAN associated with the Fx port.
c. Membership is typically defined using the VSAN ID to Fx ports.
d. On a Cisco MDS switch, one can define 4096 VSANs.

7. Which process enables an N Port to exchange information about ULP support with its target N Port, to ensure that the initiator and target process can communicate?
a. FLOGI
b. PLOGI
c. PRLI
d. PRLO

8. Which type of port is used to create an ISL on a Fibre Channel SAN?
a. E
b. F
c. TN
d. NP

9. Which options describe the characteristics of Tier 1 storage? (Choose all the correct answers.)
a. Centralized controller and cache system
b. Integrated large-scale disk array
c. Back-up storage product
d. It is used for mission-critical applications.

10. Which mapping table is inside the front-end array controller and determines which LUNs are advertised through which storage array ports?

a. LUN mapping
b. LUN masking
c. Zoning
d. LUN zoning
e. No correct answer exists

Foundation Topics

What Is a Storage Device?
Data is stored on hard disk drives that can be both read and written on. A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially. Depending on the methods that are used to run those tasks, the read and write function can be faster or slower.

IBM introduced the first HDD in 1956, and HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units; most current units are manufactured by Seagate, Toshiba, and Western Digital. As of 2014, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSD). HDDs are expected to remain the dominant medium for secondary storage due to predicted continuing advantages in recording capacity, price per unit of storage, write latency, and product lifetime. SSDs are replacing HDDs where speed, power consumption, and durability are more important considerations.

The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB, where 1 gigabyte = 1 billion bytes). HDDs are accessed over one of a number of bus types, including, as of 2011, parallel ATA (PATA, also called IDE or EIDE, described before the introduction of SATA as ATA), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel.

The basic interface for all modern drives is straightforward. The drive consists of a large number of sectors (512-byte blocks), each of which can be read or written. The sectors are numbered from 0 to n – 1 on a disk with n sectors; 0 to n – 1 is thus the address space of the drive. Multisector operations are possible; indeed, many file systems will read or write 4KB at a time (or more). Thus, we can view the disk as an array of sectors.

A platter is a circular hard surface on which data is stored persistently by inducing magnetic changes to it. A disk can have one or more platters; each platter has two sides, each of which is called a surface. These platters are usually made of some hard material, such as aluminum, and then coated with a thin magnetic layer that enables the drive to persistently store bits even when the drive is powered off. The platters are all bound together around the spindle, which is connected to a motor that spins the

platters around (while the drive is powered on) at a constant (fixed) rate. The rate of rotation is often measured in rotations per minute (RPM), and typical modern values are in the 7,200 RPM to 15,000 RPM range. Note that we will often be interested in the time of a single rotation; for example, a drive that rotates at 10,000 RPM means that a single rotation takes about 6 milliseconds (6 ms).

Data is encoded on each surface in concentric circles of sectors; we call one such concentric circle a track. There are usually 1024 tracks on a single disk, and all corresponding tracks from all disks define a cylinder (see Figure 6-1).

Figure 6-1 Components of Hard Disk Drive

A specific sector must be referenced through a three-part address composed of cylinder/head/sector information. And cache is just some small amount of memory (usually around 8 or 16 MB), which the drive can use to hold data read from or written to the disk. For example, when reading a sector from the disk, the drive might decide to read in all of the sectors on that track and cache them in its memory; doing so enables the drive to quickly respond to any subsequent requests to the same track.

Although the internal system of cylinders, tracks, and sectors is interesting, it is also not used much anymore by the systems and subsystems that use disk drives. Cylinder, track, and sector addresses have been replaced by a method called logical block addressing (LBA), which makes disks much easier to work with by presenting a single flat address space. To a large degree, logical block addressing facilitates the flexibility of storage networks by allowing many types of disk drives to be integrated more easily into a large heterogeneous storage environment.

The most widespread standard for configuring multiple hard drives is Redundant Array of Inexpensive/Independent Disks (RAID), which comes in several standard configurations and nonstandard configurations. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (RAID 0, RAID 1). Each scheme provides a different balance between the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. In Chapter 8, "Virtualizing Storage," all the RAID levels are discussed in great detail.
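Returning to disk addressing for a moment: the mapping from a three-part cylinder/head/sector address to a flat logical block address is simple arithmetic. The following is a minimal Python sketch of that conversion; the geometry values (heads per cylinder, sectors per track) are hypothetical, chosen only for illustration, since real drives report their own geometry.

def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Convert a cylinder/head/sector address to a flat LBA.

    Sectors are conventionally numbered starting at 1 within a track,
    while LBA is a zero-based flat address space.
    """
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# Example with a hypothetical geometry: 16 heads, 63 sectors per track.
print(chs_to_lba(0, 0, 1, 16, 63))   # 0 -> the first sector of the drive
print(chs_to_lba(2, 5, 7, 16, 63))   # 2337

Presenting only the flat LBA to hosts is what lets many different drive geometries coexist behind one addressing model in a heterogeneous storage network.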

JBOD (derived from "just a bunch of disks") is an architecture involving multiple hard drives, while making them accessible either as independent hard drives or as a combined (spanned) single logical volume with no actual RAID functionality. Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager.

Generally, a disk array provides increased availability, resiliency, and maintainability by using existing components (controllers, disk enclosures, access ports, power supplies, fans, cache, and so on), often up to the point where all single points of failure (SPOF) are eliminated from the design. Additionally, disk array components are often hot swappable. Disk arrays are divided into categories:

Network attached storage (NAS) arrays are based on file-level storage. It stores files and folders and is visible as such to both the systems storing the files and the systems accessing them. In this type of storage, the storage disk is configured with a particular protocol (such as NFS, CIFS, and so on) and files are stored and accessed from it as such, in bulk. File-level storage systems are more popular with NAS-based storage systems—Network Attached Storage. They can be configured with common file-level protocols like NTFS (Windows), NFS (Linux), and so on. The file-level storage device itself can generally handle operations such as access control, integration with corporate directories, and the like. Advantages of file-level storage systems are the following:

A file-level storage system is simple to implement and simple to use.
File-level storage systems are generally inexpensive when compared to block-level storage systems.
File-level storage systems are well suited for bulk file storage.

Storage-area network (SAN) arrays are based on block-level storage. The raw blocks (storage volumes) are created, and each block can be controlled like an individual hard drive. Typically, these blocks are controlled by the server-based operating systems. Each block or storage volume can be formatted with the file system required by the application (NFS/NTFS/SMB). Block-level storage systems are very popular with SANs. The following are advantages of block-level storage systems:

Block-level storage systems offer better performance and speed than file-level storage systems.
Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
Each block or storage volume can be individually formatted with the required file system.

Block-level storage systems are more reliable, and their transport systems are very efficient.
Block-level storage can be used to store files and also provide the storage required for special applications like databases, Virtual Machine File Systems (VMFS), and the like.
They can support external boot of the systems connected to them.

Primary vendors of storage systems include EMC Corporation, Hitachi Data Systems, Hewlett-Packard, NetApp, IBM, Oracle Corporation, Dell, Infortrend, and other companies that often act as OEMs for the previously mentioned vendors and do not themselves market the storage components they manufacture.

Direct-attached storage (DAS) is often implemented within a parallel SCSI implementation. DAS is commonly described as captive storage. Devices in a captive storage topology do not have direct access to the storage network and do not support efficient sharing of storage. To access data with DAS, a user must go through some sort of front-end network. DAS devices provide little or no mobility to other servers and little scalability. DAS devices limit file sharing and can be complex to implement and manage. DAS devices require resources on the host and spare disk systems that cannot be used on other systems, for example, to support data backups.

Servers are connected to the connection port of the disk subsystem using standard I/O techniques such as Small Computer System Interface (SCSI), Fibre Channel, or Internet SCSI (iSCSI) and can thus use the storage capacity that the disk subsystem provides (see Figure 6-2). The internal structure of the disk subsystem is completely hidden from the server, which sees only the hard disks that the disk subsystem provides to the server. The controller of the disk subsystem must ultimately store all data on physical hard disks.

Figure 6-2 Servers Connected to a Disk Subsystem Using Different I/O Techniques

Standard

I/O techniques such as SCSI, Fibre Channel, Serial Attached SCSI (SAS), Serial Storage Architecture (SSA), and increasingly Serial ATA (SATA) are being used for internal I/O channels between connection ports and controllers, as well as between controllers and internal hard disks. Sometimes, however, proprietary—that is, manufacturer-specific—I/O techniques are used. The I/O channels can be designed with built-in redundancy to increase the fault-tolerance of a disk subsystem. There are four main I/O channel designs:

Active: In active cabling the individual physical hard disks are connected via only one I/O channel. If this access path fails, it is no longer possible to access the data.

Active/passive: In active/passive cabling the individual hard disks are connected via two I/O channels. In normal operation the controller communicates with the hard disks via the first I/O channel, and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk subsystem switches from the first to the second I/O channel.

Active/active (no load sharing): In this cabling method the controller uses both I/O channels in normal operation. The hard disks are divided into two groups: in normal operation the first group is addressed via the first I/O channel and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via the other I/O channel.

Active/active (load sharing): In this approach, all hard disks are addressed via both I/O channels. In normal operation the controller divides the load dynamically between the two I/O channels so that the available hardware can be optimally utilized. If one I/O channel fails, the communication goes through the other channel only.

A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is typically used for offline, archival data storage. A tape drive provides sequential access storage, unlike a disk drive, which provides random access storage. A disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times for sequential access after the tape is positioned. However, tape drives can stream data very quickly, comparable to hard disk drives. For example, as of 2010, Linear Tape-Open (LTO) supported continuous data transfer rates of up to 140 MBps.

A tape library, sometimes called a tape silo, tape robot, or tape jukebox, is a storage device that contains one or more tape drives, a number of slots to hold tape cartridges, a bar-code reader to identify tape cartridges, and an automated method for loading tapes (a robot). These devices can store immense amounts of data, currently ranging from 20 terabytes up to 2.1 exabytes of data, or multiple thousand times the capacity of a typical hard drive and well in excess of capacities achievable with network attached storage.

What Is a Storage-Area Network?
The Storage Networking Industry Association (SNIA) defines the meaning of the storage-area network (SAN) as a network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer. This layer organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file-access

information life-cycle management (ILS). and the concept that the server effectively owns and manages the storage devices. Instead. but more powerful. a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. the storage utility might be located far from the servers that it uses. Additionally. Figure 6-3 Storage Connectivity The key benefits that a SAN might bring to a highly data-dependent business infrastructure can be summarized into three concepts: simplification of the infrastructure. Figure 6-3 portrays a sample SAN topology. Storage virtualization helps in making complexity nearly transparent and can offer a composite . A SAN is a specialized. It also eliminates any restriction to the amount of data that a server can access. high-speed network that attaches servers and storage devices. availability. centralized storage management tools can help improve scalability.services. including disk. and disaster tolerance. and optical storage. servers and storage pools that can help increase IT efficiency and simplify the infrastructure. The simplification of the infrastructure consists of six main areas. Consolidation is concentrating the systems and resources into locations with fewer. A network might include many storage devices. and business continuity. It eliminates the traditional dedicated connection between a server and storage. currently limited by the number of storage devices that are attached to the individual server. A SAN allows an any-to-any connection across the network by using interconnect elements such as switches and directors. tape. In addition.

switches. Storage subsystems. Integrated storage environments simplify system management tasks and improve security. and server systems can be attached to a Fibre Channel SAN. We talk about storage virtualization in detail in Chapter 8. As SANs increase in size and complexity. It is the deployment of these different types of interconnect entities that enables Fibre Channel networks of varying scale to be built. gateways. it’s about identifying your key products and services and the most urgent activities that underpin them. storage devices. Automation is choosing storage components with autonomic capabilities. When all servers have secure access to all data. Each component that composes a Fibre Channel SAN provides an individual management capability and participates in an often-complex end-to-end management environment. which can improve availability and responsiveness and help protect data as storage needs grow. so any combination of devices that can interoperate is likely to be used. then. Business continuity (BC) is about building and improving resilience in your business. The committee that is standardizing Fibre Channel is the INCITS Fibre Channel (T11) Technical Committee. including access to open system storage (FCP). or switches and directors for Fibre Channel switched fabric topologies. The ILM process manages this information in a manner that optimizes storage and maintains a high level of access at the lowest cost. and storage . Fibre Channel supports point-to-point. Given this definition. Fibre Channel is a serial I/O interconnect that is capable of supporting multiple protocols. SANs play a key role in the business continuity. it is a network. embedding BC into your business is proven to bring business benefits. your infrastructure might be better able to respond to the information needs of your users. A SAN implementation makes it easier to manage the information life cycle because it integrates applications and data into a single-view system in which the information resides. including directors. and bridges. after that analysis is complete. Information life-cycle management (ILM) is a process for managing information through its life cycle. and switched topologies with various copper and optical links that are running at speeds from 1 Gbps to 16 Gbps. routers. from conception until intentional disposal. By deploying a consistent and safe infrastructure. SANs make it possible to meet any availability requirements. A storage system consists of storage elements. Fibre Channel directors can be introduced to facilitate a more flexible and fault-tolerant configuration. arbitrated loop. whatever its size or cause. and networking (TCP/IP). higher-level tasks that are unique to the company’s business mission. it is about devising plans and strategies that will enable you to continue your business operations and enable you to recover quickly and effectively from any type of disruption. As soon as day-today tasks are automated. and appliances. several components can be used to build a SAN. In fact. Depending on the implementation. As the name suggests. How to Access a Storage Device Each new storage technology introduced a new physical interface. electrical interface. a Fibre Channel network might be composed of many types of interconnect entities. storage administrators might be able to spend more time on critical. hubs. plus all control software. In smaller SAN environments you can deploy hubs for Fibre Channel arbitrated loop topologies. 
It gives you a solid framework to lean on in times of crisis and provides stability and security. By deploying a consistent and safe infrastructure, SANs make it possible to meet any availability requirements. When all servers have secure access to all data, your infrastructure might be better able to respond to the information needs of your users.

Fibre Channel is a serial I/O interconnect that is capable of supporting multiple protocols, including access to open system storage (FCP), access to mainframe storage (FICON), and networking (TCP/IP). Fibre Channel supports point-to-point, arbitrated loop, and switched topologies with various copper and optical links that are running at speeds from 1 Gbps to 16 Gbps. The committee that is standardizing Fibre Channel is the INCITS Fibre Channel (T11) Technical Committee.

Given this definition, and as the name suggests, it is a network. Depending on the implementation, several components can be used to build a SAN. A Fibre Channel network might be composed of many types of interconnect entities, including directors, switches, hubs, routers, gateways, and bridges, so any combination of devices that can interoperate is likely to be used. It is the deployment of these different types of interconnect entities that enables Fibre Channel networks of varying scale to be built. In smaller SAN environments you can deploy hubs for Fibre Channel arbitrated loop topologies, or switches and directors for Fibre Channel switched fabric topologies. As SANs increase in size and complexity, Fibre Channel directors can be introduced to facilitate a more flexible and fault-tolerant configuration. Storage subsystems, storage devices, and server systems can be attached to a Fibre Channel SAN. Each component that composes a Fibre Channel SAN provides an individual management capability and participates in an often-complex end-to-end management environment. A storage system consists of storage elements, storage devices, computer systems, and appliances, plus all control software, which communicates over a network.

How to Access a Storage Device
Each new storage technology introduced a new physical interface, electrical interface, and storage

protocol. The intelligent interface model abstracts device-specific details from the storage protocol and decouples the storage protocol from the physical and electrical interfaces. This enables multiple storage technologies to use a common storage protocol. In general, the details vary depending on the particular system. In this chapter we will be categorizing them in three major groups: Blocks, Files, and Records.

Blocks, sometimes called physical records, are sequences of bytes or bits, usually containing some whole number of records having a maximum length, a block size. This type of structured data is called blocked. Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, although the block size in file systems may be a multiple of the physical block size.

Files are granular containers of data created by the system or an application. By separating the data into individual pieces and giving each piece a name, the information is easily separated and identified. The structure and logic rule used to manage the groups of information and their names is called a file system. Some file systems are used on local data storage devices; others provide file access via a network protocol.

Records are similar to a file system. There are several record formats; the formats can be fixed-length or variable-length, with different physical organizations or padding mechanisms. Metadata may be associated with the file records to define the record length, or the data might be part of the record. Different methods to access records may be provided, for example, sequential, by key, or by record number. Data transfer is also governed by standards.

Block-Level Protocols
Block-oriented protocols (also known as block-level protocols) read and write individual fixed-length blocks of data. SCSI is an American National Standards Institute (ANSI) standard that is one of the leading I/O buses in the computer industry. SCSI is a block-level I/O protocol for writing and reading data blocks to and from a storage device. The SCSI bus is a parallel bus, which comes in a number of variants, as shown in Table 6-2. The first version of the SCSI standard was released in 1986. Since then, SCSI has been continuously developed.
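As a small illustration of the record access methods just described: for fixed-length records, access by record number reduces to offset arithmetic, since the byte offset is simply the record number multiplied by the record length. A minimal Python sketch follows; the record length and file path are hypothetical, used only for illustration.

RECORD_LENGTH = 128  # bytes per fixed-length record (illustrative value)

def read_record(path: str, record_number: int) -> bytes:
    """Read one fixed-length record by number: offset = number * length."""
    with open(path, "rb") as f:
        f.seek(record_number * RECORD_LENGTH)
        return f.read(RECORD_LENGTH)

# Hypothetical usage: fetch the 42nd record from a flat record file.
# data = read_record("accounts.dat", 42)

Variable-length or keyed records need an index to locate each record, which is why those formats carry metadata describing the record length.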

Table 6-2 SCSI Standards Comparison

The International Committee for Information Technology Standards (INCITS) is the forum of choice for information technology developers, producers, and users for the creation and maintenance of formal daily IT standards. INCITS is accredited by and operates under rules approved by the American National Standards Institute (ANSI). These rules are designed to ensure that voluntary standards are developed by the consensus of directly and materially affected interests. The Information Technology Industry Council (ITI) sponsors INCITS (see Figure 6-4). The standard process is that a technical committee (T10, T11, T13) prepares drafts. Drafts are sent to INCITS for approval. After the drafts are approved by INCITS, drafts become standards and are published by ANSI. ANSI promotes American National Standards to ISO as a joint technical committee member (JTC-1).

Figure 6-4 Standard Groups: Storage

As a medium, SCSI defines a parallel bus for the transmission of data, with additional lines for the control of communication. The bus can be realized in the form of printed conductors on the circuit board or as a cable. A so-called daisy chain can connect up to 16 devices together. Over time, numerous cable and plug types have been defined that are not directly compatible with one another.

All SCSI devices are intelligent, but SCSI operates as a master/slave model. One SCSI device (the initiator) initiates communication with another SCSI device (the target) by issuing a command, to which a response is expected. Thus, the SCSI protocol is half-duplex by design and is considered a command/response protocol. The initiating device is usually a SCSI controller, so SCSI controllers typically are called initiators, and SCSI storage devices typically are called targets (see Figure 6-5). A SCSI controller in a modern storage array acts as a target externally and acts as an initiator internally. Also note that array-based replication software requires a storage controller in the initiating storage array to act as initiator both externally and internally.

Figure 6-5 SCSI Commands, Status, and Block Data Between Initiators and Targets

The logical unit number (LUN) identifies a specific logical unit (virtual controller) in a target. Essentially, a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. A logical unit can be thought of as a "black box" processor, and the LUN is a way to identify SCSI black boxes. Although we tend to use the term LUN to refer to a real or virtual storage device, a LUN is an access point for exchanging commands and status information between initiators and targets.

SCSI targets have logical units that provide the processing context for SCSI commands. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller. A target must have at least one LUN, LUN 0, and might optionally support additional LUNs. For instance, a disk drive might use a single LUN, whereas a subsystem might allow hundreds of LUNs to be defined. Logical units are architecturally independent of target ports and can be accessed through any of the target's ports, via a LUN. The process of provisioning storage in a SAN storage subsystem involves defining a LUN on a particular target port and then assigning that particular target/LUN pair to a specific logical unit.

An individual logical unit can be represented by multiple LUNs on different ports. For instance, a logical unit could be accessed through LUN 1 on Port 0 of a target and also accessed as LUN 8 on port 1 of the same target.

The bus, target, and LUN triad is defined from parallel SCSI technology. A server can be equipped with several SCSI controllers; therefore, the operating system must note three things for the differentiation of devices: controller ID, SCSI ID, and LUN. The bus represents one of several potential SCSI interfaces that are installed in the host, each supporting a separate string of disks. The target represents a single disk controller on the string. Depending on the version of the SCSI standard, a maximum of 8 or 16 IDs are permitted per SCSI bus. The SCSI protocol permitted only eight IDs, with the ID 7 having the highest priority. More recent versions of the SCSI protocol permit 16 different IDs. For reasons of compatibility, the IDs 7 to 0 should retain the highest priority so that the IDs 15 to 8 have a lower priority (see Figure 6-6).

Figure 6-6 SCSI Addressing

Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they can send data through it. During the arbitration of the bus, the device that has the highest priority SCSI ID always wins. If the bus is heavily loaded, this can lead to devices with lower priorities never being allowed to send data. The SCSI arbitration procedure is therefore "unfair."
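The arbitration priority just described lends itself to a compact illustration. This Python sketch models only the winner-selection rule (IDs 7 down to 0 outrank IDs 15 down to 8); the bus-level signaling itself is not modeled.

# Priority order per the text: IDs 7..0 outrank IDs 15..8.
PRIORITY_ORDER = [7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8]

def arbitration_winner(requesting_ids: set[int]) -> int:
    """Return the SCSI ID that wins arbitration among the requesters."""
    for scsi_id in PRIORITY_ORDER:   # scan from highest to lowest priority
        if scsi_id in requesting_ids:
            return scsi_id
    raise ValueError("no device is arbitrating")

print(arbitration_winner({3, 9, 14}))  # 3: any of 7..0 beats all of 15..8
print(arbitration_winner({8, 15}))     # 15

Because the same IDs win every contention, a busy bus can starve low-priority devices indefinitely, which is exactly the "unfair" behavior noted above.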

It is often also assumed that SCSI provides in-order delivery to maintain data integrity. In-order delivery was traditionally provided by the SCSI bus and, therefore, was not needed by the SCSI protocol layer. In other words, the SCSI protocol assumes that proper ordering is provided by the underlying connection technology; the SCSI protocol does not provide its own reordering mechanism, and the network is responsible for the reordering of transmission frames that are received out of order. This is the main reason why TCP was considered essential for the iSCSI protocol that transports SCSI commands and data transfers over IP networking equipment: TCP provided ordering while other upper-layer protocols, such as UDP, did not.

The SCSI protocol layer sits between the operating system and the peripheral resources. Applications typically access data as files or records. Although this information might be stored on disk or tape media in the form of data blocks, retrieval of the file requires a hierarchy of functions. These functions assemble raw data blocks into a coherent file that an application can manipulate.

It is important to understand that while SCSI-1 and SCSI-2 represent actual standards that completely define SCSI (connectors, cables, signals, and command protocol), SCSI-3 is an all-encompassing term that refers to a collection of standards that were written as a result of breaking SCSI-2 into smaller, hierarchical modules that fit within a general framework called SCSI Architecture Model (SAM). In SCSI-3, the SCSI version 3 (SCSI-3) application client resides in the host and represents the upper-layer application, file system, and operating system I/O requests. The SCSI-3 device server sits in the target device, responding to requests. SCSI-3 therefore has different functional components (see Figure 6-7).

Figure 6-7 SCSI I/O Channel, Fibre Channel I/O Channel, and TCP/IP I/O Networking
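A minimal sketch of the reordering responsibility discussed above: because the SCSI layer assumes in-order delivery, a transport without ordering must buffer and resequence frames before handing them up. This toy Python example resequences by a per-frame sequence number; real transports such as TCP accomplish this with byte sequence numbers plus retransmission, which this sketch does not attempt to model.

def deliver_in_order(received_frames):
    """Resequence frames by sequence number before handing them to SCSI.

    received_frames yields (sequence_number, payload) tuples that may
    arrive out of order; delivery to the upper layer must be in order.
    """
    next_expected = 0
    buffer = {}
    for seq, payload in received_frames:
        buffer[seq] = payload
        while next_expected in buffer:      # flush every contiguous frame
            yield buffer.pop(next_expected)
            next_expected += 1

# Frames 0..3 arriving out of order are delivered as A, B, C, D:
print(list(deliver_in_order([(1, "B"), (0, "A"), (3, "D"), (2, "C")])))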

Even faster bus types are introduced, along with serial SCSI buses that reduce the cabling overhead and allow a higher maximum bus length. As always, the demands and needs of the market push for new technologies; there is always a push for faster communications without limitations on distance or on the number of connected devices. It is at this point where the Fibre Channel model is introduced.

IOPS (Input/output Operations Per Second, pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage-area networks (SAN). The specific number of IOPS possible in any system configuration will vary greatly, depending on the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and queue depth, and the data block sizes. For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle. Often, sequential IOPS are reported as a simple MB/s number as follows:

IOPS × TransferSizeInBytes = BytesPerSec (and typically this is converted to megabytes per sec)

SCSI messages and data can be transported in several ways:

Parallel SCSI cable: This transport is mainly used in traditional deployments. Latency is low, but distance is limited to 25 m, and the bus is half-duplex, so data can flow in only one direction at a time.

iSCSI is SCSI over TCP/IP: Internet Small Computer System Interface (iSCSI) is a transport protocol that carries SCSI commands from an initiator to a target. It is a data storage networking protocol that transports standard SCSI requests over the standard Transmission Control Protocol/Internet Protocol (TCP/IP) networking technology. iSCSI enables the implementation of IP-based SANs, enabling clients to use the same networking technologies for both storage and data networks. Because it uses TCP/IP, iSCSI is also suited to run over almost any physical network. By eliminating the need for a second network technology just for storage, iSCSI has the potential to lower the costs of deploying networked storage.

Fibre Channel cable: This transport is the basis for a traditional SAN deployment. A SAN is Fibre Channel-based (FC-based). SCSI is carried in the payload of a Fibre Channel frame between Fibre Channel ports. Latency is low and bandwidth is high (16 Gbps). Fibre Channel has a lossless delivery mechanism by using buffer-to-buffer credits (BB_Credits), which are explained in detail later in this chapter. The SCSI protocol is suitable for block-based, structured applications such as database applications that require many I/O operations per second (IOPS) to achieve high performance.

Fibre Channel connection (FICON) architecture: An enhancement of, rather than a replacement for, the traditional IBM Enterprise Systems Connection (ESCON) architecture.

FICON is a protocol that uses Fibre Channel as its physical medium. FICON channels can achieve data rates up to 850 MBps and extend the channel distance (up to 100 km). FICON can also increase the number of control unit images per link and the number of device addresses per control unit link. The protocol can also retain the topology and switch management characteristics of ESCON. FICON is a prerequisite for IBM z/OS systems to fully participate in a heterogeneous SAN, where the SAN switch devices allow the mixture of open systems and mainframe traffic.

Fibre Channel over IP (FCIP): Also known as Fibre Channel tunneling or storage tunneling, it is a method to allow the transmission of Fibre Channel information to be tunneled through the IP network. FCIP encapsulates Fibre Channel block data and then transports it over a TCP socket. TCP/IP services are used to establish connectivity between remote SANs. Any congestion control and management and data error and data loss recovery is handled by TCP/IP services and does not affect Fibre Channel fabric services. The major consideration with FCIP is that it does not replace Fibre Channel with IP; instead, it allows deployments of Fibre Channel fabrics by using IP tunneling. The assumption that this might lead to is that the industry decided that Fibre Channel-based SANs are more than appropriate. Another possible assumption is that the only need for the IP connection is to facilitate any distance requirement that is beyond the current scope of an FCP SAN. Because most organizations already have an existing IP infrastructure, the attraction of being able to link geographically dispersed SANs, at a relatively low cost, is enormous.

Fibre Channel over Ethernet (FCoE): This transport replaces the Fibre Channel cabling with 10 Gigabit Ethernet cables and provides lossless delivery over unified I/O. FCoE is discussed in detail in Chapters 10 and 11.

Figure 6-8 portrays the networking stack comparison for all block I/O protocols.

Figure 6-8 Networking Stack Comparison for All Block I/O Protocols

Fibre Channel provides high-speed transport for SCSI payload via Host Bus Adapter (HBA); at the time of writing this book, Fibre Channel supports speeds of up to 16 Gbps. (See Figure 6-9.) HBAs are I/O adapters that are designed to maximize performance by performing protocol-processing functions in silicon. HBAs are roughly analogous to network interface cards (NIC), but HBAs are optimized for SANs and provide features that are specific to storage.
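Circling back to the IOPS arithmetic given before the transport list, converting sequential IOPS into a sustained MB/s figure is a single multiplication. A quick Python sketch; the IOPS figure and block size here are illustrative, not measurements from any particular device.

def sequential_mbps(iops: float, transfer_size_bytes: int) -> float:
    """IOPS x TransferSizeInBytes = BytesPerSec, as megabytes per second."""
    return iops * transfer_size_bytes / 1_000_000

# A drive sustaining 5,000 sequential IOPS at a 64 KB block size:
print(sequential_mbps(5000, 64 * 1024))  # 327.68 MB/s

The same relationship explains why benchmark results must always state the block size: the identical device produces wildly different IOPS numbers at 4 KB versus 64 KB transfers.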

Fibre Channel overcomes many shortcomings of Parallel I/O, including addressing for up to 16 million nodes, loop (shared) and fabric (switched) transport, host speeds of 100 to 1600 MBps (1–16 Gbps), and support for multiple protocols, and it combines the best attributes of a channel and a network together.

Figure 6-9 Comparison Between Fibre Channel HBA and Ethernet NIC

Figure 6-9 contrasts HBAs with NICs, illustrating that HBAs offload protocol-processing functions into silicon. With NICs, software drivers perform protocol-processing functions such as flow control, sequencing, segmentation and reassembly, and error correction. The HBA offloads these protocol-processing functions onto the HBA hardware with some combination of an ASIC and firmware. Offloading these functions is necessary to provide the performance that storage networks require.

Fibre Channel Protocol (FCP) has an ANSI-based layered architecture that can be considered a general transport vehicle for upper-layer protocols (ULP) such as SCSI command sets, HIPPI data framing, Internet Protocol (IP), and others. Figure 6-10 shows an overview of the Fibre Channel model. The diagram shows the Fibre Channel, which is divided into four lower layers (FC-0, FC-1, FC-2, and FC-3) and one upper layer (FC-4). FC-4 is where the upper-level protocols are used, such as SCSI-3, IP, and Fibre Channel connection (FICON).

Figure 6-10 Fibre Channel Protocol Architecture

These are the main functions of each Fibre Channel layer:

FC-4 upper-layer protocol (ULP) mapping: Provides protocol mapping to identify the ULP that is encapsulated into a protocol data unit (PDU) for delivery to the FC-2 layer. FCP is the FC-4 mapping of SCSI-3 onto FC. Note that FCP does not define its own header; instead, fields within the FC-2 header are used by FCP.

FC-3 generic services: Provides the Fibre Channel Generic Services (FC-GS) that are required for fabric management, including protocols for hunt group, alias address definition, and multicast management and stacked connect-requests. Specifications exist here but are rarely implemented.

FC-2 framing and flow control: Provides the framing and flow control that is required to transport the ULP over Fibre Channel. FC-2 functions include several classes of service, frame format definition, exchange management, address assignment, sequence disassembly and reassembly, and error control.

FC-1 encoding: Defines the transmission protocol that includes the serial encoding, decoding, and error control. FC-1 variants up to and including 8 Gbps use the same encoding scheme (8B/10B) as GE fiber optic variants.

FC-0 physical interface: Provides physical connectivity, including cabling, connectors, and so on.

In the initial FC specification, rates of 12.5 MBps, 25 MBps, 50 MBps, and 100 MBps were introduced on several different copper and fiber media. Additional rates were subsequently introduced, including 200 MBps and 400 MBps. Fibre Channel throughput is commonly expressed in bytes per second rather than bits per second; 100 MBps FC is also known as 1-Gbps FC, 200 MBps as 2-Gbps, and 400 MBps as 4-Gbps. Table 6-3 shows the evolution of Fibre Channel speeds.

The FC-PH specification defines baud rate as the encoded bit rate per second, which means the baud rate and raw bit rate are equal. The FC-PI specification redefines baud rate more accurately and states explicitly that FC encodes 1 bit per baud. 1-Gbps FC operates at 1.0625 GBaud, provides a raw bit rate of 1.0625 Gbps, and provides a data bit rate of 850 Mbps. 2-Gbps FC operates at 2.125 GBaud, provides a raw bit rate of 2.125 Gbps, and provides a data bit rate of 1.7 Gbps. 4-Gbps FC operates at 4.25 GBaud, provides a raw bit rate of 4.25 Gbps, and provides a data bit rate of 3.4 Gbps. To derive ULP throughput, the FC-2 header and interframe spacing overhead must be subtracted. The basic FC-2 header adds 36 bytes of overhead; inter-frame spacing adds another 24 bytes. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the ULP throughput rate is 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps for 1 Gbps, 2 Gbps, and 4 Gbps, respectively. These ULP throughput rates are available directly to SCSI. Fibre Channel is described in greater depth throughout this chapter.
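The throughput figures quoted above can be reproduced directly from the encoding and overhead numbers. The following Python sketch applies the 8B/10B factor (8 data bits for every 10 encoded bits) and then scales by the fraction of each frame slot that is payload rather than header and interframe overhead; it assumes, as the text does, the maximum 2112-byte payload and no optional headers.

def fc_ulp_throughput_bps(baud_rate: float, payload: int = 2112,
                          header: int = 36, interframe: int = 24) -> float:
    """ULP throughput for 8B/10B-encoded Fibre Channel.

    Data bit rate = baud rate x 8/10, then scaled by the payload's share
    of each frame slot (payload + FC-2 header + interframe spacing).
    """
    data_bit_rate = baud_rate * 8 / 10
    return data_bit_rate * payload / (payload + header + interframe)

# Reproduces the quoted figures for 1-Gbps and 2-Gbps FC:
print(fc_ulp_throughput_bps(1.0625e9) / 1e6)   # ~826.52 Mbps
print(fc_ulp_throughput_bps(2.125e9) / 1e6)    # ~1653.04 Mbps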

The 1 Gbps FC, 2 Gbps FC, 4 Gbps FC, and 8 Gbps FC designs all use 8B/10B encoding, and the 10G and 16GFC standards use 64B/66B encoding. Unlike the 10 Gbps FC standards, 16 Gbps FC provides backward compatibility with 4 Gbps FC and 8 Gbps FC.

Table 6-3 Evolution of Fibre Channel Speeds

File-Level Protocols
File-oriented protocols (also known as file-level protocols) read and write variable-length files. Files are segmented into blocks before being stored on disk or tape. Common Internet File System (CIFS) and Network File System (NFS) are file-based protocols that are used for reading and writing files across a network. CIFS is found primarily on Microsoft Windows servers (a Samba service implements CIFS on UNIX systems), and NFS is found primarily on UNIX and Linux servers.

The theory of client/server architecture is based on the concept that one computer has the resources that another computer requires. The system with the resources is called the server, and the system that requires the resources is called the client. Examples of resources are e-mail, database, and files. These resources can be made available through NFS, and File and Print services. The client and the server communicate with each other through established protocols. A distributed (client/server) network might contain multiple servers and multiple clients, or multiple clients and one server. The configuration of the network depends on the resource requirement of the environment.

The benefits of client/server architecture include cost reduction because of hardware and space requirements. The local workstations do not need as much disk space because commonly used data

can be stored on the server. Other benefits include centralized support (backups and maintenance) that is performed on the server, to allow for easy recovery in the event of server failure.

NFS is a widely used protocol for sharing files across networks. NFS is stateless. In the scenario portrayed in Figure 6-11, the server in the network is a network appliance storage system, and the client can be one of many versions of the Unix or Linux operating system. The storage system provides services to the client. These services include mount daemon (mountd), NFS daemon (nfsd), Network Lock Manager (nlm_main), Status Monitor (sm_1_main), quota daemon (rquot_1_main), and portmap (also known as rpcbind). Each service is required for successful operation of an NFS process. For example, a client cannot mount a resource if mountd is not running on the server. Similarly, if rpcbind is not running on the server, NFS communication cannot be established between the client and the server.

Figure 6-11 Block I/O Versus File I/O

Block I/O and File I/O are complementary solutions for reading and writing data to and from storage devices. Block I/O uses the SCSI protocol to transfer data in 512-byte blocks. SCSI is an efficient protocol and is the accepted standard within the industry. SCSI may be transported over many physical media. SCSI is mostly used in Fibre Channel SANs to transfer blocks of data between servers and storage arrays in a high-bandwidth, low-latency redundant SAN. Block I/O protocols are suitable for well-structured applications, such as databases that transfer small blocks of data when updating fields, records, and tables. CIFS and NFS are verbose protocols that send thousands of messages between the servers and NAS devices when reading and writing files. Because CIFS and NFS are transported over TCP/IP, latency is high. These file-based protocols are suitable for file-based applications: Microsoft Office, Microsoft SharePoint, and File and Print services. One benefit of NAS storage is that files can be shared easily between users within a workgroup.
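To underline the block I/O versus file I/O distinction, here is a hedged Python sketch: block I/O addresses data by logical block number on a raw device, while file I/O addresses it by name and lets the file system (or a NAS protocol behind a mount point) do the block mapping. The device and share paths are purely hypothetical.

BLOCK_SIZE = 512  # SCSI block size used by block I/O, per the text

def read_block(device: str, lba: int) -> bytes:
    """Block I/O: address data by logical block number on a raw device."""
    with open(device, "rb") as dev:
        dev.seek(lba * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

def read_file(path: str) -> bytes:
    """File I/O: address data by name; the file system maps it to blocks."""
    with open(path, "rb") as f:
        return f.read()

# Hypothetical usage (paths are illustrative):
# data = read_block("/dev/sdb", 2048)             # block-level access
# doc = read_file("/mnt/nas_share/report.doc")    # file access via an NFS/CIFS mount

A database engine typically wants the first style, addressing exact blocks with minimal protocol chatter; office applications are served perfectly well by the second, which is why the text pairs block I/O with structured applications and file I/O with file-based ones.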

Putting It All Together: Fibre Channel SAN, FCoE SAN, iSCSI SAN, and NAS Comparison
At this stage, you already figured out why Fibre Channel was introduced. In contrast to Fibre Channel, SCSI is limited by distance, number of storage devices per chain, lack of device sharing, speed, resiliency, and management flexibility. Fibre Channel was introduced to overcome these limitations. Fibre Channel SAN, FCoE SAN, iSCSI SAN, and NAS are four techniques with which storage networks can be realized (see Figure 6-12).

Front-end data communications are considered host-to-host or client/server; back-end data communication is host-to-storage or program-to-device. Network Attached Storage (NAS) appliances reside on the front end, and SANs include the back end along with front-end components. Therefore, file-level data access applies to IP front-end networks, and block-level data access applies to Fibre Channel (FC) back-end networks. Front-end protocols include FTP, NFS, SMB (server message block—Microsoft NT), CIFS (common Internet file-system), or NCP (Netware Core protocol—Novell). Back-end protocols include SCSI, iSCSI, FCoE, Fibre Channel, and IDE, along with file-system formatting standards such as FAT and NTFS.

Figure 6-12 DAS, SAN, and NAS Comparison

NAS servers are turnkey file servers. Storage networks can be realized with NAS servers by installing an additional LAN between the NAS server and the application servers. In contrast to NAS, in Fibre Channel, FCoE, and iSCSI the data exchange between servers and storage devices takes place in a block-based fashion; NAS transfers files or file fragments. NAS servers have only limited suitability as data storage for databases due to lack of performance, and storage networks are more difficult to configure. On the other hand, Fibre Channel supplies optimal performance for the data exchange between server and storage device. Fibre Channel offers a greater sustained data rate (1 GFC–16 GFC at the time of writing this book), loop or

switched networks for device sharing and low latency, distances of hundreds of kilometers or greater with extension devices, virtually unlimited devices in a fabric, nondisruptive device addition, and centralized management with local or remote access.

Storage Architectures
Storage systems are designed in a way that customers can purchase some amount of capacity initially with the option of adding more as the existing capacity fills up with data. Increasing storage capacity is an important task for keeping up with data growth. This is the reason IT organizations tend to spend such a high percentage of their budget on storage every year. The cost of the added capacity depends on the price of the devices, which can be relatively high with some storage systems.

Scale-up and Scale-out Storage
Storage systems increase their capacity in one of two ways: by adding storage components to an existing system or by adding storage systems to an existing group of systems. Scaling architectures determine how much capacity can be added, how quickly it can be added, the impact on application availability, and how much work and planning is involved.

An architecture that adds capacity by adding storage components to an existing system is called a scale-up architecture, and products designed with it are generally classified as monolithic storage. A scale-up solution is a way to increase only the capacity, but not the number of the storage controllers. High availability with scale-up storage is achieved through component redundancy that eliminates single points of failure. Scale-up storage systems tend to be the least flexible and involve the most planning and longest service interruptions when adding capacity, and by software upgrade processes that take several steps to complete. In addition, the lifespan of storage products built on these architectures is typically four years or less.

An architecture that adds capacity by adding individual storage systems, or nodes, to a group of systems is called a scale-out architecture. Products designed with this architecture are often classified as distributed. A scale-out solution is like a set of multiple arrays with a single logical view, allowing system updates to be rolled back, if necessary.

Tiered Storage
Many storage environments support a diversity of needs and use disparate technologies that cause storage sprawl. In a large-scale storage infrastructure, this environment yields a suboptimal storage design that can be improved only with a focus on data access characteristics analysis and management to provide optimum performance. The Storage Networking Industry Association (SNIA) standards group defines tiered storage as "storage that is physically partitioned into multiple distinct classes based on price, performance or other attributes. Data may be dynamically moved among classes in a tiered storage implementation

based on access activity or other considerations."

Tiered storage is a mix of high-performing/high-cost storage with low-performing/low-cost storage, placing data based on specific characteristics, such as the performance needs, age, and importance of data availability. In Figure 6-13, you can see the concept and a cost versus performance relationship of a tiered storage environment.

Figure 6-13 Cost Versus Performance in a Tiered Storage Environment

Typically, an optimal design keeps the active operational data in Tier 0 and Tier 1 and uses the benefits associated with a tiered storage approach mostly related to cost. A tiered storage approach can provide the performance you need and save significant costs associated with storage, because lower-tier storage is less expensive. By introducing solid-state drive (SSD) storage as tier 0, you might more efficiently address the highest performance needs while reducing the Enterprise class storage, system footprint, and energy costs. Environmental savings, such as energy, footprint, and cooling reductions, are possible. However, the overall management effort increases when managing storage capacity and storage performance needs across multiple storage classes. Table 6-4 lists the different storage tier levels.

Table 6-4 Comparison of Storage Tier Levels .

SAN Design
SAN design doesn't have to be rocket science. Modern SAN design is about deploying ports and switches in a configuration that provides flexibility and scalability. It is also about making sure the network design and topology look as clean and functional one, two, or five years later as the day they were first deployed.

In the traditional FC SAN design, the FC network is built on two separate networks (commonly called path A and path B), and each end node (host or storage) is connected to both networks; each host and storage device is dual-attached to the network. This is primarily motivated by a desire to achieve 99.999 percent availability. To achieve 99.999 percent availability, the network should be built with redundant switches that are not interconnected. Figure 6-14 illustrates typical dual path FC-SAN designs. Some companies take the same approach with their traditional IP/Ethernet networks, but most do not for reasons of cost. Because the traditional FC SAN design doubles the cost of network implementation, many companies are actively seeking alternatives. Some companies are looking to iSCSI and FCoE as the answer, and others are considering single path FC SANs.

Figure 6-14 SAN A and SAN B FC Networks

Infrastructures for data access necessarily include options for redundant server systems. Redundant server systems and filing systems are created through one of two approaches: clustered systems or server farms. Farms are loosely coupled individual systems that have common access to shared data, and clusters are tightly coupled systems that function as a single, fault-tolerant system. Server filing systems are clearly in the domain of storage, and, in general, servers are the starting point for most data access. Although server systems might not necessarily be thought of as storage elements, they are clearly key pieces of data access infrastructures.
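The dual-fabric design above is motivated by simple probability: two independent fabrics are both unavailable only when each fails at the same time. A back-of-the-envelope Python sketch follows, assuming path A and path B fail independently and using an illustrative (not vendor-quoted) per-fabric availability figure.

def dual_fabric_availability(single_fabric: float) -> float:
    """Availability of two independent fabrics: 1 - P(both down at once)."""
    return 1 - (1 - single_fabric) ** 2

# With an illustrative 99.9%-available single fabric, two independent
# fabrics comfortably exceed the 99.999 percent ("five nines") target:
print(dual_fabric_availability(0.999))  # 0.999999

This independence assumption is exactly why the text insists the redundant switches not be interconnected: a link between the fabrics creates a shared failure domain and weakens the math.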

Multipathing software depends on having redundant I/O paths between systems and storage. This includes switches and network cables that are being used to transfer I/O data between a computer and its storage. Multipathing establishes two or more SCSI communication connections between a host system and the storage it uses. If one of these communication connections fails, another SCSI communication connection is used in its place. From a storage I/O perspective, changing an I/O path involves changing the initiator used for I/O transmissions and, by extension, all downstream connections. Figure 6-15 portrays two types of multipathing: Active/Active, balanced I/O over both paths (implementation specific), and Active/Passive, I/O over primary path—switches to standby path upon failure.

Figure 6-15 Storage Multipathing Failover

A single multiport HBA can have two or more ports connecting to the SAN that can be used by multipathing software. However, although multiport HBAs provide path redundancy, most current multipathing implementations use dual HBAs to provide redundancy for the HBA adapter.
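As a rough illustration of the two multipathing modes in Figure 6-15, the following Python sketch round-robins I/O across healthy paths in active/active mode, and pins I/O to the primary path with failover to the standby in active/passive mode. Real multipathing drivers are considerably more sophisticated; this models only the path-selection policy, and the path names are hypothetical.

import itertools

def make_path_selector(mode: str, paths: list[str]):
    """Return a function that picks the path for the next I/O.

    'active/active'  -- balanced I/O over the paths (round-robin here;
                        the balancing policy is implementation specific)
    'active/passive' -- I/O over the primary path, switching to a
                        standby path upon failure
    """
    rr = itertools.cycle(paths)

    def select(healthy: set[str]) -> str:
        # healthy is assumed to be a subset of the configured paths.
        if not healthy:
            raise RuntimeError("no path to storage")
        if mode == "active/passive":
            return paths[0] if paths[0] in healthy else next(iter(healthy))
        while True:                      # active/active: skip failed paths
            p = next(rr)
            if p in healthy:
                return p

    return select

select = make_path_selector("active/active", ["A", "B"])
print([select({"A", "B"}) for _ in range(4)])  # ['A', 'B', 'A', 'B']
print(select({"B"}))                           # 'B' once path A has failed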

Multipathing is not the only automated way to recover from a network problem in a SAN. SAN switches use the Fabric Shortest Path First (FSPF) routing protocol to converge new routes through the network following a change to the network configuration, including link or switch failures. FSPF is the standard routing protocol used in Fibre Channel fabrics. FSPF automatically calculates the best path between any two devices in a fabric by dynamically computing routes, establishing the shortest and quickest path between any two devices. It also selects an alternative path in the event of failure of the primary path. Multipathing software in a host system can use new network routes that have been created in the SAN by switch routing algorithms. As long as the new network route allows the storage path's initiator and LUN to communicate, the storage process uses the route. This depends on switches in the network recognizing a change to the network and completing their route convergence process prior to the I/O operation timing out in the multipathing software. Considering this, it could be advantageous to implement multipathing so that the timeout values for storage paths exceed the times needed to converge new network routes.

SAN Design Considerations
The underlying principles of SAN design are relatively straightforward: plan a network topology that can handle the number of ports necessary now and into the future; design a network topology with a given end-to-end performance and throughput level in mind, taking into account any physical requirements of the design; and provide the necessary connectivity with remote data centers to handle the business requirements of business continuity and disaster recovery. These underlying principles fall into five general categories:

Port density and topology requirements: Number of ports required now and in the future
Device performance and oversubscription ratios: Determination of what is acceptable and what is unavoidable
Traffic management: Preferential routing or resource allocation
Fault isolation: Consolidation while maintaining isolation
Control plane scalability: Reduced routing complexity

1. Port Density and Topology Requirements
The single most important factor in determining the most suitable SAN design is the number of end ports required now and over the anticipated lifespan of the design. As an example, the design for a SAN that will handle a network with 100 end ports will be very different from the design for a SAN that has to handle a network with 1500 end ports. From a design standpoint, it is typically better to overestimate the port count requirements than to underestimate them. Designing for a 1500-port SAN does not necessarily imply that 1500 ports need to be purchased initially, or even at all. It is about helping ensure that a design remains functional when that number of ports is attained, rather than later finding the design is unworkable.

Overestimating is prudent because redesigning and reengineering a network topology become both more time-consuming and more difficult as the number of devices on a SAN expands. As a minimum, the lifespan for any design should encompass the depreciation schedule for equipment, typically three years or more; preferably, a design should last longer than this. Although it is difficult to predict future requirements and capabilities, for new environments it is not difficult to plan based on an estimate of the immediate server connectivity requirements, coupled with an estimated growth rate of 30 percent per year. Where existing SAN infrastructure is present, determining the approximate port count requirements is not difficult, but once again, it is more difficult to determine future port-count growth requirements. You can use the current number of end-port devices and the increase in the number of devices during the previous 6, 12, and 18 months as rough guidelines for the projected growth in the number of end-port devices in the future. A design should also consider physical space requirements. For example, is the data center all on one floor? Is it all in one building? Is there a desire to use lower-cost connectivity options such as iSCSI for servers with minimal I/O requirements? Do you want to use IP SAN extension for disaster recovery connectivity? Any design should also consider increases in future port speeds, protocols, and densities. Figure 6-16 portrays the major SAN design factors.

Figure 6-16 SAN Major Design Factors
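As a simple illustration of the sizing guidance above, the following sketch projects end-port counts at the 30 percent annual growth rate over a three-year depreciation window. The starting port count is an arbitrary assumption:

# Project end-port requirements at 30% annual growth.
ports = 400                  # assumed immediate server connectivity requirement
growth = 0.30

for year in range(1, 4):
    ports *= 1 + growth
    print(f"Year {year}: plan for ~{round(ports)} end ports")
# Year 1: ~520, Year 2: ~676, Year 3: ~879 -> size the topology for
# roughly 880 end ports even though only 400 are needed on day one.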

Unused module slots in switches that have a proven investment protection record open the possibility of future expansion.

2. Device Performance and Oversubscription Ratios
Oversubscription, in a SAN switching environment, is the practice of connecting multiple devices to the same switch port to optimize switch use. Oversubscription is a necessity of any networked infrastructure and directly relates to the major benefit of a network: to share common resources among numerous clients. The higher the rate of oversubscription, the lower the cost of the underlying network infrastructure and shared resources. Because storage subsystem I/O resources are not commonly consumed at 100 percent all the time by a single client, multiple slower devices may fan in to a single port to take advantage of unused capacity. SAN switch ports are rarely run at their maximum speed for a prolonged period.

When considering all the performance characteristics of the SAN infrastructure and the servers and storage devices, two oversubscription metrics must be managed: IOPS and the network bandwidth capacity of the SAN. The two metrics are closely related, although they pertain to different elements of the SAN. IOPS performance relates only to the servers and storage devices and their ability to handle high numbers of I/O operations, whereas bandwidth capacity relates to all devices in the SAN, including the SAN infrastructure itself. On the server side, the required bandwidth is strictly derived from the I/O load, which is derived from factors including I/O size, CPU capacity, application I/O requests, the percentage of reads versus writes, and the I/O service time from the target device. On the storage side, the supported bandwidth is again strictly derived from the IOPS capacity of the disk subsystem itself, including the system architecture, cache, disk controllers, and actual disks.

Note
A fan-out ratio is the relationship in quantity between a single port on a storage device and the number of servers that are attached to it. Fan-in is how many storage ports can be served from a single host channel.

Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. Key factors in deciding the optimum fan-out ratio are server HBA queue depth, storage device input/output operations per second (IOPS), and port throughput. It is important to know the fan-out ratio in a SAN design so that each server gets optimal access to storage resources. When the fan-out ratio is high and the storage array becomes overloaded, application performance will be affected negatively. Too low a fan-out ratio, however, results in an uneconomic use of storage. A fan-out ratio of storage subsystem ports can be achieved based on the I/O demands of various applications and server platforms. Figure 6-17 portrays the major SAN oversubscription factors.

Figure 6-17 SAN Oversubscription Design Considerations
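The fan-out arithmetic itself is simple division. The following sketch, with made-up port counts, checks a design against the vendor guideline range cited above:

def fan_out_ratio(server_ports: int, storage_ports: int) -> float:
    """Servers attached per storage subsystem client-side port."""
    return server_ports / storage_ports

ratio = fan_out_ratio(server_ports=96, storage_ports=8)
print(f"fan-out = {ratio:.0f}:1")            # fan-out = 12:1
if 7 <= ratio <= 15:                         # typical vendor guideline range
    print("within the commonly recommended 7:1 to 15:1 range")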

In most cases, neither application server HBAs nor disk subsystem client-side controllers can handle full wire-rate sustained bandwidth. Although ideal scenario tests can be contrived using larger I/Os, large CPUs, and sequential I/O operations to show wire-rate performance, this is far from a practical real-world implementation. In more common scenarios, I/O composition, server-side resources, and application I/O patterns do not result in sustained full-bandwidth utilization. Because of this fact, oversubscription can be safely factored into SAN design. However, you must account for burst I/O traffic, which might temporarily require high-rate I/O service. The general principle in optimizing design oversubscription is to group applications or servers that burst high I/O rates at different time slots within the daily production cycle. This grouping can examine either complementary application I/O profiles or careful scheduling of I/O-intensive activities such as backups and batch jobs. In this case, peak-time I/O traffic contention is minimized, and the SAN design oversubscription has little effect on I/O contention. A best practice would be to build a SAN design using a topology that derives a relatively conservative oversubscription ratio (for example, 8:1), coupled with monitoring of the traffic on the switch ports connected to storage arrays and Inter-Switch Links (ISL) to see whether bandwidth is a limiting factor. If bandwidth is not the limiting factor, application server performance is acceptable, and application performance can be monitored closely, the oversubscription ratio can be increased gradually to a level that maximizes performance while minimizing cost.

3. Traffic Management

Are there any differing performance requirements for different application servers? Should bandwidth be reserved, or preference be given to certain traffic in the case of congestion? Given two alternate traffic paths between data centers with differing distances, should traffic use one path in preference to the other? For some SAN designs, it makes sense to implement traffic management policies that influence traffic flow and relative traffic priorities.

4. Fault Isolation
Consolidating multiple areas of storage into a single physical fabric both increases storage utilization and reduces the administrative overhead associated with centralized storage management. The major drawback is that faults are no longer isolated within individual storage areas. Many organizations would like to consolidate their storage infrastructure into a single physical fabric, but both technical and business challenges make this difficult. Technology such as virtual SANs (VSANs; see Figure 6-18) enables this consolidation while increasing the security and stability of Fibre Channel fabrics by logically isolating devices that are physically connected to the same set of switches. Faults within one fabric (VSAN) are contained within that fabric and are not propagated to other fabrics.

Figure 6-18 Fault Isolation with VSANs

5. Control Plane Scalability
A SAN switch can be logically divided into two parts: a data plane, which handles the forwarding of data frames within the SAN, and a control plane, which handles switch management functions, routing protocols, and other Fibre Channel fabric services. Control plane traffic consists of Fibre Channel frames destined for the switch itself, such as Fabric Shortest Path First (FSPF) routing updates and keepalives, and name server and domain-controller queries. Because the control plane is critical to network operations, any service disruption to the control plane can result in business-impacting network outages. Control plane service disruptions (perpetrated either inadvertently or maliciously) are possible, typically through a high rate of traffic destined to the switch itself. These disruptions result in excessive CPU utilization and/or deprive the switch of CPU resources for normal processing. Control plane CPU deprivation can also occur when there is insufficient control plane CPU relative to the size of the network topology and a networkwide event (for example, loss of a major switch or a significant change in topology) occurs. A goal of SAN design should be to try to minimize the processing required with a given SAN topology. Attention should be paid to the CPU and memory resources available for control plane functionality and to port aggregation features such as Cisco port channels, which provide all the benefits of multiple parallel ISLs between switches (higher throughput and resiliency) but appear in the topology as a single logical link rather than multiple parallel links. Although FSPF itself provides for optimal routing between nodes, the Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric. That is, doubling the number of devices in a SAN can result in a quadrupling of the CPU processing required to maintain that routing. Control plane scalability is the primary reason storage vendors set limits on the number of switches and devices they have certified and qualified for operation in a single fabric.
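The quadratic growth in route computation is easy to see in a toy calculation (illustrative only; real FSPF implementations and their optimizations vary):

# FSPF is commonly Dijkstra-based, with worst-case cost proportional
# to the square of the number of nodes in the fabric.
for nodes in (10, 20, 40):
    print(f"{nodes:>3} nodes -> relative route-computation cost ~ {nodes**2:>5}")
# 10 -> 100, 20 -> 400, 40 -> 1600: each doubling of the fabric roughly
# quadruples the control plane work, which is why vendors cap fabric size.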

SAN Topologies
The Fibre Channel standard defines three different topologies: fabric, arbitrated loop, and point-to-point (see Figure 6-19). Point-to-point defines a bidirectional connection between two devices. Arbitrated loop defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time. Finally, fabric defines a network in which several devices can exchange data simultaneously at full bandwidth. A fabric basically requires one or more Fibre Channel switches connected together to form a control center between the end devices. Furthermore, the standard permits the connection of one or more arbitrated loops to a fabric. The fabric topology is the most frequently used of all topologies, and this is why more emphasis is placed upon the fabric topology than on the two other topologies in this chapter.

Figure 6-19 Fibre Channel Topologies

Common to all topologies is that devices (servers, storage devices, and switches) must be equipped with one or more Fibre Channel ports. In servers, the port is generally realized by means of so-called HBAs. A port always consists of two channels: one input and one output channel. The connection between two ports is called a link. In the point-to-point topology and in the fabric topology, the links are always bidirectional: in this case, the input channel and the output channel of the two ports involved in the link are connected by a cross. On the other hand, the links of the arbitrated loop topology are unidirectional: each output channel is connected to the input channel of the next port until the circle is closed, so that every output channel is connected to an input channel. The cabling of an arbitrated loop can be simplified with the aid of a hub; in this configuration, the end devices are bidirectionally connected to the hub, and the wiring within the hub ensures that the unidirectional data flow within the arbitrated loop is maintained. In an arbitrated loop, bandwidth is shared equally among all connected devices, of which there is a limit of 126: the more active devices that are connected, the less bandwidth is available to each. Switched FC fabrics are also aware of arbitrated loop devices when attached.

A switched fabric topology allows dynamic interconnections between nodes through ports connected to a fabric. It is possible for any port in a node to communicate with any port in another node connected to the same fabric, using either cut-through or store-and-forward switching, and adding a new device increases aggregate bandwidth. Switched fabrics theoretically support over 16 million addresses, compared to 126 for an arbitrated loop, but that exceeds the practical and physical limitation of the switches that make up a fabric. Like Ethernet, FC switches can be interconnected in any manner. Unlike Ethernet, there is a limit to the number of FC switches that can be interconnected: address space constraints limit FC SANs to a maximum of 239 switches. Cisco's virtual SAN (VSAN) technology increases the number of switches that can be physically interconnected by reusing the entire FC address space within each VSAN. FC switches employ a routing protocol called Fabric Shortest Path First (FSPF), based on a link-state algorithm; FSPF reduces all physical topologies to a logical tree topology.

The complexity and the size of an FC SAN increase as the number of connections increases. Most FC SANs are deployed in one of two designs, commonly known as the two-tier (core-edge) and three-tier (edge-core-edge) topologies. The core-only design is a star topology. Host-to-storage FC connections are generally redundant, and in both the core-only and core-edge designs, the redundant paths are usually not interconnected. The edge switches in the core-edge design may be connected to both core switches, but this creates one physical network and compromises resilience against networkwide disruptions (for example, FSPF convergence). On the other hand, single host-to-storage FC connections are common in cluster and grid environments because host-based failover mechanisms are inherent to such environments. Figure 6-20 illustrates the typical FC SAN topologies.

Figure 6-20 Sample FC SAN Topologies
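The address-space figures quoted above follow directly from the 24-bit FC address format, as a quick calculation shows:

# 24-bit FC address: 8-bit domain, 8-bit area, 8-bit port.
total_addresses = 2 ** 24
usable_domains = 0xEF - 0x01 + 1     # domains 01-EF are usable switch IDs

print(f"theoretical fabric addresses: {total_addresses:,}")  # 16,777,216
print(f"maximum switches per fabric:  {usable_domains}")     # 239
print("arbitrated loop devices:       126 (127 valid AL_PAs minus the FL port)")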

The Top-of-Rack (ToR) and End-of-Row (EoR) designs represent how access switches and servers are connected to each other. Both have a direct impact on a major part of the entire data center cabling system. ToR designs are based on intra-rack cabling between servers and smaller switches, which can be installed in the same racks as the servers. Although these designs reduce the amount of cabling and optimize the space used by network equipment, they present the storage team with the challenge of managing a higher number of devices. EoR designs are based on inter-rack cabling between servers and high-density switches installed in the same row as the server racks. EoR designs reduce the number of network devices and optimize port utilization on the network devices, but EoR flexibility taxes data centers with a great quantity of horizontal cabling running under the raised floor. The best design choice leans on the number of servers per rack, the data rate for the connections, the budget, and the operational complexity.

Typically, in a SAN design, the number of end devices determines the number of fabric logins needed. The increase in blade server deployments and the consolidation of servers through the use of server virtualization technologies affect the design of the network. With the use of features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins needed has increased even more. The proliferation of NPIV-capable end devices, such as host bus adapters (HBAs) and Cisco NPV-mode switches, makes the number of fabric logins needed on a per-port, per-switch, per-line-card, and per-physical-fabric basis a critical consideration. The total number of hosts and NPV switches determines the number of fabric logins required on the core switch. The fabric login limits determine the design of the current SAN as well as its potential for future growth.
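The fabric login point can be made concrete with a back-of-the-envelope count. All of the numbers here are hypothetical:

# Each NPV switch and each physical host performs a fabric login; with NPIV,
# every virtual machine behind a host HBA adds a login of its own, so the
# core switch sees far more logins than it has physical ports.
npv_switches = 8            # edge switches running in NPV mode
hosts_per_npv = 32          # physical hosts behind each NPV switch
vms_per_host = 10           # NPIV logins per host (assumption)

logins = npv_switches + npv_switches * hosts_per_npv * (1 + vms_per_host)
print(f"fabric logins needed on the core switch: {logins}")   # 2824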

SAN designs that can be built using a single physical switch are commonly referred to as collapsed core designs. (See Figure 6-21.) This terminology refers to the fact that a design is conceptually a core/edge design, making use of core ports (non-oversubscribed) and edge ports (oversubscribed), but that it has been collapsed into a single physical switch. Traditionally, a collapsed core design on Cisco MDS 9000 family switches would utilize both non-oversubscribed (storage) and oversubscribed (host-optimized) line cards.

Figure 6-21 Sample Collapsed Core FC SAN Design

SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide increased physical protection (for example, protection against a sprinkler head failing above a switch) and protection against equipment failure. Multipathing software should be deployed on the hosts to manage connectivity between the host and storage so that I/O uses both paths and there is nondisruptive failover between fabrics in the event of a problem in one fabric.

Fibre Channel
Fibre Channel (FC) is the predominant architecture upon which SAN implementations are built. Fibre Channel is a technology standard that allows data to be transferred at extremely high speeds; current implementations support data transfers at up to 16 Gbps or even more. Fibre Channel was developed through industry cooperation, unlike Small Computer System Interface (SCSI), which was developed by a vendor and submitted for standardization afterward. Many standards bodies, technical associations, vendors, and industry-wide consortiums accredit the Fibre Channel standard, and there are many products on the market that take advantage of the high-speed and high-availability characteristics of the Fibre Channel architecture.

Note
Is it Fibre or Fiber? Fibre Channel was originally designed to support fiber-optic cabling only. When copper support was added, the committee decided to keep the name in principle but to use the UK English spelling (Fibre) when referring to the standard. The U.S. English spelling (Fiber) is retained when referring generically to fiber optics and cabling.

Fibre Channel is an open, technical standard for networking that incorporates the channel transport characteristics of an I/O bus with the flexible connectivity and distance characteristics of a traditional network. Because of its channel-like qualities, hosts and applications see storage devices that are attached to the SAN as though they are locally attached storage. Because of its network characteristics, it can support multiple protocols and a broad range of devices, and it can be managed as a network. Fibre Channel is structured by a lengthy list of standards; Figure 6-22 portrays just a few of them. Detailed information on all FC standards can be downloaded from T11 at www.t11.org.

Figure 6-22 Fibre Channel Standards

The Fibre Channel protocol stack is subdivided into five layers (see Figure 6-10 in the "Block Level Protocols" section). The lower four layers, FC-0 to FC-3, define the fundamental communication techniques: the physical levels, the transmission, and the addressing. The upper layer, FC-4, defines how application protocols (upper layer protocols, or ULPs) are mapped on the underlying Fibre Channel network. The use of the various ULPs decides whether a real Fibre Channel network is used as an IP network, as a Fibre Channel SAN (storage network), or as both at the same time. The link services and fabric services are located quasi-adjacent to the Fibre Channel protocol stack. These services are required to administer and operate a Fibre Channel network.

FC-0: Physical Interface
FC-0 defines the physical transmission medium (cable, plug) and specifies which physical signals are used to transmit the bits 0 and 1. In contrast to the SCSI bus, in which each bit has its own data line plus additional control lines, Fibre Channel transmits the bits sequentially via a single line.

Fibre Channel can use either optical fiber (for distance) or copper cable links (for short distances at low cost). Mixing fiber-optic and copper components in the same environment is supported, although not all products provide that flexibility. Copper cables tend to be used for short distances, up to 30 meters (98 feet), and can be identified by their DB-9, 9-pin connectors. Fiber-optic cables have a major advantage in noise immunity. Overall, optical fibers provide a high-performance transmission medium, which was refined and proven over many years.

Normally, fiber-optic cabling is referred to by mode, or the frequencies of light waves that are carried by a particular cable type. Multimode fiber (MMF) allows more than one mode of light to travel within the fiber. Multimode cabling is used with shortwave laser light and has either a 50-micron or a 62.5-micron core with a cladding of 125 micron. The 50-micron or 62.5-micron diameter is sufficiently large for injected light waves to be reflected off the core interior. Multimode fiber is for shorter distances: MMF is better suited for shorter-distance applications, and if the distance is short, MMF is more economical because it can be used with inexpensive connectors and laser devices, which reduces the total system cost. Single-mode fiber (SMF) allows only one pathway, or mode, of light to travel within the fiber. The core size is typically 8.3 micron. Single-mode fiber is for longer distances: SMFs are used in applications where low signal loss and high data rates are required and where repeater and amplifier spacing needs to be maximized. An example of this type of application is on long spans between two system or network devices.

Fibre Channel architecture supports both shortwave and longwave optical transmitter technologies in the following ways:

Short wave laser: This technology uses a wavelength of 780 nanometers and is compatible only with MMF.
Long wave laser: This technology uses a wavelength of 1300 nanometers. It is compatible with both SMF and MMF.

Product flexibility needs to be considered when you plan a SAN. Where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable. If the distance is short, a standard fiber cable suffices. Over a slightly longer distance (for example, from one building to the next), a fiber link might need to be laid, but it is not as simple as connecting two switches together in a single rack; this fiber might need to be laid underground or through a conduit. If the two units that need to be connected are in different cities, the problem is much larger, and some form of long-distance fiber-optic link is required. Because most businesses are not in the business of laying cable, they lease fiber-optic cables to meet their needs. When a company leases such equipment, the fiber-optic cable that it leases is known as dark fiber. Dark fiber generically refers to a long, dedicated fiber-optic link that can be used without the need for any additional equipment.

FC-1: Encode/Decode
FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable (the 8b/10b data transmission code scheme patented by IBM). FC-1 also describes certain transmission words

(ordered sets) that are required for the administration of a Fibre Channel connection (link control protocol).

Encoding and Decoding
To transfer data over a high-speed serial interface, the data is encoded before transmission and decoded upon reception. The encoding process ensures that sufficient clock information is present in the serial data stream. This information allows the receiver to synchronize to the embedded clock information and successfully recover the data at the required error rate. The 8b/10b encoding process converts each 8-bit byte into two possible 10-bit characters. This scheme is called 8b/10b encoding because it refers to the number of data bits input to the encoder and the number of bits output from the encoder. The 8b/10b encoding logic finds almost all errors, including errors that a parity check cannot: a parity check does not find the even-numbered bit errors, only the odd ones. The format of the 8b/10b character is Dxx.y or Kxx.y, where:

D = Data character
K = Special character
xx = Decimal value of the 5 least significant bits
y = Decimal value of the 3 most significant bits

Communications at 10 and 16 Gbps use 64b/66b encoding. The overhead of the 64b/66b encoding is considerably less than that of the more common 8b/10b encoding scheme. Sixty-four bits of data are transmitted as a 66-bit entity. The 66-bit entity is made by prefixing one of two possible 2-bit preambles to the 64 bits to be transmitted. If the preamble is 01, the 64 bits are entirely data. If the preamble is 10, an 8-bit type field follows, plus 56 bits of control information and data. The preambles 00 and 11 are not used and generate an error if seen. The use of the 01 and 10 preambles guarantees a bit transition every 66 bits, which means that a continuous stream of zeros or ones cannot be valid data. It also allows easier clock and timer synchronization, because a transition must be seen every 66 bits.

Ordered Sets
Fibre Channel uses a command syntax, known as an ordered set, to move the data across the network. The information sent on a link consists of transmission words containing four transmission characters each. A transmission word consists of four continuous transmission characters treated as a unit and is 40 bits long. There are two categories of transmission words: data words and ordered sets. Data words occur between the start-of-frame (SOF) and end-of-frame (EOF) delimiters. Ordered sets are 4-byte transmission words that contain data and special characters that have a special meaning; they delimit frame boundaries and occur outside of frames, immediately preceding or following the contents of a frame. An ordered set always begins with the special transmission character K28.5, which is outside the normal data space and also establishes word boundary alignment, aligned on a word boundary. Ordered sets provide the ability to obtain bit and word synchronization. Three major types of ordered sets are defined by the signaling protocol, depending upon the type of function provided and whether the frame contains a payload. The frame delimiters, the start-of-frame (SOF) and end-of-frame (EOF) ordered sets, establish the boundaries of a frame. There are 11 types of SOF and 8 types of EOF delimiters defined for the fabric and N_Port sequence control.
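Both the encoding efficiencies and the Dxx.y/Kxx.y naming convention described above can be verified with a few lines of arithmetic:

# Encoding efficiency: payload bits / transmitted bits.
print(f"8b/10b efficiency:  {8 / 10:.1%}")    # 80.0% (20% line overhead)
print(f"64b/66b efficiency: {64 / 66:.1%}")   # 97.0% (~3% line overhead)

def char_name(value: int, special: bool = False) -> str:
    """Dxx.y / Kxx.y: xx = low 5 bits, y = high 3 bits of the 8-bit value."""
    xx = value & 0b11111         # 5 least significant bits
    y = value >> 5               # 3 most significant bits
    return f"{'K' if special else 'D'}{xx}.{y}"

print(char_name(0xBC, special=True))   # K28.5, the ordered-set lead character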

A primitive sequence is an ordered set that is transmitted and repeated continuously to indicate specific conditions within a port, or the set might indicate conditions that are encountered by the receiver logic of a port. When a primitive sequence is received and recognized, a corresponding primitive sequence or idle is transmitted in response. Recognition of a primitive sequence requires consecutive detection of three instances of the same ordered set. The primitive sequences that are supported by the standard are illustrated in Table 6-5. The two primitive signals, idle and receiver ready (R_RDY), are ordered sets that are designated by the standard to have a special meaning. An idle is a primitive signal that is transmitted on the link to indicate that an operational port facility is ready for frame transmission and reception. The R_RDY primitive signal indicates that the interface buffer is available for receiving further frames.

Table 6-5 Fibre Channel Primitive Sequences

FC-2: Framing Protocol
FC-2 is the most comprehensive layer in the Fibre Channel protocol stack. It determines how larger data units (for example, a file) are transmitted via the Fibre Channel network. It regulates the flow control that ensures that the transmitter sends data only at a speed that the receiver can process. And it defines various service classes that are tailored to the requirements of various applications. Multiple exchanges are initiated between initiators (hosts) and targets (disks). Each exchange consists of one or more bidirectional sequences (see Figure 6-23).

Figure 6-23 Fibre Channel FC-2 Hierarchy

Each sequence consists of one or more frames. A sequence is a larger data unit that is transferred from a transmitter to a receiver; a sequence could be representing an individual database transaction, for example. FC-2 guarantees that sequences are delivered to the receiver in the same order in which they were sent from the transmitter, hence the name "sequence." Furthermore, sequences are only delivered to the next protocol layer up when all frames of the sequence have arrived at the receiver. Only one sequence can be transferred after another within an exchange. For the SCSI-3 ULP, each exchange maps to a SCSI command.

A Fibre Channel network transmits control frames and data frames. Control frames contain no useful data; they signal events such as the successful delivery of a data frame. Data frames transmit up to 2,112 bytes of useful data, so larger sequences have to be broken down into several frames. A Fibre Channel frame consists of a header, useful data (payload), and a Cyclic Redundancy Checksum (CRC) (see Figure 6-24). The frame is bracketed by a start-of-frame (SOF) delimiter and an end-of-frame (EOF) delimiter. Finally, six filling words must be transmitted on the link between two frames. The CRC checking procedure is designed to recognize all transmission errors if the underlying medium does not exceed the specified error rate of 10^-12. In contrast to Ethernet and TCP/IP, Fibre Channel is an integrated whole: the layers of the Fibre Channel protocol stack are so well harmonized with one another that the ratio of payload to protocol overhead is very efficient, at up to 98 percent.
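That payload-to-overhead ratio can be reproduced from the frame format just described (a sketch; the byte counts are the standard delimiter, header, and CRC sizes):

# Full-size FC data frame: 2,112-byte payload inside a 2,148-byte frame
# (SOF 4 + header 24 + payload 2112 + CRC 4 + EOF 4 = 2148 bytes).
sof, header, payload, crc, eof = 4, 24, 2112, 4, 4
frame = sof + header + payload + crc + eof

print(f"frame size: {frame} bytes")                  # 2148
print(f"payload efficiency: {payload / frame:.1%}")  # 98.3%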

Figure 6-24 Fibre Channel Frame Format

Fibre Channel Service Classes
The Fibre Channel standard defines six service classes for the exchange of data between end devices. Three of these defined classes (Class 1, Class 2, and Class 3) are realized in products available on the market. In addition, Class F serves for the data exchange between the switches within a fabric. Class 1 defines a connection-oriented communication connection between two node ports: a Class 1 connection is opened before the transmission of frames. Class 2 and Class 3, on the other hand, are packet-oriented services that realize a datagram service: no dedicated connection is built up. Instead, the frames are individually routed through the Fibre Channel network. A port can thus maintain several connections at the same time, and several Class 2 and Class 3 connections can share the bandwidth. In Class 2, the receiver acknowledges each received frame; Class 2 uses both end-to-end flow control and link flow control. Class 3 achieves less than Class 2: frames are not acknowledged, which means that only link flow control takes place, not end-to-end flow control. Instead, the higher protocol layers must notice for themselves whether a frame has been lost. Almost all new Fibre Channel products (HBAs, switches, storage devices) support the service classes Class 2 and Class 3, with hardly any products providing the connection-oriented Class 1. In Fibre Channel SAN implementations, the end devices themselves negotiate whether they communicate by Class 2 or Class 3. Table 6-6 lists the different Fibre Channel classes of service.

Table 6-6 Fibre Channel Classes of Service

Fibre Channel Flow Control
Flow control ensures that the transmitter sends data only at a speed that the receiver can receive it. Fibre Channel uses a credit model. Each credit represents the capacity of the receiver to receive a Fibre Channel frame. If the receiver awards the transmitter a credit of 3, the transmitter may send the receiver only three frames. The transmitter may not send further frames until the receiver has acknowledged the receipt of at least some of the transmitted frames. FC-2 defines two mechanisms for flow control: end-to-end flow control and link flow control (see Figure 6-25). In end-to-end flow control, two end devices negotiate the end-to-end credit before the exchange of data. The end-to-end flow control is realized on the HBA cards of the end devices. The link flow control, by contrast, takes place at each physical connection; this means that the link flow control also takes place at the Fibre Channel switches. Buffer-to-buffer credits are used in Class 2 and Class 3 services between a node and a switch. Two communicating ports achieve this by negotiating the buffer-to-buffer credits. Initial credit levels are established with the fabric at login and then replenished upon receipt of a receiver-ready (R_RDY) from the target. The sender decrements one BB_Credit for each frame sent and increments one for each R_RDY received. BB_Credits are the more practical application and are very important for switches that are communicating across an extension (E_Port) because of the inherent delays present with transport over LAN/WAN/MAN.
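The credit mechanics behave like a simple counter, as in this toy sketch (no timing, buffering, or link errors are modeled):

class BBCreditPort:
    """Toy model of buffer-to-buffer credit pacing on one link."""
    def __init__(self, credits: int):
        self.credits = credits           # negotiated at fabric login

    def send_frame(self) -> bool:
        if self.credits == 0:            # throttled: must wait for an R_RDY
            return False
        self.credits -= 1                # one credit consumed per frame sent
        return True

    def receive_r_rdy(self) -> None:
        self.credits += 1                # receiver has freed one frame buffer

port = BBCreditPort(credits=3)
print([port.send_frame() for _ in range(4)])   # [True, True, True, False]
port.receive_r_rdy()
print(port.send_frame())                       # True: a credit was replenished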

Figure 6-25 Fibre Channel End-to-End and Buffer-to-Buffer Flow Control Mechanism

End-to-end credits are used in Class 1 and Class 2 services between two end nodes. After the initial credit level is set, the sender decrements one EE_Credit for each frame sent and increments one when an acknowledgement (ACK) is received. Flow control management occurs between two connected Fibre Channel ports to prevent a target device from being overwhelmed with frames from the initiator device. Buffer-to-buffer credits (BB_Credits) are negotiated between each device in an FC fabric; there is no concept of end-to-end buffering. Hop-by-hop traffic flow is paced by the return of Receiver Ready (R_RDY) frames, and a port can transmit only up to the number of BB_Credits before traffic is throttled. FC frames are buffered and queued in intermediate switches. One buffer is used per FC frame, regardless of its frame size: a small FC frame uses the same buffer as a large FC frame. Insufficient BB_Credits will throttle performance: no data will be transmitted until an R_RDY is returned. As distance increases or frame size decreases, the number of BB_Credits required increases as well. FC flow control is critical to the efficiency of the SAN fabric, regardless of the number of switches in the network. Figure 6-26 portrays Fibre Channel frame buffering on an FC switched fabric.

Figure 6-26 Fibre Channel Frame Buffering

The optimal buffer credit value can be calculated by determining the transmit time per FC frame and dividing that into the total round-trip time per frame. The speed of light over a fiber-optic link is approximately 5 μsec per km. For a 10-km link, the time it takes for a frame to travel from the sender to the receiver and for the response to return to the sender is 100 μsec (50 μsec × 2 = 100 μsec). If the sender waits for a response before sending the next frame, the maximum number of frames per second is 10,000 (1 sec / 0.0001 sec = 10,000). A Fibre Channel frame size is approximately 2 KB (2,148 bytes maximum). This implies that the effective data rate is approximately 20 MBps (2,000 bytes × 10,000 frames per second). The transmit time for 2,000 characters (a 2-KB frame) is approximately 20 μsec (2,000 bytes at the 1-Gbps, or 100-MBps, FC data rate). If the round-trip time is 100 μsec, then BB_Credit >= 5 (100 μsec / 20 μsec = 5). This helps keep the fiber-optic link full, without running out of BB_Credits. Table 6-7 illustrates the required BB_Credits for specific FC frame sizes at specific link speeds.

Table 6-7 Fibre Channel BB_Credits
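The worked example generalizes to a small calculator using the same rule-of-thumb constants (5 μsec/km propagation and the frame serialization time at the given data rate):

import math

def min_bb_credits(distance_km: float, rate_mbps: float = 100.0,
                   frame_bytes: int = 2000) -> int:
    """BB_Credits needed to keep a link full: round-trip time / transmit time."""
    rtt_us = distance_km * 5 * 2          # 5 usec/km, out and back
    tx_us = frame_bytes / rate_mbps       # 2000 B / 100 MBps = 20 usec
    return math.ceil(rtt_us / tx_us)

print(min_bb_credits(10))    # 5, matching the 10-km example above
print(min_bb_credits(50))    # 25 for a 50-km link at the same rate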

FC-3: Common Services
FC-3 has been in its conceptual phase since 1988; in currently available products, FC-3 is empty. The following functions are being discussed for FC-3:

Striping manages several paths between multiport end devices. Striping could distribute the frames of an exchange over several ports and thus increase the throughput between the two devices.
Multipathing combines several paths between two multiport end devices to form a logical path group. Failure or overloading of a path can be hidden from the higher protocol layers.
Compression and encryption of the data to be transmitted would preferably be realized in the hardware on the HBA.

Fibre Channel Addressing
Fibre Channel uses World Wide Names (WWNs) and Fibre Channel IDs (FCIDs). WWNs are unique identifiers that are hardcoded into Fibre Channel devices. Vendors buy blocks of WWNs from the IEEE and allocate them to devices in the factory. WWNs are important for enabling Cisco Fabric Services because they have these characteristics: they are guaranteed to be globally unique, and they are permanently associated with devices. These characteristics ensure that the fabric can reliably identify and locate devices. FCIDs, in contrast, are dynamically acquired addresses that are routable in a switch fabric.

Each Fibre Channel port has at least one WWN. The two types of WWNs are node WWNs (nWWN) and port WWNs (pWWN):

The nWWNs uniquely identify devices. Every HBA, array controller, switch, gateway, and Fibre Channel disk drive has a single unique nWWN. The nWWNs are required because the node itself must sometimes be uniquely identified.
The pWWNs uniquely identify each port in a device. Ports must be uniquely identifiable because each port participates in a unique data path.

The nWWNs and pWWNs are both needed because devices can have multiple ports. On a single-port device, the nWWN and pWWN may be the same. On multiport devices, the pWWN is used to uniquely identify each port; a dual-ported HBA, for example, has three WWNs: one nWWN and one pWWN for each port. Path failover and multiplexing software can detect redundant paths to a device by observing that the same nWWN is associated with multiple pWWNs. This capability is an important consideration for Cisco Fabric Services.
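The redundant-path detection just described amounts to grouping pWWNs by their nWWN. A sketch, with made-up WWN values:

from collections import defaultdict

# (nWWN, pWWN) pairs as a host might learn them from the fabric name server.
logins = [
    ("20:00:00:25:b5:00:00:01", "20:01:00:25:b5:00:00:01"),  # node 1, port 1
    ("20:00:00:25:b5:00:00:01", "20:02:00:25:b5:00:00:01"),  # node 1, port 2
    ("20:00:00:25:b5:00:00:02", "20:01:00:25:b5:00:00:02"),  # node 2, port 1
]

ports_by_node = defaultdict(list)
for nwwn, pwwn in logins:
    ports_by_node[nwwn].append(pwwn)

for nwwn, pwwns in ports_by_node.items():
    if len(pwwns) > 1:   # same node behind several ports = redundant paths
        print(f"{nwwn} is reachable via {len(pwwns)} ports: {pwwns}")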

When a management service or application needs to quickly locate a specific device, the following occurs:
1. The service or application queries the switch name server service with the WWN of the target device.
2. The name server looks up and returns the current port address that is associated with the target WWN.
3. The service or application communicates with the target device, using the port address.

Figure 6-27 portrays Fibre Channel addressing on an FC fabric.

Figure 6-27 Fibre Channel Naming on FC Fabric

The Fibre Channel point-to-point topology uses a 1-bit addressing scheme: one port assigns itself an address of 0x000000 and then assigns the other port an address of 0x000001. The FC-AL topology uses an 8-bit addressing scheme. The arbitrated loop physical address (AL-PA) is an 8-bit value, which provides 256 potential addresses; however, only a subset of 127 addresses is available because of the 8b/10b encoding requirements. One address is reserved for an FL port, so 126 addresses are available for nodes. Addresses are cooperatively chosen during loop initialization. Figure 6-28 portrays FC ID addressing.

Figure 6-28 FC ID Addressing

Switched Fabric Address Space
The 24-bit Fibre Channel address consists of three 8-bit elements:

The domain ID is used to define a switch. Each switch receives a unique domain ID.

The area ID is used to identify groups of ports within a domain. Areas can be used to group ports within a switch, and areas are also used to uniquely identify fabric-attached arbitrated loops; each fabric-attached loop receives a unique area ID.
The port ID identifies the individual port within the area.

Each switch must have a unique domain ID. Although the domain ID is an 8-bit field, only 239 domains are available to the fabric: Domains 01 through EF are available, and Domains 00 and F0 through FF are reserved for use by switch services. Therefore, there can be no more than 239 switches in a fabric. In practice, the number of switches in a fabric is far smaller because of storage-vendor qualifications.
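Decomposing a 24-bit FC ID into these three bytes is a two-line operation, as this sketch shows:

def parse_fcid(fcid: int) -> dict:
    """Split a 24-bit FC ID into domain (switch), area, and port fields."""
    return {"domain": (fcid >> 16) & 0xFF,
            "area":   (fcid >> 8) & 0xFF,
            "port":   fcid & 0xFF}

fcid = 0x0A1B2C
print(parse_fcid(fcid))          # {'domain': 10, 'area': 27, 'port': 44}
assert 0x01 <= parse_fcid(fcid)["domain"] <= 0xEF   # usable domain range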

Fibre Channel Link Services
Link services provide a number of architected functions that are available to the users of the Fibre Channel port. Three types of link services are defined: basic link services, extended link services, and FC-4 link services (generic services). Basic link service commands support low-level functions, such as aborting a sequence (ABTS) and passing control bit information.

Extended link services (ELS) are performed in a single exchange. Most of the ELSs are performed as a two-sequence exchange: a request from the originator and a response from the responder. The following are ELS command examples: N_Port Login (PLOGI), F_Port Login (FLOGI), Logout (LOGO), Process Login (PRLI), Process Logout (PRLO), State Change Notification (SCN), State Change Registration (SCR), Registered State Change Notification (RSCN), and Loop Initialize (LINIT). ELS service requests are not permitted prior to a port login, except the fabric login. Fabric Login (FLOGI) is used by the N_Port to discover whether a fabric is present. If a fabric is present, FLOGI provides the operating characteristics associated with the fabric (service parameters, such as the BB_Credit value, maximum payload size, and classes of service supported). The fabric also assigns, or confirms (if the port is trying to reuse an old ID), the N_Port identifier of the port initiating the login. N_Ports perform fabric login by transmitting the FLOGI extended link service command to the well-known address 0xFFFFFE of the fabric F_Port and exchanging service parameters; a new N_Port fabric login uses the source address 0x000000 and the destination address 0xFFFFFE.

Port login is required for generic services. Generic services include the following: Directory Service, Management Services, Alias Services, Time Services, and Key Distribution Services. FC-GS shares a Common Transport (CT) at the FC-4 level, and the CT provides access to the services; the Fibre Channel Common Transport (FC-CT) protocol is used as the transport medium for these services. Figure 6-29 portrays the FC fabric and port login/logout and query-to-switch process.

Figure 6-29 Fibre Channel Fabric, Port Login, and State Change Registration

1. FLOGI Extended Link Service (ELS): Initiator/target to login server (0xFFFFFE) for switch fabric login.
2. ACCEPT (ELS reply): Login server to initiator/target for fabric login acceptance. A Fibre Channel ID (Port_Identifier) is assigned by the login server to the initiator/target.
3. PLOGI (ELS): Initiator/target to name server (0xFFFFFC) for port login, to establish a Fibre Channel session.
4. ACCEPT (ELS reply): Name server to initiator/target, using the Port_Identifier as the D_ID, for port login acceptance.
5. SCR (ELS): Initiator/target issues a State Change Registration Command (SCRC) to the fabric controller (0xFFFFFD) for notification when a change in the switch fabric occurs.
6. ACCEPT (ELS reply): Fabric controller to initiator/target, acknowledging the registration command.
7. GA_NXT (Name Server Query): Initiator to name server (0xFFFFFC), requesting attributes about a specific port.
8. ACCEPT (Query reply): Name server to initiator with a list of attributes of the requested port.
9. PLOGI (ELS): Initiator to target to establish an end-to-end Fibre Channel session (see Figure 6-30).

Figure 6-30 Fibre Channel Port/Process Login to Target

10. ACCEPT (ELS reply): Target to initiator, acknowledging the initiator's port login and Fibre Channel session.
11. PRLI (ELS): Initiator to target process login, requesting an FC-4 session.
12. ACCEPT (ELS reply): Target to initiator, acknowledging session establishment. The initiator may now issue SCSI commands to the target.

Fibre Channel Fabric Services
In a Fibre Channel topology, the switches manage a range of information that operates the fabric. This information is managed by fabric services. All FC services have in common that they are addressed via FC-2 frames and that they can be reached at well-defined addresses. The fabric login server processes incoming fabric login requests at the address 0xFFFFFE. The name server administers a database on N_Ports under the address 0xFFFFFC. It stores information such as the port WWN, node WWN, port address, supported service classes, supported FC-4 protocols, and so on. N_Ports can register their own properties with the name server and request information on other N_Ports. Like all services, the name server appears as an N_Port to the other ports; N_Ports must log on with the name server by means of port login before they can use its services. The fabric controller manages changes to the fabric under the address 0xFFFFFD. N_Ports can register for state changes with the fabric controller (State Change Registration, or SCR). The fabric controller then informs registered N_Ports of changes to the fabric (Registered State Change Notification, or RSCN). Servers can use this service to monitor their storage devices. Table 6-8 lists the reserved Fibre Channel addresses.
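The well-known service addresses that appear throughout the login sequence can be collected into a small lookup table (the addresses are the reserved values named above; the helper function itself is illustrative):

# Well-known FC addresses for fabric services.
WELL_KNOWN = {
    0xFFFFFE: "Fabric login server",
    0xFFFFFD: "Fabric controller (SCR/RSCN)",
    0xFFFFFC: "Name server (directory service)",
}

def describe(address: int) -> str:
    return WELL_KNOWN.get(address, "regular N_Port address")

print(describe(0xFFFFFE))   # Fabric login server
print(describe(0x0A1B2C))   # regular N_Port address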

Table 6-8 FC-PH Has Defined a Block of Addresses for Special Functions (Reserved Addresses)

FC-4: ULPs—Application Protocols
The layers FC-0 to FC-3 serve to connect end devices together by means of a Fibre Channel network. However, the type of data that end devices exchange via Fibre Channel connections remains open. This is where the application protocols (Upper Layer Protocols, or ULPs) come into play. A specific Fibre Channel network can serve as a medium for several application protocols, for example, SCSI and IP. The task of the FC-4 protocol mappings is to map the application protocols onto the underlying Fibre Channel network. The protocol mappings determine how the mechanisms of Fibre Channel are used in order to realize the application protocol by means of Fibre Channel. For example, they specify which service classes will be used and how the data flow in the application protocol will be projected onto the exchange-sequence-frame mechanism of Fibre Channel. This mapping of existing protocols aims to ease the transition to Fibre Channel networks: ideally, no further modifications are necessary to the operating system except for the installation of a new device driver. This means that the FC-4 protocol mappings support the Application Programming Interface (API) of existing protocols upward in the direction of the operating system and realize these downward in the direction of the medium via the Fibre Channel network.

The application protocol for SCSI is called Fibre Channel Protocol (FCP). (See Figure 6-31.) FCP maps the SCSI protocol onto the underlying Fibre Channel network. For the connection of storage devices to servers, the SCSI cable is therefore replaced by a Fibre Channel network. The idea of the FCP protocol is that the system administrator merely installs a new device driver on the server, and this realizes the FCP protocol. The operating system recognizes storage devices connected via Fibre Channel as SCSI devices, which it addresses like "normal" SCSI devices. This emulation of traditional SCSI devices should make it possible for Fibre Channel SANs to be simply and painlessly integrated into existing hardware and software.

Figure 6-31 FCP Protocol Stack

A further application protocol is IPFC: IPFC uses a Fibre Channel connection between two servers as a medium for IP data traffic, and it defines how IP packets will be transferred via a Fibre Channel network. Fibre Connection (FICON) is a further important application protocol. FICON maps the ESCON protocol (Enterprise System Connection) used in the world of mainframes onto Fibre Channel networks. Using ESCON, it has been possible to realize storage networks in the world of mainframes since the 1990s. Fibre Channel is therefore taking the old familiar storage networks from the world of mainframes into the Open System world (Unix, Windows, OS/400, Novell, MacOS).

Fibre Channel—Standard Port Types
Fibre Channel ports are intelligent interface points on the Fibre Channel SAN. They are found embedded in various devices, such as an I/O adapter, an array or tape controller, and a switched fabric. These ports understand Fibre Channel. When a server or storage device communicates, the interface point acts as the initiator or target for the connection. The server or storage device issues SCSI commands, which the interface point formats to be sent to the target device. The Fibre Channel switch recognizes which type of device is attaching to the SAN and configures the ports accordingly. Figure 6-32 portrays the various Fibre Channel port types.

Figure 6-32 Fibre Channel Standard Port Types

Expansion port (E Port): In E Port mode, an interface functions as a fabric expansion port. This port connects to another E Port to create an interswitch link (ISL) between two switches. E Ports carry frames between switches for configuration and fabric management. They also serve as a conduit between switches for frames that are destined to remote node ports (N Ports) and node loop ports (NL Ports). E Ports support class 2, class 3, and class F service.

Trunking expansion port (TE Port): In TE Port mode, an interface functions as a trunking expansion port. This port connects to another TE Port to create an extended ISL (EISL) between two switches. TE Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of E Ports to support virtual SAN (VSAN) trunking, transport quality of service (QoS) parameters, and the Fibre Channel Traceroute (fctrace) feature. When an interface is in TE Port mode, all frames that are transmitted are in the EISL frame format, which contains VSAN information. Interconnected switches use the VSAN ID to multiplex traffic from one or more VSANs across the same physical link.

Fabric port (F Port): In F Port mode, an interface functions as a fabric port. This port connects to a peripheral device (such as a host or disk) that operates as an N Port. An F Port can be attached to only one N Port. F Ports support class 2 and class 3 service.

Fabric loop port (FL Port): In FL Port mode, an interface functions as a fabric loop port. This port connects to one or more NL Ports (including FL Ports in other switches) to form a public FC-AL. If more than one FL Port is detected on the FC-AL during initialization, only one FL Port becomes operational; the other FL Ports enter nonparticipating mode. FL Ports support class 2 and class 3 service.

Node-proxy port (NP Port): An NP Port is a port on a device that is in N-Port Virtualization (NPV) mode and connects to the core switch via an F Port. NP Ports function like node ports (N Ports), but in addition to providing N Port operations, they also function as proxies for multiple physical N Ports.

Trunking fabric port (TF Port): In TF Port mode, an interface functions as a trunking expansion port. TF Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of F Ports to support VSAN trunking. In TF Port mode, all frames are transmitted in the EISL frame format, which contains VSAN information. This interface connects to another trunking node port (TN Port) or trunking node-proxy port (TNP Port) to create a link between a core switch and an NPV switch or a host bus adapter (HBA) to carry tagged frames.

TNP Port: In TNP Port mode, an interface functions as a trunking expansion port. This interface connects to a TF Port to create a link to a core N-Port ID Virtualization (NPIV) switch from an NPV switch to carry tagged frames.

Switched Port Analyzer (SPAN) destination port (SD Port): In SD Port mode, an interface functions as a SPAN destination port. An SD Port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD Port. SD Ports cannot receive frames; they transmit only a copy of the source traffic. This feature is nonintrusive and does not affect switching of network traffic for any SPAN source port. The SPAN feature is specific to Cisco MDS 9000 Series switches.

SPAN tunnel port (ST Port): In ST Port mode, an interface functions as an entry-point port in the source switch for the Remote SPAN (RSPAN) Fibre Channel tunnel. When a port is configured as an ST Port, it cannot be attached to any device and therefore cannot be used for normal Fibre Channel traffic. ST Port mode and the RSPAN feature are specific to Cisco MDS 9000 Series switches.

Fx Port: An interface that is configured as an Fx Port can operate in either F or FL Port mode. Fx Port mode is determined during interface initialization, depending on the attached N or NL Port.

Bridge port (B Port): Whereas E Ports typically interconnect Fibre Channel switches, some SAN extender devices implement a B Port model to connect geographically dispersed fabrics. This model uses B Ports as described in the T11 Standard Fibre Channel Backbone 2 (FC-BB-2).

G-Port (Generic_Port): Modern Fibre Channel switches configure their ports automatically; such ports are called G-Ports. If, for example, a Fibre Channel switch is connected to a further Fibre Channel switch via a G-Port, the G-Port configures itself as an E-Port.

Auto mode: An interface that is configured in auto mode can operate in F Port, FL Port, E Port, TE Port, or TF Port mode, with the port mode being determined during interface initialization.

Virtual Storage-Area Network
A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

Virtual Storage-Area Network

A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

A SAN island refers to a completely physically isolated switch or group of switches that are used to connect hosts to storage devices. The reasons for building SAN islands include to isolate different applications into their own fabric or to raise availability by minimizing the impact of fabric-wide disruptive events. Physically separate SAN islands offer a higher degree of security because each physical infrastructure contains a separate set of Cisco Fabric Services and management access. Unfortunately, in practice this situation can become costly and wasteful in terms of fabric ports and resources.

VSANs increase the efficiency of a SAN fabric by alleviating the need to build multiple physically isolated fabrics to meet organizational or application needs. Instead, fewer less-costly redundant fabrics can be built, each housing multiple applications and still providing island-like isolation. Spare ports within the fabric can be quickly and nondisruptively assigned to existing VSANs, providing a clean method of virtually growing application-specific SAN islands.

With the introduction of VSANs, the network administrator can build a single topology containing switches, links, and one or more VSANs. Each VSAN in this topology has the same behavior and property of a SAN. A VSAN has the following additional features:

Multiple VSANs can share the same physical topology.
The same Fibre Channel IDs (FC IDs) can be assigned to a host in another VSAN, thus increasing VSAN scalability.
Every instance of a VSAN runs all required protocols, such as FSPF, domain manager, and zoning.
Fabric-related configurations in one VSAN do not affect the associated traffic in another VSAN.
Events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs.

Figure 6-33 portrays VSAN topology and a representation of VSANs on an MDS 9700 Director switch by per port allocation.

Figure 6-33 VSANs

VSANs provide not only hardware-based isolation, but also a complete replicated set of Fibre Channel services for each VSAN. Therefore, when a VSAN is created, a completely separate set of Cisco Fabric Services, configuration management capability, and policies are created within the new VSAN.

VSANs offer the following advantages:

Traffic isolation: Traffic is contained within VSAN boundaries, and devices reside in only one VSAN, ensuring absolute separation between user groups, if desired.
Scalability: VSANs are overlaid on top of a single physical fabric. The ability to create several logical VSAN layers increases the scalability of the SAN.
Per VSAN fabric services: Replication of fabric services on a per VSAN basis provides increased scalability and availability.
Redundancy: Several VSANs created on the same physical SAN ensure redundancy. If one VSAN fails, redundant protection (to another VSAN in the same physical SAN) is configured using a backup path between the host and the device.
Ease of configuration: Users can be added, moved, or changed between VSANs without changing the physical structure of a SAN. Moving a device from one VSAN to another only requires configuration at the port level, not at a physical level.

Membership to a VSAN is based on physical ports, and no physical port may belong to more than one VSAN. Using a hardware-based frame tagging mechanism on VSAN member ports and EISL links isolates each separate virtual fabric. The EISL link type has been created and includes added tagging information for each frame within the fabric. The EISL link is supported between Cisco MDS and Nexus switch products.

Up to 256 VSANs can be configured in a switch. Of these, one is a default VSAN (VSAN 1) and another is an isolated VSAN (VSAN 4094). User-specified VSAN IDs range from 2 to 4093.
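To make this concrete, the following is a minimal sketch of creating a VSAN, assigning a member port, and allowing the VSAN on a trunking EISL; the VSAN IDs and interface numbers are hypothetical:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Engineering
switch(config-vsan-db)# vsan 10 interface fc1/1
! port-based VSAN membership
switch(config-vsan-db)# exit
switch(config)# interface fc1/12
switch(config-if)# switchport mode E
switch(config-if)# switchport trunk mode on
! EISL trunking: frames are tagged so multiple VSANs share the link
switch(config-if)# switchport trunk allowed vsan 10
switch(config-if)# switchport trunk allowed vsan add 20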

Table 6-9 lists the different characteristics of VSANs and zones.

Table 6-9 VSAN and Zone Comparison

Dynamic Port VSAN Membership

Port VSAN membership on the switch is assigned on a port-by-port basis. By default each port belongs to the default VSAN. You can dynamically assign VSAN membership to ports by assigning VSANs based on the device WWN. This method is referred to as Dynamic Port VSAN Membership (DPVM). DPVM offers flexibility and eliminates the need to reconfigure the port VSAN membership to maintain fabric topology when a host or storage device connection is moved between two Cisco MDS switches or two ports within a switch. It retains the configured VSAN regardless of where a device is connected or moved.

DPVM configurations are based on port world wide name (pWWN) and node world wide name (nWWN) assignments. The pWWN identifies the host or device and the nWWN identifies a node consisting of multiple devices. You can assign any one of these identifiers or any combination of these identifiers to configure DPVM mapping. If you assign a combination, preference is given to the pWWN. A DPVM database contains mapping information for each device pWWN/nWWN assignment and the corresponding VSAN. The Cisco NX-OS software checks the database during a device FLOGI and obtains the required VSAN details. DPVM uses the Cisco Fabric Services (CFS) infrastructure to allow efficient database management and distribution.
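A minimal DPVM sketch on NX-OS follows; the pWWN and VSAN ID are placeholders:

switch# configure terminal
switch(config)# feature dpvm
switch(config)# dpvm database
switch(config-dpvm-db)# pwwn 21:00:00:e0:8b:05:44:01 vsan 10
! this device lands in VSAN 10 wherever it logs in
switch(config-dpvm-db)# exit
switch(config)# dpvm activate
switch(config)# dpvm commit
! distributes the DPVM database fabric-wide through CFS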
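The following hedged sketch shows the general shape of an IVR configuration that shares a device between two VSANs; the zone names, pWWNs, and VSAN IDs are hypothetical:

switch# configure terminal
switch(config)# feature ivr
switch(config)# ivr nat
switch(config)# ivr distribute
switch(config)# ivr vsan-topology auto
switch(config)# ivr zone name SHARED_TAPE
switch(config-ivr-zone)# member pwwn 21:00:00:e0:8b:05:44:01 vsan 10
switch(config-ivr-zone)# member pwwn 50:06:01:60:3b:e0:10:5d vsan 20
switch(config)# ivr zoneset name IVR_ZS1
switch(config-ivr-zoneset)# member SHARED_TAPE
switch(config)# ivr zoneset activate name IVR_ZS1
switch(config)# ivr commit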

Fibre Channel Zoning and LUN Masking

VSANs are used to segment the physical fabric into multiple logical fabrics. Zoning provides security within a single fabric, whether physical or logical, to restrict access between initiators and targets (see Figure 6-35). LUN masking is used to provide additional security to LUNs after an initiator has reached the target device.

Figure 6-35 VSANs Versus Zoning

The zoning service within a Fibre Channel fabric is designed to provide security between devices that share the same fabric. The primary goal is to prevent certain devices from accessing other devices within the fabric. With many types of servers and storage devices on the network, the need for security is crucial. For example, if a host were to access a disk that another host is using, potentially with a different operating system, the data on the disk could become corrupted. To avoid any compromise of crucial data within the SAN, zoning allows the user to overlay a security map. This process dictates which devices, namely hosts, can see which targets, reducing the risk of data loss.

There are two main methods of zoning, hard and soft. Soft zoning restricts only the fabric name service, to show only an allowed subset of devices. The fabric name service allows each device to query the addresses of all other devices. When a server looks at the content of the fabric, it will see only the devices it is allowed to see. However, any server can still attempt to contact any device on the network by address. In this way, soft zoning is similar to the computing concept of security through obscurity. In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but is much more secure. More recently, the differences between the two have blurred: modern switches employ hard zoning when you implement soft. That stated, all modern SAN switches then enforce soft zoning in hardware. Cisco MDS 9000 and Cisco Nexus family switches implement hard zoning. Advanced zoning capabilities specified in the FC-GS-4 and FC-SW-3 standards are provided. You can use either the existing basic zoning capabilities or the advanced (enhanced), standards-compliant zoning capabilities.

Zoning does have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. Zoning was also not designed to address availability or scalability of a Fibre Channel infrastructure.

Zoning has the following features:

A zone consists of multiple zone members. Members in a zone can access each other; members in different zones cannot access each other. Zones can vary in size. Devices can belong to more than one zone.

A zone set consists of one or more zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. A zone can be a member of more than one zone set. A Cisco MDS switch can have a maximum of 500 zone sets.

Zoning can be administered from any switch in the fabric. It is a distributed service that is common throughout the fabric. When you activate a zone (from any switch), all switches in the fabric receive the active zone set. Additionally, full zone sets are distributed to all switches in the fabric, if this feature is enabled in the source switch. If a new switch is added to an existing fabric, zone sets are acquired by the new switch.

Zone changes can be configured nondisruptively. New zones and zone sets can be activated without interrupting traffic on unaffected ports or devices.

If zoning is activated, any device that is not in an active zone (a zone that is part of an active zone set) is a member of the default zone. If zoning is not activated, all devices are members of the default zone.

Zone membership criteria is based mainly on WWNs or FC IDs:

Port world wide name (pWWN): Specifies the pWWN of an N port attached to the switch as a member of the zone.
Fabric pWWN: Specifies the WWN of the fabric port (switch port's WWN). This membership is also referred to as port-based zoning.
FC ID: Specifies the FC ID of an N port attached to the switch as a member of the zone.
Interface and switch WWN (sWWN): Specifies the interface of a switch identified by the sWWN. This membership is also referred to as interface-based zoning.
Interface and domain ID: Specifies the interface of a switch identified by the domain ID.
Domain ID and port number: Specifies the domain ID of an MDS domain and additionally specifies a port belonging to a non-Cisco switch.
IPv4 address: Specifies the IPv4 address (and optionally the subnet mask) of an attached device.
IPv6 address: The IPv6 address of an attached device in 128 bits in colon-separated hexadecimal format.
Symbolic nodename: Specifies the member symbolic node name. The maximum length is 240 characters.
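A minimal zoning sketch using pWWN members follows; the zone names and pWWNs are placeholders:

switch# configure terminal
switch(config)# zone name APP1_HOST_TO_ARRAY vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:05:44:01
! initiator (host HBA)
switch(config-zone)# member pwwn 50:06:01:60:3b:e0:10:5d
! target (array port)
switch(config-zone)# exit
switch(config)# zoneset name ZS_VSAN10 vsan 10
switch(config-zoneset)# member APP1_HOST_TO_ARRAY
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_VSAN10 vsan 10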

Default zone membership includes all ports or WWNs that do not have a specific membership association. Access between default zone members is controlled by the default zone policy.

The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities and the enhanced zoning functionalities. Both features are explained in detail in Chapter 9, "Fibre Channel Storage Area Networking."

The following guidelines must be considered when creating zone members:

Configuring only one initiator and one target for a zone provides the most efficient use of the switch resources. All members of a zone can communicate with each other; for a zone with N members, N x (N - 1) access permissions need to be enabled.
Configuring the same initiator to multiple targets is accepted.
Configuring multiple initiators to multiple targets is not recommended. This type of configuration wastes switch resources by provisioning and managing many communicating pairs (initiator-to-initiator or target-to-target) that will never actually communicate with each other. The best practice is to avoid configuring large numbers of targets or large numbers of initiators in a single zone. For this reason, a single initiator with a single target is the most efficient approach to zoning.

LUN masking is the most common method of ensuring LUN security. LUN masking is essentially a mapping table inside the front-end array controllers. LUN masking determines which LUNs are to be advertised through which storage array ports, how many LUNs are to be visible, and which host is allowed to own which LUNs.

Each SCSI host uses a number of primary SCSI commands to identify its target, discover LUNs, and obtain their size. These are the commands:

Identify: Which device are you?
Report LUNs: How many LUNs are behind this storage array port?
Report capacity: What is the capacity of each LUN?
Request sense: Is a LUN online and available?

It is important to ensure that only one host accesses each LUN on the storage array at a time. LUNs can be advertised on many storage ports and discovered by several hosts at the same time. As the host mounts each LUN volume, it writes a signature at the start of the LUN to claim exclusive access. If a second host should discover and try to mount the same LUN volume, it overwrites the previous signature, and the data on the disk could become corrupted. When there are many hosts, mistakes are more likely to be made and more than one host might access the same LUN by accident, unless the hosts are configured in a cluster. LUN masking ensures that only one host can access a LUN; all other hosts are masked out.

An alternative method of LUN security is LUN mapping. If LUN masking is unavailable in the storage array, LUN mapping can be used, although both methods can be used concurrently. LUN mapping is configured in the HBA of each host, but it is the responsibility of the administrator to configure each HBA so that each host has exclusive access to its LUNs, to ensure that only one host at a time can access each LUN.

Device Aliases Versus Zone-Based (FC) Aliases

When the port WWN (pWWN) of a device must be specified to configure different features (zoning, QoS, port security) in a Cisco MDS 9000 family switch, you must assign the correct device name each time you configure these features. An incorrect device name may cause unexpected results. You can avoid this problem if you define a user-friendly name for a port WWN and use this name in the entire configuration commands as required. These user-friendly names are referred to as device aliases. Table 6-10 lists the different characteristics of zone aliases and device aliases.

Table 6-10 Comparison Between Zone Aliases and Device Aliases

Device alias supports two modes: basic and enhanced. When you configure the basic mode using device aliases, the application immediately expands to pWWNs. This operation continues until the mode is changed to enhanced. When device alias runs in the enhanced mode, all applications accept the device-alias configuration in the native format. The applications store the device alias name in the configuration and distribute it in the device alias format instead of expanding to pWWN. The applications track the device alias database changes and take actions to enforce it.
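A short device-alias sketch follows (the alias name and pWWN are placeholders); once committed, the alias can be used in place of the pWWN in zoning and other commands:

switch# configure terminal
switch(config)# device-alias mode enhanced
switch(config)# device-alias database
switch(config-device-alias-db)# device-alias name HostA-HBA1 pwwn 21:00:00:e0:8b:05:44:01
switch(config-device-alias-db)# exit
switch(config)# device-alias commit
! CFS distributes the device-alias database to the fabric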

Reference List

Storage Network Industry Association (SNIA): http://www.snia.org
SNIA dictionary: http://www.snia.org/education/dictionary
Internet Engineering Task Force—IP Storage: http://www.ietf.org/html.charters/ips-charter.html
ANSI T10 Technical Committee: http://www.t10.org
ANSI T11—Fibre Channel: http://www.t11.org
IEEE Standards site: http://standards.ieee.org
Farley, Marc. Rethinking Enterprise Storage: A Hybrid Cloud Model. Microsoft Press, 2013.
Kembel, Robert W. Fibre Channel: A Comprehensive Introduction. Northwest Learning Associates, Inc., 2000.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 6-11 lists a reference for these key topics and the page numbers on which each is found.

Table 6-11 Key Topics for Chapter 6

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

virtual storage-area network (VSAN)
zoning
fan-out ratio
fan-in ratio
logical unit number (LUN) masking
LUN mapping
Extended Link Services (ELS)
multipathing
Big Data
IoE
N_Port Login (PLOGI)
F_Port Login (FLOGI)
logout (LOGO)
process login (PRLI)
process logout (PRLO)
state change notification (SCN)
registered state change notification (RSCN)
state change registration (SCR)
Fibre Channel Generic Services (FC-GS)
hard disk drive (HDD)
solid-state drive (SSD)
Small Computer System Interface (SCSI)
initiator
target
American National Standards Institute (ANSI)
International Committee for Information Technology Standards (INCITS)
Fibre Connection (FICON)
T11
Input/output Operations per Second (IOPS)
Common Internet File System (CIFS)
Network File System (NFS)
Cisco Fabric Services (CFS)
Top-of-Rack (ToR)
End-of-Row (EoR)
Dynamic Port VSAN Membership (DPVM)

inter-VSAN routing (IVR)
Fabric Shortest Path First (FSPF)

Chapter 7. Cisco MDS Product Family

This chapter covers the following exam topics:

Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management

Evolving business requirements highlight the need for high-density, high-speed, and low-latency networks that improve data center scalability and manageability while controlling IT costs. The Cisco MDS 9000 family provides the leading high-density, high-bandwidth storage networking solution along with integrated fabric applications to support dynamic data center requirements. A quick look at the storage-area network (SAN) marketplace and the track record of delivery reveals that Cisco is the leader among SAN providers in delivering compatible architectures and designs that can adapt to future changes in technology.

One major benefit of the Cisco MDS 9000 family architecture is investment protection: the capability of first-, second-, and third-generation modules to all coexist in both existing customer chassis and new switch configurations. With the addition of the fourth-generation modules, the Cisco MDS 9000 family of storage networking products now supports 1-, 2-, 4-, 8-, 16-, and 10-Gbps Fibre Channel along with being Fibre Channel over Ethernet (FCoE) ready, providing true investment protection.

This chapter goes directly into the key concepts of the Cisco Storage Networking product family and discusses topics relevant to "Introducing Cisco Data Center Technologies" (DCICT) certification. In this chapter, the focus is on the current generation of modules and storage services solutions. This chapter discusses the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes." Table 7-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions.

Table 7-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following features are NOT supported by Cisco MDS switches?
a. Multiprotocol support
b. High availability for fabric modules
c. SAN extension
d. onePK API
e. 32 Gbps Fibre Channel

2. What is head-of-line blocking?
a. Congestion on one port causes blocking of traffic to other ports.
b. It is an authentication mechanism for Fibre Channel frames.
c. It is used for a forwarding mechanism in Cisco MDS switches.
d. It is used for high availability for fabric modules.

3. Which of the following Cisco SAN switches are director model?
a. MDS 9506
b. MDS 9520
c. MDS 9709
d. MDS 9704

4. How many crossbar switching fabric modules exist on a Cisco MDS 9706 switch?
a. 8
b. 6
c. 4
d. 2

5. Which of the following MDS switches are qualified for IBM Fiber Connection (FICON)?
a. MDS 9148S
b. MDS 9706
c. MDS 9222i
d. MDS 9513

6. Which of the following Cisco MDS fabric switches support Fibre Channel over IP (FCIP) and I/O Accelerator (IOA)?
a. MDS 9250i
b. MDS 9222i
c. MDS 9148S
d. MDS 9148

7. Which protocol or protocols does the Cisco MDS 9250i fabric switch NOT support?
a. Fibre Channel over Ethernet
b. IP over Fibre Channel
c. Fibre Channel over IP
d. Virtual Extensible LAN
e. IBM Fiber Connection

8. What is N-port identifier virtualization (NPIV) in the Fibre Channel world?
a. It is an industry standard that reduces the number of Fibre Channel domain IDs in core-edge SANs.
b. It is a new Fibre Channel login method that is used for storage arrays.
c. It is a high-availability feature that allows managing the Fibre Channel traffic load.
d. It is an industry standard that allows multiple N-port fabric login concurrently on a single physical Fibre Channel link.

9. What is Cisco Data Mobility Manager (DMM)?
a. It is an intelligent fabric application that does data migration between heterogeneous disk arrays.
b. It is an intelligent fabric application that provides SCSI acceleration.
c. It is an intelligent fabric application that provides Fibre Channel link encryption.
d. It is a mainframe-based replication solution.

10. Which of the following options are fundamental components of Cisco Prime DCNM SAN?
a. DCNM-SAN Server
b. DCNM-SAN Client
c. Performance Manager
d. DCNM Data Mobility Manager
e. DCNM traffic accelerator

Foundation Topics

Once Upon a Time There Was a Company Called Andiamo

In August 2002 Cisco Systems officially entered the storage networking business with the spin-in acquisition of Andiamo Systems. Andiamo means "Let's go" in Italian, and Cisco entered the storage business by saying "Andiamo." The MDS 9000 Multilayer Director switch family is composed of fabric and director-class Fibre Channel switches. These devices are capable of deploying highly available and scalable storage-area networks (SAN) for the most demanding data center environments.

Figure 7-1 portrays the available Cisco MDS 9000 switches (at the time of writing).

Figure 7-1 Cisco MDS 9000 Product Family

In December 2002, Cisco set the benchmark for high-performance Fibre Channel Director switches with the release of the Cisco MDS 9509 Multilayer Director. Offering a maximum system density of 224 1/2-Gbps Fibre Channel ports, first-generation Cisco MDS 9000 family line cards connected to dual crossbar switch fabrics with an aggregate of 80 Gbps full-duplex bandwidth per slot, resulting in a total system switching capacity of 1.2 Tbps.

In 2006, Cisco again set the benchmark for high-performance Fibre Channel Director switches. With the introduction of second-generation line cards and the flagship Cisco MDS 9513 Multilayer Director, system density grew to a maximum of 528 Fibre Channel ports operating at 1/2/4 Gbps or 44 10-Gbps Fibre Channel ports, increasing interface bandwidth per slot to 96 Gbps full duplex and resulting in a total system switching capacity of 2.44 Tbps.

At the time of writing this chapter, the available line cards for the MDS platform have reached the fourth and fifth generations, and the earlier generation line cards already became end of sale (EOS), so in this chapter the focus will not be on first-, second-, and third-generation line cards. On the other hand, the earlier generation line cards can still work on MDS 9500 Director switches together with the latest line cards.

The Cisco approach to Fibre Channel storage-area networking reflected its strong heritage in data networking, namely the concept of building a modular chassis that could be upgraded to incorporate future services without ripping and replacing the original chassis. Following a 10-year run with the initial MDS platform, in 2012 Cisco introduced its next generation of Fibre Channel platforms, starting with the MDS 9710 Multilayer Director and MDS 9250i Multiservice Fabric Switch. These new MDS solutions provide the following:

Enhanced availability: Built on the experience from the MDS 9500 and the Nexus product line, the MDS 9710 provides enhanced reliability, including N+1 fabric modules and three fan trays with front-to-back airflow. The N+1 concept allows eight power supplies and up to six fabric modules.

High-density, high-performance storage director: To handle the rapid scale accompanied by data center consolidation, the MDS 9710 crossbar and central arbiter design enable up to 1.5 Tbps of FC throughput per slot. This is coupled with the 384 ports of line rate 16 Gbps FC.

Multiprotocol support: The MDS 9710 Multilayer Director supports a range of Fibre Channel speeds and 10 Gbps Fiber Channel over Ethernet (FCoE). Being considered for future development are 32 Gbps Fibre Channel and 40 Gbps Fibre Channel over Ethernet (FCoE). The MDS 9250i Multiservice Fabric Switch accommodates 16 Gbps FC, 10 Gbps FCoE, 10 Gbps Fibre Channel over IP (FCIP), IBM Fiber Connection (FICON for mainframe connectivity), or Internet Small Computer System Interface (iSCSI) and FICON. The MDS 9250i has been designed to help accelerate SAN extension, tape backup, and disk replication as well as migrate data between arrays or data centers.

Future proofing: The MDS 9710 will support the Cisco Open Network Environment (ONE) Platform Kit (onePK) API to enable future services access and programmability, essentially extending the concept of Software Defined Networks (SDN) into the SAN.

Comprehensive and simplified management of the entire data center network: Organizations looking to simplify management and integrate previously disparate networking, storage, and data teams should find the Cisco Prime Data Center Network Manager (DCNM) appealing. From a single management tool, they can manage Cisco MDS and Nexus platforms. The latter allows for tighter integration with the server and network admins. A single resource should help to accelerate troubleshooting with easy-to-understand dynamic topology views and the ability to monitor the health of all the devices and filters on events that occur. Organizations should also be able to leverage and report on the data captured and plan for additional capacity more effectively. DCNM is also integrated with VMware vSphere to provide a more comprehensive VM-to-storage visibility.

The Secret Sauce of Cisco MDS: Architecture

All Cisco MDS 9500 Series Director switches are based on the same underlying crossbar architecture. Frame forwarding logic is distributed in application-specific integrated circuits (ASIC) on the line cards themselves, resulting in a distributed forwarding architecture. All frame forwarding is performed in dedicated forwarding hardware and distributed across all line cards in the system. There is never any requirement for frames to be forwarded to the supervisor for centralized forwarding, nor is there ever any requirement for forwarding to drop back into software for processing.

Forwarding of frames on the Cisco MDS always follows the same processing sequence:

1. Starting with frame error checking on ingress, the frame is tagged with its ingress port and virtual storage-area networking (VSAN).
2. A lookup is issued to the forwarding and frame routing tables to determine where to switch the frame to and whether to route the frame to a new VSAN and/or rewrite the source/destination of the frame if Inter VSAN Routing (IVR) is being used. IVR is an MDS NX-OS software feature that is similar to Network Address Translation (NAT) in the IP world. It is used for Fibre Channel NAT, enabling the switch to handle duplicate addresses within different fabrics separated by VSANs.
3. Input access control lists (ACL) are used to enforce hard zoning.
4. If there are multiple equal-cost paths for a given destination device, the switch will choose an egress port based on the load-balancing policy configured for that particular VSAN.
5. If the destination port is congested or busy, the frame will be queued on the ingress line card for delivery.
6. When a frame is scheduled for transmission, it is transferred across the crossbar switch fabric to the egress line card, where there is a further output access control list (ACL), the VSAN tag used internal to the switch is stripped from the front of the frame, and the frame is transmitted out the output port.

Frame switching uses a technique called virtual output queuing (VOQ). Combined with centralized arbitration, this ensures that when multiple input ports on multiple line cards are contending for a congested output port, there is complete fairness between ports regardless of port location within the switch. It also ensures that congestion on one port does not result in blocking of traffic to other ports (head-of-line blocking). If priority is desired for a given set of traffic flows, QoS and CoS can be used to give priority to those traffic flows.

Depending on the MDS Director switch model, the number of crossbar switch fabrics that are installed might change. Crossbar capacity is provided to each line card slot as a small number of high-bandwidth channels. One active channel and one standby channel are used on each of the crossbar switch fabrics, resulting in both crossbars being active (1:1 redundancy). In the event of a crossbar failure, the standby channel on the remaining crossbar becomes active, resulting in an identical amount of active crossbar switching capacity regardless of whether the system is operating with one or two crossbar switch fabrics. This provides significant flexibility for line card module design and architecture and makes it possible to build both performance-optimized (nonblocking, non-oversubscribed) line cards and host-optimized (nonblocking, oversubscribed), multiprotocol (Small Computer System Interface over IP [iSCSI] and Fibre Channel over IP [FCIP]), and intelligent (storage services) line cards. Figure 7-2 portrays the Central Arbitration in MDS 9000 switches.

Figure 7-2 Crossbar Switch with Central Arbitration

A Day in the Life of a Fibre Channel Frame in MDS 9000

All Cisco MDS 9000 family line cards use the same process for frame forwarding. The various frame-processing stages are outlined in detail in this section (see Figure 7-3).

Figure 7-3 Cisco MDS 9000 Series Switches Packet Flow

1. Ingress PHY/MAC Modules: Fibre Channel frames enter the switch front-panel optical Fibre Channel ports through Small Form-Factor Pluggable (SFP) optics for Fibre Channel interfaces or X2 transceiver optical modules for 10 Gbps Fibre Channel interfaces. Each front-panel port has an associated physical layer (PHY) device port and a media access controller (MAC). On ingress, the PHY converts optical signals received into electrical signals, sending the electrical stream of bits into the MAC. The primary function of the MAC is to decipher Fibre Channel frames from the incoming bit stream by identifying start-of-frame (SoF) and end-of-frame (EoF) markers in the bit stream, as well as other Fibre Channel signaling primitives.

Along with frames being received, the MAC communicates with the forwarding and queuing modules and issues return buffer-credits (R_RDY primitives) to the sending device to recredit the received frames. If ingress rate limiting has been enabled, return buffer credits are paced to match the configured throughput on the port. The MAC also checks that the received frame contains no errors by validating its cyclic redundancy check (CRC). Before forwarding the frame, the MAC prepends an internal switch header onto the frame, providing the forwarding module with details such as ingress port, ingress VSAN, frame QoS markings (if the ingress port is set to trust the sender, blank if not), and a timestamp of when the frame entered the switch. The concept of an internal switch header is an important architectural element of what enables the multiprotocol and multitransport capabilities of Cisco MDS 9000 family switches.

2. Ingress Forwarding Module: The ingress forwarding module determines which output port on the switch to send the ingress frame to, based on the input port, type of port, VSAN, source address, destination address, and a variety of other fields from the interswitch header and Fibre Channel frame header. When a frame is received by the ingress forwarding module, three simultaneous lookups are initiated, followed by forwarding logic based on the result of those lookups:

The first of these is a per-VSAN forwarding table lookup determined by VSAN and the destination address. The result from this lookup tells the forwarding module where to switch the frame to (egress port). This table lookup also indicates whether there is a requirement for any Inter VSAN routing (IVR—in simple terms, Fibre Channel NAT). If the lookup fails, the frame is dropped because there is no destination to forward to.

The second lookup is a statistics lookup. The switch uses this to maintain a series of statistics about what devices are communicating between one another. Statistics gathered include frame and byte counters from a given source to a given destination.

The third lookup is a per-VSAN ingress ACL lookup determined by VSAN, ingress port, and the destination address within the Fibre Channel frame. The switch uses the result from this lookup to either permit the frame to be forwarded, drop the frame, or perform any additional inspection on the frame. This also forms the basis for the Cisco implementation of hard zoning within Fibre Channel.

If traffic from a given source address to a given destination address is marked for IVR, the final forwarding step is to rewrite the VSAN ID and optionally the source and destination addresses (S_ID, D_ID) of the frame.

If the frame has multiple possible egress ports (for example, if there are multiple equal-cost Fabric Shortest Path First [FSPF] routes or the destination is a Cisco port channel bundle), a load-balancing decision is made to choose a single physical egress interface from a set of interfaces. The load-balancing policy (and algorithm) can be configured on a per-VSAN basis to be either a hash of the source and destination addresses (S_ID, D_ID) or a hash also based on the exchange ID (S_ID, D_ID, OX_ID) of the frame. In this manner, all frames within the same flow (either between a single source to a single destination or within a single SCSI I/O operation) will always be forwarded on the same physical path, guaranteeing in-order delivery.
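The per-VSAN load-balancing policy described here maps to a single NX-OS setting. A hedged sketch follows (the VSAN ID is illustrative):

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 loadbalancing src-dst-id
! flow-based hashing on S_ID/D_ID only
switch(config-vsan-db)# vsan 10 loadbalancing src-dst-ox-id
! the default: exchange-based hashing on S_ID/D_ID/OX_ID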
and a timestamp of when the frame entered the switch. Return buffer credits are issued if the ingress-queuing buffers have not exceeded an internal threshold.

Point A is causing a head-of-line blocking. Note Head-of-line blocking is the congestion of one port resulting in blocking traffic to other ports.Figure 7-4 Queuing Module Logic It uses distributed virtual output queuing (VOQ) to prevent head-of-line blocking and is the functional block responsible for implementing QoS and fairness between frames contending for a congested output port. In the Fibre Channel world. head-of-line blocking is generally solved by “dropping” frames. In the Ethernet world. All the packets sent from the source device to all the destination points are held up because the destination point A cannot accept any packet. which in this picture is point A. if there is congestion the ingress port is blocked. So head-ofline blocking is less of an issue in the Ethernet world. Figure 7-5 shows a congested destination. Figure 7-5 Head-of-Line Blocking .

Unicast Fibre Channel frames (frames with only a single destination) are queued in one virtual output queue. 30 for the medium queue. the DWRR algorithm keeps track of this excess usage and subtracts it from the queue’s next transmission bandwidth allocation. The default weights are 50 for the high queue. DWRR is a scheduling algorithm that is used for congestion management in the network scheduler. Frames enter the queuing module and are queued in a VOQ based on the output port to which the frame is due to be sent (see Figure 7-6). however. It monitors utilization in the transmit queues. Frames queued in the three DWRR queues are scheduled using whatever relative weightings are configured. these can be set to any ratio. Central Arbitration Module: The central arbiters are responsible for scheduling frame transmission from ingress to egress. corresponding to four QoS levels per port: one priority queue and three queues using deficit-weighted round robin (DWRR) scheduling. If a queue uses excess bandwidth. Figure 7-7 Distributed Virtual Output Queuing 4a. and 20 for the low queue. Frames queued in the priority queue will be scheduled before traffic in any of the DWRR queues. Figure 7-7 portrays the distributed virtual output queuing. granting requests to transmit from ingress forwarding .The queuing module is also responsible for providing frame buffering for ingress queuing if an output interface is congested. Figure 7-6 Virtual Output Queues Four virtual output queues are available for each output port. Multicast Fibre Channel frames are replicated across multiple virtual output queues for each output port to which a multicast frame is destined to be forwarded.

that is. This also helps ensure that any blocking does not spread throughout the system. the arbiter will grant transmission to the output port in a fair (round-robin) manner. Because switch-wide arbitration is operating in a centralized manner.modules whenever there is an empty slot in the output port buffers. In this manner. Anytime a frame is to be transferred from ingress to egress. and each QoS level for each output port has its own virtual output queue. If the primary arbiter fails. in the event of congestion on an output port. The crossbar itself operates three times over speed. Two redundant central arbiters are available on each Cisco MDS Director switch. with the redundant arbiter snooping all requests issued at and grants offered by the primary active arbiter. nonblocking. high-throughput. the redundant arbiter can resume operation without any loss of frames or performance or suffer any delays caused by switchover. Rather than having to arbitrate each frame independently. it operates internally at a rate three times faster than its endpoints. Congestion does not result in head-of-line blocking because queuing at ingress utilizes virtual output queues. If there is an empty slot in the output port buffer. . the originating line card will issue a request to the central arbiters asking for a scheduling slot on the output port of the output line card. and nonoversubscribed switching capacity between line card modules. Another feature to maximize crossbar performance under peak frame rate is a feature called superframing. If there is no free buffer space. (Figure 7-8 portrays the crossbar switch architecture. Crossbar Switch Fabric Module: Dual redundant crossbar switch fabrics are used on all Director switches to provide low-latency. 4b. the arbiter will wait until there is a free buffer. Superframing improves the crossbar efficiency in the situation where there are multiple frames queued originating from one line card with a common destination. any congestion on an output port will result in distributed buffering across ingress ports and line cards. fairness is also guaranteed.) The over speed helps ensure that the crossbar remains nonblocking even while sustaining peak traffic levels. They are capable of scheduling more than a billion frames per second. the active central arbiter will respond to the request with a grant. multiple frames can be transferred back to back across the crossbar.

and outbound QoS queuing. 5. Figure 7-9 Egress Frame Forwarding Logic Before the frame arrives. When a frame arrives at the egressforwarding module from the crossbar switch fabric. if the frame requires IVR. Figure 7-9 portrays it in a flow diagram. Egress Forwarding Module: The primary role of the egress forwarding module is to perform some additional frame checks. the first processing step is to validate that the frame is error free and has a valid CRC. and enforce some additional ACL checks (logical unit number [LUN] zoning and read-only zoning if configured). There is also additional Inter VSAN routing IVR frame processing. the egress-forwarding module will issue an ACL table lookup to see if any . If the frame is valid. the egress-forwarding module has signaled the central arbiter that there is output buffer space available for receiving frames. make sure that the Fibre Channel frame is valid.Figure 7-8 Centralized Crossbar Architecture Crossbar switch fabric modules depend on clock modules for crossbar channel synchronization.

Next. the frame is queued for transmission to the destination port MAC with queuing on a per-CoS basis matching the DWRR queuing and configured QoS policy map. If the output port is a Trunked E_Port (TE_Port). Egress PHY/MAC Modules: Frames arriving into the egress PHY/MAC module first have their switch internal header removed.additional ACLs need to be applied in permitting or denying the frame to flow to its destination. Figure 7-10 portrays how the in-order delivery frames are delivered in order. it is very unlikely for any frames to be expired. The default drop timer is 500 milliseconds and can be tuned using the fcdroplatency switch configuration setting. an Enhanced ISL (EISL) header is prepended to the frame. Under normal operating conditions. Note that 500 milliseconds is an extremely long time to buffer a frame and typically occurs only when long. Finally. The next processing step is to finalize any Fibre Channel frame header rewrites associated with IVR. . Dropping of expired frames is in accordance with Fibre Channel standards and sets an upper limit for how long a frame can be queued within a single switch. ACLs applied on egress include LUN zoning and read-only zoning ACLs. and if the frame has been queued within the switch for too long it is dropped. the frame time stamp is checked. 6. slow WAN links subject to packet loss are combined with a number of Fibre Channel sources.

This guarantee extends across an entire multiswitch fabric. In this case. when a topology change occurs (for example. Unfortunately. the frames are transmitted onto the cable of the output port. The outbound MAC is responsible for formatting the frame into the correct encoding over the cable. inserting SoF and EoF markers. frames might be delivered out of order. shifting traffic from what might have been a congested link to one that is no longer congested. . provided the fabric is stable and no topology changes are occurring. a link failure occurs). Reordering of frames can occur when routing changes are instantiated.Figure 7-10 In-Order Delivery Finally. and inserting other Fibre Channel primitives over the cable. Lossless Networkwide In-Order Delivery Guarantee The Cisco MDS switch architecture guarantees that frames can never be reordered within a switch. newer frames transmitted down a new non-congested path might pass older frames queued in the previous congested path.

all SAN switch vendors default to disable network-IOD except where its use is mandated by channel protocols such as IBM Fiber Connection (FICON). This stops the forwarding of new frames when there has been an ISL failure or a topology change. The general idea here is that it is better to stop all traffic for a period of time than to allow the possibility for some traffic to arrive out of order. providing a modular and sustainable base for the long term. manageability. isolated environments to improve Fibre Channel SAN scalability. For mainframe environments. and standard Linux core.1 it is rebranded as Cisco MDS 9000 NX-OS Software. VSANs facilitate true hardware-based separation of FICON and open systems. Virtual SANs Virtual SAN (VSAN) technology partitions a single physical SAN into multiple VSANs. Native iSCSI support in the Cisco MDS 9000 family helps customers consolidate storage for a wide range of servers into a common pool on the SAN. Common Software Across All Platforms Cisco MDS 9000 NX-OS runs on all Cisco MDS 9000 family switches and Cisco Nexus family of data center Ethernet switches. and security. Cisco MDS Software and Storage Services The foundation of the data center network is the network software that runs the switches in the network. This partitioning of fabric services greatly reduces network instability by containing fabric reconfiguration settings and error conditions within an individual VSAN. FCoE. VSAN capabilities enable Cisco MDS 9000 NX-OS to logically divide a large physical fabric into separate. It is based on a secure. No new forwarding decisions are made during the time it takes for traffic to flush from existing interface queues.To protect against this. availability. Small Computer System Interface over IP (iSCSI). Formerly known as Cisco SAN-OS. Flexibility and Scalability Cisco MDS 9000 NX-OS is a highly flexible and scalable platform for enterprise SANs. starting from release 4. all Fibre Channel switch vendors have a fabricwide configuration setting called a networkwide in-order delivery (network-IOD) guarantee. stable. This mechanism is a little crude because it guarantees that an ISL failure or topology change will be a disruptive event to the entire fabric. The strict traffic segregation provided by VSANs helps ensure that the control and data traffic of a given VSAN are confined within the VSAN’s own domain. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. and Fibre Channel over IP (FCIP) in a single platform. Cisco MDS 9000 NX-OS supports IBM Fibre Connection (FICON). Each VSAN is a logically and functionally separate SAN with its own set of Fibre Channel fabric services. Multiprotocol Support In addition to supporting Fibre Channel Protocol (FCP). Users can create SAN administrator roles that are limited in scope . Because of this.

Cisco DMM can be introduced without the need to rewire or reconfigure the SAN. VSANs are supported across FCIP links between SANs. Network-Assisted Applications Cisco MDS 9000 family network-assisted storage applications offer deployment flexibility and investment protection by allowing the deployment of appliance-based storage applications for any server or storage device in the SAN without the need to rewire SAN switches or end devices. F-port trunking allows multiple VSANs on a single uplink in N-port virtualization (NPV) mode. For example. continuous data protection. IVR also can be used with FCIP to create more efficient business-continuity and disaster-recovery solutions. The Cisco MDS 9000 family also implements trunking for VSANs. Cisco DCNM-SAN is used to administer Cisco DMM. and other roles can be set up to allow configuration and management within specific VSANs only. snapshots. extending VSANs to include devices at remote locations. provide flexibility and meet the high-availability requirements of enterprise data centers. Cisco DMM offers rate-adjusted online migration to enable applications to continue uninterrupted operation while data migration is in progress. nor can initiators access any resources aside from the ones designated with IVR. data migration. Cisco DMM is transparent to host applications and storage devices. data acceleration. Valuable resources such as tape libraries can be easily shared without compromise. Advanced capabilities. This approach improves the manageability of large SANs and reduces disruptions resulting from human errors by isolating the effect of a SAN administrator’s action to a specific VSAN whose membership can be isolated based on the switch ports or World Wide Names (WWN) of attached devices. Trunking allows Inter-Switch Links (ISL) to carry traffic for multiple VSANs on the same physical link. I/O Accelerator The Cisco MDS 9000 I/O Accelerator (IOA) feature is a SAN-based intelligent fabric application that provides SCSI acceleration to significantly improve the number of SCSI I/O operations per second (IOPS) over long distances in a Fibre Channel or FCIP SAN by reducing the effect of transport latency on the processing of each operation. and multipath supports. The feature also extends the distance for . a SAN administrator role can be set up to allow configuration of all platform-specific capabilities. no additional management software is required. Fibre Channel control traffic does not flow between VSANs. Cisco Data Mobility Manager Cisco Data Mobility Manager (DMM) is a SAN-based. Easy insertion and provisioning of appliance-based storage applications is achieved by removing the need to insert the appliance into the primary I/O path between servers and storage. Inter-VSAN (IVR) routing is used to transport data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. and replication on Cisco MDS 9000 family switches. intelligent fabric application offering data migration between heterogeneous disk arrays. Intelligent Fabric Applications Cisco MDS 9000 NX-OS forms a solid foundation for delivery of network-based storage applications and services such as virtualization.to certain VSANs. such as data verification. unequal-size logical unit number (LUN) migration.

Intuitive provisioning: IOA can be easily provisioned using Cisco DCNM-SAN. known as the System Data Mover (SDM). Cisco has supported XRC over FCIP at distances of up to 124 miles (200 km). reducing or eliminating the latency effects that can otherwise reduce performance at distances of 124 miles (200 km) or greater. IOA can be deployed with disk data-replication solutions such as EMC Symmetrix Remote Data Facility (SRDF). IOA includes the following features: Transport independence: IOA provides a unified solution to accelerate I/O operations over the MAN and WAN. Transparent insertion: IOA requires no fabric reconfiguration or rewiring and can be transparently turned on by enabling the IOA license. In . Write acceleration significantly reduces latency and extends the distance for disk replication. This process is sometimes referred to as XRC emulation or XRC extension.and FCIP-based business-continuity and disaster-recovery solutions without the need to add or manage a separate device. EMC MirrorView. Tape acceleration improves the performance of tape devices and enables remote tape vaulting over extended distances for data backup for disaster-recovery purposes. This data is buffered within the Cisco MDS 9000 family module that is local to the SDM. In the past. IOA can also be used to enable remote tape backup and restore operations without significant throughput degradation. which is now officially renamed IBM z/OS Global Mirror. XRC Acceleration accelerates dynamic updates from the primary to the secondary direct-access storage device (DASD) by reading ahead of the remote replication IBM System z. High availability and resiliency: IOA combines port channels and equal-cost multipath (ECMP) routing with disk and tape acceleration for higher availability and resiliency. and HDS TrueCopy to extend the distance between data centers or reduce the effects of latency. IOA as a fabric service: IOA service units (interfaces) can be located anywhere in the fabric and can provide acceleration service to any port. Tape acceleration: IOA provides tape acceleration for Fibre Channel and FCIP networks. Network Security Cisco takes a comprehensive approach to network security with Cisco MDS 9000 NX-OS. Write acceleration: IOA provides write acceleration for Fibre Channel and FCIP networks. Service clustering: IOA delivers redundancy and load balancing for I/O acceleration. is a mainframe-based software replication solution in widespread use in financial institutions worldwide. Compression: Compression in IOA increases the effective MAN and WAN bandwidth without the need for costly infrastructure upgrades. The new Cisco MDS 9000 XRC Acceleration feature supports essentially unlimited distances. XRC Acceleration IBM Extended Remote Copy (XRC). Integration of data compression into IOA enables implementation of more efficient Fibre Channel.disaster-recovery and business-continuity applications over WANs and metropolitan-area networks (MAN). Speed independence: IOA can accelerate 1/2/4/8/10/16-Gbps links and consolidate traffic over 8/10/16-Gbps ISLs.

such as role-based access control (RBAC) and Cisco TrustSec Fibre Channel Link Encryption. Starting with Cisco MDS 9000 NX-OS 4. such as those in defense and intelligence services. Switch and Host Authentication Fibre Channel Security Protocol (FC-SP) capabilities in Cisco MDS 9000 NX-OS provide switchto-switch and host-to-switch authentication for enterprisewide fabrics. Cisco TrustSec Fibre Channel Link Encryption Cisco TrustSec Fibre Channel Link Encryption addresses customer needs for data integrity and privacy. and data integrity for both FCIP and iSCSI connections on the Cisco MDS 9000 family switches. Cisco MDS 9000 NX-OS offers other significant security features. IP Security for FCIP and iSCSI Traffic flowing outside the data center must be protected. which provide true isolation of SAN-attached devices. because the encryption is at full line rate with no performance penalty. and accounting (AAA). a switch or host cannot join the fabric. Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) is used to perform authentication locally in the Cisco MDS 9000 family or remotely through RADIUS or TACACS+. and AES-GMAC authenticates only the frames that are being passed between the two E-ports. coarse wavelength-division multiplexing (CWDM). or dense wavelength-division multiplexing (DWDM). Cisco TrustSec Fibre Channel Link Encryption is an extension of the FC-SP feature and uses the existing FC-SP architecture. such as dark fiber. frames are decrypted and authenticated with integrity check. There are two primary use cases for Cisco TrustSec Fibre Channel Link Encryption: Many customers will want to help ensure the privacy and integrity of any data that leaves the secure confines of their data center through a native Fibre Channel link. Role-Based Access Control . If authentication fails. Cisco MDS 9000 family management has been certified for Federal Information Processing Standards (FIPS) 140-2 Level 2 and validated for Common Criteria (CC) Evaluation Assurance Level 3 (EAL 3). Other customers. may be even more security focused and choose to encrypt all traffic within their data center as well. data encryption for privacy. Cisco MDS 9000 NX-OS uses Internet Key Exchange Version 1 (IKEv1) and IKEv2 protocols to dynamically set up security associations for IPsec using preshared keys for remote-side authentication. and supports the industry-standard security protocol for authentication. AES-GCM encrypts and authenticates frames. The encryption algorithm is 128-bit Advanced Encryption Standard (AES) and enables either AES-Galois/Counter Mode (AESGCM) or AES-Galois Message Authentication Code (AES-GMAC) for an interface. At ingress.addition to VSANs. Encryption is performed at line rate by encapsulating frames at egress with encryption using the GCM mode of AES 128-bit encryption. A future release of Cisco NX-OS software will enable encryption of Fibre Channel data between Eports of Cisco MDS 9000 16-Gbps Fibre Channel switching modules. The proven IETF standard IP Security (IPsec) capabilities in Cisco MDS 9000 NX-OS offer secure authentication. authorization.2(1). Fibre Channel data between E-ports of Cisco MDS 9000 8-Gbps Fibre Channel switching modules can be encrypted.

Role-Based Access Control
Cisco MDS 9000 NX-OS provides RBAC for management access to the Cisco MDS 9000 family command-line interface (CLI) and Simple Network Management Protocol (SNMP). The roles describe the access-control policies for various feature-specific commands on one or more VSANs. In addition to the two default roles on the switch, up to 64 user-defined roles can be configured. CLI and SNMP users and passwords are shared, and only a single administrative account is required for each user. Applications using SNMP Version 3 (SNMPv3), such as Cisco DCNM-SAN, offer full RBAC for switch features managed using this protocol.

Zoning
Zoning provides access control for devices within a SAN. The entities can be hosts, targets, or switches, identified by their WWNs. Cisco MDS 9000 NX-OS supports the following types of zoning:

N-port zoning: Defines zone members based on the end-device (host and storage) port WWN or Fibre Channel identifier (FC-ID)
Fx-port zoning: Defines zone members based on the switch port WWN, WWN plus interface index or domain ID plus interface index, domain ID plus port number (for Brocade interoperability), fabric port WWN, or interface plus domain ID or switch WWN of the interface
iSCSI zoning: Defines zone members based on the host zone iSCSI name or IP address

To provide strict network security, zoning is always enforced per frame using access control lists (ACLs) that are applied at the ingress switch. All zoning policies are enforced in hardware, and none of them cause performance degradation. In addition, the software provides support for smart zoning. Smart zoning enables customers to intuitively create fewer multimember zones at an application level, a physical cluster level, or a virtualized cluster level, instead of creating multiple two-member zones, without sacrificing switch resources. This dramatically reduces operational complexity while consuming fewer resources on the switch. Enhanced zoning session-management capabilities further enhance security by allowing only one user at a time to modify zones. The zoning feature complies with the FC-GS-4 and FC-SW-3 standards, and both standards support the basic and enhanced zoning functionalities. A smart-zoning sketch follows this section.

Port Security and Fabric Binding
Port security locks the mapping of an entity to a switch port. This locking mechanism helps ensure that unauthorized devices connecting to the switch port do not disrupt the SAN fabric. Fabric binding extends port security to allow ISLs between only specified switches.
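A minimal smart-zoning sketch follows; the VSAN number, device aliases, and pWWNs are hypothetical, but the commands reflect the smart-zoning model described previously, in which each member is marked as an initiator or a target so the switch programs only initiator-target pairs:

    ! Define fabricwide aliases for the end devices (pWWNs are examples)
    device-alias database
      device-alias name ESX01-HBA1 pwwn 21:00:00:e0:8b:00:00:01
      device-alias name ARRAY1-P1 pwwn 50:06:01:60:00:00:00:01
    device-alias commit
    ! Enable smart zoning for the VSAN and build one multimember zone
    zone smart-zoning enable vsan 10
    zone name APP-CLUSTER vsan 10
      member device-alias ESX01-HBA1 init
      member device-alias ARRAY1-P1 target
    zoneset name FABRIC-A vsan 10
      member APP-CLUSTER
    zoneset activate name FABRIC-A vsan 10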

Availability
Cisco MDS 9000 NX-OS provides resilient software architecture for mission-critical hardware deployments.

Nondisruptive Software Upgrades
Cisco MDS 9000 NX-OS provides nondisruptive software upgrades for director-class products with redundant hardware and for fabric switches. Minimally disruptive upgrades are provided for the other Cisco MDS 9000 family fabric switches that do not have redundant supervisor engine hardware.

Stateful Process Failover
Cisco MDS 9000 NX-OS automatically restarts failed software processes and provides stateful supervisor engine failover to help ensure that any hardware or software failures on the control plane do not disrupt traffic flow in the fabric.

ISL Resiliency Using Port Channels
Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E-ports) or trunking E-ports (TE-ports), or F-ports connected to NP-ports, can be bundled into a port channel. ISL ports can reside on any switching module, and they do not need a designated master port. Thus, if a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration. Cisco MDS 9000 NX-OS uses a protocol to exchange port channel configuration information between adjacent switches to simplify port channel management, including misconfiguration detection and autocreation of port channels among compatible ISLs. In the autoconfigure mode, ISLs with compatible parameters automatically form channel groups; no manual intervention is required. A configuration sketch follows this section.

Port Tracking for Resilient SAN Extension
The Cisco MDS 9000 NX-OS port-tracking feature enhances SAN extension resiliency. If a Cisco MDS 9000 family switch detects a WAN or MAN link failure, it takes down the associated disk-array link when port tracking is configured, so the array can redirect a failed I/O operation to another link without waiting for an I/O timeout. Otherwise, disk arrays must wait seconds for an I/O timeout to recover from a network link failure.

FCIP, iSCSI, and Management-Interface High Availability
The Virtual Router Redundancy Protocol (VRRP) increases the availability of Cisco MDS 9000 family management traffic routed over both Ethernet and Fibre Channel networks. VRRP increases IP network availability for iSCSI and FCIP connections by allowing failover of connections from one port to another, either locally or on another Cisco MDS 9000 family switch. This feature facilitates the failover of an iSCSI volume from one IP services port to any other IP services port, independent of host software. Similarly, VRRP dynamically manages redundant paths for external Cisco MDS 9000 family management applications, making control-traffic path failures transparent to applications. The autotrespass feature enables high-availability iSCSI connections to RAID subsystems. Trespass commands can be sent automatically when Cisco MDS 9000 NX-OS detects failures on active paths.
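As a sketch (module and port numbers are hypothetical), two ISLs can be aggregated into one resilient logical link as follows:

    ! Bundle two E-ports into one logical ISL
    interface fc1/1-2
      channel-group 10 force
      no shutdown
    ! Use the active protocol so misconfigurations are detected
    interface port-channel 10
      channel mode active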

Manageability
Cisco MDS 9000 NX-OS incorporates many management features that facilitate effective management of growing storage environments with existing resources. Cisco fabric services simplify SAN provisioning by automatically distributing configuration information to all switches in a storage network. Distributed device alias services provide fabricwide alias names for host bus adapters (HBA), storage devices, and switch ports, eliminating the need to reenter names when devices are moved. Fabric Device Management Interface (FDMI) capabilities provided by Cisco MDS 9000 NX-OS simplify management of devices such as Fibre Channel HBAs through in-band communications. With FDMI, management applications can gather HBA and host OS information without the need to install proprietary host agents.

Management interfaces supported by Cisco MDS 9000 NX-OS include the following:

CLI through a serial port, an out-of-band (OOB) Ethernet management port, or in-band IP over Fibre Channel (IPFC)
SNMPv1, v2, and v3 over an OOB management port and in-band IPFC
Cisco FICON Control Unit Port (CUP) for in-band management from IBM S/390 and z/900 processors
IPv6 support for iSCSI, FCIP, and management traffic routed in band and out of band

Open APIs
Cisco MDS 9000 NX-OS provides a truly open API for the Cisco MDS 9000 family based on the industry-standard SNMP. Commands performed on the switches by Cisco DCNM-SAN use this open API extensively. Also, all major storage and network management software vendors use the Cisco MDS 9000 NX-OS management API. The Cisco DCNM-SAN Storage Management Initiative Specification (SMI-S) server provides an XML interface with an embedded agent that complies with the Web-Based Enterprise Management (WBEM), Common Information Model (CIM), and SMI-S standards.

Configuration and Software-Image Management
The CiscoWorks solution is a commonly used suite of tools for a wide range of Cisco devices such as IP switches, routers, and wireless devices. The Cisco MDS 9000 NX-OS open API allows the CiscoWorks Resource Manager Essentials (RME) application to provide centralized Cisco MDS 9000 family configuration management, software-image management, intelligent system log (syslog) management, and inventory management. The open API also helps CiscoWorks Device Fault Manager (DFM) monitor Cisco MDS 9000 family device health, including switch, fabric, and switch ports. CiscoWorks DFM can also monitor the health of important components such as fans, power supplies, and temperature, as well as supervisor memory and processor utilization.

N-Port Virtualization
Cisco MDS 9000 NX-OS supports industry-standard N-port identifier virtualization (NPIV), which allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link. HBAs that support NPIV can help improve SAN security by enabling configuration of zoning and port security independently for each virtual machine (OS partition) on a host.
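On the core switch side, NPIV is a single global toggle; a minimal sketch follows:

    ! Allow multiple N-port logins on a single F-port
    feature npiv
    ! Confirm that the F-port now carries multiple fabric logins
    show flogi database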

In addition to being useful for server connections, NPIV is beneficial for connectivity between core and edge SAN switches. N-port virtualization (NPV) is a complementary feature that reduces the number of Fibre Channel domain IDs in core-edge SANs. Cisco MDS 9000 family fabric switches operating in the NPV mode do not join a fabric; they just pass traffic between core switch links and end devices, which eliminates the domain IDs for these switches. NPIV is used by edge switches in the NPV mode to log in to multiple end devices that share a link to the core switch. This feature is available only for Cisco MDS 9000 family blade switches and the Cisco MDS 9100 Series Multilayer Fabric Switches.

Cisco Data Center Network Manager Server Federation
Cisco DCNM server federation improves management availability and scalability by load balancing fabric-discovery, performance-monitoring, and event-handling processes. Up to 10 Cisco DCNM instances can form a federation (or cluster) that can manage more than 75,000 end devices. A storage administrator can discover and move fabrics within a federation for the purposes of load balancing, high availability, and disaster recovery. Cisco DCNM provides a single management pane for viewing and managing all fabrics within a single federation. In addition, users can connect to any Cisco DCNM instance and view all reports, statistics, inventory, and logs from a single web browser.

Network Boot for iSCSI Hosts
Cisco MDS 9000 NX-OS simplifies iSCSI-attached host management by providing network-boot capability.

Internet Storage Name Service
The Internet Storage Name Service (iSNS) helps existing TCP/IP networks function more effectively as SANs by automating discovery, management, and configuration of iSCSI devices. iSCSI targets presented by Cisco MDS 9000 family IP storage services and Fibre Channel device-state-change notifications are registered by Cisco MDS 9000 NX-OS, either through the highly available, distributed iSNS built in to Cisco MDS 9000 NX-OS or through external iSNS servers.

Autolearn for Network Security Configuration
The autolearn feature allows the Cisco MDS 9000 family to automatically learn about devices and switches that connect to it. The administrator can use this feature to configure and activate network security features such as port security without having to manually configure the security for each port.

FlexAttach
Cisco MDS 9000 NX-OS supports the FlexAttach feature. One of the main problems faced today in SAN environments is the time and effort required to install and replace servers. The process involves both SAN and server administrators, and the interaction and coordination between them can make the process time consuming. To alleviate the need for interaction between SAN and server administrators, the SAN configuration should not have to be changed when a new server is installed or an existing server is replaced. FlexAttach addresses these problems, reducing the number of configuration changes and the time and coordination required by SAN and server administrators when installing and replacing servers. Cisco MDS 9000 NX-OS 6.2(1) supports the FlexAttach feature on the Cisco MDS 9148 Multilayer Fabric Switch when the NPV mode is enabled.
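On an edge switch, NPV mode is enabled globally; note that on real hardware this change erases the configuration and reboots the switch, so the sketch below (with a hypothetical core-facing interface) is illustrative only:

    ! Convert the edge switch to NPV mode (disruptive: config is erased)
    feature npv
    ! Core-facing uplinks become NP-ports
    interface fc1/48
      switchport mode NP
      no shutdown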

Proxy iSCSI Initiator
The proxy iSCSI initiator simplifies configuration procedures when multiple iSCSI initiators (hosts) are assigned to the same iSCSI target ports. Proxy mode reduces the number of separate times that back-end tasks such as Fibre Channel zoning and storage-device configuration must be performed.

iSCSI Server Load Balancing
Cisco MDS 9000 NX-OS helps simplify large-scale deployment and management of iSCSI servers. iSCSI server load balancing (iSLB) is available to automatically redirect servers to the next available Gigabit Ethernet port. In addition to allowing fabric-wide iSCSI configuration from a single switch, iSLB greatly simplifies iSCSI configuration and provides automatic, rapid recovery from IP connectivity problems for high availability.

IPv6
Cisco MDS 9000 NX-OS provides IPv6 support for FCIP, iSCSI, and management traffic routed in-band and out-of-band. A complete dual stack has been implemented for IPv4 and IPv6 to remain compatible with the large base of IPv4-compatible hosts, routers, and Cisco MDS 9000 family switches running previous software revisions. This dual-stack approach allows the Cisco MDS 9000 family switches to easily connect to older IP networks, transitional networks with a mixture of both versions, and pure IPv6 data networks.

Traffic Management
In addition to implementing the Fabric Shortest Path First (FSPF) Protocol to calculate the best path between two switches and providing in-order delivery features, Cisco MDS 9000 NX-OS enhances the architecture of the Cisco MDS 9000 family with several advanced traffic-management features that help ensure consistent performance of the SAN under varying load conditions.

Quality of Service
Four distinct quality-of-service (QoS) priority levels are available: three for Fibre Channel data traffic and one for Fibre Channel control traffic. Control traffic is assigned the highest QoS priority automatically, to accelerate convergence of fabric-wide protocols such as FSPF, zone merges, and principal switch selection. Fibre Channel data traffic for latency-sensitive applications can be configured to receive higher priority than throughput-intensive applications using the data QoS priority levels. Data traffic can be classified for QoS by the VSAN identifier, zone, N-port WWN, or FC-ID. Zone-based QoS helps simplify configuration and administration by using the familiar zoning concept.

Extended Credits
Full line-rate Fibre Channel ports provide at least 255 buffer credits as standard. Adding credits lengthens distances for Fibre Channel SAN extension. Using extended credits, up to 4095 buffer credits from a pool of more than 6000 buffer credits for a module can be allocated to ports as needed to greatly extend the distance of Fibre Channel SANs.
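Extended buffer-to-buffer credits are assigned per interface once the appropriate license is installed. In the following sketch the interface and credit value are hypothetical, and the exact command form should be checked against the command reference for your release:

    ! Allocate extended BB_credits to a long-haul ISL port
    interface fc2/1
      switchport fcrxbbcredit extended 3500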

Virtual Output Queuing
Virtual output queuing (VOQ) buffers Fibre Channel traffic at the ingress port to eliminate head-of-line blocking. The switch is designed so that the presence of a slow N-port on the SAN does not affect the performance of any other port on the SAN.

Fibre Channel Port Rate Limiting
The Fibre Channel port rate-limiting feature for the Cisco MDS 9100 Series Multilayer Fabric Switches controls the amount of bandwidth available to individual Fibre Channel ports within groups of four host-optimized ports. Limiting bandwidth on one or more Fibre Channel ports allows the other ports in the group to receive a greater share of the available bandwidth under high-use conditions. Port rate limiting is also beneficial for throttling WAN traffic at the source to help eliminate excessive buffering in Fibre Channel and IP data network devices.

Load Balancing of Port Channel Traffic
Load balancing using port channels is performed over both Fibre Channel and FCIP links. Port channels load balance Fibre Channel traffic using a hash of the source FC-ID and destination FC-ID, and optionally the exchange ID. Cisco MDS 9000 NX-OS also can be configured to load balance across multiple same-cost FSPF routes.

iSCSI and SAN Extension Performance Enhancements
iSCSI and FCIP enhancements address out-of-order delivery problems, optimize transfer sizes for the IP network topology, and reduce latency by eliminating TCP connection setup for most data transfers. For WAN performance optimization, Cisco MDS 9000 NX-OS includes a SAN extension tuner, which directs SCSI I/O commands to a specific virtual target and reports IOPS and I/O latency results, helping determine the number of concurrent I/O operations needed to increase FCIP throughput. FCIP performance is further enhanced for SAN extension by compression and write acceleration.

FCIP Compression
FCIP compression in Cisco MDS 9000 NX-OS increases the effective WAN bandwidth without the need for costly infrastructure upgrades. Gigabit Ethernet ports for the Cisco MDS 9222i Multiservice Modular Switch (MMS), MDS 9000 18/4-Port Multiservice Module (MSM), and MDS 9000 16-Port Storage Services Node (SSN) achieve up to a 43:1 compression ratio, with typical ratios of 4:1 over a variety of data sources. A future release of Cisco NX-OS software will support similar compression ratios on the 10 Gigabit Ethernet FCIP ports on the new Cisco MDS 9250i, the next-generation intelligent services switch in the Cisco MDS 9200 Series Multiservice Switch portfolio. By integrating data compression into the Cisco MDS 9000 family, more efficient FCIP-based business-continuity and disaster-recovery solutions can be implemented without the need to add and manage a separate device.

FCIP Tape Acceleration
Centralization of tape backup and archive operations provides significant cost savings by allowing expensive robotic tape libraries and high-speed drives to be shared. This centralization poses a challenge for remote backup media servers that need to transfer data across a WAN. High-performance streaming tape drives require a continuous flow of data to avoid write-data underruns, which dramatically reduce write throughput.
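Pulling several of these SAN-extension features together, a single FCIP tunnel with write acceleration and compression can be sketched as follows; the IP addresses, profile numbers, and compression keyword are hypothetical and should be validated against the IP services configuration guide:

    ! Bind an FCIP profile to the local Gigabit Ethernet address
    feature fcip
    fcip profile 10
      ip address 192.0.2.1
    ! The FCIP interface acts as a virtual E-port to the remote switch
    interface fcip 10
      use-profile 10
      peer-info ipaddr 192.0.2.2
      write-accelerator
      ip-compression auto
      no shutdown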

Without FCIP tape acceleration, the effective WAN throughput for remote tape operations decreases exponentially as the WAN latency increases. FCIP tape acceleration helps achieve nearly full throughput over WAN links for remote tape-backup and restore operations for both open systems and mainframe environments.

Serviceability, Troubleshooting, and Diagnostics
Cisco MDS 9000 NX-OS is among the first storage network OSs to provide a broad set of serviceability features that simplify the process of building, expanding, and maintaining SANs. These features also increase availability by decreasing SAN disruptions for maintenance and reducing recovery time from problems.

Call Home
Cisco MDS 9000 NX-OS offers a Call Home feature for proactive fault management. Cisco Call Home provides a notification system triggered by software and hardware events. The Call Home feature forwards the alarms and events, packaged with other relevant information in a standard format, to external entities. External entities can include, but are not restricted to, a server in-house or at a service provider's facility, an administrator's e-mail account or pager, and the Cisco Technical Assistance Center (TAC). These notification messages can be used to automatically open technical-assistance tickets and resolve problems before they become critical. Alert grouping capabilities and customizable destination profiles offer the flexibility needed to notify specific individuals or support organizations only when necessary.

SCSI Flow Statistics
LUN-level SCSI flow statistics can be collected for any combination of initiator and target. The scope of these statistics includes read, write, and control commands and error statistics. This feature is available only on the Cisco MDS 9000 family storage service modules.

Fibre Channel Ping and Fibre Channel Traceroute
Cisco MDS 9000 NX-OS brings to storage networks features such as Fibre Channel ping and Fibre Channel traceroute. With Fibre Channel ping, administrators can check the connectivity of an N-port and determine its round-trip latency. With Fibre Channel traceroute, administrators can check the reachability of a switch by tracing the path followed by frames and determining hop-by-hop latency.

Cisco Switched Port Analyzer and Cisco Fabric Analyzer
Typically, debugging errors in a Fibre Channel SAN requires the use of a Fibre Channel analyzer, which causes significant disruption of traffic in the SAN. The Cisco Switched Port Analyzer (SPAN) feature allows an administrator to analyze all traffic between ports (called the SPAN source ports) by nonintrusively directing the SPAN-session traffic to a SPAN destination port that has an external analyzer attached to it. The SPAN destination port does not have to be on the same switch as the SPAN source ports; any Fibre Channel port in the fabric can be a source. SPAN sources can include Fibre Channel ports and FCIP and iSCSI virtual ports for IP services. The embedded Cisco Fabric Analyzer allows the Cisco MDS 9000 family to save Fibre Channel control traffic within the switch for text-based analysis, or to send IP-encapsulated Fibre Channel control traffic to a remote PC for decoding and display using the open source Ethereal network-analyzer application. Fibre Channel control traffic therefore can be captured and analyzed without an expensive Fibre Channel analyzer. This capability helps isolate problems and reduce risks by preventing errors from spreading to the whole fabric. Cisco MDS 9000 NX-OS can shut down malfunctioning ports if errors exceed user-defined thresholds.
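Both tools are available from the EXEC prompt; the pWWN, FC-ID, and VSAN below are placeholders:

    ! Measure round-trip latency to an N-port
    fcping pwwn 50:06:01:60:00:00:00:01 vsan 10
    ! Trace the hop-by-hop path toward a destination FC-ID
    fctrace fcid 0xef0001 vsan 10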

Other Serviceability Features
Additional serviceability features include the following:

Online diagnostics: Cisco MDS 9000 NX-OS provides advanced online diagnostics capabilities. Starting with Cisco MDS 9000 NX-OS 6.2, the powerful Cisco Generic Online Diagnostics (GOLD) framework replaces the Cisco Online Health Management System (OHMS) diagnostic framework on the new Cisco MDS 9700 Series Multilayer Directors. Cisco GOLD is a suite of diagnostic facilities for verifying that hardware and internal data paths are operating as designed. Periodically, tests are run to verify that supervisor engines, switching modules, optics, and interconnections are functioning properly. Boot-time diagnostics, continuous monitoring, standby fabric loopback tests, and on-demand and scheduled tests are part of the Cisco GOLD feature set. These online diagnostics do not adversely affect normal Fibre Channel operations, allowing them to be run in production SAN environments.

Loopback testing: The Cisco MDS 9000 family uses offline port loopback testing to check port capabilities. During testing, a port is isolated from the external connection, and traffic is looped internally from the transmit path back to the receive path.

Enhanced event logging and reporting with SNMP traps and syslog: Cisco MDS 9000 family events filtering and remote monitoring (RMON) provide complete and exceptionally flexible control over SNMP traps. Traps can be generated based on a threshold value, switch counters, or time stamps.

Cisco Embedded Event Manager (EEM): Cisco EEM is a powerful device and system management technology integrated into Cisco NX-OS. Cisco EEM helps customers harness the network intelligence intrinsic to the Cisco software and enables them to customize behavior based on network events as they happen.

Network Time Protocol (NTP) support: NTP synchronizes system clocks in the fabric, providing a precise time base for all switches. An NTP server must be accessible from the fabric through the OOB Ethernet port. Within the fabric, NTP messages are transported using IPFC.

IPFC: The Cisco MDS 9000 family provides the capability to carry IP packets over a Fibre Channel network. With this feature, an external management station attached through an OOB management port to a Cisco MDS 9000 family switch in the fabric can manage all other switches in the fabric using the in-band IPFC Protocol.

System Log
The Cisco MDS 9000 family syslog capabilities greatly enhance debugging and management. Syslog severity levels can be set individually for all Cisco MDS 9000 NX-OS functions, facilitating logging and display of messages ranging from brief summaries to very detailed information for debugging. Messages ranging from only high-severity events to detailed debugging messages can be logged. Messages are logged internally, and they can be sent to external syslog servers; they can also be selectively routed to a console and to log files. Syslog provides a comprehensive supplemental source of information for managing Cisco MDS 9000 family switches.
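A short sketch of the logging and NTP settings described above (server addresses and the severity level are hypothetical):

    ! Log to an external syslog server, up to severity 5 (notifications)
    logging server 192.0.2.50 5
    ! Synchronize the switch clock for consistent message time stamps
    ntp server 192.0.2.10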

Cisco MDS 9000 Family Software License Packages
Most Cisco MDS 9000 family software features are included in the standard package: the base configuration of the switch. However, some features are logically grouped into add-on packages that must be licensed separately. Cisco offers a set of advanced software features logically grouped into software license packages, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series Multilayer Fabric Switches.

Cisco license packages require a simple installation of an electronic license; no software installation or upgrade is required. Licenses can also be installed on the switch in the factory, if desired. Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems caused by improperly maintained licensing. All licensed features may be evaluated for up to 120 days before a license must be installed. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled, which allows plenty of time to correct the licensing issue.

Table 7-2 lists the Cisco MDS 9700 family licenses, and Table 7-3 lists the Cisco MDS 9500, 9200, and 9100 family licenses.
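License installation is a one-step operation, as described above; the license file name below is a hypothetical placeholder:

    ! Install an electronic license key stored on bootflash
    install license bootflash:MDS20150101123456789.lic
    ! Review which features are licensed, in use, or in the grace period
    show license usage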

Table 7-2 Cisco MDS 9700 Family Licenses .


Table 7-3 Cisco MDS 9500, 9200, and 9100 Family Licenses

Cisco MDS Multilayer Directors
Cisco MDS 9500 and 9700 Series Multilayer Directors are storage-area network (SAN) switches designed for deployment in large, scalable enterprise clouds. They enable seamless deployment of unified fabrics with high-performance Fibre Channel, FICON, and FCoE connectivity to achieve low total cost of ownership (TCO). They share the same operating system and management interface with Cisco Nexus data center switches. Figure 7-11 portrays the Cisco MDS 9000 Director switches since year 2002, and Table 7-4 explains the models.

Figure 7-11 Cisco MDS 9000 Director Switches

Table 7-4 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Cisco MDS 9700
Cisco MDS 9700 is available in a 10-slot and a 6-slot configuration. It supports 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel and FCoE. MDS 9710 supports up to 384 2/4/8-Gbps or 4/8/16-Gbps autosensing line-rate Fibre Channel ports in a single chassis and up to 1152 Fibre Channel ports per rack, with up to 24 terabits per second (Tbps) of Fibre Channel system bandwidth. The Cisco MDS 9710 also supports 10 Gigabit Ethernet clocked optics carrying Fibre Channel traffic. The multilayer architecture of the Cisco MDS 9700 series enables a consistent feature set over a protocol-independent switch fabric.

Cisco MDS 9710 architecture, based on central arbitration and a crossbar fabric, provides 16-Gbps line-rate, non-blocking, predictable performance across all traffic conditions for every port in the chassis. The combination of the 16-Gbps Fibre Channel switching module and the Fabric-1 crossbar switching modules enables up to 1.5 Tbps of front-panel Fibre Channel throughput between modules in each direction for each of the eight Cisco MDS 9710 payload slots. This per-slot bandwidth is twice the bandwidth needed to support a 48-port 16-Gbps Fibre Channel module at full line rate. MDS 9710 and MDS 9706 switches transparently integrate Fibre Channel, FCoE, and FICON. They support multihop FCoE, extending connectivity from FCoE and Fibre Channel fabrics to FCoE and Fibre Channel storage devices. Figure 7-12 portrays the Cisco MDS 9700 Director switches and their modular components.

Figure 7-12 Cisco MDS 9710 Director Switch

The Cisco MDS 9700 series switches enable redundancy on all major components, including the fabric card. The Cisco MDS 9710 Director has a 10-slot chassis, and it supports the following:

Three fan trays at the back of the chassis. It has three hot-swappable rear panel fan trays with redundant individual fans.
Fabric modules located behind the fan trays. MDS 9710 and MDS 9706 have six crossbar-switching modules located at the rear of the chassis; the fabric modules are numbered 1–6 from left to right when facing the rear of the chassis. Users can add an additional fabric card to enable N+1 fabric redundancy.
Two Supervisor-1 modules, which reside in slots 5 and 6.
Eight power supply bays located at the front of the chassis. It provides grid redundancy on the power supply and 1+1 redundant supervisors. The power supplies are redundant by default, and they can be configured to be combined if desired.

When the system is running, remove only one fan tray at a time to access the appropriate fabric modules.

Fabric Modules 1–2 are behind fan tray 1, Fabric Modules 3–4 are behind fan tray 2, and Fabric Modules 5–6 are behind fan tray 3. Fabric modules may be installed in any slot. MDS 9710 and MDS 9706 have front-to-back airflow: air enters through perforations in the line cards, supervisor modules, and power supplies, and air exits through perforations in the fan trays. Blank panels must be installed in any empty line card or PSU slots to ensure proper airflow. Figure 7-13 portrays the Cisco MDS 9710 Director switch details.

Figure 7-13 Cisco MDS 9710 Director Switch

The Cisco MDS 9706 Director switch has a 6-slot chassis, and it supports the following:

Three fan trays at the back of the chassis. It has three hot-swappable rear panel fan trays with redundant individual fans. Fabric modules may be installed in any slot; the best practice is one behind each fan tray.
Two Supervisor-1 modules, which reside in slots 3 and 4.
Four power supply bays located at the front of the chassis. The power supplies are redundant by default, and they can be configured to be combined if desired.

Figure 7-14 portrays the Cisco MDS 9706 Director switch chassis details.

Figure 7-14 Cisco MDS 9706 Director Switch

Cisco MDS 9500
Cisco MDS 9500 is available in 6- and 13-slot configurations. It supports 1-, 2-, 4-, 8-, and 10-Gbps Fibre Channel; 10-Gbps FCoE; 1/2/4/8-Gbps and 10-Gbps FICON; 1-Gbps Fibre Channel over IP (FCIP); and 1-Gbps Small Computer System Interface over IP (iSCSI) port speeds. MDS 9500 supports up to 528 1/2/4/8-Gbps autosensing Fibre Channel ports in a single chassis and up to 1584 Fibre Channel ports per rack. The Cisco MDS 9513 supports advanced FICON services, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization (NPIV) for mainframe Linux partitions. Inclusion of Cisco Control Unit Port (CUP) support enables in-band management of Cisco MDS 9000 family switches from mainframe management applications. Figure 7-15 portrays the Cisco MDS 9500 Director switch details.

Figure 7-15 Cisco MDS 9500 Director Switch

Note
The Cisco MDS 9509 has reached End of Life and End of Sale status. It continues to be supported at the time of writing this chapter. Please refer to the Cisco website, www.cisco.com, for more information.

All Cisco MDS 9500 Series Director switches are fully redundant with no single point of failure. The system has dual supervisors, redundant power supplies, redundant clock modules, and redundant crossbar switch fabrics, and it employs majority signal voting wherever 1:1 redundancy would not be appropriate. Although there is a single fan tray, there are multiple redundant fans with dual fan controllers, and failure of one or more individual fans causes all other fans to speed up to compensate. All fans are variable speed. In the unlikely event of multiple simultaneous fan failures, the fan tray still provides sufficient cooling to keep the switch operational.

The Cisco MDS 9513 Director is a 13-slot Fibre Channel switch. It uses a midplane, so modules exist on both sides of the plane. The front panel consists of 13 horizontal slots, where slots 1 to 6 and slots 9 to 13 are reserved for switching and services modules only, and slots 7 and 8 are for Supervisor-2A modules only. The Cisco MDS 9513 Director supports the following:

Two Supervisor-2A modules, which reside in slots 7 and 8.
A variable-speed fan tray with 15 individual fans, located on the front-left panel of the chassis. A side-mounted fan tray provides active cooling.
Two crossbar modules, located at the rear of the chassis. The rear of the chassis supports two vertical, redundant, external crossbar modules and a variable-speed fan tray with two individual fans located above the crossbar modules. One hot-swappable fan module for the crossbar modules is located at the rear of the chassis.
Two hot-swappable clock modules, located at the rear of the chassis.
Two power supplies, located at the rear of the chassis. The power supplies are redundant by default, and they can be configured to be combined if desired.

The Cisco MDS 9506 Director has a 6-slot chassis, and it supports the following:

Up to two Supervisor-2A modules that provide a switching fabric, with a console port, COM1 port, and a MGMT 10/100 Ethernet port on each module. Slots 5 and 6 are reserved for the supervisor modules.
The first four slots for optional modules, which can include up to four switching modules or three IPS modules.
Two power supplies, located in the back of the chassis. The power supplies are redundant by default and can be configured to be combined if desired. Two power entry modules (PEM) are in the front of the chassis for easy access to power supply connectors and switches.
One hot-swappable fan module with redundant fans.

Table 7-5 compares the major details of the Cisco MDS 9500 and 9700 Series Director switch models.



Table 7-5 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Note
The Cisco Fabric Module 2, the Gen-3 1/2/4/8-Gbps modules, and the 4-port 10-Gbps FC Module have reached End of Life and End of Sale status. These modules are still supported at the time of writing this chapter, but we will not be covering them in this book. Refer to the Cisco website, www.cisco.com, for more information.

Cisco MDS 9000 Series Multilayer Components
Several modules are available for the modular switches in the Cisco MDS 9000 Series product range. This topic describes the capacities of the Cisco MDS 9000 Series switching and supervisor modules.

Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
The Cisco MDS 9500 Series Supervisor-2A Module incorporates an integrated crossbar switching fabric. Designed to integrate multiprotocol switching and routing, intelligent SAN services, and storage applications onto highly scalable SAN switching platforms, the Cisco MDS 9500 Series Supervisor-2A Module enables intelligent, resilient, scalable, and secure high-performance multilayer SAN switching solutions. It supports up to 528 Fibre Channel ports in a single chassis and 1584 Fibre Channel ports in a single rack, with 1.4 terabits per second (Tbps) of internal system bandwidth when combined with a Cisco MDS 9506 or MDS 9509 Multilayer Director chassis, or up to 2.2 Tbps of internal system bandwidth when installed in the Cisco MDS 9513 Multilayer Director chassis. The modules occupy two slots in the Cisco MDS 9500 Series chassis, with the remaining slots available for switching modules.

The Cisco MDS 9500 Series directors include two supervisor modules: a primary module and a redundant module. The active/standby configuration of the supervisor modules allows support of nondisruptive software upgrades. The supervisor module also supports stateful process restarts, allowing recovery from most process errors without a reset of the supervisor module and with no disruption to traffic.

Cisco MDS 9000 family supervisor modules provide two essential switch functions. First, they house the control-plane processors that are used to manage the switch and keep the switch operational. Second, they house the crossbar switch fabrics and central arbiters used to provide data-plane connectivity between all the line cards in the chassis. Although each Supervisor-2A module contains a crossbar switch fabric, it is used only on the Cisco MDS 9509 and MDS 9506 chassis. On the Cisco MDS 9513 chassis, the crossbar switch fabrics are on fabric cards (refer to Figure 7-15) inserted into the rear of the chassis, and the crossbar switch fabrics housed on the supervisors are disabled. On the Cisco MDS 9513 Director, Cisco MDS NX-OS Release 5.2 supports the Cisco MDS 9513 Fabric 3 module, DS-13SLT-FAB3, which is a generation-4 type module.

Other than providing crossbar switch fabric capacity, the supervisor modules do not handle any frame forwarding or frame processing; all frame forwarding and processing is handled within the distributed forwarding ASICs on the line cards themselves. The Cisco MDS 9000 arbitrated local switching feature provides line-rate switching across all ports on the same module without degrading performance or increasing latency for traffic to and from other modules in the chassis. This capability is achieved through the Cisco MDS 9500 Series Multilayer Directors crossbar architecture, with a central arbiter arbitrating fairly between local traffic and traffic to and from other modules.

Management connectivity is provided through an out-of-band Ethernet port (interface mgmt0), an RJ-45 serial console port, and optionally a DB9 auxiliary console port suitable for modem connectivity. Figure 7-16 portrays the Cisco MDS 9500 Series Supervisor-2A module and the Cisco MDS 9700 Supervisor-1 module, and Table 7-6 lists the details of switching capabilities per fabric for the Cisco MDS 9710 and MDS 9706 Director switch models.

Figure 7-16 Cisco MDS 9000 Supervisor Models

Table 7-6 Details of Switching Capabilities per Fabric for Cisco MDS 9710 and MDS 9706

Each fabric module provides 220 Gbps of data bandwidth per slot, equivalent to 256 Gbps of Fibre Channel bandwidth. Three fabric modules together provide 660 Gbps of data bandwidth and 768 Gbps of Fibre Channel bandwidth per slot. Line-rate operation of the 48-port 16-Gbps Fibre Channel module therefore needs only three fabric modules, which is why three is the minimum number of fabric modules supported on MDS 9700 Series switches. Figure 7-17 portrays the crossbar switching modules for the MDS 9700 and MDS 9513 platforms, and Table 7-7 lists the details of the MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A.

Figure 7-17 Cisco MDS 9700 Crossbar Switching Fabric-1 and MDS 9513 Crossbar Switching Fabric-3
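To see why three fabric modules are the minimum, consider the arithmetic: a 48-port 16-Gbps module can demand 48 × 16 Gbps = 768 Gbps per slot at full line rate, and each fabric module contributes the equivalent of 256 Gbps of Fibre Channel bandwidth per slot, so 3 × 256 Gbps = 768 Gbps exactly covers the module. Additional fabric modules then provide headroom and redundancy rather than required throughput.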

Table 7-7 Details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A

Both the MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A modules have two USB 2.0 ports on the front panel for simplified configuration-file uploading and downloading using common USB memory stick products.

Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module provides high port density and is ideal for connection of high-performance virtualized servers in large-scale storage networking environments. For traffic switched across the backplane, the 48-port 8-Gbps Advanced Fibre Channel switching module delivers full-duplex aggregate backplane switching performance of 512 Gbps; the module therefore supports 1.5:1 oversubscription at the 8-Gbps Fibre Channel (FC) rate across all ports. With arbitrated local switching, the 48-port 8-Gbps Advanced module delivers full-duplex aggregate performance of 768 Gbps across locally switched ports on the module. Figure 7-18 portrays the Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-18 Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module delivers line-rate performance across all ports and is ideal for high-end storage subsystems and for Inter-Switch Link (ISL) connectivity. The 32-port 8-Gbps Advanced switching module delivers full-duplex aggregate performance of 512 Gbps, making this module well suited for high-performance 8-Gbps storage subsystems. The modules can support 8-Gbps or 10-Gbps operation, or a combined (8- and 10-Gbps) Inter-Switch Link (ISL) configuration, and the ports are grouped in different orders depending on the 8-Gbps and 10-Gbps modes. The 10-Gbps ports can be used for long-haul DWDM connections. Figure 7-19 portrays the Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-19 Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
The Cisco MDS 9000 18/4-Port Multiservice Module (Figure 7-20) offers 18 1-, 2-, and 4-Gbps Fibre Channel ports and four 1-Gigabit Ethernet IP storage services ports in a single form factor. The Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is optimized for deployment of high-performance SAN extension solutions, distributed intelligent fabric services, and cost-effective IP storage and mainframe connectivity in enterprise storage networks. It provides multiprotocol capabilities, integrating Fibre Channel, Fibre Channel over IP (FCIP), Small Computer System Interface over IP (iSCSI), IBM Fibre Connection (FICON), FICON Control Unit Port (CUP) management, and switch cascading, as well as services such as Cisco Storage Services Enabler (SSE), Cisco MDS 9000 I/O Accelerator (IOA), Cisco MDS 9000 Extended Remote Copy (XRC) Acceleration, and Cisco Storage Media Encryption (SME).

Figure 7-20 Cisco MDS 9000 18/4-port Multiservice Module

Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
The Cisco MDS 9000 16-Port Storage Services Node provides a high-performance, flexible, unified platform for deploying enterprise-class disaster recovery, business continuance, and intelligent fabric applications. The Cisco MDS 9000 16-Port Storage Services Node hosts four independent service engines, which can each be individually and incrementally enabled to scale as business requirements change, or be configured to consolidate up to four fabric application instances to dramatically decrease hardware footprint and free valuable slots in the MDS 9500 Director chassis. The 16-Port Storage Services Node offers FCIP for remote SAN extension; Cisco IO Accelerator (IOA) to optimize the utilization of metropolitan- and wide-area network (MAN/WAN) resources for backup and replication using hardware-based compression, FC write acceleration, and FC tape read-and-write acceleration; and Cisco Storage Media Encryption (SME) to secure mission-critical data stored on heterogeneous disk, tape, and VTL drives. Figure 7-21 portrays the Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node (SSN) Module, and Table 7-8 lists the details of the Multiservice Module (MSM) and Storage Services Node (SSN).

Figure 7-21 Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node Module

Table 7-8 Details of Multiservice Module (MSM) and Storage Services Node (SSN)

Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
The Cisco MDS 9000 Series supports 10-Gbps Fibre Channel switching with the Cisco MDS 9000 4-Port 10-Gbps Fibre Channel switching module. This module supports hot-swappable X2 optical SC interfaces. Modules can be configured with either short-wavelength or long-wavelength X2 transceivers for connectivity up to 300 meters and 10 kilometers, respectively. Figure 7-22 portrays the MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module.

Figure 7-22 Cisco MDS 9000 4-port 10-Gbps Fibre Channel Switching Module

Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9500 Series of Multilayer Directors supports the 10-Gbps 8-port FCoE switching module with full line-rate FCoE connectivity, extending the benefits of FCoE beyond the access layer into the core of the data center network. The Cisco MDS 9000 10-Gbps FCoE module (see Figure 7-23) provides the industry's first multihop-capable FCoE module. Not only does the Cisco MDS 9000 10-Gbps 8-Port FCoE Module take advantage of migration of FCoE into the core layer, but it also extends enterprise-class Fibre Channel services to Cisco Nexus 7000 and 5000 Series Switches and FCoE initiators. This capability:

Allows FCoE initiators to access remote Fibre Channel resources connected through Fibre Channel over IP (FCIP) for enterprise backup solutions
Supports virtual SANs (VSANs) for resource separation and consolidation
Supports inter-VSAN routing (IVR) to use resources that may be segmented

Figure 7-23 Cisco MDS 9000 8-port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic-management attributes of Fibre Channel while preserving investments in Fibre Channel tools, training, and SANs. FCoE enables the preservation of Fibre Channel as the storage protocol in the data center while giving customers a viable solution for I/O consolidation.

Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
The Cisco MDS 9700 Series Fibre Channel Switching Module is hot swappable and compatible with 2/4/8-Gbps, 4/8/16-Gbps, and 10-Gbps FC interfaces. Individual ports can be configured with Cisco 16-Gbps, 8-Gbps, or 10-Gbps SFP+ transceivers, and the module supports hot-swappable Enhanced Small Form-Factor Pluggable (SFP+) transceivers. The MDS 9700 module also supports 10 Gigabit Ethernet clocked optics carrying 10-Gbps Fibre Channel traffic. Each port supports 500 buffer credits without requiring additional licensing; with the Cisco Enterprise Package, up to 4095 buffer credits can be allocated to an individual port, enabling full link bandwidth over long distances with no degradation in link utilization. The 16-Gbps Fibre Channel switching module continues to provide previously available features such as predictable performance and high availability, resilient high-performance ISLs, integrated VSANs and IVR, comprehensive security frameworks, advanced traffic-management capabilities, sophisticated diagnostics, and fault detection and isolation of error packets. Figure 7-24 portrays the Cisco MDS 9700 48-port 16-Gbps Fibre Channel Switching Module.

Figure 7-24 Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module

Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9700 48-Port 10-Gbps FCoE Module provides the scalability of 384 10-Gbps full line-rate autosensing ports in a single chassis. This module also supports connectivity to FCoE initiators and targets that send only FCoE traffic, thus providing a way to deploy a dedicated FCoE SAN without requiring any convergence. Figure 7-25 portrays the Cisco MDS 9700 48-Port 10-Gbps FCoE Module.

Figure 7-25 Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

Cisco MDS 9000 Family Pluggable Transceivers
The Cisco Small Form-Factor Pluggable (SFP), Enhanced Small Form-Factor Pluggable (SFP+), and X2 devices are hot-swappable transceivers that plug in to Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The Cisco MDS family storage modules offer a wide range of options for various SAN architectures. Table 7-9 lists all the hardware modules available at the time of writing this chapter, along with the chassis compatibility associated with them. The most up-to-date information on pluggable transceivers can be found at the following link: www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html.


Table 7-9 Hardware Modules and the Chassis Compatibility Associated with Them

Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9200 and MDS 9100 Series switches are fabric switches designed for deployment in small to medium-sized enterprise SANs. The Cisco MDS 9200 Series switches deliver state-of-the-art multiprotocol and distributed multiservice convergence, offering high-performance SAN extension and disaster-recovery solutions, intelligent fabric services, and cost-effective multiprotocol connectivity for both open systems and mainframe environments. With a compact form factor and advanced capabilities normally available only on director-class switches, the Cisco MDS 9200 Series switches provide an ideal solution for departmental and remote branch-office SANs. The Cisco MDS 9100 Series switches are cost-effective, easy-to-install, and highly configurable Fibre Channel switches that are ideal for small to medium-sized businesses. With high port densities, the Cisco MDS 9100 Series switches easily scale from an entry-level departmental switch to a top-of-the-rack switch, providing edge-switch connectivity to enterprise core SANs. Cisco MDS NX-OS provides services optimized for virtual machines and blade servers that let IT managers dynamically respond to changing business needs in virtual environments. The Cisco MDS 9200 and 9100 Series switches are FIPS compliant, as mandated by the U.S. federal government, and support IPv6.

Cisco MDS 9148
The MDS 9148 is a 1RU Fibre Channel switch with 48 line-rate 8-Gbps ports, also supporting 1/2/4-Gbps connectivity. The Cisco MDS 9148 can be used as the foundation for small, standalone SANs, as a top-of-rack switch, or as an edge switch in larger, core-edge SAN infrastructures. Customers have the option of purchasing preconfigured models of 16, 32, or 48 ports, and of upgrading the 16- and 32-port models onsite, all the way to 48 ports, by adding On-Demand Port Activation licenses. The Cisco MDS 9148 Multilayer Fabric Switch is designed to support quick configuration with zero-touch plug-and-play features and task wizards that allow it to be deployed quickly and easily in networks of any size. The following additional capabilities are included:

Ports per chassis: The On-Demand Port Activation license allows the addition of eight 8-Gbps ports.
Buffer credits: Each port group consisting of four ports has a pool of 128 buffer credits, with a default of 32 buffer credits per port. When extended distances are required, up to 125 buffer credits can be allocated to a single port within the port group. This extensibility is available without additional licensing.
Port channels: Port channels enable users to aggregate up to 16 physical Inter-Switch Links (ISLs) into a single logical bundle, providing load-balanced bandwidth use across all links. The bundle can consist of any port from the switch, helping ensure that the bundle remains active even in the event of a port group failure.

Figure 7-26 portrays the Cisco MDS 9148 switch.

Figure 7-26 Cisco MDS 9148

Cisco MDS 9148S
The Cisco MDS 9148S 16G Multilayer Fabric Switch (see Figure 7-27) is a compact one-rack-unit (1RU) switch that scales from 12 to 48 line-rate 16-Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports.

Figure 7-27 Cisco MDS 9148S

The Cisco MDS 9148S offers In-Service Software Upgrades (ISSU), which means that Cisco NX-OS Software can be upgraded while the Fibre Channel ports carry traffic, as well as port channels for Inter-Switch Link (ISL) resiliency and F-port channeling for resiliency on uplinks from a Cisco MDS 9148S operating in NPV mode. The Cisco MDS 9148S also supports Power On Auto Provisioning (POAP) to automate software image upgrades and configuration file installation on newly deployed switches. The Cisco MDS 9148S includes dual redundant hot-swappable power supplies and fan trays. Table 7-10 lists the chassis comparison between the MDS 9148 and MDS 9148S.

Table 7-10 Chassis Comparison Between MDS 9148 and MDS 9148S
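An ISSU is initiated with the install all command; the image file names below are hypothetical placeholders for the kickstart and system images of the target release:

    ! Nondisruptive upgrade: Fibre Channel ports keep forwarding during the install
    install all kickstart bootflash:m9100-s5ek9-kickstart-mz.6.2.9.bin system bootflash:m9100-s5ek9-mz.6.2.9.bin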

Cisco MDS 9222i
The Cisco MDS 9222i provides multilayer and multiprotocol functions in a compact three-rack-unit (3RU) form factor. It provides 18 fixed 4-Gbps Fibre Channel ports and 4 fixed Gigabit Ethernet ports for Fibre Channel over IP (FCIP) and Small Computer System Interface over IP (iSCSI) storage services. The Cisco MDS 9222i is designed to address the needs of medium-sized and large enterprises with a wide range of SAN capabilities. It can be used as a cost-effective SAN extension over IP switch for midrange, midsize customers in support of IT simplification, replication, and disaster-recovery and business-continuity solutions. It can also provide remote-site device aggregation and SAN extension connectivity to large data center directors. Figure 7-28 portrays the Cisco MDS 9222i switch.

Figure 7-28 Cisco MDS 9222i

The Cisco MDS 9222i supports the Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized Fibre Channel Switching Module in the expansion slot, with four of the Fibre Channel ports capable of running at 8 Gbps. Thus, the Cisco MDS 9222i scales up to 66 ports for both open systems and IBM Fiber Connection (FICON) mainframe environments. The Cisco MDS 9222i also supports a Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module in the expansion slot. The following additional capabilities are included:

High-performance ISLs: The Cisco MDS 9222i supports up to 16 Fibre Channel interswitch links (ISLs) in a single port channel. Links can span any port on any module in a chassis for added scalability and resilience. Up to 4095 buffer-to-buffer credits can be assigned to a single Fibre Channel port to extend storage networks over long distances.

Intelligent application services engines: The Cisco MDS 9222i includes as standard a single application services engine in the fixed slot that enables the included SAN Extension over IP software solution package to run on the four fixed 1 Gigabit Ethernet storage services ports. The Cisco MDS 9222i can add a second services engine when a Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is installed in the expansion slot. This additional engine can run a second advanced software application solution package, such as Cisco MDS 9000 I/O Accelerator (IOA), to further improve the performance and throughput of the SAN Extension over IP solution running on the standard services engine.

If more intelligent services applications need to be deployed, the Cisco MDS 9000 16-Port Storage Services Node (SSN) can be installed in the expansion slot to provide four additional services engines for simultaneously running application software solution packages such as Cisco Storage Media Encryption (SME). Overall, the Cisco MDS 9222i switch platform scales from a single application service solution in the standard offering to a maximum of five heterogeneous or homogeneous application services, depending on the choice of optional service module that is installed in the expansion slot.

Advanced FICON services: The Cisco MDS 9222i supports System z FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N-port ID virtualization (NPIV) for mainframe Linux partitions. IBM Control Unit Port (CUP) support enables in-band management of Cisco MDS 9200 Series switches from the mainframe management console. FICON tape acceleration reduces latency effects for FICON channel extension over FCIP for FICON tape read and write operations to mainframe physical or virtual tape. This feature is sometimes referred to as tape pipelining. The Cisco MDS 9000 XRC Acceleration feature enables acceleration of dynamic updates for IBM z/OS Global Mirror, formerly known as XRC.

In-Service Software Upgrade (ISSU) for Fibre Channel interfaces: The Cisco MDS 9222i provides high serviceability by allowing Cisco MDS 9000 NX-OS Software to be upgraded while the Fibre Channel ports are carrying traffic.

IPv6 support: IPv6 support is provided for FCIP, iSCSI, and management traffic routed in-band and out-of-band.

Cisco MDS 9250i
The Cisco MDS 9250i offers up to forty 16-Gbps Fibre Channel ports, two 1/10-Gigabit Ethernet IP storage services ports, and eight 10-Gigabit Ethernet FCoE ports in a fixed two-rack-unit (2RU) form factor. The Cisco MDS 9250i connects to existing native Fibre Channel networks. Using the eight 10-Gigabit Ethernet FCoE ports, the Cisco MDS 9250i platform attaches to directly connected FCoE and Fibre Channel storage devices and supports multitier unified network fabric connectivity directly over FCoE. The Cisco SAN Extension over IP application package license is enabled as standard on the two fixed 1/10 Gigabit Ethernet IP storage services ports, enabling features such as Fibre Channel over IP (FCIP) and compression on the switch without the need for additional licenses. Figure 7-29 portrays the Cisco MDS 9250i two-rack-unit switch, and Table 7-11 lists the technical details of the Cisco MDS 9200 and 9100 Series models.

Figure 7-29 Cisco MDS 9250i .

.

Links can span any port on any module in a chassis. Cisco Data Mobility Manager (DMM) as a distributed fabric service: Cisco DMM is a fabric-based data migration solution that transfers block data nondisruptively across heterogeneous storage volumes and across distances. 2 ports of 10-Gigabit Ethernet for FCIP and iSCSI storage services. It also supports IBM CUP. Cisco MDS 9250i supports In Service Software Upgrade (ISSU) for Fibre Channel interfaces. VSAN-enabled intermix of mainframe and open systems environments. Platform for intelligent fabric applications: The Cisco MDS 9250i provides an open platform that delivers the intelligence and advanced features required for multilayer intelligent SANs. FIPS compliance: The Cisco MDS 9250i is FIPS 140-2 compliant and it is IPv6 capable. and 8 ports of 10 Gigabit Ethernet for FCoE connectivity. High-performance ISLs: Cisco MDS 9250i supports up to 16 Fibre Channel ISLs in a single port channel. Up to 256 buffer-to-buffer credits can be assigned to a single Fibre Channel port.Table 7-11 Summary of the Cisco MDS 9200 and 9100 Series Models The following additional capabilities are included: SAN consolidation with integrated multiprotocol support: The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through On-Demand Port Activation license) of 16-Gbps Fibre Channel. including cascaded FICON fabrics. FICON tape acceleration. and IBM Extended Remote Copy (XRC) features like the MDS 9222i. whether the host is online or offline. Advanced FICON services: The Cisco MDS 9250i supports FICON environments. Intelligent application services engine: The Cisco MDS 9250i includes as standard a single application services engine that enables the included Cisco SAN Extension over IP software solution package to run on the two fixed 1/10 Gigabit Ethernet storage services ports. Cisco MDS Fibre Channel Blade Switches .

BladeCenter T. In the 24-port model. providing transparent. BladeCenter HT. and 1 Gbps. The management of the module is integrated with the HP OpenView Management Suite. There is a 10-port upgrade license to upgrade from 10 ports to 20 ports. The Cisco MDS Fibre Channel Blade Switch functions are equivalent to the Cisco MDS 9124 switch. Figure 7-30 shows the Cisco MDS Fibre Channel Blade Switch for HP and IBM. In the 12-port model. 14-ports are for internal connections to the server and 6-ports are external. and IBM BladeCenter S (10-port model only). 2. The Cisco MDS Fibre Channel Blade Switch is capable of speeds of 4. In the 20-port model. and MSIM (20-port model only). end-to-end service delivery in core-edge deployments. in the form factor to fit IBM BladeCenter (BladeCenter. 8 ports are for internal connections to the server. In the 10-port model. the Cisco MDS Fibre Channel Blade Switch offers the densities required from entry level to advance. Figure 7-30 Cisco MDS Fibre Channel Blade Switches The Cisco MDS Fibre Channel Blade Switch for IBM BladeCenter has two models: the 10-port switch and the 20-port switch. blade switches share the same design as the Cisco MDS 9000 family fabric switches. The Cisco MDS Fibre Channel Blade Switch for HP c-Class BladeSystem has two models: the base 12-port model and the 24-port model. 7 ports are for internal connections to the server and 3 ports are external. There is a 12-port license upgrade to upgrade the 12-port model from 12 ports to 24 ports. There is also the MDS 8Gbps Fibre Channel Blade Switch for HP cClass BladeSystem. The switch modules have the following features: . 16 ports are internal for server connections. It is running Cisco MDS 9000 NX-OS Software. and 4 ports are external or storage area network (SAN) facing. With its flexibility to expand ports in increments. BladeCenter H. The management of the module is integrated with the IBM Management Suite. Supported IBM BladeCenter chassis are IBM BladeCenter E.Cisco Fibre Channel Blade switches are built using the same hardware and software platform as Cisco MDS 9000 family Fibre Channel switches and hence are consistent in features and performance. it includes advanced storage networking features and functions and is compatible with Cisco MDS 9500 Series Multilayer Directors and Cisco MDS 9200 Series Multilayer Fabric Switches. Architecturally. and 8 ports are external or SAN facing. BladeCenter T. Cisco offers Fibre Channel blade switch solutions for HP c-Class BladeSystem and IBM BladeCenter. and BladeCenter H) and HP c-Class BladeSystem (see Figure 7-30).

. which is composed of Nexus and MDS 9000 switches. large-scale. It provides visibility and control of the unified data center so that you can optimize for the quality of service (QoS) required to meet service-level agreements. Cisco Prime DCNM 7. FL_ports (fabric loop ports). Figure 7-31 Cisco MDS 8-Gbps Fibre Channel Blade Switch Cisco Prime Data Center Network Manager Cisco Prime Data Center Network Manager (DCNM) is a management system for the Cisco Data Center Unified Fabric. It enables you to provision. Cisco Prime DCNM 7. External ports can be configured as F_ports (fabric ports). Internal ports are configured as F_ports at 2 Gbps or 4 Gbps. and troubleshoot the data center network infrastructure. or 1 Gbps. computing. Figure 7-31 shows the Cisco MDS Fibre Channel Blade Switch for HP.0 (at the time of writing this chapter) includes the infrastructure necessary to operate and automate a Cisco DFA network fabric.Six external autosensing Fibre Channel ports that operate at 4.0 integrates with external hypervisor solutions and third-party applications using a Representational State Transfer (REST) API. extensible management capabilities for Cisco Dynamic Fabric Automation (DFA). standards-based. Internal autosensing Fibre Channel ports that operate at 4 or 2 Gbps. and storage resources. Figure 7-32 illustrates Cisco Prime DCNM as central point of integration for NX-OS Platforms with visibility across networking. 2. or E_ports (expansion ports). monitor. DCNM software provides ready-to-use.0 policy-based deployment helps reduce labor and operating expenses (OpEx). Two internal full-duplex 100 Mbps Ethernet interfaces. Power-on diagnostics and status reporting. Cisco Prime DCNM 7.

More Cisco DCNM servers can be added to a cluster. adds. and storage administration needs of data centers. Cisco Prime DCNM product for Cisco DCNM Release 5. Cisco DCNM also supports the installation of the Cisco DCNM for SAN and Cisco DCNM for LAN components with a single installer. This advanced management product Provides self-service provisioning of intelligent and scalable fabrics. It provides a robust framework and comprehensive feature set that meets the routing.2 is retitled with the name Cisco DCNM for LAN. and LDAP. and Cisco Unified Computing System products. The VMware vCenter plug-in brings Cisco Prime DCNM computing dashboard into VMware vCenter for dependency mapping. Simplifies operational management of virtualized data centers. or as a virtual service blade on Nexus 1010 or 1100. RADIUS. switching. Cisco DCNM authenticates administration through the following protocols: TACACS+. . Cisco MDS. Centralizes fabric management to facilitate resource moves. Eases diagnosis and troubleshooting of data center outages. and events. Proactively monitors the SAN and LAN. Opens northbound application programming interfaces (API) for Cisco and third-party management and orchestration platforms. inventory. Tracks port utilization by tier. Cisco DCNM provides a high level of visibility and control through a single web-based management console for Cisco Nexus. and changes. This management solution can be deployed in Windows and Linux. and through trending capacity manager predicts when an individual tier will be consumed.Figure 7-32 Cisco Prime DCNM as Central Point of Integration Cisco Prime DCNM provisions and optimizes the overall uptime and reliability of the data center fabric. performance.2 is retitled with the name Cisco DCNM for SAN. VMpath analysis provides the capability to view performance for every switch hop all the way to the individual VMware ESX server and virtual machine. Cisco Fabric Manager product for Cisco DCNM Release 5. and detects performance degradation. configuration.

6.0) or Windows 2008 R2 SP1. This facilitates managing redundant fabrics. The Cisco DCNM-SAN Clients can also connect directly to an MDS switch in fabrics that are not monitored by a Cisco DCNM-SAN Server. Cisco DCNM-SAN Server has the following features: Multiple fabric management: DCNM-SAN Server monitors multiple physical fabrics under the same user interface. Performance Manager. Network Monitoring. Roaming user profiles: The licensed DCNM-SAN Server uses the roaming user profile feature .7. A licensed DCNM-SAN Server maintains up-to-date discovery information on all configured fabrics so device status and interconnections are immediately available when you open the DCNM-SAN Client. Red Hat Enterprise Linux 5. SNMP operations are used to collect fabric information. Device Manager.4 (Cisco Prime DCNM 6. Up to 16 clients (by default) can connect to a single Cisco DCNM-SAN Server concurrently. 5. The Cisco DCNM-SAN software. Authentication in DCNM-SAN Client.3 and 6. including the server components. Cisco DCNMSAN Server runs on VMware ESXi (Cisco Prime DCNM 7. Each computer configured as a Cisco DCNM-SAN Server can monitor multiple Fibre Channel SAN fabrics. requires about 100 GB of hard disk space. Windows 2008 Standard SP2. troubleshooting. so any events that occurred since the last time you opened the DCNM-SAN Client are captured. DCNM-SAN Client. DCNM-SAN Web Client.3). Continuous health monitoring: MDS health is monitored continuously. which ensures that you can manage any of your MDS devices from a single console. Performance Monitoring.Cisco DCNM SAN Fundamentals Overview Cisco Prime DCNM SAN has the following fundamental components: DCNM-SAN Server. Figure 7-33 Cisco Prime DCNM Managing Virtualized Topology DCNM-SAN Server DCNM-SAN Server is a platform for advanced MDS monitoring. DCNM-SAN Server provides centralized MDS management services and performance monitoring. 6. Cisco Traffic Analyzer. and configuration capabilities. Figure 7-33 shows how Cisco Prime DCNM manages highly virtualized data centers.

Performance Manager drill-down reports: Performance Manager can analyze daily. trial versions of these licensed features are available. you run the feature as if you had purchased the license. You can also view the results for specific time intervals using the interactive zooming functionality. and continuously monitored fabrics. and inventory from a remote location using a web browser. a Device Manager dialog box shows values for a single switch. A full two days of raw samples are saved for maximum resolution. However. Performance Manager summary reports: The Performance Manager summary report provides a high-level view of the network performance. A key management capability is network performance monitoring. DCNM-SAN Web Client With DCNM-SAN Web Client you can monitor Cisco MDS switch events.to store your preferences and topology map layouts on the server. so that your user interface will be consistent regardless of what computer you use to manage your storage networks. Device Manager Device Manager provides a graphical representation of a Cisco MDS 9000 family switch chassis. Both tabular and graphical reports are available for all interconnections monitored by Performance Manager. performance. the basic unlicensed version of Cisco DCNM-SAN Server is installed with it. The tables in the DCNM-SAN information pane basically correspond to the dialog boxes that appear in Device Manager. Performance Manager gathers network device statistics historically . such as Performance Manager. Gradually the resolution is reduced as groups of the oldest samples are rolled up together. weekly. you need to buy and install the Cisco DCNM-SAN Server package. Zero maintenance databases for statistics storage: No maintenance is required to maintain Performance Manager’s round-robin database. However. the power supplies. Cisco Nexus 5000 Series switch chassis. or Cisco Nexus 7000 Series switch chassis (FCoE only) along with the installed switching modules. the status of each port within each module. Licensing Requirements for Cisco DCNM-SAN Server When you install DCNM-SAN. monthly. and yearly trends. You see a dialog box explaining that this is a demo version of the feature and that it is enabled for a limited time. To get the licensed features. At prescribed intervals the oldest samples are averaged (rolled-up) and saved. These reports list the average and peak throughput and provide hot links to additional performance graphs and tables with additional statistics. remote client support. because its size does not increase over time. the supervisor modules. and the fan assemblies. although DCNM-SAN tables show values for one or more switches. Performance Manager The primary purpose of DCNM-SAN is to manage the network. To enable the trial version of a feature. These reports are available only if you create a collection using Performance Manager and start the collector. Device Manager also provides more detailed information for verifying or troubleshooting device-specific configuration than DCNM-SAN.

and configured flows. Cisco Traffic Analyzer monitors round-trip response times. and information viewing capabilities. the switch creates a temporary SNMP username that is used by DCNM-SAN Client and DCNM-SAN Server. and provides inventory and configuration information in multiple viewing options such as fabric view. which is based on ntop. SCSI read or traffic throughput and frame counts. Flows are defined based on a host-to-storage (or storage-to-host) link. Fibre Channel Generic . DCNM-SAN re-creates a fabric topology. topology mapping. a SAN discovery process begins. After DCNM-SAN is invoked. Additional statistics are also available on Fibre Channel frame sizes and network management protocols. Based on this configuration. Performance Manager can collect statistics for ISLs. and management task information. If the switch in either the local switch authentication database or through a remote AAA server recognizes the username and password. These files determine which SAN elements and SAN links Performance Manager gathers statistics for. including Name Server registrations. a public domain software enhanced by Cisco for Fibre Channel traffic analysis. hosts. Performance Manager also integrates with external tools such as Cisco Traffic Analyzer.and provides this information graphically using a web browser. either DCNM-SAN Client or DCNM-SAN Server opens a CLI session to the switch (SSH or Telnet) and retries the user name and password pair. hosts. DCNM-SAN collects information on the fabric topology through SNMP queries to the switches connected to it. Collection: The Web Server Performance Collection screen collects information on desired fabrics. Performance Manager has three operational stages: Definition: The Flow Wizard sets up flows in the switches. If this username and password are not a recognized SNMP username and password. Performance Manager communicates with the appropriate devices (switches. SCSI session status. summary view. Using information polled from a seed Cisco MDS 9000 family switch. storage elements. Performance Manager presents recent statistics in detail and older statistics in summary. and operation view. Traffic encapsulated by one or more Port Analyzer Adapter products can be analyzed concurrently with a single workstation running Cisco Traffic Analyzer. SCSI I/Os per second. The username and password are passed to DCNM-SAN Server and are used to authenticate to the seed switch. presents it in a customizable map. Presentation: Generates web pages to present the collected data through DCNM-SAN Web Server. or storage elements) and collects the appropriate information at fixed five-minute intervals. Network Monitoring DCNM-SAN provides extensive SAN discovery. Performance Manager gathers statistics from across the fabric based on collection configuration files. device view. Authentication in DCNM-SAN Client Administrators launch DCNM-SAN Client and select the seed switch that is used to discover the fabric. Cisco Traffic Analyzer Cisco Traffic Analyzer provides real-time analysis of SPAN traffic or analysis of captured traffic through a web browser user interface.

com/c/en/us/products/storage-networking/index. Real-time statistics gather data on parts of the fabric in user-configurable intervals and display these results in DCNM-SAN and Device Manager.com/c/en/us/products/collateral/storage-networking/mds-9000-seriesmultilayer-switches/product_data_sheet09186a00801bc698. You can set the polling interval from ten seconds to one hour.cisco. The device information discovered includes device names. class 2 traffic.com/go/dcnm More information on the Cisco I/O Acceleration: http://www. and SAN links. you can monitor any of a number of statistics.com/c/en/us/products/storagenetworking/mds-9700-series-multilayer-directors/datasheet-listing. host bus adapters (HBAs). Fabric Shortest Path First (FSPF). Performance Monitoring DCNM-SAN and Device Manager provide multiple tools for monitoring the performance of the overall fabric. and VSANs. vendor. and FICON data. software revision levels.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10531/data_sheet_c78538860.cisco.com/en/US/products/ps6028/index. and storage devices are discovered. including traffic in and out. These tools provide real-time statistics as well as historical performance monitoring. For a selected port. errors. and host operating-system type and version discovery without host agents.cisco. The Cisco MDS 9000 family switches use Fabric-Device Management Interface (FDMI) to retrieve the HBA model.html Cisco MDS 9700 Data Sheets: http://www.cisco. Real-time performance statistics are a useful tool in dynamic troubleshooting and fault isolation within the fabric.cisco. port channels. All available switches.cisco. value per second. serial number and firmware version.html More information on the Cisco MDS 9000 Family intelligent fabric applications: http://www. DCNMSAN gathers this information through SNMP queries to each switch. and SCSI-3.html More information on the Data Mobility Manager (DMM): http://www. ISLs. This tool gathers statistics at a configurable interval and displays the results in tables or charts.cisco. and minimum or maximum value per second.Services (FC-GS).html . DCNM-SAN automatically discovers all devices and interconnects on one or more fabrics.pdf Cisco MDS 9000 Family Pluggable Transceivers Data Sheet: http://www. SAN elements.cisco.com/c/en/us/products/collateral/storagenetworking/mds-9500-series-multilayer-directors/data_sheet_c78_605104. and display the results based on a number of selectable options including absolute value.com/en/US/prod/collateral/ps4159/ps6409/ps6028/product_data_sheet0900ae More information on the Cisco Prime DCNM: http://www.html Cisco MDS 9500 Data Sheets: http://www. Reference List The landing page for Cisco SAN portfolio is available at http://www. These statistics show the performance of the selected port in real-time and can be used for performance monitoring and troubleshooting. Device Manager provides an easy tool for monitoring ports on the Cisco MDS 9000 family switches.

noted with the key topics icon in the outer margin of the page.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10526/data_sheet_c78538834.cisco. Table 7-12 lists a reference of these key topics and the page numbers on which each is found.More information on the XRC: http://www. Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter.html. .

and complete the tables and lists from memory. “Memory Tables” (found on the disc). includes completed tables and lists to check your work. “Memory Tables Answer Key. and check your answers in the glossary: storage-area network (SAN) Fibre Channel over Ethernet (FCoE) Data Center Network Manager (DCNM) Internet Small Computer System Interface (iSCSI) Fibre Connection (FICON) Fibre Channel over IP (FCIP) buffer credits crossbar arbiter virtual output queuing (VOQ) virtual SAN (VSAN) zoning N-port virtualization (NPV) N-port identifier virtualization (NPIV) Data Mobility Manager (DMM) I/O Accelerator (IOA) Dynamic Fabric Automation (DFA) multiprotocol support XRC Acceleration inter-VSAN routing (IVR) head-of-line blocking .Table 7-12 Key Topics for Chapter 7 Complete Tables and Lists from Memory Print a copy of Appendix B. Appendix C. or at least the section for this chapter. Define Key Terms Define the following key terms from this chapter.” also on the disc.

read the entire chapter. Devices. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. Network virtualization: Using network resources through a logical segmentation of a single physical network. Virtualizing Storage This chapter covers the following exam topics: Describe Storage Virtualization Describe LUN Storage Virtualization Describe Redundant Array of Inexpensive Disk (RAID) Describe Virtualizing SANs (VSAN) Describe Where the Storage Virtualization Occurs In computing. “Answers to the ‘Do I Know This Already?’ Quizzes. and human users are able to interact with the virtual resource as if it were a real single logical resource. Application virtualization: Decoupling the application and its data from the operating system. including the following: Storage virtualization: Combining multiple network storage devices into what appears to be a single storage unit. why. virtualization means to create a virtual version of a device or resource.Chapter 8. If you . and how questions. Server virtualization: Partitioning a physical server into smaller virtual servers. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. Table 8-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. A number of computing technologies benefit from virtualization. storage device.” Table 8-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. applications. or network where the framework divides the resource into one or more execution environments. You can find the answers in Appendix A. This chapter guides you through storage virtualization concepts answering basic what. such as a server. This chapter goes directly into the key concepts of storage virtualization and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

INCITS (International Committee for Information Technology Standards) b. b.) a. The host b. SNIA (Storage Networking Industry Association) d. Disks b. Achieving cost-effective business continuity and data protection 3. Storage virtualization has no standard measure defined by a reputable organization. It comprises block-level striping with distributed parity. In the SAN fabric d.) a. b. . Which of the following options explains RAID 6? a. IETF (Internet Engineering Task Force) c. It comprises block-level striping with double distributed parity. Switches 4. It comprises block-level striping with distributed parity. DNS 5. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. Which of the following options can be virtualized according to SNIA storage virtualization taxonomy? (Choose all that apply. It is an exact copy (or mirror) of a set of data on two disks. Which of the following options are reasons to use storage virtualization? (Choose three.do not know the answer to a question or are only partially sure of the answer. Where does the storage virtualization occur? (Choose three. 6. It creates a striped set from a series of mirrored drives. Which of the following options explains RAID 1+0? a. 2. Exponential data growth and disruptive storage upgrades c. c. you should mark that question as wrong for purposes of the self-assessment. 1. Which of the following organizations defines the storage virtualization standards? a. It is an exact copy (or mirror) of a set of data on two disks. The storage array c.) a. Blocks c. File systems e. The control plane e. Virtualization trends d. d. Power and cooling consumption in the data center b. Tape systems d. Data protection regulations e.

I. Foundation Topics . VI d. a. b. An acknowledgement is sent to the primary storage array by a secondary storage array after the data is stored on the secondary storage array. It is a feature of storage arrays. I. e. It is licensed and managed per host. Primary storage array maintains new data queued to be copied to the secondary storage array at a later time. II c. It should be configured on HBAs. The LVM is a collection of ports from a set of connected Fibre Channel switches. c. It provides basic LUN-level security by allowing LUNs to be seen by selected servers. II. III. It is close to the file system.) a. primary storage array does not require a confirmation from the secondary storage array to acknowledge the write operation to the server. Asynchronous mirroring operation was not explained correctly. b. II. Which of the following options are the advantages of host-based storage virtualization? (Choose three. IV. III. c. d. III. 7. It creates a striped set from a series of mirrored drives. LVM combines multiple hard disk drive components into a logical unit to provide data redundancy. III. Write operation to the primary array is acknowledged locally. It is independent of SAN transport. d. 8. Write operation is received by primary storage array from the host. I. e. c. d. Which of the following options explains the LUN masking? (Choose two.) a. The LVM manipulates LUN representation to create virtualized storage to the file system. IV e. It comprises block-level striping with double distributed parity. LVMs can be used to divide large physical disk arrays into more manageable virtual disks. d. IV. b. It is a proprietary technique that Cisco MDS switches offer.) a. Select the correct order of operations for completing asynchronous array-based replication. III. It uses the array controller’s CPU cycles. II. 9. IV b. Which of the following options explains the Logical Volume Manager (LVM)? (Choose two.c. It uses the operating system’s built-in tools. It is a feature of Fibre Channel HBA. It initiates a write to the secondary storage array. I. I. 10. II.

virtualization began to move into storage and some disk subsystems. Such mapping information. According to the SNIA dictionary. The closest vendor-neutral attempt to make storage virtualization concepts comprehensible has been the work of the Storage Networking Industry Association (SNIA). The first virtual tape systems appeared in 1997 for mainframe systems. or isolating the internal function of a storage (sub) system or service from applications. When a host requests I/O. application-aware storage services through orchestration of the underlining storage infrastructure in support of an overall software defined environment. This virtual pool can then be more easily managed and provisioned as needed.What Is Storage Virtualization? Storage virtualization is a way to logically combine storage capacity and resources from various heterogeneous. Unlike previous new protocols or architectures. The late 1970s witnessed the introduction of the first solid-state disk. One decade later. In networking. Much of the pioneering work in virtualization began on mainframes and has since moved into the non-mainframe. The storage virtualization layer creates a mapping from the logical storage address space (used by the hosts) to the physical address of the storage device. The term Software Defined Storage is a follow up of the technology trend Software Defined Networking. which has produced useful tutorial content on the various flavors of virtualization technology. or general network resources for the purpose of enabling application and network independent management of storage or data. The beginning of virtualization goes a long way back. Virtual memory operating systems evolved during the 1960s. is stored in huge mapping tables. similar to server virtualization. How Storage Virtualization Works Storage virtualization works through mapping. In 2005. Virtualization achieved a new level when the first Redundant Array of Inexpensive Disk (RAID) virtual disk array was announced in 1992 for mainframe systems. SDS delivers automated. which was a helical-scan cartridge that looked like a disk. SDS refers to the abstraction of the physical elements. also referred to as metadata. A single set of tools and processes performs everything from online any-to-any data migrations to heterogeneous replication. SDS represents a new evolution for the storage industry for how storage will be managed and deployed in the future. making abstraction and virtualization more difficult to manage in complex virtual environments. Software Defined Storage (SDS) has been proposed in 2013 as a new category of storage software products. virtual tape was the most popular storage initiative in many storage management pools. which was first used to describe an approach in network technology that abstracts various elements of networking and creates an abstraction or virtualized layer in software. In 2003. increasing the demand for virtual tape. policy-driven. Virtualization gained momentum in the late 1990s as a result of virtualizing SANs and storage networks in an effort to tie together all the computing platforms from the 1980s. virtual tape architectures moved away from the mainframe and entered new markets. . the virtualization layer looks at the logical address in that I/O request and. which was a box full of DRAM chips that appeared to be rotating magnetic disks. The original mass storage device used a tape cartridge. 
storage virtualization has no standard measure defined by a reputable organization such as INCITS (International Committee for Information Technology Standards) or the IETF (Internet Engineering Task Force). hiding. the control plane and the data plane have been intertwined within the traditional switches that are deployed today. storage virtualization is the act of abstracting. open-system world. however. computer servers. external storage systems into one virtual pool of storage.

The major economic driver is to reduce costs without sacrificing data integrity or performance. use IT resources more efficiently. and become more agile. flexible service-based infrastructure to reflect ongoing changes in business requirements. The top seven reasons to use storage virtualizations are Exponential data growth and disruptive storage upgrades Low utilization of existing assets . The application is unaware of the mapping that happens beneath the covers. and space. ensure business continuity. In addition. Figure 8-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage Why Storage Virtualization? The business drivers for storage virtualization are much the same as those for server virtualization. translates the logical address into the address of a physical storage device.using the mapping table. The virtualization layer then performs I/O with the underlying storage device using the physical address. cooling. CIOs and IT managers must cope with shrinking IT budgets and growing client demands. They must simultaneously improve asset utilization. Figure 8-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices. Organizations can use the technology to resolve the issue and create a more adaptive. they are faced with ever-increasing constraints on power. and when it receives the data back from the physical device. it returns that data to the application as if it had come from the logical address.

tape systems. How the virtualization occurs may be via inband or out-of-band separation of control and data paths. Storage virtualization provides the means to build higher-level storage services that mask the complexity of all underlying components and enable automation of data storage operations. Where virtualization occurs may be on the host.Growing management complexity with flat or decreasing budgets for IT staff head count Increasing hard costs and environmental costs to acquire. and manage storage Ensuring rapid. high-quality storage service delivery to application and business owners Achieving cost-effective business continuity and data protection Power and cooling consumption in the data center What Is Being Virtualized? The Storage Networking Industry Association (SNIA) taxonomy for storage virtualization is divided into three basic categories: what is being virtualized. where the virtualization occurs. in storage arrays. file systems. and file or record virtualization. and how it is implemented. run. . Figure 8-2 SNIA Storage Virtualization Taxonomy Separates Objects of Virtualization from Location and Means of Execution What is being virtualized may include blocks. or in the network via intelligent fabric switches or SAN-attached appliances. This is illustrated in Figure 8-2. disks.

these alternatives are referred to as host-based. network-based. within SAN switches. It has the opportunity to work as a bridge. The out-of-band method provides separate paths for data and control. In the in-band method the appliance acts as an I/O target to the hosts and acts as an I/O initiator to the storage. Originally. The in-band method places the virtualization engine directly in the data path. In common usage. This capability enables end users to preserve investment in storage resources while benefiting from the latest host connection mechanisms such as Fibre Channel over Ethernet (FCoE). it can present an iSCSI target to the host while using standard FC-attached arrays for the storage. binding multiple levels of technologies on a foundation of logical abstraction. within the storage network in the form of a virtualization appliance. respectively. The abstraction layer that masks physical from logical storage may reside on host systems such as servers. or array-based virtualization. it was defined as Redundant Arrays of Inexpensive Disks (RAID) on a paper . In-band and out-of-band virtualization techniques are sometimes referred to as symmetrical and asymmetrical. or on a storage array or tape subsystem targets. This can be achieved by a layered approach. Block Virtualization—RAID RAID is another data storage virtualization technology that combines multiple hard disk drive components into a logical unit to provide data redundancy and improve performance or capacity. presenting an image of virtual storage to the host by one link and allowing the host to directly retrieve data blocks from physical storage on another. in 1988. so that both block data and the control information that govern its virtual appearance transit the same link. The ultimate goal of storage virtualization should be to simplify storage administration. vendors have different methods for implementing virtualized storage transport. For example. Figure 8-3 illustrates all components of an intelligent SAN. Figure 8-3 Possible Alternatives Where the Storage Virtualization May Occur In addition to differences between where storage virtualization is located.The most important reasons for storage virtualization were covered in the “Why Storage Virtualization?” section.

several nested levels and many nonstandard levels (mostly proprietary). especially for high- . RAID 0 is normally used to increase performance. Double parity provides fault tolerance up to two failed drives. and RAID 6 (dual parity). RAID 1 and variants (mirroring). Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes. RAID has been implemented in adapters/controllers for many years. but many variations have evolved—notably. it can be used on multiple levels recursively. performance and capacity. Many RAID levels employ an error protection scheme called “parity. RAID 5 (distributed parity). RAID is a staple in most storage products. Now it is commonly used as Redundant Array of Independent disks (RAID). Gibson. and Randy Katz. if a 500 GB disk is mirrored together with 400 GB disk. Upon failure of a single drive. A classic RAID 1 mirrored pair contains two disks. For example. The most common RAID levels that are used today are RAID 0 (striping). Most of the RAID levels use simple XOR. Originally. Data is distributed across the hard disk drives in one of several ways. if an 800 GB disk is striped together with a 500 GB disk. and virtually any processor along the I/O path. host bus adapters. Each scheme provides a different balance between the key goals: reliability and availability. from the University of California. without parity information. depending on the specific level of redundancy and performance required. Disk subsystems have been the most popular product for RAID implementations and probably will continue to be for many years to come. Because RAID works with storage address spaces and not necessarily storage devices. It can be found in disk subsystems. Host volume management software typically has several RAID variations and. device driver software. It provides no data redundancy. referred to as RAID levels. A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped). RAID 0. system motherboards. Patterson. the size of the array will be 1 TB (500 GB × 2). A RAID 6 comprises block-level striping with double distributed parity. For example. offers many sophisticated configuration options. subsequent reads can be calculated from the distributed parity so that no data is lost. Such an array can only be as big as the smallest member disk. The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the common RAID Disk Drive Format (DDF) standard. Today. as well as whole disk failure. This is useful when read performance or reliability is more important than data storage capacity. This makes larger RAID groups more practical. The different schemes or architectures are named by the word RAID followed by a number (for example. A RAID 1 is an exact copy (or mirror) of a set of data on two disks. RAID 1). RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors. There is an enormous range of capabilities offered in disk subsystems across a wide range of prices. in some cases. but the storage space added to the array by each disk is limited to the size of the smallest disk. It requires that all drives but one be present to operate. although it can also be used as a way to create a large logical disk out of two or more physical ones. the size of the array will be 400 GB.written by David A. RAID 5 requires a minimum of three disks. 
volume management software. A RAID 5 comprises block-level striping with distributed parity.” a widely used method in information technology to provide fault tolerance in a given set of data. there were five RAID levels. A RAID 0 can be created with disks of different sizes. starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. Gary A. but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois Field or Reed–Solomon error correction that we will not be covering in this chapter.

Many storage controllers allow RAID levels to be nested (hybrid). because large-capacity drives take longer to restore. The final array of the RAID level is known as the top array. the more important it becomes to choose RAID 6 instead of RAID 5. it is possible to mitigate most of the problems associated with RAID 5. As with RAID 5. The array continues to operate with one or more drives failed in the same mirror set.availability systems. and storage arrays are rarely nested more than one level deep. . RAID 1+0 creates a striped set from a series of mirrored drives. which yields RAID 10 and RAID 50. The array can sustain multiple drive losses as long as no mirror loses all its drives. a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. the data on the RAID system is lost. With a RAID 6 array. most vendors omit the “+”. but if drives fail on both sides of the mirror. respectively. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0). Table 8-2 lists the different types of RAID levels. using drives from multiple sources and manufacturers. The larger the drive capacities and the larger the array size. Most common examples of nested RAIDs are the following: RAID 0+1 creates a second striped set to mirror a primary striped set.

.

.

.

You can have multiple layers of virtualization or mapping by using the output of one layer as the input for a higher layer of virtualization. LUNs are central to the management of block storage arrays shared over a storage-area network (SAN). The back-end resource refers to a LUN that is not presented to the computer or host system for direct use. iSCSI. A Logical Unit Number (LUN) is a unique identifier that designates individual hard disk devices or grouped devices for address by a protocol associated with a SCSI. which manages the process of mapping that logical space to the physical location. The user is presented with a logical space for data storage by the virtualization system. How the mapping . The front-end LUN or volume is presented to the computer or host system for use. Virtualization maps the relationships between the back-end resources and front-end resources. Fibre Channel (FC). or similar interface.Table 8-2 Overview of Standard RAID Levels Virtualizing Logical Unit Numbers Virtualization of storage helps to provide location independence by abstracting the physical location of the data.

Each storage array vendor has its own management and proprietary techniques for LUN masking in the array. a feature of Fibre Channel HBAs. allows the administrator to selectively map some LUNs that have been discovered by the HBA. . a proprietary technique that Cisco MDS switches offer. In a block-based storage environment. this mapping is a large management task. LUN zoning can be used instead of. LUN masking in heterogeneous environments or where Just a Bunch of Disks (JBOD) are installed. LUN management becomes more difficult. In most SAN environments. leading to potential loss of data or security. as shown in Figure 8-13: LUN masking: LUN masking. Typically. one block of information is addressed by using a LUN and an offset within that LUN. LUN mapping must be configured on every HBA. Otherwise. LUN zoning: LUN zoning. LUN mapping: LUN mapping.is performed depends on the implementation. In a heterogeneous environment with arrays from different vendors. There are three ways to prevent this multiple access. They then perform LUN management in the array (LUN masking) or in the network (LUN zoning). In a large SAN. known as a Logical Block Address (LBA). each individual LUN must be discovered by only one server host bus adapter (HBA). provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by their port worldwide name (pWWN). allows LUNs to be selectively zoned to their appropriate host port. or in combination with. Most administrators configure the HBA to automatically map all LUNs that the HBA discovers. the same volume will be accessed by more than one file system. a feature of enterprise storage arrays. one physical disk is broken down into smaller subsets in multiple megabytes or gigabytes of disk space.

LUN Mapping. In most LUN virtualization deployments. More storage systems. More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion). disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited because there is no guarantee of what resides on the areas removed). a virtualizer element is positioned between a host and its associated target disk array. and the virtual storage space will scale up by the same amount. Thin-provisioning is a LUN virtualization feature that helps reduce the waste of block storage resources. The physical storage resources are aggregated into storage pools from which the logical storage is created. Similarly. Utilizing vLUNs disk expansion and shrinking can be managed easily. only blocks that are present in the vLUN are saved on the storage pool. and LUN Zoning on a Storage Area Network (SAN) Note JBODs do not have a management function or controller and so do not support LUN masking.Figure 8-13 LUN Masking. This feature brings the concept of oversubscription to data storage. Using this technique. Each storage vendor has different LUN virtualization solutions and techniques. can be added as and when needed. which may be heterogeneous in nature. consequently demanding special attention to the ratio between “declared” and actually used resources. This virtualizer generates a virtual LUN (vLUN) that proxies server I/O operations while hiding specialized data block processes that are occurring in the pool of disk arrays. although a server can detect a vLUN with a “full” size. This process is fully transparent to the applications using the storage infrastructure. In this .

Each VSAN is a separate self-contained fabric using distinctive security policies. In some cases. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives. modeled after the virtual local area network (VLAN) concept in Ethernet networking. By backing up data to disks instead of tapes.chapter we have briefly covered a subset of these features. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. VSANs can also be configured separately and independently. multiple switches can join a number of ports to form a single VSAN. Ports within a single switch can be partitioned into multiple VSANs. despite sharing hardware resources. Restore processes are found to be faster than backup regardless of implementations. Figure 8-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches. (This scheme is called disk-to-disk-to-tape. The benefits of such virtualization include storage consolidation and faster data restore processes. because disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. that problem can be handled with a minimum of disruption to the rest of the network. FICON. events. and iSCSI. If a problem occurs in one VSAN. for disaster recovery purposes. it saves tapes. can offer different high-level protocols such as FCP. and name services. Tape media virtualization resolves the problem of underutilized tape media. and floor space. which form a virtual fabric. Tape Storage Virtualization Tape storage virtualization can be divided into two categories: the tape media virtualization and the tape drive and library virtualization (VTL). Conversely. VTL often increases performance of both backup and recovery operations. A VSAN. A virtual tape library (VTL) is used typically for backup and recovery purposes. most contemporary backup software products introduced also direct usage of the file system storage (especially network-attached storage. They also often offer a disk staging feature: moving the data from disk to a physical tape for long-term storage. With tape media virtualization the amount of mounts are reduced. like each FC fabric. . the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI). tape libraries. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. memberships. such as physical tapes. FCIP. or D2D2T. In October 2004. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. Virtualizing Storage-Area Networks A virtual storage-area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches. The geographic location of the switches and the attached devices are independent of their segmentation into logical VSANs. the data stored on the VTL’s disk array is exported to other media. The use of VSANs allows traffic to be isolated within specific portions of the network. VSANs were designed by Cisco. and applied to a SAN.) Alternatively. Most current VTL solutions use SAS or SATA disk arrays as the primary storage component because of their relatively low cost. zones. 
accessed through NFS and CIFS protocols over IP networks) not requiring tape library emulation at all.

and security perspectives. The problem. NPV is intended to address this problem. Whereas NPIV is primarily a host-based solution. In some cases. Whenever a file system client needs to access a file. N-Port ID Virtualization (NPIV) and N-Port Virtualizer. NPV is primarily a switch-based technology. therefore. and that the total number of domain IDs in a fabric is limited. . In a virtual machine environment where many host operating systems or applications are running on a physical host. though. Different applications can be used in conjunction with NPIV.Figure 8-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure Since December 2002 Cisco enabled multiple virtualization features within intelligent SANs. it must retrieve information about the file from the metadata servers first (see Figure 8-15). All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Virtualizing File Systems A virtual file system is an abstraction layer that allows client applications using heterogeneous filesharing network protocols to access a unified pool of storage resources. N-Port ID Virtualization (NPIV) NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCID) over a single link. N-Port Virtualizer (NPV) An extension to NPIV is the N-Port Virtualizer feature. These systems are based on a global file directory structure that is contained on dedicated file metadata servers. Several examples are Inter VSAN routing (IVR). each virtual machine can now be managed independently from zoning. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. is that you often need to add Fibre Channel switches to scale the size of your fabric. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. this limit can be fairly low depending upon the devices attached to the fabric. an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. Consider that every Fibre Channel switch in a fabric needs a different domain ID. There is. aliasing. It is designed to reduce switch management and overhead in larger SAN deployments.

A VFS specifies an interface (or a “contract”) between the kernel and a concrete file system. Sun Microsystems introduced one of the first VFSes on Unix-like systems. Mac OS. so that concrete file system support built for a given release of the operating system would work with future versions of the operating system. A VFS can. and the Oracle Clustered File System (OCFS) are all examples of virtual file systems. It can be used to bridge the differences in Windows. be used to access local and network storage devices transparently without the client application noticing the difference. to allow it to work with a new release of the operating system. The terms of the contract might bring incompatibility from release to release. which would require that the concrete file system support be recompiled. and possibly modified before recompilation. it is easy to add support for new file system types to the kernel simply by fulfilling the contract. so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. The VMware Virtual Machine File System (VMFS).Figure 8-15 File System Virtualization The purpose of a virtual file system (VFS) is to allow client applications to access different types of concrete file systems in a uniform way. Or the supplier of the operating system might make only backward-compatible changes to the contract. Therefore. and Unix file systems. . for example. Linux’s Global File System (GFS). NTFS.

and device emulation. as well as in a storage system. Both offer file-level granularity. but most important. you simply type in the domain name. Users look to a single mount point that shows all their files. a software layer. The challenge. The first is one that’s built in to the storage system or the NAS itself. which could solve the file stub and manual look-up problems. In some cases. the logical volume manager. The best location to implement storage virtualization functionality depends in part on the preferences. however. In a common scenario.File/Record Virtualization File virtualization unites multiple storage devices into a single logical pool of files. However. existing technologies. residing above the disk device driver intercepts the I/O requests (see Figure 8-16) and provides the metadata lookup and I/O mapping. and objectives for deploying storage virtualization. the flexibility to mix hardware in a system. including aggregation (pooling). and in other instances it is offered as a separate product. . Often global file systems are not granular to the file level. a situation equivalent to not being able to mix brands of servers in a server virtualization project. often called a “global” file system. more power-efficient systems that bring down capital and operational costs. whereas out-of-band solutions are simpler to implement. they can be moved to lower cost. as you will remember. heterogeneous replication. Because these files are less frequently accessed. standalone file virtualization systems do not have to use all the same hardware. meaning the same hardware must be used to expand these systems and is often available from only one manufacturer. There are two types of file virtualization methods available today. In a similar fashion. volume management is built in to the operating system. transparent data migration. file virtualization eliminates the need to know exactly where a file is physically located. There may be some advantage here because the file system itself is managing the metadata that contains the file locations. In Chapter 6. A physical device driver handles the volumes (LUN) presented to the host system. you could lose the ability to store older data on a less expensive storage system. In-band solutions typically offer better performance. The second file virtualization method uses a standalone virtualization engine—typically an appliance. Where Does the Storage Virtualization Occur? Storage virtualization functionality. is that this metadata is unique to the users of that specific file system. This eliminates one of the key capabilities of a file virtualization system. the primary NAS could be a high-speed high performance system storing production data. File virtualization is similar to Dynamic Name Service (DNS). It is a vital part of both file area network (FAN) and network file management (NFM) concepts. These systems can either sit in the data path (in-band) or outside the data path (out-of-band) and can move files to alternate locations based on a variety of attribute related policies. With this solution. we discussed different types of tiered storage. Data can be moved from a tier-one NAS to a tier-two or tier-three NAS or network file server automatically. can be implemented at the host server in the data path on a network switch blade or on an appliance. which removes the requirement to know the IP address of a website. Host-Based Virtualization A host-based virtualization requires additional software running on the host as a privileged task or process.

Figure 8-16 Logical Volume Manager on a Server System

The LVM manipulates LUN representation to create virtualized storage to the file system or database manager. Most modern operating systems have some form of logical volume management built in (in Linux, it's called Logical Volume Manager, or LVM; in Solaris and FreeBSD, it's ZFS's zpool layer; in Windows, it's Logical Disk Manager, or LDM) that performs virtualization tasks. LVMs can be used to divide large physical disk arrays into more manageable virtual disks or to create large virtual disks from multiple physical disks.

The logical volume manager treats any LUN as a vanilla resource for storage pooling; it will not differentiate the RAID levels. So the storage administrator must ensure that the appropriate classes of storage under LVM control are paired with individual application requirements. So with host-based virtualization, heterogeneous storage systems can be used on a per-server basis, but manual administration of hundreds or thousands of servers in a large data center can be very costly.

Host-based storage virtualization may require more CPU cycles. RAID can be implemented without a special controller, but the software that performs the striping calculations often places a noticeable burden on the host CPU. For smaller data centers, this may not be a significant issue. Use of host-resident virtualization must therefore be balanced against the processing requirements of the upper-layer applications so that overall performance expectations can be met. For higher performance requirements, logical volume management may be complemented by hardware-based virtualization; RAID controllers for internal DAS and JBOD external chassis are good examples of hardware-based virtualization.

Network-Based (Fabric-Based) Virtualization

A network-based virtualization is a fabric-based switch infrastructure. The SAN fabric provides connectivity to heterogeneous server platforms and heterogeneous storage arrays and tape subsystems. Fabric may be composed of multiple switches from different vendors. Fabric-based storage virtualization dynamically interacts with virtualizers on arrays, servers, or appliances, and the virtualization capabilities require standardized protocols to ensure interoperability between fabric-hosted virtualization engines.

A host-based (server-based) virtualization provides independence from vendor-specific storage, and storage-based virtualization provides independence from vendor-specific servers and operating systems. Network-based virtualization provides independence from both. Multivendor fabric-based virtualization provides high availability, centralized within the switch architecture, while maintaining high-performance switching of storage data. Fabric-based virtualization requires processing power and memory on top of a hardware architecture that is providing processing power for fabric services. The fabric-based virtualization engine can be hosted on an appliance, an application-specific integrated circuit (ASIC) blade, or an auxiliary module. It can be either in-band or out-of-band.

This requirement for standardized protocols was addressed by the ANSI/INCITS T11 committee, and it initiated the fabric application interface standard (FAIS). FAIS is an open systems project of the ANSI/INCITS T11.5 task group and defines a set of common application programming interfaces (APIs) to be implemented within fabrics. The FAIS initiative separates control information from the data path while maintaining high-performance switching of storage data, and it aims to provide fabric-based services such as snapshots, mirroring, and data replication. The APIs are a means to more easily integrate storage applications that were originally developed as host-, array-, or appliance-based utilities to now be supported within fabric switches and directors. FAIS development is thus being driven by the switch manufacturers and by companies who have developed storage virtualization software and virtualization hardware-assist components.

There are two types of processors, the CPP and the DPC. The control path processor (CPP) includes some form of operating system, the FAIS application interface, and the storage virtualization application. The CPP is therefore a high-performance CPU with auxiliary memory. Allocation of virtualized storage to individual servers and management of the storage metadata are the responsibility of the storage application running on the CPP, as are routing I/O requests to the proper physical locations and other tasks. The data path controller (DPC) may be implemented at the port level in the form of an application-specific integrated circuit (ASIC) or dedicated CPU. The DPC is optimized for low latency and high bandwidth to execute basic SCSI read/write transactions under the management of one or more CPPs. The FAIS specifications do not define a particular hardware design for CPP and DPC entities, so vendor implementation may be different.

Network-based virtualization can be done with one of the following approaches:

Symmetric or in-band: With an in-band approach, the virtualization device is sitting in the path of the I/O data flow. Hosts send I/O requests to the virtualization device, which performs I/O with the actual storage device on behalf of the host. In-band solutions intercept I/O requests from hosts, map them to the physical storage locations, and regenerate those I/O requests to the storage systems on the back end. They do require that the virtualization engine handle both control traffic and data, which requires the processing power and internal bandwidth to ensure that they don't add too much latency to the I/O process for host servers. Caching for improving performance and other storage management features, such as replication and migration, can be supported.

Asymmetric or out-of-band: The virtualization device in this approach sits outside the data path between the host and storage device. Out-of-band solutions typically run on the server or on an appliance in the network and essentially process the control traffic, but don't handle data traffic. What this means is that special software is needed on the host, which knows to first request the location of data from the virtualization device and then use that mapped physical address to perform the I/O. This can result in less latency than in-band storage virtualization because the virtualization engine doesn't handle the data; it's also less disruptive if the virtualization engine goes down.

Hybrid split-path: This method uses a combination of in-band and out-of-band approaches, taking advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed. Whereas in typical in-band solutions the CPU is susceptible to being overwhelmed by I/O traffic, in the split-path approach the I/O-intensive work is offloaded to dedicated port-level ASICs on the SAN switch. These ASICs can look inside Fibre Channel frames and perform I/O mapping and reroute the frames without introducing any significant amount of latency. Specialized software running on a dedicated highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed.

Figure 8-17 portrays the three architecture options for implementing network-based storage virtualization.

Figure 8-17 Possible Architecture Options for How the Storage Virtualization Is Implemented

There are many debates regarding the architecture advantages of in-band (in the data path) vs. out-of-band (out-of-data path with agent and metadata controller) or split path (hybrid of in-band and out-of-band). Some applications and their storage requirements are best suited for a combination of technologies to address specific needs and requirements.

Cisco SANTAP is a good example of network-based virtualization. Cisco SANTAP is one of the Intelligent Storage Services features supported on the Storage Services Module (SSM), the MDS 9222i Multiservice Modular Switch, and the MDS 9000 18/4-Port Multiservice Module (MSM-18/4). The Cisco MDS 9000 SANTAP service enables customers to deploy third-party appliance-based storage applications without compromising the integrity, availability, or performance of a data path between the server and disk. The protocol-based interface that is offered by SANTAP allows easy and rapid integration of the data storage service application because it delivers a loose connection between the application and an SSM, which reduces the effort needed to integrate applications with the core services being offered by the SSM.

SANTAP has a control path and a data path. The control path handles requests that create and manipulate replication sessions sent by an appliance. The control path is implemented using an SCSI-based protocol. An appliance sends requests to a Control Virtual Target (CVT), which the SANTAP process creates and monitors. Responses are sent to the control logical unit number (LUN) on the appliance. SANTAP also allows LUN mapping to Appliance Virtual Targets (AVT).

Array-Based Virtualization

In the array-based approach, the virtualization layer resides inside a storage array controller, and multiple other storage devices, from the same vendor or from a different vendor, can be added behind it. That controller, called the primary storage array controller, is responsible for providing the mapping functionality. The primary storage array controller also provides replication, migration, and other storage management services across the storage devices that it is virtualizing. The communication between separate storage array controllers enables the disk resources of each system to be managed collectively, either through a distributed management scheme or through a hierarchal master/slave relationship. The communication protocol between storage subsystems may be proprietary or based on the SNIA SMI-S standard.

As one of the first forms of array-to-array virtualization, data replication requires that a storage system function as a target to its attached servers and as an initiator to a secondary array. This is commonly referred to as disk-to-disk (D2D) data replication. Array-based data replication may be synchronous or asynchronous. In a synchronous replication operation (see Figure 8-18), each write to a primary array must be completed on the secondary array before the SCSI transaction acknowledges completion. The SCSI I/O is therefore dependent on the speed at which both writes can be performed, and any latency between the two systems affects overall performance. For this reason, synchronous data replication is generally limited to metropolitan distances (~150 kilometers) to avoid performance degradation due to speed-of-light propagation delay.

Figure 8-18 Synchronous Mirroring

In asynchronous array-based replication (see Figure 8-19), individual write operations to the primary array are acknowledged locally, while one or more write transactions may be queued to the secondary array. This solves the problem of local performance but may result in loss of the most current write to the secondary array if the link between the two systems is lost. To minimize this risk, some implementations required the primary array to temporarily buffer each pending transaction to the secondary so that transient disruptions are recoverable.

Figure 8-19 Asynchronous Mirroring

Array-based virtualization services often include automated point-in-time copies or snapshots that may be implemented as a complement to data replication. Point-in-time copy is a technology that permits you to make the copies of large sets of data a common activity. Like photographic snapshots capture images of physical action, point-in-time copies, or snapshots, are virtual or physical copies of data that capture the state of data set contents at a single instant. These copies can be used for a backup, a checkpoint to restore the state of an application, test data, data mining, and a kind of off-host processing. Both virtual (copy-on-write) and physical (full-copy) snapshots protect against corruption of data content. Additionally, full-copy snapshots can protect against physical destruction.
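Array snapshot commands are vendor specific, but the copy-on-write idea is easy to see with a host-side analogy. The following minimal sketch uses ZFS (mentioned earlier as the zpool layer in Solaris and FreeBSD); the pool and dataset names are hypothetical, and an existing pool named datapool is assumed:

zfs snapshot datapool/projects@before_migration   # instant copy-on-write checkpoint
zfs list -t snapshot                              # list the available snapshots
zfs rollback datapool/projects@before_migration   # restore the dataset to that instant

The snapshot is created in seconds because no data is copied up front; original blocks are preserved only when they are later overwritten, which is what distinguishes a virtual (copy-on-write) snapshot from a physical full copy.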

There are many approaches to implement and deploy storage virtualization functionality to meet various requirements. Solutions need to be able to scale in terms of performance, functionality, connectivity, resiliency, and ease of management without introducing instability. The best solution is going to be the one that meets customers' specific requirements and may vary by customers' different tiers of storage and applications. Table 8-3 provides a basic comparison between various approaches based on SNIA's virtualization tutorial.

Table 8-3 Comparison of Storage Virtualization Levels

Reference List

"Common RAID Disk Drive Format (DDF) standard," SNIA.org: http://www.snia.org/tech_activities/standards/curr_standards/ddf/

SNIA Dictionary: http://www.snia.org/education/dictionary

SNIA Storage Virtualization tutorial I and II: http://www.snia.org/education/storage_networking_primer/stor_virt

N-Port Virtualization in the Data Center: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-nx-os-software-release-4-1/white_paper_c11-459263.html

Moore, Fred G. Storage Virtualization for IT Flexibility. TechTarget, 2006.

Santana, Gustavo A. Data Center Virtualization Fundamentals. Indianapolis: Cisco Press, 2013.

Clark, Tom. Storage Virtualization—Technologies for Simplifying Data Storage and Management. Indianapolis: Addison-Wesley Professional, 2005.

Farley, Marc. Storage Networking Fundamentals. Indianapolis: Cisco Press, 2004.

Long, James. Storage Networking Protocol Fundamentals, Volume 1 and 2. Indianapolis: Cisco Press, 2006.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 8-4 lists a reference of these key topics and the page numbers on which each is found.

Table 8-4 Key Topics

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

storage-area network (SAN)
Redundant Array of Inexpensive Disks (RAID)
virtual storage-area network (VSAN)
operating system (OS)
Logical Unit Number (LUN)
virtual LUN (vLUN)
application-specific integrated circuit (ASIC)
host bus adapter (HBA)
thin provisioning
Storage Networking Industry Association (SNIA)
software-defined storage (SDS)
metadata
in-band
out-of-band
virtual tape library (VTL)
disk-to-disk-to-tape (D2D2T)
N-Port virtualizer (NPV)
N-Port ID virtualization (NPIV)
virtual file system (VFS)
asynchronous and synchronous mirroring
host-based virtualization
network-based virtualization
array-based virtualization

Chapter 9. Fibre Channel Storage-Area Networking

This chapter covers the following exam topics:

Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

The foundation of the data center network is the network software that runs the switches in the network. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. Formerly known as Cisco SAN-OS, starting from release 4.1 it is rebranded as Cisco MDS 9000 NX-OS Software. It is based on a secure, stable, and standard Linux core, providing a modular and sustainable base for the long term. Chapter 7, "Cisco MDS Product Family," discussed Cisco MDS 9000 Software features.

This chapter discusses how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. In this chapter we build our storage-area network starting from the initial setup configuration on Cisco MDS 9000 switches. It also describes how to verify virtual storage-area networks (VSAN), zoning, the fabric login, and the fabric domain using the command-line interface. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes." Table 9-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions.

Table 9-1 "Do I Know This Already?" Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter.

If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following switches support Console port, COM1, and MGMT interface? (Choose all the correct answers.)
a. MDS 9710
b. MDS 9508
c. MDS 9148S
d. MDS 9513
e. MDS 9706

2. During the initial setup script of MDS 9500, how can you configure in-band management?
a. Via answering yes to Configuring Advanced IP Options.
b. Via enabling SSH service.
c. There is no configuration option for in-band management on MDS family switches.
d. MDS 9706 family director switch does not support in-band management.

3. How does the On-Demand Port Activation license work on the Cisco MDS 9148S 16G FC switch?
a. The base switch model comes with 8 ports and can be upgraded to models of 16, 32, or 48 ports.
b. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port activation license to support 24, 36, or 48 ports.
c. The base switch model comes with 24 ports enabled and can be upgraded as needed with a 12-port activation license to support 36 or 48 ports.
d. The base configuration of 20 ports of 16 Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

4. Which MDS switch models support the PowerOn Auto Provisioning with NX-OS 6.2(9)?
a. MDS 9250i
b. MDS 9148
c. MDS 9148S
d. MDS 9706

5. During the initial setup script of MDS 9706, which interfaces can be configured with the IPv6 address?
a. MGMT 0 interface
b. All FC interfaces
c. VSAN interface
d. Out-of-band management interface
e. None

6. Which of the following options is the correct configuration for in-band management of MDS 9250i?
a. Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ip address 10.22.2.2 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.22.2.1
b. Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ipv6 enable
switch(config-if)# ipv6 address 2001:0db8:800:200c::417a/64
switch(config-if)# no shutdown
c. Click here to view code image
switch(config)# interface vsan 1
switch(config-if)# no shutdown
d. Click here to view code image
switch(config-if)# (no) switchport mode F
switch(config-if)# (no) switchport mode auto
e. None

7. Which is the correct option for the boot sequence?
a. System—Kickstart—BIOS—Loader
b. BIOS—Loader—Kickstart—System
c. System—BIOS—Loader—Kickstart
d. BIOS—Loader—System—Kickstart

8. Which NX-OS CLI command suspends a specific VSAN traffic?
a. vsan 4 name SUSPEND
b. no suspend vsan 4
c. vsan 4 suspend
d. vsan no suspend vsan 4

9. In which of the following conditions for two MDS switches is the trunking state (ISL) and port mode E?
a. Switch 1: Switchport mode E, trunk mode off. Switch 2: Switchport mode E, trunk mode off.
b. Switch 1: Switchport mode E, trunk mode on. Switch 2: Switchport mode E, trunk mode auto.
c. Switch 1: Switchport mode E, trunk mode auto. Switch 2: Switchport mode E, trunk mode auto.
d. Switch 1: Switchport mode E, trunk mode on. Switch 2: Switchport mode F, trunk mode auto.
e. No correct answer exists.

10. Which of the following options are valid member types for a zone configuration on MDS 9222i switch?
a. pWWN
b. IPv6 address
c. MAC address of the MGMT 0 interface
d. FC ID

Foundation Topics

Where Do I Start?

The Cisco MDS 9000 family of storage networking products supports 1-, 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel; 10-Gbps Fibre Channel over Ethernet (FCoE); and 1- and 10-Gbps Fibre Channel over IP (FCIP) on Gigabit Ethernet ports. For each MDS product family switch there is a specific hardware installation guide, which explains in detail how and where to install specific components of the MDS switches. Before starting any configuration, the equipment should be installed and mounted onto the racks following the hardware installation guide. This chapter starts with explaining what to do after the switch is installed and powered on. In Chapter 7 we described all the details about the MDS 9000 product family; if needed, you may want to quickly review the chapter.

The Cisco MDS family switches provide the following types of ports:

Console port (supervisor modules): The console port is available for all the MDS product family. The MDS multilayer directors have dual supervisor modules, and each of the supervisor modules has one management and one console port. The MDS multiservice and multilayer switches have one management and one console port as well. The console port, labeled "Console," is an RS-232 port with an RJ-45 interface. It is an asynchronous (async) serial port; any device connected to this port must be capable of asynchronous transmission. This port is used to create a local management connection to set the IP address and other initial configuration settings before connecting the switch to the network for the first time. To connect the console port to a computer terminal, follow these steps:

1. Configure the terminal emulator program to match the following default port characteristics: 9600 baud, 8 data bits, 1 stop bit, no parity.
2. Connect the supplied RJ-45 to DB-9 female adapter or RJ-45 to DP-25 female adapter (depending on your computer) to the computer serial port.
3. Connect the console cable (a rollover RJ-45 to RJ-45 cable) to the console port and to the RJ-45 to DB-9 adapter or the RJ-45 to DP-25 adapter (depending on your computer) at the computer serial port.

COM1 port: This is an RS-232 port that you can use to connect to an external serial communication device such as a modem, switch, or router. COM1 port is available for all the MDS product family except MDS 9700 director switches. It is available on each supervisor module of MDS 9500 Series switches. The COM1 port (labeled "COM1") is an RS-232 port with a DB-9 interface. We recommend using the adapter and cable provided with the switch. To connect the COM1 port to a modem, connect the modem to the COM1 port using the adapters and cables, following these steps:

a. Connect the DB-9 serial adapter to the COM1 port.
b. Connect the RJ-45 to DB-25 modem adapter to the modem.
c. Connect the adapters using the RJ-45-to-RJ-45 rollover cable (or equivalent crossover cable).

The default COM1 settings are as follows:

line Aux:
Speed: 9600 bauds
Databits: 8 bits per byte
Stopbits: 1 bit(s)
Parity: none
Modem In: Enable
Modem Init-String - default: ATE0Q1&D2&C1S0=1\015
Statistics: tx:17 rx:0
Register Bits: RTS|DTR

MGMT 10/100/1000 Ethernet port (supervisor module): This is an Ethernet port that you can use to access and manage the switch by IP address, such as through Cisco Data Center Network Manager (DCNM). MGMT Ethernet port is available for all the MDS product family. The supervisor modules support an autosensing MGMT 10/100/1000 Ethernet port (labeled "MGMT 10/100/1000") and have an RJ-45 interface. You can connect the MGMT 10/100/1000 Ethernet port to an external hub, switch, or router. You need to use a modular, RJ-45, straight-through UTP cable to connect the MGMT 10/100/1000 Ethernet port to an Ethernet switch port or hub, or use a crossover cable to connect to a router interface. The active supervisor module owns the IP address used by both of these Ethernet connections. On a switchover, the newly activated supervisor module takes over this IP address. This process requires an Ethernet connection to the newly activated supervisor module.

Note
For high availability, connect the MGMT 10/100/1000 Ethernet port on the active supervisor module and on the standby supervisor module to the same network or VLAN.

Fibre Channel ports (switching modules): These are Fibre Channel (FC) ports that you can use to connect to the SAN or for in-band management.

Fibre Channel over Ethernet ports (switching modules): These are Fibre Channel over Ethernet (FCoE) ports that you can use to connect to the SAN or for in-band management.

Gigabit Ethernet ports (IP storage ports): These are IP Storage Services ports that are used for the FCIP or iSCSI connectivity.

USB drive/port: It is a simple interface that allows you to connect to different devices supported by NX-OS. In the supervisor module, there are two USB drives: Slot 0 and LOG FLASH. The LOG FLASH and Slot 0 USB ports use different formats for their data. The Cisco MDS 9700 Series switch has two USB drives (in each Supervisor-1 module). The USB drive is not available for MDS 9100 Series switches.

Let's also review the transceivers before we continue with the configuration. The Cisco SFP, SFP+, and X2 devices are hot-swappable transceivers that plug into Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The Cisco MDS 9000 family supports both Fibre Channel and FCoE protocols for SFP+ transceivers. You can use any combination of SFP and SFP+ transceivers that are supported by the switch. Each transceiver must match the transceiver on the other end of the cable, and the cable must not exceed the stipulated cable length for reliable communications. The only restrictions are that short wavelength (SWL) transceivers must be paired with SWL transceivers, and long wavelength (LWL) transceivers with LWL transceivers.

What are the differences between SFP+, X2, and XFP? SFP, SFP+, X2, and XFP are all terms for a type of transceiver that plugs into a special port on a switch or other network device to convert the port to a copper or fiber interface. SFP and SFP+ specifications are developed by the Multiple Source Agreement (MSA) group. SFP is based on the INF-8074i specification, and SFP+ is based on the SFF-8431 specification. XFP is based on the standard of the XFP MSA. They both have the same size and appearance but are based on different standards. XFP and SFP+ are 10G fiber optical modules and can connect with other types of 10G modules, such as X2 and XENPAK. XFP was replaced by the SFP+ 10 G modules, and SFP+ became the mainstream design. An advantage of the SFP+ modules is that SFP+ has a more compact form factor package than X2 and XFP. The size of SFP+ is smaller than XFP; thus it moves some functions to the motherboard, including the signal modulation function, MAC, CDR, and EDC. SFP+ is in compliance with the IEEE 802.3ae protocol and the SFF-8431 and SFF-8432 specifications. It can connect with the same type of XFP, X2, and XENPAK directly. The cost of SFP+ is lower than XFP, and SFP has the lowest cost among all three.

The SFP hardware transmitters are identified by their acronyms when displayed in the show interface brief command. If the related SFP has a Cisco-assigned extended ID, the show interface and show interface brief commands display the ID instead of the transmitter type. The show interface transceiver command and the show interface fc slot/port transceiver command display both values for Cisco-supported SFPs. The most up-to-date information for pluggable transceivers can be found here: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html.
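For example, the following exec commands (output omitted here) are the ones referenced above for checking which transceiver type a given port carries; the interface number fc1/1 is only a hypothetical example:

switch# show interface brief
switch# show interface transceiver
switch# show interface fc1/1 transceiver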

To get the most up-to-date technical specifications for fiber optics per the current standards and specs, check the Fiber Optic Association (FOA) page at http://www.thefoa.org/tech/Linkspec.htm.

In Chapter 5, "Management and Monitoring of Cisco Nexus Devices," we discussed Nexus management and monitoring features. The Cisco MDS product family uses the same operating system, NX-OS; the Nexus product line and MDS family switches use mostly the same management features, and they support standard management protocols. The Cisco MDS 9000 family switches can be accessed and configured in many ways. Figure 9-1 portrays the tools for configuring the Cisco NX-OS software.

Figure 9-1 Tools for Configuring Cisco NX-OS Software

The different protocols that are supported in order to access, monitor, and configure the Cisco MDS 9000 family of switches are described in Table 9-2.
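For example, once the mgmt0 interface has been configured (as shown later in this chapter), the switch CLI can be reached from a management station with any standard SSH client; the address below is a hypothetical placeholder:

client$ ssh admin@192.0.2.10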

Table 9-2 Protocols to Access, Monitor, and Configure the Cisco MDS Family

The Cisco MDS NX-OS Setup Utility—Back to Basics

The Cisco MDS NX-OS setup utility is an interactive command-line interface (CLI) mode that guides you through a basic (also called a startup) configuration of the system. The setup utility allows you to configure only enough connectivity for system management. The setup starts automatically when a device has no configuration file in NVRAM, and the dialog guides you through initial configuration. The setup utility allows you to build an initial configuration file using the System Configuration dialog. After the file is created, you can use the CLI to perform additional configuration.

If you want to skip answers to any questions, press Enter. If a default answer is not available (for example, the device hostname), the device uses what was previously configured and skips to the next question. You can press Ctrl+C at any prompt to skip the remaining configuration options and proceed with what you have configured up to that point. Figure 9-2 portrays the setup script flow.

Figure 9-2 Setup Script Flow

You use the setup utility mainly for configuring the system initially, when no configuration is present. However, you can use the setup utility at any time for basic device configuration. The setup utility keeps the configured values when you skip steps in the script. For example, if you have already configured the mgmt0 interface, the setup utility does not change that configuration if you skip that step. However, if there is a default value for the step, the setup utility changes the configuration using that default, not the configured value. Be sure to carefully check the configuration changes before you save the configuration. The setup script only supports IPv4.

Note
Be sure to configure the IPv4 route, the default network IPv4 address, and the default gateway IPv4 address to enable SNMP access. If IPv4 routing is disabled, the device uses the IPv4 route and the default network IPv4 address. If you enable IPv4 routing, the device uses the default gateway IPv4 address.

Before starting the setup utility, you must execute the following steps:

Step 1. Have a password strategy for your network environment.

Step 2. Connect the console port on the supervisor module to the network. If you have dual supervisor modules, connect the console ports on both supervisor modules to the network.

Step 3. Connect the Ethernet management port on the supervisor module to the network. If you have dual supervisor modules, connect the Ethernet management ports on both supervisor modules to the network.

The first time you access a switch in the Cisco MDS 9000 family, it runs a setup program that prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. This information is required to configure and manage the switch. The IP address can only be configured from the CLI. When you power up the switch for the first time, assign the IP address. After you perform this step, the Cisco MDS 9000 Family Cisco Prime DCNM can reach the switch through the console port.

There are two types of management through the MDS NX-OS CLI. You can configure out-of-band management on the mgmt0 interface, and you can configure in-band management on the VSAN 1 logical interface. The in-band management logical interface is VSAN 1. This management interface uses the Fibre Channel infrastructure to transport IP traffic. An interface for VSAN 1 is created on every switch in the fabric. Each switch should have its VSAN 1 interface configured with either an IPv4 address or an IPv6 address in the same subnetwork. A default route that points to the switch providing access to the IP network should be configured on every switch in the Fibre Channel fabric.

The following are the initial setup procedure steps for configuring out-of-band management on the mgmt0 interface:

Step 1. Power on the switch. Switches in the Cisco MDS 9000 family boot automatically.

Step 2. Enter yes to enter the setup mode.

Click here to view code image

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes

The setup utility guides you through the basic configuration process. Press Ctrl+C at any prompt to end the configuration process.

Step 3. Enter the password for the administrator.

Click here to view code image

Enter the password for admin: admin-password
Confirm the password for admin: admin-password

Step 4. Enter yes (yes is the default) to enable a secure password standard.

Click here to view code image

Do you want to enforce secure password standard (yes/no): yes

A secure password should contain characters from at least three of the classes: lowercase letters, uppercase letters, digits, and special characters. You can also enable a secure password standard using the password strength-check command.

Step 5. While configuring your initial setup, you can create an additional user account (in the network-admin role) besides the administrator's account. Enter yes (no is the default) if you want to create an additional account.

Click here to view code image

Create another login account (yes/no) [no]: yes

Click here to view code image

Enter the user login ID: user_name
Enter the password for user_name: user-password
Confirm the password for user_name: user-password
Enter the user role [network-operator]: network-admin

By default, two roles exist in all switches:

a. Network operator (network-operator): Has permission to view the configuration only. The operator cannot make any configuration changes.
b. Network administrator (network-admin): Has permission to execute all commands and make configuration changes. The administrator can also create and customize up to 64 additional roles. One (of these 64 additional roles) can be configured during the initial setup process.

Step 6. Configure the read-only or read-write SNMP community string.

a. Enter yes (no is the default) to configure the read-only SNMP community string.

Click here to view code image

Configure read-only SNMP community string (yes/no) [n]: yes

b. Enter the SNMP community string.

Click here to view code image

SNMP community string: snmp_community

Step 7. Enter a name for the switch. The switch name is limited to 32 alphanumeric characters. The default switch name is "switch".

Click here to view code image

Enter the switch name: switch_name

Step 8. Enter yes (yes is the default) at the configuration prompt to configure out-of-band management. However, the setup script supports only IP version 4 (IPv4) for the management interface; IP version 6 (IPv6) is supported in Cisco MDS NX-OS Release 4.1(x) and later.

Click here to view code image

Continue with out-of-band (mgmt0) management configuration? [yes/no]: yes

Enter the mgmt0 IPv4 address.

Click here to view code image

Mgmt0 IPv4 address: ip_address

Step 9. Enter the mgmt0 IPv4 subnet mask.

Click here to view code image

Mgmt0 IPv4 netmask: subnet_mask

Enter yes (yes is the default) to configure the default gateway.

Click here to view code image

Configure the default-gateway: (yes/no) [y]: yes

Enter the default gateway IP address.

Click here to view code image

IP address of the default gateway: default_gateway

Step 10. Enter yes (no is the default) to configure advanced IP options, such as in-band management, static routes, default network, DNS, and domain name.

Click here to view code image

Configure Advanced IP options (yes/no)? [n]: yes

a. Enter no (no is the default) at the in-band management configuration prompt.

Click here to view code image

Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no

b. Enter yes (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image

Enable ip routing capabilities? (yes/no) [y]: yes

c. Enter yes (yes is the default) to configure a static route.

Click here to view code image

Configure static route: (yes/no) [y]: yes

Enter the destination prefix.

Click here to view code image

Destination prefix: dest_prefix

Enter the destination prefix mask.

Click here to view code image

Destination prefix mask: dest_mask

Enter the next hop IP address.

Click here to view code image

Next hop ip address: next_hop_address

d. Enter yes (yes is the default) to configure the default network.

Click here to view code image

Configure the default-network: (yes/no) [y]: yes

Enter the default network IPv4 address.

Click here to view code image

Default network IP address [dest_prefix]: dest_prefix

e. Enter yes (yes is the default) to configure the DNS IPv4 address.

Click here to view code image

Configure the DNS IP address? (yes/no) [y]: yes

Enter the DNS IP address.

Click here to view code image

DNS IP address: name_server

f. Enter yes (no is the default) to configure the default domain name.

Click here to view code image

Configure the default domain name? (yes/no) [n]: yes

Enter the default domain name.

Click here to view code image

Default domain name: domain_name

Step 11. Enter yes (no is the default) to enable the SSH service.

Click here to view code image

Enabled SSH service? (yes/no) [n]: yes

Enter the SSH key type.

Click here to view code image

Type the SSH key you would like to generate (dsa/rsa)? rsa

Enter the number of key bits within the specified range.

Click here to view code image

Enter the number of key bits? (768-2048) [1024]: 2048

Step 12. Enter yes (no is the default) to enable the Telnet service.

Click here to view code image

Enable the telnet service? (yes/no) [n]: yes

Step 13. Enter yes (yes is the default) to configure congestion or no_credit drop for FC interfaces.

Click here to view code image

Configure congestion or no_credit drop for fc interfaces? (yes/no) [q/quit] to quit [y]: yes

Step 14. Enter con (con is the default) to configure congestion or no_credit drop.

Click here to view code image

Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]: con

Step 15. Enter a value from 100 to 1000 (d is the default) to calculate the number of milliseconds for congestion or no_credit drop.

Click here to view code image

Enter number of milliseconds for congestion/no_credit drop[100 - 1000] or [d/default] for default: 100

Step 16. Enter a mode for congestion or no_credit drop.

Click here to view code image

Enter mode for congestion/no_credit drop[E/F]:

Step 17. Enter yes (no is the default) to configure the NTP server.

Click here to view code image

Configure NTP server? (yes/no) [n]: yes

Enter the NTP server IPv4 address.

Click here to view code image

NTP server IP address: ntp_server_IP_address

Step 18. Enter shut (shut is the default) to configure the default switch port interface to the shut (disabled) state.

Click here to view code image

Configure default switchport interface state (shut/noshut) [shut]: shut

The management Ethernet interface is not shut down at this point. Only the Fibre Channel, FCIP, iSCSI, and Gigabit Ethernet interfaces are shut down.

Step 19. Enter on (off is the default) to configure the switch port trunk mode.

Click here to view code image

Configure default switchport trunk mode (on/off/auto) [off]: on

Step 20. Enter yes (no is the default) to configure the switchport mode F.

Click here to view code image

Configure default switchport mode F (yes/no) [n]: y

Step 21. Enter on (off is the default) to configure the port-channel auto-create state.

Click here to view code image

Configure default port-channel auto-create state (on/off) [off]: on

Step 22. Enter permit (deny is the default) to set the default zone policy configuration. This permits traffic flow to all members of the default zone.

Click here to view code image

Configure default zone policy (permit/deny) [deny]: permit

Step 23. Enter yes (no is the default) to enable a full zone set distribution. This overrides the switch-wide default for the full zone set distribution feature.

Click here to view code image

Enable full zoneset distribution (yes/no) [n]: yes

Step 24. Enter enhanced (basic is the default) to configure the default zone mode as enhanced. This overrides the switch-wide default zone mode as enhanced.

Click here to view code image

Configure default zone mode (basic/enhanced) [basic]: enhanced

Step 25. Review and edit the configuration that you have just entered. You see the new configuration. The following configuration will be applied:

Click here to view code image

username admin password admin_pass role network-admin
username user_name password user_pass role network-admin
snmp-server community snmp_community ro
switchname switch
interface mgmt0
ip address ip_address subnet_mask
no shutdown
ip routing
ip route dest_prefix dest_mask dest_address
ip default-network dest_prefix
ip default-gateway default_gateway
ip name-server name_server
ip domain-name domain_name
telnet server disable
ssh key rsa 2048 force
ssh server enable
ntp server ipaddr ntp_server
system default switchport shutdown
system default switchport trunk mode on
system default switchport mode F
system default port-channel auto-create
zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
system default zone mode enhanced

Enter no (no is the default) if you are satisfied with the configuration.

Click here to view code image

Would you like to edit the configuration? (yes/no) [n]: n

Step 26. Enter yes (yes is default) to use and save this configuration.

Click here to view code image

Use this configuration and save it? (yes/no) [y]: yes

If you do not save the configuration at this point, none of your changes are updated the next time the switch is rebooted. Type yes to save the new configuration. This ensures that the kickstart and system images are also automatically configured.

Step 27. Log in to the switch using the new username and password.

Step 28. Verify that the required licenses are installed in the switch using the show license command.

Step 29. Verify that the switch is running the latest Cisco NX-OS 6.2(x) software, depending on which you installed, by issuing the show version command. Example 9-1 portrays show version display output.

Example 9-1 Verifying NX-OS Version on an MDS 9710 Switch

Click here to view code image

MDS-9710-1# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.

0 or the GNU Lesser General Public License (LGPL) Version 2.1. A
copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.php

Software
  BIOS: version 3.1.0
  kickstart: version 6.2(3)
  system: version 6.2(3)
  BIOS compile time: 02/27/2013
  kickstart image file is: bootflash:///kickstart
  kickstart compile time: 7/10/2013 2:00:00 [07/31/2013 10:10:16]
  system image file is: bootflash:///system
  system compile time: 7/10/2013 2:00:00 [07/31/2013 11:59:38]

Hardware
  cisco MDS 9710 (10 Slot) Chassis ("Supervisor Module-3")
  Intel(R) Xeon(R) CPU with 8120784 kB of memory.
  Processor Board ID JAE171504KY

  Device name: MDS-9710-1
  bootflash: 3915776 kB
  slot0: 0 kB (expansion flash)

Kernel uptime is 0 day(s), 0 hour(s), 39 minute(s), 20 second(s)

Last reset
  Reason: Unknown
  System version: 6.2(3)
  Service:

plugin
  Core Plugin, Ethernet Plugin

Step 30. Verify the status of the modules on the switch using the show module command (see Example 9-2).

Example 9-2 Verifying Modules on an MDS 9710 Switch

Click here to view code image

MDS-9710-1# show module
Mod  Ports  Module-Type                          Model            Status
---  -----  -----------------------------------  ---------------  ----------
1    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
2    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
5    0      Supervisor Module-3                  DS-X97-SF1-K9    active *
6    0      Supervisor Module-3                  DS-X97-SF1-K9    ha-standby

Mod  Sw      Hw
---  ------  ---
1    6.2(3)  1.0
2    6.2(3)  1.0
5    6.2(3)  1.1
6    6.2(3)  1.1

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  -----------
1    1c-df-0f-78-cf-d8 to 1c-df-0f-78-cf-db  JAE171308VQ
2    1c-df-0f-78-d8-50 to 1c-df-0f-78-d8-53  JAE1714019Y
5    1c-df-0f-78-df-05 to 1c-df-0f-78-df-17  JAE171504KY

6    1c-df-0f-78-df-2b to 1c-df-0f-78-df-3d  JAE171504E6

Mod  Online Diag Status
---  ------------------
1    Pass
2    Pass
5    Pass
6    Pass

Xbar  Ports  Module-Type      Model          Status
----  -----  ---------------  -------------  ------
1     0      Fabric Module 1  DS-X9710-FAB1  ok
2     0      Fabric Module 1  DS-X9710-FAB1  ok
3     0      Fabric Module 1  DS-X9710-FAB1  ok

Xbar  Sw  Hw
----  --  ---
1     NA  1.0
2     NA  1.0
3     NA  1.0

Xbar  MAC-Address(es)  Serial-Num
----  ---------------  -----------
1     NA               JAE1710085T
2     NA               JAE1710087T
3     NA               JAE1709042X

The following are the initial setup procedure steps for configuring the in-band management logical interface on VSAN 1. You can configure both in-band and out-of-band configuration together by entering yes in both step 10c and step 10d in the following procedure:

Step 10. Enter yes (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Click here to view code image

Configure Advanced IP options (yes/no)? [n]: yes

a. Enter yes (no is the default) at the in-band management configuration prompt.

Click here to view code image

Continue with in-band (VSAN1) management configuration? (yes/no) [no]: yes

Enter the VSAN 1 IPv4 address.

Click here to view code image

VSAN1 IPv4 address: ip_address

Enter the IPv4 subnet mask.

Click here to view code image

VSAN1 IPv4 net mask: subnet_mask

b. Enter no (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image

Enable ip routing capabilities? (yes/no) [y]: no

c. Enter no (yes is the default) to configure a static route.

Click here to view code image

Configure static route: (yes/no) [y]: no

d. Enter no (yes is the default) to configure the default network.

Click here to view code image

Configure the default-network: (yes/no) [y]: no

e. Enter no (yes is the default) to configure the DNS IPv4 address.

Click here to view code image

Configure the DNS IP address? (yes/no) [y]: no

f. Enter no (no is the default) to skip the default domain name configuration.

Click here to view code image

Configure the default domain name? (yes/no) [n]: no

In the final configuration, the following CLI commands will be added to the configuration:

Click here to view code image

interface vsan1
ip address ip_address subnet_mask
no shutdown

The Power On Auto Provisioning

POAP (Power On Auto Provisioning) automates the process of upgrading software images and installing configuration files on Cisco MDS and Nexus switches that are being deployed in the network. Starting with NX-OS 6.2(9), the POAP capability is available on the Cisco MDS 9148 and MDS 9148S multilayer fabric switches (at the time of writing this book).

When you power up a switch for the first time, it loads the software image that is installed at manufacturing and tries to find a configuration file from which to boot. When a configuration file is not found, POAP mode starts. When a Cisco MDS switch with the POAP feature boots and does not find the startup configuration, the switch enters POAP mode, locates the DCNM DHCP server, and bootstraps itself with its interface IP address, gateway, and DNS (Domain Name System) server. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file. During startup, a prompt appears, asking if you want to abort POAP and continue with a normal setup. You can choose to exit or continue with POAP. If you exit POAP mode, you enter the normal interactive setup script. If you continue in POAP mode, all the front-panel interfaces are set up in the default configuration.

The POAP setup requires the following network infrastructure: a DHCP server to bootstrap the interface IP address, gateway address, and DCNM DNS server IP addresses; a TFTP server that contains the configuration script used to automate the software image installation and configuration process; and one or more servers that contain the desired software images and configuration files. The USB device on the MDS 9148 or on the MDS 9148S does not contain the required installation files. Figure 9-3 portrays the network infrastructure for POAP.

Figure 9-3 POAP Network Infrastructure

Here are the steps to set up the network environment using POAP:

Step 1. (Optional) Put the POAP configuration script and any other desired software image and switch configuration files on a USB device that is accessible to the switch.

Step 2. Deploy a DHCP server and configure it with the interface, gateway, and TFTP server IP addresses and a boot file with the path and name of the configuration script file. This information is provided to the switch when it first boots.

Step 3. Deploy a TFTP server to host the configuration script.

Step 4. Deploy one or more servers to host the software images and configuration files.

After you set up the network environment for POAP setup, follow these steps to configure the switch using POAP:

Step 1. Install the switch in the network.

Step 2. Power on the switch.

Step 3. (Optional) If you want to exit POAP mode and enter the normal interactive setup script, enter y (yes).

To verify the configuration after bootstrapping the device using POAP, use one of the following commands:

show running-config: Displays the running configuration
show startup-config: Displays the startup configuration

Licensing Cisco MDS 9000 Family NX-OS Software Features

Licenses are available for all switches in the Cisco MDS 9000 family. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that feature. Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. Licenses can also be installed on the switch in the factory.

The licensing model defined for the Cisco MDS product line has two options:

Feature-based licenses allow features that are applicable to the entire switch. The cost varies based on a per-switch usage. Some features are logically grouped into add-on packages that must be licensed separately, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. Any feature not included in a license package is bundled with the Cisco MDS 9000 family switches.

Module-based licenses allow features that require additional hardware modules. The cost varies based on a per-module usage. An example is the IPS-8 or IPS-4 module using the FCIP feature.

On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series multilayer fabric switches. You can also obtain licenses to activate ports on the Cisco MDS 9124 Fabric Switch, the Cisco MDS 9134 Fabric Switch, the Cisco Fabric Switch for HP c-Class Blade System, and the Cisco Fabric Switch for IBM BladeCenter.

All licensed features may be evaluated for up to 120 days before a license is expired. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled. The grace period allows plenty of time to correct the licensing issue.

Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. The license key file is mirrored on both supervisor modules. Devices with dual supervisors have the following additional high-availability features: the license software runs on both supervisor modules and provides failover protection, and even if both supervisor modules fail, the license file continues to function from the version that is available on the chassis.

Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing.

License Installation

You can either obtain a factory-installed license (only applies to new device orders) or perform a manual installation of the license (applies to existing devices in your network).

Obtaining a Factory-Installed License: You can obtain factory-installed licenses for a new Cisco NX-OS device. The first step is to contact the reseller or Cisco representative and request this service. The second step is using the device and the licensed features.

Performing a Manual Installation: If you have existing devices or if you want to install the licenses on your own, you must first obtain the license key file and then install that file in the device.

Obtaining the License Key File consists of the following steps:

Step 1. Obtain the serial number for your device by entering the show license host-id command. The host ID is also referred to as the device serial number.

Click here to view code image

mds9710-1# show license host-id
License hostid: VDH=JAF1710BBQH

Step 2. Obtain your claim software license certificate document. If you cannot locate your software license claim certificate, contact Cisco Technical Support at this URL: http://www.cisco.com/en/US/support/tsd_cisco_worldwide_contacts.html.

Step 3. Locate the product authorization key (PAK) from the software license claim certificate document.

Step 4. Locate the website URL from the software license claim certificate document. You can access the Product License Registration website from the Software Download website at this URL: http://www.cisco.com/cisco/web/download/index.html.

Step 5. Follow the instructions on the Product License Registration website to register the license for your device.

Installing the License Key File consists of the following steps:

Step 1. Log in to the device through the console port of the active supervisor.

Step 2. Copy the license to the switch. You can use the file transfer service (tftp, ftp, sftp, or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch.

Step 3. Perform the installation by using the install license command on the active supervisor module from the device console.

Click here to view code image

switch# install license bootflash:license_file.lic
Installing license ..done

Step 4. (Optional) Back up the license key file. Use the copy licenses command to save your license file to either the bootflash: directory or a slot0: device.

Step 5. Exit the device console and open a new terminal session to view all license files installed on the device using the show license command, as shown in Example 9-3.

Example 9-3 show license Output on MDS 9222i Switch

Click here to view code image

MDS-9222i# show license
MDS20090713084124110.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT ENTERPRISE_PKG cisco 1.0 permanent uncounted \
        VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-INTRL</SKU> \
        HOSTID=VDH=FOX1244H0U4 \
        NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID> \
        <PAK></PAK>" SIGN=D2ABC826DB70

NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID> \
<PAK></PAK>" SIGN=D2ABC826DB70
INCREMENT STORAGE_SERVICES_ENABLER_PKG cisco 1.0 permanent 1 \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-INTRL</SKU> \
HOSTID=VDH=FOX1244H0U4 \
NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>2</LicLineID> \
<PAK></PAK>" SIGN=9287E36C708C
INCREMENT SAN_EXTN_OVER_IP cisco 1.0 permanent 2 \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-INTRL</SKU> \
HOSTID=VDH=FOX1244H0U4 \
NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>3</LicLineID> \
<PAK></PAK>" SIGN=C03E97505672
<snip>

You can use the show license usage command to identify all the active features.

Click here to view code image
switch# show license usage ENTERPRISE_PKG
Application
-----------
ivr
qos_manager
-----------

You can use the show license brief command to display a list of license files installed on the device (see Example 9-4).

Example 9-4 show license brief Output on MDS 9222i Switch

Click here to view code image
MDS-9222i# show license brief
MDS20090713084124110.lic
MDS201308120931018680.lic
MDS-9222i# show license file MDS201308120931018680.lic
MDS201308120931018680.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT SAN_EXTN_OVER_IP_18_4 cisco 1.0 permanent 1 \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-M92EXT1AK9=</SKU> \
HOSTID=VDH=FOX1244H0U4 \
NOTICE="<LicFileID>20130812093101868</LicFileID><LicLineID>1</LicLineID> \
<PAK></PAK>" SIGN=DD075A320298

To uninstall the installed license file, you can use the clear license filename command, where filename is the name of the installed license key file.

Click here to view code image
switch# clear license Enterprise.lic
Clearing license Enterprise.lic:
SERVER this_host ANY
VENDOR cisco

When you enable a Cisco NX-OS software feature, it can activate a license grace period. If the license is time bound, you must obtain and install an updated license; you need to contact technical support to request an updated license. After obtaining the license file, you can update the license file by using the update license url command, where url specifies the bootflash:, slot0:, usb1:, or usb2: location of the updated license file.

You can enable the grace period feature by using the license grace-period command:

Click here to view code image
switch# configure terminal
switch(config)# license grace-period

Verifying the License Configuration
To display the license configuration, use one of the following commands:

show license: Displays information for all installed license files.
show license brief: Displays summary information for all installed license files.
show license file: Displays information for a specific license file.
show license host-id: Displays the host ID for the physical device.
show license usage: Displays the usage information for installed licenses.

On-Demand Port Activation Licensing
The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through an On-Demand Port Activation license) of 16 Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity. The Cisco MDS 9148S 16G Multilayer Fabric Switch, a compact one rack-unit (1RU) switch, scales from 12 to 48 line-rate 16 Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports. On the Cisco MDS 9148, the On-Demand Port Activation license allows the addition of eight 8-Gbps ports. Customers have the option of purchasing preconfigured models of 16, 32, or 48 ports and upgrading the 16- and 32-port models onsite all the way to 48 ports by adding these licenses.

You can use the show license usage command (see Example 9-5) to view any licenses assigned to a switch. If a license is in use, the status displayed is In use. If a license is installed but no ports have acquired a license, the status displayed is Unused. The PORT_ACTIVATION_PKG does not appear as installed if you have only the default license installed.

Example 9-5 Verifying License Usage on an MDS 9148 Switch

Click here to view code image
MDS-9148# show license usage
Feature              Ins  Lic    Status  Expiry Date  Comments
                          Count
----------------------------------------------------------------------------
FM_SERVER_PKG        No   -      Unused               -
ENTERPRISE_PKG       No   -      Unused               Grace expired
PORT_ACTIVATION_PKG  No   16     In use  never        -
----------------------------------------------------------------------------
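The following is a minimal sketch of refreshing a time-bound license after technical support provides a new key file. The filenames san_extn_updated.lic and san_extn_old.lic are hypothetical placeholders; substitute the names of your actual license files.

switch# copy tftp://tftpserver.cisco.com/MDS/san_extn_updated.lic bootflash:san_extn_updated.lic
switch# update license bootflash:san_extn_updated.lic san_extn_old.lic
! Verify that the refreshed license file is now listed
switch# show license brief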

2. [no] port-license . interface fc slot/port 3. Configure terminal. Table 9-3 Port Activation License Status Definitions Making a Port Eligible for a License By default. you must make that port eligible by using the port-license command. all ports are eligible to receive a license. which are described in Table 9-3.PORT_ACTIVATION_PKG No 16 In use never - Example 9-6 shows the default port license configuration for the Cisco MDS 9148 switch: Example 9-6 Verifying Port-License Usage on an MDS 9148 Switch Click here to view code image bdc-mds9148-2# show port-license Available port activation licenses are 0 ----------------------------------------------Interface Cookie Port Activation License ----------------------------------------------fc1/1 16777216 acquired fc1/2 16781312 acquired fc1/3 16785408 acquired fc1/4 16789504 acquired fc1/5 16793600 acquired fc1/6 16797696 acquired fc1/7 16801792 acquired fc1/8 16805888 acquired fc1/9 16809984 acquired fc1/10 16814080 acquired fc1/11 16818176 acquired fc1/12 16822272 acquired fc1/13 16826368 acquired fc1/14 16830464 acquired fc1/15 16834560 acquired fc1/16 16838656 acquired fc1/17 16842752 eligible <snip> fc1/46 16961536 eligible fc1/47 16965632 eligible fc1/48 16969728 eligible There are three different statuses for port licenses. However. 1. if a port has already been made ineligible and you prefer to activate it.

Acquiring a License for a Port
If you prefer not to accept the default on-demand port license assignments, you can acquire a license for a specific port:

1. Configure terminal
2. interface fc slot/port
3. [no] port-license acquire
4. Exit
5. (Optional) show port-license
6. (Optional) copy running-config startup-config

Moving Licenses Among Ports
You can move a license from a port (or range of ports) at any time. If you attempt to move a license to a port and no license is available, the switch returns the message "port activation license not available." In this case, you will need to first acquire licenses for ports to which you want to move the license.

1. Configure terminal
2. interface fc slot/port
3. Shutdown
4. no port-license
5. Exit
6. interface fc slot/port
7. Shutdown
8. port-license acquire
9. No shutdown
10. Exit
11. (Optional) show port-license
12. (Optional) copy running-config startup-config

Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Each switch is shipped with the Cisco MDS NX-OS operating system for Cisco MDS 9000 family switches. The Cisco MDS NX-OS software consists of two images: the kickstart image and the system image. To select the kickstart image, use the KICKSTART variable. To select the system image, use the SYSTEM variable. You must specify the variable and the respective image to upgrade or downgrade your switch. Both images are not always required for each install. The images and variables are important factors in any install procedure.
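You can confirm where the KICKSTART and SYSTEM variables currently point with the show boot command. A minimal sketch follows; the output is abbreviated and illustrative, with filenames reused from the examples later in this section.

switch# show boot
kickstart variable = bootflash:/m9500-sf2ek9-kickstart-mz.6.2.5.bin
system variable = bootflash:/m9500-sf2ek9-mz.6.2.5.bin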

The software image install procedure is dependent on the following factors:

Software images: The kickstart and system image files reside in directories or folders that can be accessed from the Cisco MDS 9000 family switch prompt.
Image version: Each image file has a version.
Flash disks on the switch: The bootflash: resides on the supervisor module, and the CompactFlash disk is inserted into the slot0: device.
Supervisor modules: There are single or dual supervisor modules.

On switches with dual supervisor modules, both supervisor modules must have Ethernet connections on the management interfaces (mgmt 0) to maintain connectivity when switchovers occur during upgrades and downgrades.

Before starting any software upgrade or downgrade, the user should review the NX-OS Release Notes. In the Release Notes, there are specific sections that explain compatibility between different software versions and hardware modules. In some cases, the user may need to take a specific path to perform nondisruptive software upgrades or downgrades.

Note What is a nondisruptive NX-OS upgrade or downgrade? Nondisruptive upgrades on the Cisco MDS fabric switches take down the control plane for not more than 80 seconds. During the upgrade, the control plane is down, but the data plane remains up. So new devices will be unable to log in to the fabric via the control plane, but existing devices will not experience any disruption of traffic via the data plane. In some cases, when the upgrade has progressed past the point at which it cannot be stopped gracefully, or if a failure occurs, the software upgrade may be disruptive.

If the active and the standby supervisor modules run different versions of the image, both images may be High Availability (HA) compatible in some cases and incompatible in others. Compatibility is established based on the image and configuration:

Image incompatibility: The running image and the image to be installed are not compatible.
Configuration incompatibility: There is a possible incompatibility if certain features in the running image are turned off, because they are not supported in the image to be installed. The image to be installed is considered incompatible with the running image if one of the following statements is true:

An incompatible feature is enabled in the image to be installed and it is not available in the running image and may cause the switch to move into an inconsistent state. In this case, the incompatibility is strict.
An incompatible feature is enabled in the image to be installed and it is not available in the running image and does not cause the switch to move into an inconsistent state. In this case, the incompatibility is loose.

If the running image and the image you want to install are incompatible, the software reports the incompatibility. In some cases, you may decide to proceed with this installation. To view the results of a dynamic compatibility check, issue the show incompatibility system bootflash:filename command. Use this command to obtain further information when the install all command returns the following message:

Click here to view code image
Warning: The startup config contains commands not supported by the standby supervisor.
As a result, some resources might become unavailable after a switchover.
Do you wish to continue? (y/n) [y]: n

You can upgrade any switch in the Cisco MDS 9000 Family using one of the following methods:

Automated, one-step upgrades using the install all command. This upgrade is nondisruptive for director switches. The install all command launches a script that greatly simplifies the boot procedure and checks for errors and the upgrade impact before proceeding. The install all command compares and presents the results of the compatibility before proceeding with the installation. You can exit if you do not want to proceed with these changes.

Quick, one-step upgrade using the reload command. This upgrade is disruptive. Before running the reload command, the boot parameters in NVRAM should point to the location and name of the system image. If the boot parameters are missing or have an incorrect name or location, the boot process fails at the last stage. If this error happens, the administrator must recover from the error: copy the correct kickstart and system images to the correct location, change the boot commands in your configuration, and reload the switch.
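Because the reload method performs no checks on your behalf, it is worth confirming the boot variables before rebooting. The following minimal sketch sets both variables and saves the configuration; the filenames are reused from this chapter's examples and should be replaced with your target images.

switch# configure terminal
switch(config)# boot kickstart bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch(config)# boot system bootflash:m9500-sf2ek9-mz.6.2.5.bin
switch(config)# exit
switch# copy running-config startup-config
! Only now is it safe to issue the disruptive reload
switch# reload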

When the Cisco MDS 9000 Series multilayer switch is first switched on or during reboot, the system BIOS on the supervisor module first runs power-on self-test (POST) diagnostics. The BIOS then runs the loader bootstrap function. The boot parameters are held in NVRAM and point to the location and name of both the kickstart and system images. The loader obtains the location of the kickstart file, usually on bootflash, and verifies the kickstart image before loading it. The kickstart loads the Linux kernel and device drivers; the kickstart then verifies and loads the system image, usually on bootflash. Finally, the system image loads the Cisco NX-OS Software, checks the file systems, and proceeds to load the startup configuration that contains the switch configuration from NVRAM. Figure 9-4 portrays the boot sequence.

Figure 9-4 Boot Sequence

For the MDS Director switches with dual supervisor modules, the upgrade sequence steps are as follows:

1. Upgrade the BIOS on the active and standby supervisor modules and the data modules.
2. Bring up the standby supervisor module with the new kickstart and system images.
3. Switch over from the active supervisor module to the upgraded supervisor module.
4. Bring up the old active supervisor module with the new kickstart and system images.
5. Perform a non-disruptive image upgrade for each data module (one at a time).
6. Upgrade complete.

Upgrading to Cisco NX-OS on an MDS 9000 Series Switch
During any firmware upgrade, use the latest Cisco MDS NX-OS Software on the Cisco MDS 9000 director series switch. In this section we discuss firmware upgrade steps for a director switch with dual supervisor modules. Be aware that if you are upgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the upgrade. For MDS switches with only one supervisor module, use the console connection; the standby supervisor steps (Steps 6 and 7) will not be executed.

To upgrade the switch, follow these steps:

Step 1. Verify the following physical connections for the new Cisco MDS 9500 family switch: The console port is physically connected to a computer terminal (or terminal server). The management 10/100 Ethernet port (mgmt0) is connected to an external hub, switch, or router.

Step 2. Issue the copy running-config startup-config command to store your current running configuration. You can also create a backup of your existing configuration to a file by issuing the copy running-config bootflash:backup_config.txt command.

Step 3. Install licenses (if necessary) to ensure that the required features are available on the switch.

Step 4. Verify that the switch is running the required software version by issuing the show version command.

Step 5. If you need more space on the active supervisor module bootflash, delete unnecessary files to make space available. Use the delete bootflash: filename command to remove unnecessary files.

Click here to view code image
switch# del m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch# del m9500-sf2ek9-mz-npe.6.2.5.bin

Step 6. Verify that space is available on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.

Click here to view code image
switch# attach mod x (where x is the module number of the standby supervisor)
switch(standby)# dir bootflash:
12288     Aug 26 19:06:14 2011  lost+found/
16206848  Jul 01 10:54:49 2011  m9500-sf2ek9-kickstart-mz.6.2.5.bin
16604160  Jul 01 10:20:07 2011  m9500-sf2ek9-mz.6.2.5.bin
Usage for bootflash://sup-local
122811392 bytes used
61748224 bytes free
184559616 bytes total
switch(standby)# exit ( to return to the active supervisor )

Step 7. If you need more space on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch, delete unnecessary files to make space available.

Click here to view code image
switch(standby)# del bootflash: m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch(standby)# del bootflash: m9500-sf2ek9-mz.6.2.5.bin

Step 8. Verify that your switch is running compatible hardware by checking the Release Notes.

Step 9. Access the Software Download Center and select the required Cisco MDS NX-OS Release 6.2(x) image file.

Step 10. Download the files to an FTP or TFTP server.

Step 11. Ensure that the required space is available in the bootflash: directory for the image file(s) to be copied using the dir bootflash: command.

Step 12. Copy the Cisco MDS NX-OS kickstart and system images to the active supervisor module bootflash using FTP or TFTP.

Click here to view code image
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-kickstart-mz.6.2.5.bin bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-mz.6.2.5.bin bootflash:m9500-sf2ek9-mz.6.2.5.bin
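Before launching the installation, you can sanity-check a copied image file directly from bootflash. A minimal sketch, with abbreviated illustrative output:

switch# show version image bootflash:m9500-sf2ek9-mz.6.2.5.bin
image name: m9500-sf2ek9-mz.6.2.5.bin
...
! A corrupted or truncated image file is reported as invalid at this point.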

Step 13. Perform the upgrade by issuing the install all command.

Click here to view code image
switch# install all kickstart m9500-sf2ek9-kickstart-mz.6.2.5.bin system m9500-sf2ek9-mz.6.2.5.bin

The install all process verifies all the images before installation and detects incompatibilities. The process checks configuration compatibility. After information is provided about the impact of the upgrade before it takes place, the script will check whether you want to continue with the upgrade.

Do you want to continue with the installation (y/n)? [n] y

If the input is entered as y, the install will continue. You can display the status of a nondisruptive upgrade by using the show install all status command. The output displays the status only after the switch has rebooted with the new image. All actions preceding the reboot are not captured in this output, because when you enter the install all command using a Telnet session, the session is disconnected when the switch reboots. When you can reconnect to the switch through a Telnet session, the upgrade might already be complete, in which case the output will display the status of the upgrade.

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
During any firmware downgrade, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the downgrade. In this section we discuss firmware downgrade steps for an MDS director switch with dual supervisor modules. Be aware that if you are downgrading through the management interface, use the console connection. For MDS switches with only one supervisor module, the standby supervisor steps will not be executed.

When downgrading any switch in the Cisco MDS 9000 family, avoid using the reload command. Use the install all command to downgrade the switch and handle configuration conversions. You should first read the Release Notes and follow the steps for specific NX-OS downgrade instructions. Here we outline the major steps for the downgrade:

Step 1. Verify that the system image files for the downgrade are present on the active supervisor module bootflash with the dir bootflash: command.

Click here to view code image
switch# dir bootflash:

Step 2. If the software image file is not present, download it from an FTP or TFTP server to the active supervisor module bootflash. Ensure that the required space is available on the active supervisor. If you need more space on the active supervisor module bootflash, use the delete command to remove unnecessary files. (Refer to Step 5 in the upgrade section.)

Click here to view code image
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-mz.6.2.9.bin bootflash:m9700-sf3ek9-mz.6.2.9.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-kickstart-mz.6.2.9.bin bootflash:m9700-sf3ek9-kickstart-mz.6.2.9.bin

Step 3. If you need more space on the standby supervisor module bootflash, delete unnecessary files to make space available. (Refer to Steps 6 and 7 in the upgrade section.)

Step 4. Issue the show incompatibility system command (see Example 9-7) to determine whether you need to disable any features not supported by the earlier release.

Example 9-7 show incompatibility system Output on MDS 9500 Switch

Click here to view code image
switch# show incompatibility system bootflash:m9500-sf2ek9-mz.5.2.1.bin.S74
The following configurations on active are incompatible with the system image
1) Service : port-channel , Capability : CAP_FEATURE_AUTO_CREATED_41_PORT_CHANNEL
Description : auto create enabled ports or auto created port-channels are present
Capability requirement : STRICT
Disable command :
1. Convert autocreated port channels to be persistent (port-channel 1 persistent)
2. Disable autocreate on interfaces (no channel-group auto)

Step 5. Disable any features that are incompatible with the downgrade system image.

Click here to view code image
switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# interface fcip 31
switch(config-if)# no channel-group auto
switch(config-if)# end
switch# port-channel 127 persistent
switch#

Step 6. Save the configuration using the copy running-config startup-config command.

Click here to view code image
switch# copy running-config startup-config

Step 7. Issue the install all command to downgrade the software.

Click here to view code image
switch# install all kickstart bootflash:m9500-sf2ek9-kickstart-mz.5.2.1.bin.S74 system bootflash:m9500-sf2ek9-mz.5.2.1.bin.S74

Step 8. Verify the status of the modules on the switch using the show module command.

Configuring Interfaces
The main function of a switch is to relay frames from one data link to another. To relay the frames, the characteristics of the interfaces through which the frames are received and sent must be defined. The configured interfaces can be Fibre Channel interfaces, Gigabit Ethernet interfaces, the management interface (mgmt0), or VSAN interfaces. Each physical Fibre Channel interface in a switch may operate in one of several port modes: E port, F port, FL port, TL port, TE port, ST port, SD port, and B port (see Figure 9-5). Besides these modes, each interface may be configured in auto or Fx port modes. These two modes determine the port type during interface initialization. Interfaces are created in VSAN 1 by default. When a module is removed and replaced with the same type of module, the configuration is retained. If a different type of module is inserted, the original configuration is no longer retained.
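As a quick illustration of interface configuration, the following sketch sets an explicit port mode and speed on a Fibre Channel interface and brings it up. The interface number and values are illustrative only.

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport mode F      ! or auto/Fx to negotiate at initialization
switch(config-if)# switchport speed 4000  ! fix the speed at 4 Gbps
switch(config-if)# no shutdown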

. Figure 9-5 Cisco MDS 9000 Family Switch Port Modes The interface state depends on the administrative configuration of the interface and the dynamic state of the physical link.” we discussed the different Fibre Channel port types. Table 9-4 Interface States Graceful Shutdown Interfaces on a port are shut down by default (unless you modified the initial configuration). “Data Center Storage Architecture. In Table 9-4 the interface states are summarized. In Chapter 6. The Cisco NX-OS software implicitly performs a graceful shutdown in response to either of the following actions for interfaces operating in the E port mode: If you shut down an interface.longer retained.

If you shut down an interface
If a Cisco NX-OS software application executes a port shutdown as part of its function

A graceful shutdown ensures that no frames are lost when the interface is shutting down. When a shutdown is triggered either by you or the Cisco NX-OS software, the switches connected to the shutdown link coordinate with each other to ensure that all frames in the ports are safely sent through the link before shutting down. This enhancement reduces the chance of frame loss.

Port Administrative Speeds
By default, the port administrative speed for an interface is automatically calculated by the switch. Autosensing speed is enabled on all 4-Gbps and 8-Gbps switching module interfaces by default. This configuration enables the interfaces to operate at speeds of 1 Gbps, 2 Gbps, or 4 Gbps on the 4-Gbps switching modules, and 8 Gbps on the 8-Gbps switching modules. When autosensing is enabled for an interface operating in dedicated rate mode, 4 Gbps of bandwidth is reserved, even if the port negotiates at an operating speed of 1 Gbps or 2 Gbps.

Frame Encapsulation
The switchport encap eisl command applies only to SD (SPAN destination) port interfaces. This command determines the frame format for all frames transmitted by the interface in SD port mode. If the encapsulation is set to EISL, all outgoing frames are transmitted in the EISL frame format, regardless of the SPAN sources. In SD port mode, an interface functions as a SPAN. An SD port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar Switch Probe) that is attached to the SD port.

Bit Error Thresholds
The bit error rate threshold is used by the switch to detect an increased error rate before performance degradation seriously affects traffic. The bit errors can occur for the following reasons:

Faulty or bad cable
Faulty or bad GBIC or SFP
GBIC or SFP is specified to operate at 1 Gbps but is used at 2 Gbps
GBIC or SFP is specified to operate at 2 Gbps but is used at 4 Gbps
Short haul cable is used for long haul or long haul cable is used for short haul
Momentary sync loss
Loose cable connection at one or both ends
Improper GBIC or SFP connection at one or both ends

A bit error rate threshold is detected when 15 error bursts occur in a 5-minute period. By default, the switch disables the interface when the threshold is reached. You can enter a shutdown and no shutdown command sequence to re-enable the interface.
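Once the faulty cable or SFP has been replaced, a port disabled by the bit error threshold can be re-enabled by bouncing the interface, as the following sketch shows (fc1/5 is an illustrative port):

switch# configure terminal
switch(config)# interface fc1/5
switch(config-if)# shutdown
switch(config-if)# no shutdown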

Local Switching
Local switching can be enabled in Generation 4 modules, which allows traffic to be switched directly with a local crossbar when the traffic is directed from one port to another on the same line card. By using local switching, an extra switching step is avoided, which decreases the latency. When using local switching, note the following guidelines:

All ports need to be in shared mode, which usually is the default state. To place a port in shared mode, enter the switchport rate-mode shared command.
E ports are not allowed in the module because they must be in dedicated mode.

Note Local switching is not supported on the Cisco MDS 9710 switch.

Dedicated and Shared Rate Modes
Ports on Cisco MDS 9000 Family line cards are grouped into port groups that have a fixed amount of bandwidth per port group. The Cisco MDS 9000 Family allows for the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. Ports in the port group can have bandwidth dedicated to them, or ports can share a pool of bandwidth. For ports that require high-sustained bandwidth, such as ISL ports, storage and tape array ports, and ports on high-bandwidth servers, you can have bandwidth dedicated to them in a port group by using the switchport rate-mode dedicated command. For other ports, typically servers that access shared storage-array ports (that is, storage ports that have higher fan-out ratios), you can share the bandwidth in a port group by using the switchport rate-mode shared command. When planning port bandwidth requirements, be sure not to exceed the available bandwidth in a port group. When configuring the ports, allocation of the bandwidth within the port group is important. Table 9-5 lists the bandwidth and port group configuration for Fibre Channel modules.

Table 9-5 Bandwidth and Port Group Configurations for Fibre Channel Modules
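A minimal sketch of assigning rate modes within a port group, using the commands described above (the port numbers are illustrative):

switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport rate-mode dedicated   ! for example, an ISL or array port
switch(config-if)# exit
switch(config)# interface fc1/2 - 6
switch(config-if)# switchport rate-mode shared      ! typical host-facing ports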

For example, a Cisco MDS 9513 Multilayer Director with a Fabric 3 module installed, and using a 48-port 8-Gbps Advanced Fibre Channel module, has eight port groups of 6 ports each. Each port group has 32.4 Gbps of bandwidth. You cannot configure all 6 ports of a port group at the 8-Gbps dedicated rates, because that would require 48 Gbps of bandwidth and the port group has only 32.4 Gbps of bandwidth available. You can, however, configure all 6 ports in shared rate mode, so that the ports run at 8 Gbps and are oversubscribed at a rate of 1.48:1 (6 ports × 8 Gbps = 48 Gbps/32.4 Gbps). You can also mix dedicated and shared rate ports in a port group. In Chapter 6, we discussed the fan-out ratio. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. This oversubscription rate is well below the oversubscription rate of the typical storage array port (fan-out ratio) and does not affect performance.

Slow Drain Device Detection and Congestion Avoidance
All data traffic between end devices in a SAN fabric is carried by Fibre Channel Class 3. In some cases, the traffic is carried by Class 2 services that use link-level, per-hop-based, buffer-to-buffer flow control. These classes of service do not support end-to-end flow control. When there are slow devices attached to the fabric, the end devices do not accept the frames at the configured or negotiated rate. The slow devices lead to ISL credit shortage in the traffic destined for these devices, and they congest the links. The credit shortage affects the unrelated flows in the fabric that use the same ISL link, even though destination devices do not experience slow drain.

This feature provides various enhancements to detect slow drain devices that are causing congestion in the network and also provides a congestion avoidance function. This feature is focused mainly on the edge ports that are connected to slow drain devices. The goal is to avoid or minimize the frames being stuck in the edge ports due to slow drain devices that are causing ISL blockage. To avoid or minimize the stuck condition, you can configure a lesser frame timeout for the ports. The lesser frame timeout value helps to alleviate the slow drain condition that affects the fabric by dropping the packets on the edge ports sooner than the time they actually get timed out (500 ms). This function frees the buffer space in ISL, which can be used by other unrelated flows that do not experience the slow drain condition. No-credit timeout drops all packets after the slow drain is detected using the configured thresholds.

Note This feature is used mainly for edge ports that are connected to slow edge devices. Even though this feature can be applied to ISLs as well, we recommend that you apply this feature only for edge F ports and retain the default configuration for ISLs as E and TE ports.

Management Interfaces
You can remotely configure the switch through the management interface (mgmt0). To configure a connection on the mgmt0 interface, you must configure either the IP version 4 (IPv4) parameters (IP address, subnet mask, and default gateway) or the IP version 6 (IPv6) parameters so that the switch is reachable. The management port (mgmt0) is autosensing and operates in full-duplex mode at a speed of 10/100/1000 Mbps. Autosensing supports both the speed and the duplex mode. On a Supervisor-1 module, the default speed is 100 Mbps and the default duplex mode is auto. On a Supervisor-2 module, the default speed is auto and the default duplex mode is auto.
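A minimal sketch of an initial mgmt0 configuration follows. The addresses reuse the 10.2.8.0/24 management network seen in this chapter's examples and are illustrative only.

switch# configure terminal
switch(config)# interface mgmt 0
switch(config-if)# ip address 10.2.8.44 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.2.8.1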

VSAN Interfaces
VSANs apply to Fibre Channel fabrics and enable you to configure multiple isolated SAN topologies within the same physical infrastructure. You can create an IP interface on top of a VSAN and then use this interface to send frames to this VSAN. To use this feature, you must configure the IP address for this VSAN. VSAN interfaces cannot be created for nonexisting VSANs. Here are the guidelines when creating or deleting a VSAN interface:

Create a VSAN before creating the interface for that VSAN. If a VSAN does not exist, the interface cannot be created; it is not created automatically.
Configure each interface only in one VSAN.
If you delete the VSAN, the attached interface is automatically deleted.

All the NX-OS CLI commands for interface configuration and verification are summarized in Table 9-6.
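Following these guidelines, a minimal sketch of creating a VSAN interface and assigning it an IP address looks like this (VSAN 47 matches our sample topology; the IP subnet is hypothetical):

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 47
switch(config-vsan-db)# exit
switch(config)# interface vsan 47
switch(config-if)# ip address 10.47.0.1 255.255.255.0
switch(config-if)# no shutdown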




Table 9-6 Summary of NX-OS CLI Commands for Interface Configuration and Verification

Verify Initiator and Target Fabric Login
In Chapters 6 and 7 we discussed a theoretical explanation of Virtual SAN (VSAN), zoning, Fibre Channel concepts, Fibre Channel Services, and SAN topologies. In this section we discuss the configuration and verification steps to build a Fibre Channel SAN topology. In this chapter we build up the following sample SAN topology together to practice what we have learned so far (see Figure 9-6).

Figure 9-6 Our Sample SAN Topology Diagram

VSAN Configuration
You can achieve higher security and greater stability in Fibre Channel fabrics by using virtual SANs (VSANs) on Cisco MDS 9000 family switches and Cisco Nexus 5000, Nexus 6000, and Nexus 7000 Series switches, and UCS Fabric Interconnect. VSANs provide isolation among devices that are physically connected to the same fabric. With VSANs you can create multiple logical SANs over a common physical infrastructure. Each VSAN can contain up to 239 switches and has an independent address space that allows identical Fibre Channel IDs (FC IDs) to be used simultaneously in different VSANs. VSANs have the following attributes:

VSAN ID: The VSAN ID identifies the VSAN as the default VSAN (VSAN 1), user-defined VSANs (VSAN 2 to 4093), and the isolated VSAN (VSAN 4094). If a port is configured in the isolated VSAN, it is disabled.

State: The administrative state of a VSAN can be configured to an active (default) or suspended state. The active state of a VSAN indicates that the VSAN is configured and enabled. By enabling a VSAN, you activate the services for that VSAN. The suspended state of a VSAN indicates that the VSAN is configured but not enabled. All ports in a suspended VSAN are disabled. By suspending a VSAN, you can preconfigure all the VSAN parameters for the whole fabric and activate the VSAN immediately. Use this state to deactivate a VSAN without losing the VSAN's configuration. A VSAN is in the operational state if the VSAN is active and at least one port is up. This state cannot be configured; it indicates that traffic can pass through the VSAN.

VSAN name: This text string identifies the VSAN for management purposes. The name can be from 1 to 32 characters long, and it must be unique across all VSANs. By default, the VSAN name is a concatenation of VSAN and a four-digit string representing the VSAN ID. For example, the default name for VSAN 3 is VSAN0003.

Load balancing attributes: These attributes indicate the use of the source-destination ID (src-dst-id) or the originator exchange OX ID (src-dst-ox-id, the default) for load balancing path selection.

Interop mode: Interoperability enables the products of multiple vendors to interact with each other. Fibre Channel standards guide vendors toward common external Fibre Channel interfaces. Each vendor has a regular mode and an equivalent interoperability mode, which specifically turns off advanced or proprietary features and provides the product with a more amiable standards-compliant implementation. Cisco NX-OS software supports the following four interop modes:

Mode 1: Standards-based interop mode that requires all other vendors in the fabric to be in interop mode
Mode 2: Brocade native mode (Core PID 0)
Mode 3: Brocade native mode (Core PID 1)
Mode 4: McData native mode

Port VSAN Membership
Port VSAN membership on the switch is assigned on a port-by-port basis. By default, each port belongs to the default VSAN. You can assign VSAN membership to ports using one of two methods:

Statically: By assigning VSANs to ports.
Dynamically: By assigning VSANs based on the device WWN. This method is referred to as dynamic port VSAN membership (DPVM). (DPVM is explained in Chapter 6.)

In Example 9-8, we will be configuring and verifying VSANs on our sample topology that was shown in Figure 9-6.

Example 9-8 VSAN Creation and Verification on MDS-9222i Switch

Click here to view code image
!Initial Configuration and entering the VSAN configuration database
MDS-9222i# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS-9222i(config)# vsan database
! Creating VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 ?
MDS-9222i(config-vsan-db)# vsan 47
! Configuring VSAN name for VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A
! Other attributes of a VSAN configuration can be displayed with a question mark after
!VSAN name
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A ?
<CR>
interop        Interoperability mode value
loadbalancing  Configure loadbalancing scheme
suspend        Suspend vsan
! Load balancing attributes of a VSAN
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A loadbalancing ?
src-dst-id     Src-id/dst-id for loadbalancing
src-dst-ox-id  Ox-id/src-id/dst-id for loadbalancing(Default)
! To exit configuration mode issue command end
MDS-9222i(config-vsan-db)# end
! Verifying the VSAN state; the operational state is down since no FC port is up
MDS-9222i# show vsan 47
vsan 47 information
name:VSAN-A state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:down
MDS-9222i(config)# vsan database
! Interface fc1/1 and fc 1/3 are included in VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 interface fc1/1, fc1/3
MDS-9222i(config-vsan-db)# interface fc1/1, fc1/3
! Change the state of the fc interfaces
MDS-9222i(config-if)# no shut
! Verifying status of the interfaces
MDS-9222i(config-if)# show int fc 1/1, fc1/3 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/1      47    FX     on     up      swl  F     2       --
fc1/3      47    FX     on     up      swl  FL    4       --

Note, the Admin Mode is FX (the default), which allows the port to come up in either F mode or FL mode:

F mode: Point-to-point mode for server connections and most storage connections (storage ports in F mode have only one target logging in to that port, associated with the storage array port).

FL mode: Arbitrated loop mode—storage connections that allow multiple storage targets to log in to a single switch port. We have done this just so that we can see multiple storage targets for zoning.

The last steps for the VSAN configuration will be verifying the VSAN membership and usage (see Example 9-9).

Example 9-9 Verification Steps for VSAN Membership and Usage

Click here to view code image
! Verifying the VSAN membership
MDS-9222i# show vsan membership
vsan 1 interfaces:
    fc1/2    fc1/4    fc1/5    fc1/6
    fc1/7    fc1/8    fc1/9    fc1/10
    fc1/11   fc1/12   fc1/13   fc1/14
    fc1/15   fc1/16   fc1/17   fc1/18
vsan 47 interfaces:
    fc1/1    fc1/3
vsan 4079(evfp_isolated_vsan) interfaces:
vsan 4094(isolated_vsan) interfaces:
! Verifying VSAN usage
MDS-9222i# show vsan usage
2 vsans configured
configured vsans: 1, 47
vsans available for configuration: 2-46, 48-4093

In a Fibre Channel fabric, each host or disk requires a Fibre Channel ID. Use the show flogi command to verify if a storage device is displayed in the FLOGI table, as in the next section. If the required device is displayed in the FLOGI table, the fabric login is successful. Examine the FLOGI database on a switch that is directly connected to the host HBA and connected ports.

The name server functionality maintains a database containing the attributes for all hosts and storage devices in each VSAN. Name servers allow a database entry to be modified by a device that originally registered the information. The name server permits an Nx port to register attributes during a PLOGI (to the name server) to obtain attributes of other hosts. These attributes are deregistered when the Nx port logs out either explicitly or implicitly. One instance of the name server process runs on each switch. In a multiswitch fabric configuration, the name server instances running on each switch share information in a distributed database. The name server stores name entries for all hosts in the FCNS database.

In Example 9-10, we continue verifying FCNS and FLOGI databases on our sample SAN topology.

Example 9-10 Verifying FCNS and FLOGI Databases on MDS-9222i Switch

Click here to view code image
! Verifying fc interface status
MDS-9222i(config-if)# show interface fc 1/1
fc1/1 is up
Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
Port WWN is 20:01:00:05:9b:77:0d:c0
Admin port mode is FX, trunk mode is on
snmp link state traps are enabled
Port mode is F, FCID is 0x410100

Port vsan is 47
Speed is 2 Gbps
Rate mode is shared
Transmit B2B Credit is 3
Receive B2B Credit is 16
Receive data field Size is 2112
Beacon is turned off
5 minutes input rate 88 bits/sec, 11 bytes/sec, 0 frames/sec
5 minutes output rate 32 bits/sec, 4 bytes/sec, 0 frames/sec
18 frames input, 1396 bytes
0 discards, 0 errors
0 CRC, 0 unknown class
0 too long, 0 too short
18 frames output, 3632 bytes
0 discards, 0 errors
0 input OLS, 0 LRR, 0 NOS, 2 loop inits
1 output OLS, 1 LRR, 0 NOS, 1 loop inits
16 receive B2B credit remaining
3 transmit B2B credit remaining
3 low priority transmit B2B credit remaining
Interface last changed at Mon Sep 22 09:55:15 2014

! The port mode is F mode, and an FCID is automatically assigned to the FC initiator;
! the VSAN 47 domain manager has assigned FCID 0x410100 to the device during FLOGI.
! Verifying flogi database; the initiator is connected to fc 1/1, the domain id is x41
MDS-9222i(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID     PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/1     47   0x410100 21:00:00:e0:8b:0a:f0:54 20:00:00:e0:8b:0a:f0:54
fc1/3     47   0x4100e8 c5:9a:00:06:2b:04:fa:ce c5:9a:00:06:2b:01:00:04
fc1/3     47   0x4100ef c5:9a:00:06:2b:01:01:04 c5:9a:00:06:2b:01:00:04
Total number of flogi = 3.
! FC Initiator with the pWWN 21:00:00:e0:8b:0a:f0:54 is attached to fc interface 1/1.
! JBOD FC target contains 2 NL_Ports (disks) and is connected to domain x041.
! Verifying name server database
! The assigned FCID 0x410100 and device WWN information are registered in the FCNS
! database and attain fabric-wide visibility.
MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9a:00:06:2b:04:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9a:00:06:2b:01:01:04           scsi-fcp:target
0x410100 N    21:00:00:e0:8b:0a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 3
MDS-9222i(config-if)# show fcns database detail
------------------------
VSAN:47 FCID:0x4100e8
------------------------
port-wwn (vendor) :c5:9a:00:06:2b:04:fa:ce
node-wwn :c5:9a:00:06:2b:01:00:04
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:target
symbolic-port-name :SANBlaze V4.

2.8.0 Port
symbolic-node-name :
port-type :NL
port-ip-addr :0.0.0.0
fabric-port-wwn :20:03:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface :fc1/3
switch name (IP address) :MDS-9222i (10.2.8.44)
------------------------
VSAN:47 FCID:0x4100ef
------------------------
port-wwn (vendor) :c5:9a:00:06:2b:01:01:04
node-wwn :c5:9a:00:06:2b:01:00:04
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:target
symbolic-port-name :SANBlaze V4.2.8.0 Port
symbolic-node-name :
port-type :NL
port-ip-addr :0.0.0.0
fabric-port-wwn :20:03:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface :fc1/3
switch name (IP address) :MDS-9222i (10.2.8.44)
------------------------
VSAN:47 FCID:0x410100
! Initiator connected to fc 1/1
! The vendor of the host bus adapter (HBA) is Qlogic, which we can see in the vendor pWWN
------------------------
port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic)
node-wwn :20:00:00:e0:8b:0a:f0:54
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:init
symbolic-port-name :
symbolic-node-name :QLA2342 FW:v3.03.25 DVR:v9.2.0.6
port-type :N
port-ip-addr :0.0.0.0
fabric-port-wwn :20:01:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic)
connected interface :fc1/1
switch name (IP address) :MDS-9222i (10.2.8.44)
Total number of entries = 3

The DCNM Device Manager for MDS 9222i displays the ports in two tabs: one is Summary, and the other is a GUI display of the physical ports in the Device tab (see Figure 9-7).

Figure 9-7 Cisco DCNM Device Manager for MDS 9222i Switch

In Table 9-7 we summarize NX-OS CLI commands that are related to VSAN configuration and verification.

if a host were to gain access to a disk that was being used by another host.Table 9-7 Summary of NX-OS CLI Commands for VSAN Configuration and Verification Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch Zoning is used to provide security and to restrict access between initiators and targets on the SAN fabric. For example. potentially with a different operating system. With many types of servers and storage devices on the network. the data on this disk could become corrupted. To avoid any compromise of . security is crucial.

crucial data within the SAN, zoning allows the user to overlay a security map. This map dictates which devices, namely hosts, can see which targets, thus reducing the risk of data loss.

A zone consists of multiple zone members. Members in a zone can access one another. Members in different zones cannot access one another. A zone set consists of one or more zones:

A zone set can be activated or deactivated as a single entity across all switches in the fabric.
Only one zone set can be activated at any time.
A zone can be a member of more than one zone set.

Zone Set Distribution: You can distribute full zone sets using one of two methods: one-time distribution or full zone set distribution. The active zone set is not part of the full zone set. You cannot make changes to an existing zone set and activate it if the full zone set is lost or is not propagated.

Zone Set Duplication: You can make a copy of a zone set and then edit it without altering the original zone set. You can copy an active zone set from the bootflash: directory, volatile: directory, or slot0, to one of the following areas:

To the full zone set
To a remote location (using FTP, SCP, SFTP, or TFTP)

In Example 9-11, we will continue building our sample topology with zoning configuration.

Example 9-11 Configuring Zoning on MDS-9222i Switch

Click here to view code image
! Creating zone in VSAN 47
MDS-9222i(config-if)# zone name zoneA vsan 47
MDS-9222i(config-zone)# member ?
! zone member can be configured with one of the below options
device-alias       Add device-alias member to zone
domain-id          Add member based on domain-id, port-number
fcalias            Add fcalias to zone
fcid               Add FCID member to zone
fwwn               Add Fabric Port WWN member to zone
interface          Add member based on interface
ip-address         Add IP address member to zone
pwwn               Add Port WWN member to zone
symbolic-nodename  Add Symbolic Node Name member to zone
! Inserting a disk (FC target) into zoneA
MDS-9222i(config-zone)# member pwwn c5:9a:00:06:2b:01:01:04
! Inserting pWWN of an initiator which is connected to fc 1/1 into zoneA
MDS-9222i(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:54
! Verifying whether the correct members are configured in zoneA
MDS-9222i(config-zone)# show zone name zoneA
zone name zoneA vsan 47
pwwn c5:9a:00:06:2b:01:01:04
pwwn 21:00:00:e0:8b:0a:f0:54
! Creating zone set in VSAN 47
MDS-9222i(config-zone)# zoneset name zonesetA vsan 47

Click here to view code image
MDS-9222i(config-zoneset)# member zoneA
MDS-9222i(config-zoneset)# exit
! Activating the zone set zonesetA in VSAN 47
MDS-9222i(config)# zoneset activate name zonesetA vsan 47
Zoneset activation initiated. check zone status
! Verifying the active zone set status in VSAN 47
MDS-9222i(config)# show zoneset active
zoneset name zonesetA vsan 47
zone name zoneA vsan 47
* fcid 0x4100ef [pwwn c5:9a:00:06:2b:01:01:04]
* fcid 0x410100 [pwwn 21:00:00:e0:8b:0a:f0:54]

The asterisks (*) indicate that a device is currently logged into the fabric (online). A missing asterisk may indicate an offline device or an incorrectly configured zone, possibly a mistyped pWWN.

Click here to view code image
MDS-9222i(config)# show zoneset brief
zoneset name zonesetA vsan 47
zone zoneA

Setting up multiple switches independently and then connecting them into one fabric can be done in DCNM. However, DCNM is really best at viewing and operating on existing fabrics (that is, just about everything except making initial connections of multiple switches into one fabric). In this chapter we are just doing the CLI versions of these commands. On the DCNM SAN active zone set, members can be displayed via Logical Domains (on the left pane). Select VSAN 47, and then select ZonesetA (see Figure 9-8).

Figure 9-8 Cisco DCNM Zone Members for VSAN 47

About Smart Zoning
Smart zoning implements hard zoning of large zones with fewer hardware resources than was previously required. The traditional zoning method allows each device in a zone to communicate with every other device in the zone. Smart zoning eliminates the need to create single initiator to single target zones. The device type information of each device in a smart zone is automatically populated from the Fibre Channel Name Server (FCNS) database as host, target, or both. By analyzing device-type information in the FCNS, useful combinations can be implemented at the hardware level by the Cisco MDS NX-OS software, and the combinations that are not used are ignored; for example, initiator-target pairs are configured, but not initiator-initiator. This information allows more efficient utilization of switch hardware by identifying initiator-target pairs and configuring those only in hardware.
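A minimal sketch of turning on smart zoning for our sample VSAN and confirming the setting (the smart-zoning field also appears in show zone status output):

switch# configure terminal
switch(config)# zone smart-zoning enable vsan 47
switch(config)# end
switch# show zone status vsan 47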

Smart zoning can be enabled at the VSAN level but can also be disabled at the zone level. In the event of a special situation, such as a disk controller that needs to communicate with another disk controller, smart zoning defaults can be overridden by the administrator to allow complete control.

About LUN Zoning and Assigning LUNs to Storage Subsystems
Logical unit number (LUN) zoning is a feature specific to switches in the Cisco MDS 9000 family. A storage device can have multiple LUNs behind it. If the device port is part of a zone, a member of the zone can access any LUN in the device. With LUN zoning, you can restrict access to specific LUNs associated with a device. When LUN 0 is not included within a zone, control traffic to LUN 0 (for example, REPORT_LUNS, INQUIRY) is supported, but data traffic to LUN 0 (for example, READ, WRITE) is denied. LUN masking and mapping restricts server access to specific LUNs. If LUN masking is enabled on a storage subsystem, and if you want to perform additional LUN zoning in a Cisco MDS 9000 family switch, the LUN number for each host bus adapter (HBA) from the storage subsystem should be configured with the LUN-based zone procedure.

About Enhanced Zoning
The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities explained in Chapter 6 and the enhanced zoning functionalities described in this section. By default, the enhanced zoning feature is disabled in all switches in the Cisco MDS 9000 family. The default zoning mode can be selected during the initial setup; the default zoning is basic mode. Enhanced zoning uses the same techniques and tools as basic zoning, with a few added commands. The flow of enhanced zoning, however, differs from that of basic zoning. For one thing, at the time enhanced zoning is enabled, a VSAN-wide lock, if available, is implicitly obtained for enhanced zoning. Second, all zone and zone set modifications for enhanced zoning include activation, as per standards requirements. Last, changes are either committed or aborted with enhanced zoning.

Enhanced zoning can be turned on per VSAN as long as each switch within that VSAN is enhanced zoning capable. Enhanced zoning needs to be enabled on only one switch within the VSAN in the existing SAN topology; the command will be propagated to the other switches within the VSAN automatically. To enable enhanced zoning in a VSAN, follow these steps:

Step 1. switch# config t (enter the configuration mode)

Step 2. switch(config)# zone mode enhanced vsan 222
Zoning mode is changed to enhanced for VSAN 222.

Step 3. Check the zone status to verify the change. The zone status can be verified with the show zone status command.

Click here to view code image
switch(config)# no zone mode enhanced vsan 222

The preceding command disables the enhanced zoning mode for a specific VSAN. The zone mode will display enhanced for enhanced mode, and the default zone distribution policy is full for enhanced mode.
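As a brief sketch of the enhanced zoning flow, the following adds a zone inside a session and then commits the change fabric-wide. The zone name zoneB and the member pWWN are hypothetical; VSAN 222 matches the earlier example.

switch# configure terminal
switch(config)# zone name zoneB vsan 222
switch(config-zone)# member pwwn 21:01:00:e0:8b:2a:f0:54
switch(config-zone)# exit
switch(config)# zone commit vsan 222
! Until the commit, the pending changes are not active on any switch in the VSAN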

In basic zone mode, the default zone distribution policy is active.

Click here to view code image
switch# show zone status vsan 222
VSAN: 222 default-zone: deny distribute: full Interop: default
mode: enhanced merge-control: allow
session: none
hard-zoning: enabled broadcast: enabled
smart-zoning: disabled
rscn-format: fabric-address
Default zone:
qos: none broadcast: disabled ronly: disabled
Full Zoning Database :
DB size: 44 bytes
Zonesets:0 Zones:0 Aliases: 0 Attribute-groups: 1
Active Zoning Database :
Database Not Available

The rules for enabling enhanced zoning are as follows:

Enhanced zoning needs to be enabled on only one switch in the VSAN of an existing converged SAN fabric. Enabling it on multiple switches within the same VSAN can result in failure to activate properly.
The switch that is chosen to initiate the migration to enhanced zoning will distribute its full zone database to the other switches in the VSAN, thereby overwriting the destination switches' full zone set database.
Enabling enhanced zoning does not perform a zone set activation.

In Table 9-8, we discuss the advantages of using enhanced zoning.

Table 9-8 Advantages of Enhanced Zoning .

Let's return to our sample SAN topology; so far we managed to build only separate SAN islands (see Figure 9-9).

Figure 9-9 SAN Topology with CLI Configuration

On MDS 9222i we configured two VSANs: VSAN 47 and VSAN 48. Both VSANs are active, and their operational states are up. (See Example 9-12.)

Example 9-12 Verifying Initiator and Target Logins

Click here to view code image
MDS-9222i(config-if)# show interface fc1/1, fc1/3, fc1/4, fc1/12 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/1      47    FX     on     up      swl  F     2       --
fc1/3      47    FX     on     up      swl  FL    4       --
fc1/4      48    FX     on     up      swl  FL    4       --
MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9a:00:06:2b:04:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9a:00:06:2b:01:01:04           scsi-fcp:target
0x410100 N    21:00:00:e0:8b:0a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 3
VSAN 48:
--------------------------------------------------------------------------

FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9b:00:06:2b:14:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9b:00:06:2b:02:01:04           scsi-fcp:target
Total number of entries = 2
MDS-9148(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID     PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/1     48   0x420000 21:01:00:e0:8b:2a:f0:54 20:01:00:e0:8b:2a:f0:54
Total number of flogi = 1.
MDS-9148(config-if)# show fcns database
VSAN 48:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x420000 N    21:01:00:e0:8b:2a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 1

Table 9-9 summarizes NX-OS CLI commands for zone configuration and verification. The zoning features that require the Enterprise package license are LUN zoning, read-only zones, zone-based traffic prioritizing, and zone-based FC QoS.






Table 9-9 Summary of NX-OS CLI Commands for Zone Configuration and Verification

VSAN Trunking and Setting Up ISL Port
VSAN trunking is a feature specific to switches in the Cisco MDS 9000 Family. Trunking enables interconnect ports to transmit and receive frames in more than one VSAN, over the same physical link. Trunking is supported on E ports and F ports. Figure 9-10 represents the possible trunking scenarios in a SAN with MDS core switches, NPV switches, third-party core switches, and HBAs.

Figure 9-10 Trunking F Ports
The trunking feature includes the following key concepts:
TE port: If trunk mode is enabled in an E port and that port becomes operational as a trunking E port, it is referred to as a TE port.
TF port: If trunk mode is enabled in an F port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking F port, it is referred to as a TF port.
TN port: If trunk mode is enabled (not currently supported) in an N port (see the link 1b in Figure 9-10) and that port becomes operational as a trunking N port, it is referred to as a TN port.
TNP port: If trunk mode is enabled in an NP port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking NP port, it is referred to as a TNP port.
TF PortChannel: If trunk mode is enabled in an F port channel (see the link 4 in Figure 9-10) and that port channel becomes operational as a trunking F port channel, it is referred to as a TF port channel. Cisco Port Trunking Protocol (PTP) is used to carry tagged frames.
TF-TN port link: A single link can be established to connect an F port to an HBA to carry tagged frames (see the links 1a and 1b in Figure 9-10) using Exchange Virtual Fabrics Protocol (EVFP). A Fibre Channel VSAN is called a Virtual Fabric and uses a VF_ID in place of the VSAN ID. By default, the VF_ID is 1 for all ports. When an N port supports trunking, a pWWN is defined for each VSAN and called a logical pWWN. In the case of MDS core switches, the pWWNs for which the N port requests additional FC_IDs are called virtual pWWNs.
TF-TNP port link: A single link can be established to connect a TF port to a TNP port using the PTP protocol to carry tagged frames (see the link 2 in Figure 9-10). PTP is used because PTP also supports trunking port channels.
A server can reach multiple VSANs through a TF port without inter-VSAN routing (IVR).

In Table 9-10, we summarize supported trunking protocols.
Table 9-10 Supported Trunking Protocols
By default, the trunking protocol is enabled on E ports and disabled on F ports. If the trunking protocol is disabled on a switch, no port on that switch can apply new trunk configurations. Existing trunk configurations are not affected: the TE port continues to function in trunk mode, but only supports traffic in VSANs that it negotiated with previously (when the trunking protocol was enabled). Also, other switches that are directly connected to this switch are similarly affected on the connected interfaces. In some cases, you may need to merge traffic from different port VSANs across a nontrunking ISL. If so, disable the trunking protocol.
Trunk Modes
By default, trunk mode is enabled on all Fibre Channel interfaces (Mode: E, F, FL, Fx, ST, and SD) on non-NPV switches. On NPV switches, trunk mode is disabled by default. You can configure trunk mode as on (enabled), off (disabled), or auto (automatic). The trunk mode configuration at the two ends of an ISL, between two switches, determines the trunking state of the link and the port modes at both ends. The preferred configuration on the Cisco MDS 9000 family switches is one side of the trunk set to auto and the other side set to on. When connected to a third-party switch, the trunk mode configuration on E ports has no effect; the ISL is always in a trunking disabled state. In the case of F ports, if the third-party core switch ACCs the physical FLOGI with the EVFP bit configured, then the EVFP protocol enables trunking on the link. In Table 9-11, we summarize the trunk mode status between switches.
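Trunk mode, and the trunk-allowed VSAN list discussed next, are both set at the interface level. A minimal sketch, with illustrative interface and VSAN numbers, might look like this:

switch# configure terminal
switch(config)# interface fc1/5
switch(config-if)# switchport mode E
! One side auto, the other side on, is the preferred MDS configuration
switch(config-if)# switchport trunk mode auto
! Restrict the trunk to specific VSANs instead of the default 1-4093
switch(config-if)# switchport trunk allowed vsan 47-48
switch(config-if)# no shutdown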

Table 9-11 Trunk Mode Status Between Switches
Each Fibre Channel interface has an associated trunk-allowed VSAN list. In TE-port mode, frames are transmitted and received in one or more VSANs specified in this list. By default, the VSAN range (1 through 4093) is included in the trunk-allowed list. The trunk-allowed VSANs configured for TE, TF, and TNP links are used by the trunking protocol to determine the allowed active VSANs in which frames can be received or transmitted. There are a couple of important guidelines and limitations for the trunking feature: F ports support trunking in Fx mode; MDS does not enforce the uniqueness of logical pWWNs across VSANs; and if a trunking-enabled E port is connected to a third-party switch, the trunking protocol ensures seamless operation as an E port.
Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E ports) or trunking E ports (TE ports) can be bundled into a PortChannel. ISL ports can reside on any switching module, and they do not need a designated master port. If a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration.
Port Channel Modes
You can configure each port channel with a channel group mode parameter to determine the port channel protocol behavior for all member ports in this channel group. The possible values for a channel group mode are as follows:
ON (default): The member ports only operate as part of a port channel or remain inactive. In this mode, the port channel protocol is not initiated. However, if a port channel protocol frame is received from a peer port, the software indicates its nonnegotiable status. If the peer port, while configured in a channel group, does not support the port channel protocol, or responds with a nonnegotiable status, it defaults to the ON mode behavior.
ACTIVE: The member ports initiate port channel protocol negotiation with the peer port(s) regardless of the channel group mode of the peer port. The ACTIVE port channel mode allows automatic recovery without explicitly enabling and disabling the port channel member ports at either end.

On our sample SAN topology, we will connect the two SAN islands by creating a port channel using ports 5 and 6 on both MDS switches. Figure 9-11 portrays the minimum required configuration steps to set up an ISL port channel with mode active between two MDS switches.
Figure 9-11 Setting Up ISL Port on Sample SAN Topology
On the 9222i, ports need to be put into dedicated rate mode to be configured as ISLs (or be set to mode auto). Ports in auto mode will automatically come up as ISLs (E ports) after they are connected to another switch. Ports are also in trunk on mode, so when the ISL comes up it will automatically carry all VSANs in common between the switches. On MDS 9222i there are three port groups available, and each port group consists of six Fibre Channel ports. Groups of six ports have a total of 12.8 Gbps of bandwidth. When we set these two ports to dedicated, they will each reserve a dedicated 4 Gbps of bandwidth (leaving 4.8 Gbps shared between the other group members). The show port-resources module command shows the default and configured interface rate mode, buffer-to-buffer credits, and bandwidth. In Example 9-13, we show the verification and configuration steps on the NX-OS CLI for port channel configuration on our sample SAN topology.
Example 9-13 Verifying Port Channel on Our Sample SAN Topology
Click here to view code image
! Before changing the ports to dedicated mode, the interface Admin Mode is FX
MDS-9222i# show interface fc1/5, fc1/6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     FX     on     down    swl  --    --      --
fc1/6      1     FX     on     down    swl  --    --      --

! On 9222i ports need to be put into dedicated rate mode in order to be
! configured as ISL (or be set to mode auto)
MDS-9222i(config)# show port-resources module 1
Module 1
  Available dedicated buffers are 3987

 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 12.0 Gbps
  Allocated dedicated bandwidth is 0.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              16            4.0          shared
  fc1/6                              16            4.0          shared
  <snip>

! Changing the rate-mode to dedicated
MDS-9222i(config)# inte fc1/5-6
MDS-9222i(config-if)# switchport rate-mode dedicated
MDS-9222i(config-if)# switchport mode auto
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on     down    swl  --    --      --
fc1/6      1     auto   on     down    swl  --    --      --

! Interface fc1/5-6 are configured for dedicated rate mode
MDS-9222i(config-if)# show port-resources module 1
Module 1
  Available dedicated buffers are 3543

 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 4.8 Gbps
  Allocated dedicated bandwidth is 8.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              250           4.0          dedicated
  fc1/6                              250           4.0          dedicated
  <snip>

! Configuring port-channel interface
MDS-9222i(config-if)# channel-group 56
fc1/5 fc1/6 added to port-channel 56 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
MDS-9222i(config-if)# no shut
MDS-9222i(config-if)# int port-channel 56
MDS-9222i(config-if)# channel mode active
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status    SFP  Oper  Oper    Port
                 Mode   Trunk                 Mode  Speed   Channel
                        Mode                        (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on     trunking  swl  TE    4       56
fc1/6      1     auto   on     trunking  swl  TE    4       56

! Verifying the port-channel interface; trunk mode is on
MDS-9222i(config-if)# show int port-channel 56
port-channel 56 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:38:00:05:9b:77:0d:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 1
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (1,47-48)
    Trunk vsans (up)                       (1,48)
    Trunk vsans (isolated)                 (47)
    Trunk vsans (initializing)             ()
    5 minutes input rate 1024 bits/sec, 128 bytes/sec, 1 frames/sec
    5 minutes output rate 856 bits/sec, 107 bytes/sec, 1 frames/sec
      363 frames input, 36000 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      361 frames output, 29776 bytes
        0 discards, 0 errors
      2 input OLS, 4 LRR, 2 NOS, 22 loop inits
      2 output OLS, 0 LRR, 0 NOS, 2 loop inits
    Member[1] : fc1/5
    Member[2] : fc1/6

! VSAN 1 and VSAN 48 are up but VSAN 47 is in isolated status because VSAN 47 is not configured
! on MDS 9148 switch; MDS 9148 is only carrying SAN-B traffic

! During ISL bring up there will be principal switch election;
! for VSAN 1 the one with the lowest priority won the election, which is MDS
! 9222i in our sample SAN topology
MDS-9222i# show fcdomain vsan 1
The local switch is the Principal Switch.

Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:00:05:9b:77:0d:c1
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 2
        Current domain ID: 0x51(81)

Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)

Principal switch run time information:
        Running priority: 2

Interface                 Role          RCF-reject
----------------          ----------    ------------
port-channel 56           Downstream    Disabled
----------------          ----------    ------------

MDS-9148(config-if)# show fcdomain vsan 1
The local switch is a Subordinated Switch.

Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:54:7f:ee:05:07:89
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 128

        Current domain ID: 0x52(82)

Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)

Principal switch run time information:
        Running priority: 2

Interface                 Role          RCF-reject
----------------          ----------    ------------
port-channel 56           Upstream      Disabled
----------------          ----------    ------------

MDS-9222i# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

! The asterisk (*) indicates which interface became online first
MDS-9148(config-if)# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *
To bring up the rest of our sample SAN topology, we need to provide access for our initiator, which is connected to fc1/1 of MDS 9148 via VSAN 48, to a LUN that resides on a storage array connected to fc1/14 of MDS 9222i (VSAN 48). Zoning through the switches allows traffic on the switches to connect only chosen sets of servers and storage end devices on a particular VSAN. (Zoning on separate VSANs is completely independent.) Usually members of a zone are identified using the end device's World Wide Port Name (WWPN), although they could be identified using switch physical port numbers. Figure 9-12 portrays the necessary zoning configuration.
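On the CLI, the configuration portrayed in Figure 9-12 might look like the following rough sketch; the zone and zone set names are illustrative, and the pWWNs are taken from Example 9-12:

MDS-9222i# configure terminal
! Create a zone in VSAN 48 containing the initiator and the target
MDS-9222i(config)# zone name Z_SERVER_ARRAY vsan 48
MDS-9222i(config-zone)# member pwwn 21:01:00:e0:8b:2a:f0:54
MDS-9222i(config-zone)# member pwwn c5:9b:00:06:2b:14:fa:ce
! Add the zone to a zone set and activate it fabric-wide
MDS-9222i(config-zone)# zoneset name ZS_VSAN48 vsan 48
MDS-9222i(config-zoneset)# member Z_SERVER_ARRAY
MDS-9222i(config-zoneset)# zoneset activate name ZS_VSAN48 vsan 48
MDS-9222i(config)# show zoneset active vsan 48

In enhanced zone mode, remember that these changes are held in a session and must be committed with zone commit vsan 48.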

Figure 9-12 Zoning Configuration on Our Sample SAN Topology
Our sample SAN topology is up with the preceding zone configuration. The FC initiator, via its dual host bus adapters (HBA), can access the same LUNs from SAN A (VSAN 47) and SAN B (VSAN 48). In Table 9-12, we summarize NX-OS CLI commands for trunking and port channels.


Table 9-12 Summary of NX-OS CLI Commands for Trunking and Port Channels
Reference List
Cisco MDS 9700 Series Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9700/mds_9700_hig/overview
Cisco MDS 9250i Multiservice Fabric Switch Hardware installation guide is available at http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/hw/9250i/installation/guide/connect.html#pgfId-419386
Cisco MDS 9148s Multilayer Switch Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148S/install/guide/9148S_h
Cisco MDS Install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-series-multilayer-switches/products-installation-guides-list.html
Cisco MDS NX-OS install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-guides-list.html
Cisco MDS 9000 NX-OS Software upgrade and downgrade guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/upgrade/guides/nxos/upgrade.html
Cisco MDS 9000 NX-OS and SAN OS software configuration guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-and-configuration-guides-list.html
Exam Preparation Tasks
Review All Key Topics
Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 9-13 lists a reference for these key topics and the page numbers on which each is found.

Table 9-13 Key Topics for Chapter 9
Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.
Define Key Terms
Define the following key terms from this chapter, and check your answers in the Glossary:
virtual storage-area networks (VSAN)
zoning
fport-channel-trunk
enhanced small form-factor pluggable (SFP+)
small form-factor pluggable (SFP)
10 gigabit small form factor pluggable (XFP)
host bus adapter (HBA)
Exchange Peer Parameters (EPP)
Exchange Virtual Fabrics Protocol (EVFP)
Extensible Markup Language (XML)
common information model (CIM)
Power On Auto Provisioning (POAP)
switch worldwide name (sWWN)
World Wide Port Name (pWWN)
slow drain
basic input/output system (BIOS)

random-access memory (RAM)
Port Trunking Protocol (PTP)
fan-out ratio
fan-in ratio

Part II Review
Keep track of your part review progress with the checklist shown in Table PII-1. Details on each task follow the table.
Table PII-1 Part II Review Checklist
Repeat All DIKTA Questions
For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.
Answer Part Review Questions
For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.
Review Key Topics
Browse back through the chapters and look for the Key Topic icons, which are relevant to the particular chapter. If you do not remember some details, take the time to reread those topics.
Self-Assessment Questionnaire
Each chapter of this book introduces several key learning items. This exercise lists a set of key self-assessment questions, shown in the following list. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them, which will help you remember the concepts and technologies of these key topics as you progress with the book. If you're not certain, you should go back and refresh on the key topics.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, Block Virtualization would apply to Chapter 8, number 6 is about Data Center Storage Architecture, number 7 is about Cisco MDS Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter.

Part III: Data Center Unified Fabric
Exam topics covered in Part III:
Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric
Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization
Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath
Chapter 10: Unified Fabric Overview
Chapter 11: Data Center Bridging and FCoE
Chapter 12: Multihop Unified Fabric

Chapter 10. Unified Fabric Overview
This chapter covers the following exam topics:
Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric
Unified Fabric is a holistic network architecture comprising switching, security, and services infrastructure components, which are designed to operate in physical, virtual, and cloud environments. By unifying and consolidating infrastructure components, Cisco Unified Fabric delivers flexibility and consistent architecture across a diverse set of environments. It uniquely integrates with servers, storage, and orchestration platforms for more efficient operations and greater scalability. Unified Fabric supports new concepts, such as IEEE Data Center Bridging enhancements, which improve the robustness of Ethernet and its responsiveness. In this chapter we discuss how Unified Fabric promises to be the architecture that fulfills the requirements for the next generation of Ethernet networks in the data center.
"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 10-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."
Table 10-1 "Do I Know This Already?" Section-to-Question Mapping
Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.
1. What is Unified Fabric?
a. It consolidates multiple types of traffic.
b. It leverages the Ethernet network.
c. It leverages IEEE data center bridging.
d. It supports virtual, physical, and cloud environments.
2. Which of the following is one challenge of today's data center networks?
a. Too low power consumption is not good for the electric companies.
b. Administrators have a hard time doing unified cabling.
c. Multilayer design is not optimal for east-west network traffic flows.
d. Today's networks as they are can accommodate any application demand for network and storage.
3. What is one possible way to solve the problem of different application demands in today's data centers?
a. Rewrite applications to conform to the network on which they are running.
b. Deploy separate application-specific dedicated environments.
c. Allow the network to discover application requirements.
4. How does vPC differ from traditional spanning tree topologies?
a. vPC blocks redundant paths toward the STP root bridge.
b. vPC topology causes loss of 50% of bandwidth because of blocked ports.
c. vPC leverages port-channels and has no blocked ports.
d. vPC leverages IS-IS protocol to construct network topology.
e. Spanning tree is automatically disabled when you enable vPC.
5. What are the two types of traffic that are most commonly consolidated with Unified Fabric?
a. Voice traffic
b. SAN storage traffic
c. Network traffic
d. Infiniband traffic
6. What Cisco virtual product is used to secure VM-to-VM communication within the same tenant space?
a. Virtual Supervisor Module
b. Virtual Adaptive Security Appliance
c. Virtual Security Gateway
d. Virtual Application Security
7. What is vPath?
a. It is a virtual path selected between two virtual machines in the data center.
b. It is a feature of Cisco Nexus 7000 series switches to steer the traffic toward physical service nodes.
c. It is a feature that allows virtual switches to get their basic configuration during boot.
d. It is a feature of Cisco Nexus 1000v virtual switches to steer the traffic toward virtual service nodes.
8. Which parts of the infrastructure can DCNM products manage?
a. LAN
b. SAN
c. Wireless
d. vPC
9. What happens when OTV receives a unicast frame and the destination MAC address is not in the MAC address table?
a. The frame is flooded through the overlay to all remote sites.
b. The frame is dropped and the ICMP message is sent back to the source.
c. The frame is turned into multicast and sent only to some of the remote sites.
d. The frame is dropped.
10. True or false? In LISP, ITR stands for Ingress Transformation Router.
a. True
b. False
Foundation Topics
Challenges of Today's Data Center Networks
IT departments of today's organizations are under increasing pressure to deliver technological solutions, which enable strategic business growth and provide a foundation for the continuing trends of the private/public cloud deployments. At the same time, IT departments are also under ever-increasing pressure of delivering more for less, where decreasing budgets have to accommodate for innovative technology and process adoption. Operational silos and isolated infrastructure solutions are not optimized for virtualized and cloud environments. This prevents the data centers of today from becoming the engine of enablement for the business growth.
Ethernet is by far the predominant choice for a single, converged fabric, which can simultaneously support multiple traffic types. Over the years it has withstood the test of time against other technologies trying to displace it as the popular option for data center network environments. Ethernet enjoys a broad base of engineering and operational expertise resulting from its ubiquitous presence in the data center environments for many years. It is well understood by network engineers and developers worldwide.
Modern application environments no longer rely on simple client/server transactions; a single client request causes several network transactions between the servers in the data center. Traffic patterns have shifted from being predominantly north-south to being predominantly east-west. Such traffic patterns also facilitated the transition from traditional multilayer "stove pipe" designs of access, aggregation, and core layers into a flatter data center switching fabric architecture leveraging Spine-Leaf Clos topology. This fundamental shift in traffic volumes and traffic patterns has resulted in greater reliance on the network. Figure 10-1 depicts traditional north-south and the new east-west data center switching architectures.

Figure 10-1 Multilayer and Spine-Leaf Clos Architecture
Increased volumes of data led to significant network and storage traffic growth across the infrastructure. All these require the network infrastructure to be flexible and intelligent to discover and accommodate the changes in network traffic dynamics.
Different types of traffic carry different characteristics. Client-to-server and server-to-server transactions usually involve short and bursty-in-nature transmissions, whereas most of the server-to-storage traffic consists of long-lasting and steady flows. Networks deployed in high-frequency trading environments have to accommodate ultra-low latency packet forwarding where traditional rules of typical Enterprise application deployments do not quite suffice. Although bandwidth and latency matter, which means that network available bandwidth and latency numbers are both important, differences also exist in the capability of applications to handle packet drops. Different protocols respond differently to packet drops: some accept packet drops as part of a normal network behavior, whereas others absolutely must have guaranteed no-drop packet delivery. The best example is the FCoE protocol, which requires a lossless underlying fabric where packet loss is not tolerated. Packet drops can have severely adverse effects on overall application performance. In the converged infrastructure of the Unified Fabric, both of the behaviors must be accommodated.
Ultimately, growing application demands require additional capabilities from the underlying networking infrastructure. One method to solve this is to deploy multiple, separate, application-specific dedicated environments. It is not uncommon for the data centers of today to deploy an Ethernet network for IP traffic and a Fibre Channel storage-area network (SAN) for block-based storage connectivity, which is a predominant use case behind the Unified Fabric deployment. The demands for high performance and cluster computing, as well as services such as Remote Direct Memory Access (RDMA), are at times addressed by leveraging the low latency InfiniBand network, even though this is not a popular technology in the majority of the modern data centers.

This is where Cisco Unified Fabric serves as a key building block for ubiquitous infrastructure to deploy applications on top. Some organizations even build separate Ethernet networks for IP-based storage. Figure 10-2 depicts a traditional concept of distinct networks purposely built for specific requirements. This highly segmented deployment results in high capital expenditures (CAPEX), high operational expenses (OPEX), and high demands for management.
Figure 10-2 Typical Distinct Data Center Networks
When these types of separate networks are evaluated against the capabilities of the Ethernet technology, we can see that Ethernet has a promise of meeting the majority of the requirements for all the network types. All these combined create an opportunity for consolidation. Such Ethernet fabric would be operationally simpler to provision and operate, resulting in fewer management points with shorter bring up time. It enables optimized resource utilization, faster application rollout, greater application performance, and lower operating costs. The following sections examine in more detail the key properties of the Unified Fabric network.
Cisco Unified Fabric Principles
Cisco Unified Fabric is a multifaceted approach where architectures, features, and capabilities are combined with concepts of convergence, scalability, intelligence, security, and high availability, while greatly increasing network business value. Cisco Unified Fabric creates a platform of a true multiprotocol environment on a single unified network infrastructure, which breaks organizational silos while reducing technological complexity. It provides foundational connectivity service, unifying storage, data networking, and network services onto consistent networking infrastructure across physical, virtual, and cloud environments.
Convergence of Network and Storage
Convergence properties of the Cisco Unified Fabric architecture are the melding of the storage-area network (SAN) with a local-area network (LAN) delivered on top of enhanced Ethernet fabric.

Both the Cisco MDS 9000 family of storage switches and the Cisco Nexus 5000/7000 family of Unified Fabric network switches have features that facilitate network convergence. Although end-to-end converged Unified Fabric provides the utmost advantages to the organizations, it also allows the flexibility of integrating into existing non-converged infrastructure, and by that providing investment protection for the current network equipment and technologies.
Cisco Nexus 5000 and Cisco MDS 9000 family switches can provide full, bidirectional bridging between FCoE and traditional Fibre Channel fabric. For instance, servers attached through traditional Fibre Channel HBAs can access newer storage arrays connected through FCoE ports. Similarly, FCoE servers can connect to older Fibre Channel-only storage arrays. Figure 10-3 depicts bridging between FCoE and traditional Fibre Channel fabric, where FCoE servers connect to Fibre Channel storage arrays leveraging the principle behind bidirectional bridging on Cisco Nexus 5000 series switches.
Figure 10-3 FCoE and Fibre Channel Bidirectional Bridging
The Cisco Nexus 5000 series switches can leverage either native Fibre Channel ports or the unified ports to bridge between FCoE and the traditional Fibre Channel environment. At the time of writing this book, unified ports can be software configured to operate in 1G/10G Ethernet, 10G FCoE, or 1/2/4/8G Fibre Channel mode. Figure 10-4 depicts unified port operation.

Figure 10-4 Unified Port Operation
Note
Unified port functionality is not supported on Cisco Nexus 5010 and 5020 switches. At the time of writing this book, Cisco Nexus 7000 series switches do not support unified port functionality.
With such great flexibility, you can deploy Cisco Nexus Unified Fabric switches initially connected to traditional systems with HBAs and convert to FCoE or other Ethernet- and IP-based storage protocols as the time progresses. Example 10-1 shows a configuration example for defining unified ports as port type Fibre Channel on Cisco Nexus 5500 switches.
Example 10-1 Configuring Unified Port on the Cisco Nexus 5000 Series Switch
Click here to view code image
nexus5500# configure terminal
nexus5500 (config)# slot 1
nexus5500 (config-slot)# port 32 type fc
nexus5500 (config-slot)# copy running-config startup-config
nexus5500 (config-slot)# reload
Consolidation can also translate to very significant savings. For example, a typical server requires at least five connections: two redundant LAN connections, two redundant SAN connections, and an out-of-band management connection. If you add on top separate connections for backup, clustering, and so on, for each server in the data center, you can quickly see how it translates into a significant investment.

With FCoE, all these connections can be replaced by the converged network adapters, providing at least a 2:1 saving in cabling. From the perspective of a large size data center, this cabling reduction translates to fewer ports, fewer switches, and fewer network layers, which contribute to lower over-subscription and eventually high application performance. Along with reduction in cabling, consolidation also carries operational improvements. Fewer cables have implications on cooling by improving the airflow in the data center, and fewer switches and adapters also mean less power consumption. Figure 10-5 depicts an example of how network adapter consolidation can greatly reduce the number of server adapters used in the Unified Fabric topology.
Figure 10-5 Server Adapter Consolidation Using Converged Network Adapters
If network adapter virtualization in the form of Cisco Adapter-FEX or Cisco VM-FEX technologies is leveraged, it can yield even further significant savings.
With consolidation in place, a converged data center network provides all the functionality of existing disparate network and storage environments. For Fibre Channel, this includes provisioning of all the Fibre Channel services, such as zoning and name servers, inter-VSAN routing (IVR), and support for virtual SANs (VSAN). For Ethernet, this includes support for multicast and broadcast traffic, VLANs, link aggregation, equal cost multipathing, and the like. All switches composing the Unified Fabric environment run the same Cisco NX-OS switch operating system. Such consistency allows IT staff to leverage operational practices of the isolated environments and apply them to the converged environment of the Unified Fabric.
One possible concern for delivering converged infrastructure for network and storage traffic is that the Ethernet protocol does not provide guaranteed packet delivery, which is essential for healthy Fibre Channel protocol operation. With FCoE, Fibre Channel frames are encapsulated in outer Ethernet headers for hop-by-hop delivery between initiators and targets where lossless behavior is expected. To enforce Fibre Channel characteristics at each hop along the path between initiators and targets, intermediate Unified Fabric Ethernet switches operate in either FCoE-NPV or FCoE switching mode. This sometimes is referred to as FCoE Dense Mode. Figure 10-6 depicts the logical concept behind having all FCoE intermediate switches operating in either FCoE-NPV or FCoE switching mode.

Figure 10-6 FCoE Dense Mode
FCoE switching mode implies running the FCoE Fibre Channel Forwarder, the FCF. At the time of writing this book, Cisco does not support FCoE topologies where intermediate Ethernet switches do not have FCoE awareness, sometimes referred to as FCoE Sparse Mode. The only partial exception is Dynamic FCoE, where FCoE traffic is forwarded over Cisco FabricPath fabric core links. With the Dynamic FCoE feature, Cisco FabricPath Spine Nodes do not act as FCoE Fibre Channel Forwarders; however, they still perform certain FCoE functions, for example, making sure frame hashing does not result in out-of-order packets.
Running FCoE requires additional capabilities added to the Ethernet Unified Fabric network. IEEE defines a set of standards augmenting standard Ethernet protocol behavior. This set of standards is collectively called Data Center Bridging (DCB). These additional standards make sure that Unified Ethernet fabric provides lossless, no-drop, in-order delivery of packets end-to-end, and they are required for successful FCoE operation. DCB increases overall reliability of the data center networks, which is increasingly necessary not only for the storage traffic, but also other types of multiprotocol communication across the Unified Fabric. Cisco Nexus 5000/7000 series Unified Fabric switches, as well as the MDS 9500/9700 series storage switches with FCoE line cards, support these standards for enhanced Ethernet. For smaller size deployments, Cisco MDS 9250i Multiservice Fabric Switch offers support for the Unified Fabric needs of running Fibre Channel, FCoE, iSCSI, FICON, and Fibre Channel over IP.
Convergence with Cisco Unified Fabric also extends to management. Cisco Data Center Network Manager (DCNM), Cisco's primary management tool, is optimized to work with both the Cisco MDS 9000 series storage switches and Cisco Nexus 5000/7000 series Unified Fabric network switches. Cisco DCNM allows managing network and storage environments as a single entity, as well as monitoring and automating common network administration tasks. Cisco DCNM provides a secure environment, which can manage a fully converged network. Ultimately, SAN and LAN convergence does not have to happen overnight, and Cisco Unified Fabric architecture is flexible to allow gradual, and many times non-disruptive, migration from disparate isolated environments to unified network infrastructure.
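At the FCoE access layer, the convergence described above comes down to mapping an FCoE VLAN to a VSAN and binding a virtual Fibre Channel (vFC) interface to a server-facing Ethernet port. The following is a minimal sketch on a Nexus 5000 series switch; the VLAN, VSAN, and interface numbers are illustrative:

nexus5000# configure terminal
! Enable the FCoE feature set
nexus5000(config)# feature fcoe
! Map a dedicated FCoE VLAN to the storage VSAN
nexus5000(config)# vlan 1048
nexus5000(config-vlan)# fcoe vsan 48
! Bind a virtual Fibre Channel interface to the server-facing port
nexus5000(config)# interface vfc 11
nexus5000(config-if)# bind interface ethernet 1/1
nexus5000(config-if)# no shutdown
! Place the vFC interface into the VSAN
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 48 interface vfc 11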

Traditionally, server network interface cards (NIC) and host bus adapters (HBA) are used to provide connectivity into isolated LAN and SAN environments, which are built around traditional Fibre Channel fabrics. With convergence of the two, NICs and HBAs make way for a new type of adapter called converged network adapter (CNA). Migration from NICs and HBAs to the CNA can occur either during the regular server hardware refresh cycles or it can be carried out by proactively swapping them during planned maintenance windows.
In environments where storage connectivity has less stringent requirements, Small Computer System Interface over IP (iSCSI) can be considered as an alternative to FCoE. iSCSI allows encapsulating the original SCSI payload in TCP/IP packets. Contrary to FCoE, which relies on IEEE DCB standards for efficient operation, iSCSI leverages the TCP protocol to deliver lossless, no-drop, in-order delivery. In this case, the underlying network infrastructure cannot differentiate between iSCSI and any other TCP/IP application, and it simply delivers IP packets between source and destination. iSCSI can also introduce increased CPU cycles resulting from TCP/IP overhead on the server. TCP/IP offload can be leveraged on the network adapter to help with higher CPU utilization, but that increases the cost of network adapters. iSCSI does not support native Fibre Channel services and tools.
For environments where storage arrays lack iSCSI functionality, organizations can leverage an iSCSI gateway device, such as a Cisco MDS 9250i storage switch, to terminate the iSCSI TCP/IP session, extract the payload, and reencapsulate it in the FCoE protocol for transmission over the Unified Fabric. Figure 10-7 depicts iSCSI deployment options with and without the gateway service.
Figure 10-7 iSCSI Deployment Options
Even though DCB is not mandatory for iSCSI operation, some NICs and CNAs connected to Cisco

Nexus 5000 series switches can be configured to accept configuration values sent by the switches leveraging the Data Center Bridging Exchange (DCBX) protocol. In that case, DCBX would negotiate configuration and settings between the switch and the network adapter by leveraging type-length-value (TLV) and sub-TLV fields. This allows distributing configuration values, for example, Class of Service (CoS) markings, to the connected network adapters in a scalable manner. Potential use of other DCB standards coded in TLV format adds even further flexibility to the iSCSI deployment by allowing the underlying Unified Fabric infrastructure to separate iSCSI storage traffic from the rest of the IP traffic. Example 10-2 shows the relevant command-line interface commands to enable a no-drop class for iSCSI traffic on Nexus 5000 series switches for guaranteed traffic delivery.
Example 10-2 Configuring No-Drop Behavior for iSCSI Traffic
Click here to view code image
nexus5000# configure terminal
nexus5000(config)# class-map type qos match-all c1
nexus5000(config-cmap-qos)# match protocol iscsi
nexus5000(config-cmap-qos)# match cos 6
nexus5000(config-cmap-qos)# exit
nexus5000(config)# policy-map type qos c1
nexus5000(config-pmap-qos)# class c1
nexus5000(config-pmap-c-qos)# set qos-group 2
nexus5000(config-pmap-c-qos)# exit
nexus5000(config)# class-map type network-qos c1
nexus5000(config-cmap-nq)# match qos-group 2
nexus5000(config-cmap-nq)# exit
nexus5000(config)# policy-map type network-qos p1
nexus5000(config-pmap-nq)# class type network-qos c1
nexus5000(config-pmap-c-nq)# pause no-drop
Although iSCSI continues to be a popular option in many storage applications, particularly with small and medium-sized businesses (SMB), it is not quite replacing the wide adoption of the Fibre Channel protocol for critical storage environments.
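Example 10-2 classifies iSCSI traffic and marks its class as no-drop, but the policies still have to be attached before they take effect. A sketch of that remaining step, assuming the policy names from Example 10-2 are applied system-wide, might look like this:

nexus5000(config)# system qos
! Attach the classification and the no-drop network-qos policies
nexus5000(config-sys-qos)# service-policy type qos input c1
nexus5000(config-sys-qos)# service-policy type network-qos p1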

Scalability and Growth
Scalability is defined as the capability to grow the environment based on changing needs. Unified Fabric is often characterized as having multidimensional scalability, where individual device performance, as well as the overall fabric and system scalability, is achieved without compromising manageability and cost. Unified Fabric leveraging connectivity speeds of 10G, 40G, and 100G offers substantially higher useable bandwidth for the server and storage connectivity, which contributes to fewer ports, fewer switches, and subsequently fewer network tiers and fewer management points, allowing overall larger scale. The capability to grow the network in a manner that causes the least amount of disruption to the ongoing operation is a crucial aspect of scalability. Scalability parameters can also stretch beyond the boundary of a single data center, allowing a true geographic span.
Each network deployment has unique characteristics, which cannot be addressed by a rigid architecture. Unified Fabric caters to the needs of ever expanding scalability by providing flexible architecture. Cisco's entire data center switching portfolio adheres to the principles of supporting incremental growth and demands for scalability, while also following the investment protection philosophy. For example, Cisco Nexus 7000 series switches offer highly scalable M-series I/O modules for large capacities and feature richness, and F-series I/O modules for lower latency, higher port capacity, and several differentiating features. Through an evolution of the Fabric Modules and the I/O Modules, Cisco Nexus 7000 series switches offer expandable pay-as-you-grow scalable switch fabric architecture and non-oversubscribed behavior for high-speed Ethernet fabrics. Figure 10-8 depicts the Cisco Nexus 7000 series switch fabric architecture.
Figure 10-8 Cisco Nexus 7000 Series Switch Fabric Architecture
Note
Cisco Nexus 7004 switch has no fabric modules; the line cards are directly interconnected through the switch backplane.
Cisco Nexus 5000 series switches offer a highly scalable access layer platform with 10G and 40G Ethernet capabilities and FCoE support for Unified Fabric convergence. Its architecture does not include fabric modules, but instead uses a unified port controller and Unified Fabric crossbar. Figure 10-9 depicts Cisco Nexus 5000 series switch fabric architecture.
Figure 10-9 Cisco Nexus 5000 Series Switch Fabric Architecture
The most recent addition to the Cisco Nexus platforms is the Cisco Nexus 9000 family switches. These switches provide fixed and modular switch architecture with high port density and the capability to run Application Centric Infrastructure. Cisco Nexus 9000 series switches are based on a combination of Cisco's own ASICs and Broadcom ASICs. Cisco Nexus 9500 series modular switches have no backplane. Figure 10-10 depicts Cisco Nexus 9500 series modular switch architecture.

Figure 10-10 Cisco Nexus 9500 Series Modular Switch Architecture
On the storage switch side, Cisco MDS switches offer a highly available platform for storage-area networks with an expandable and resilient architecture supporting increased Fibre Channel speeds and services.
The Cisco NX-OS operating system, which powers data center network and storage switching platforms, runs modular software architecture based on an underlying Linux kernel. Figure 10-11 depicts NX-OS software architecture components.
Figure 10-11 NX-OS Software Architecture Components
Cisco NX-OS leverages an innovative approach in software development by offering stateful process restart, persistent storage space (PSS) for process runtime information, component patching, in-service software upgrade (ISSU), and so on, coupled with Role-Based Access Control (RBAC) and a

rich set of remote APIs. Cisco NX-OS software enjoys frequent updates to keep up with the latest feature demands.
In the past, traditional growth implied adding capacity in the form of switch ports or uplink bandwidth; this is no longer the case. Modern data centers observe different trends in packet forwarding where traditional client/server applications driving primarily north-south traffic patterns are replaced by a new breed of applications with predominantly east-west traffic patterns. This shift has significant implications on the data center switching fabric designs. In addition to the changing network traffic patterns, many applications reside on virtual machines with inherent characteristics of workload mobility and any workload anywhere. This generates additional requirements on the data center switching fabric side around extensibility of Layer 2 connectivity, fabric scale, fabric capacities, and service insertion methodologies.
Spanning tree Layer 2 loop-free topologies do not allow backup links and, as such, cause "waste" of 50% of provisioned uplink bandwidth on access switches by blocking redundant uplinks. Legacy spanning tree–driven topologies also lack the fast convergence required for the stringent demands of modern applications. Figure 10-12 depicts spanning tree Layer 2 operational logic.
Figure 10-12 Spanning Tree Layer 2 Operational Logic
The evolution of spanning tree topologies led to the use of multichassis link-aggregation solutions, where Etherchannels or port channels enable all-forwarding uplink topologies. Cisco implements virtual port channel (vPC) technology to help alleviate deficiencies of traditional spanning tree–driven topologies, allowing faster convergence and full utilization of the access switch provisioned uplink bandwidth. vPC still runs Spanning Tree Protocol underneath as a safeguard mechanism to break Layer 2 bridging loops should they occur. Figure 10-13 depicts vPC operational logic.
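A minimal vPC sketch on one switch of a Nexus pair might look like the following; the domain, addresses, and port-channel numbers are illustrative, and the peer switch needs a mirror-image configuration:

nexus(config)# feature vpc
nexus(config)# vpc domain 10
! Peer keepalive runs over the mgmt0 interface in this sketch
nexus(config-vpc-domain)# peer-keepalive destination 192.168.0.2 source 192.168.0.1
! Port channel 1 is the peer link between the two vPC switches
nexus(config)# interface port-channel 1
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc peer-link
! Port channel 20 is a member port channel toward the access switch
nexus(config)# interface port-channel 20
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc 20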

Figure 10-13 vPC Operational Logic
Even with vPC topologies, the scalability of solutions requiring large Layer 2 domains in support of virtual machine mobility can be limited. Can we do better? Yes. By using the IS-IS routing protocol instead of Spanning Tree Protocol, Cisco FabricPath takes a step further by completely eliminating the dependency on the Spanning Tree Protocol in constructing a loop-free Layer 2 topology. Cisco FabricPath combines the simplicity of Layer 2 with the proven scalability of Layer 3. It leverages Spine-Leaf Clos architecture to construct a data center switching fabric with predictable end-to-end latency and vast east-west bisectional bandwidth. It also allows equal cost multipathing for both unicast and multicast traffic to maximize network resource utilization. It employs the principle of conversational learning to further scale switch hardware forwarding resources. Figure 10-14 depicts Cisco FabricPath topology and forwarding.
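Enabling FabricPath is deliberately simple. A minimal sketch, with an illustrative VLAN and core-facing interface, might look like this:

! Install and enable the FabricPath feature set
nexus(config)# install feature-set fabricpath
nexus(config)# feature-set fabricpath
! Mark VLANs that should be carried across the fabric
nexus(config)# vlan 100
nexus(config-vlan)# mode fabricpath
! Core-facing links use FabricPath encapsulation; IS-IS runs automatically
nexus(config)# interface ethernet 1/1
nexus(config-if)# switchport mode fabricpath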

Figure 10-14 Cisco FabricPath Topology and Forwarding
By taking data center switching from PODs to fabric, Cisco FabricPath breaks the boundaries and enables large-scale virtual machine mobility domains. Use of Cisco Nexus 2000 series Fabric Extenders in conjunction with Cisco FabricPath architecture enables us to build even larger-scale switching fabrics without increasing the administrative domain.
Principles of Spine-Leaf topologies can also be applied with other encapsulation technologies, such as virtual extensible LANs, or VXLAN. Similar to Cisco FabricPath, VXLAN leverages the underlying Layer 3 topology to extend Layer 2 services on top (MAC-in-IP), providing similar advantages to Cisco FabricPath while enjoying broader switching platform support. Figure 10-15 depicts VXLAN topology and forwarding.
Figure 10-15 VXLAN Topology and Forwarding
VXLAN also assists in breaking through the boundary of ~4,000 VLANs supported by the IEEE 802.1Q standard widely deployed between switches. Figure 10-16 depicts the VXLAN packet format, where you can see the 24-bit VXLAN ID field, sometimes also known as the Virtual Network Identifier (VNID) field. The 24-bit field potentially enables you to create more than 16 million virtual segments. This is an important consideration with multitenant Unified Fabric, which requires a large number of virtual segments.
Figure 10-16 VXLAN Packet Fields
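On Nexus platforms that support VXLAN (configuration details vary by platform and software release), mapping a VLAN to a VXLAN segment revolves around the VNID described above. A minimal flood-and-learn sketch, with illustrative VLAN, VNI, and multicast group values, might look like this:

nexus(config)# feature nv overlay
nexus(config)# feature vn-segment-vlan-based
! Map VLAN 100 to VXLAN segment (VNI) 10100
nexus(config)# vlan 100
nexus(config-vlan)# vn-segment 10100
! The network virtualization edge (NVE) interface performs the MAC-in-IP encapsulation
nexus(config)# interface nve1
nexus(config-if-nve)# source-interface loopback0
! Flood unknown unicast/broadcast for this VNI via a multicast group
nexus(config-if-nve)# member vni 10100 mcast-group 239.1.1.100
nexus(config-if-nve)# no shutdown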

Security and Intelligence
Network intelligence coupled with security is an essential component of Cisco Unified Fabric. It provides consistency, differentiated service, and intelligent services delivered to applications residing on physical or virtual network infrastructure alike. For the Cisco Nexus and Cisco MDS family of switches, the intelligence comes from a common switch operating system, Cisco NX-OS. With a common feature set, Cisco Unified Fabric policies around security, performance, high availability, and the like can be consistently set across the environment, allowing you to tune those for special application needs if necessary.
In the past, deployment of applications required a considerable effort of sequentially configuring various physical infrastructure components. The use of consistent network policies is essential for successful Unified Fabric operation. Policies are applied across the infrastructure and most importantly in the virtualized environment, where workloads tend to be rapidly created, removed, and moved around. Policies can be templatized and applied in a consistent repeatable manner, so that creation, removal, or migration of individual workloads has no impact on the overall environment operation. Policies also greatly reduce the risk of human error. Consistent use of policies also enables faster application deployment cycles.
As multiple types of traffic are consolidated on top of the Unified Fabric infrastructure, it raises legitimate security concerns. Cisco Unified Fabric contains a complete portfolio of security products that are virtualization aware. These products run in the hypervisor layer to secure virtual workloads inside a single tenant or across tenants.
Cisco Unified Fabric also contains a foundation for traffic switching in the hypervisor layer in the form of the Cisco Nexus 1000v product. Cisco Nexus 1000v is a cross-hypervisor virtual software switch that allows delivering consistent network, security, and Layer 4 through Layer 7 policies for the virtual workloads. It operates in a manner similar to a modular chassis, where switch supervisor modules, called Virtual Supervisor Modules (VSM), control the switch line cards called Virtual Ethernet Modules (VEM). VSMs should be deployed in pairs to provide the needed redundancy. Figure 10-17 depicts Cisco Nexus 1000v components.

Figure 10-17 Cisco Nexus 1000v Components
Cisco Nexus 1000v runs NX-OS software on the VSMs and communicates with Virtual Machine Managers, such as VMware vCenter, for easy single pane of glass operation.
Virtual services deployed as part of the Cisco Unified Fabric solution can secure intra-tenant communication between the virtual machines belonging to the same tenant, or they can secure inter-tenant communication between the virtual machines belonging to different tenants. Cisco Virtual Security Gateway (VSG) is deployed on top of the Cisco Nexus 1000v virtual switch catering to the case of intra-tenant security. It provides logical isolation and policy enforcement based on both simple networking constructs and more elaborate virtual machine attributes. Figure 10-18 depicts an example of Cisco VSG deployment to secure a three-tier application within a single tenant space.

Figure 10-18 Three-Tier Application Secured by VSG
The VSG product offers a very scalable mode of operation leveraging Cisco vPath technology for traffic steering. With Cisco vPath, the first packet of each flow is redirected to VSG for inspection and policy enforcement. Following inspection, a policy decision is made and programmed into the Cisco Nexus 1000v switch residing in hypervisor kernel space. All the subsequent packets of the flow are switched in a fast-path in the hypervisor kernel, which results in high traffic throughput. Figure 10-19 depicts vPath operation with VSG.
Figure 10-19 vPath Operation with VSG
VSG security policies can be deployed in tandem with switching policies enforced by the Cisco Nexus 1000v. This model contributes to consistent policies across the environment. Example 10-3 shows how VSG is inserted into the Cisco Nexus 1000v port-profile for specific tenant use.
Example 10-3 VSG Port-Profile Configuration

10.1 vlan 10 profile vnsp-1 Inter-tenant space can be secured by either virtual firewall in the form of virtual Cisco ASA product deployed in the hypervisor layer or by the physical appliances deployed in the upstream physical network.Click here to view code image nexus1000v# configure terminal nexus1000v(config)# port-profile host-profile nexus1000v(config-port-prof)# org root/Tenant-A nexus1000v(config-port-prof)# vn-service ip 10. Cisco WAAS offers the following main advantages: Transparent and standards-based secure application delivery LAN-like application performance with application-specific protocol acceleration . Physical ASA appliances are many times positioned at the Unified Fabric aggregation layer. and they can also provide security services for the physical non-virtualized workloads to create consistent policy across virtual and physical infrastructures. This is where Cisco Wide Area Application Services (WAAS) offer a solution for application acceleration across WAN. Figure 10-20 depicts typical Cisco ASA topology in a traditional multilayer and Spine-Leaf architectures. Figure 10-20 Typical Cisco ASA Deployment Topologies Both physical and virtual ASA together provide a comprehensive set of security services for Cisco Unified Fabric.10. physical ASAs are connected to the services leaf nodes. Another component of intelligent service delivery comes in the form of application acceleration. In the Spine-Leaf Clos architecture. It is also common to use two layers of security with virtual firewalls securing east-west traffic between the virtual machines and physical firewalls securing the north-south traffic in and out of the data center.

IT agility through reduction of branch and data center footprint
Enterprisewide deployment scalability and design flexibility
Enhanced end-user experience
Geographically dispersed organizations leverage significant WAN capacities and incur significant operational expenses for maintaining circuits and service levels. Delivering applications over WAN is often accompanied by jitter, packet loss, and bandwidth starvation. If left unaddressed, these factors can adversely impact application performance, influence user productivity, and grow to be business inhibitors. Cisco WAAS reduces consumption of WAN bandwidth and leverages generic network traffic optimization, as well as application-specific acceleration techniques, to counter the adverse impact of delivering applications across the WAN.
Cisco WAAS employs TCP flow optimization (TFO) to better manage TCP window size and maximize network bandwidth use in a way superior to common TCP/IP stack behavior on the host. Cisco WAAS leverages adaptive persistent session-based Lempel-Ziv (LZ) compression to shrink the size of network data traveling across, subsequently optimizing application performance across long-haul circuits. Cisco WAAS also leverages context-aware data redundancy elimination (DRE) to reduce traffic load on WAN circuits. Some of the traffic transmitted across the WAN is repetitive and can be locally cached. With DRE, both ends of the WAN optimized connection can signal each other with very small signatures, which cause the content to be served out of the locally maintained cache, rather than transmitting the entire data.
Cisco WAAS technology has two main deployment scenarios. First, it can be deployed either at the edge of the environment in the case of a branch office or at the aggregation layer in case of a data center, where it can be delivered in the form of a physical appliance, a virtual appliance, a router module, or a Cisco IOS software feature. In both cases, Cisco WAAS commonly leverages WCCP protocol traffic redirection to steer the traffic off path toward the WAAS appliance cluster. Second, it can be deployed in the hypervisor leveraging Cisco vPath redirection mechanisms. The hypervisor deployment model brings the advantages of application acceleration close to the virtual workloads. Finally, Figure 10-21 depicts use of WAN acceleration between the branch office and the data center, leveraging deployments of WAAS components in various parts of the topology.

Figure 10-21 WAN Acceleration Between Data Center and Branch Office

Cisco WAAS technology deployment requires positioning of the relevant application acceleration engine on either side of the WAN connection. The selection of an engine is based on performance characteristics, types of workloads, and deployment model. Refer to Chapter 20, "Cisco WAAS and Application Acceleration," for a more detailed discussion about Cisco WAAS technology.

Management of the network is at the core of its intelligence. Cisco Prime Data Center Network Manager (DCNM) manages Cisco Nexus and Cisco MDS families of switches through a single pane of glass. It provides visibility and control of the unified data center to uphold stringent service-level agreements. DCNM provides a robust set of features by streamlining the provisioning aspects of the Unified Fabric components. DCNM can set and automatically provision a policy on converged LAN and SAN environments. It dynamically monitors and takes actions when needed on both physical and virtual infrastructures. These factors contribute to the overall data center infrastructure reliability and uptime, thereby improving business continuity. Figure 10-22 depicts Cisco Prime DCNM dashboard.

Figure 10-22 Cisco Prime DCNM Dashboard

Software features provided by Cisco NX-OS can be deployed with Cisco DCNM because it leverages multiple dashboards for the ease of use and representation of the overall network topology. Cisco DCNM provides automated discovery of the network components, keeping track of all physical and logical network device information. This discovery data can be imported to the change management systems for easier operations. DCNM can also handle provisioning and monitoring aspects of FCoE deployment, including an end-to-end path containing a mix of traditional Fibre Channel and FCoE. Cisco DCNM has extensive reporting features that allow building custom reports specific to the operating environment. Organizations can also leverage reports available through the preconfigured templates.

Inter-Data Center Unified Fabric

Geographically dispersed data centers provide added application resiliency and enhanced application reliability. Network foundation to enable multi-data-center deployment must extend the network properties across the data centers participating in the distributed setup. Such properties are the extension of Layer 2, Layer 3, and storage connectivity between the data centers powered by the end-to-end Unified Fabric approach. At the same time, each data center must remain an autonomous unit of operation. Figure 10-23 depicts data center interconnect technology solutions.

Figure 10-23 Data Center Interconnect Methods

Multiple technologies, which lay the foundation for inter-data center Unified Fabric extension, are in use today. Layer 1 technologies in the form of DWDM or CWDM allow running any data, storage, or converged protocol on top. These could be long haul point-to-point links, as well as private or public MPLS networks. Layer 3 networks allow extending storage connectivity between disparate data center environments by leveraging Fibre Channel over IP protocol (FCIP), while Layer 2 bridged connectivity is achieved by leveraging Overlay Transport Virtualization (OTV).

Fibre Channel over IP

Disparate storage area networks are not uncommon. These could be artifacts of building isolated environments to address local needs, deploying storage environments with different administrative boundaries, or outcomes of acquisitions. As organizations leverage ubiquitous storage services, a need for interconnecting isolated storage networks while maintaining characteristics of performance, availability, and fault-tolerance becomes increasingly important. Disparate storage environments are most often located in topologies segregated from each other by Layer 3 routed networks. Fibre Channel over IP extends the reach of the Fibre Channel protocol across geographic distances by encapsulating Fibre Channel frames in outer IP headers for transport across interconnecting Layer 3 routed networks. Figure 10-24 depicts the principle of transporting Fibre Channel frames across IP transport.

Figure 10-24 FCIP Operation Principle

Leveraging TCP/IP protocol provides the FCIP guaranteed traffic delivery required for healthy storage network operation. Should packet drop occur, any packets carrying Fibre Channel payload are retransmitted following standard TCP/IP protocol operation. This lossless storage traffic behavior allows FCIP to extend native Fibre Channel connectivity across greater distances. This, however, cannot come at the expense of storage network performance. Cisco FCIP capable platforms allow several features to maximize utilization of the intermediate Layer 3 links, such as leveraging TCP extensions for high performance (IETF RFC 1323) and scaling TCP window size up to 1Gb. As TCP window size is scaled up, the sustained network throughput increases, but so do the chances of packet drops, which in turn have a detrimental effect on FCIP storage traffic performance. Intelligent control over TCP window size is needed to make sure long haul links are fully utilized while preventing excessive traffic loss and subsequent retransmission.

TCP/IP protocol traditionally achieves its reliability for traffic delivery by leveraging the acknowledgement mechanism for the received data segments. Acknowledgments are used to signal from the receiver back to the sender the receipts of contiguous in-sequence TCP segments. In case of packet loss, segments received after the packet loss occurs are not acknowledged and must be retransmitted. This causes unnecessary traffic load, network capacity "waste," and increased delays. Cisco implements a selective acknowledgement (SACK) mechanism where the receiver can selectively acknowledge TCP segments received after packet loss occurs, rather than all packets beyond the acknowledgement. In this case the sender only needs to retransmit the TCP segment lost in transit. The SACK mechanism is most effective in environments with long and large transmissions. SACK is not enabled by default on Cisco devices, and it must be enabled and supported on both ends of FCIP circuit; it must be manually enabled if required by issuing the sack-enable CLI command under the FCIP profile configuration.

Large Maximum Transmission Unit size, or MTU, in the transit IP network can also greatly influence the performance characteristics of the storage traffic carried by FCIP. Support for jumbo frames can allow transmission of full-size 2148 bytes Fibre Channel frames without requiring fragmentation and reassembly at the IP level into numerous TCP/IP segments carrying a single Fibre Channel frame. Path MTU Discovery (PMTUD) mechanism can be invoked on both sides of the FCIP connection to make sure the effective maximum supported path MTU is automatically discovered. FCIP PMTUD is not automatically enabled on Cisco devices; it must be manually enabled if required by issuing the pmtu-enable CLI command under the FCIP profile configuration.
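As an illustrative sketch only, the following commands show how these two features could be enabled under an FCIP profile on a Cisco MDS platform. The profile number and IP address are hypothetical, and exact command availability should be verified for the specific platform and NX-OS release.

switch# configure terminal
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.168.10.1
switch(config-profile)# tcp sack-enable
switch(config-profile)# tcp pmtu-enable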

Overlay Transport Virtualization Certain mechanisms for Layer 2 extension across data centers. VSAN segregation is traditionally achieved through data-plane frame tagging. reliability. Leveraging device. Figure 10-25 depicts the principle behind provisioning multiple redundant FCIP tunnels between disparate storage network environments leveraging port channeled connections. At the time of writing this book. Spanning Tree Protocol–based extension topologies do not scale well and can cause cascading failures that can propagate beyond the boundaries of a single availability domain. introduce challenges that can inhibit their practical use. and even though FCIP can handle such conditions. Each VSAN. These tunnels contribute to connectivity resiliency and allow failover. These redundant tunnels are represented as virtual E_port or virtual TE_port types. Some of the storage traffic types encapsulated inside the FCIP packets can be very sensitive to degraded network conditions. They fall short of making use of . and fabric management for ubiquitous deployment methodology. Along with performance characteristics. while proper capacity management helps alleviate performance bottlenecks.Deployment of Quality of Service features in the transit Layer 3 routed network is not a requirement for FCIP operation. FCIP protocol is available on Cisco Nexus MDS 9500 modular switches by leveraging IP Storage Services Module (IPS) and the Cisco MDS 9250i Multiservice Fabric Switch. such features are highly advised to make sure that FCIP traffic is subjected to the absolute minimum influence from transient network delays and congestion. At the same time. and path redundancy eliminates the single point of failure. FCIP tunnels can be bound into port channels for simpler FSPF topology and efficient traffic loadbalancing operation. either local or extended over the FCIP tunnel. overall storage performance will gain benefits from QoS policies operating at the network level. fabric services. however. which are considered by the Fabric Shortest Path First (FSPF) protocol for equal-cost multipathing purposes. zoning. exhibits the same characteristics in regard to routing. Utmost attention should be paid to make sure the transit Layer 3 routed network between the sites interconnected by FCIP circuits conforms to the high degree of availability. FCIP allows provisioning numerous tunnels interconnecting disparate storage networks. and efficiency. or in some cases across PODs of the same data center. For example. This frame tagging is transparently carried over FCIP tunnels to extend the isolation. Just like regular inter-switch links. FCIP must also be designed following high availability and redundancy principles. Figure 10-25 Redundant FCIP Tunnels Leveraging Port Channeled Connections FCIP protocol allows maintaining VSAN traffic segregation by building logically isolated storage environments on top of shared physical storage infrastructure. circuit.

Overlay Transport Virtualization

Certain mechanisms for Layer 2 extension across data centers, or in some cases across PODs of the same data center, introduce challenges that can inhibit their practical use. For example, Spanning Tree Protocol–based extension topologies do not scale well and can cause cascading failures that can propagate beyond the boundaries of a single availability domain. They fall short of making use of multipathing topologies by logically blocking redundant links. A more efficient protocol is needed to leverage all available paths for traffic forwarding, as well as provide intelligent traffic load-balancing across the data center interconnect links. Other Layer 2 data center interconnect technologies offer a more efficient model than Spanning Tree Protocol–based topologies; at the same time, they depend on specific transport network characteristics. The best example is MPLS-based technologies that require label switching between the data center networks for Layer 2 extension. Such a solution can be challenging for organizations not willing to develop operational knowledge around those technologies. Let's now look at a possible alternative.

Cisco Overlay Transport Virtualization (OTV) technology provides a nondisruptive, transport agnostic, multihomed, and multipathed Layer 2 data center interconnect technology that preserves isolation and fault domain properties between the disparate data centers, which greatly reduces the risk often associated with large-scale Layer 2 extensions. OTV leverages a MAC-in-IP forwarding paradigm to extend Layer 2 connectivity over any IP transport. Figure 10-26 depicts the basic OTV traffic forwarding principle, where Layer 2 frames are forwarded between the sites attached to the OTV overlay.

Figure 10-26 OTV Packet Forwarding Principle

OTV simulates the behavior of traditional bridged environments while introducing improvements. OTV leverages the control plane protocol in the form of an IS-IS routing protocol to disseminate MAC address reachability across the OTV edge devices participating in the overlay. Example 10-4 shows IS-IS neighbor relations established between the OTV edge devices through the overlay.

Example 10-4 IS-IS Neighbor Relationships Between OTV Edge Devices
Click here to view code image

OTV_EDGE1_SITEA# show otv isis adjacency
OTV-IS-IS process: default VPN: OTV-IS-IS
OTV-IS-IS adjacency database:
System ID        SNPA            Level  State  Hold Time  Interface
NEXUS7K-EDGE2    001b.54c2.43c1  1      UP     00:00:25   Overlay1
OTV_EDGE2_SITE   001b.54c2.43c3  1      UP     00:00:27   Overlay1
OTV_EDGE_SITE3   001b.54c2.43c4  1      UP     00:00:07   Overlay1

This approach is different from the traditional bridged environments, which utilize data-plane learning or flood-and-learn behavior. Control plane protocol offers intelligence above and beyond the rudimentary behavior of flood-and-learn techniques and thus is instrumental in implementing OTV value-added features to enhance and safeguard the inter-datacenter Layer 2 environment. Such features stretch across loop-prevention, multipathing, and multihoming mechanisms, and they enable First Hop Routing Protocol localization and Address Resolution Protocol (ARP) proxying without creating an additional operational overhead. Control plane protocol leveraged by OTV includes other Layer 2 semantics, such as VLAN ID, which allow maintaining traffic segregation across the data center boundaries.

After MAC addresses are learned, OTV edge devices leverage the underlying IP transport to carry original Ethernet frames encapsulated in IP protocol between the data centers. Local OTV edge devices hold the mapping between remote host MAC addresses and the IP address of the remote OTV edge device. The actual hosts on both sides of the connection are unaware of OTV, because the transport network is transparently forwarding their traffic. An efficient network core is essential to deliver optimal application performance across the OTV overlay. The use of overlay in OTV creates point-to-cloud type connectivity, rather than a collection of fully meshed point-to-point links, which are difficult to set up, manage, and scale. Example 10-5 shows a MAC address table on an OTV edge device, where you can see locally learned MAC addresses as well as MAC addresses available through the overlay.

Example 10-5 MAC Address Table Showing Local and OTV Learned Entries
Click here to view code image

OTV_EDGE1_SITEA# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age   Secure NTFY   Ports
---------+-----------------+--------+---------+------+----+------------------
G    -     001b.54c2.43c2    static           F     F    sup-eth1(R)
*    110   0000.0c07.ac6e    dynamic   0      F     F    Po1
*    110   0000.6e01.010a    dynamic   0      F     F    Po1
*    110   0000.6e01.016c    dynamic   0      F     F    Po1
*    110   0000.6e01.016d    dynamic   0      F     F    Po1
O    110   0000.6e02.020a    dynamic   0      F     F    Overlay1
O    110   0000.6e02.020b    dynamic   0      F     F    Overlay1
O    110   0000.6e02.026c    dynamic   0      F     F    Overlay1
O    110   0000.6e02.026d    dynamic   0      F     F    Overlay1

As mentioned earlier, OTV can safeguard the Layer 2 environment across the data centers, or across IP-separated PODs of the same data center, by employing proven Layer 3 techniques. Let's now look a little bit deeper into the main benefits achieved with OTV technology.

Little to no effect on existing network design: OTV leverages the IP network to transport original MAC encapsulated frames. Because OTV is transport agnostic, it can be seamlessly deployed on top of any existing IP infrastructure. Any network capable of carrying IP packets can be used for OTV core, with the exception being multicast-based deployment, where a multicast-enabled core is required. OTV can operate over a unicast-only or multicast-enabled core. In case of OTV operating over a multicast-enabled core, there is a requirement to run multicast routing in the underlying transit network. OTV is also fully transparent to the local Layer 2 domains, where hosts and switches not participating in OTV do not have any knowledge of OTV. Local spanning tree topologies are unaffected too.

Fault isolation and site independence: Even though local spanning tree topologies are unaware of OTV, OTV does introduce a boundary for the Spanning Tree Protocol. Spanning Tree Bridged Protocol Data Units (BPDU) are not forwarded over the OTV overlay, so spanning tree domains in different data centers keep operating independently of each other. OTV does not allow unknown unicast traffic across the overlay, unless specially allowed by the administrator, and ARP traffic is forwarded only in a controlled manner by offering local ARP response behavior. These behaviors create site containment similar to one available with Layer 3 routing. OTV's built-in loop prevention mechanisms avoid loops in the overlay and limit the scope of the loop, should one be formed in the locally attached Layer 2 environments.

Optimal bandwidth utilization and scalability: OTV can make use of multiple edge devices, better utilizing all available ingress and egress points in the data center through the built-in multipathing behavior. OTV employs the active-active mode of operation at the edge of the overlay, where all ingress and egress points forward traffic simultaneously. Use of multiple ingress and egress points does not require a complex deployment and operational model. Multipathing is an important characteristic of OTV to maximize the utilization of available bandwidth. Multipathing also applies to the traffic as it crosses the IP transit network, where ECMP behavior can utilize multiple paths and links between the data centers.

Fast failover and flexibility: OTV control plane protocol, IS-IS, dictates remote MAC address reachability. IS-IS can leverage its own hello/timeout timers, or it can leverage BFD for very rapid failure detection. Link failure detection in the core is based on routing protocols, which can rapidly reroute around failed sections of the network. One of the flexibility attributes of OTV comes from its deployment model at the data center aggregation layer, which provides a very convenient topological placement to address the needs of the Layer 2 extension across the data centers. Another flexibility attribute of OTV comes from its incremental deployment characteristics. OTV can even be deployed over existing Layer 2 data center interconnect solutions, such as vPC, where it can bring the advantages of fault containment and improved scale. If deployed over vPC, OTV can, in time, be migrated from vPC to routed links for more streamlined infrastructure.

Optimized operations: OTV control plane protocol allows the simple addition and removal of OTV sites from the overlay. Because control plane protocol is used to disseminate MAC address reachability information, OTV edge devices do not need to maintain a state of tunnels interconnecting data centers. OTV also leverages dynamic encapsulation, rather than tunneling, as with some of the other Layer 2 data center interconnect technologies. This is a huge advantage for OTV. OTV edge devices express their desire to join the overlay by sending IS-IS hello messages to the multicast control group, which are replicated by the transit network and delivered to all other OTV edge devices that are part of the same overlay (listening to the same multicast group). This behavior automatically establishes IS-IS neighbor relations across the shared medium without the need to define all remote OTV edge devices. Finally, the convenience of operations comes from the greatly simplified command-line interface needed to enable and configure OTV service on the device. This great simplicity does not come on the account of deep-level troubleshooting, which is still available on the platforms supporting OTV functionality.

At the time of writing this book, OTV technology is available on Cisco Nexus 7000 switches, Cisco ASR 1000 routers, and Cisco CSR 1000v software router platforms.
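The simplicity mentioned under optimized operations can be illustrated with a minimal OTV configuration sketch for a Nexus 7000 edge device over a multicast-enabled core. The interface numbers, VLAN ranges, site identifier, and multicast groups that follow are illustrative values rather than a recommended design.

OTV-EDGE1(config)# feature otv
OTV-EDGE1(config)# otv site-identifier 0x1
OTV-EDGE1(config)# otv site-vlan 99
OTV-EDGE1(config)# interface Overlay1
OTV-EDGE1(config-if-overlay)# otv join-interface ethernet 2/1
OTV-EDGE1(config-if-overlay)# otv control-group 239.1.1.1
OTV-EDGE1(config-if-overlay)# otv data-group 232.1.1.0/26
OTV-EDGE1(config-if-overlay)# otv extend-vlan 100-110
OTV-EDGE1(config-if-overlay)# no shutdown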

Locator ID Separation Protocol

Connectivity requirements between data center locations depend on the nature of the applications running in these data centers. Most commonly deployed applications fall into one of the following categories: routable applications, nonroutable applications, clustered applications, distributed applications, and virtualized applications. For routable applications, nothing more than an IP network is required to make sure application components can communicate with each other. Nonroutable applications typically expect to be placed on a shared transmission medium, such as Ethernet, and as such require data center interconnect to exhibit Layer 2 connectivity properties. Clustered applications are a "mixed bag," with some being able to operate over IP-routed network and some requiring Layer 2 adjacency, so data center interconnect requirements can be different based on specific clustered application needs. Distributed applications can typically make use of the IP network, with one example being Hadoop for Big Data distributed computation tasks. Finally, virtualized applications reside on the virtual hypervisors, such as Xen, Hyper-V, KVM, or ESXi. These applications make use of server virtualization technologies and leverage server virtualization properties, such as virtual machine mobility, dynamic resourcing, fault tolerance, and so on. As modern data centers become increasingly virtualized, the need to cater to virtualized application needs also increases, even though at the time of writing the book only about 25% of total servers are estimated to be running a hypervisor virtualization layer.

Ultimately, Layer 2 data center interconnect is not a requirement; however, it can be very beneficial in numerous cases. Cross-data center workload mobility, distributed application and server clustering, and simplified disaster recovery are some of the main use cases behind Layer 2 extension.

The traditional network forwarding paradigm does not distinguish between the identity of an endpoint (virtual machine) and its location in the network topology. As virtual machines move around the network due to administrative actions, various failure conditions, or resource constraints, the network must provide ubiquitous and optimized connectivity services across the entire mobility domain. Endpoint mobility can cause inefficient traffic-forwarding patterns as a traditional network struggles to deliver packets across the shortest path toward an endpoint, which changes its location. Figure 10-27 depicts traffic tromboning cases where a virtual machine migrates from Site A to Site B; ingress traffic is still delivered to Site A because this is where the virtual machine's IP subnet is still advertised out of.

Figure 10-27 Traffic Tromboning Case

A better mechanism is needed to leverage network intelligence, which is based on a true endpoint location in the topology. LISP, part of the Cisco Unified Fabric holistic architecture, provides a scalable method for network traffic routing. As the name suggests, LISP decouples endpoint identifiers (EID) from the Routing Locators (RLOC); by doing so, LISP accommodates virtual machine mobility while maintaining shortest path forwarding through the infrastructure. EIDs are the identifiers for the endpoints in the form of either a MAC or IP address, while RLOCs are the identifiers of the edge devices on both sides of a LISP-optimized environment (most likely a loopback interface IP address on the edge router or data center aggregation switch). These edge devices are called tunnel routers. They perform LISP encapsulation sending the traffic from Ingress Tunnel Router (ITR) to Egress Tunnel Router (ETR) on its way toward the destination endpoint. LISP routers providing both ITR and ETR functionality are called xTRs. To integrate a non-LISP environment into the LISP-enabled network, proxy tunnel routers are used (PITR and PETR). LISP routers providing both PITR and PETR functionality are called PxTRs. Figure 10-28 shows ITR, ETR, and the proxy tunnel router topology.
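As a rough sketch of these roles, a basic xTR configuration on a LISP-capable NX-OS device might look like the following. The EID prefix, RLOC address, map-resolver/map-server addresses, and key are hypothetical values for illustration only.

switch(config)# feature lisp
switch(config)# ip lisp itr-etr
switch(config)# ip lisp database-mapping 10.1.1.0/24 192.168.1.1 priority 1 weight 100
switch(config)# ip lisp itr map-resolver 192.168.255.1
switch(config)# ip lisp etr map-server 192.168.255.1 key lisp-shared-key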

Figure 10-28 LISP Tunnel Routers

To derive the true location of an endpoint, a mapping is needed between the EID that represents the endpoint and the RLOC that represents a tunnel router that this endpoint is "behind." This mapping is called an EID-to-RLOC mapping database. The full mapping database is distributed among all tunnel routers in the LISP environment, with each tunnel router only maintaining entries needed at a particular given time for communication. Unlike the approach in conventional databases, LISP mapping entries are distributed rather than centralized. These entries are derived from the LISP query process and are cached for a certain period of time before being flushed, unless refreshed.

Catering to the virtual machine mobility case, LISP enables a virtual machine to change its network location, while keeping its assigned IP addresses. Endpoint mobility events can occur within the same subnet, known as LISP Extended Subnet Mode (ESM), or across subnets, known as LISP Across Subnet Mode (ASM). ESM leverages Layer 2 data center interconnect to extend VLANs and subsequently subnets across multiple data centers. This is where OTV can be leveraged. ASM assumes that there is no VLAN and subnet extension between the data centers. This mode is best suited for cold migration and disaster recovery scenarios where there is no Layer 2 data center interconnect between the sites.

The endpoint that moves to another subnet, or across the same subnet extended between the data centers leveraging technology such as OTV, is called a roaming device. When endpoints roam, the LISP first-hop router must detect such an event. It compares the source IP address of the received packet to the range of EIDs defined for the interface on which this data packet was received. If matched, the first-hop router updates the LISP map server with a new RLOC for the EID. It also updates the tunnel and proxy tunnel routers that have cached this information.

LISP provides an exciting new mechanism for organizations to improve routing system scalability and introduce new capabilities to their networks. Versatility of LISP allows it to be leveraged in a variety of solutions:

Simplified and cost-effective multihoming, including ingress traffic engineering

IP address portability, including no renumbering when changing providers or adding multihoming
IP address (endhost) mobility, including session persistence across mobility events
IPv6 transition simplification, including incremental deployment of IPv6 using existing IPv4 infrastructure (or IPv4 over IPv6)
Simplified multitenancy and large-scale VPNs
Operation and network simplification

At the time of writing this book, LISP is supported on Nexus 7000 series switches with F3 and N7K-M132XP-12 line cards, ASR 1000 routers, ISR routers, and CSR 1000v software routers.

Reference List
Unified Fabric: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/unifiedfabric/unified_fabric.html
Fibre Channel over IP: http://www.cisco.com/c/en/us/tech/storage-networking/fiber-channelover-ip-fcip/index.html
Overlay Transport Virtualization: http://www.cisco.com/go/otv
Locator ID Separation Protocol: http://www.cisco.com/go/lisp

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 10-2 lists a reference of these key topics and the page numbers on which each is found.

Table 10-2 Key Topics for Chapter 10

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

Nexus
Multilayer Director Switch (MDS)
Unified Fabric
Clos
Spine-Leaf
Fibre Channel over Ethernet (FCoE)
local-area network (LAN)
storage-area network (SAN)
Fibre Channel
Unified Port
converged network adapter (CNA)
Internet Small Computer System Interface (iSCSI)
NX-OS
Spanning Tree

virtual port channel (vPC)
FabricPath
Virtual Extensible LAN (VXLAN)
Virtual Network Identifier (VNID)
Virtual Supervisor Module (VSM)
Virtual Ethernet Module (VEM)
Virtual Security Gateway (VSG)
virtual ASA
vPath
Wide Area Application Services (WAAS)
TCP flow optimization (TFO)
data redundancy elimination (DRE)
Web Cache Communication Protocol (WCCP)
Data Center Network Manager (DCNM)
Overlay Transport Virtualization (OTV)
Locator ID Separation Protocol (LISP)

Chapter 11. Data Center Bridging and FCoE

This chapter covers the following exam topics:

Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization

Running Fibre Channel protocol on top of Data Center Bridging enhanced Ethernet is one of the key principles of data center Unified Fabric architecture. It has a profound impact on the data center economics in a form of lower capital expenditure (CAPEX) and operational expense (OPEX) by consolidating traditionally disparate physical environments of network and storage into a single, highly performing and robust fabric. FCoE preserves current operational models where particular technology domain expertise is leveraged to achieve deployments conforming to best practices for both network and storage environments. In this chapter we also observe how coupling of FCoE with Cisco Fabric Extender technologies allows building even larger fabrics for ever-increasing scale demands.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 11-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 11-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which two of the following are part of the Data Center Bridging standards? (Choose two.)
a. ETS
b. LDP
c. PFC
d. FCF
e. SCSI

2. What function does Priority Flow Control provide?
a. It provides lossless Ethernet service by pausing traffic based on Class of Service value.
b. It provides lossless Ethernet service by pausing traffic based on DSCP value.
c. It provides lossless Ethernet service by pausing traffic based on MTU value.
d. It is a configuration exchange protocol to negotiate Class of Service value for the FCoE traffic.
e. None of the above.

3. What is FPMA?
a. It is a MAC address dynamically assigned to the Converged Network Adapter by the FCoE Fibre Channel Forwarder during the fabric login.
b. It is a MAC address assigned to the Converged Network Adapter by the manufacturer.
c. It is a MAC address of the FCoE Fibre Channel Forwarder.
d. It is First Priority Multicast Algorithm used by the FCoE endpoints to select the FCoE Fibre Channel Forwarder for a fabric login.

4. What is a VN_Port?
a. It is a virtual port on the FCoE endpoint connected to the VF_Port on the switch.
b. It is a port on the FCoE Fabric Extenders toward the FCoE server.
c. It is a virtual port used to internally attach the FCoE Fibre Channel Forwarder to the Ethernet switch.
d. It is an invalid port type.

5. Which of the following allows 40 Gb Ethernet connectivity by leveraging a single pair of multimode fibers?
a. QSFP-40G-SR4
b. FET-40G
c. QSFP-40G-SR-BD
d. QSFP-H40G-CU

6. What is an FET?
a. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to either the upstream parent switches or the servers.
b. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
c. It is a Fully Enabled Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
d. It is a Fault-Tolerant Extended Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches in the most reliable manner.

7. Is FCoE supported in Enhanced vPC topology?
a. Yes
b. No
c. Yes, but only when FCoE traffic on Cisco Nexus 2000 Series FCoE Fabric Extender is pinned to a single Cisco Nexus 5000 Series parent switch
d. Yes, but only when FIP is not used

8. Which two of the following Cisco Nexus 2000 Series Fabric Extenders support FCoE? (Choose two.)
a. 2248TP
b. 2224TP
c. 2232PP
d. 2232TM-E

9. What is Adapter-FEX?
a. It is a feature for virtualizing the network adapter on the server.
b. It is the name of the Cisco next-generation Fabric Extender.
c. It is the name of the transceiver used to connect Fabric Extender to the parent switch.
d. It is required to run FCoE on the Fabric Extender.

10. What type of port profile is used to dynamically instantiate vEth interface on the Cisco Nexus 5000 Series switch?
a. vethinterface
b. dynamic-interface
c. vethernet
d. profile-veth

Foundation Topics

Data Center Bridging

Due to the lack of Ethernet protocol support for transport characteristics required for successful storage-area network implementation, consolidation of network and storage traffic over the same Ethernet network requires special considerations. To make Ethernet a viable transport for storage traffic, Ethernet protocol must be enhanced. Specifically, it requires the capability to provide lossless traffic delivery, as well as necessary bandwidth and traffic priority services. The lack of those characteristics poses no concerns for the network traffic; however, as part of the Unified Fabric solution, where network and storage traffic are carried over the same Ethernet links, enhancements are required for a successful Unified Fabric implementation. These enhancements are defined under the Data Center Bridging standards defined by the IEEE. These are Priority Flow Control (PFC) defined by IEEE 802.1Qbb, Enhanced Transmission Selection (ETS) defined by IEEE 802.1Qaz, Quantized Congestion Notification (QCN) defined by IEEE 802.1Qau, and Data Center Bridging Exchange (DCBX) defined by IEEE 802.1AB (LLDP) with TLV extensions.

Priority Flow Control

Ethernet protocol provides no guarantees for traffic delivery. If any Ethernet frame carrying data is dropped, it is up to the higher-level protocols, such as TCP, to retransmit the lost portions. Ethernet defines the flow control mechanism under the IEEE 802.3X standard, in which flow control operates on the entire physical link level; that is, it cannot differentiate between different types of traffic forwarded through the link, pausing either all or none of them. When such per-physical link PAUSE frames are triggered, they indiscriminately cause buffering for types of traffic that otherwise would rely on TCP protocol for congestion management and might be better off dropped.

PFC enables the Ethernet flow control mechanism to operate on the per-802.1p Class of Service level, thus selectively pausing at Layer 2 only traffic requiring guaranteed delivery, such as FCoE. PFC provides lossless Ethernet transport services for FCoE traffic, which are essential because Fibre Channel frames encapsulated within FCoE packets expect lossless fabric behavior. All other traffic is subject to regular forwarding and queuing rules, that is, low latency or best effort. Figure 11-1 demonstrates the principle of how PFC leverages PAUSE frames to pause transmission of FCoE traffic over a DCB-enabled Ethernet link.

Figure 11-1 Priority Flow Control Operation

With PFC, PAUSE frames are signaled back to the previous hop DCB-capable Ethernet switch for the negotiated FCoE Class of Service (CoS) value (Class of Service value of 3 by default). PAUSE frames are transmitted when the receive buffer allocated to a no-drop queue on the current DCB-capable switch is exhausted; that is, the queue becomes full, and hence lossless behavior can no longer be guaranteed. Upon receipt of a PAUSE frame, the previous hop DCB-capable switch pauses its transmission of the FCoE frames, enabling the current DCB-capable switch to "drain" its receive buffer. This process is known as back pressure, and it can be triggered hop-by-hop from the point of congestion toward the source of the transmission (FCoE initiator or target).

Enhanced Transmission Selection

Enhanced Transmission Selection (ETS) enables optimal QoS strategy for links between DCB-capable switches and bandwidth management of virtual links. ETS creates priority groups by allowing differentiated treatment for traffic belonging to the same Priority Flow Control class of service based on bandwidth allocation.

For example, such bandwidth allocation allows assigning 8 Gb to the storage traffic and 2 Gb to the network traffic on the 10 Gb shared Ethernet link. Figure 11-2 demonstrates traffic bandwidth allocation carried out by the ETS feature.

Figure 11-2 Enhanced Transmission Selection Operation

The default behavior of Cisco switches supporting DCB standards, such as Cisco Nexus 5000 Series switches, is to assign 50% of the interface bandwidth to the FCoE no-drop class. ETS leverages advanced traffic scheduling features by providing guaranteed minimum bandwidth to the FCoE and high-performance computing traffic. Bandwidth allocations made by ETS are not hard allocations, meaning that traffic classes are allowed to use bandwidth allocated to a different traffic class, until that traffic class demands the bandwidth. This approach allows supporting bursty traffic patterns while still maintaining bandwidth guarantees.

At times, traffic conditions called incasts can be encountered. With incast, traffic arriving on multiple ingress interfaces on a given switch is attempted to be sent out through a single switch egress interface. If the amount of ingress traffic exceeds the capacity of the switch egress interface, special care is needed to maintain lossless link characteristics.
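The earlier 8 Gb/2 Gb split can be approximated on a Cisco Nexus 5000 Series switch with a queuing policy along the lines of the following sketch, which grants 80% of a 10 Gb link to the FCoE class and the remainder to the default class. The policy name is illustrative, and default policies differ by platform and software release.

nexus5000(config)# policy-map type queuing ets-out-policy
nexus5000(config-pmap-que)# class type queuing class-fcoe
nexus5000(config-pmap-c-que)# bandwidth percent 80
nexus5000(config-pmap-c-que)# class type queuing class-default
nexus5000(config-pmap-c-que)# bandwidth percent 20
nexus5000(config)# system qos
nexus5000(config-sys-qos)# service-policy type queuing output ets-out-policy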

Data Center Bridging Exchange

Data Center Bridging Exchange (DCBX) is a negotiation and exchange protocol between FCoE initiator/target and a DCB-capable switch, or between two DCB-capable switches. It is an extension of the Link Layer Discovery Protocol (LLDP) defined under the IEEE 802.1AB standard. Parameters exchanged using DCBX protocol are encoded in Type-Length-Value (TLV) format. Figure 11-3 demonstrates how DCBX messages are exchanged hop-by-hop across the DCB-enabled links.

Figure 11-3 Data Center Bridging Exchange Operation

DCBX performs the discovery of adjacent DCB-capable switches, detects configuration mismatches, and negotiates DCB link parameters between two adjacent DCB-capable switches. DCBX plays a pivotal role in making sure behaviors such as Priority Flow Control, Enhanced Transmission Selection, and logical link up/down are successfully negotiated. Successful FCoE operation relies on Ethernet enhancements introduced with DCB and, as such, on successful DCBX negotiation, including in topologies leveraging Network Interface Virtualization (multihop FCoE behavior is discussed in Chapter 12, "Multihop Unified Fabric").

Quantized Congestion Notification

As discussed earlier, as PFC is invoked, a back-pressure is applied from the point of congestion toward the source of transmission (FCoE initiator or target) in a hop-by-hop fashion. PFC enables you to throttle down transmission for no-drop traffic classes, such as FCoE. Quantized Congestion Notification mechanism, defined in the IEEE 802.1Qau standard, takes a different approach of applying back-pressure from the point of congestion directly toward the source of transmission without involving intermediate switches, where notification messages are sent from the point of congestion in the FCoE fabric directly to the source of transmission. This approach maintains feature isolation and affects only the parts of the network causing the congestion. Figure 11-4 demonstrates the principle behind the QCN feature operation.

Figure 11-4 Quantized Congestion Notification Operation

FCoE protocol operates at Layer 2 as Ethernet encapsulated Fibre Channel frames are forwarded between initiators and targets across the FCoE Fibre Channel Forwarders along the path. When the FCoE endpoint sends FCoE frames to the network, these frames include the source MAC address of the endpoint and the destination MAC address of the FCoE Fibre Channel Forwarder. When these frames reach the FCoE FCF, frames are deencapsulated; that is, Ethernet headers are removed, and a Fibre Channel forwarding decision is taken on the exposed Fibre Channel frame. Subsequently, the Fibre Channel frames are reencapsulated with Ethernet headers, while source and destination MAC addresses assigned are the MAC addresses of the current and next-hop FCoE FCF or the frame target, respectively. Even though Layer 2 operation implies bridging, the MAC address swapping behavior occurring at FCoE Fibre Channel Forwarders makes its operation resemble routing more than bridging. Figure 11-5 demonstrates the swap of outer Ethernet encapsulation MAC addresses as the FCoE frame is forwarded between initiator and target.

Figure 11-5 FCoE Forwarding MAC Address Swap

This MAC address swap occurs for all FCoE frames passing through the FCoE Fibre Channel Forwarders along the path. Because those FCoE frames no longer contain the original MAC address of the initiator, it is impossible for the intermediate FCoE Fibre Channel Forwarders to generate QCN messages directly to the source of transmission to throttle down its FCoE transmission into the network, unless all FCoE Fibre Channel Forwarders along the path from the point of congestion toward the source of transmission participate in the QCN process. Participation of all FCoE Fibre Channel Forwarders in the QCN process brings little value beyond Priority Flow Control and, as such, Cisco Nexus 5000 Series DCB-capable switches do not implement the QCN feature.

Fibre Channel Over Ethernet

Over the years, Fibre Channel protocol has established itself as the most reliable and best performing protocol for storage traffic transport. The idea behind FCoE is simple: run Fibre Channel protocol frames encapsulated in Ethernet headers. It leverages the same fundamental characteristics of the traditional Fibre Channel protocol, while allowing the flexibility of running it on top of the ever-increasing high-speed Ethernet fabric. Figure 11-6 demonstrates the principle of encapsulating the original Fibre Channel frame with Ethernet headers for transmission across the Ethernet network.

Figure 11-6 Fibre Channel Encapsulated in Ethernet

The FC-BB-5 working group of INCITS Technical Committee T11 developed the FCoE standard. FCoE is

. FCoE_LEP has the standard Fibre Channel layers. Figure 11-7 depicts the breadth of storage transport technologies. The capability to converge LAN and SAN traffic over a single Ethernet network is not trivial. which must be preserved. because network and storage environments are governed by different principles and have different characteristics. Figure 11-7 Storage Transport Technologies Landscape FCoE protocol is the essential building block of the Cisco Unified Fabric architecture for converging network and storage traffic over the same Ethernet network. FCoE Logical Endpoint The FCoE Logical Endpoint (FCoE_LEP) is responsible for the encapsulation and deencapsulation functions of the FCoE traffic.one of the options available for storage traffic transport. starting with FC-2 and continuing up the Fibre Channel Protocol stack. Figure 11-8 depicts the mapping between the layers of a traditional Fibre Channel protocol and the FCoE.

Figure 11-8 Fibre Channel and FCoE Layers Mapping

As you can see in Figure 11-8, FCoE does not alter higher levels and sublevels of the Fibre Channel protocol stack, which enables it to be ubiquitously deployed while maintaining operational, administrative, and management models of the traditional Fibre Channel protocol. To higher-level system functions, FCoE network appears as a standard Fibre Channel network. This compatibility enables all the tools used in the native Fibre Channel environment to be used in the FCoE environment.

FCoE Port Types

Similar to the traditional Fibre Channel protocol, FCoE operation defines several types of ports. Each port type plays a specific role in the overall FCoE solution. Figure 11-9 depicts FCoE and traditional Fibre Channel port designation for the single hop storage topology.

Figure 11-9 FCoE and Fibre Channel Port Types in Single Hop Storage Topology

FCoE ENodes

FCoE ENodes are the combination of FCoE Logical Endpoints (FCoE_LEP) and Fibre Channel protocol stack on the Converged Network Adapter. You can think of them as equivalent to Host Bus Adapters (HBA) in the traditional Fibre Channel world. FCoE ENodes could be either initiators in the form of servers or targets in the form of storage arrays. Each one presents a VN_Port port type connected to the VF_Port port type on the first hop upstream FCoE switch, such as the Cisco Nexus 5000 Series switch. This compares closely with the traditional Fibre Channel environment, where initiators and targets present the N_Port port type connected to the F_Port port type on the first-hop upstream Fibre Channel switch, such as the Cisco MDS 9000 Series switch. These first hop switches can operate in either full Fibre Channel switching mode or the FCoE-NPV mode (more on that in Chapter 12).

FCoE ENode creates at least one VN_Port toward the first-hop FCoE access switch. An ENode also has a globally unique burnt-in MAC address, which is assigned to it by the CNA vendor. Each VN_Port has a unique ENode MAC address, which is dynamically assigned to it by the FCoE Fibre Channel Forwarder on the upstream FCoE switch during the fabric login process. The dynamically assigned MAC address is also known as the fabric-provided MAC address (FPMA). Remember, although Ethernet is used as a transport for the Fibre Channel traffic, it does not change the Fibre Channel characteristics, including Fibre Channel addressing.

FPMA is 48 bits in length and consists of two concatenated 24-bit fields. The first field is called FCoE MAC address prefix, or FC-MAP, and it is defined by the FC-BB-5 standard as a range of 256 prefixes. The second field is called FC_ID, and it is basically a Fibre Channel ID assigned to the CNA during the fabric login process. Figure 11-10 depicts the structure of the FPMA MAC address.

Figure 11-10 Fabric-Provided MAC Address

Cisco has established simple best practices that make manipulation of FC-MAP values beyond the default value of 0E-FC-00 unnecessary. Cisco strongly recommends not mapping multiple Fibre Channel fabrics into the same Ethernet FCoE VLAN. This will ensure the uniqueness of an FPMA address without the need for multiple FC-MAP values. The 256 FC-MAP values are still available in case of a nonunique FC_ID address. For example, if two Fibre Channel fabrics are mapped to the same Ethernet FCoE VLAN, a challenging situation can occur if each fabric allocates the same FCID to its respective FCoE ENodes. With only default FC-MAP, FCoE ENodes across the two fabrics will end up with identical FPMA values. In essence, it's like having two servers with the same MAC address in the same VLAN. To resolve this situation, you can leverage two different FC-MAP values to maintain FPMA uniqueness.

Fibre Channel Forwarder

Simply put, FCoE Fibre Channel Forwarder (FCF) is a logical Fibre Channel switch entity residing in the FCoE-capable Ethernet switch. FCoE Fibre Channel Forwarder is responsible for the host fabric login, as well as the encapsulation and deencapsulation of the FCoE traffic through the FCoE_LEP. One or more FCoE_LEPs are used to attach the FCoE Fibre Channel Forwarder internally to the Ethernet switch. Figure 11-11 depicts a relationship between FCoE Fibre Channel Forwarder, FCoE_LEP, and the Ethernet switch.

Figure 11-11 Fibre Channel Forwarder, FCoE_LEP, and Ethernet Switch

FCoE Encapsulation

When Fibre Channel frames are encapsulated and transported over the Ethernet network in a converged fashion, FCoE-capable Ethernet switches must have the proper intelligence to enforce Fibre Channel protocol characteristics along the forwarding path. As fundamentally Ethernet traffic, FCoE packets must be exchanged in a context of a VLAN. Just as virtual LANs create logical separation for the network traffic, so do virtual SANs by creating logical separation for the storage traffic.

With FCoE, Cisco Nexus 5000 Series switches enable mapping between FCoE VLANs and Fibre Channel VSANs, where all FCoE packets tagged with specific VLAN ID are assumed to belong to the corresponding Fibre Channel VSAN; that is, it enables creation of multiple virtual network topologies across a single physical network infrastructure. As FCoE frames traverse the Ethernet links, IEEE 802.1Q tags are used to segregate between network and FCoE VLANs on the wire. Both ends of the FCoE link must agree on the same FCoE VLAN ID, and the incorrectly tagged frames are simply discarded. The Cisco Nexus 5000 Series switches implementation also expects all VLANs carrying FCoE traffic to be exclusively used for this purpose; that is, no other types of traffic, such as IP network traffic, should be carried over these VLANs. Figure 11-12 depicts FCoE frame format.

Figure 11-12 FCoE Frame Format

Let's take a closer look at some of the frame header fields. Source and Destination MAC Address fields define Layer 2 connectivity addressing for FCoE frames. Each field is 48 bits in length. The IEEE 802.1Q tag serves the same purpose as in a traditional Ethernet network. The IEEE 802.1Q tag is 4 bytes in length, and the IEEE 802.1Q priority field is used to indicate the Class of Service marking to provide an adequate Quality of Service. FCoE leverages EtherType value of 0x8906, specially assigned by IEEE for the protocol to indicate Fibre Channel payload. Start of Frame (SOF) and End of Frame (EOF) fields indicate the start and end of the encapsulated Fibre Channel frame, respectively. Each field is 8 bits in length. Reserved fields have been inserted to allow future protocol extensibility. The IEEE 802.3 standard defines the minimum-length Ethernet frame as 64 bytes (including Ethernet headers). If the Fibre Channel payload and all the encapsulating headers result in an Ethernet frame size smaller than 64 bytes, reserved fields can also be leveraged for padding.
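The VLAN-to-VSAN mapping and virtual link attachment can be illustrated with the following sketch for a Cisco Nexus 5000 Series switch, where a dedicated FCoE VLAN is mapped to a VSAN and a virtual Fibre Channel interface is bound to a server-facing Ethernet port. The VLAN, VSAN, and interface numbers are illustrative values only.

nexus5000(config)# feature fcoe
nexus5000(config)# vlan 100
nexus5000(config-vlan)# fcoe vsan 100
nexus5000(config)# interface vfc 10
nexus5000(config-if)# bind interface ethernet 1/10
nexus5000(config-if)# no shutdown
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 100 interface vfc 10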

Encapsulated Fibre Channel frame in its entirety, including Fibre Channel cyclic redundancy check (CRC), is included in the payload. Ethernet Frame Check Sequence (FCS) represents the last 4 bytes of the Ethernet frame. This is a standard FCS field in any Ethernet frame. Figure 11-13 depicts the overall length of the FCoE frame.

Figure 11-13 FCoE Frame Length

As you can see, the total size of an Ethernet frame carrying a Fibre Channel payload can be up to 2180 bytes. Non-FCoE traffic is set up by default for the Ethernet standard 1500 byte MTU (excluding Ethernet headers). Because of this, Jumbo Frame support must be enabled to allow successful transmission of all FCoE frame sizes. Cisco Nexus 5000 Series switches define by default the Maximum Transmission Unit (MTU) of 2240 bytes for FCoE traffic in class-fcoe. FCoE traffic is assigned to the class based on the FCoE EtherType value (0x8906). This class provides no-drop service to facilitate guaranteed delivery of FCoE frames end-to-end. These MTU settings are observed for FCoE traffic arriving directly on the physical switch interfaces or through the host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches. On the Cisco Nexus 5010 and 5020 switches, after FCoE features are enabled, class-fcoe default system class is automatically created at the system startup and cannot be deleted. Before enabling FCoE on the Cisco Nexus 5500 Series switches, you must manually create class-fcoe default system class. Example 11-1 depicts the MTU assigned to FCoE traffic under class class-fcoe.

Example 11-1 Displaying FCoE MTU
Click here to view code image

Nexus5000# show policy-map

  Type qos policy-maps
  ====================

  policy-map type qos default-in-policy
    class type qos class-fcoe
      set qos-group 1

  Type network-qos policy-maps
  ===============================

  policy-map type network-qos default-nq-policy
    class type network-qos class-fcoe
      pause no-drop
      mtu 2240
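On the Cisco Nexus 5500 Series, creating the class-fcoe system classes typically amounts to applying FCoE-aware policies under system qos, roughly as sketched here. The predefined policy names reflect commonly documented NX-OS defaults, but they should be verified against the specific software release before use.

nexus5500(config)# feature fcoe
nexus5500(config)# system qos
nexus5500(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy
nexus5500(config-sys-qos)# service-policy type queuing input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type queuing output fcoe-default-out-policy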

FCoE Initialization Protocol

FCoE Initialization Protocol (FIP) is the FCoE control plane protocol responsible for establishing and maintaining Fibre Channel virtual links between pairs of adjacent FCoE devices (FCoE ENodes or FCoE Fibre Channel Forwarders), potentially across multiple Ethernet segments. Establishment of the virtual link carries three main steps: discovering the FCoE VLAN, discovering the MAC address of an upstream FCoE Fibre Channel Forwarder, and ultimately performing fabric login/discovery operation (FLOGI or FDISC). After the virtual link has been established, an exchange of Fibre Channel payload encapsulated with the Ethernet headers can begin. FIP continues operating in the background performing virtual link maintenance functions—specifically, continuously verifying reachability between the two virtual Fibre Channel (vFC) interfaces on the Ethernet network and offering primitives to delete the virtual link in response to administrative actions to that effect. FIP also differentiates from the FCoE protocol messages by using EtherType value 0x8914 in the encapsulating Ethernet header fields. Figure 11-14 depicts the phases of FIP operation.

Figure 11-14 FCoE Initialization Protocol Operation

FIP VLAN discovery relies on the use of native VLAN leveraged by the initiator (for example, server) or the target (for example, storage array). The FCoE ENode discovers the VLAN to which it needs to send the FCoE traffic by sending an FIP VLAN discovery request to the All-FCF-MAC multicast MAC address. All FCoE Fibre Channel Forwarders listen to the All-FCF-MAC MAC address on the native VLAN.

Cisco Nexus 5000 Series switches support FIP VLAN Discovery functionality and will respond to the querying FCoE ENode. The FCoE ENode can also be manually configured for the FCoE VLAN it should use, in which case FIP VLAN discovery functionality is not invoked.

Note
FIP VLAN discovery is not a mandatory component of the FC-BB-5 standard. VLAN ID 1002 is commonly assumed in such case.

After FCoE VLAN has been discovered, FIP performs the function of discovering the upstream FCoE Fibre Channel Forwarder. FCoE Fibre Channel Forwarder discovery messages are sent to the All-ENode-MAC multicast MAC address over the FCoE VLAN discovered earlier by the FIP protocol. Discovery messages are also used by the FCoE Fibre Channel Forwarder to inform any potential FCoE ENode attached through the FCoE VLAN that its VF_Ports are available for virtual link establishment with the ENode's VN_Ports. These messages are sent out periodically, which could delay FCoE Fibre Channel Forwarder discovery for the new FCoE ENodes joining the fabric during the send interval. The FC-BB-5 standard enables FCoE ENodes to send out advertisements to the All-FCF-MAC multicast MAC address to solicit unicast discovery advertisement back from the FCoE Fibre Channel Forwarder to the requesting ENode.

Following the discovery of FCoE VLAN and the FCoE Fibre Channel Forwarder, which can accept and process fabric login requests, FIP performs fabric login or discovery by sending FLOGI or FDISC unicast frames from VN_Port to VF_Port. This is the phase when the FCoE ENode MAC address used for FCoE traffic is assigned and virtual link establishment is complete.

Converged Network Adapter

Converged Network Adapter (CNA) is in essence an FCoE ENode. CNA is usually a PCI express (PCIe) type adapter, which contains the functions of a traditional Host Bus Adapter (HBA) and a network interface card (NIC) all in one. Converged Network Adapters are also available in the mezzanine form factor for the use in server blade systems. The converged nature of CNAs requires them to operate at 10 Gb speeds or higher. From the operating system perspective, CNAs are represented as two separate devices; as such, they still require two sets of drivers, one for the Ethernet and the other for Fibre Channel, just like in the case of physically segregated adapters. Figure 11-15 depicts the fundamental principle behind leveraging a separate set of operating system drivers when converging NIC and HBA with Converged Network Adapter.

Figure 11-15 Separate Ethernet and Fibre Channel Drivers for NICs, HBAs, and CNAs

With CNAs, both network and storage traffic is sent over a single unified wire connection from the server to the upstream first hop FCoE Access Layer switch, such as Cisco Nexus 5000 Series switch.

FCoE Speed

Rapid progress in the speed increase of Ethernet networks versus a slower pace of speed increase in the traditional Fibre Channel networks makes FCoE a compelling technology for high throughput storage-area networks. During the time of writing this book, server CNAs and FCoE storage arrays offer at least 10 Gb FCoE connectivity speeds. At the same time, rarely do Fibre Channel connections from the endpoints to the first hop FCoE switches exceed the speed of 8 Gb. The difference might not seem significant; however, due to different encoding schemes, 10 Gb FCoE achieves 50% more throughput than 8 Gb Fibre Channel, despite appearing to be only 2 Gb faster. Table 11-2 depicts the throughputs and encoding for the 8 Gb Fibre Channel and 10 Gb FCoE traffic.

Table 11-2 8 Gb Fibre Channel and 10 Gb FCoE

When comparing with higher speeds, we discover even greater differences. Although 16 Gb Fibre Channel is only 33% faster than 10 Gb FCoE, 40 Gb FCoE is four times faster than 10 Gb FCoE! Table 11-3 depicts the throughputs and encoding for the 16 Gb Fibre Channel and 40 Gb FCoE traffic.

Table 11-3 16 Gb Fibre Channel and 40 Gb FCoE
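The throughput differences follow directly from the encoding overhead. Using publicly documented line rates (8 Gb Fibre Channel at 8.5 Gbaud with 8b/10b encoding, 16 Gb Fibre Channel at 14.025 Gbaud with 64b/66b encoding, and 10/40 Gb Ethernet at 10.3125 Gbaud per lane with 64b/66b encoding), the effective data rates work out approximately as follows:

8 Gb FC:    8.5 Gbaud x 8/10           = 6.8 Gbps
10 Gb FCoE: 10.3125 Gbaud x 64/66      = 10.0 Gbps (roughly 50% more than 6.8 Gbps)
16 Gb FC:   14.025 Gbaud x 64/66       = 13.6 Gbps
40 Gb FCoE: 4 x 10.3125 Gbaud x 64/66  = 40.0 Gbps (four times 10 Gb FCoE)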

Speed is definitely not the only factor in storage-area networks; however, given the similar characteristics between Fibre Channel and FCoE, you can clearly see the very distinct throughput advantages in favor of the FCoE.

FCoE Cabling Options for the Cisco Nexus 5000 Series Switches

This section reviews the options for physically connecting Converged Network Adapters (CNA) to Cisco Nexus 5000 Series switches, and connecting the switches to an upstream LAN and Fibre Channel SAN environments.

Optical Cabling

An optical fiber is a flexible filament of very clear glass capable of carrying information in the form of light. Optical fibers are hair-thin structures created by forming pre-forms, which are glass rods drawn into fine threads of glass protected by a plastic coating. Fiber manufacturers use various vapor deposition processes to make the pre-forms. The fibers drawn from these pre-forms are then typically packaged into cable configurations, which are then placed into an operating environment for decades of reliable performance. Figure 11-16 depicts the fiber optic core and coating.

Figure 11-16 Fiber Optic Core and Coating

Multimode fiber refers to the fact that numerous modes or light rays are simultaneously carried through the fiber. Such modes result from the fact that light will propagate in the fiber core only at discrete angles within the cone of acceptance. Multimode fiber type has a much larger core diameter compared to single-mode fiber, allowing for a larger number of modes. Current multimode cabling standards include 50 microns and 62.5 microns core diameter. Step-Index and Graded Index offer two different fiber designs, with Graded Index being more popular because it offers hundreds of times more bandwidth than Step-Index fiber does. Figure 11-17 depicts Step-Index multimode fiber design.

Figure 11-17 Step-Index Multimode Fiber Design

Figure 11-18 depicts Graded Index multimode fiber design.

Figure 11-18 Graded Index Multimode Fiber Design

Single-mode (or monomode) fiber enjoys lower fiber attenuation than multimode fiber and retains better fidelity of each light pulse because it exhibits no dispersion caused by multiple modes. Consequently, information can be transmitted over longer distances. Current single-mode cabling standards include 9 microns core diameter. Figure 11-19 depicts a single-mode fiber design.

Figure 11-19 Single-Mode Fiber

There are many fiber optic connector types. LC is the most popular one for both multimode and single-mode fiber optic connections.

Transceiver Modules

Cisco Transceiver Modules support Ethernet, Sonet/SDH, and Fibre Channel applications across all Cisco switching and routing platforms. Let's now review various transceiver module options available at the time of writing this book for deploying 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet connectivity.

SFP Transceiver Modules

A Small Form-Factor Pluggable (SFP) is a compact, hot swappable, input/output device that is used to link the ports on different network devices together. SFP is responsible for converting between the electrical and optical signals, while allowing both receive and transmit operations to take place. SFP is also sometimes referred to as mini Gigabit Interface Converter (GBIC) because both provide similar functionality. Due to its small form factor, SFP type transceivers can be leveraged in a variety of high-speed Ethernet and Fibre Channel deployments. Figure 11-20 depicts SFP module LC connector.

Figure 11-20 Fiber Optic SFP Module LC Connector

Following are the different types of SFP transceivers existing at the time of writing this book:

1000BASE-T SFP 1 Gigabit Ethernet copper transceiver for copper networks operates on standard Category 5 unshielded twisted-pair copper cabling for links of up to 100 meters (~330 feet) in length. These modules support 10 Megabit, 100 Megabit, and 1000 Megabit (1 Gigabit) auto-negotiation and auto-MDI/MDIX. MDI is Medium Dependent Interface; MDI-X is Medium Dependent Interface Crossover.

1000BASE-SX SFP 1 Gigabit Ethernet fiber transceiver is used only for multimode fiber networks. It is compatible with the IEEE 802.3z 1000BASE-SX standard, and it operates over either 50 microns multimode fiber links for a distance of up to 550 meters (~1800 feet) or over 62.5 microns Fiber Distributed Data Interface (FDDI)-grade multimode fiber for a distance of up to 220 meters (~720 feet).

1000BASE-LX/LH SFP 1 Gigabit Ethernet long-range fiber transceiver operates over standard single-mode fiber-optic link spanning distances of up to 10 kilometers (~6.2 miles) and up to 550 meters (~1800 feet) over any multimode fiber. It is compatible with the IEEE 802.3z 1000BASE-LX standard. 1000BASE-LX/LH transceivers transmitting in the 1300 nanometers wavelength over multimode FDDI-grade OM1/OM2 fiber require mode conditioning patch cables.

1000BASE-EX SFP 1 Gigabit Ethernet extended-range fiber transceiver operates over standard single-mode fiber-optic link spanning long-reaching distances of up to 40 kilometers (~25 miles). A 5dB inline optical attenuator should be inserted between the fiber-optic cable and the receiving port on the SFP at each end of the link for back-to-back connectivity.

1000BASE-ZX SFP 1 Gigabit Ethernet extra long-range fiber transceiver operates over standard single-mode fiber-optic link spanning extra long-reaching distances of up to approximately 70 kilometers (~43.5 miles). This SFP provides an optical link budget of 21 dB, but the precise link span length depends on multiple factors, such as fiber quality, number of splices, and connectors.

1000BASE-BX-D and 1000BASE-BX-U 1 Gigabit Ethernet transceivers, compatible with the IEEE 802.3ah 1000BASE-BX10-D and 1000BASE-BX10-U standards, operate over a single strand of standard single-mode fiber for a transmission range of up to 10 kilometers (~6.2 miles). 1000BASE-BX10-D and 1000BASE-BX10-U transceivers represent two sides of the same fiber link. The communication over a single strand of fiber is achieved by separating the transmission wavelength of the two devices: 1000BASE-BX10-D transmits a 1490 nanometer wavelength and receives a 1310 nanometer signal, whereas 1000BASE-BX10-U transmits at a 1310 nanometer wavelength and receives a 1490 nanometer signal. Note the presence of a wavelength-division multiplexing (WDM) splitter integrated into the SFP to split the 1310 nanometer and 1490 nanometer light paths. Figure 11-21 depicts the principle of wavelength multiplexing to achieve transmission over a single strand fiber.

Figure 11-21 Bidirectional Transmission over a Single Strand, Single-Mode Fiber

SFP+ Transceiver Modules

The SFP+ 10 Gigabit Ethernet transceiver module is a bidirectional device with a transmitter and receiver in the same physical package. This module has a 20-pin connector on the electrical interface and duplex Lucent Connector (LC) on the optical interface. The following options exist for 10 Gigabit Ethernet SFP+ transceivers during the time of writing this book:

SFP-10G-SR SFP 10 Gigabit Ethernet short-range transceiver supports a link length of 26 meters (~85 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. Connectivity up to 300 meters (~1000 feet) is possible when leveraging 2000MHz/km OM3 multimode fiber with a core size of 50 microns. Connectivity up to 400 meters (~0.25 miles) is possible when leveraging 4700MHz/km OM4 multimode fiber with a core size of 50 microns. It operates on 850 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-SR SFP.

SFP-10G-LRM SFP 10 Gigabit Ethernet short-range transceiver supports link lengths of 220 meters (~720 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. SFP-10G-LRM Module also supports link lengths of 300 meters (~1000 feet) over standard single-mode fiber (G.652). To ensure that specifications are met over FDDI-grade OM1 and OM2 fibers, the transmitter should be coupled through a mode conditioning patch cord. No mode conditioning patch cord is required for applications over OM3 or OM4. At the time of writing this book, neither Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LRM optics.

SFP-10G-LR SFP 10 Gigabit Ethernet long-range transceiver supports a link length of 10 kilometers (~6.2 miles) over standard G.652 single-mode fiber. It operates on 1310 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LR SFP.

SFP-10G-ER SFP 10 Gigabit Ethernet extended range transceiver supports a link length of up to 40 kilometers (~25 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength, leveraging an LC type connector. A 5dB 1550 nanometer fixed loss attenuator is required for distances less than 20 kilometers (~12.5 miles). Cisco Nexus 5500 switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-ER SFP; Cisco Nexus 5000 switches do not support SFP-10G-ER SFP.

SFP-10G-ZR 10 Gigabit Ethernet very long-range transceiver supports link lengths of up to about 80 kilometers (~50 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength, leveraging an LC type connector. At the time of writing this book, the SFP-10G-ZR SFP transceiver is neither supported on Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders.

FET-10G Fabric Extender Transceiver (FET) 10 Gigabit Ethernet cost-effective short-range transceiver supports link lengths of 25 meters (~80 feet) over OM2 multimode fiber and the link length of up to 100m (~330 feet) over laser-optimized OM3 or OM4 multimode fiber. It operates over a core size of 50 microns and at 850 nanometers wavelength. This interface is not specified as part of the 10 Gigabit Ethernet standard and is instead built according to Cisco specifications. FET-10G is supported only on fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.

SFP+ Copper Twinax direct-attach 10 Gigabit Ethernet pre-terminated cables are suitable for very short distances and offer a cost-effective way to connect within racks and across adjacent racks. Connectivity is typically between the server and a Top-of-Rack switch or a server and a Cisco Nexus 2000 Series FCoE Fabric Extender. These cables can also be used for fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.
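Whichever module is used, the installed optics can be verified from the switch command-line interface. The following sketch is illustrative and abbreviated; the interface number and module type shown are assumptions:

nexus5000# show interface ethernet 1/1 transceiver
Ethernet1/1
    transceiver is present
    type is 10Gbase-SR

Additional fields, such as the vendor name and part number of the transceiver, also appear in the full output.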

QSFP Transceiver Modules

The Cisco 40GBASE QSFP (Quad Small Form-Factor Pluggable) hot-swappable input/output device portfolio (see Figure 11-22) offers a wide variety of high-density and low-power 40 Gigabit Ethernet connectivity options for the data center.

Figure 11-22 Cisco 40GBASE QSFP Transceiver

It offers flexibility of interface choice for different distance requirements and fiber types. The following options exist for 40 Gigabit Ethernet QSFP transceivers during the time of writing this book:

QSFP 40Gbps Bidirectional (BiDi) 40 Gigabit Ethernet short-range cost-effective transceiver is a pluggable optical transceiver with a duplex LC connector interface over 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. Each QSFP 40 Gigabit Ethernet BiDi transceiver consists of two 20 Gigabit Ethernet transmit and receive channels enabling an aggregated 40 Gigabit Ethernet link over a two-strand multimode fiber connection, which enables reuse of existing 10 Gigabit Ethernet duplex multimode cabling infrastructure for migration to 40 Gigabit Ethernet connectivity. BiDi transceiver complies with the QSFP MSA specification, and its high-speed electrical interface is compliant with the IEEE 802.3ba standard, enabling its use on all QSFP 40 Gigabit Ethernet platforms, while being interoperable with other IEEE-compliant 40GBASE interfaces. Figure 11-23 depicts the use of wavelength multiplexing technology allowing 40 Gigabit Ethernet connectivity across just a pair of multimode fiber strands.

Figure 11-23 Cisco QSFP BiDi Wavelength Multiplexing

QSFP-40G-SR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. It primarily enables high-bandwidth 40 Gigabit Ethernet optical links over 12-fiber parallel fiber terminated with MPO/MTP multifiber connectors. It can also be used in a 4×10 Gigabit Ethernet mode for interoperability with 10GBASE-SR interfaces up to 100 meters (~330 feet) and 150 meters (~500 feet) over OM3 and OM4 fibers, respectively. The 4×10 Gigabit Ethernet connectivity is achieved using an external 12-fiber parallel to 2-fiber duplex breakout cable, which connects the 40GBASE-SR4 module to four 10GBASE-SR optical interfaces.

QSFP-40G-CSR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It extends the reach of the IEEE 40GBASE-SR4 interface to 300 meters (~1000 feet) and 400 meters (~0.25 miles) over laser-optimized OM3 and OM4 multimode parallel fiber, respectively. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber parallel cables with MPO/MTP connectors or in a 4×10 Gigabit Ethernet mode with parallel to duplex fiber breakout cables for connectivity to four 10GBASE-SR interfaces. Each 10 Gigabit lane of this module is compliant with IEEE 10GBASE-SR specifications.

40GBASE-CR4 QSFP to QSFP direct-attach 40 Gigabit Ethernet short-range pre-terminated cables offer a highly cost-effective way to establish a 40 Gigabit Ethernet link between QSFP ports of Cisco switches within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. Cisco offers both active and passive copper cables. Figure 11-24 depicts short range pre-terminated 40 Gigabit Ethernet QSFP to QSFP copper cable.

Figure 11-24 Cisco 40GBASE-CR4 QSFP Direct-Attach Copper Cables

Active optical cables (AOC) are much thinner and lighter than copper cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. Figure 11-25 depicts short range pre-terminated 40Gb QSFP to QSFP active optical cable.

Figure 11-25 Cisco 40GBASE-CR4 QSFP Active Optical Cables

QSFP-40G-LR4 QSFP 40 Gigabit Ethernet long-range transceiver module supports link lengths of up to 10 kilometers (~6.2 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths. Multiplexing and de-multiplexing of the four wavelengths are managed within the device.

QSFP-40GE-LR4 QSFP 40 Gigabit Ethernet long-range transceiver supports 40 Gigabit Ethernet rate only, whereas the QSFP-40G-LR4 supports OTU3 data rate in addition to 40 Gigabit Ethernet rate. Both QSFP-40G-LR4 and QSFP-40GE-LR4 operate over 1310 nanometer single-mode fiber.

WSP-Q40GLR4L QSFP 40 Gigabit Ethernet long-range transceiver supports link lengths of up to 2 kilometers (~1.25 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths. Multiplexing and de-multiplexing of the four wavelengths are managed within the device. It is interoperable with QSFP-40G-LR4 for distances up to 2 kilometers (~1.25 miles).

FET-40G QSFP 40 Gigabit Ethernet cost-effective short-range transceiver connects links between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco parent switch, such as Cisco Nexus 5000 Series switches. The interconnect will work over parallel 850 nanometer, 50 micron core size laser-optimized OM3 and OM4 multimode fiber across distances of up to 100 meters (~330 feet) and 150 meters (~500 feet), respectively. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber ribbon cables with MPO/MTP connectors or in 4×10Gb mode with parallel to duplex fiber breakout cables for connectivity to four FET-10G interfaces.

QSFP to four SFP+ Direct-Attach 40 Gigabit Ethernet pre-terminated breakout cables are suitable for very short distances and offer a highly cost-effective way to connect within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. These breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-26 depicts the 40 Gigabit Ethernet copper breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-26 Cisco 40Gb QSFP to Four SFP+ Copper Breakout Cable

Active optical breakout cables (AOC) are much thinner and lighter than copper breakout cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. These active optical breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-27 depicts the 40 Gigabit Ethernet active optical breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-27 Cisco 40Gb QSFP to Four SFP+ Active Optical Breakout Cable

Unsupported Transceiver Modules

Cisco highly discourages the use of non-officially certified and approved transceivers inside Cisco network devices. When the switch detects a non-Cisco certified transceiver, the following warning message will be displayed on the terminal:

Caution
When Cisco determines that a fault or defect can be traced to the use of third-party transceivers installed by a customer or reseller, then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program, such as SMARTnet service. In the course of providing support for a Cisco networking product, Cisco may require that the end user install Cisco transceivers if Cisco determines that removing third-party parts will assist Cisco in diagnosing the cause of a support issue.

When a product fault or defect occurs in the network, and Cisco concludes that the fault or defect is not attributable to the use of third-party transceivers (or other non-Cisco components) installed inside Cisco network devices, Cisco will continue to provide support for the affected product under warranty or covered by a Cisco support program. If a product fault or defect is detected and Cisco believes the fault or defect can be traced to the use of third-party transceivers (or other non-Cisco components), then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program.

Delivering FCoE Using Cisco Fabric Extender Architecture

Cisco Fabric Extender (FEX) technology delivers an extensible and scalable fabric solution, which complements Cisco Unified Fabric strategy. It is based on the IEEE 802.1BR (Bridge Port Extension) standard and enables the following main benefits:

Common, scalable, and adaptive server and server-adapter agnostic architecture across data center racks and points of delivery (PoD)

Simplified operations with investment protection by delivering new features and functionality on the parent switches with automatic inheritance for physical and virtual server-attached infrastructure

Single point of management and policy enforcement across thousands of network access ports

Scalable 1 Gigabit Ethernet and 10 Gigabit Ethernet server access, with no reliance on Spanning Tree Protocol for logical loop-free topology

leverage Adapter-FEX on the server to . Cisco Adapter Fabric Extender (Adapter-FEX): Extends the fabric into the server 3. and it is possible to have cascading Fabric Extender topologies deployed. Cisco Virtual Machine Fabric Extender (VM-FEX): Extends the fabric into the hypervisor for virtual machines 4. Cisco Nexus 2000 Series Fabric Extenders (FEX): Extends the fabric to the top of rack 2. Figure 11-28 Cisco Fabric Extender Architectural Choices 1. for example.Spanning Tree Protocol for logical loop-free topology Low and predictable latency at scale for any-to-any traffic patterns Extended and expanded switching fabric access layer all the way to the server hypervisor for virtualized servers or to the server converged network interface card in case of nonvirtualized servers Reduced power and cooling needs Overall reduced operating expenses and capital expenditures Cisco Fabric Extender technology is delivered in the four following architectural choices depicted in Figure 11-28. Cisco UCS 2000 Series Fabric Extender (I/O module): Extends the fabric into the Cisco UCS Blade Chassis Cisco Fabric Extender architectural choices are not mutually exclusive.

FCoE and Cisco Nexus 2000 Series Fabric Extenders

Extending FCoE to the top of rack by leveraging Cisco Nexus 2000 Series Fabric Extenders creates a scale-out Unified Fabric solution. At the time of writing this book, the following Cisco Nexus 2000 Series Fabric Extender models support FCoE: 2232PP, 2232TM, 2232TM-E, 2248PQ, 2348UPQ, and 2348TQ. Collectively, they are referred to as Cisco Nexus 2000 Series FCoE Fabric Extenders, or FCoE Fabric Extenders for simplicity.

Cisco Nexus 2000 Series FCoE Fabric Extenders operate as remote line cards to the Cisco Nexus 5000 Series parent switch. Physical host interfaces on a Cisco Nexus 2000 Series FCoE Fabric Extender are represented as logical interfaces on the Cisco Nexus 5000 Series parent switch. All operation, administration, and management functions are performed on the parent switch. FCoE Fabric Extenders have no local switching capabilities, so any traffic, even between two ports on the same FCoE Fabric Extender, must be sent to the parent switch for forwarding.

Note
To reduce the load on the control plane of the parent device, Cisco NX-OS provides the capability to offload link-level protocol processing to the FCoE Fabric Extender CPU. The following protocols are supported: Link Layer Discovery Protocol (LLDP), Data Center Bridging Exchange (DCBX), Cisco Discovery Protocol (CDP), and Link Aggregation Control Protocol (LACP).

Cisco Nexus 2000 Series FCoE Fabric Extenders support optimal cabling strategy that simplifies network operations:

Short intra-rack server cable runs: Twinax or fiber cables connect servers to top-of-rack Cisco Nexus 2000 Series FCoE Fabric Extenders.

Longer inter-rack horizontal runs of fiber: Cisco Nexus 2000 Series FCoE Fabric Extenders in each rack are connected to parent switches that are placed at the end or middle of the row.

This model allows simple cabling and easier rollout of racks and PODs. Figure 11-29 depicts the optimal cabling strategy made available by leveraging Cisco Fabric Extender architecture for Top-of-Rack deployment.

Figure 11-29 FCoE Fabric Extenders Cabling Strategy

Each rack could have two FCoE Fabric Extenders to provide in-rack server connectivity redundancy. If the rack server density is low enough, FCoE Fabric Extenders can also be deployed every few racks to accommodate the required switch interface capacity. In that case, server cabling will be extended beyond a single rack (for the racks where FCoE Fabric Extenders are absent), which can influence power consumption and cooling, and as such should be carefully considered.

Two attachment topologies exist when connecting Cisco Nexus 2000 Series FCoE Fabric Extenders to Cisco Nexus 5000 Series parent switch: straight-through and vPC. The following sections examine each one in the context of FCoE.

FCoE and Straight-Through Fabric Extender Attachment Topology

In the straight-through Fabric Extender attachment topology, each FCoE Fabric Extender is connected to a single Cisco Nexus parent switch. This parent switch exclusively manages the ports on the FCoE Fabric Extender and provides its features. Connection could be in a form of a single link or a port channel, up to a maximum number of uplinks available on the FCoE Fabric Extender. For example, Cisco Nexus 2232 FCoE Fabric Extender has eight 10 Gigabit Ethernet fabric uplinks, also known as network interfaces (NIF). A minimal provisioning sketch for this topology follows.
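The following sketch associates an FCoE Fabric Extender to its parent switch over a port channel; the FEX number 100 and the uplink interfaces are illustrative, and the same pattern appears later with its vPC additions in Example 11-4:

nexus5000(config)# feature fex
nexus5000(config)# interface ethernet 1/1-2
nexus5000(config-if-range)# switchport mode fex-fabric
nexus5000(config-if-range)# fex associate 100
nexus5000(config-if-range)# channel-group 100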

Figure 11-30 depicts straight-through FCoE Fabric Extender attachment topology.

Figure 11-30 Straight-Through FCoE Fabric Extender Attachment Topology

As long as a single uplink between the FCoE Fabric Extender and the parent switch exists, FCoE Fabric Extender will keep being registered, and its front panel interfaces, also known as host interfaces (HIF), will keep being up on the parent switch. Example 11-2 shows FCoE Fabric Extender registration information as displayed on the Cisco Nexus 5000 Series parent switch.

Example 11-2 Displaying FCoE Fabric Extender Registration Info on the Parent Switch

Click here to view code image

nexus5000# show fex 100
FEX: 100 Description: FEX0100   state: Online
  FEX version: 6.0(2)N1(1) [Switch version: 6.0(2)N1(1)]
  Extender Serial: FOC1616R00Q
  Extender Model: N2K-C2248PQ-10GE,  Part No: 73-14775-02
  Pinning-mode: static    Max-links: 1
  Fabric port for control traffic: Eth2/3
  FCoE Admin: false
  FCoE Oper: true
  FCoE FEX AA Configured: false
  Fabric interface state:
    Po100 - Interface Up. State: Active
    Eth2/1 - Interface Up. State: Active
    Eth2/2 - Interface Up. State: Active
    Eth2/3 - Interface Up. State: Active
    Eth2/4 - Interface Up. State: Active

The number of uplinks also determines the oversubscription levels introduced by the FCoE Fabric Extender into server traffic.
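To make the arithmetic concrete, assume all 32 host interfaces of a Cisco Nexus 2232 FCoE Fabric Extender drive line-rate 10 Gb traffic: with all eight 10 Gb uplinks active, the ratio is 320:80, or 4:1; with only four uplinks active, it becomes 320:40, or 8:1. With 1 Gb servers instead, the same 32 host interfaces aggregate to only 32 Gb, so even a single 10 Gb uplink yields just 3.2:1.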

For example, if all host interfaces on the Cisco Nexus 2232 FCoE Fabric Extender are connected to 1 Gb servers, Table 11-4 shows the oversubscription levels that each server will experience as its traffic passes through the Fabric Extender.

Table 11-4 Number of Fabric Extender Uplinks and Oversubscription Levels

It is recommended that you provision as many fabric uplinks as possible to limit the potential adverse effects of the oversubscription. For FCoE traffic, higher oversubscription levels will translate into more PAUSE frames and overall slower storage connectivity.

Note
It is recommended to use a power of 2 to determine the amount of FCoE Fabric Extender fabric uplinks. This facilitates better port channel hashing and more even traffic distribution across the FCoE Fabric Extender fabric uplinks.

With straight-through FCoE Fabric Extender attachment topology, you can choose to leverage either static or dynamic pinning. With static pinning mode, a range of host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender toward the servers are statically pinned to a given fabric uplink port. In the case of Cisco Nexus 2232 FCoE Fabric Extenders, static-pinning mapping looks like what's shown in Table 11-5.

Table 11-5 Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

It is also possible to display the static pinning mapping by issuing the show interface ethernet X/Y fex-intf command on the Cisco Nexus 5000 Series parent switch command-line interface, where X/Y indicates the interface toward the Cisco Nexus 2000 Series FCoE Fabric Extender, as shown in Example 11-3.

Example 11-3 Display Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

Click here to view code image

nexus5000# show interface ethernet 1/1 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/1          Eth100/1/4    Eth100/1/3    Eth100/1/2    Eth100/1/1
...
nexus5000# show interface ethernet 1/8 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/8          Eth100/1/32   Eth100/1/31   Eth100/1/30   Eth100/1/29

The essence of the static pinning operation is that if Cisco Nexus 2000 Series FCoE Fabric Extender's fabric uplink interface toward the Cisco Nexus 5000 Series parent switch fails, FCoE Fabric Extender will disable the server facing host interfaces, which are pinned to that specific fabric uplink interface. For example, if the second fabric uplink interface on the Cisco Nexus 2232 FCoE Fabric Extender fails, host interfaces 5 to 8 pinned to that fabric uplink interface will be brought down, and the servers, which are connected to these host interfaces, will see their Ethernet link go down. If a server is singly connected, it will lose its network connectivity until FCoE Fabric Extender uplink is restored or until the server is physically replugged into an operational Ethernet port. If the servers are dual-homed to two different Cisco Nexus 2000 Series FCoE Fabric Extenders, or if the servers are dual-homed to the different pinning groups on the same Cisco Nexus 2000 Series FCoE Fabric Extender, a failover event will be triggered on the server, in which case the server will failover to the redundant network connection. Figure 11-31 depicts server connectivity failover in Cisco Nexus 2232 FCoE Fabric Extender static pinning deployment.

Figure 11-31 FCoE Fabric Extender Static Pinning and Server NIC Failover
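The pinning behavior itself is driven by the max-links value on the FEX definition. The following is a minimal sketch, assuming FEX 100 is attached with four uplinks (the FEX number and link count are illustrative); note that changing max-links is disruptive to traffic, and host interfaces can afterward be repinned with the fex pinning redistribute command:

nexus5000(config)# fex 100
nexus5000(config-fex)# pinning max-links 4
nexus5000(config-fex)# exit
nexus5000(config)# exit
nexus5000# fex pinning redistribute 100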

Static pinning configuration on the Cisco Nexus 2232 FCoE Fabric Extender results in a 4:1 oversubscription ratio, where each group of four 10 Gb FCoE Fabric Extender host interfaces shares the bandwidth of one 10 Gb FCoE Fabric Extender uplink. The possible downside of static pinning is that it relies on healthy server network connectivity failover operation, which should be regularly verified.

With dynamic pinning mode, a port channeled fabric uplink interface is leveraged to pass the traffic between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco Nexus 5000 Series parent switch. A failure of a single port channel member interface does not bring down any host facing interfaces on the FCoE Fabric Extender. If an individual port channel fabric uplink member interface failure occurs, server traffic crossing the FCoE Fabric Extender is hashed onto the remaining member interfaces of the uplink port channel. This results in decreased uplink bandwidth, which translates to higher oversubscription levels for the server traffic. Figure 11-32 depicts dynamic pinning behavior on the Cisco Nexus 2232 FCoE Fabric Extender.

Figure 11-32 FCoE Fabric Extender Dynamic Pinning and Server NIC Failover

The most significant difference between static and dynamic pinning modes is that static pinning mode allows maintaining the same oversubscription levels for the traversing server traffic, even in the case of FCoE Fabric Extender uplink interface failure, while in case of dynamic pinning, the oversubscription levels will rise with any failed individual FCoE Fabric Extender uplink interface.

FCoE and vPC Fabric Extender Attachment Topology

In the vPC FCoE Fabric Extender attachment topology, each FCoE Fabric Extender is connected to both Cisco Nexus parent switches.

Figure 11-33 vPC FCoE Fabric Extender Attachment Topology Because in case of vPC FCoE Fabric Extender attachment topology both Cisco Nexus 5000 Series parent switches control the FCoE Fabric Extender. As in the case of straight-through FCoE Fabric Extender attachment.channel. and its front panel interfaces will keep being up on the parent switches. the number of uplinks determines the oversubscription levels introduced by the FCoE Fabric Extender for server traffic. As long as a single uplink exists between the FCoE Fabric Extender and any one of its two parent switches. they must be placed in a vPC domain. FCoE Fabric Extender will keep being registered. Example 11-4 shows an example of Cisco Nexus 5000 Series parent switch configuration of the FCoE Fabric Extender uplink port channel for vPC attached FCoE Fabric Extender. Figure 11-33 depicts FCoE Fabric Extender attachment topology leveraging vPC topology with either single or port-channeled fabric uplink interfaces toward each of the Cisco Nexus 5000 Series parent switches. Example 11-4 Fabric Extender Uplink Port Channel for vPC Attached FCoE Fabric Extender Click here to view code image nexus5000-1(config)# interface e1/1-4 nexus5000-1(config-if-range)# channel-group 100 nexus5000-1(config)# interface po100 nexus5000-1(config-if)# vpc 100 nexus5000-1(config-if)# switchport mode fex-fabric nexus5000-1(config-if)# fex associate 100 Relevant configuration parameters must be identical on both parent switches. Both parent switches manage the ports on that FCoE Fabric Extender and provide its features. Cisco Nexus 5000 Series parent switches automatically check for compatibility of some of these parameters on the vPC . up to a maximum number of uplinks available on the FCoE Fabric Extender.

Port channel mode (on, off, or active)

Link speed per port channel

Duplex mode per port channel

Trunk mode per port channel (native VLAN)

Spanning Tree Protocol mode

Spanning Tree Protocol region configuration for Multiple Spanning Tree (MST) Protocol

Enable or disable state per VLAN

Spanning Tree Protocol global settings

Spanning Tree Protocol interface settings

Quality of service (QoS) configuration and parameters (PFC, DWRR, MTU)

The per-interface parameters must be consistent per interface, and the global parameters must be consistent globally.

Note
The two parent switches are configured independently; therefore, configuration changes that are made to host interfaces on a vPC attached FCoE Fabric Extender must be manually synchronized between the two parent switches. To simplify the configuration tasks, Cisco Nexus 5000 Series parent switches support a configuration-synchronization feature, which allows the configuration for host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender to be synchronized between the two parent switches.
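A minimal sketch of that configuration-synchronization workflow follows; the switch-profile name, peer management address, and host interface are all illustrative assumptions:

nexus5500-1# config sync
nexus5500-1(config-sync)# switch-profile fex-hif-sync
nexus5500-1(config-sync-sp)# sync-peers destination 172.16.1.2
nexus5500-1(config-sync-sp)# interface ethernet 101/1/10
nexus5500-1(config-sync-sp-if)# switchport access vlan 10
nexus5500-1(config-sync-sp-if)# exit
nexus5500-1(config-sync-sp)# commit

The same switch-profile must also be created on the peer switch, pointing back at this switch, so that each commit is verified and applied on both parent switches.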

vPC FCoE Fabric Extender attachment topology does not support static pinning, so dynamic pinning is automatically leveraged instead.

High availability is an essential requirement in most data center designs, whether it is accomplished at the port level, the supervisor level, or even at the physical network level. To achieve connectivity redundancy, FCoE servers can use a port channeled connection toward a pair of upstream Cisco Nexus 2000 Series FCoE Fabric Extenders attached to two different Cisco Nexus parent switches, such as Cisco Nexus 5500 switches, put in a vPC domain. This setup creates a dual-layer vPC between servers, FCoE Fabric Extenders, and Cisco Nexus parent switches. It is referred to as Enhanced vPC.

Note
Cisco Nexus 5000 (5010 and 5020) switches do not support Enhanced vPC.

In this case, even though network traffic is dual homed between the FCoE Fabric Extender and a parent switch pair, the FCoE traffic must be single homed to maintain SAN Side A / Side B isolation. Figure 11-34 depicts Enhanced vPC topology where each FCoE Fabric Extender is pinned to only a single Cisco Nexus 5500 parent switch for the FCoE traffic, while network traffic is allowed to use both parent switches.

Figure 11-34 FCoE Traffic Pinning in Enhanced vPC Topology

With Enhanced vPC topology, restricting FCoE traffic to a single Cisco Nexus parent switch is called Fabric Extender pinning. Example 11-5 shows the command-line interface configuration required to pin the FCoE Fabric Extender to only a single Cisco Nexus 5500 parent switch for the FCoE traffic in Enhanced vPC topology.

Example 11-5 FCoE Traffic Pinning in Enhanced vPC Topology Configuration

Click here to view code image

nexus5500-1(config)# fex 101
nexus5500-1(config-fex)# fcoe
nexus5500-2(config)# fex 102
nexus5500-2(config-fex)# fcoe

In this configuration, although FEX 101 and FEX 102 are connected and registered to both N5500-1 and N5500-2, the FCoE traffic from FEX 101 is sent only to N5500-1, and the FCoE traffic from FEX 102 is sent only to N5500-2. As a result, SAN A and SAN B separation is achieved. This is also true for the return FCoE traffic from the Cisco Nexus 5500 parent switches toward Cisco Nexus 2000 Series FCoE Fabric Extenders where FCoE servers are connected.
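Assuming the FEX numbering from Example 11-5, one way to confirm the pinning is to re-run the show fex command introduced in Example 11-2 and check the FCoE fields, where FCoE Admin reflects the fcoe keyword; the output below is abbreviated and illustrative:

nexus5500-1# show fex 101
FEX: 101 Description: FEX0101   state: Online
  ...
  FCoE Admin: true
  FCoE Oper: true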

as discussed earlier in this chapter. Adapter-FEX Cisco Adapter-FEX technology virtualizes PCIe bus. you can leverage straight-through topology for FCoE deployments. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment. FCoE and Server Fabric Extenders Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. leveraging Adapter-FEX technology. including the ones going to the second Cisco Nexus parent switch. while the network traffic is hashed across all FCoE Fabric Extender uplinks. We are now going to further extend those advantages into the servers. as well as the server physical adapters. each type of traffic can take all available bandwidth. . Straight-through FCoE Fabric Extender topology also allows easier control of how the bandwidth of FCoE Fabric Extender uplink is shared between the network and FCoE traffic by configuring an ingress and egress queuing policy. Virtualization technologies and the increased number of CPU cores. presenting a server operating system with a set of virtualized adapters for network and storage connectivity. The default QoS template allocates half of the bandwidth to each one.parent switch for SAN side separation. require servers to be equipped with a large number of network connections. When there is no congestion. This results in a reduced number of network and storage switch ports. Figure 11-35 Adapter-FEX Server Topologies The Cisco Adapter-FEX technology addresses these main challenges: Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. however. To prevent such an imbalance of traffic distribution across FCoE Fabric Extender uplinks. Enough FCoE Fabric Extender uplink bandwidth should be provisioned to prevent the undesirable oversubscription of links that carry both FCoE and network traffic. so network and FCoE traffic each get half of the guaranteed bandwidth in case of congestion.

FCoE and Server Fabric Extenders

Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. We are now going to further extend those advantages into the servers, leveraging Adapter-FEX technology.

Adapter-FEX

Cisco Adapter-FEX technology virtualizes the PCIe bus, presenting a server operating system with a set of virtualized adapters for network and storage connectivity. This results in a reduced number of network and storage switch ports, as well as the server physical adapters. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment.

Figure 11-35 Adapter-FEX Server Topologies

The Cisco Adapter-FEX technology addresses these main challenges:

Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. Virtualization technologies and the increased number of CPU cores, however, require servers to be equipped with a large number of network connections. This increase has a tremendous impact on CapEx and OpEx because of the large number of adapters, cables, and switch ports, which directly affects power, cooling, management, and cable management costs.

In virtualized environments, network administrators experience lack of visibility into the traffic that is exchanged among virtual machines residing on the same host. Network administrators face challenges in establishing policies, enforcing policies, and maintaining policy consistently across mobility events. They also often encounter a dramatic increase in the number of management points, disparate provisioning, management, and operational models, and inconsistency between the physical and virtual data center access layers.

With a consolidated infrastructure, it is challenging to provide guaranteed bandwidth, latency, and isolation of traffic across multiple cores and virtual machines.

Network administrators struggle to link the virtualized network infrastructure and virtualized server platforms to the physical network infrastructure without degrading performance and with a consistent feature set and operational model.

Virtual Ethernet Interfaces

By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual network interface card (vNIC) server interface mapped to the virtual Ethernet (vEth) interface on the upstream Cisco Nexus 5500 switch. The server can either be directly connected to the Cisco Nexus 5500 switch or it can be connected through the Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch. Features available on the Cisco Nexus 5500 switches, such as ACLs, private VLANs, QoS, and so on, can be equally applied on the vEth interfaces representing vNICs.

No licensing requirement exists on the Cisco Nexus 5500 switch to run Adapter-FEX; however, it does require feature installation and activation, as shown in Example 11-6.

Example 11-6 Installing and Activating the Cisco Adapter-FEX Feature on Cisco Nexus 5500 Switches

Click here to view code image

nexus5500(config)# install feature-set virtualization
nexus5500(config)# feature-set virtualization

Note
Cisco Nexus 5000 (5010 and 5020) switches do not support Adapter-FEX.

Before vEth interfaces can be created, the physical Cisco Nexus 5500 switch interface the server is connected to must be configured with VN-Tag mode. Before VN-Tag mode is activated, the network adapter on the server operates in a Classical Ethernet mode, in which case it has full network connectivity via its physical (nonvirtualized) interface to the upstream Cisco Nexus 5500 switch or Cisco Nexus 2000 Series FCoE Fabric Extender. Data Center Bridging Exchange (DCBX) protocol must also run between the Cisco Nexus 5500 switch interface and the connected server. Example 11-7 depicts command-line interface configuration required to enable VN-Tag mode on the switch interface or the FCoE Fabric Extender host interface.

Example 11-7 Switch Interface VN-Tag Mode Configuration

Click here to view code image

nexus5500-1(config)# int eth101/1/1
nexus5500-1(config-if)# switchport mode vntag
nexus5500-1(config)# int eth1/1
nexus5500-1(config-if)# switchport mode vntag

After VN-Tag mode has been activated, the server starts to include Network Interface Virtualization (NIV) capability type-length-value (TLV) fields in its Data Center Bridging Exchange (DCBX) advertisements. Be sure to reboot the server to switch Converged Network Adapter operation from Classical Ethernet to Network Interface Virtualization.

For the Cisco UCS C-Series servers, to validate that the Converged Network Adapter, that is, the UCS Virtual Interface Card, is operating in NIV mode, supporting Converged Network Adapter virtualization, you can inspect the Cisco UCS Integrated Management Controller (CIMC) tool and observe the Encap type of NIV listed for the physical adapter, as depicted in Figure 11-36.

Figure 11-36 Displaying Network Adapter Mode of Operation on Cisco UCS C-Series VIC

It is important to distinguish between static and dynamic vEth interface provisioning, as well as between fixed and floating vEth interfaces. With a static vEth interface, the network administrator manually configures the interface provisioning model parameters and specifies which channel a given vEth interface uses. It is possible to associate configuration to the interface before it is even brought up, that is, before the server administrator completes all the necessary configuration tasks on the server side to bring up the vEth-to-vNIC relationship.

Example 11-8 shows vEth configuration and the channel binding. In this configuration, the server is connected to a Cisco Nexus 2000 Series FCoE Fabric Extender host interface Ethernet 101/1/1, which is bound to a vEth interface 10 on the upstream Cisco Nexus 5500 switch, leveraging channel 50. The server administrator must define a vNIC on the Converged Network Adapter with the same channel number; that is, the vNIC on the Converged Network Adapter also must be configured to use channel 50 for successful binding.

Example 11-8 Static vEth Interface Provisioning

Click here to view code image

nexus5500(config)# interface veth10
nexus5500(config-if)# bind interface ethernet 101/1/1 channel 50
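As a quick check after both sides are configured, the binding can be read back from the running configuration; the output sketch below is illustrative:

nexus5500# show running-config interface vethernet 10

interface Vethernet10
  bind interface Ethernet101/1/1 channel 50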

This enables the network administrator to use the lower range for statically provisioned vEth interfaces without being concerned about interface number overlap. Virtual machines often migrate from one physical server to another.It is required to have a unique channel number given to each network interface. The dynamic provisioning model of Adapter-FEX is based on the concept of port profiles of type vethernet. Example 11-9 vethernet Port Profile for Adapter-FEX Click here to view code image port-profile type vethernet NIC-VLAN10 switchport access vlan 10 switchport mode access state enabled Port profiles are configured on the Cisco Nexus 5500 switch and are communicated to the Converged Network Adapter on the server through the VIC protocol. to associate port profiles to vNICs. causing them to be connected through . This could present an issue in server virtualization environments. Example 11-9 depicts an example of a vethernet port profile setting port mode to access and assigning it to VLAN 10. such as UCS Cisco Integrated Management Controller. This way. To preserve this dynamically instantiated vEth interface across reboots. where each vNIC in the Converged Network Adapter might be associated with the vNIC of the virtual machine. the dynamic provisioning model is preferred. which does not support migration across physical switch interfaces. Example 11-10 Dynamic vEth Interface Provisioning Click here to view code image nexus5500(config)# interface vethernet32769 nexus5500(config-if)# inherit port-profile NIC-VLAN50 nexus5500(config-if)# bind interface Ethernet105/1/2 channel 1 nexus5500(config)# interface vethernet32770 nexus5500(config-if)# inherit port-profile NIC-VLAN60 nexus5500(config-if)# bind interface Ethernet105/1/2 channel 2 The interface numbering for dynamically instantiated vEth interfaces is always above 32768. the association of the vEth interface to the physical switch interface and the channel will be already present. A fixed vEth interface is a virtual interface. which in turn triggers the dynamic creation of the vEth interface on the upstream Cisco Nexus 5500 switch. Example 11-10 shows a dynamic provisioning model to instantiate two vEth interfaces on the Cisco Nexus 5500 switch leveraging port profiles. upon reboot of the Cisco Nexus 5500 switch. The server administrator can also leverage Converged Network Adapter management tools. which