About This eBook
ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many
features varies across reading devices and applications. Use your device or app settings to customize
the presentation to your liking. Settings that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For
additional information about the settings and features on your reading device or app, visit the device
manufacturer’s Web site.
Many titles include programming code or configuration examples. To optimize the presentation of
these elements, view the eBook in single-column, landscape mode and adjust the font size to the
smallest setting. In addition to presenting code and configurations in the reflowable text format, we
have included images of the code that mimic the presentation found in the print book; therefore, where
the reflowable format may compromise the presentation of the code listing, you will see a “Click here
to view code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.

CCNA Data Center
DCICT 640-916 Official Cert Guide

NAVAID SHAMSEE, CCIE No. 12625
DAVID KLEBANOV, CCIE No. 13791
HESHAM FAYED, CCIE No. 9303
AHMED AFROSE
OZDEN KARAKOK, CCIE No. 6331

Cisco Press
800 East 96th Street
Indianapolis, IN 46240

CCNA Data Center DCICT 640-916 Official Cert Guide
Navaid Shamsee, David Klebanov, Hesham Fayed, Ahmed Afrose, Ozden Karakok
Copyright © 2015 Pearson Education, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, recording, or by any information storage
and retrieval system, without written permission from the publisher, except for the inclusion of brief
quotations in a review.
Printed in the United States of America
First Printing February 2015
Library of Congress Control Number: 2015930189
ISBN-13: 978-1-58714-422-6
ISBN-10: 1-58714-422-0

Warning and Disclaimer
This book is designed to provide information about the 640-916 DCICT exam for CCNA Data Center
certification. Every effort has been made to make this book as complete and as accurate as possible,
but no warranty or fitness is implied.
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc.
shall have neither liability nor responsibility to any person or entity with respect to any loss or
damages arising from the information contained in this book or from the use of the discs or programs
that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco
Systems, Inc.

Trademark Acknowledgements
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this
information. Use of a term in this book should not be regarded as affecting the validity of any
trademark or service mark.

Special Sales
For information about buying this title in bulk quantities, or for special sales opportunities (which
may include electronic versions; custom cover designs; and content particular to your business,
training goals, marketing focus, or branding interests), please contact our corporate sales department at corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact international@pearsoned.com.

Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each
book is crafted with care and precision, undergoing rigorous development that involves the unique
expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how
we could improve the quality of this book, or otherwise alter it to better suit your needs, you can
contact us through email at feedback@ciscopress.com. Please make sure to include the book title and
ISBN in your message.
We greatly appreciate your assistance.
Publisher: Paul Boger
Associate Publisher: Dave Dusthimer
Business Operation Manager, Cisco Press: Jan Cornelssen
Executive Editor: Mary Beth Ray
Managing Editor: Sandra Schroeder
Development Editor: Ellie Bru
Senior Project Editor: Tonya Simpson
Copy Editor: Barbara Hacha
Technical Editor: Frank Dagenhardt
Editorial Assistant: Vanessa Evans
Cover Designer: Mark Shirar
Composition: Mary Sudul
Indexer: Ken Johnson
Proofreader: Debbie Williams

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.
Singapore
Europe Headquarters
Cisco Systems International BV
Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed
on the Cisco Website at www.cisco.com/go/offices.
CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco Nexus, Cisco
StadiumVision, Cisco TelePresence, Cisco WebEx, DCE, and Welcome to the Human Network are
trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks;
and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP,
CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo,
Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me
Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort,
the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,
MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective
owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company. (0812R)

About the Authors
Navaid Shamsee, CCIE No. 12625, is a senior solutions architect in the Cisco Services organization.
He holds a master’s degree in telecommunication and a bachelor’s degree in electrical engineering.
He also holds a triple CCIE in routing and switching, service provider, and data center technologies.
Navaid has extensive experience in designing and implementing many large-scale enterprise and
service provider data centers. At Cisco, Navaid focuses on the security of data center, cloud, and software-defined networking technologies. You can reach Navaid on Twitter: @NavaidShamsee.
David Klebanov, CCIE No. 13791 (Routing and Switching), is a technical solutions architect with
Cisco Systems. David has more than 15 years of diverse industry experience architecting and
deploying complex network environments. In his work, David influences strategic development of the
industry-leading data center switching platforms, which lay the foundation for the next generation of
data center fabrics. David also takes great pride in speaking at industry events, releasing
publications, and working on patents. You can reach David on Twitter: @DavidKlebanov.
Hesham Fayed, CCIE No. 9303 (Routing and Switching/Data Center), is a consulting systems
engineer for data center and virtualization based in California. Hesham has been with Cisco for more
than 9 years and has 18 years of experience in the computer industry, working with service providers
and large enterprises. His main focus is working with customers in the western region of the United
States to address their challenges by developing end-to-end data center architectures.
Ahmed Afrose is a solutions architect with the Cisco Cloud and IT Transformation (CITT) services
organization. He is responsible for providing architectural design guidance and leading complex
multitech service deliveries. Furthermore, he is involved in demonstrating the Cisco value
propositions in cloud software, application automation, software-defined data centers, and Cisco
Unified Computing System (UCS). He is experienced in server operating systems and virtualization
technologies and holds various certifications. Ahmed has a bachelor’s degree in Information Systems.
He started his career with Sun Microsystems-based technologies and has 15 years of diverse
experience in the industry. He was also directly responsible for setting up Cisco UCS Advanced
Services delivery capabilities while evangelizing the product in the region.
Ozden Karakok, CCIE No. 6331, is a technical leader on the data center products and technologies team in the Technical Assistance Center (TAC). Ozden has been with Cisco Systems for
15 years and specializes in storage area and data center networks. Prior to joining Cisco, Ozden spent
five years working for a number of Cisco’s large customers in various telecommunication roles.
Ozden is a Cisco Certified Internetwork Expert in routing and switching, SNA/IP, and storage. She
holds VCP and ITIL certifications and is a frequent speaker at Cisco and data center events. She
holds a degree in computer engineering from Istanbul Bogazici University. Currently, she works on
Application Centric Infrastructure (ACI) and enjoys being a mother of two wonderful kids.

About the Technical Reviewer
Frank Dagenhardt, CCIE No. 42081, is a systems engineer for Cisco, focusing primarily on data
center architectures in commercial select accounts. Frank has more than 19 years of experience in
information technology and holds certifications from HP, Microsoft, Citrix, and Cisco. A Cisco
veteran of more than eight years, he works with customers daily, designing, implementing, and
supporting end-to-end architectures and solutions, such as workload mobility, business continuity,
and cloud/orchestration services. In recent months, he has been focusing on Tier 1 applications,
including VDI, Oracle, and SAP. He presented on the topic of UCS/VDI at Cisco Live 2013. When
not thinking about data center topics, Frank can be found backpacking, kayaking, or at home spending
time with his wife Sansi and sons Frank and Everett.

Dedications
Navaid Shamsee: To my parents, for their guidance and prayers. To my wife, Hareem, for her love
and support, and to my children, Ahasn, Rida, and Maria.
David Klebanov: This book is dedicated to my beautiful wife, Tanya, and our two wonderful
daughters, Lia and Maya. You fill my life with great joy and make everything worth doing! In the
words of a song: “Everything I do, I do it for you.” This book is also dedicated to all of you who
relentlessly pursue the path of becoming Cisco-certified professionals. Through your dedication you
build your own success. You are the industry leaders!
Hesham Fayed: This book is dedicated to my lovely wife, Eman, and my beautiful children, Ali,
Laila, Hamza, Fayrouz, and Farouk. Thank you for your understanding and support, especially when I
was working on the book during our vacation to meet deadlines. Without your support and
encouragement, I would never have been able to finish this book. I can't forget to acknowledge my parents; your guidance and education, and the way you always made me strive to be better, helped me in my journey.
Ahmed Afrose: I dedicate this book to my parents, Hilal and Faiziya. Without them, none of this would have been possible. I also dedicate it to those who are deeply curious and advocate self-advancement.
Ozden Karakok: To my loving husband, Tom, for his endless support, encouragement, and love.
Merci beaucoup, Askim.
To Remi and Mira, for being the most incredible miracles of my life and being my number one source
of happiness.
To my wonderful parents, who supported me in every chapter of my life and are an inspiration for
life.
To my awesome sisters, Gulden and Cigdem, for their unconditional support and loving me just the
way I am.
To the memory of Steve Ritchie, for being the best mentor ever.

Acknowledgements
To my co-authors, Afrose, David, Hesham, and Ozden: Thank you for your hard work and dedication.
It was my pleasure and honor to work with such a great team of co-authors. Without your support, this
book would not have been possible.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback and suggestions. Your input helped us improve the quality and accuracy of the content.
To Mary Beth Ray, Ellie Bru, Tonya Simpson, and the Cisco Press team: A big thank you to the entire
Cisco Press team for all their support in getting this book published. Special thanks to Mary Beth and
Ellie for keeping us on track and guiding us in the journey of writing this book.
—Navaid Shamsee
My deepest gratitude goes to all the people who have shared my journey over the years and who have
inspired me to always strive for more. I have been truly blessed to have worked with the most
talented group of individuals.
—David Klebanov
I’d like to thank my co-authors Navaid, Afrose, Ozden, and David for working as a team to complete
this project. A special thanks goes to Navaid for asking me to be a part of this book. Afrose, thank
you for stepping in to help me complete Chapter 18 so we could meet the deadline; really, thank you.
Mary Beth and Ellie, thank you both for your patience and support through my first publication. It has
been an honor working with you both, and I have learned a lot during this process.
I want to thank my family for their support and patience while I was working on this book. I know it was not easy, especially on holidays and weekends, when I had to stay in and work on the book rather than take you out.
—Hesham Fayed
The creation of this book, my first publication, has certainly been a tremendous experience, a source
of inspiration, and a whirlwind of activity. I do hope to impart more knowledge and experience in the
future on different pertinent topics in this ever-changing world of data center technologies and
practices.
Navaid Shamsee has been a good friend and colleague. I am so incredibly thankful to him for this
awesome opportunity. He has helped me achieve a milestone in my career by offering me the
opportunity to be associated with this publication and the world-renowned Cisco Press.
I’m humbled by this experienced and talented team of co-authors: Navaid, David, Hesham, and
Ozden. I enjoyed working with a team of like-minded professionals.
I am also thankful to all our professional editors, especially Mary Beth Ray and Ellie Bru for their
patience and guidance every step of the way. A big thank you to all the folks involved in production,
publishing, and bringing this book to the shelves.
I would also like to thank our technical reviewer, Frank Dagenhardt, for his keenness and attention to detail; his feedback helped us gauge depth and consistency and improved the overall quality of the material for readers.
And last, but not least, a special thanks to my girlfriend, Anna, for understanding all the hours I needed to spend over my keyboard, almost ignoring her. You still cared, kept me energized, and encouraged me with a smile. You were always on my mind; thanks for the support, honey.
—Ahmed Afrose
This book would never have become a reality without the help, support, and advice of a great number
of people.
I would like to thank my great co-authors Navaid, David, Afrose, and Hesham. Thank you for your
excellent collaboration, hard work, and priceless time. I truly enjoyed working with each one of you. I
really appreciate Navaid taking the lead and being the glue of this diverse team. It was a great
pleasure and honor working with such professional engineers. It would have been impossible to
finish this book without your support.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback, suggestions, hard work, and quick turnaround. Your excellent input helped us improve the
quality and accuracy of the content. It was a great pleasure for all of us to work with you.
To Mary Beth, Ellie, and the Cisco Press team: A big thank you to the entire Cisco Press team for all
their support in getting this book published. Special thanks to Mary Beth and Ellie for keeping us on
track and guiding us in the journey of writing this book.
To the extended teams at Cisco: Thank you for responding to our emails and phone calls even during
the weekends. Thank you for being patient while our minds were in the book. Thank you for covering
the shifts. Thank you for believing in and supporting us on this journey. Thank you for the world-class
organization at Cisco.
To my colleagues Mike Frase, Paul Raytick, Mike Brown, Robert Hurst, Robert Burns, Ed Mazurek,
Matthew Winnett, Matthias Binzer, Stanislav Markovic, Serhiy Lobov, Barbara Ledda, Levent
Irkilmez, Dmitry Goloubev, Roland Ducomble, Gilles Checkroun, Walter Dey, Tom Debruyne, and
many wonderful engineers: Thank you for being supportive in different stages of my career. I learned
a lot from you.
To our Cisco customers and partners: Thank you for challenging us to be the number one IT company.
To all extended family and friends: Thank you for your patience and giving us moral support in this
long journey.
—Ozden Karakok

Contents at a Glance
Introduction
Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
Chapter 2 Cisco Nexus Product Family
Chapter 3 Virtualizing Cisco Network Devices
Chapter 4 Data Center Interconnect
Chapter 5 Management and Monitoring of Cisco Nexus Devices
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
Chapter 7 Cisco MDS Product Family
Chapter 8 Virtualizing Storage
Chapter 9 Fibre Channel Storage-Area Networking
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
Chapter 11 Data Center Bridging and FCoE
Chapter 12 Multihop Unified Fabric
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
Chapter 14 Cisco Unified Computing System Manager
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
Chapter 18 Cisco Nexus 1000V and Virtual Switching
Part V Review

Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
Chapter 20 Cisco WAAS and Application Acceleration
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Contents
Introduction
Getting Started
A Brief Perspective on the CCNA Data Center Certification Exam
Suggestions for How to Approach Your Study with This Book
This Book Is Compiled from 20 Short Read-and-Review Sessions
Practice, Practice, Practice—Did I Mention: Practice?
In Each Part of the Book You Will Hit a New Milestone
Use the Final Review Chapter to Refine Skills
Set Goals and Track Your Progress
Other Small Tasks Before Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Modular Approach in Network Design
The Multilayer Network Design
Access Layer
Aggregation Layer
Core Layer
Collapse Core Design
Spine and Leaf Design
Port Channel
What Is Port Channel?
Benefits of Using Port Channels
Port Channel Compatibility Requirements
Link Aggregation Control Protocol
Port Channel Modes
Configuring Port Channel
Port Channel Load Balancing
Verifying Port Channel Configuration
Virtual Port Channel
What Is Virtual Port Channel?
Benefits of Using Virtual Port Channel
Components of vPC

vPC Data Plane Operation
vPC Control Plane Operation
vPC Limitations
Configuration Steps of vPC
Verification of vPC
FabricPath
Spanning Tree Protocol
What Is FabricPath?
Benefits of FabricPath
Components of FabricPath
FabricPath Frame Format
FabricPath Control Plane
FabricPath Data Plane
Conversational Learning
FabricPath Packet Flow Example
Unknown Unicast Packet Flow
Known Unicast Packet Flow
Virtual Port Channel Plus
FabricPath Interaction with Spanning Tree
Configuring FabricPath
Verifying FabricPath
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 2 Cisco Nexus Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Describe the Cisco Nexus Product Family
Cisco Nexus 9000 Family
Cisco Nexus 9500 Family
Chassis
Supervisor Engine
System Controller
Fabric Modules
Line Cards
Power Supplies

Fan Trays
Cisco QSFP Bi-Di Technology for 40 Gbps Migration
Cisco Nexus 9300 Family
Cisco Nexus 7000 and Nexus 7700 Product Family
Cisco Nexus 7004 Series Switch Chassis
Cisco Nexus 7009 Series Switch Chassis
Cisco Nexus 7010 Series Switch Chassis
Cisco Nexus 7018 Series Switch Chassis
Cisco Nexus 7706 Series Switch Chassis
Cisco Nexus 7710 Series Switch Chassis
Cisco Nexus 7718 Series Switch Chassis
Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Cisco Nexus 7000 Series Supervisor 1 Module
Cisco Nexus 7000 Series Supervisor 2 Module
Cisco Nexus 7000 and Nexus 7700 Fabric Modules
Cisco Nexus 7000 and Nexus 7700 Licensing
Cisco Nexus 7000 and Nexus 7700 Line Cards
Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW DC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW DC Power Supply Module
Cisco Nexus 6000 Product Family
Cisco Nexus 6001P and Nexus 6001T Switches and Features
Cisco Nexus 6004 Switches Features
Cisco Nexus 6000 Switches Licensing Options
Cisco Nexus 5500 and Nexus 5600 Product Family
Cisco Nexus 5548P and 5548UP Switches Features
Cisco Nexus 5596UP and 5596T Switches Features
Cisco Nexus 5500 Products Expansion Modules
Cisco Nexus 5600 Product Family
Cisco Nexus 5672UP Switch Features
Cisco Nexus 56128P Switch Features
Cisco Nexus 5600 Expansion Modules
Cisco Nexus 5500 and Nexus 5600 Licensing Options
Cisco Nexus 3000 Product Family
Cisco Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
Cisco Dynamic Fabric Automation
Summary
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 3 Virtualizing Cisco Network Devices
“Do I Know This Already?” Quiz
Foundation Topics
Describing VDCs on the Cisco Nexus 7000 Series Switch
VDC Deployment Scenarios
Horizontal Consolidation Scenarios
Vertical Consolidation Scenarios
VDCs for Service Insertion
Understanding Different Types of VDCs
Interface Allocation
VDC Administration
VDC Requirements
Verifying VDCs on the Cisco Nexus 7000 Series Switch
Describing Network Interface Virtualization
Cisco Nexus 2000 FEX Terminology
Nexus 2000 Series Fabric Extender Connectivity
VN-Tag Overview
Cisco Nexus 2000 FEX Packet Flow
Cisco Nexus 2000 FEX Port Connectivity
Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 4 Data Center Interconnect
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to Overlay Transport Virtualization
OTV Terminology

OTV Control Plane
Multicast-Enabled Transport Infrastructure
Unicast-Enabled Transport Infrastructure
Data Plane for Unicast Traffic
Data Plane for Multicast Traffic
Failure Isolation
STP Isolation
Unknown Unicast Handling
ARP Optimization
OTV Multihoming
First Hop Redundancy Protocol Isolation
OTV Configuration Example with Multicast Transport
OTV Configuration Example with Unicast Transport
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 5 Management and Monitoring of Cisco Nexus Devices
“Do I Know This Already?” Quiz
Foundation Topics
Operational Planes of a Nexus Switch
Data Plane
Store-and-Forward Switching
Cut-Through Switching
Nexus 5500 Data Plane Architecture
Control Plane
Nexus 5500 Control Plane Architecture
Control Plane Policing
Control Plane Analyzer
Management Plane
Nexus Management and Monitoring Features
Out-of-Band Management
Console Port
Connectivity Management Processor
Management Port (mgmt0)
In-band Management

Role-Based Access Control
User Roles
Rules
User Role Policies
RBAC Characteristics and Guidelines
Privilege Levels
Simple Network Management Protocol
SNMP Notifications
SNMPv3
Remote Monitoring
RMON Alarms
RMON Events
Syslog
Embedded Event Manager
Event Statements
Action Statements
Policies
Generic Online Diagnostics (GOLD)
Smart Call Home
Nexus Device Configuration and Verification
Perform Initial Setup
In-Service Software Upgrade
ISSU on the Cisco Nexus 5000
ISSU on the Cisco Nexus 7000
Important CLI Commands
Command-Line Help
Automatic Command Completion
Entering and Exiting Configuration Context
Display Current Configuration
Command Piping and Parsing
Feature Command
Verifying Modules
Verifying Interfaces
Viewing Logs
Viewing NVRAM logs
Exam Preparation Tasks
Review All Key Topics

Complete Tables and Lists from Memory
Define Key Terms
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
“Do I Know This Already?” Quiz
Foundation Topics
What Is a Storage Device?
What Is a Storage-Area Network?
How to Access a Storage Device
Block-Level Protocols
File-Level Protocols
Putting It All Together
Storage Architectures
Scale-up and Scale-out Storage
Tiered Storage
SAN Design
SAN Design Considerations
1. Port Density and Topology Requirements
2. Device Performance and Oversubscription Ratios
3. Traffic Management
4. Fault Isolation
5. Control Plane Scalability
SAN Topologies
Fibre Channel
FC-0: Physical Interface
FC-1: Encode/Decode
Encoding and Decoding
Ordered Sets
FC-2: Framing Protocol
Fibre Channel Service Classes
Fibre Channel Flow Control
FC-3: Common Services
Fibre Channel Addressing
Switched Fabric Address Space
Fibre Channel Link Services

Fibre Channel Fabric Services
FC-4: ULPs—Application Protocols
Fibre Channel—Standard Port Types
Virtual Storage-Area Network
Dynamic Port VSAN Membership
Inter-VSAN Routing
Fibre Channel Zoning and LUN Masking
Device Aliases Versus Zone-Based (FC) Aliases
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 7 Cisco MDS Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Once Upon a Time There Was a Company Called Andiamo
The Secret Sauce of Cisco MDS: Architecture
A Day in the Life of a Fibre Channel Frame in MDS 9000
Lossless Networkwide In-Order Delivery Guarantee
Cisco MDS Software and Storage Services
Flexibility and Scalability
Common Software Across All Platforms
Multiprotocol Support
Virtual SANs
Intelligent Fabric Applications
Network-Assisted Applications
Cisco Data Mobility Manager
I/O Accelerator
XRC Acceleration
Network Security
Switch and Host Authentication
IP Security for FCIP and iSCSI
Cisco TrustSec Fibre Channel Link Encryption
Role-Based Access Control
Port Security and Fabric Binding
Zoning

Availability
Nondisruptive Software Upgrades
Stateful Process Failover
ISL Resiliency Using Port Channels
iSCSI, FCIP, and Management-Interface High Availability
Port Tracking for Resilient SAN Extension
Manageability
Open APIs
Configuration and Software-Image Management
N-Port Virtualization
Autolearn for Network Security Configuration
FlexAttach
Cisco Data Center Network Manager Server Federation
Network Boot for iSCSI Hosts
Internet Storage Name Service
Proxy iSCSI Initiator
iSCSI Server Load Balancing
IPv6
Traffic Management
Quality of Service
Extended Credits
Virtual Output Queuing
Fibre Channel Port Rate Limiting
Load Balancing of Port Channel Traffic
iSCSI and SAN Extension Performance Enhancements
FCIP Compression
FCIP Tape Acceleration
Serviceability, Troubleshooting, and Diagnostics
Cisco Switched Port Analyzer and Cisco Fabric Analyzer
SCSI Flow Statistics
Fibre Channel Ping and Fibre Channel Traceroute
Call Home
System Log
Other Serviceability Features
Cisco MDS 9000 Family Software License Packages
Cisco MDS Multilayer Directors
Cisco MDS 9700

Cisco MDS 9500
Cisco MDS 9000 Series Multilayer Components
Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9000 Family Pluggable Transceivers
Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9148
Cisco MDS 9148S
Cisco MDS 9222i
Cisco MDS 9250i
Cisco MDS Fibre Channel Blade Switches
Cisco Prime Data Center Network Manager
Cisco DCNM SAN Fundamentals Overview
DCNM-SAN Server
Licensing Requirements for Cisco DCNM-SAN Server
Device Manager
DCNM-SAN Web Client
Performance Manager
Authentication in DCNM-SAN Client
Cisco Traffic Analyzer
Network Monitoring
Performance Monitoring
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 8 Virtualizing Storage
“Do I Know This Already?” Quiz
Foundation Topics

What Is Storage Virtualization?
How Storage Virtualization Works
Why Storage Virtualization?
What Is Being Virtualized?
Block Virtualization—RAID
Virtualizing Logical Unit Numbers
Tape Storage Virtualization
Virtualizing Storage-Area Networks
N-Port ID Virtualization (NPIV)
N-Port Virtualizer (NPV)
Virtualizing File Systems
File/Record Virtualization
Where Does the Storage Virtualization Occur?
Host-Based Virtualization
Network-Based (Fabric-Based) Virtualization
Array-Based Virtualization
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 9 Fibre Channel Storage-Area Networking
“Do I Know This Already?” Quiz
Foundation Topics
Where Do I Start?
The Cisco MDS NX-OS Setup Utility—Back to Basics
The Power On Auto Provisioning
Licensing Cisco MDS 9000 Family NX-OS Software Features
License Installation
Verifying the License Configuration
On-Demand Port Activation Licensing
Making a Port Eligible for a License
Acquiring a License for a Port
Moving Licenses Among Ports
Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
Configuring Interfaces
Graceful Shutdown
Port Administrative Speeds
Frame Encapsulation
Bit Error Thresholds
Local Switching
Dedicated and Shared Rate Modes
Slow Drain Device Detection and Congestion Avoidance
Management Interfaces
VSAN Interfaces
Verify Initiator and Target Fabric Login
VSAN Configuration
Port VSAN Membership
Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch
About Smart Zoning
About LUN Zoning and Assigning LUNs to Storage Subsystems
About Enhanced Zoning
VSAN Trunking and Setting Up ISL Port
Trunk Modes
Port Channel Modes
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
“Do I Know This Already?” Quiz
Foundation Topics
Challenges of Today’s Data Center Networks
Cisco Unified Fabric Principles
Convergence of Network and Storage
Scalability and Growth
Security and Intelligence

Inter-Data Center Unified Fabric
Fibre Channel over IP
Overlay Transport Virtualization
Locator ID Separation Protocol
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 11 Data Center Bridging and FCoE
“Do I Know This Already?” Quiz
Foundation Topics
Data Center Bridging
Priority Flow Control
Enhanced Transmission Selection
Data Center Bridging Exchange
Quantized Congestion Notification
Fibre Channel Over Ethernet
FCoE Logical Endpoint
FCoE Port Types
FCoE ENodes
Fibre Channel Forwarder
FCoE Encapsulation
FCoE Initialization Protocol
Converged Network Adapter
FCoE Speed
FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
Optical Cabling
Transceiver Modules
SFP Transceiver Modules
SFP+ Transceiver Modules
QSFP Transceiver Modules
Unsupported Transceiver Modules
Delivering FCoE Using Cisco Fabric Extender Architecture
FCoE and Cisco Nexus 2000 Series Fabric Extenders
FCoE and Straight-Through Fabric Extender Attachment Topology
FCoE and vPC Fabric Extender Attachment Topology
FCoE and Server Fabric Extenders

Adapter-FEX
Virtual Ethernet Interfaces
FCoE and Virtual Fibre Channel Interfaces
Virtual Interface Redundancy
FCoE and Cascading Fabric Extender Topologies
Adapter-FEX and FCoE Fabric Extenders in vPC Topology
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 12 Multihop Unified Fabric
“Do I Know This Already?” Quiz
Foundation Topics
First Hop Access Layer Consolidation
Aggregation Layer FCoE Multihop
FIP Snooping
FCoE-NPV
Full Fibre Channel Switching
Dedicated Wire Versus Unified Wire
Cisco FabricPath FCoE Multihop
Dynamic FCoE FabricPath Topology
Dynamic FCoE Virtual Links
Dynamic Virtual Fibre Channel Interfaces
Redundancy and High Availability
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Server Computing in a Nutshell

What Is a Socket?
What Is a Core?
What Is Hyperthreading?
Understanding Server Processor Numbering
Value of Cisco UCS in the Data Center
One Unified System
Unified Fabric
Unified Management and Operations
Cisco UCS Stateless Computing
Intelligent Automation
Cloud-Ready Infrastructure
Describing the Cisco UCS Product Family
Cisco UCS Computer Hardware Naming
Cisco UCS Fabric Interconnects
What Is a Unified Port?
Cisco UCS Fabric Interconnect—Expansion Modules
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 5108 Blade Server Chassis—Power Supply and Redundancy Modes
Cisco UCS I/O Modules (FEX)
Cisco UCS B-series Blade Servers
Cisco UCS B-series Best Practices for Populating DRAM
Cisco Adapters (Mezzanine Cards) for UCS B-series Blade Servers
Cisco Converged Network Adapters (CNA)
Cisco Virtual Interface Cards
Cisco UCS Port Expander for VIC 1240
What Is Cisco Adapter FEX?
What Is Cisco Virtual Machine-Fabric Extender (VM-FEX)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS C-series Rackmount Servers
Cisco UCS C-series Best Practices for Populating DRAM
Cisco Adapters for UCS C-series Rackmount Servers
Cisco UCS C-series RAID Adapter Options
Cisco UCS C-series Virtual Interface Cards and OEM CNA Adapters
What Is SR-IOV (Single Root-IO Virtualization)?
Cisco UCS C-series Network Interface Card
Cisco UCS C-series Host Bus Adapter
What Is N_Port ID Virtualization (NPIV)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS E-series Servers
Cisco UCS “All-in-One”
Cisco UCS Invicta Series—Solid State Storage Systems
Cisco UCS Software
Cisco UCS Manager
Cisco UCS Central
Cisco UCS Director
Cisco UCS Platform Emulator
Cisco goUCS Automation Tool
Cisco UCS Connectivity Architecture
Cisco UCS 5108 Chassis to Fabric Interconnect Physical Connectivity
Cisco UCS C-series Rackmount Server Physical Connectivity
Cisco UCS 6200 Fabric Interconnect to LAN, SAN Connectivity
Ethernet Switching Mode
Fibre Channel Switching Mode
Cisco UCS I/O Module and Fabric Interconnect Connectivity
Cisco UCS I/O Module Architecture
Fabric and Adapter Port Channel Architecture
Cisco Integrated Management Controller Architecture
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 14 Cisco Unified Computing System Manager
“Do I Know This Already?” Quiz
Foundation Topics
Cabling a Cisco UCS Fabric Interconnect HA Cluster
Cisco UCS Fabric Interconnect HA Architecture
Cisco UCS Fabric Interconnect Cluster Connectivity
Initial Setup Script for Primary Fabric Interconnect
Initial Setup Script for Subordinate Secondary Fabric Interconnect
Verify Cisco UCS Fabric Interconnect Cluster Setup
Changing Cluster Addressing via Command-Line Interface
Command Modes
Changing Cluster IP Addresses via Command-Line Interface
Changing Static IP Addresses via Command-Line Interface
Cisco UCS Manager Operations
Cisco UCS Manager Functions
Cisco UCS Manager GUI Layout
Cisco UCS Manager: Navigation Pane—Tabs
Equipment Tab
Servers Tab
LAN Tab
SAN Tab
VM Tab
Admin Tab
Device Discovery in Cisco UCS Manager
Basic Port Personalities in the Cisco UCS Fabric Interconnect
Verifying Device Discovery in Cisco UCS Manager
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Service Profiles
Cisco UCS Hardware Abstraction and Stateless Computing
Contents of a Cisco UCS Service Profile
Other Benefits of Cisco UCS Service Profiles
Ease of Incremental Deployments
Plan and Preconfigure Cisco UCS Solutions
Ease of Server Replacement
Right-Sized Server Deployments
Cisco UCS Templates
Organizational Awareness with Service Profile Templates
Cisco UCS Service Profile Templates
Cisco UCS vNIC Templates
Cisco UCS vHBA Templates
Cisco UCS Logical Resource Pools
UUID Identity Pools
MAC Address Identity Pools
WWN Address Identity Pools
WWNN Pools
WWPN Pools
Cisco UCS Physical Resource Pools
Server Pools
Cisco UCS Server Qualification and Pool Policies
Server Auto-Configuration Policies
Creation of Policies for Cisco UCS Service Profiles and Service Profile Templates
Commonly Used Policies Explained
Equipment Tab—Global Policies
BIOS Policy
Boot Policy
Local Disk Configuration Policy
Maintenance Policy
Scrub Policy
Host Firmware Packages
Adapter Policy
Cisco UCS Chassis and Blade Power Capping
What Is Power Capping?
Cisco UCS Power Capping Strategies
Static Power Capping (Explicit Blade Power Cap)
Dynamic Power Capping (Implicit Chassis Power Cap)
Power Cap Groups (Explicit Group)
Utilizing Cisco UCS Service Profile Templates
Creating Cisco UCS Service Profile Templates
Modifying a Cisco UCS Service Profile Template
Utilizing Cisco UCS Service Profile Templates for Different Workloads
Creating Service Profiles from Templates
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Administration
Organization and Locales
Role-Based Access Control
Authentication Methods
LDAP Providers
LDAP Provider Groups
Domain, Realm, and Default Authentication Settings
Communication Management
UCS Firmware, Backup, and License Management
Firmware Terminology
Firmware Definitions
Firmware Update Guidelines
Host Firmware Packages
Cross-Version Firmware Support
Firmware Auto Install
UCS Backups
Backup Automation
License Management
Cisco UCS Management
Operational Planes
In-Band Versus Out-of-Band Management
Remote Accessibility
Direct KVM Access
Advanced UCS Management
Cisco UCS Manager XML API
goUCS Automation Toolkit
Using the Python SDK
Multi-UCS Management
Cisco UCS Monitoring
Cisco UCS System Logs
Fault, Event, and Audit Logs (Syslogs)
System Event Logs
UCS SNMP
UCS Fault Suppression
Collection and Threshold Policies
Call Home and Smart Call Home
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
“Do I Know This Already?” Quiz
Foundation Topics
Brief History of Server Virtualization
Server Virtualization Components
Hypervisor
Virtual Machines
Virtual Switching
Management Tools
Types of Server Virtualization
Full Virtualization
Paravirtualization
Operating System Virtualization
Server Virtualization Benefits and Challenges
Server Virtualization Benefits
Server Virtualization Challenges
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 18 Cisco Nexus 1000V and Virtual Switching
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Virtual Switching
Before Server Virtualization
Server Virtualization with Static VMware vSwitch
Virtual Network Components
Virtual Access Layer
Standard VMware vSwitch Overview
Standard VMware vSwitch Operations
Standard VMware vSwitch Configuration
VMware vDS Overview
VMware vDS Configuration
VMware vDS Enhancements
VMware vSwitch and vDS
Cisco Nexus 1000V Virtual Networking Solution
Cisco Nexus 1000V System Overview
Cisco Nexus 1000V Salient Features and Benefits
Cisco Nexus 1000V Series Virtual Switch Architecture
Cisco Nexus 1000V Virtual Supervisory Module
Cisco Nexus 1000V Virtual Ethernet Module
Cisco Nexus 1000V Component Communication
Cisco Nexus 1000V Management Communication
Cisco Nexus 1000V Port Profiles
Types of Port Profiles in Cisco Nexus 1000V
Cisco Nexus 1000V Administrator View and Roles
Cisco Nexus 1000V Verifying Initial Configuration
Cisco Nexus 1000V VSM Installation Methods
Initial VSM Configuration Verification
Verifying VMware vCenter Connectivity
Verify Nexus 1000V Module Status
Cisco Nexus 1000V VEM Installation Methods
Initial VEM Status on ESX or ESXi Host
Verifying VSM Connectivity from vCenter Server
Verifying VEM Agent Running
Verifying VEM Uplink Interface Status
Verifying VEM Parameters with VSM Configuration
Validating VM Port Groups and Port Profiles
Verifying Port Profile and Groups
Key New Technologies Integrated with Cisco Nexus 1000V
What Is the Difference Between VN-Link and VN-Tag?
What Is VXLAN?
What Is vPath Technology?
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part V Review
Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
“Do I Know This Already?” Quiz
Foundation Topics
Server Load-Balancing Concepts
What Is Server Load Balancing?
Benefits of Server Load Balancing
Methods of Server Load Balancing
ACE Family Products
ACE Module
Control Plane
Data Plane
ACE 4710 Appliances
Global Site Selector
Understanding the Load-Balancing Solution Using ACE
Layer 3 and Layer 4 Load Balancing
Layer 7 Load Balancing
TCP Server Reuse
Real Server
Server Farm
Server Health Monitoring
In-band Health Monitoring
Out-of-band Health Monitoring (Probes)
Load-Balancing Algorithms (Predictor)
Stickiness
Sticky Groups
Sticky Methods
Sticky Table
Traffic Policies for Server Load Balancing
Class Maps
Policy Map
Configuration Steps
ACE Deployment Modes
Bridge Mode
Routed Mode
One-Arm Mode
ACE Virtualization
Contexts
Domains
Role-Based Access Control
Resource Classes
ACE High Availability
Management of ACE
Remote Management
Simple Network Management Protocol
XML Interface
ACE Device Manager
ANM
Global Load Balancing Using GSS and ACE
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 20 Cisco WAAS and Application Acceleration
“Do I Know This Already?” Quiz
Foundation Topics
Wide-Area Network Optimization
Characteristics of Wide-Area Networks
Latency
Bandwidth
Packet Loss
Poor Link Utilization
What Is WAAS?
WAAS Product Family
WAVE Appliances
Router-Integrated Form Factors
Virtual Appliances (vWAAS)
WAAS Express (WAASx)
WAAS Mobile
WAAS Central Manager
WAAS Architecture
WAAS Solution Overview
Key Features and Benefits of Cisco WAAS
Cisco WAAS WAN Optimization Techniques
Transport Flow Optimization
TCP Window Scaling
TCP Initial Window Size Maximization
Increased Buffering
Selective Acknowledgement
Binary Increase Congestion TCP
Compression
Data Redundancy Elimination
LZ Compression
Application-Specific Acceleration
Virtualization
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Advice About the Exam Event
Learn the Question Types Using the Cisco Certification Exam Tutorial
Think About Your Time Budget Versus the Number of Questions
A Suggested Time-Check Method
Miscellaneous Pre-Exam Suggestions
Exam-Day Advice
Exam Review
Take Practice Exams
Practicing Taking the DCICT Exam
Advice on How to Answer Exam Questions
Taking Other Practice Exams
Find Knowledge Gaps Through Question Review
Practice Hands-On CLI Skills
Other Study Tasks
Final Thoughts
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Icons Used in This Book .

Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:
Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
Italic indicates arguments for which you supply actual values.
Vertical bars (|) separate alternative, mutually exclusive elements.
Square brackets ([ ]) indicate an optional element.
Braces ({ }) indicate a required choice.
Braces within brackets ([{ }]) indicate a required choice within an optional element.
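As a quick, hypothetical illustration of these conventions (invented for this explanation, not quoted from any Cisco command reference), consider the following two syntax lines:

switchport mode {access | trunk}
show interface [ethernet slot/port]

In the first line, switchport, mode, access, and trunk are keywords entered literally (boldface in print), and the braces with the vertical bar mean you must choose exactly one of the two mutually exclusive modes. In the second line, the square brackets mark the ethernet element as optional, and slot/port is an argument for which you supply an actual value (italic in print), such as 1/1.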

Introduction

About the Exam

Congratulations! If you are reading far enough to look at this book’s Introduction, you’ve probably already decided to pursue your Cisco CCNA Data Center certification. Cisco dominates the networking marketplace, and after a few short years of entering the server marketplace, Cisco has achieved significant market share and has become one of the primary vendors for server hardware. If you want to succeed as a technical person in the networking industry in general, and in data centers in particular, you need to know Cisco. Getting your CCNA Data Center certification is a great first step in building your skills and becoming a recognized authority in the data center field.

Exams That Help You Achieve CCNA Data Center Certification

Cisco CCNA Data Center is an entry-level Cisco data center certification that is also a prerequisite for other Cisco Data Center certifications. CCNA Data Center itself has no other prerequisites. To achieve the CCNA Data Center certification, you must pass two exams: 640-911 Introduction to Cisco Data Center Networking (DCICN) and 640-916 Introduction to Cisco Data Center Technologies (DCICT), as shown in Figure I-1.

Figure I-1 Path to Cisco CCNA Data Center Certification

The DCICN and DCICT exams differ quite a bit in terms of the topics covered. DCICN focuses on networking technology. In fact, it overlaps quite a bit with the topics in the ICND1 100-101 exam, which leads to the Cisco Certified Entry Network Technician (CCENT) certification. DCICN explains the basics of networking, focusing on Ethernet switching and IP routing. The only data center focus in the DCICN exam is that all the configuration and verification examples use Cisco Nexus data center switches. The DCICT exam instead focuses on technologies specific to the data center. These technologies include storage networking, Unified Fabric, Unified Computing (the term used by Cisco for its server products and services), network services, and data networking features unique to the Cisco Nexus series of switches.

Types of Questions on the Exams

Cisco certification exams follow the same general format. At the testing center, you sit in a quiet room in front of the PC. Before the exam timer begins, you can complete a few other tasks on the PC; for example, you can take a sample quiz just to get accustomed to the PC and the testing engine. Anyone who has basic skills in getting around a PC should have no problems with the testing environment. After the exam starts, you will be presented with a series of questions, one at a time, on the PC screen. The questions typically fall into one of the following categories:

Multiple choice, single answer
Multiple choice, multiple answers
Testlet
Drag-and-drop (DND)
Simulated lab (sim)
Simlet

The first three items in the list are all multiple-choice questions. The multiple-choice format requires you to point to and click a circle or square beside the correct answer(s). Cisco traditionally tells you how many answers you need to choose, and the testing software prevents you from choosing too many answers. For some questions, you might need to put a list of items in the proper order or sequence. DND questions require you to move some items around in the graphical user interface (GUI). You left-click the mouse to hold the item, move it to another area, and release the mouse button to place the item in its destination (usually into a list).

The last two types, sim and simlet questions, both use a network simulator to ask questions. Interestingly, the two types enable Cisco to assess two very different skills. First, sim questions generally describe a problem, while your task is to configure one or more routers and switches to fix it. The exam then grades the question based on the configuration you changed or added. Basically, these questions begin with a broken configuration, and you must fix it to answer the question correctly.

Simlet questions also use a network simulator, but instead of answering the question by changing or adding the configuration, they include one or more multiple-choice questions. These questions require you to use the simulator to examine network behavior by interpreting the output of show commands you decide to leverage to answer the question. Whereas sim questions require you to troubleshoot problems related to configuration, simlets require you to analyze both working and broken networks, correlating show command output with your knowledge of networking theory and configuration commands. You can watch and even experiment with these question types using the Cisco Exam Tutorial. To find the Cisco Certification Exam Tutorial, go to www.cisco.com and search for “exam tutorial.”

What’s on the DCICT Exam?

Everyone wants to know what is on the test, for any test, ever since the early days of school. Cisco openly publishes the topics of each of its certification exams. Cisco wants the candidates to know the

variety of topics and get an idea about the kinds of knowledge and skills required for each topic.

Exam topics are very specific, and the verb used in their description is very important. The verb tells us to what degree the topic must be understood and what skills are required. For example, one topic might begin with “Describe...,” another with “Configure...,” another with “Verify...,” and another with “Troubleshoot....” Questions beginning with “Troubleshoot” require the highest skills level, because to troubleshoot, you must understand the topic, be able to configure it (to see what’s wrong with the configuration), and be able to verify it (to find the root cause of the problem). Pay attention to the question verbiage.

Cisco’s posted exam topics, however, are only guidelines. Cisco’s disclaimer language mentions that fact. Cisco makes an effort to keep the exam questions within the confines of the stated exam topics, and we know from talking to those involved that every question is analyzed for whether it fits the stated exam topic.

DCICT 640-916 Exam Topics

The exam topics for both the DCICN and DCICT exams can be easily found at Cisco.com by searching. Alternatively, you can go to www.cisco.com/go/ccna, which gets you to the page for CCNA Routing and Switching, where you can easily navigate to the nearby CCNA Data Center page.

Over time, Cisco has begun making two stylistic improvements to the posted exam topics, including for the DCICN and DCICT exam topics. In the past, the topics were simply listed as bullets with indentation to imply subtopics under a major topic. More often today, Cisco also numbers the exam topics, making it easier to refer to specific ones. Additionally, Cisco lists the weighting for each of the major topic headings. The weighting tells the percentage of points from your exam, which should come from each major topic area. The DCICT contains six major headings with their respective weighting, shown in Table I-1.

Table I-1 Six Major Topic Areas in the DCICT 640-916 Exam

Note that while the weighting of each topic area tells you something about the exam, in the authors’ opinion, the weighting probably does not change how you study. All six topic areas hold enough weighting so that if you completely ignore an individual topic, you probably will not pass. Furthermore, data center technologies require you to put many concepts together, so you need all the pieces before you can understand the holistic view. The weighting might indicate where you should spend a little more time during the last days before taking the exam, but otherwise, plan to study all the exam topics.

Tables I-2 through I-7 list the details of the exam topics, with one table for each of the major topic areas listed in Table I-1. Note that these tables also list the book chapters that discuss each of the exam topics.

Table I-2 Exam Topics in the First Major DCICT Exam Topic Area

Table I-3 Exam Topics in the Second Major DCICT Exam Topic Area

Table I-4 Exam Topics in the Third Major DCICT Exam Topic Area

Table I-5 Exam Topics in the Fourth Major DCICT Exam Topic Area

Table I-6 Exam Topics in the Fifth Major DCICT Exam Topic Area

Table I-7 Exam Topics in the Sixth Major DCICT Exam Topic Area

Note: Because it is possible that the exam topics may change over time, it might be worth the time to double-check the exam topics as listed on the Cisco website (www.cisco.com/go/certifications and navigate to the CCNA Data Center page).

About the Book

This book discusses the content and skills needed to pass the 640-916 DCICT certification exam, which is the second and final exam to achieve CCNA Data Center certification. This book’s companion title, CCNA Data Center DCICN 640-911 Official Cert Guide, discusses the content needed to pass the 640-911 DCICN certification exam. We strongly recommend that you plan and structure your learning to align with both exam requirements.

Book Features

The most important and somewhat obvious objective of this book is to help you pass the DCICT exam and help you achieve the CCNA Data Center certification. In fact, if the primary objective of this book were different, the book’s title would be misleading! At the same time, the methods used in this book to help you pass the exam are also designed to make you much more knowledgeable in the general field of the data center and help you in your daily job responsibilities. Importantly, this book does not try to help you pass the exams only by memorization, but by truly learning and understanding the topics. CCNA entry-level certification is the foundation for many of the Cisco professional-level certifications, and it would be a disservice to you if this book did not help you truly learn the material.

This book uses several tools to help you discover your weak topic areas, to help you improve your knowledge and skills with those topics, and to prove that you have retained your knowledge of those topics. This book helps you pass the CCNA exam by using the following methods:

Helping you discover which exam topics you have not mastered
Providing explanations and information to fill in your knowledge gaps

Foundation Topics: These are the core sections of each chapter. Define Key Terms: Although the exam might be unlikely to ask a question such as. asking you to write a short definition and compare your answer to the Glossary at the end of this book. but use the Pearson IT Certification Practice Test (PCPT) exam software that comes . This section lists the most important terms from the chapter. so that you can track your progress. Each chapter includes the activities that make the most sense for studying the topics in that chapter. Part Review The part review tasks help you prepare to apply all the concepts you learned in that part of the book. along with checklists. Complete Tables and Lists from Memory: To help you exercise your memory and memorize certain lists of facts. many of the more important lists and tables from the chapter are included in a document on the CD. you should definitely know the information listed in each key topic. uncovering what you truly understood and what you did not quite yet understand. The following list explains the most common tasks you will see in the part review: Review “Do I Know This Already?” (DIKTA) Questions: Although you have already seen the DIKTA questions from the chapters in a part. answering those questions again can be a useful way to review facts. The activities include the following: Review Key Topics: The Key Topic icon is shown next to the most important items in the “Foundation Topics” section of the chapter. “Define this term. The “Review Key Topics” activity lists the key topics from the chapter and their corresponding page numbers. the “Exam Preparation Tasks” section lists a series of study activities that should be completed at the end of the chapter. References: Some chapters contain a list of reference links for additional information and details on the topics discussed in that particular chapter.” the CCNA exams require that you learn and know a lot of networking terminology. Exam Preparation Tasks: At the end of the “Foundation Topics” section of each chapter. concepts. Although the content of the entire chapter could appear on the exam. The “Part Review” section suggests that you repeat the DIKTA questions. The part reviews list tasks.Supplying exercises that enhance your ability to grasp topics and deduce the answers to subjects related to the exam Providing practice exercises on the topics and the testing process via test questions on the CD Chapter Features To help you customize study time using this book. This document lists only partial information. The part review includes sample test questions that require you to apply the concepts from multiple chapters in that part. the core chapters have several features that help you make the best use of your time: “Do I Know This Already?” Quizzes: Each chapter begins with a quiz that helps you determine the amount of time you need to spend studying the chapter. and configuration for the topics in the chapter. Each book part contains several related chapters. allowing you to complete the table or list. They explain the protocols.

Answer Part Review Questions: The PCPT exam software includes several exam databases. One exam database holds part review questions, written specifically for part reviews. These questions purposefully include multiple concepts in each question, sometimes from multiple chapters, to help build the skills needed for the more challenging analysis questions on the exams.

Review Key Topics: Yes, again! They are indeed the most important topics in each chapter.

Open Self-Assessment Questions: The exam is unlikely to ask a question such as "Define this term," but the CCNA exams require that you learn and know a lot of technology concepts and architectures. This section asks you some open questions that you should try to describe or explain in your own words. This will help you develop a thorough understanding of important exam topics pertaining to that part.

Final Prep Tasks

Chapter 21, "Final Review," lists a series of tasks that you can use for your final preparation before taking the exam.

Other Features

In addition to the features in each of the core chapters, this book, as a whole, has additional study resources, including the following:

CD-based practice exam: The companion CD contains the powerful PCPT exam engine. You can answer the questions in study mode or take simulated DCICT exams with the CD and activation code included in this book.

eBook: If you are interested in obtaining an e-book version of this title, we have included a special offer on a coupon card inserted in the CD sleeve in the back of the book. This offer enables you to purchase the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition e-book and practice test at a 70 percent discount off the list price. In addition to three versions of the e-book—PDF (for reading on your computer), EPUB (for reading on your tablet, mobile device, or Nook or other e-reader), and Mobi (the native Kindle version)—you will receive additional practice test questions and enhanced practice test features.

Companion website: The website www.ciscopress.com/title/9781587144226 posts up-to-the-minute material that further clarifies complex exam topics. Check this site regularly for new and updated postings written by the authors that provide further insight into the more troublesome topics on the exam.

PearsonITCertification.com: The website www.pearsonitcertification.com is a great resource for all things IT-certification related. Check out the great CCNA articles, videos, blogs, and other certification preparation tools from the industry's best authors and trainers.

Book Organization, Chapters, and Appendixes

This book contains 20 core chapters—Chapters 1 through 20, with Chapter 21 including some suggestions for how to approach the actual exams. Each core chapter covers a subset of the topics on the 640-916 DCICT exam. The core chapters are organized into sections. The core chapters cover the following topics:

Part I: Data Center Networking

Chapter 1, "Data Center Network Architecture": This chapter provides an overview of the data center networking architecture and design practices relevant to the exam. It goes into the detail of multilayer data center network design and technologies, such as port channel, virtual port channel, and Cisco FabricPath. Basic configuration and verification commands for these technologies are also included in this chapter.

Chapter 2, "Cisco Nexus Product Family": This chapter provides an overview of the Nexus switches product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. The chapter explains the different specifications of each product.

Chapter 3, "Virtualizing Cisco Network Devices": This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000, using Virtual Device Contexts (VDC) and Network Interface Virtualization (NIV). It details the VDC concept and the VDC deployment scenarios. The different VDC types and commands used to configure and verify the setup are also included in the chapter. Also covered in this chapter is the NIV—what it is, how it works, and how it is deployed.

Chapter 4, "Data Center Interconnect": This chapter covers the latest Cisco innovations in the data center extension solutions and in LAN extension in particular. It goes into detail about the LAN extension solution, which is called Overlay Transport Virtualization (OTV)—what it is, how it works, and how it is configured. Commands to configure and how to verify the operation of the OTV are also covered in this chapter.

Chapter 5, "Management and Monitoring of Cisco Nexus Devices": This chapter provides an overview of the operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. This chapter also identifies the mechanism available in the Nexus platform to protect the control plane of the switch. It also provides an overview of out-of-band and in-band management interfaces. The Nexus platform provides several methods for device configuration and management; these methods are also discussed with some important commands for initial setup.

Part II: Data Center Storage-Area Networking

Chapter 6, "Data Center Storage Architecture": This chapter provides an overview of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, network-attached storage (NAS) connectivity for remote server storage, and storage-area network (SAN). It covers Fibre Channel protocol and operations in detail. The edge/core layer of the SAN design is also included.

Chapter 7, "Cisco MDS Product Family": This chapter provides an overview of the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. This chapter goes directly into the key concepts of the Cisco Storage Networking product family, and the focus is on the current generation of modules and storage services solutions.

Chapter 8, "Virtualizing Storage": Storage virtualization is the process of combining multiple network storage devices into what appears to be a single storage unit. Network virtualization is using network resources through a logical segmentation of a single physical network. Server virtualization is partitioning a physical server into smaller virtual servers. Application virtualization is decoupling the application and its data from the operating system. This chapter goes directly into the key concepts of storage virtualization and guides you through storage virtualization concepts, answering basic what, why, and how questions, as well as their benefits and challenges.

Chapter 9, "Fibre Channel Storage-Area Networking": This chapter provides an overview of how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. It also describes how to verify virtual storage-area networks (VSAN), the fabric login, fabric domain, VSAN trunking, zoning, and setting up an ISL port using the command-line interface. It also covers the power-on auto provisioning (POAP). This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

Part III: Data Center Unified Fabric

Chapter 10, "Unified Fabric Overview": This chapter provides an overview of challenges faced by today's data centers. It focuses on how Cisco Unified Fabric architecture addresses those challenges by converging traditionally disparate network and storage environments while providing a platform for scalable, secure, and intelligent services. It also takes a look at some of the technologies allowing extension of the Unified Fabric environment beyond the boundary of a single data center.

Chapter 11, "Data Center Bridging and FCoE": This chapter introduces principles behind IEEE data center bridging standards and explains how they enhance traditional Ethernet environments to support convergence of network and storage traffic over a single Unified Fabric. It also takes a look at different cabling options to support a converged Unified Fabric environment.

Chapter 12, "Multihop Unified Fabric": This chapter discusses various options for multihop FCoE topologies to extend the reach of Unified Fabric beyond the single-hop boundary. It discusses FCoE protocol and its operation over numerous switching topologies leveraging a variety of Cisco Fabric Extension technologies. It also discusses the use of converged Unified Fabric leveraging Cisco FabricPath technology. It takes a look at characteristics of each option.

Part IV: Cisco Unified Computing

Chapter 13, "Cisco Unified Computing System Architecture": This chapter begins with a quick fly-by on the evolution of server computing, followed by an introduction to the Cisco UCS value proposition, hardware, and software portfolio. Then the chapter explains UCS architecture in terms of component connectivity options and unification of blade and rackmount server connectivity. It also details the Cisco Integrated Management Controller architecture and purpose.

Chapter 14, "Cisco Unified Computing System Manager": This chapter starts by describing how to set up, configure, and verify the Cisco UCS Fabric Interconnect cluster. It then describes the process of hardware and software discovery in Cisco UCS. It also explains how to monitor and verify this process.

Chapter 15, "Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles": The chapter starts by explaining the hardware abstraction layer in more detail and how it relates to stateless computing, followed by explanations of logical and physical resource pools and the essentials to create templates and aid rapid deployment of service profiles.

Chapter 16, "Administration, Management, and Monitoring of Cisco Unified Computing System": This chapter starts by explaining some of the important features used when administering and monitoring Cisco UCS. It also gives you an introduction to Cisco UCS XML, the goUCS automation toolkit, and the well-documented UCS XML API using the Python SDK.

Part V: Data Center Server Virtualization

Chapter 17, "Server Virtualization Solutions": This chapter takes a peek into the history of server virtualization and discusses fundamental principles behind different types of server virtualization technologies. It evaluates the benefits and challenges of server virtualization while offering approaches to mitigate performance and security concerns.

Chapter 18, "Cisco Nexus 1000V and Virtual Switching": The chapter starts by describing the challenges of current virtual switching layers in data centers, and then introduces the distributed virtual switches and, in particular, the Cisco vision—Cisco Nexus 1000V. The chapter explains installation options, commands to verify initial configuration of virtual Ethernet modules, virtual supervisory modules, and integration with VMware vCenter Server.

Part VI: Data Center Network Services

Chapter 19, "Cisco WAAS and Application Acceleration": This chapter gives you an overview of the challenges associated with a wide-area network. Cisco supports a range of hardware and software platforms that are suitable for various wide-area network optimization and application acceleration requirements. An overview of these products is also provided in this chapter. It explains the Cisco WAAS solution and describes the concepts and features that are important for the DCICT exam. The methods used to optimize the WAN traffic are also discussed in detail.

Chapter 20, "Cisco ACE and GSS": This chapter gives you an overview of server load balancing concepts required for the DCICT exam. It describes Cisco Application Control Engine (ACE) and Global Site Selector (GSS) products and their key features. It also details different deployment modes of ACE and their application within the data center design, which helps in choosing the correct solution for the load balancing requirements in the data center. GSS and ACE integration is also explained using an example.

Part VII: Final Preparation

Chapter 21, "Final Review": This chapter suggests a plan for exam preparation after you have finished the core parts of the book, in particular explaining the many study options available in the book. As a bonus, you also see notes and tips.

Appendix A, "Answers to the 'Do I Know This Already?' Quizzes": Includes the answers to all the questions from Chapters 1 through 20.

The Glossary contains definitions for all the terms listed in the "Definitions of Key Terms" section at the conclusion of Chapters 1 through 20.

Appendixes On the CD

Appendix B, "Memory Tables": Holds the key tables and lists from each chapter, with some of the content removed. You can print this appendix and, as a memory exercise, complete the tables and lists. The goal is to help you memorize facts that can be useful on the exams.

Appendix C, "Memory Tables Answer Key": Contains the answer key for the exercises in Appendix B.

Appendix D, "Study Planner": A spreadsheet with major study milestones enabling you to track your progress through your study.

Reference Information

This short section contains a few topics available for reference elsewhere in the book. You may read these when you first use the book, but you may also skip these topics and refer to them later. In particular, make sure to note the final page of this introduction, which lists several contact details, including how to get in touch with Cisco Press.

Install the Pearson IT Certification Practice Test Engine and Questions

The CD in the book includes the Pearson IT Certification Practice Test (PCPT) engine (software that displays and grades a set of exam-realistic multiple-choice, drag-and-drop, fill-in-the-blank, and testlet questions). Using the PCPT engine, you can either study by going through the questions in study mode, or you can take a simulated DCICT exam that mimics real exam conditions.

The installation process requires two major steps. The CD in the back of this book has a recent copy of the PCPT engine. The practice exam (the database of DCICT exam questions) is not on the CD; after you install the software, the PCPT software will use your Internet connection and download the latest versions of both the software and the question databases for this book.

Note
The cardboard CD case in the back of this book includes both the CD and a piece of thick paper. The paper lists the activation code for the practice exam associated with this book. Do not lose the activation code. Also on this same piece of paper, on the opposite side from the exam activation code, you will find a one-time-use coupon code that will give you 70% off the purchase of the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition eBook and Practice Test.

Install the Software from the CD

The software installation process is routine as compared with other software installation processes. If you have already installed the PCPT software from another Pearson product, you do not need to reinstall the software. Simply launch the software on your desktop and proceed to activate the practice exam from this book by using the activation code included in the CD sleeve. The following steps outline the installation process:

Step 1. Insert the CD into your PC.
Step 2. The software that automatically runs is the Cisco Press software to access and use all CD-based features, including the exam engine. From the main menu, click the option to Install the Exam Engine.
Step 3. Respond to windows prompts as with any typical software installation process.

The installation process gives you the option to activate your exam with the activation code supplied on the paper in the CD sleeve. This process requires that you establish a Pearson website login. You will need this login to activate the exam, so please do register when prompted. If you already have a Pearson website login, there is no need to register again. Just use your existing login.

Activate and Download the Practice Exam

When the exam engine is installed, you should then activate the exam associated with this book (if you did not do so during the installation process) as follows:

Step 1. Start the PCPT software from the Windows Start menu or from your desktop shortcut icon.
Step 2. To activate and download the exam associated with this book, from the My Products or Tools tab, click the Activate button.
Step 3. At the next screen, enter the activation key from the paper inside the cardboard CD holder in the back of the book. Then click the Activate button.
Step 4. The activation process will download the practice exam. Click Next, and then click Finish.

When the activation process completes, the My Products tab should list your new exam. If you do not see the exam, make sure that you have selected the My Products tab on the menu. At this point, the software and practice exam are ready to use. Just select the exam and click the Open Exam button. You do not even need the CD at this point.

To update a particular product's exams that you have already activated and downloaded, select the Tools tab and click the Update Products button. Updating your exams will ensure that you have the latest changes and updates to the exam data. If you want to check for updates to the PCPT software, select the Tools tab and click the Update Application button. This will ensure that you are running the latest version of the software engine.

Activating Other Products

The exam software installation process and the registration process have to happen only once. Then for each new product, only a few steps are required. For instance, if you buy another new Cisco Press Official Cert Guide or Pearson IT Certification Cert Guide, extract the activation code from the CD sleeve in the back of that book. From there, all you have to do is start PCPT (if not still up and running) and perform Steps 2 through 4 from the previous list.

PCPT Exam Databases with This Book

This book includes an activation code that allows you to load a set of practice questions. The questions come in different exams or exam databases. When you install the PCPT software and type in the activation code, the PCPT software downloads the latest version of all these exam databases. With the DCICT book alone, you get four different "exams," or four different sets of questions. You can choose to use any of these exam databases at any time, both in study mode and practice exam mode. However, many people find it best to save some of the exams until exam review time, after they have finished reading the entire book. Here is a suggested study plan:

During part review, use PCPT to review the DIKTA questions for that part, using study mode.
During part review, use the questions built specifically for part review (the part review questions) for that part of the book, using study mode.
Save the remaining exams to use with Chapter 21, "Final Review," using practice exam mode.

The two modes inside PCPT give you better options for study versus practicing a timed exam event. In study mode, you can see the answers immediately, so you can study the topics more easily. Also, in study mode, you can choose a subset of the questions in an exam database; for instance, you can view questions from the chapters in only one part of the book. Practice exam mode creates an event somewhat like the actual exam. It gives you a preset number of questions, from all chapters, with a timer event. Practice exam mode also gives you a score for that timed event.

How to View Only DIKTA Questions by Part

Each "Part Review" section asks you to repeat the Do I Know This Already (DIKTA) quiz questions from the chapters in that part. Although you could simply scan the book pages to review these questions, it is slightly better to review these questions from inside the PCPT software, just to get a little more practice in how to read questions from the testing software. DIKTA questions focus more on facts, with basic application. To view these DIKTA (book) questions inside the PCPT software, follow these steps:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window that appears should list some exams. Select the box beside DCICT Book Questions, and deselect the other boxes. This selects the "book" questions (that is, the DIKTA questions from the beginning of each chapter).
Step 4. On this same window, click at the bottom of the screen to deselect all objectives (chapters), and then select the box beside each chapter in the part of the book you are reviewing.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

How to View Only Part Review Questions by Part

The exam databases you get with this book include a database of questions created solely for study during the part review process. The part review questions instead focus more on application and look more like real exam questions. To view these questions, follow the same process as you did with DIKTA/book questions, but select the part review database instead of the book database, as follows:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window should list some exams. Select the box beside Part Review Questions, and deselect the other boxes. This selects the questions intended for part-ending review.
Step 4. In this same window, click at the bottom of the screen to deselect all objectives, and then select (check) the box beside the book part you want to review. This tells the PCPT software to give you part review questions from the selected part.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

For More Information

If you have any comments about the book, submit them via www.ciscopress.com. Just go to the website, select Contact Us, and type your message. Cisco might make changes that affect the CCNA data center certification from time to time. You should always check www.cisco.com/go/certification for the latest details.

The CCNA Data Center DCICT 640-916 Official Cert Guide helps you attain the CCNA data center certification. This is the DCICT exam prep book from the only Cisco-authorized publisher. We at Cisco Press believe that this book certainly can help you achieve CCNA data center certification, but the real work is up to you! We trust that your time will be well spent.

Getting Started

It might look like this is yet another certification exam preparation book—indeed, it is. But at the same time it also encourages you to continue your journey to become a better data center networking professional. A team of five data center professionals came together to carefully pick the content for this book to send you on your CCNA data center certification journey. We welcome your decision to become a Cisco Certified Network Associate Data Center professional! You are probably wondering how to get started, and today you are making a very significant step in achieving your goal. This book offers a solid foundation for the knowledge required to pass the CCNA data center certification exam. Strategizing about your studying is a key element to your success, and this "Getting Started" chapter guides you through building a strategy toward success. Take time to review it before starting your CCNA data center certification journey.

A Brief Perspective on the CCNA Data Center Certification Exam

Cisco sets the bar somewhat high for passing its certification exams, including all the exams related to CCNA certifications, such as CCNA data center. These exams challenge you from many angles. The CCNA data center exam puts your knowledge in network, compute, storage, virtualization, and security to the test. You will need certain analytical problem-solving skills, such as your ability to analyze, configure, and troubleshoot networking features, understand the relevant technology landscape, and analyze emerging technology trends. Many questions in the CCNA data center certification exam apply to connectivity between the data center devices. For example, a question might give you a data center topology diagram and then ask what needs to be configured to make that setup work, what needs to be considered when designing that topology, or what is the missing ingredient of the solution. Such skills require you to prepare by doing more than just reading and memorizing this book's material.

Suggestions for How to Approach Your Study with This Book

Although Cisco certification exams are challenging, many people pass them every day. So, what do you need to do to be ready to pass the exam? We encourage you to build your theoretical knowledge by thoroughly reading this book and taking time to remember the most critical facts. At the same time, we also encourage you to develop your practical knowledge by having hands-on experience in the topics covered in the CCNA data center certification exam. Spend time thinking about key milestones you want to reach, and plan accordingly to review material for exam topics you are less comfortable with.

What I Talk About When I Talk About Running is a memoir by Haruki Murakami in which he writes about his interest and participation in long-distance running. Murakami started running in the early 1980s; since then, he has competed in more than 20 marathons and an ultramarathon. On running as a metaphor, he wrote in his book: "For me, running is both exercise and a metaphor. Running day after day, piling up the races, bit-by-bit I raise the bar, and by clearing each level I elevate myself. At least that's why I've put in the effort day after day: to raise my own level. I'm no great runner, by any means. I'm at an ordinary—or perhaps more like mediocre—level. But that's not the point. The point is whether or not I improved over yesterday. In long-distance running the only opponent you have to beat is yourself, the way you used to be."

Similar principles apply here. Think about studying for the CCNA data center certification exam as preparing for a marathon. It requires a bit of practice every day, and the only opponent you have to beat is yourself. Plan your time accordingly and make progress every day.

This Book Is Compiled from 20 Short Read-and-Review Sessions

First, look at your study as a series of read-and-review tasks, each on a relatively small set of related topics. This book has 20 content chapters covering the topics you need to know to pass the exam. These topics are structured to cover the main data center infrastructure technologies, such as Network, Compute, Server Virtualization, and Storage Area Networks in a Data Center. This book organizes the content into topics of a more manageable size to give you something more digestible to build your knowledge. Take time to review this book chapter by chapter and topic by topic. Each chapter consists of numerous "Foundation Topics" sections and "Exam Preparation Tasks" at the end. Each chapter marks key topics you should pay extra attention to. Review those sections carefully and make sure you are thoroughly familiar with their content.

Practice. Practice—Did I Mention: Practice?

Second, plan to use the practice tasks at the end of each chapter. Each chapter ends with practice and study tasks under the heading "Exam Preparation Tasks." Doing these tasks, and doing them at the end of the chapter, really helps you get ready. Do not put off using these tasks until later! The chapter-ending "Exam Preparation Tasks" section helps you with the first phase of deepening your knowledge and skills of the key topics, remembering terms, and linking the concepts in your brain so that you can remember how it all fits together.

DIKTA is a self-assessment quiz appearing at the beginning of each content chapter. Based on your score in the "Do I Know This Already?" (DIKTA) quiz, you can choose to read the entire core ("Foundation Topics") section of the chapter or just skim the chapter. Regardless of whether you skim or read thoroughly, do the study tasks in the "Exam Preparation Tasks" section at the end of the chapter. The following list describes the majority of the activities you will find in "Exam Preparation Tasks" sections:

Review key topics
Complete memory tables
Define key terms

Approach each chapter with the same plan. Table 1 shows the suggested study approach for each content chapter.

Table 1 Suggested Study Approach for the Content Chapter

In Each Part of the Book You Will Hit a New Milestone

Third, view the book as having six major milestones, one for each major topic.

Beyond the more obvious organization into chapters, this book also organizes the chapters into six major topic areas called book parts. Completing each part means that you have completed a major area of study. At the end of each part, take extra time to complete the "Part Review" tasks, uncovering any gaps in your knowledge, and ask yourself where your weak and strong areas are. Table 2 lists the six parts in this book.

Table 2 Six Major Milestones: Book Parts

Note that the "Part Review" directs you to use the Pearson Certification Practice Test (PCPT) software to access the practice questions. Each "Part Review" instructs you to repeat the DIKTA questions while using the PCPT software. It also instructs you how to access a specific set of questions reserved for reviewing concepts at the "Part Review." Note that the PCPT software and exam databases included with this book also give you the rights to additional questions. Chapter 21, "Final Review," gives some recommendations on how to best use those questions for your final exam preparation.

Use the Final Review Chapter to Refine Skills

Fourth, do the tasks outlined at the end of the book in Chapter 21, "Final Review." Chapter 21 has two major goals. First, it helps you further develop the analytical skills you need to answer the more complicated questions in the exam. Many questions require that you cross-connect ideas about design, configuration, verification, and troubleshooting. More reading will not necessarily develop these skills; this chapter's tasks give you activities to make further progress. Second, the tasks in the chapter also help you find your weak areas. This final element gives you repetition with high-challenge exam questions. Many of the questions are purposely designed to test your knowledge in areas where the most common mistakes and misconceptions occur, helping you avoid some of the pitfalls people experience with the actual exam.

Set Goals and Track Your Progress

Finally, before you start reading the book, take the time to make a plan, set some goals, and be ready to track your progress. Although making lists of tasks might or might not appeal to you, depending on your personality, goal setting can help everyone studying for these exams. To set the goals, you need to know what tasks you plan to do. The list of tasks does not have to be very detailed, such as putting down every single task in the "Exam Preparation Tasks" section, the "Part Review" tasks section, or the "Final Review" chapter. Instead, listing only the major tasks can be sufficient. Consider setting a goal date for finishing each part of the book and reward yourself for achieving this goal! Plan breaks—some personal time out—to get refreshed and motivated for the next part. Acknowledge yourself for completing major milestones.

You should track at least two tasks for each typical chapter: reading the "Foundation Topics" section and doing the "Exam Preparation Tasks" section at the end of the chapter. Of course, do not forget to list tasks for "Part Reviews" and the "Final Review." Table 3 shows a sample of a task planning table for Part I of this book.

Table 3 Sample Excerpt from a Planning Table

Use your goal dates as a way to manage your study, and do not get discouraged if you miss a date. When setting your goals, think about how fast you read and the length of each chapter's "Foundation Topics" section, as listed in the Table of Contents. Pick reasonable dates that you can meet. If you finish a task sooner than planned, move up the next few goal dates. If you miss a few dates, do not start skipping the tasks listed at the ends of the chapters! Instead, think about what is impacting your schedule—family commitments, work commitments, and so on—and either adjust your goals or work a little harder on your study.

Other Small Tasks Before Getting Started

You will need to complete a few overhead tasks, such as install software, find some PDFs, and so on. You can complete these tasks now or do them in your spare time when you need a study break during the first few chapters of the book.

Register (for free) at the Cisco Learning Network (CLN, http://learningnetwork.cisco.com) and join the CCNA data center study group. This mailing list allows you to lurk and participate in discussions about CCNA data center topics, for both the DCICN and DCICT exams. Register, join the group, and set up an e-mail filter to redirect the messages to a separate folder. Even if you do not spend time reading all the posts yet, you can browse through the posts to find relevant topics later, when you have time to read. You can also search the posts from the CLN website.

There are many recorded CCNA data center sessions from the technology experts, which you can listen to as much as you want. All the recorded sessions last a maximum of one hour.

Install the PCPT exam software and activate the exams. For more details on how to load the software, refer to the "Introduction," under the heading "Install the Pearson Certification Practice Test Engine and Questions." Do not delay: should you encounter software installation problems, you have sufficient time to resolve them before you need the tool.

Keep calm, and enjoy the ride!

Part I: Data Center Networking

Exam topics covered in Part I:

Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath
Describe the Cisco Nexus Product Family
Describe FEX Products
Describe Device Virtualization
Identify Key Differentiator Between DCI and Network Interconnectivity
Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Chapter 1: Data Center Network Architecture
Chapter 2: Cisco Nexus Product Family
Chapter 3: Virtualizing Cisco Network Devices
Chapter 4: Data Center Interconnect
Chapter 5: Management and Monitoring of Cisco Nexus Devices

Chapter 1. Data Center Network Architecture

This chapter covers the following exam topics:

Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath

Data centers are designed to host critical computing resources in a centralized place. These resources can be utilized from a campus building of an enterprise network, from a branch office over the wide-area network (WAN), and from remote locations over the public Internet. Therefore, the network plays an important role in the availability of data center resources. Network architecture is an important subject that addresses many functional areas of the data center to ensure that data center services are running around the clock.

This chapter discusses the data center networking architecture and design practices relevant to the exam. It is assumed that you are familiar with the basics of networking concepts outlined in Introducing Cisco Data Center Networking (DCICN). This chapter goes directly into the key concepts of data center networking and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification. Basic network configuration and verification commands are also included in this chapter.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 1-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 1-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. What are the benefits of a modular approach in network design?
a. Scalability
b. High availability
c. Security
d. Simplicity

2. What is a Collapse Core data center network design?
a. Collapse Core is a design where only one core switch is used.
b. In Collapse Core design, distribution and core layers are collapsed.
c. In Collapse Core design, access layer, distribution layer, and core layers are collapsed.
d. Collapse Core design is a nonredundant design.

3. Which of the following are key characteristics of the data center core layer?
a. High availability
b. Load balancing
c. Quality of Service
d. Security

4. Which of the following statements about the data center aggregation layer is correct?
a. The purpose of the aggregation layer is to provide high-speed packet forwarding.
b. It provides policy-based connectivity.
c. It provides connectivity to network-based services (load balancer and firewall).
d. This layer is optional in data center design.

5. Where will you connect a server in a data center?
a. Access layer
b. Aggregation layer
c. Core layer
d. Services layer

6. Which of the following are benefits of port channels?
a. Increased capacity
b. High availability
c. Load balancing
d. Simplicity

7. Which of the following attributes must match to bundle ports into a port channel?
a. Speed
b. Duplex
c. LACP port priority
d. Flow control

8. Cisco Nexus switches support which of the following port channel load-balancing methods?
a. Source and destination IP address
b. Source and destination MAC address
c. Source and destination TCP/UDP port number
d. None of the above

9. LACP is enabled on a port. The port initiates negotiations with a peer switch by sending LACP packets. Which LACP mode is used?
a. ON
b. Passive
c. Active
d. OFF

10. LACP is enabled on a port. The port responds to LACP packets that it receives but does not initiate LACP negotiation. Which LACP mode is used?
a. Progressive
b. Passive
c. Active
d. ON

11. Which of the following are valid port channel modes?
a. Progressive
b. Standby
c. Active
d. ON

12. Identify the Cisco switching platform that supports vPC.
a. Catalyst 6500 switches
b. Nexus 7000 Series switches
c. Nexus 5000 Series switches
d. All Cisco switches

13. Which of the following are benefits of using vPC?
a. It eliminates Spanning Tree Protocol (STP) blocked ports.
b. It uses all available uplink bandwidth.
c. It provides fast convergence if either the link or the device fails.
d. It is supported on all Cisco switches.

14. Cisco Fabric Services over Ethernet (CFSoE) uses which of the following links?
a. vPC peer keepalive link
b. vPC peer link
c. vPC ports
d. vPC peer link and vPC peer keepalive link

15. In vPC setup, how many vPC domains can be configured on a switch or VDC?
a. Only one
b. Only two
c. 255
d. There is no limit; you can configure as many as you want.

16. Cisco Fabric Services over Ethernet (CFSoE) performs which of the following operations?
a. It performs load balancing of traffic between two vPC switches.
b. It monitors the status of vPC member ports.
c. It performs consistency and compatibility checks.
d. It is used to exchange Layer 2 forwarding tables between vPC peers.

17. FabricPath routes the frames based on which of the following parameters?
a. Source switch ID in outer source address (OSA)
b. Destination MAC address
c. Destination switch ID in outer destination address (ODA)
d. IP address of packet

18. In FabricPath, which protocol is used to automatically assign a unique switch ID to each FabricPath switch?
a. Dynamic Host Configuration Protocol (DHCP)
b. Cisco Fabric Services over Ethernet (CFSoE)
c. Dynamic Resource Allocation Protocol (DRAP)
d. In a FabricPath network, the switch ID must be allocated manually.

19. In a FabricPath network, the root of the first multidestination tree is elected based on which of the following parameters?
a. Root priority
b. System ID
c. Switch ID
d. MAC address

20. What is the name of the FabricPath control plane protocol?
a. IS-IS
b. OSPF
c. BGP
d. EIGRP

Foundation Topics

Modular Approach in Network Design

In a modular design, the network is divided into distinct building blocks based on features and behaviors. These building blocks are used to construct a network topology within the data center. This approach has several advantages. Modular networks are scalable, which allows for the considerable growth or reduction in the size of a network without making drastic changes or needing any redesign. These blocks or modules can be added and removed from the network without redesigning the entire data center network.

Modularity also implies isolation of building blocks. In other words, these blocks are independent from each other, so these blocks are separated from each other and communicate through specific network connections between the blocks. The main advantage occurs during the changes in the network: a change in one block does not affect other blocks. Modularity also enables faster moves, adds, and changes (MACs), and incremental changes in the network. Because of the hierarchical topology of a modular design, modular design provides easy control of traffic flow and improved security. Therefore, the addressing is also simplified within the data center network.

Modular designs are simple to design, configure, and troubleshoot. A network must always be kept as simple as possible. The concept seems obvious, but it is very often forgotten.

The Multilayer Network Design

When you design a data center using a modular approach, the network is divided into three functional layers: Core, Aggregation, and Access. These layers can be physical or logical. This section describes the function of each layer in the data center. Figure 1-1 shows functional layers of a multilayer network design.

Table 1-2 describes the key features of each layer within the data center. .Figure 1-1 Functional Layers in the Hierarchical Model The hierarchical network model provides a modular framework that enables flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks or their modular blocks into three layers.

The redundancy of the building block is extended with redundancy in the backbone. including segmentation. Layer 2 switching is used in the access layer. An advantage of the multilayer network design is scalability. . New buildings and server farms can be easily added or removed without changing the design. Layer 2 and Layer 3 switching in the distribution layer.Table 1-2 Data Center Functional Layers The multilayer network design consists of several building blocks connected across a network backbone. and failure recovery. load balancing. and Layer 3 switching in the core. Figure 1-2 shows sample topology of a multilayer network design. In the most general model. The multilayer network design takes maximum advantage of many Layer 3 services.

Figure 1-2 Multilayer Network Design

Access Layer

The access layer is the first point of entry into the network for edge devices, end stations, and servers. The data center access layer provides Layer 2, Layer 3, and mainframe connectivity. The design of the access layer varies, depending on whether you use Layer 2 or Layer 3 access. The access layer in the data center is typically built at Layer 2, which allows better sharing of service devices across multiple servers. This design also enables the use of Layer 2 clustering, which requires the servers to be Layer 2 adjacent. With Layer 2 access, the default gateway for the servers can be configured at the aggregation layer.

The switches in the access layer are connected to two separate distribution layer switches for redundancy. This topology provides access layer high availability. With a dual-homing network interface card (NIC), you need two access layer switches with the same VLAN extended across both access switches using a trunk port. The VLAN or trunk supports the same IP address allocation on the two server links to the two separate switches. The default gateway would be implemented at the aggregation layer switch. With Layer 2 at the aggregation layer, there are physical loops in the topology that must be managed by the STP. Rapid Per-VLAN Spanning Tree Plus (Rapid PVST+) is recommended to ensure a logically loop-free topology over the physical topology.

Aggregation Layer

The aggregation (or distribution) layer aggregates the uplinks from the access layer to the data center core. This layer is the critical point for control and application services. Security and application service devices (such as load-balancing devices, firewalls, SSL offloading devices, and IPS devices) are often deployed as a module in the aggregation layer.

Note
In traditional multilayer data center design, service devices that are deployed at the aggregation layer are shared among all the servers. Service devices that are deployed at the access layer provide benefit only to the servers that are directly attached to the specific access switch.

The aggregation layer typically provides Layer 3 connectivity to the core. Depending on the requirements, the boundary between Layer 2 and Layer 3 at the aggregation layer can be in the multilayer switches, the firewalls, or the content switching devices. Although Layer 2 at the aggregation layer is tolerated for traditional designs, new designs try to confine Layer 2 to the access layer. If the same VLAN is required across multiple access layer devices, the aggregation layer might also need to support a large Spanning Tree Protocol (STP) processing load. A mix of both Layer 2 and Layer 3 access models permits a flexible solution and enables application environments to be optimally positioned. This design lowers TCO and reduces complexity by reducing the number of components that you need to configure and manage.

Core Layer

Implementing a data center core is a best practice for large data centers. When the core is implemented in an initial data center design, it helps ease network expansion and avoid disruption to the data center environment. The data center typically connects to the campus core using Layer 3 links. The data center network is summarized, and the core injects a default route into the data center network. The following drivers are used to determine whether a core solution is appropriate:

40 and 100 Gigabit Ethernet density: Are there requirements for higher bandwidth connectivity, such as 40 or 100 Gigabit Ethernet? With the introduction of the Cisco Nexus 7000 M-2 Series of modules, the Cisco Nexus 7000 Series switches can now support much higher bandwidth densities.

10 Gigabit Ethernet density: Without a data center core, will there be enough 10 Gigabit Ethernet ports on the campus core switch pair to support both the campus distribution and the data center aggregation modules?

Administrative domains and policies: Separate cores help to isolate campus distribution layers from data center aggregation layers for troubleshooting, administration, and implementation of policies (such as quality of service, ACLs, troubleshooting, and maintenance).

Anticipation of future development: The impact that can result from implementing a separate data center core layer at a later date might make it worthwhile to install the core layer at the beginning.

Key core characteristics include the following:

Distributed forwarding architecture
Low-latency switching
10 Gigabit Ethernet scalability
Scalable IP multicast support

Collapse Core Design

Although there are three functional layers in a network design, these layers do not have to be physical layers; they can be logical layers. This topic describes the reasons for combining the core and aggregation layers.

The three-tier architecture previously defined is traditionally used for a larger enterprise campus. For the campus of a small or medium-sized business, the three-tier architecture might be excessive. An alternative would be to use a two-tier architecture in which the distribution and the core layer are combined. In the collapse core design, the core switches would provide direct connections to the access-layer switches. With this option, the small or medium-sized business can reduce costs because the same switch performs both distribution and core functions. One disadvantage of two-tier architecture is scalability. As the small campus begins to scale, a three-tier architecture should be considered. Figure 1-3 shows a sample topology of a collapse core network design.

Figure 1-3 Collapse Core Design

Spine and Leaf Design

Most large-scale modern data centers are now using spine and leaf design. This design can have only two layers, which are spine and leaf. In the spine and leaf design, a leaf switch acts as the access layer. The end hosts, such as servers and other edge devices, connect to leaf switches. The spine provides connectivity between the leaf switches, and each leaf connects to all the spine switches. Adding more spine switches can increase the capacity of the network, and adding more leaf switches can increase the port density on the access layer. Examples of spine and leaf architectures are FabricPath and Application Centric Infrastructure (ACI).

Spine and leaf design can also be mixed with traditional multilayer data center design. In this mixed approach you can build a scalable access layer using spine and leaf architecture, while keeping other layers of the data center intact. Figure 1-4 shows a sample spine and leaf topology.

Figure 1-4 Spine and Leaf Architecture

The following are key characteristics of spine and leaf design:

Multiple design and deployment options enabled using a variable-length spine with ECMP to leverage multiple available paths between leaf and spine.

Lower power consumption using multiple 10-Gbps links between spine and leaf instead of a single 40-Gbps link. The current power consumption of a 40-Gbps optic is more than ten times that of a single 10-Gbps optic.

Reduced network congestion using a large, shared-buffer switching platform for the leaf node. Network congestion results when multiple packets inside a switch request the same outbound port because of application architecture.

Future proofing for a higher-performance network through the selection of higher port count switches at the spine.

Note
At the time of writing this book, spine and leaf topology is supported only by Cisco Nexus switches.

Port Channel

In this section, you learn about the important concepts of a port channel.

What Is Port Channel?

As you learned in the previous section, data center network topology is built using multiple switches. These switches are connected to each other using physical network links. When you connect two devices using multiple physical links, you can group these links to form a logical bundle. This logical group is called EtherChannel, or port channel, and physical interfaces participating in this group are called member ports of the group. The two devices connected via this logical group of ports see it as a single, big network pipe between the two devices. Figure 1-5 shows how multiple physical interfaces are combined to create a logical port channel between two switches.

Figure 1-5 Port Channel

When a port channel is created, you will see a new interface in the switch configuration. This new interface is a logical representation of all the member ports of the port channel. The port channel interface can be configured with its own speed, duplex, flow control, IP address, bandwidth, delay, maximum transmission unit (MTU), and interface description.

In other words, a port channel is an aggregation of multiple physical interfaces that creates a logical interface. On the Nexus 5000 Series switch you can bundle up to 16 links into a port channel. On the Nexus 7000 Series switch, you can bundle eight active ports into a port channel on an M-Series module, and up to 16 ports on the F-Series module.
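As a simple illustration of how the logical interface appears, the following minimal NX-OS sketch creates a static port channel. The interface and channel numbers are only examples, and the same grouping must be configured on the peer switch; the logical interface port-channel 5 is created automatically by the channel-group command.

! Bundle two physical ports into channel group 5 (channel mode defaults to on)
switch(config)# interface ethernet 2/1-2
switch(config-if-range)# channel-group 5
switch(config-if-range)# exit
! The logical interface now exists and can be configured directly
switch(config)# interface port-channel 5
switch(config-if)# description Uplink-to-peer-switch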

Benefits of Using Port Channels

There are several benefits of using port channels, and because of these benefits you will find it commonly used within data center networks. Some of these benefits are as follows:

Increased capacity: By combining multiple Ethernet links into one logical link, the capacity of the link can be increased. For example, eight 10-Gbps links can be combined to create a logical link with bandwidth of 80 Gbps.

High availability: In a port channel, multiple links are combined into one logical link; therefore, it automatically increases availability of the network. In case of a physical link failure, the port channel continues to operate even if a single member link is alive. Typically, when creating a port channel on a modular network switch, it is recommended that you use member ports from different line cards. This will help you increase network availability in the case of line card failure.

Load balancing: The switch distributes traffic across all operational interfaces in the port channel. This enables you to distribute traffic across multiple physical interfaces, increasing the efficiency of your network. Port channel load balancing configuration is discussed later in this chapter.

Simplified network topology: In some cases, port channels can be used to simplify network topology by aggregating multiple links into one logical link. For example, if you have multiple links between a pair of switches, STP blocks all the links except one, to avoid network loops. In the case of port channels, multiple Ethernet links are bundled together to create a logical link; therefore, it simplifies the network topology by avoiding the STP calculation and reducing network complexity by reducing the number of links between switches.

Port Channel Compatibility Requirements

To bundle multiple switch interfaces into a port channel, these interfaces must meet the compatibility requirements. For example, when you add an interface to a port channel, its speed must match with other member interfaces in this port channel; you cannot put a 1 G interface and a 10 G interface into the same port channel. Some of these attributes are Speed, Duplex, Flow-control, Port mode, VLANs, MTU, and Media type. Other interface attributes are checked by switch software to ensure that the interface is compatible with the channel group. In case of an incompatible attribute, port channel creation will fail. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel. On the Nexus platform, you can use the show port-channel compatibility-parameters command to see the full list of compatibility checks. You can force ports with incompatible parameters to join the port channel if the following parameters are the same: Speed, Duplex, and Flow-control.
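The following sketch shows how these checks might be used in practice on a Nexus switch; the interface and channel-group numbers are hypothetical. The force keyword joins a port whose optional parameters differ, provided the speed, duplex, and flow control match.

! List every parameter the software checks before bundling a port
switch# show port-channel compatibility-parameters
! Force an otherwise incompatible port into channel group 10
switch# configure terminal
switch(config)# interface ethernet 1/5
switch(config-if)# channel-group 10 force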

Link Aggregation Control Protocol

The Link Aggregation Control Protocol (LACP) provides a standard mechanism to ensure that peer switches exchange information and agree on the necessary details to bundle ports into a port channel. This protocol ensures that both sides of the link have a compatible configuration (Speed, Duplex, Flow-control, allowed VLAN) to form a bundle. LACP was initially part of the IEEE 802.3ad standard and later moved to the IEEE 802.1AX standard. Network devices use LACP to negotiate an automatic bundling of links by sending LACP packets to the peer. LACP must be enabled on switches at both ends of the link.

To run LACP, you first enable the LACP feature globally on the device. Then you enable LACP for each channel by setting the channel mode for each interface to active or passive. LACP enables you to configure up to 16 interfaces into a port channel. On Nexus 7000 switches, on M Series modules, a maximum of 8 interfaces can be in active state, and a maximum of 8 interfaces can be placed in a standby state. Starting from NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F Series module.

Note
Nexus switches do not support the Cisco proprietary Port Aggregation Protocol (PAgP) to create port channels.
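Because LACP is a conditional feature on NX-OS, enabling it globally is always the first step. A brief sketch follows; the show commands are examples of what you might check afterward.

! Enable LACP globally before any interface can use active or passive mode
switch(config)# feature lacp
! Confirm the feature state and inspect LACP activity
switch# show feature | include lacp
switch# show lacp counters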

Table 1-3 shows the port channel modes supported by Nexus switches.

Table 1-3 Port Channel Modes

If you attempt to change the channel mode to active or passive before enabling the LACP feature on the switch, the device returns an error message. After LACP is enabled globally, you can enable LACP on ports by configuring each interface in the channel for the channel mode as either active or passive. Both the passive and active modes enable LACP to negotiate between ports to determine whether they can form a port channel, based on criteria such as the port speed and the trunking state. The passive mode is useful when you do not know whether the remote system, or partner, supports LACP. Ports can form an LACP port channel when they are in different LACP modes as long as the modes are compatible, as shown in Figure 1-6. When LACP attempts to negotiate with an interface in the on state, it does not receive any LACP packets; therefore, it does not join the LACP channel group, and it becomes an individual link for that interface.

Figure 1-6 Port Channel Mode Compatibility

Configuring Port Channel

Port channel configuration on the Cisco Nexus switches includes the following steps:

1. Enable the LACP feature. This step is required only if you are using active mode or passive mode.

2. Configure the physical interface of the switch with the channel-group command and specify the channel number. This command automatically creates an interface port channel with the number that you specified in the command. You can also specify the channel mode on, active, or passive within the channel-group command.

3. Configure the newly created interface port channel with the appropriate configuration, such as description, trunk configuration, allowed VLANs, and so on.
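Put together, the three steps might look like the following sketch; the interface range, channel number, description, and VLAN list are hypothetical:

switch(config)# feature lacp
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 10 mode active
switch(config-if-range)# exit
switch(config)# interface port-channel 10
switch(config-if)# description Uplink to peer switch
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 10,20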

Figure 1-7 shows configuration of a port channel between two switches.

Figure 1-7 Port Channel Configuration Example

Port Channel Load Balancing

Nexus switches distribute the traffic across all operational ports in a port channel. This load balancing is done using a hashing algorithm that takes addresses in the frame as input and generates a value that selects one of the links in the channel. This provides load balancing across all member ports of a port channel, but it also ensures that frames from a particular address are always forwarded on the same port; hence, they are always received in order by the remote device. You can configure the device to use one of the following methods to load balance across the port channel:

Destination MAC address
Source MAC address
Source and destination MAC address
Destination IP address
Source IP address
Source and destination IP address
Source TCP/UDP port number
Destination TCP/UDP port number
Source and destination TCP/UDP port number
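For example, the following sketch selects source-and-destination IP hashing (any of the methods above could be substituted) and then verifies the setting; note that the exact keyword syntax varies by platform (for example, src-dst ip on some Nexus 7000 releases):

switch(config)# port-channel load-balance ethernet source-dest-ip
switch(config)# exit
switch# show port-channel load-balance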

Verifying Port Channel Configuration

Several show commands are available on the Nexus switch to check port channel configuration. These commands are helpful to verify port channel configuration and troubleshoot port channel issues. Table 1-4 provides some useful commands.

Table 1-4 Verify Port Channel Configuration

Virtual Port Channel

In this section, you learn about important concepts of virtual port channel (vPC).

What Is Virtual Port Channel?

You learned in the previous section that a port channel bundles multiple physical links into a logical link. All the member ports of a port channel belong to the same network switch. A vPC enables the extension of a port channel across two physical switches. It means that member ports in a virtual port channel can be from two different network switches. Cisco vPC technology allows dual-homing of a downstream device to two upstream switches. These two switches work together to create a virtual domain, so a port channel can be extended across the two devices within this virtual domain. The upstream switches present themselves to the downstream device as one switch from the port channel and Spanning Tree Protocol (STP) perspective. Figure 1-8 shows a downstream switch connected to two upstream switches in a vPC configuration.

Figure 1-8 Virtual Port Channel

In a conventional network design, building redundancy is an important design consideration to protect against device or link failure. However, for an Ethernet network this redundant topology can result in a Layer 2 loop. A Layer 2 loop is a condition where packets are circulating within the redundant network topology, causing high CPU and link utilization. These loops can cause a meltdown of network resources; therefore, it was necessary to develop protocols and control mechanisms to limit the disastrous effects of a Layer 2 topology loop in the network. Spanning Tree Protocol (STP) was the primary solution to this problem because it provides loop detection and loop management capability for Layer 2 Ethernet networks. Over a period of time, and after going through a number of enhancements and extensions, STP became very mature and stable. Although STP can support very large network environments, it still has a fundamental issue that makes it suboptimal for the network. This issue is related to the STP principle that allows only one active network path from one device to another to break the topology loops in a network. It means that no matter how many connections exist between network devices, only one will be active at any given time.

A large Layer 2 network is built using many network switches connected to each other in a redundant topology using multiple network links. STP does not provide an efficient way to manage large Layer 2 networks.

In Layer 2 network design, smaller Layer 2 segments are preferred. However, with the current data center virtualization trend, this preference is changing. A virtualized server running hypervisor software, such as VMware ESX Server, Microsoft Hyper-V, and Citrix XenServer, requires Layer 2 Ethernet connectivity between the physical servers to support seamless workload mobility. There are some other applications, such as high availability clusters, that require member servers to be on the same Layer 2 Ethernet network to support private interprocess communication and public client facing communication using a virtual IP of the cluster. These requirements lead to the use of large Layer 2 networks, within and across the data center locations. As a result, data center network architecture is shifting from a highly scalable Layer 3 network model to a highly scalable Layer 2 model. The innovation of new technologies to manage large Layer 2 network environments is the result of this shift in the data center architecture. Therefore, it is important to migrate away from STP as a primary loop management technology toward new technologies such as vPC and Cisco FabricPath. FabricPath technology is discussed later in this chapter.

Note

A brief summary of Spanning Tree is provided later in the FabricPath section. Detailed discussion on STP is available in Introducing Cisco Data Center Networking (DCICN).

In large networks with redundant devices. Figure 1-9 shows the comparison between STP and vPC topology using the same devices and network links. It means that if you configure a port channel between two network devices. This environment combines the benefits of hardware redundancy with the benefits of port channel. It can recover from a link loss in the bundle within a second. In a port channel. Layer 2 loops are managed by bundling multiple physical links into one logical link.discussion on STP is available in Introducing Cisco Data Center Networking (DCICN). Figure 1-9 Virtual Port Channel Bi-Section Bandwidth The two switches that act as the logical port channel endpoint are still two separate devices. Virtual Port Channel (vPC) addresses this limitation by allowing a pair of switches acting as a virtual endpoint. This enhancement enables the use of multiple Ethernet links to forward traffic between the two network devices. so it looks like a single logical entity to port channel–attached devices. resulting in efficient use of network capacity. It is important to observe that there is no blocking port in the vPC topology. with minimal loss of traffic and no effect on the active network topology. It also provides the capability to equally balance traffic by sending packets to all available links using a load-balancing algorithm. Port channel technology has another great benefit. This configuration keeps the switches from forwarding broadcast and multicast traffic back to the same link. The limitation of the classic port channel is that it operates between only two devices. Figure 1-9 shows a comparison of STP topology and vPC topology. there is only one logical path between the two switches. Benefits of Using Virtual Port Channel Virtual Port Channel provides the following benefits: . These switches collaborate only for the purpose of creating a vPC. avoiding loops. Port channel technology discussed earlier in this chapter was a key enhancement to Layer 2 Ethernet networks. the alternative path is often connected to a different network switch in a topology that would cause a loop.

Enables a single device to use a port channel across two upstream switches
Eliminates STP blocked ports
Provides a loop-free topology
Uses all available uplink bandwidth
Provides fast convergence in the case of link or device failure
Provides link-level resiliency
Helps ensure high availability

Components of vPC

In vPC configuration, a downstream network device that connects to a pair of upstream Nexus switches sees them as a single Layer 2 switch. These two upstream Nexus switches operate independently with their own control, data, and management planes. However, the data planes of these switches are modified to ensure optimal packet forwarding in vPC configuration. The control planes of these switches also exchange information so it appears as a single logical Layer 2 switch to a downstream device.

Note

The operational plane of a switch includes control, data, and management planes. This subject is covered in Chapter 5, “Management and Monitoring of Cisco Nexus Devices.”

Before going into the detail of vPC control and data plane operation, it is important to understand the components of vPC and the function of these components. Figure 1-10 shows the components of a virtual port channel.

Figure 1-10 Virtual Port Channel Components

The vPC architecture consists of the following components:

vPC peers: The two Cisco Nexus switches combined to build a vPC architecture are referred to as vPC peers. This pair of switches acts as a single logical switch, which enables other devices to connect to the two chassis using vPC.

vPC peer link: This is a link between two vPC peers to synchronize state. The vPC peer link is the most important connectivity element in the vPC system. Typically, this link is built using two physical interfaces in port channel configuration. This link creates the illusion of a single control plane by forwarding bridge protocol data units (BPDUs) and Link Aggregation Control Protocol (LACP) packets to the primary vPC switch from the secondary vPC switch. The peer link is also used to synchronize MAC address tables between the vPC peers and to synchronize Internet Group Management Protocol (IGMP) entries for IGMP snooping. The peer link provides the necessary transport for data plane traffic, such as multicast traffic, and for the traffic of orphaned ports. If a vPC device is also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) packets.

Cisco Fabric Services: The Cisco Fabric Services is a reliable messaging protocol used between the vPC peer switches to synchronize control and data plane information. It uses the vPC peer link to exchange information between peer switches. When you enable the vPC feature, the device automatically enables Cisco Fabric Services over Ethernet (CFSoE). No CFSoE configuration is required for vPC to function properly. CFSoE carries messages and packets for many features linked with vPC, such as STP and IGMP. To help ensure that the vPC peer link communication for the CFSoE protocol is always available, Spanning Tree has been modified to keep the peer-link ports always forwarding.

vPC peer keepalive link: This is a Layer 3 communication path between vPC peer switches. The peer keepalive link is used to monitor the status of the vPC peer when the vPC peer link goes down. It helps the vPC switch to determine whether the peer has failed or whether the vPC peer switch has failed entirely. Peer keepalive is used as a secondary test to make sure that the remote peer switch is operating properly. The vPC peer keepalive link is not used for any data or synchronization traffic; it only sends IP packets to make sure that the originating switch is operating and running vPC. Peer keepalive can also use the management port of the switch and often runs over an out-of-band (OOB) network.

vPC domain: This is a pair of vPC peer switches combined using vPC configuration. A vPC domain is created using two vPC peer devices that are connected together using a vPC peer keepalive link and a vPC peer link. A numerical domain ID is assigned to the vPC domain. A peer switch can join only one vPC domain.

vPC member port: This port is a member of a virtual port channel on the vPC peer switch.

vPC: This is a Layer 2 port channel that spans the two vPC peer switches. The device on the other end of vPC sees the vPC peer switches as a single logical switch. There is no need for this device to support vPC itself. It connects to the vPC peer switches using a regular port channel. This device can be configured to use LACP or a static port channel configuration.

vPC VLANs: The VLANs that are allowed on vPC are called vPC VLANs. These VLANs must also be allowed on the vPC peer link.

Non-vPC VLANs: These VLANs are not part of any vPC and are not present on a vPC peer link.

Orphan device: The term orphan device is used for any device that is connected to only one switch within a vPC domain. It means that the orphan device is not using port channel configuration, and for this device there is no vPC configured on the peer switches.

Orphan port: This is a switch port that is connected to an orphan device. This term is also used for the vPC member ports that are connected to only one vPC switch. This situation can happen in a failure condition where a device connected to vPC loses all its connection to one of the vPC peer switches.

vPC Data Plane Operation

vPC technology is designed to always forward traffic locally when possible. In a normal network condition, the vPC peer link is not used for the regular vPC traffic and is considered to be an extension of the control plane between the vPC peer switches.

The vPC peer link carries the following types of traffic:

vPC control traffic, such as CFSoE, BPDUs, and LACP messages
Flooding traffic, such as broadcast, multicast, and unknown unicast traffic
Traffic from orphan ports

It means that the vPC peer link does not typically forward data packets; it is used specifically for switch management traffic and occasionally for the data packets from failed network ports. This behavior of vPC enables the solution to scale because the bandwidth requirement for the vPC peer link is not directly related to the total bandwidth of all vPC ports.

To implement the specific vPC forwarding behavior, Cisco Fabric Services over Ethernet (CFSoE) is used to synchronize the Layer 2 forwarding tables between the vPC peer switches. Therefore, there is no dependency on the regular MAC address learning between the vPC peer switches. CFSoE-based MAC address learning is applicable only to the vPC ports. This method of learning is not used for the ports that are not part of vPC configuration.

vPC performs loop avoidance at the data plane by implementing forwarding rules in the hardware. One of the most important forwarding rules for vPC is that a packet that enters the vPC peer switch via a vPC member port, and then goes to the other peer switch via the peer link, is not allowed to exit the switch on a vPC member port. This packet can exit on any other type of port, such as an L3 port or an orphan port. This rule prevents the packets that are received on a vPC from being flooded back onto the same vPC by the other peer switch. The vPC loop avoidance rule is depicted in Figure 1-11.

Figure 1-11 vPC Loop Avoidance

It is important to understand vPC data plane operation for the orphan ports. The first type of orphan port is the one that is connected to an orphan device and is not part of any vPC configuration. For this type of orphan port, normal switch forwarding rules are applied. The traffic for this type of orphan port can use the vPC peer link as a transit link to reach the devices on the other vPC peer switch. The second type of orphan port is the one that is a member of a vPC configuration, but the other peer switch has lost all the associated vPC member ports. In this special case, the vPC loop avoidance rule is disabled; the vPC peer switch will be allowed to forward the traffic that is received on the peer link to one of the remaining active vPC member ports.

vPC Control Plane Operation

As discussed earlier, the vPC peer link is used for control plane messaging, such as CFSoE, BPDUs, and LACP messages. The CFSoE is used as the primary control plane protocol for vPC. CFSoE performs the following vPC control plane operations:

Exchange the Layer 2 forwarding table between the vPC peers: If one vPC peer learns a new MAC address, both vPC peer switches know that MAC address, and this MAC is programmed on the Layer 2 Forwarding (L2F) table of both peer devices. This new method of MAC address learning using CFSoE replaces the conventional method of MAC address learning for vPC. It is helpful in reducing data traffic on the vPC peer link by forwarding the traffic to the local vPC member ports. It also helps to stabilize the vPC operation during failure conditions.

Synchronize the Address Resolution Protocol (ARP) table: This feature enables faster convergence time upon reload of a vPC switch. The ARP table is synchronized between the vPC peer devices using CFSoE. On the Nexus 5000 Series switch, this feature is implemented in Cisco NX-OS Software Release 5.0(2), and for Nexus 7000 it is implemented in Cisco NX-OS Software Version 4.2(6).

Perform consistency and compatibility checks: The peer switches exchange information using CFSoE to ensure that vPC configuration is consistent across both switches. It checks the switch configuration for vPC as well as switch-wide parameters that need to be configured consistently on both peer switches. Similar to the conventional port channel, vPC is also subject to the port compatibility checks. CFSoE is used to validate that vPC member ports are compatible and can be combined to form a port channel.

Synchronize the IGMP snooping status: In vPC configuration, IGMP snooping behavior is modified to synchronize the IGMP entries between the peer switches. When IGMP traffic enters a vPC peer switch, it triggers the synchronization of the multicast entry on both vPC peer switches.

Monitor the status of the vPC member ports: For a smooth vPC operation during failure conditions, CFSoE monitors the status of vPC member ports. When all member ports of a vPC go down on one of the peer switches, the other peer switch is notified that its ports are now orphan ports. The switch modifies the forwarding behavior so that all traffic received on the peer link for that vPC is now forwarded locally on the vPC port.

Determine primary and secondary vPC devices: When you combine the vPC peer switches in a vPC domain, an election is held to determine the primary and secondary role of the peer switches. This role is a control plane function, and it defines which switch will be primarily responsible for the generation and processing of spanning tree BPDUs. If the primary vPC peer device fails, the secondary vPC peer device takes the primary role, and it remains primary even if the original device is restored; this vPC role is nonpreemptive. In the case of peer link failure, the peer keepalive mechanism is used to find whether the peer switch is still operational. If both switches are operational and the peer link is down, the secondary switch shuts down all vPC member ports and all switch virtual interfaces (SVI) that are associated with the vPC VLANs. This mechanism protects from split brain between the vPC peer switches.

Agree on LACP and STP parameters: In a vPC domain, two peer switches agree on LACP and STP parameters to present themselves as a single logical switch to the device that is connected on a vPC. For LACP, a common LACP system ID is generated from a reserved pool of MAC addresses, which are combined with the vPC domain ID. For STP, the behavior depends on the peer-switch option. When the peer-switch option is used, both primary and secondary switches use the same bridge ID to present themselves as a single switch. In this case, both switches send and process BPDUs. If the peer-switch option is not used, only the primary vPC switch sends and processes BPDUs, and it uses its own bridge ID. In this case, the secondary switch only relays BPDUs and does not generate any BPDU.
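You can observe the outcome of these control plane operations from the CLI; a brief sketch (the port channel number is hypothetical, output omitted):

switch# show vpc role
switch# show vpc consistency-parameters global
switch# show vpc consistency-parameters interface port-channel 10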

vPC Limitations

It is important to know the limitations of vPC technology. Some of the key points about vPC limitations are outlined next:

Only two switches per vPC domain: A vPC domain by definition consists of a pair of switches identified by a shared vPC domain ID. It is not possible to add more than two switches or Virtual Device Contexts (VDC) to a vPC domain.

Only one vPC domain ID per switch: Only one vPC domain ID can be configured on a single switch or VDC. It is not possible for a switch or VDC to participate in more than one vPC domain.

Each VDC is a separate switch: vPC is a per-VDC function on the Cisco Nexus 7000 Series switches. vPCs can be configured in multiple virtual device contexts (VDC), but the configuration is entirely independent. A separate vPC peer link and vPC peer keepalive link is required for each of the VDCs. vPC domains cannot be stretched across multiple VDCs on the same switch, and all ports for a given vPC must be in the same VDC.

vPC is a Layer 2 port channel: A vPC is a Layer 2 port channel. The vPC technology does not support the configuration of Layer 3 port channels. Dynamic routing from the vPC peers to routers connected on a vPC is not supported. It is recommended that routing adjacencies be established on separate routed links.

Peer-link is always 10 Gbps or more: Only 10 Gigabit Ethernet ports can be used for the vPC peer link. It is recommended that you use at least two 10 Gigabit Ethernet ports in dedicated mode on two different I/O modules.

Configuration Steps of vPC

When you are configuring a vPC, the configuration must be done on both peer switches. Before you start vPC configuration, make sure that peer switches are connected to each other via physical links that can support the requirements of a vPC peer link, and that a Layer 3 path is available between the two peer switches.

vPC configuration on the Cisco Nexus switches includes the following steps:

1. Enable the vPC feature, which must be enabled on both peer switches. Use the following command to enable the vPC feature:

feature vpc

2. Create the vPC domain and assign a domain ID to this domain. This domain ID must be unique and should be configured on both vPC switches.

Use the following command to create the vPC domain:

vpc domain <domain-id>

3. Establish peer keepalive connectivity. A peer keepalive link must be configured before you set up the vPC peer link. The peer keepalive link can be in any VRF, including the management VRF. Use the following command under the vPC domain to configure the peer keepalive link:

peer-keepalive destination <remote peer IP> source <local IP> vrf management

4. Create a peer link. The vPC peer link is configured in a port channel configuration. This port channel must be trunked to enable multiple VLANs over the vPC. Use the following commands under the port channel to configure the peer link:

interface port-channel <peer link number>
switchport mode trunk
vpc peer-link

5. Create the vPC itself. Create a regular port channel and add the vpc command under the port channel configuration to create a vPC. This configuration must be the same on both peer switches.

interface port-channel <vPC number>
vpc <vPC number>
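Assembled on one peer, the five steps might look like the following sketch; the domain ID, IP addresses, and port channel numbers are hypothetical, and the mirror-image configuration is required on the other peer:

N7K-1(config)# feature vpc
N7K-1(config)# vpc domain 100
N7K-1(config-vpc-domain)# peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
N7K-1(config-vpc-domain)# exit
N7K-1(config)# interface port-channel 1
N7K-1(config-if)# switchport mode trunk
N7K-1(config-if)# vpc peer-link
N7K-1(config)# interface port-channel 10
N7K-1(config-if)# switchport mode trunk
N7K-1(config-if)# vpc 10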

Figure 1-12 shows configuration of vPC to connect an access layer switch to a pair of distribution layer switches.

Figure 1-12 vPC Configuration Example

Verification of vPC

To verify the status of vPC, several show commands are available. In addition, you can use all port channel show commands discussed earlier. These commands are helpful in verifying vPC configuration and troubleshooting vPC issues.
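For instance, the following sketch (output omitted) checks the overall vPC state, the keepalive status, and global consistency:

switch# show vpc brief
switch# show vpc peer-keepalive
switch# show vpc consistency-parameters global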

Table 1-5 shows some useful vPC commands.

Table 1-5 Verify vPC Configuration

FabricPath

As mentioned earlier in this chapter, server virtualization and seamless workload mobility is a leading trend in the data center. A large-scale, highly scalable, and flexible Layer 2 fabric is needed in most data centers to support distributed and virtualized application workloads. To understand FabricPath, it is important to discuss limitations of STP.

Spanning Tree Protocol

To support high availability within a Layer 2 domain, switches are normally interconnected using redundant links. STP is a Layer 2 control plane protocol that runs on switches to ensure that you do not create topology loops when you have these redundant paths in the network. To create a loop-free topology, STP builds a treelike structure by blocking certain ports in the network. Switches exchange information using bridge protocol data units (BPDU), and this information is used to elect a root bridge and perform subsequent network configuration by selecting only the shortest path to the root bridge. A root bridge is elected, and STP ensures that only one network path to the root bridge exists. All network ports connecting to the root bridge via an alternative network path are blocked. This tree topology implies that certain links are unused, and traffic does not necessarily take the optimal path. Figure 1-13 shows the STP topology.

Figure 1-13 Spanning Tree Topology

Key limitations of STP are as follows:

Lack of multipathing: In STP, all the switches build only one shortest path to the root switch, and all the redundant links are blocked. Therefore, equal cost multipath cannot be used to efficiently load balance traffic across multiple network links. This leads to inefficient utilization of network resources.

Inefficient path selection: STP does not always provide a shortest path between two switches. A simple example of inefficient path selection is shown in Figure 1-13, where host A and host B cannot use the direct link between the two switches because it is blocked by STP.

Slow convergence: A link failure or any other small change in network topology can cause large changes in the spanning tree that must be propagated to all switches and could take about 30 seconds in older versions of STP. Rapid Spanning Tree Protocol (RSTP) has improved the STP convergence to a few seconds; however, certain failure scenarios are complex and can destabilize the network for longer periods of time. Furthermore, when there are changes in the spanning tree topology, the switches will clear and rebuild their forwarding table. This results in more flooding to learn new MAC addresses.

Protocol safety: There is no safety against STP failure or misconfiguration. A Layer 2 packet header does not have a time to live (TTL), so in a Layer 2 loop condition, packets can continue circulating in the network with no time out. Such failures can be extremely dangerous for the switched network because a topology loop can consume all network bandwidth, causing a major network outage.

Limited scalability: Several factors limit scalability of classic Ethernet. A switch has limited space to store Layer 2 forwarding information. This limit is increasing with improvements in the silicon technology; however, the demand to store more addresses is also growing with computing capacity and the introduction of server virtualization. In addition to the large forwarding table, packet flooding for broadcast and unknown unicast packets populates all end host MAC addresses in all switches in the network. As you know, the Layer 2 MAC addressing is nonhierarchical, and it is not possible to optimize Layer 2 forwarding tables by summarizing the Layer 2 addresses.

What Is FabricPath?

Cisco FabricPath is an innovation to build a simplified and scalable Layer 2 fabric. It combines the benefits of Layer 3 routing with the advantages of Layer 2 switching. Layer 3 routing provides scalability, multipathing, fast convergence, and shortest path selection from any host to any other host in the network. However, Layer 3 creates network segmentation, which makes it less flexible for workload mobility within a virtualized data center. FabricPath brings the stability and performance of Layer 3 routing to the Layer 2 switched network and builds a highly resilient, massively scalable, and flexible Layer 2 fabric.

A FabricPath network is created by connecting a group of switches into an arbitrary topology, combining them in a fabric using a few simple commands. There is no need to run Spanning Tree Protocol within the FabricPath network. Intermediate System-to-Intermediate System (IS-IS) Protocol is used to provide fabric-wide intelligence and ties the elements together. FabricPath can be deployed in any arbitrary topology; however, a typical spine and leaf topology is used in the data center. Figure 1-14 shows the FabricPath topology.

Figure 1-14 FabricPath Topology

Benefits of FabricPath

FabricPath converts the entire data center into one big switching fabric. Benefits of Cisco FabricPath technology are as follows:

Operational simplicity: Cisco FabricPath is simple to configure and easy to operate. The switch control plane protocol for FabricPath starts automatically, as soon as the feature is enabled and ports are assigned to the FabricPath network. A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. This protocol requires less configuration than STP in the conventional Layer 2 network; therefore, overall complexity of the solution is reduced significantly.

Flexibility: FabricPath provides a single domain Layer 2 fabric to connect all the servers. Because all servers within this fabric reside in one common Layer 2 network, they can be moved around within the data center in a nondisruptive manner without any constraints on the design. The level of enhanced flexibility translates into lower operational costs.

High performance: High performance in a FabricPath network is achieved using an equal-cost multipath (ECMP) feature. The Ethernet fabric can use all the available paths in the network, and traffic can be load balanced across these paths. In addition to ECMP, FabricPath supports the shortest path to the destination; hence, the network latency is reduced and higher performance can be achieved.

Reliability based on proven technologies: Cisco FabricPath uses a reliable and proven

control plane mechanism, which is based on the IS-IS routing protocol. This protocol provides fast convergence that has been proven in large service provider networks. The FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a Reverse Path Forwarding (RPF) check is applied for multidestination frames. In addition, the FabricPath data plane provides robust controls for loop prevention and mitigation.

High scalability and high availability: FabricPath delivers highly scalable bandwidth with capabilities such as ECMP and multi-topology forwarding. It means incremental levels of resiliency and bandwidth capacity, which is required as computing and virtualization density continue to grow, enabling IT architects to start with a small Layer 2 network and build a much bigger data center fabric. FabricPath provides increased redundancy and high availability with the multiple paths between the end hosts.

Efficiency: Conventional Ethernet switches learn the MAC addresses from the frames they receive on the network. As the network size grows, large networks with many hosts require a large forwarding table, and it increases the cost of switches. Furthermore, the Ethernet MAC address cannot be summarized to reduce the size of the forwarding table. As virtualization drives scale requirements up in the data center, you might require equipment upgrades. Therefore, FabricPath learns MAC addresses selectively at the edge, allowing scaling the network beyond the limits of the MAC address table of individual switches.

Components of FabricPath

To understand FabricPath control and data plane operation, it is important to understand the components of FabricPath. Figure 1-15 shows components of a FabricPath network.

Figure 1-15 Components of FabricPath

Spine switches: In a two-tier architecture, spine switches are deployed to provide connectivity between leaf switches. Spine switches act as the backbone of the switching fabric.

The role of spine switches is to provide a transit network between the leaf switches. The hosts on two leaf switches use spine switches to communicate with each other.

Leaf switches: Leaf switches provide access layer connectivity for servers and other access layer devices. In a data center, a typical FabricPath network is built using a spine and leaf topology.

Classic Ethernet (CE): Conventional Ethernet using transparent bridging and running STP is referred to as classic Ethernet.

FabricPath network: A network that connects the FabricPath switches is called a FabricPath network. Nexus switches can be part of both a FabricPath network and a classic Ethernet network at the same time, with some ports participating in FabricPath and some ports in a classic Ethernet network. In this FabricPath network, packets are forwarded based on a new header, which includes information such as switch ID and hop count. The FabricPath header is added to the Layer 2 packet when a packet enters the FabricPath network at the ingress switch, and it is stripped when the packet leaves the FabricPath network at the egress switch.

Edge port: Edge ports are switch interfaces that are part of classic Ethernet. These interfaces behave like normal Ethernet ports, and you can connect any Ethernet device to these ports. Because these ports are part of classic Ethernet, MAC learning is performed as usual, and packets are transmitted using standard IEEE 802.3 Ethernet frames. You can configure an edge port as an access port or as an IEEE 802.1Q trunk. An edge port can carry both CE as well as FabricPath VLANs. Therefore, devices connected on an edge port in a FabricPath VLAN can send traffic to a remote host over the FabricPath network.

Core port: Ports that are part of a FabricPath network are called core ports. FabricPath core ports always forward Ethernet frames encapsulated in a FabricPath header. Ethernet frames transmitted on core ports always carry an IEEE 802.1Q VLAN tag, so a core port can be considered a trunk port. Only FabricPath VLANs are allowed on core ports.

FabricPath VLANs: The VLANs that are allowed on the FabricPath network are called FabricPath VLANs. When you create a VLAN, by default it operates in Classic Ethernet (CE) mode. To allow a VLAN over the FabricPath network, you must go to VLAN configuration and change the mode to FabricPath.

FabricPath IS-IS: FabricPath switches run a Shortest Path First (SPF) routing protocol based on standard IS-IS to build their forwarding table similar to a Layer 3 network. This link state protocol is the core of the FabricPath control plane.

Dynamic Resource Allocation Protocol: Dynamic Resource Allocation Protocol (DRAP) is an extension to FabricPath IS-IS that ensures networkwide unique and consistent Switch IDs and FTAG values.

MAC address table: This table contains MAC address entries and the source from where these MAC addresses are learned. The source could be a local switch interface or a remote switch.

Switch table: The switch table provides Layer 2 routing information based on Switch ID. It contains Switch IDs and next-hop interfaces to reach the switch. If a MAC address table entry points to a local switch interface, the packet is delivered using classic Ethernet format. In a case where a MAC address table entry is pointing to a remote switch, a lookup is done in the switch table to find the next-hop interface. The packet is then encapsulated in the FabricPath header and sent to the next hop within the FabricPath network.

FabricPath Frame Format

When an Ethernet frame enters the FabricPath network, the switch encapsulates the frame with a 16-byte FabricPath header. This header includes a 48-bit outer MAC destination address (ODA), a 48-bit outer MAC source address (OSA), and a 32-bit FabricPath tag. The ODA and OSA contain a 12-bit unique identifier called SwitchID that is used to forward packets within the FabricPath network. The FabricPath tag contains the new Ethertype for the FabricPath header, a forwarding tag called FTAG that uniquely identifies a forwarding graph, and the TTL field that has the same functionality as in the IP header. Figure 1-16 shows the FabricPath frame format.

Figure 1-16 FabricPath Frame Format

Endnode ID: This field is currently not in use. The purpose of this field is to provide future capabilities, where a virtual or physical end host located behind a single port can uniquely identify itself to the FabricPath network.

OOO/DL: This field is currently not in use. The function of the OOO (out of order)/don't learn (DL) bit varies depending on whether it is used in the outer source address (OSA) or the outer destination address (ODA). The purpose of this field is to provide future capabilities for per-packet load sharing when multiple equal cost paths are available.

U/L bit: FabricPath switches set this bit in all unicast OSA and ODA fields, indicating that the MAC address is a locally administered address. This is required because the OSA and ODA fields are not MAC addresses and do not uniquely identify a particular hardware component the way a standard MAC address would.

I/G bit: The I/G bit serves the same function in FabricPath as in standard Ethernet. Any multidestination addresses have this bit set.

FabricPath topology: Similar to STP, in FabricPath a set of VLANs that have the same forwarding behavior will be mapped to a topology or forwarding instance, where a set of VLANs could belong to one or a different instance, and different topologies could be constructed for a different set of VLANs. This could be used for traffic engineering, administrative purposes, or security.

Switch ID: The switch ID is a 12-bit unique identifier for a switch within the FabricPath network. In the outer source address (OSA) of the FabricPath header, this field is used to identify the switch that originated the packet. In the outer destination address (ODA), this field identifies the destination switch for the FabricPath packet. This switch ID can be assigned manually within the switch configuration or allocated automatically using Dynamic Resource Allocation Protocol (DRAP).

Sub-SwitchID: Sub-SwitchID is a unique identifier assigned to an emulated switch within the FabricPath SwitchID. This emulated switch represents a vPC+ port-channel interface associated with a pair of FabricPath switches. vPC+ is discussed later in this chapter. In the absence of vPC+, this field is always set to 0.

LID: Local ID (LID) is also known as port ID. LID is used to identify the specific physical or logical edge port from which the packet originated or to which it is destined. The forwarding decision at the destination switch can use the LID to send the packet to the correct edge port without any additional lookups on the MAC table. The value of this field in the FabricPath header is locally significant to each switch.

Ethertype: An Ethertype value of 0x8903 is used for FabricPath.

FTAG: FabricPath forwarding tag (FTAG) is a 10-bit traffic identification tag within the FabricPath header. For unicast traffic, it identifies the FabricPath topology the frame is traversing; the system selects a unique FTAG for each topology configured within FabricPath. In the case of multicast and broadcast traffic, the FTAG identifies a multidestination forwarding tree within a given topology; the system selects the FTAG value for each multidestination tree.

TTL: This field represents the Time to Live (TTL) of the FabricPath frame. Each FabricPath switch hop decrements the TTL by 1, and frames with an expired TTL are discarded. In case of temporary loops during protocol convergence, the TTL prevents Layer 2 frames from circulating endlessly within the network. TTL operates in the same manner within FabricPath as it does in traditional IP forwarding.

FabricPath Control Plane

Cisco FabricPath uses a single control plane that functions for unicast, broadcast, and multicast packets by leveraging the Layer 2 Intermediate System-to-Intermediate System (IS-IS) Protocol. The Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 IS-IS. There is no need to run STP within the FabricPath network. FabricPath IS-IS provides the following benefits:

No IP dependency: No need for IP reachability to form adjacency between devices.

Easily extensible: Using custom type, length, value (TLV) settings, IS-IS devices can exchange information about virtually anything.

Provides Shortest Path First (SPF) routing: Excellent topology building and reconvergence characteristics.

The FabricPath control plane uses IS-IS Protocol to perform the following functions:

Switch ID allocation: FabricPath switches establish IS-IS adjacency to the directly connected neighboring switches on the core ports. These switches then start a negotiation process that

ensures all FabricPath switches have a unique switch ID. It also ensures that the type and number of FTAG values in use are consistent. Dynamic Resource Allocation Protocol (DRAP) automatically performs this task. While the initial negotiation is in progress, the core interfaces are brought up but not added to the FabricPath topology, and no data plane traffic is passed on these interfaces. However, a configuration command is also provided for the network administrator to statically assign a switch ID to a FabricPath switch.

Unicast routing: After IS-IS adjacencies are built and routing information is exchanged, FabricPath calculates its unicast routing table. This routing table stores the best path to the destination FabricPath switch, which includes the FabricPath switch ID and its next hop. If multiple equal-cost paths to a particular switch ID exist, up to 16 next hops are stored in the FabricPath routing table by default. You can modify the number of next hops installed using the maximum paths command in FabricPath-domain configuration mode.

Multidestination trees: To ensure loop-free topology for broadcast, multicast, and unknown unicast traffic, multidestination trees are built within the FabricPath network. FabricPath IS-IS automatically creates multiple multidestination trees connecting to all FabricPath switches and uses them to forward multidestination traffic. As of NX-OS release 6.2(2), two such trees are supported per topology. Figure 1-17 shows a network with two multidestination trees, where switch S1 is the root for tree number 1 and switch S4 is the root for tree number 2.

Figure 1-17 FabricPath Multidestination Trees

Within the FabricPath domain, first a root switch is elected for tree number 1 in a topology. After a switch becomes the root for the first tree, this root switch selects the root for each additional multidestination tree in the topology and assigns each multidestination tree a unique FTAG value. FabricPath switches use the following three parameters to elect a root for the first tree, and select the root for any additional trees:

Root priority: An 8-bit value. The default root priority is 64.
System ID: A 48-bit value. It is composed of the VDC MAC address.
Switch ID: The unique 12-bit switch ID.

A higher value of these parameters is preferred. You can influence root bridge election by using the root-priority command. It is recommended to configure a centrally connected switch as the root switch. In this case, spine switches are the best candidates for the root.
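For example, assuming spine switch S1 should own tree number 1, its root priority could be raised under the FabricPath IS-IS domain; a sketch with a hypothetical priority value (the submode prompt may vary by release):

S1(config)# fabricpath domain default
S1(config-fabricpath-isis)# root-priority 255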

FabricPath Data Plane

The Layer 2 packets forwarded through a FabricPath network are of the following types: known unicast, unknown unicast, multicast, or broadcast packets. FabricPath IS-IS computes the SwitchID reachability for each switch in the network. Known unicast forwarding is done based on the mapping between the unicast MAC address and the switch ID that is already learned by the network. Multiple equal cost paths could exist to a destination switch ID. In this case, the packets are flow-based load balanced to distribute traffic over multiple equal cost paths. Multicast forwarding is done based on multidestination trees computed by IS-IS. For optimized IP multicast forwarding, these trees are further pruned based on IGMP snooping at the edges of the FabricPath network. FabricPath uses IS-IS to propagate multicast receiver locations within the FabricPath network. This allows switches to prune off subtrees that do not have downstream receivers. Broadcast and unknown unicast forwarding is also done based on a multidestination tree computed by IS-IS for each topology.

When a unicast packet enters the FabricPath network, it goes through different lookups and forwarding logic as it passes different switches. This forwarding logic is based on the role that a particular switch plays relative to the traffic flow within the FabricPath network. Following are three general roles and the forwarding logic for FabricPath traffic:

Ingress FabricPath switch
Receives the classic Ethernet (CE) frame from a FabricPath edge port.
Performs MAC table lookup to identify the FabricPath switch ID.
Performs switch ID table lookup to determine the FabricPath next-hop interface.
Encapsulates the frame with the FabricPath header and sends it to the core port for the next-hop FabricPath switch.

Core FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs switch ID table lookup to determine which next-hop interface the frame is destined to.
Decrements the TTL and sends the frame to the core port for the next-hop FabricPath switch.

Egress FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs switch ID table lookup to determine whether it is the egress FabricPath switch.
Uses the LID value in the outer destination address (ODA), or a MAC address table lookup, to determine on which FabricPath edge port the frame should be forwarded.

Conversational Learning

FabricPath switches learn MAC addresses selectively, allowing a significant reduction in the size of the MAC address table. This selective MAC address learning is based on bidirectional conversation of end hosts. In a FabricPath switch, the switch learns remote MAC addresses only if the remote device is having a bidirectional "conversation" with a locally connected device; therefore, it is called conversational learning. FabricPath VLANs use conversational MAC address learning, and classic Ethernet VLANs use traditional MAC address learning. The following are MAC learning rules FabricPath uses:

Only FabricPath edge switches populate the MAC table and use MAC table lookups to forward frames. FabricPath core switches generally do not learn any MAC addresses.

For Ethernet frames received from a directly connected access or trunk port, the switch unconditionally learns the source MAC address as a local MAC entry. This MAC learning rule is the same as the classic Ethernet switch.

For unicast frames received with FabricPath encapsulation, the switch learns the source MAC address of the frame as a remote MAC entry only if the destination MAC address matches an already learned local MAC entry.

Broadcast frames do not trigger learning on edge switches; however, broadcast frames are used to update any existing MAC entries already in the table. For example, if a host moves from one switch to another and sends a gratuitous ARP to update the Layer 2 forwarding tables, FabricPath switches receiving that broadcast will update an existing entry for the source MAC address.

Multicast frames trigger learning on FabricPath switches (both edge and core), because several key LAN protocols (such as HSRP) rely on source MAC learning from multicast frames to facilitate proper forwarding.

The unknown unicast frames being flooded in the FabricPath network do not necessarily trigger learning on edge switches.

FabricPath Packet Flow Example

In this section, you review an example of packet flow within a FabricPath network. There are two spine switches and three leaf switches in this sample topology. Spine switches are S1 and S2, and leaf switches are S10, S14, and S15. FabricPath is configured correctly and the network is fully converged. Host A connected to the leaf switch S10 would like to communicate with Host B connected to the leaf switch S15.

Unknown Unicast Packet Flow

In the beginning, the MAC address tables of switch S10 and S14 are empty. The MAC address table of switch S15 has only one local entry for Host B pointing to port 1/2. The following steps explain unknown unicast packet flow originated from Host A:

1. Host A sends an unknown unicast packet to switch S10 on port 1/1. This flow is unknown unicast because switch S10 does not know how to reach destination Host B.

2. S10 updates the classic Ethernet MAC address table with the entry for the source MAC address of Host A and the port 1/1 where the host is connected.

3. A lookup is performed for the destination MAC address of Host B. This lookup is missed because no entry exists for Host B in the switch S10 MAC address table.

4. S10 encapsulates the packet into the FabricPath header and floods it on the FabricPath multidestination tree. This packet has the switch S10 switch ID in its outer source address (OSA).

5. The FabricPath frame is received by switch S14 and switch S15.

6. S14 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup is missed; therefore, the Host A MAC address is not learned on switch S14.

7. S15 receives the packet and performs a lookup for Host B in its MAC address table. The MAC address lookup finds a hit, and the Host A MAC address is added to the MAC table on switch S15. This MAC address is learned on FabricPath, and the switch ID in the OSA is S10. The MAC address table on S15 shows Host A pointing to switch S10.

8. The packet is delivered to port 1/2 on switch S15.

Figure 1-18 illustrates these steps for an unknown unicast packet flow.

Figure 1-18 FabricPath Unknown Unicast

Known Unicast Packet Flow

After the unknown unicast packet flow from switch S10 to S14 and S15, some MAC address tables are populated. Switch S10 has only one entry for Host A connected on port 1/1. Switch S14 has no entry in its MAC address table. Switch S15 has two entries in its MAC address table. The first entry is local for Host B pointing toward port 1/2, and the second entry is remote for Host A pointing toward switch S10. The following steps explain known unicast packet flow originated from Host B:

1. Host B sends a known unicast packet to Host A on switch S15 port 1/2. This flow is known unicast because switch S15 knows how to reach the destination MAC address of Host A.

2. S15 performs the MAC address lookup and finds a hit pointing toward switch S10.

3. The packet is encapsulated in the FabricPath header, with S15 in the OSA and S10 in the ODA. Switch S15 performs the switch ID lookup and selects a FabricPath core port to send the packet.

4. The packet traverses the FabricPath network to reach the destination switch, which is switch S10.

5. The packet is received by switch S10, and the MAC address of Host B is learned by switch S10. This MAC address is learned on FabricPath, and the switch ID in the OSA is S15. The MAC address table on S10 shows Host B pointing to switch S15. Switch S10 performs the MAC address table lookup for Host A, and the packet is delivered on the port 1/1.

Figure 1-19 illustrates these steps for a known unicast packet flow.

Figure 1-19 FabricPath Known Unicast

Virtual Port Channel Plus

A virtual port channel plus (vPC+) enables a classical Ethernet (CE) vPC domain and Cisco FabricPath to interoperate. In a vPC configuration, a dual-connected host can send a packet into the FabricPath network from two edge switches. In a FabricPath network, the Layer 2 MAC addresses are learned only by the edge switches. The learning consists of mapping the Layer 2 MAC address to a switch ID, and it is performed in the data plane. When a packet is received at a remote FabricPath switch, the MAC address table of this remote FabricPath switch is updated with the switch ID of the first vPC switch. Because the virtual port channel performs load balancing, the next traffic flow can go via the second vPC switch. On this remote FabricPath switch, the MAC address table is updated with the switch ID of the second vPC switch. When the remote FabricPath switch receives a packet for the same MAC address with the switch ID of the second vPC switch in its OSA, it sees it as a MAC move. This behavior is called MAC flapping, and it can destabilize the network. To address this problem, FabricPath implements an emulated switch. This emulated switch is a logical way of combining two or more FabricPath switches to emulate a single switch to the rest of the FabricPath network.

The other FabricPath switches are not aware of this emulation and simply see the emulated switch reachable through both of these emulating switches. It means that the two emulating switches must be directly connected via a peer link, and there should be a peer keepalive path between the two switches. The emulated switch implementation in FabricPath where two FabricPath edge switches provide a vPC to a third-party device is called vPC+. The FabricPath packets originated by the two emulating switches are sourced with the emulated switch ID.

Figure 1-20 shows an example of vPC+ configuration. In this example, Host B is connected to switch S14 and S15 in vPC configuration. The packets are load-balanced across the vPC link, and these packets are seen at the remote switch S10 coming from S14 as well as from S15. This results in MAC flapping at switch S10. To solve this problem, configure switch S14 and S15 with a FabricPath emulated switch using vPC+ configuration. As a result of this configuration, remote switch S10 sees the Host B MAC address entry pointing to the emulated switch. To configure vPC+, simply configure the vPC peer-link as a FabricPath core port, and assign an emulated switch ID to the vPC.

Figure 1-20 FabricPath with vPC+
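A sketch of the vPC+ additions on each emulating switch (S14 and S15 in this example); the domain ID and emulated switch ID are hypothetical:

S14(config)# vpc domain 10
S14(config-vpc-domain)# fabricpath switch-id 100
S14(config-vpc-domain)# exit
S14(config)# interface port-channel 1
S14(config-if)# switchport mode fabricpath
S14(config-if)# vpc peer-link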

FabricPath Interaction with Spanning Tree

FabricPath edge ports support direct Classic Ethernet host connections and connections to traditional Spanning Tree switches. FabricPath switches transmit and process STP BPDUs on FabricPath edge ports. However, BPDUs, including TCNs, are not transmitted on FabricPath core ports; as a result, no BPDUs are forwarded or tunneled through the FabricPath network. Therefore, FabricPath isolates STP domains. The entire FabricPath network appears as a single STP bridge to any connected STP domains. To appear as one STP bridge, all FabricPath switches share a common bridge ID, or BID (C84C.75FA.6000). This BID is statically defined and is not user configurable. If you are connecting STP devices to the FabricPath network, make sure you configure all edge switches as STP root. Each FabricPath edge switch must be configured as root for all FabricPath VLANs. Additionally, if multiple FabricPath edge switches connect to the same STP domain, make sure that those edge switches use the same bridge priority value. To ensure that the FabricPath network acts as STP root, all FabricPath edge ports have the STP rootguard function enabled implicitly. If a superior BPDU is received on a FabricPath edge port, the port is placed in the L2 Gateway Inconsistent state until the condition is cleared.

Configuring FabricPath

FabricPath can be configured using a few simple steps. You perform these steps on all FabricPath switches. These steps involve enabling the FabricPath feature, identifying FabricPath interfaces, and identifying FabricPath VLANs. Before you start FabricPath configuration, ensure the following:

You have Nexus devices that support FabricPath.
You have the appropriate license for FabricPath. The Enhanced Layer 2 license is required to run FabricPath.
The system is running minimum NX-OS 5.1.1 (Nexus 7000) / NX-OS 5.1.3 (Nexus 5500) software release.

The FabricPath configuration steps are as follows:

1. Enable the FabricPath feature. Before you start configuring FabricPath on a switch, the FabricPath feature set must be enabled. If required, install the license using the following command:

install license <filename>

Install the FabricPath feature set by issuing install feature-set fabricpath. Then, to enable the FabricPath feature set, use the following command in switch global configuration mode:

feature-set fabricpath

2. Define FabricPath VLANs: identify the VLANs that will be extended across the FabricPath network. By default, all the VLANs are in Classic Ethernet mode. You can change the VLAN mode to FabricPath by going into the VLAN configuration. Use the following command to change the VLAN mode for a range of VLANs:

vlan <range>
mode fabricpath

3. Identify FabricPath interfaces: identify the FabricPath core ports. These ports connect to other FabricPath switches to establish IS-IS adjacency. Configuring a port in FabricPath mode automatically enables IS-IS protocol on the port; there is no need to configure IS-IS protocol for FabricPath. The following command is used to configure an interface in FabricPath mode:

interface <name>
switchport mode fabricpath

After performing this configuration on all FabricPath switches in the topology, FabricPath devices will form IS-IS adjacencies. The unicast and multicast routing information is exchanged, and switches will start forwarding traffic. FabricPath Dynamic Resource Allocation Protocol (DRAP) automatically assigns a switch ID to each switch within the FabricPath network. You can also configure a static switch ID using the fabricpath switch-id <ID number> command.
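End to end, the per-switch configuration might look like this sketch; the VLAN range, interface, and switch ID are hypothetical:

switch(config)# install feature-set fabricpath
switch(config)# feature-set fabricpath
switch(config)# vlan 10-20
switch(config-vlan)# mode fabricpath
switch(config-vlan)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode fabricpath
switch(config-if)# exit
switch(config)# fabricpath switch-id 10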

Table 1-6 lists some useful commands to verify FabricPath.Figure 1-21 FabricPath Configuration Example Verifying FabricPath The show mac address table command can be used to verify that MAC addresses are learned on Cisco FabricPath edge devices. The show fabricpath route command can be used to view the Cisco FabricPath routing table that results from the Cisco FabricPath IS-IS SPF calculations. it provides a pointer to the remote switch from which this address was learned. If multiple equal paths are available between two switches. The Cisco FabricPath routing table shows the best paths to all the switches in the fabric. The command shows local addresses with a pointer to the interface on which the address was learned. all paths will be installed in the Cisco FabricPath routing table to provide ECMP. . For remote addresses.

Table 1-6 Verify FabricPath Configuration Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. noted with the key topics icon in the outer margin of the page. . Table 1-7 lists a reference for these key topics and the page numbers on which each is found.

“Memory Tables. and check your answers in the glossary: Virtual LAN (VLAN) Virtual Port Channel (vPC) Network Interface Card (NIC) Application Centric Infrastructure (ACI) Maximum Transmit Unit (MTU) Link Aggregation Control Protocol (LACP) Spanning Tree Protocol (STP) . and complete the tables and lists from memory.” also on the disc. Appendix C.Table 1-7 Key Topics for Chapter 1 Complete Tables and Lists from Memory Print a copy of Appendix B. includes completed tables and lists to check your work.” (found on the disc) or at least the section for this chapter. Define Key Terms Define the following key terms from this chapter. “Memory Tables Answer Key.

Virtual Device Context (VDC) Cisco Fabric Services over Ethernet (CFSoE) Hot Standby Router Protocol (HSRP) Virtual Router Redundancy Protocol (VRRP) Bridge Protocol Data Unit (BPDU) Internet Group Management Protocol (IGMP) Time To Live (TTL) Dynamic Resource Allocation Protocol (DRAP) .

scalability. You learn the different specifications of each product. and so on. 1. which includes Nexus 9000. iSCSI. Nexus 3000. Which of the following Nexus 9000 hardwares can work as a spine in a Spine-Leaf topology? (Choose three.) . “Cisco Nexus 1000V and Virtual Switching. such as FCoE. read the entire chapter. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. “Answers to the ‘Do I Know This Already?’ Quizzes. including different types of data center protocols. and Nexus 2000. FC. it is covered in detail in Chapter 18. Note Nexus 1000V is not covered in this chapter. This chapter focuses on the Cisco Nexus product family. Nexus 6000. You can find the answers in Appendix A. Cisco Nexus Product Family This chapter covers the following exam topics: Describe the Cisco Nexus Product Family Describe FEX Products The Cisco Nexus product family offers a broad portfolio of LAN and SAN switching products spanning software switches that integrate with multiple types of hypervisors to hardware data center core switches and campus core switches. you should mark that question as wrong for purposes of the self-assessment. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. Table 2-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions.” “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. Cisco Nexus products provide the performance. Nexus 7000. and availability to accommodate diverse data center needs.Chapter 2. Nexus 5000. If you do not know the answer to a question or are only partially sure of the answer.” Table 2-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter.

True b. a.3 Tbps d. 8 b. 24 c. False 8. 220 Gbps c. 16+1 c. a. How many VDCs can a SUP2 have? a. Nexus 9396PX d. a. How many strands do the 40F BiDi optics use? a. 4+1 7. 2+1 b. 2. 20 b. True or false? By upgrading SUP1 to the latest software and using the correct license. True b. True or false? The Nexus N9K-X9736PQ can be used in standalone mode when the Nexus 9000 modular switches use NX-OS.6 Tbps 5. How many unified ports does the Nexus 6004 currently support? a. False 6. True b. 550 Gbps b. What is the current bandwidth per slot for the Nexus 7700? a. 64 . Nexus 9336PQ 2. 1. Nexus 9508 b. you can use FCoE with F2 cards. Nexus 9516 c. 8+1 d. Nexus 93128TX e. False 3. 16 4. 48 d. 12 d. 2 c. True or false? The F3 modules can be interchanged between the Nexus 7000 and Nexus 7700.a.

and Nexus 5548UP has 32 fixed FCoE only. which are Ethernet ports only. 16 c. False 10. Nexus 2232TM d. 12 b. Nexus 2232PP b. 11. Using the Cisco Nexus products. Which vendors support the B22 FEX? a. 8 d. c. and 100 G ports as well. Nexus 2248TP-E Foundation Topics Describe the Cisco Nexus Product Family The Cisco Nexus product family is a key component of Cisco unified data center architecture. True b. Which models from the following list of switches support FCoE ? (Choose two. The Nexus 5548P has 32 fixed ports. How many unified ports does the Nexus 5672 have? a. The objective of the Unified Fabric is to build highly available. you can build end-to-end data center designs based on three-tier architecture or based on Spine-Leaf architecture. True or false? The 40-Gbps ports on the Nexus 6000 can be used as 10 Gbps. b. Dell c. Nexus 2248PQ c. Fujitsu e. and the Nexus 5548UP has 32 fixed unified ports. The Nexus 5548P has 32 fixed ports. It offers high-density 10 G.) a. a. 40 G. which is the Unified Fabric. d. whereas the Nexus 5548UP has all ports unified.) a.9. which are Ethernet only. Which of the following statements are correct? (Choose two. All the above 13. HP b. 24 12. IBM c. Modern data center designs need the following properties: . The Nexus 5548P has 32 fixed ports and can have 16 unified ports. The Nexus 5548P has 48 fixed ports and Nexus 5548UP has 32 fixed ports and 16 unified ports. highly secure network fabrics.

especially virtual hosts. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual port channels (vPC). Figure 2-1 shows the different product types available at the time this chapter was written. These products are explained and discussed in the following sections. Rather than using. That will make it easy to build and tear down test and development environments. Using the concept of a service profile and booting from SAN in the Cisco Unified Computing system will reduce the time to instantiate new servers. which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. for example. which in turn reduces cooling requirements. with consistent network and security policies. Hybrid clouds extend your existing data center to public clouds as needed. Ways to address this problem include using Unified Fabric (converged SAN and LAN). configuration changes. 8 × 10 G links. Improved reliability during software updates. Computing resources must be optimized. you can use 2 × 40 G links and so on. In this chapter you will understand the Cisco Nexus product family. or adding components to the data center environment. Doing capacity planning for all the workloads and identifying candidates to be virtualized help reduce the number of compute nodes in the data center. Hosts. that should happen with minimum disruption. must move without the need to change the topology or require an address change. Reducing cabling creates efficient airflow. The concept of hybrid clouds can benefit your organization. Cisco is helping customers utilize this concept using Intercloud Fabric. . and using technologies such as VM-FEX and Adapter-FEX. Power and cooling are key problems in the data center today. or the design is limiting us to use Active/Standby NIC teaming.Effective use of available bandwidth in designs where multiple links exist between the source and destination and one path is active and the other is blocked by spanning tree. using Cisco virtual interface cards.

the design follows the standard three-tier architecture. they function as a classical Nexus switch. they provide an application-centric infrastructure. In this case. . and the 16-slot Nexus 9516.Figure 2-1 Cisco Nexus Product Family Cisco Nexus 9000 Family There are two types of switches in the Nexus 9000 Series: the Nexus 9500 modular switches and the Nexus 9300 fixed configuration switches. the design follows the Spine-Leaf architecture shown in Figure 2-2. Therefore. as shown in Figure 2-3: the 4-slot Nexus 9504. Figure 2-2 Nexus 9000 Spine-Leaf Architecture Cisco Nexus 9500 Family The Nexus 9500 family consists of three types of modular chassis. They can run in two modes. When they run in ACI mode and in combination with Cisco Application Policy Infrastructure Controller (APIC). the 8-slot Nexus 9508. When they run in NX-OS mode and use the enhanced NX-OS software.

Table 2-2 shows the comparison between the different models of the Nexus 9500 switches. supervisors. . line cards.Figure 2-3 Nexus 9500 Chassis Options The Cisco Nexus 9500 Series switches have a modular architecture that consists of the following: Switch chassis Supervisor engine System controllers Fabric modules Line cards Power supplies Fan trays Optics Among these parts. system controllers. and power supplies are common components that can be shared among the entire Nexus 9500 product family.

Because there is no midplane with a precise alignment mechanism. fabric cards and line cards align together. as shown in Figure 2-4.Table 2-2 Nexus 9500 Modular Platform Comparison Chassis The Nexus 9500 chassis doesn’t have a midplane. Midplanes tend to block airflow. . which results in reduced cooling efficiency.

and 16 GB RAM. Each supervisor module consists of a Romely 1.8 GHZ CPU. The supervisor modules manage all switch operations. upgradable to 48 GB RAM and 64 GB SSD storage. an RS-232 Serial Port (RJ-45). pulse per second (PPS). including two USB ports. The supervisor has an external clock source. There are multiple ports for management.Figure 2-4 Nexus 9500 Chassis Supervisor Engine The Nexus 9500 modular switch supports two redundant half-width supervisor engines. as shown in Figure 2-5. The supervisor engine is responsible for the control plane function. that is. and a 10/100/1000-MBps network port (RJ-45). 4 core. .

and fabric modules. They host two main control and management paths—the Ethernet Out-of-Band Channel (EOBC) and Ethernet Protocol Channel (EPC) —between supervisor engines. . and the EPC channel handles the intrasystem data plane protocol communication. The system controller is responsible for managing power supplies and fan trays. The EOBC provides the intrasystem management communication across modules. as shown in Figure 2-6. line cards. They offload chassis management functions from the supervisor modules.Figure 2-5 Nexus 9500 Supervisor Engine System Controller There is a pair of redundant system controllers at the back of the Nexus 9500 chassis.

and the T2 uses 24 40GE ports to guarantee the line rate. each fabric module consists of multiple NFEs. and the Nexus 9516 has four.Figure 2-6 Nexus 9500 System Controllers Fabric Modules The platform supports up to six fabric modules. the Nexus 9508 has two. The packet lookup and forwarding functions involve both the line cards and the fabric modules. as shown in Figure 2-7. both contain multiple network forwarding engines (NFE). . All fabric modules are active. The Nexus 9504 has one NFE per fabric module. The NFE is a Broadcom trident two ASIC (T2).

All line cards have multiple NFEs for packet lookup and forwarding. you will need six fabric modules to achieve line-rate speeds. There are line cards that can be used in application-centric infrastructure mode (ACI) only. the ASE performs ACI spine functions when the Nexus 9500 is used as a spine in the ACI mode. Figure 2-8 . In addition. so to install them you must remove the fan trays. When you use the 36-port 40GE line cards. in a classical design. ALE performs the ACI leaf function when the Nexus 9500 is used as a leaf node when deployed in the ACI mode. Line Cards It is important to understand that there are multiple types of Nexus 9500 line cards. The ACI-only line cards contain an additional ASIC called application spine engine (ASE). There are also line cards that can be used in both modes: standalone mode using NX-OS and ACI mode. Note The fabric modules are behind the fan trays. the ACI-ready leaf line cards contain an additional ASIC called Application Leaf Engine (ALE).Figure 2-7 Nexus 9500 Fabric Module When you use the 1/10G + 4 40GE line cards. There are cards that can be used in standalone mode when used with enhanced NX-OS. you need a minimum of three fabric modules to achieve line-rate speeds.

and offloading BFD protocol handling from the supervisors. collecting and sending line card counters. Figure 2-8 Nexus 9500 Line Cards Positioning Nexus 9500 line cards are also equipped with dual-core CPUs. This CPU is used to speed up some control functions.shows the high-level positioning of the different cards available for the Nexus 9500 Series network switches. Table 2-3 shows the different types of cards available for the Nexus 9500 Series switches and their specification. statistics. such as programming the hardware table resources. .

Table 2-3 Nexus 9500 Modular Platform Line Card Comparison .

Figure 2-9 Nexus 9500 AC Power Supply Note The additional four power supply slots are not needed with existing line cards shown in Table 2-3. Dynamic speed is driven by temperature sensors and front-to-back air flow with N+1 redundancy per tray. . The 3000W AC power supply shown in Figure 2-9 is 80 Plus platinum rated and provides more than 90% efficiency. and optics. each tray consists of three fans. Fan Trays The Nexus 9500 consists of three fan trays. however. Fan trays are installed after the fabric module installation.Power Supplies The Nexus 9500 platform supports up to 10 power supplies. they are accessible from the front and are hot swappable. as shown in Figure 2-10. they support N+1 and N+N (grid redundancy). they offer head room for future port densities. Two 3000W AC power supplies can operate a fully loaded chassis. bandwidth.

Bi-Di optics are standard based. . If one of the fan trays is removed. Cisco QSFP Bi-Di Technology for 40 Gbps Migration As data center designs evolve from 1 G to 10 G at the access layer. the fan tray must be removed first. when you migrate from 10 G to 40 G. 40 G operates on eight strands of fiber. however.11 shows the difference between the QSFP SR and the QSFP Bi-Di. Figure 2. you must replace the cabling. and second. the first is the cost barrier of the 40 G port itself. The 40 G adoption is slow today because of multiple barriers.Figure 2-10 Nexus 9500 Fan Tray Note To service the fabric modules. the other two fan trays will speed up to compensate for the loss of cooling. 10 G operates on what is referred to as two strands of fiber. access to aggregation and SpineLeaf design at the spine layer will move to 40 G. and they enable customers to take the current 10 G cabling plant and use it for 40 G connectivity without replacing the cabling.

Table 2-4 summarizes the different specifications of each chassis. There are currently four chassis-based models in the Nexus 9300 platform. The Nexus 9300 is designed for top-of-rack (ToR) and mid-of-row (MoR) deployments.Figure 2-11 Cisco Bi-Di Optics Cisco Nexus 9300 Family The previous section discussed the Nexus 9500 Series modular switches. Table 2-4 Nexus 9500 Fixed-Platform Comparison . This section discusses details of the Cisco Nexus 9300 fixed configuration switches.

8 out of the 12 × 40 Gbps QSFP+ ports will be available. The uplink module is the same for all switches. redundant power supplies. and redundant fabric cards for high availability. the Nexus 9396PX. If used with the Cisco Nexus 93128TX. 40 G and 100 G—in a high density. . Figure 2-12 Cisco Nexus 9300 Switches Cisco Nexus 7000 and Nexus 7700 Product Family The Nexus 7000 Series switches form the core data center networking fabric. and 93128TX are provided on an uplink module that can be serviced and replaced by the user. The modular design of the Nexus 7000 and Nexus 7700 enables them to offer different types of network interfaces—1 G. As shown in Table 2-4. scalable way with a switching capacity beyond 15 Tbps for the Nexus 7000 Series and 83 Tbps for the Nexus 7700 Series. and 93128TX can operate in NX-OS mode and in ACI mode (acting as a leaf node). The Nexus 9336PQ can operate in ACI mode only and act as as spine node. 9396TX. 10 G. There are multiple chassis options from the Nexus 7000 and Nexus 7700 product family. as shown in Table 2-5.The 40-Gbps ports for Cisco Nexus 9396PX. 9396TX. you can utilize NX-OS APIs to manage it. The Nexus 7000 and the Nexus 7700 switches offer a comprehensive set of features for the data center network. It is very reliable by supporting in-service software upgrades (ISSU) with zero packet loss. Figure 2-12 shows the different models available today from the Nexus 9300 switches. It is easy to manage using the command line or through data center network manager. The Nexus 7000 hardware architecture offers redundant supervisor engines.

Table 2-5 Nexus 7000 and Nexus 7700 Modular Platform Comparison The Nexus 7000 and Nexus 7700 product family is modular in design with great focus on the redundancy of all the critical components. the supervisor modules are not interchangeable between the Nexus 7000 and Nexus 7700 chassis. which provide seamless and stateful switchover in the event of a supervisor module failure. and system software aspects of the chassis. You can have multiple fabric modules. environmental. Switch fabric redundancy: The fabric modules support load sharing. Supervisor module redundancy: The chassis can have up to two supervisor modules operating in active and standby modes. State and configuration are in sync between the two supervisors. power. . this has been applied across the physical. With the current shipping fabric cards and current I/O modules the switches support N+1 redundancy. Note There are dedicated slots for the supervisor engines in all Nexus chassis. the Nexus 7000 supports up to five fabric modules and Nexus 7700 supports up to six fabric modules.

allowing it to be connected to separate isolated A/C supplies. fabric. Grid redundancy. Full redundancy. otherwise commonly called N+1 redundancy. Cisco NX-OS In-Service Software Upgrade (ISSU): With the NX-OS modular architecture you can support ISSU. which is the combination of PSU redundancy and grid redundancy. and Table 2-5 shows their specifications. Most of the services allow stateful restart. where the total power available is the sum of the outputs of all the power supplies installed. Modular software upgrades: The NX-OS software is designed with a modular architecture. Cooling subsystem: The system has redundant fan trays. Power subsystem availability features: The system will support the following power redundancy options: Combined mode. Each service running is an individual memory protected process. Cable management: The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch. There are multiple fans on the fan trays. fan. Nexus 7000 and Nexus 7700 have multiple models with different specifications.) PSU redundancy. and any failure of one of the fans will not result in loss of service. System-level LEDs: A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. . and I/O module status.Note The fabric modules between the Nexus 7000 and Nexus 7700 are not interchangeable. In most cases. The LEDs alert operators to the need to conduct further investigation. enabling a service experiencing a failure to be restarted and resume operation without affecting other services. supervisor. 50% of power is secure. which helps to address specific issues and minimize the system overall impact. Figure 2-13 shows these switching models. The transparent front door allows observation of cabling and module indicator and status lights. allowing maximum flexibility. where the total power available is the sum of all power supplies minus 1. In the event of an A/C supply failure. Each PSU has two supply inputs. which enables you to do a complete system upgrade without disrupting the data plane and achieve zero packet loss. including multiple instances of a particular service that provide effective fault isolation between services and that make each service individually monitored and managed. These LEDs report the power supply. PSU Redundancy is the default. where the total power available is the sum of the power from only one input on each PSU. Cable management: The cable management cover and optional front module doors provide protection from accidental interference with both the cabling and modules that are installed in the system. but not both at the same time. (This is not redundant. this will be the same as grid mode but will assure customers that they are protected for either a PSU or a grid failure.

(4400Gbps) × (2 for full duplex operation) = 83. So.Figure 2-13 Cisco Nexus 7000 and Nexus 7700 Product Family Note When you go through Table 2-5 you might wonder how the switching capacity was calculated. the local I/O module fabrics are connected back-to-back to form a two-stage crossbar that interconnects the I/O modules and the supervisor engines. Using this type of calculation. the calculation is [(2. and that can be achieved by upgrading the fabric cards for the chassis. so that makes a total of 5 crossbar channels. For the Nexus 7700 Series. . for example.6 Tbps. Therefore. the Nexus 7700 is capable of more than double its capacity. It is worth mentioning that the Nexus 7004 doesn’t have any fabric modules. the Nexus 7010 has 8 line cards slots. the Cisco Nexus 7010 bandwidth is being calculated like so: [(550 Gbps/slot) × (9 payload slots) = 4950 Gbps.3 Tbps. In some bandwidth calculations for the chassis. For example. Cisco Nexus 7004 Series Switch Chassis The Cisco Nexus 7004 switch chassis shown in Figure 2-14 has two supervisor module slots and two I/O modules.2 Tbps system bandwidth].9 Tbps system bandwidth]. the supervisor slots are taken into consideration because each supervisor slot has a single channel of connectivity to each fabric card. so the calculation is [(550 Gbps/slot) × (8 payload slots) = 4400 Gbps. It has one fan tray and four 3KW power supplies. we’ll use the Nexus 7718 as an example.7 Tbps. and the 18-slot chassis will have 18. the complete specification for the Nexus 7004 is shown in Table 2-5. (4950 Gbps) × (2 for full duplex operation) = 9900 Gbps = 9. if the bandwidth/slot is 2. (4400 Gbps) × (2 for full duplex operation) = 8800 Gbps = 8. a 9-slot chassis will have 8.8 Tbps system bandwidth].8 Tbps.6 Tbps.6Tbps/slot) × (16 payload slots) = 41. Although the bandwidth/slot shown in the table give us 1.

and the system will continue to function. The fan redundancy is two parts: individual fans in the fan tray and fan tray controllers. The fan controllers are fully redundant. there is redundancy within the cooling system due to the number of fans.Figure 2-14 Cisco Nexus 7004 Switch Cisco Nexus 7009 Series Switch Chassis The Cisco Nexus 7009 switch chassis shown in Figure 2-15 has two supervisor module slots and seven I/O module slots. The Nexus 7009 switch has a single fan tray. If an individual fan fails. there is a solution for hot aisle–cold aisle design with the 9-slot chassis. So. . reducing the probability of a total fan tray failure. giving you time to get a spare and replace the fan tray. The fans in the fan trays are individually wired to isolate any failure and are fully redundant so that other fans in the tray can take over when one or more fans fail. Figure 2-15 Cisco Nexus 7009 Switch Although the Nexus 7009 has side-to-side airflow. the complete specification for the Nexus 7009 is shown in Table 2-5. other fans automatically run at higher speeds.

For the 7018 system there are two system fan trays.Cisco Nexus 7010 Series Switch Chassis The Cisco Nexus 7010 switch chassis shown in Figure 2-16 has two supervisor engine slots and eight I/O modules slots. the system will keep running without overheating as long as the operating environment is within specifications. There are two fabric fan modules in the 7010. Each row cools three module slots (I/O and supervisor). Each fan tray contains 12 fans that are in three rows or four. The fans on the trays are N+1 redundant. and the fan tray will need replacing to restore N+1. the remaining fabric fan will continue to cool the fabric modules until the fan is replaced. one for the upper half and one for the lower half of the system. restoring the N+1 redundancy. the box does not overheat. the complete specification is shown in Table 2-5. If either of the fan trays fails. The . Cisco Nexus 7018 Series Switch Chassis The Cisco Nexus 7018 switch chassis shown in Figure 2-17 has two supervisor engine slots and 16 I/O module slots. Fan tray replacement restores N+1 redundancy. the complete specification is shown in Table 2-5. The system should not be operated with either the system fan tray or the fabric fan components removed apart from the hot swap period of up to 3 minutes. Both fan trays must be installed at all times (apart from maintenance). Any single fan failure will not result in degradation in service. If either fabric fan fails. until the fan tray is replaced. There are multiple fans on the 7010 system fan trays. and they will continue to cool the system. Figure 2-16 Cisco Nexus 7010 Switch There are two system fan trays in the 7010. both are required for normal operation. The failure of a single fan will result in the other fans increasing speed to compensate.

the fabric fans operate only when installed in the upper fan tray position. the remaining fan will cool the fabric modules. but when inserted in the lower position the fabric fans are inoperative. 96 40G ports. Cisco Nexus 7706 Series Switch Chassis The Cisco Nexus 7706 switch chassis shown in Figure 2-18 has two supervisor module slots and four I/O module slots. Integrated into the system fan tray are the fabric fans. There are 192 10-G ports. true front-to-back airflow.fan tray should be replaced to restore the N+1 fan resilience. the complete specification is shown in Table 2-5. redundant fans. The two fans are in series so that the air passes through both to leave the switch and cool the fabric modules. Figure 2-17 Cisco Nexus 7018 Switch Note Although both system fan trays are identical. and redundant fabric cards. Failure of a single fabric fan will not result in a failure. The fabric fans are at the rear of the system fan tray. Fan trays are fully interchangeable. . 48 100-G ports.

redundant fans. the complete specification is shown in Table 2-5. 192 40G ports. and redundant fabric cards. There are 768 10-G ports. true front-to-back airflow. 384 40-G ports.Figure 2-18 Cisco Nexus 7706 Switch Cisco Nexus 7710 Series Switch Chassis The Cisco Nexus 7710 switch chassis shown in Figure 2-19 has two supervisor engine slots and eight I/O module slots. . true front-to-back airflow. and redundant fabric cards. 96 100-G ports. the complete specification is shown in Table 2-5. redundant fans. 192 100-G ports. Figure 2-19 Cisco Nexus 7710 Switch Cisco Nexus 7718 Series Switch Chassis The Cisco Nexus 7718 switch chassis shown in Figure 2-20 has two supervisor engine slots and 16 I/O module slots. There are 384 1-G ports.

Redundancy is achieved by having both supervisor slots populated.Figure 2-20 Cisco Nexus 7718 Switch Cisco Nexus 7000 and Nexus 7700 Supervisor Module Nexus 7000 and Nexus 7700 Series switches have two slots that are available for supervisor modules. Table 2-6 describes different options and specifications of the supervisor modules. .

As shown in Table 2-6. the operating system runs on a dedicated dualcore Xeon processor. The USB ports allow access to USB flash memory devices to software image loading and recovery. There is dual redundant Ethernet out-of-band channels (EOBC) to each I/O and fabric modules to provide resiliency for the communication between control and line card processors. Figure 2-21 Cisco Nexus 7000 Supervisor Module 1 The Connectivity Management Processor (CMP) provides an independent remote system management and monitoring capability.Table 2-6 Nexus 7000 and Nexus 7700 Supervisor Modules Comparison Cisco Nexus 7000 Series Supervisor 1 Module The Cisco Nexus 7000 supervisor module 1 shown in Figure 2-21 is the first-generation supervisor module for the Nexus 7000. dual supervisor engines run in active-standby mode with stateful switch over (SSO) and configuration synchronization between both supervisors. An embedded packet analyzer reduces the need for a dedicated packet analyzer to provide faster resolution for control plane problems. It removes the need for separate terminal server devices for OOB .

which has single-core CPU and 8 G of memory. it has a quad-core CPU and 12 G of memory compared to supervisor module 1. Supervisor module 2E is the enhanced version of supervisor module 2 with two quad-core CPUs and 32 G of memory. Figure 2-22 Cisco Nexus 7000 Supervisor 2 Module Supervisor module 2 and supervisor module 2E have more powerful CPUs. Note .management. This is useful in detecting hardware faults. and nextgeneration ASICs that together will result in improved performance. when choosing the proper line card. If a fault is detected. corrective action can be taken to mitigate the fault and reduce the risk of a network outage. The Power-on Self Test (POST) and Cisco Generic Online Diagnostics (GOLD) provide proactive health monitoring both at startup and during system operation. and it also allows access to supervisor logs and full console control on the supervisor engine. Both supervisor module 2 and supervisor module 2E support FCoE. they support CPU Shares. Administrators must authenticate to get access to the system through CMP. and a higher control plane scale. As shown in Table 2-6. such as higher VDC and FEX. faster boot and switchover times. It has the capability to initiate a complete system restart and shutdown. which will enable you to carve out CPU for higher priority VDCs. such as enhanced user experience. Sup2 scale is the same as Sup1. it will support 4+1 VDCs. The Cisco Nexus 7000 supervisor module 1 incorporates highly advanced analysis and debugging capabilities. and it offers complete visibility during the entire boot process. Cisco Nexus 7000 Series Supervisor 2 Module The Cisco Nexus 7000 supervisor module 2 shown in Figure 2-22 is the next-generation supervisor module. larger memory. Sup2E supports 8+1 VDCs.

Figure 2-23 shows the different fabric modules for the Nexus 7000 and Nexus 7700 products. In Nexus 7700. which is 110 Gbps. When using Fabric Module 2. Sup2 and Sup2E can be mixed for migration only. you can deliver a maximum of 230 Gbps per slot using five fabric modules. Figure 2-24 . The Nexus 7000 has five fabric modules.32 Tbps per slot. adding fabric modules increases the available bandwidth per I/O slot because all fabric modules are connected to all slots. All fabric modules support load sharing.You cannot mix Sup1 and Sup2 in the same chassis. In case of a failure or removal of one of the fabric modules. Figure 2-23 Cisco Nexus 7000 Fabric Module In the case of Nexus 7000. The Nexus 7000 implements a three-stage crossbar switch. you can deliver a maximum of 550 Gbps per slot. and the Nexus 7700 has six. when using Fabric Module 1. you can deliver a maximum of 1. This will be a nondisruptive migration. Nexus 7000 supports virtual output queuing (VOQ) and credit-based arbitration to the crossbar to increase performance. which is 220 Gbps per slot. and the architecture supports lossless fabric failover. which is 46 Gbps. by using Fabric Module 2. the remaining fabric modules will load balance the remaining bandwidth to all the remaining line cards. VOQ and credit-based arbitration allow fair sharing of resources when a speed mismatch exists to avoid head-of-line (HOL) blocking. Note that this will be a disruptive migration requiring removal of supervisor 1. and stage 2 is implemented on the fabric module. Cisco Nexus 7000 and Nexus 7700 Fabric Modules The Nexus 7000 and Nexus 7700 fabric modules provide interconnection between line cards and provide fabric channels to the supervisor modules. Fabric stage 1 and fabric stage 3 are implemented on the line card module.

Cisco Nexus 7000 and Nexus 7700 Licensing Different types of licenses are required for the Nexus 7000 and the Nexus 7700. Each supervisor module has a single 23 Gbps trace to each fabric module. providing 230 Gbps of switching capacity per I/O slot for a fully loaded chassis. Note Cisco Nexus 7000 fabric 1 modules provide two 23 Gbps traces to each fabric module. These connections are also 55 Gbps. the total number of connections from the fabric cards to each line card is 24. When populating the chassis with six fabric modules.shows how these stages are connected to each other. It provides an aggregate bandwidth of 1. There are four connections from each fabric module to the line cards. there are 12 connections from the switch fabric to the supervisor module. Figure 2-24 Cisco Nexus 7700 Crossbar Fabric There are two connections from each fabric module to the supervisor module. providing an aggregate bandwidth of 275 Gbps. When all the fabric modules are installed.32 Tbps per slot. Table 2-7 describes each license and the features it enables. and each one of these connections is 55 Gbps. .

.

Enabling a licensed feature that does not have a license key starts a counter on the grace period. The host ID is also referred to as the device serial number. disable the use of that feature. To get the license file you must obtain the serial number for your device by entering the show license host-id command. which is the amount of time the features in a license package can continue functioning without a license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). or disable the grace period feature. Example 2-1 NX-OS Command to Obtain the Host ID . Note To manage the Nexus 7000 two types of licenses are needed: the DCNM LAN and DCNM SAN. If at the end of the 120-day grace period the device does not have a valid license key for the feature. each of which is a separate license. the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. There is also an evaluation license. which is a temporary license.Table 2-7 Nexus 7000 and Nexus 7700 Software Licensing Features It is worth mentioning that Nexus switches have a grace period. as shown in Example 2-1. You then have 120 days to install the appropriate license keys.

In this example. Perform the installation by using the install license command on the active supervisor module from the device console. the host ID is FOX064317SQ.Click here to view code image switch# show license host-id License hostid: VDH=FOX064317SQ Tip Use the entire ID that appears after the equal sign (=). or the usb2: device.done You can check what licenses are already installed by issuing the command shown in Example 2-3. After executing the copy licenses command from the default VDC. save your license file to one of four locations—the bootflash: directory. the usb1: device. as shown in Example 2-2. Each has different performance metrics and features.. Table 2-8 shows the comparison between M-Series modules. the slot0: device. . Example 2-2 Command Used to Install the License File Click here to view code image switch# install license bootflash:license_file. Example 2-3 Command Used to Obtain Installed Licenses Click here to view code image switch# show license usage Feature Ins Lic Status Expiry Date Comments Count -------------------------------------------------------------------------------LAN_ENTERPRISE_SERVICES_PKG Yes In use Never -------------------------------------------------------------------------------- Cisco Nexus 7000 and Nexus 7700 Line Cards Nexus 7000 and Nexus 7700 support various types of I/O modules. There are two types of I/O modules: the M-I/O modules and the F-I/O modules.lic Installing license .

Table 2-8 Nexus 7000 and Nexus 7700 M-Series Modules Comparison Note From the table. you see that the M1 32 port I/O module is the only card that supports .

LISP. Table 2-9 shows the comparison between F-Series modules. .

no downtime in replacing power supplies.0-KW AC Power Supply Module The 3. . Power Redundancy: Multiple system-level options for maximum data center availability. connecting to low line nominal voltage (110 VAC) will produce a power output of 1400W. lower fan speeds use lower power. Fully Hot Swappable: Continuous system operations. reducing power wasted as heat and reducing associated data center cooling requirements. When connecting to high line nominal voltage (220 VAC) it will produce a power output of 3000W. Variable-speed fans adjust dynamically to lower power consumption and optimize system cooling for true load. for the right sizing of power supplies. and modules enabling accurate power consumption monitoring. and environmental cooling. Internal fault monitoring: Detects component defect and shuts down unit. They offer visibility into the actual power consumption of the total system. UPSs. Cisco Nexus 7000 and Nexus 7700 Series 3. It is a single 20 Ampere (A) AC input power supply. Variable Fan Speed: Automatically adjusts to changing thermal characteristics.Table 2-9 Nexus 7000 and Nexus 7700 F-Series Modules Comparison Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options The Nexus 7000 and Nexus 7700 use power supplies with +90% power supply efficiency. Temperature measurement: Prevents damage due to overheating (every ASIC on the board has a temperature sensor).0-KW AC power supply shown in Figure 2-25 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. The switches offer different types of redundancy modes. Real-time power draw: Shows real-time power consumption.

the system will log an error complaining about the wrong power supply in the system. different PIDs are used on each platform. it is not officially supported by Cisco.0-KW DC power supply shown in Figure 2-26 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis.0-KW DC Power Supply Module The 3. each delivering up to 1500W of output power.Figure 2-25 Cisco Nexus 7000 3. Therefore. The Nexus 3. Each stage uses a -48V DC connection.0-kW DC power supply has two isolated input stages. although technically this might work. The unit will deliver 1551W when only one input is active and 3051W when two inputs are active. Cisco Nexus 7000 and Nexus 7700 Series 3. if you interchange the power supplies. .0-KW AC Power Supply Note Although the Nexus 7700 chassis and the Nexus 7004 use a common power supply architecture.

Figure 2-26 Cisco Nexus 7000 3.0-KW and 7.0-KW and 7.5-KW power supplies shown in Figure 2-27 are common across Nexus 7009.0-KW DC Power Supply Cisco Nexus 7000 Series 6. with battery backup capability.5-KW AC Power Supply Module The 6. They allow mixed-mode AC and DC operation enabling migration without disruption and providing support for dual environments with unreliable AC power. 7010. . and 7018.

Table 2-10 Nexus 7000 and Nexus 7700 6.5-KW Power Supply Specifications Cisco Nexus 7000 Series 6.Figure 2-27 Cisco Nexus 7000 6.0-KW DC Power Supply Module The 6-KW DC power supply shown in Figure 2-28 is common to the 7009. each delivering up to 1500W of power (6000W total on full load) with peak efficiency of 91% (high for a DC power supply). 7010.5-KW Power Supply Table 2-10 shows the specifications of both power supplies with different numbers of inputs and input types. It supports the same operational characteristics as the AC units: redundancy modes (N+1 and N+N). The power supply can be used in combination with AC units or as an all DC setup. The 6-KW has four isolated input stages. .0-KW and 7. and 7018 systems.0-KW and 7.

it is always better to choose the mode of power supply operation based on the requirements and needs. where the total power available is the sum of the power from only one input on each PSU.Real-time power—actual power levels Single input mode (3000W) Online insertion and removal Integrated Lock and On/Off switch (for easy removal) Figure 2-28 Cisco Nexus 7000 6. Full redundancy. which is the combination of PSU redundancy and grid redundancy. Each PSU has two supply inputs. An example of each mode is shown in Figure 2-29. You can lose one power supply or one grid. where the total power available is the sum of all power supplies – 1. . (This is not redundant. 50% of power is secure. Grid redundancy. otherwise commonly called N+1 redundancy.0-KW DC Power Supply Multiple power redundancy modes can be configured by the user: Combined mode. However. so it is recommended.) PSU redundancy. Full redundancy provides the highest level of redundancy. allowing them to be connected to separate isolated A/C supplies. In the event of an A/C supply failure. where the total power available is the sum of the outputs of all the power supplies installed. in most cases this will be the same as grid redundancy.

com/go/powercalculator. and Nexus 6004X. low-latency 1G/10G/40G/FCoE Ethernet port. Multiple models are available: Nexus 6001T. high-density. Cisco has made a power calculator that can be used as a starting point. Cisco Nexus 6000 Product Family The Cisco Nexus 6000 product family is a high-performance.0-KW Power Redundancy Modes To help with planning for the power requirements. It is worth mentioning that the power calculator cannot be taken as a final power recommendation. The power calculator can be located at http://www.Figure 2-29 Nexus 6. Nexus 6001P. Nexus 6004EF. . Table 2-11 describes the difference between the Nexus 6000 models. When using the unified module on the Nexus 6004 you can get native FC as well.cisco.

the Nexus 6004X was renamed Nexus 5696Q. Port-to-port latency is around 1 microsecond. It provides integrated Layer 2 and Layer 3 at wire speed. It offers two choices of airflow. It offers two choices of air flow. It provides 48 1G/10G SFP+ ports and four 40-G Ethernet QSFP+ ports. The Nexus 6001T is a fixed configuration 1RU Ethernet switch. .Table 2-11 Nexus 6000 Product Specification Note At the time this chapter was written. Each 40 GE port can be split into 4 × 10GE ports. this can be done by using a QSFP+ break-out cable (Twinax or Fiber). The split can be done by using a QSFP+ break-out cable (Twinax or Fiber). Both switches are shown in Figure 2-30. Cisco Nexus 6001P and Nexus 6001T Switches and Features The Nexus 6001P is a fixed configuration 1RU Ethernet switch. or 7 cables are used. It provides integrated Layer 2 and Layer 3 at wire speed.3 microseconds. port-to-port latency is around 3. 6a. front-to-back (port side exhaust) and back-to-front (port side intake). front-to-back (port side exhaust) and back-to-front (port side intake). Nexus 6001T supports FCOE on the RJ-45 ports for up to 30M when CAT 6. It provides 48 1G/10G BASE-T ports and four 40 G Ethernet QSFP+ ports. Each 40 GE port can be split into 4 × 10 GE ports.

It offers up to 96 40 G QSFP ports and up to 384 10 G SFP+ ports. There are two choices of LEMs: 12-port 10 G/40 G QSFP and 20-port unified ports offering either 1 G/10 G SFP+ or 2/4/8 G FC. Port-to-port latency is approximately 1 microsecond for any packet size. Currently. .Figure 2-30 Cisco Nexus 6001 Switches Cisco Nexus 6004 Switches Features The Nexus 6004 is shown in Figure 2-31. The main difference is that the Nexus 6004X supports Virtual Extensible LAN (VXLAN). The Nexus 6004 is 4RU 10 G/40 G Ethernet switch. The Nexus 6004 offers integrated Layer 2 and Layer 3 at wire rate. it offers eight line card expansion module (LEM) slots. the Nexus 6004EF and the Nexus 6004X. there are two models.

This converts each QSFP+ port to one SFP+ port. enabling a licensed feature that does not have a license key to start a counter on the grace period. You then have 120 days to install the appropriate license keys. Cisco Nexus 6000 Switches Licensing Options Different types of licenses are required for the Nexus 6000. or disable the grace . disable the use of that feature.Figure 2-31 Nexus 6004 Switch Note The Nexus 6004 can support 1 G ports using a QSA converter (QSFP to SFP adapter) on a QSFP interface. The SFP+ can support both 10 G and 1 GE transceivers. Table 2-12 describes each license and the features it enables. It is worth mentioning that Nexus switches have a grace period. which is the amount of time the features in a license package can continue functioning without a license.

If at the end of the 120-day grace period the device does not have a valid license key for the feature. the Cisco NX-OS software automatically disables the feature and removes the configuration from the device.period feature. .

Table 2-13 shows the comparison between different models. which is a temporary license. Cisco Nexus 5500 and Nexus 5600 Product Family The Cisco Nexus 5000 product family is a Layer 2 and Layer 3 1G/10G Ethernet with unified ports. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). it includes Cisco Nexus 5500 and Cisco Nexus 5600 platforms. Note To manage the Nexus 6000 two types of licenses are needed: the DCNM LAN and DCNM SAN. Each of them is a separate license.Table 2-12 Nexus 6000 Licensing Options There is also an evaluation license. .

.

Refer to www. The switch has three expansion modules. the switch supports unified ports on all SFP+ ports.com for more information. so they are not covered in this book. native Fibre Channel and FCoE switch. The Nexus 5548UP has the 32 ports unified ports. the 10 G BASE-T ports support FCoE up to 30m with Category 6a and Category 7 cables. Figure 2-32 Nexus 5548P and Nexus 5548UP Switches Cisco Nexus 5596UP and 5596T Switches Features The Nexus 5596T shown in Figure 2-33 is a 2RU 1/10 Gbps Ethernet.cisco. Figure 2-32 shows the layout for them. . Cisco Nexus 5548P and 5548UP Switches Features The Nexus 5548P and the Nexus 5548UP are 1/10-Gbps switches with one expansion module. The Nexus 5548P has all the 32 ports 1/10 Gbps Ethernet only. It has 32 fixed ports of 10 G BASE-T and 16 fixed ports of SFP+.Table 2-13 Nexus 5500 and Nexus 5600 Product Specification Note The Nexus 5010 and 5020 switches are considered end-of-sale. meaning that the ports can run 1/10 Gbps or they can run 8/4/2/1 Gbps native FC or mix between both.

and the Nexus 5596UP/5596T has three. and 8 ports 8/4/2/1-Gbps Native Fibre Channel ports using . The Cisco N55-M16P module shown in Figure 2-34 has 16 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces. The Nexus 5548P/5548UP has one expansion module. It has 8 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces. Figure 2-34 Cisco N55-M16P Expansion Module The Cisco N55-M8P8FP module shown in Figure 2-35 is a 16-port module.Figure 2-33 Nexus 5596UP and Nexus 5596T Switches Cisco Nexus 5500 Products Expansion Modules You can have additional Ethernet and FCoE ports or native Fibre Channel ports with the Nexus 5500 products by adding expansion modules.

It has 16 ports 1/10-Gbps Ethernet and FCoE using SFP+ interfaces or up to 16 ports 8/4/2/1-Gbps Native Fibre Channel ports using SFP+ and SFP interfaces. Figure 2-36 Cisco N55-M16UP Expansion Module The Cisco N55-M4Q shown in Figure 2-37 is a 4-port 40-Gbps Ethernet module. . Each QSFP 40-GE port can only work in 4×10G mode and supports DCB and FCoE. Figure 2-35 Cisco N55-M8P8FP Expansion Module The Cisco N55-M16UP shown in Figure 2-36 is a 16 unified ports module.SFP+ and SFP interfaces.

which can be ordered with the system. Figure 2-38 Cisco N55-M12T Expansion Module The Cisco 5500 Layer 3 daughter card shown in Figure 2-39 is used to enable Layer 3 on the Nexus 5548P and 5548UP. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]).Figure 2-37 Cisco N55-M4Q Expansion Module The Cisco N55-M12T shown in Figure 2-38 is a 12-port 10-Gbps BASE-T module. or it is field upgradable as a spare. which is shared among all 48 ports. This module is supported only in the Nexus 5596T. . it supports FCoE up to 30m on category 6a and category 7 cables.

There is no need to remove the switch from the rack. Figure 2-40 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card Upgrade Procedure To enable Layer 3 on the Nexus 5596P and 5596UP. you must replace the Layer 2 I/O module. you can have only one Layer 3 expansion module per Nexus 5596P and 5596UP. you must have a Layer 3 expansion module. which can be ordered with the system or as a spare. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]). Figure 241 shows the Layer 3 expansion module. and follow the steps as shown in Figure 2-40. currently. . power off the switch. which is shared among all 48 ports.Figure 2-39 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card To install the Layer 3 module.

Table 2-14 shows the summary of the features. 1-microsecond port-toport latency with all frame sizes. the maximum FEXs per Cisco Nexus 5500 Series switches is 24 with Layer 2 only. Both models bring integrated L2 and L3. and 25 MB buffer per port ASIC. Enabling Layer 3 makes the supported number per Nexus 5500 to be 16. Verify the scalability limits based on the NX-OS you will be using before creating a design. Nexus 5672UP and Nexus 56128P. Cisco Nexus 5600 Product Family The Nexus 5600 is the new generation of the Nexus 5000 switches. true 40-Gbps flow. 40-Gbps FCoE. . For example.Figure 2-41 Cisco Nexus 5596P and 5596UP Layer 3 Daughter Card Upgrade Procedure Enabling Layer 3 affects the scalability limits for the Nexus 5500. cut-through switching for 10/40 Gbps. The Nexus 5600 has two models.

. The switch has two redundant power supplies and three fan modules. meaning that the ports can run 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. The switch supports both port-side exhaust and port-side intake. of which 16 ports are unified. True 40 Gbps ports use QSFP+ for Ethernet/FCOE.Table 2-14 Nexus 5600 Product Switches Feature Cisco Nexus 5672UP Switch Features The Nexus 5672UP shown in Figure 2-42 has 48 fixed 1/10-Gbps SFP+ ports.

The Cisco Nexus 56128P has two expansion modules that support 24 unified ports. . The 48 fixed SFP+ ports and four 40 Gbps QSFP+ ports support FCOE as well. It has 48 fixed 1/10-Gbps Ethernet SFP+ ports and four 40-Gbps QSFP+ ports.Figure 2-42 Cisco Nexus 5672UP Cisco Nexus 56128P Switch Features The Cisco Nexus 56128P shown in Figure 2-43 is a 2RU switch.

the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. hot-swappable power supplies. Table 2-15 describes each license and the features it enables. disable the use of that feature. which is the amount of time the features in a license package can continue functioning without a license. The Nexus 56128P currently supports one expansion module. the N56M24UP2Q expansion module. four N+1 redundant. plus two 40 Gbps ports. Figure 2-44 Cisco Nexus 56128P Unified Port Expansion Module Cisco Nexus 5500 and Nexus 5600 Licensing Options Different types of licenses are required for the Nexus 5500 and Nexus 5600. Nexus switches have a grace period. and a management and console interface on the fan side of the switch.Figure 2-43 Cisco Nexus 56128P The 24 unified ports provide 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number). or disable the grace period feature. Enabling a licensed feature that does not have a license key starts a counter on the grace period. There is also an evaluation license. That module provides 24 ports 10 G Ethernet/FCoE or 2/4/8 G Fibre Channel and 2 ports 40 Gigabit QSFP+ Ethernet/FCoE ports. If at the end of the 120-day grace period the device does not have a valid license key for the feature. hot-swappable independent fans. as shown in Figure 2-44. It has four N+N redundant. Cisco Nexus 5600 Expansion Modules Expansion modules enable the Cisco Nexus 5600 switches to support unified ports with native Fibre Channel connectivity. . You then have 120 days to install the appropriate license keys. which is a temporary license.

.

and ultra low latency. they offer high performance and high density at ultra low latency. Nexus 3100 Series. Each is a separate license. and Nexus 3500 switches. they are shown in Figure 2-45. The Nexus 3000 switches are 1 RU server access switches. These are compact 1RU 1/10/40-Gbps Ethernet switches. The Nexus 3000 Series consists of five models: Nexus 3064X. Nexus 3064T. Cisco Nexus 3000 Product Family The Nexus 3000 switches are part of the Cisco Unified Fabric architecture. This product family consists of multiple switch models: the Nexus 3000 Series.Table 2-15 Nexus 5500 Product Licensing Note To manage the Nexus 5500 and Nexus 5600. All of them offer wire rate Layer 2 and Layer 3 on all ports. Nexus 3064-32T. . Table 2-16 gives a comparison and specifications of these different models. two types of licenses are needed: the DCNM LAN and DCNM SAN. Nexus 3016Q. and 3048.

Figure 2-45 Cisco Nexus 3000 Switches .

.

Shown in Figure 2-46. all of them offer wire rate Layer 2 and Layer 3 on all ports and ultra low latency. Table 2-17 gives a comparison and specifications of different models.Table 2-16 Nexus 3000 Product Model Comparison The Nexus 3100 switches have four models: Nexus 3132Q. and Nexus 3172TQ. Figure 2-46 Cisco Nexus 3100 Switches . Nexus 3164Q. Nexus 3172Q. These are compact 1RU 1/10/40-Gbps Ethernet switches.

Table 2-18 gives a comparison and specifications of the different models. Both offer wire rate Layer 2 and Layer 3 on all ports and ultra low latency. They are shown in Figure 2-47.Table 2-17 Nexus 3100 Product Model Comparison The Nexus 3500 switches have two models. Nexus 3524 and Nexus 3548. These are compact 1RU 1/10-Gbps Ethernet switches. .

Table 2-19 shows the license options. Algo boost provides ultra-low latency—as low as 190 nanoseconds (ns). They are very well suited for highperformance trading and high-performance computing environments. Cisco Nexus 3000 Licensing Options There are different types of licenses for the Nexus 3000. It is worth mentioning that the Nexus 3000 doesn’t support the grace period feature. which uses a hardware technology called Algo boost.Figure 2-47 Cisco Nexus 3500 Switches Table 2-18 Nexus 3500 Product Model Comparison Note The Nexus 3524 and the Nexus 3548 support a forwarding mode called warp mode. .

.

It also enables highly scalable servers access design without the dependency on spanning tree. but from an operation point of view this design looks like an EoR design. which is placed at the top of the rack. reducing cabling between racks.Table 2-19 Nexus 3000 Licensing Options Cisco Nexus 2000 Fabric Extenders Product Family The Cisco Nexus 2000 Fabric Extenders behave as remote line cards. or Nexus 9000 Series switch that is installed in the EoR position as the parent switch. . Multiple connectivity options can be used to connect the FEX to the parent switch. Only the cables between the Nexus 2000 and the parent switches will run between the racks. This type of architecture provides the flexibility and benefit of both architectures: top-of-rack (ToR) and end-of-row (EoR). Nexus 6000. This is a ToR design from a cabling point of view. They appear as an extension to the parent switch to which they connect. as shown in Figure 2-49. The uplink ports on the Cisco Nexus 2000 Series Fabric Extenders can be connected to a Cisco Nexus 5000. which is explained shortly. Figure 2-48 Cisco Nexus 2000 Top-of-Rack and End-of-Row Design As shown in Figure 2-48. which can be 10 Gbps or 40 Gbps. Nexus 6000. So no configuration or software maintenance tasks need to be done with the FEXs. The cabling between the servers and the Cisco Nexus 2000 Series Fabric Extenders is contained within the rack. and Nexus 9000 Series switches. Using Nexus 2000 gives you great flexibility when it comes to the type of connectivity and physical topology. The parent switch can be Nexus 5000. Nexus 7000. Rack-01 is using dual redundant Cisco Nexus 2000 Series Fabric Extender. Figure 2-48 shows both types. All Nexus 2000 Fabric Extenders connected to the same parent switch are managed from a single point. because all these Nexus 2000 Fabric Extenders will be managed from the parent switch. Cisco Nexus 7000. ToR and EoR.

Figure 2-49 Cisco Nexus 2000 Connectivity Options

Straight-through, using static pinning: To achieve a deterministic relationship between the host port on the FEX to which the server connects and the fabric interfaces that connect to the parent switch, the host port is statically pinned to one of the uplinks between the FEX and the parent switch. The server port will always use the same uplink port. This method is used when the FEX is connected straight through to the parent switch. You must use the pinning max-links command to create pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. The host interfaces are divided by the number of the max-links and distributed accordingly. The default value is max-links equals 1.

Straight-through, using dynamic pinning (port channel): You can use this method to load balance between the downlink ports connected to the servers and the fabric ports connected to the parent switch. This method bundles multiple uplink interfaces into one logical port channel. The traffic is distributed using a hashing algorithm: for Layer 2 it uses source MAC and destination MAC; for Layer 3 it uses source MAC, destination MAC, and source IP/destination IP.

Active-active FEX using vPC: In this scenario, the FEX is dual homed using vPC to different parent switches.

Note
The Cisco Nexus 7000 Series switches currently support only straight-through deployment using dynamic pinning. Static pinning and active-active FEX are currently supported only on the Cisco Nexus 5000 Series switches.

Note
The Cisco Nexus 2248PQ Fabric Extender does not support the static pinning fabric interface connection.
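To make the first two attachment modes concrete, here is a minimal sketch of attaching a FEX to a Nexus 5500 parent switch, which supports both modes (the FEX number, interface ranges, and hostname are illustrative, not taken from the book):

N5K(config)# feature fex
N5K(config)# fex 100
N5K(config-fex)# pinning max-links 2
N5K(config)# interface ethernet 1/17-18
N5K(config-if-range)# switchport mode fex-fabric
N5K(config-if-range)# fex associate 100

For dynamic pinning, the same uplinks would instead be bundled into a port channel before the FEX is associated:

N5K(config)# interface ethernet 1/17-18
N5K(config-if-range)# channel-group 100
N5K(config)# interface port-channel 100
N5K(config-if)# switchport mode fex-fabric
N5K(config-if)# fex associate 100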

Table 2-20 shows the different models of the 1/10-Gbps fabric extenders.

Table 2-20 Nexus 2000 1/10-Gbps Model Comparison

Table 2-21 shows the different models of the 100 Mbps and 1 Gbps fabric extenders.

Table 2-21 Nexus 2000 100 Mbps/1 Gbps Model Comparison

As shown in Figure 2-50, the Cisco Nexus B22 Fabric Extender is part of the fabric extenders family. It extends the FEX to third-party blade centers from HP, Dell, Fujitsu, and IBM. There is a separate hardware model for each vendor. Similar to the Nexus 2000 fabric extenders, the B22 simplifies the operational model by making the blade switch management part of the parent switch, making it appear as a remote line card to the parent switch.

Figure 2-50 Cisco Nexus B22 Fabric Extender

The B22 topology shown in Figure 2-51 creates a highly scalable server access design with no spanning tree running. It is similar to the Nexus 2000 Series management architecture; this architecture gives the benefit of centralized management through the Nexus parent switch. Similar to the Nexus 2000 Series, the B22 consists of two types of ports: host ports for blade server attachments, and uplink ports, which are called fabric ports. The uplink ports are visible and are used for the connectivity to the upstream parent switch, as shown in Figure 2-51.

Figure 2-51 Cisco Nexus B22 Access Topology

The number of host ports and fabric ports for each model is shown in Table 2-22.

Table 2-22 Nexus B22 Specifications

Cisco Dynamic Fabric Automation

Besides using the Nexus 7000, Nexus 6000, and Nexus 5000 to build standard three-tier data center architectures, you can build a spine-leaf architecture called Dynamic Fabric Automation (DFA), as shown in Figure 2-52. The spines are at the core of the network, and nothing connects to the spines except the leaf switches. The leaf nodes are at the edge, and this is where the host is connected. The DFA architecture includes fabric management using Central Point of Management (CPoM) and the automation of configuration of the fabric components. Using open application programming interfaces (APIs), you can integrate automation and orchestration tools to provide control for provisioning of the fabric components, as well as the L4 to L7 services.

Figure 2-52 Cisco Nexus B22 Topology

Cisco DFA enables creation of tenant-specific virtual fabrics and allows these virtual fabrics to be extended anywhere within the physical data center fabric. It uses a 24-bit (16 million) segment identifier to support a large-scale virtual fabric that can scale beyond the traditional 4000 VLANs.

Summary

In this chapter, you learned about the different Cisco Nexus family of switches. The Nexus product family is designed to meet the stringent requirements of the next-generation data center. Following are some points to remember:

With the Nexus product family you can have an infrastructure that can be scaled cost effectively, and that helps you increase energy, budget, and resource efficiency. The Nexus family provides operational continuity to meet your needs for an environment where system availability is assumed and maintenance windows are rare, if not totally extinct. The Nexus product family offers the different types of connectivity options that are needed in the data center, spanning from 100 Mbps per port and going up to 100 Gbps per port.

The Nexus modular switches are the Nexus 7000, Nexus 7700, and the Nexus 9000 switches. Nexus modular switches go from 1 Gbps ports up to 100 Gbps ports; if you want 100 Mbps ports you must use Nexus 2000 fabric extenders. The Nexus 7000 can scale up to 550 Gbps per slot, the Nexus 7700 can scale up to 1.32 Tbps per slot, and the Nexus 9000 can scale up to 3.84 Tbps per slot. Multiple supervisor modules are available for the Nexus 7000 and the Nexus 7700 switches: supervisor module 1, supervisor module 2, and supervisor module 2E.

The Nexus 5600 Series is the latest addition to the Nexus 5000 family of switches. It supports unified ports and true 40-Gbps ports with 40-Gbps flows. One of the key features it supports is VXLAN.

The Nexus 6004X is the latest addition to the Nexus 6000 family, and it is the only switch in the Nexus 6000 family that supports VXLAN.

Nexus 3000 is part of the unified fabric architecture. It is an ultra-low latency switch that is well suited to the high-performance trading and high-performance computing environments.

Nexus 2000 fabric extenders must have a parent switch connected to them. The Nexus 2000 can be connected to Nexus 7000, Nexus 9000, Nexus 6000, and Nexus 5000 switches; they act as the parent switch for the Nexus 2000.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 2-23 lists a reference for these key topics and the page numbers on which each is found.

Table 2-23 Key Topics for Chapter 2

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Application Centric Infrastructure (ACI)
port channel (PC)
virtual port channel (vPC)
Fabric Extender (FEX)
Top of Rack (ToR)
End of Row (EoR)
Dynamic Fabric Automation (DFA)
Virtual Extensible LAN (VXLAN)

Chapter 3. Virtualizing Cisco Network Devices

This chapter covers the following exam topic:

Describe Device Virtualization

There are two types of device virtualization when it comes to Nexus devices. The first is how to partition one physical switch and make it appear as multiple switches, each with its own administrative domain and security policy. That approach has its own benefits and business requirements. The other option is how to make multiple switches appear as if they are one big modular switch. That approach, too, has its own benefits and business requirements. This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000 switches using Virtual Device Context (VDC) and Network Interface Virtualization (NIV).

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 3-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 3-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. How many VDCs are supported on Nexus 7000 SUP 1 and SUP 2E (do not count admin VDC)?
a. SUP1 supports 4; SUP 2E supports 4
b. SUP1 supports 4; SUP 2E supports 6
c. SUP1 supports 2; SUP 2E supports 8
d. SUP1 supports 4; SUP 2E supports 8

2. Choose the different types of VDCs that can be created on the Nexus 7000. (Choose three.)
a. Default VDC
b. Admin VDC

c. Storage VDC
d. OTV VDC

3. True or false? The Nexus 2000 Series switch can be used as a standalone access switch.
a. True
b. False

4. Which command is used to access a nondefault VDC from the default VDC?
a. connectTo
b. switchto vdc {vdc name}
c. VDC {id} command and press Enter
d. switchto {vdc name}

5. True or false? We can assign physical interfaces to an admin VDC.
a. True
b. False

6. The VN-Tag is being standardized under which IEEE standard?
a. 802.1qpg
b. 802.1Qbh
c. 802.1BR
d. 802.Qbr

7. In the VN-Tag fields, what does the acronym VIF stand for?
a. VN-Tag initial frame
b. Virtual interface
c. Virtualized interface
d. VN-Tag interface

Foundation Topics

Describing VDCs on the Cisco Nexus 7000 Series Switch

The Virtual Device Context (VDC) is used to virtualize the Nexus 7000 switch, meaning presenting the Nexus 7000 switch as multiple logical switches. Without creating any VDC on the Nexus 7000, the control plane runs a single VDC; within this VDC are multiple processes and Layer 2 and Layer 3 services running, as shown in Figure 3-1. This VDC is always active and cannot be deleted. Within a VDC, you can have a unique set of VLANs and VRFs, allowing the virtualization of the control plane. Each VDC will have its own administrative domain, allowing the management plane as well to be virtualized.

Figure 3-1 Single VDC Operation Mode Structure

Virtualization using VLANs and VRFs within this VDC can still be used. When enabling multiple VDCs, these processes are replicated for each VDC. Within each VDC you can have duplicate VLAN names and VRF names, meaning you can have VRF production in VDC1 and VRF production in VDC2. When enabling Role-Based Access Control (RBAC) for each VDC, each VDC administrator will interface with the processes for his or her own VDC. You can have fault isolation, separate administration per VDC, separate data traffic, and hardware partitioning. Figure 3-2 shows what the structure looks like when creating multiple VDCs.

Figure 3-2 Multiple VDC Operation Mode Structure

The following are typical use cases for the Cisco Nexus 7000 VDC, which can be used in the design of the data center:

Creation of separate production, development, and test environments
Creation of intranet, extranet, and DMZ
Creation of separate organizations on the same physical switch
Creation of separate application environments
Creation of separate departments for the same organization due to different administration domains

VDC separation is industry certified; it has NSS Labs certification for Payment Card Industry (PCI) compliant environments, Federal Information Processing Standards (FIPS 140-2) certification, and Common Criteria Evaluation and Validation Scheme (CCEVS) 10349 certification with EAL4 conformance.

VDC Deployment Scenarios

There are multiple deployment scenarios using the VDCs, which enable the reduction of the physical footprint in the data center, which in turn saves space, power, and cooling.

Horizontal Consolidation Scenarios

VDCs can be used to consolidate multiple physical devices that share the same function role. In horizontal consolidation, you can consolidate core functions and aggregation functions. For example, using a pair of Nexus 7000 switches, you can build two redundant data center cores, as shown in Figure 3-3. This can happen by creating two VDCs: VDC1 will accommodate Core A and VDC2 will accommodate Core B. You can allocate ports or line cards to the VDCs, which can be useful in facilitating migration scenarios, and after the migration you can reallocate the old core ports to the new core.

Figure 3-3 Horizontal Consolidations for Core Layer

Aggregation layers can also be consolidated, so rather than building a separate aggregation layer for test and development, you can create multiple VDCs called Aggregation 1 and Aggregation 2. Aggregation 1 and Aggregation 2 are completely isolated, with separate role-based access and separate configuration files. Figure 3-4 shows multiple aggregation layers created on the same physical devices.

Figure 3-4 Horizontal Consolidation for Aggregation Layers

Vertical Consolidation Scenarios

When using VDCs in vertical consolidation, you can consolidate core functions and aggregation functions. For example, using a pair of Nexus 7000 switches, you can build a redundant core and aggregation layer, as shown in Figure 3-5. This can happen by creating two VDCs: VDC1 accommodates Core A and Aggregation A, and VDC2 accommodates Core B and Aggregation B. Although the port count is the same, it can be useful because it reduces the number of physical switches. You can allocate ports or line cards to the VDCs.

Figure 3-5 Vertical Consolidation for Core and Aggregation Layers

Another scenario is consolidation of core, aggregation, and access while you maintain the same hierarchy. Figure 3-6 shows that scenario; as you can see, we used a pair of Nexus 7000 switches to build a three-tier architecture.

Figure 3-6 Vertical Consolidation for Core, Aggregation, and Access Layers

VDCs for Service Insertion

VDCs can be used for service insertion and policy enforcement. By creating two separate VDCs for two logical areas, inside and outside, as shown in Figure 3-7, traffic must go out of VDC-A to reach VDC-B, and the only way is through the firewall. Each VDC has its own administrative domain. This design can make service insertion more deterministic and secure.

Figure 3-7 Using VDCs for Firewall Insertion

Understanding Different Types of VDCs

As explained earlier in the chapter, the NX-OS on the Nexus 7000 supports Virtual Device Contexts (VDCs). VDCs simply partition the physical Nexus device into multiple logical devices with separate control plane, data plane, and management plane. Connecting two VDCs on the same physical switch must be done through external cables; you cannot connect VDCs together from inside the chassis.

The following list describes the different types of VDCs that can be created on the Nexus 7000:

Default VDC: When you first log in to the Nexus 7000, you log in to the default VDC. You must be in the default VDC or the Admin VDC (discussed later in the chapter) to do certain tasks, which can be done only in these two VDC types. The default VDC is a full-function VDC; the admin always uses it to manage the physical device and the other VDCs created. The default VDC has certain characteristics and tasks that can be done only in the default VDC, which can be summarized as the following:

VDC creation/deletion/suspend
Resource allocation—interfaces, memory
NX-OS upgrade across all VDCs
EPLD upgrade—as directed by TAC or to enable new features
Ethanalyzer captures—control plane traffic
Feature-set installation for Nexus 2000, FabricPath, and FCoE
Control Plane Policing (CoPP)
Port channel load balancing
Hardware IDS checks control
ACL capture feature enabled

Note
With CPU shares, you don't dedicate a CPU to a VDC; rather, you give access to the CPU and assign priorities. CPU shares are supported only with SUP2/2E.

Nondefault VDC: The nondefault VDC is a fully functional VDC that can be created from the default VDC or the Admin VDC. Changes in the nondefault VDC affect only that VDC. An independent process is started for each protocol per VDC created. There is a separate

configuration file per VDC, and each VDC has its own RBAC, SNMP, and so on.

Admin VDC: The Admin VDC is used for administration functions. You can enable it starting from NX-OS 6.1 for SUP2/2E modules, and starting from NX-OS 6.2(2) for the SUP1 module. You can create the Admin VDC in different ways:

During a fresh switch boot, you will be prompted to select an Admin VDC.
Enter the system admin-vdc command after boot. In that case, the default VDC becomes the Admin VDC, and all nonglobal configuration in the default VDC will be lost. You must be careful or you will lose the nonglobal configuration.
Enter the system admin-vdc migrate new vdc name command. All nonglobal configuration on the default VDC will be migrated to the new VDC.

The Admin VDC has certain characteristics that are unique to it:

You cannot assign any interface to the Admin VDC; only mgmt0 can be assigned to it.
Features and feature sets cannot be enabled in an Admin VDC.
When you enable the Admin VDC, it replaces the default VDC.
After the Admin VDC is created, it cannot be deleted or changed back to the default VDC. To change back to the default VDC you must erase the configuration and do a fresh boot script.

Note
Creating the Admin VDC doesn't require the advanced package license or the VDC license.

Storage VDC: The storage VDC is a nondefault VDC that helps maintain the operational model when the customer has a LAN admin and a SAN admin. We can create only one storage VDC on the physical device. The storage VDC relies on the FCoE license and doesn't need a VDC license; however, the storage VDC counts toward the total number of VDCs. After the creation of the storage VDC, we assign interfaces to it. There are two types of interfaces: dedicated FCoE interfaces, or shared interfaces, which can carry both Ethernet and FCoE traffic. Figure 3-8 shows the two types of ports.

Figure 3-8 Storage VDC Interface Types
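Returning to the second and third Admin VDC creation options above, a minimal sketch of the commands might look like the following (the VDC name is illustrative; remember that the first form discards the nonglobal configuration of the default VDC, while the migrate form moves it to the new VDC):

N7K# configure terminal
N7K(config)# system admin-vdc

or, to preserve the nonglobal configuration:

N7K(config)# system admin-vdc migrate prod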

Note
To run FCoE, Sup2/2E is required, which in turn requires 8 GB of RAM because you must upgrade to NX-OS 6.2.

With Supervisor 1 you can have up to four VDCs and one Admin VDC. Supervisor engine 2 can also have up to four VDCs and one Admin VDC. With Supervisor engine 2E you can have up to eight VDCs and one Admin VDC. Beyond four VDCs on Supervisor 2E, a license is required to increment the VDC count by an additional four.

Interface Allocation

The only thing you can assign to a VDC is physical ports. At the beginning, before creating any VDC, all interfaces belong to the default VDC. After you create a VDC, you start assigning interfaces to that VDC. A physical interface can belong to only one VDC at a given time. When you create a shared interface, the physical interface belongs to one Ethernet VDC and one storage VDC at the same time. When you allocate a physical interface to a VDC, all configuration that existed on it gets erased; the physical interface then gets configured within the VDC. Logical interfaces, such as tunnel interfaces and switch virtual interfaces, are created within the VDC, and within a VDC, physical and logical interfaces can be assigned to VLANs or VRFs.
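A minimal sketch of creating a nondefault VDC and allocating interfaces to it from the default VDC follows (the VDC name and interface range are illustrative; the allocation erases any existing configuration on those interfaces and pulls in the rest of their port group, as the note below explains):

N7K-Core# configure terminal
N7K-Core(config)# vdc prod
N7K-Core(config-vdc)# allocate interface ethernet 2/1-4
N7K-Core(config-vdc)# end
N7K-Core# show vdc membership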

Note
All members of a port group are automatically allocated to the VDC when you allocate an interface. Interfaces belonging to the same port group must belong to the same VDC. For example, this applies to the N7K-M132XP-12 (4 interfaces × 8 port groups = 32 interfaces) and the N7K-M132XP-12L (same as the non-L M132) (1 interface × 8 port groups = 8 interfaces). All M132 cards require allocation in groups of four ports, and you can configure eight port groups. See the example for this module in Figure 3-9.

Figure 3-9 VDC Interface Allocation Example

VDC Administration

The operations allowed for a user in a VDC depend on the role assigned to that user. Users can have multiple roles assigned to them, and the NX-OS provides four default user roles:

Network-admin: The first user account that is created on a Cisco Nexus 7000 Series switch in the default VDC is admin. The network-admin role is automatically assigned to this user. The network-admin role gives a user complete read and write access to the device, and it is available only in the default VDC. This role includes the ability to create, delete, or change nondefault VDCs.

Network-operator: The second default role that exists on Cisco Nexus 7000 Series switches is the network-operator role. This role grants the user read-only rights in the default VDC. The network-operator role includes the right to issue the switchto command, which can be used to access a nondefault VDC from the default VDC. By default, no users are assigned to this role. The role must be assigned specifically to a user by a user who has network-admin rights.

VDC-admin: When a new VDC is created, the first user account on that VDC is admin. The VDC-admin role is automatically assigned to this admin user on a nondefault VDC. This role gives a user complete control over the specific nondefault VDC. However, this user does not have any rights in any other VDC and cannot access other VDCs through the switchto command.

VDC-operator: The VDC-operator role has read-only rights for a specific VDC. This role has no rights to any other VDC.

When a network-admin or network-operator user accesses a nondefault VDC by using the switchto command, that user is mapped to a role of the same level in the nondefault VDC. That means a user with the network-admin role is given the VDC-admin role in the nondefault VDC, and a user with the network-operator role is given the VDC-operator role in the nondefault VDC.

Note
You can create custom roles for a user, but you cannot change the default user roles for a VDC.

User roles are not shared between VDCs; each VDC maintains its user role database.
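For example, a read-only operator who is also allowed to hop into nondefault VDCs with the switchto command could be created as follows (username and password are illustrative):

N7K-Core(config)# username dcops password Str0ngPass1 role network-operator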

VDC Requirements

To create VDCs, an advanced license is required. The license is associated with the serial number of a Cisco Nexus 7000 switch chassis. The storage VDC is enabled using the storage license associated with a specific line card. You can try the feature for the 120-day grace period; when the grace period expires, any nondefault VDCs will be removed from the switch configuration.

Note
The grace period operates across all features in a license package, and the license package can contain several features. When you enable a feature, the countdown for the 120 days doesn't stop if a single feature of the license package is already enabled. So if you enable a feature to try it, but another feature was enabled, for example, 100 days before, you are left with 20 days to install the license. To stop the grace period, all the features of the license package must be stopped.

As stated before, VDCs are created or deleted from the default VDC only. You need network-admin privileges to do that. Physical interfaces also are assigned to nondefault VDCs from the default VDC.

Verifying VDCs on the Cisco Nexus 7000 Series Switch

The next few lines show certain commands to display useful information about the VDC. As shown in Example 3-1, the show feature-set command is used to verify and show which feature has been enabled or disabled in a VDC.

Example 3-1 Displaying the Status of a Feature Set on a Switch
Click here to view code image

N7K-VDC-A# show feature-set
Feature Set Name      ID    State
--------------------  ----  --------
fabricpath            2     enabled
fex                   3     disabled
N7K-VDC-A#

The show feature-set command in Example 3-1 shows that the FabricPath feature is enabled and the FEX feature is disabled.

The show vdc command displays different outputs depending on where you are executing it from. If you are executing the command from the default VDC, this command displays information about all VDCs on the physical device.

Example 3-2 Displaying Information About All VDCs (Running the Command from the Default VDC)
Click here to view code image

N7K-Core# show vdc

vdc_id  vdc_name  state    mac
------  --------  ------   -----------------
1       prod      active   00:18:ba:d8:3f:fd
2       dev       active   00:18:ba:d8:3f:fe
3       MyVDC     active   00:18:ba:d8:3f:ff

If you are executing the show vdc command within the nondefault VDC, the output displays information about the current VDC, which in our case is the N7K-Core-Prod.

Example 3-3 Displaying Information About the Current VDC (Running the Command from the Nondefault VDC)
Click here to view code image

N7K-Core-Prod# show vdc

vdc_id  vdc_name  state    mac
------  --------  ------   -----------------
1       prod      active   00:18:ba:d8:3f:fd

To display the detailed information about all VDCs, execute show vdc detail from the default VDC, as shown in Example 3-4.

Example 3-4 Displaying Detailed Information About All VDCs
Click here to view code image

N7K-Core# show vdc detail

vdc id: 1
vdc name: N7K-Core
vdc state: active
vdc mac address: 00:22:55:79:a4:c1
vdc ha policy: RELOAD
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:14:39 2012
vdc restart count: 0

vdc id: 2
vdc name: prod
vdc state: active
vdc mac address: 00:22:55:79:a4:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:22 2012
vdc restart count: 0

vdc id: 3
vdc name: dev
vdc state: active
vdc mac address: 00:22:55:79:a4:c3
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:29 2012
vdc restart count: 0

You can display the detailed information about the current VDC by executing show vdc {vdc name} detail from the nondefault VDC, as shown in Example 3-5.

Example 3-5 Displaying Detailed Information About the Current VDC When You Are in the Nondefault VDC
Click here to view code image

N7K-Core-prod# show vdc prod detail

vdc id: 2
vdc name: prod
vdc state: active
vdc mac address: 00:22:55:79:a4:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
vdc create time: Thu May 14 08:15:22 2012
vdc restart count: 0

To verify the interface allocation per VDC, execute show vdc membership from the default VDC, as shown in Example 3-6.

Example 3-6 Displaying Interface Allocation per VDC
Click here to view code image

N7K-Core# show vdc membership

vdc_id: 1 vdc_name: N7K-Core interfaces:
Ethernet2/1    Ethernet2/2    Ethernet2/3
Ethernet2/4    Ethernet2/5    Ethernet2/6
Ethernet2/7    Ethernet2/8    Ethernet2/9
Ethernet2/10   Ethernet2/11   Ethernet2/12
Ethernet2/13   Ethernet2/14   Ethernet2/15
Ethernet2/16   Ethernet2/17   Ethernet2/18
Ethernet2/19   Ethernet2/20   Ethernet2/21
Ethernet2/22   Ethernet2/23   Ethernet2/24
Ethernet2/25   Ethernet2/26   Ethernet2/27
Ethernet2/28   Ethernet2/29   Ethernet2/30
Ethernet2/31   Ethernet2/32   Ethernet2/33
Ethernet2/34   Ethernet2/35   Ethernet2/36
Ethernet2/37   Ethernet2/38   Ethernet2/39
Ethernet2/40   Ethernet2/41   Ethernet2/42
Ethernet2/43   Ethernet2/44   Ethernet2/45
Ethernet2/48

vdc_id: 2 vdc_name: prod interfaces:
Ethernet2/47

vdc_id: 3 vdc_name: dev interfaces:
Ethernet2/46

If you execute show vdc membership from the VDC you are currently in, it will show the interface membership only for that VDC.

You can save all the running configurations to the startup configuration for all VDCs using one command: use the copy running-config startup-config vdc-all command, which is executed from the

default VDC, as shown in Example 3-7.

Example 3-7 Saving All Configurations for All VDCs
Click here to view code image

N7K-Core# copy running-config startup-config vdc-all
[########################################] 100%

To display the running configurations for all VDCs, use the show running-config vdc-all command, executed from the default VDC. You cannot see the configuration for other VDCs except from the default VDC.

You can navigate from the default VDC to a nondefault VDC using the switchto command, as shown in Example 3-8. You cannot execute the switchto command from a nondefault VDC; to go to another VDC, you must do switchback to the default VDC first. Use these commands for the initial setup; after that, when you create the user accounts and configure the IP connectivity, you use Secure Shell (SSH) and Telnet to connect to the desired VDC.

Example 3-8 Navigating Between VDCs on the Nexus 7000 Switch
Click here to view code image

N7K-Core# switchto vdc Prod
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2008, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under license.
Certain components of this software are licensed under the GNU General
Public License (GPL) version 2.0 or the GNU Lesser General Public
License (LGPL) Version 2.1. A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-Core-Prod#

To switch back to the default VDC, use the switchback command.

Describing Network Interface Virtualization

You can deliver an extensible and scalable fabric solution using the Cisco FEX technology. The FEX technology can be extended all the way to the server hypervisor. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.

Cisco Nexus 2000 FEX Terminology

The following terminology and acronyms are used with the Nexus 2000 fabric extenders:

Network Interface (NIF): Port on FEX, connecting to parent switch.
Host Interface (HIF): Front panel port on FEX, connecting to server.
Virtual Interface (VIF): Logical construct consisting of a front panel port, VLAN, and several

other parameters.
Fabric Extender (FEX): Nexus 2000 is the instantiation of FEX.
Fabric links (Fabric Port Channel or FPC): Links that connect the FEX NIFs and the FEX fabric ports on the parent switch.
Fabric Port: N7K side of the link connected to the FEX.
Fabric Port Channel: Port channel between the N7K and the FEX.
FEX A/A or dual-attached FEX: Scenario in which the fabric links of the FEX are connected, actively forwarding traffic, to two parent switches.
FEX port: Server-facing port on the FEX, also referred to as server port or host port in this presentation. The FEX port is also called a Host Interface (HIF).
FEX uplink: Network-facing port on the FEX side of the link connected to the N7K. The FEX uplink is also called a Network Interface (NIF).
Logical interface (LIF): Presentation of a front panel port in the parent switch.
Parent switch: Switch (N7K/N5K) where the FEX is connected.

Figure 3-10 shows the ports with reference to the physical hardware.

Figure 3-10 Nexus 2000 Interface Types and Acronyms

Nexus 2000 Series Fabric Extender Connectivity

The Cisco Nexus 2000 Series cannot run as a standalone switch; as stated before, it needs a parent switch. This switch can be a Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5600, or Nexus 5000 Series. Based on the type and the density of the access ports required, dual redundant Nexus 2000 fabric extenders are placed at the top of the rack. The uplink ports from the Nexus 2000, which will be 10/40 Gbps, will be connected to the parent switch, which will be installed at the EoR. This type of design combines the benefit of the Top-of-Rack (ToR) design with the benefit of the End-of-Row (EoR) design. From a cabling point of view, this design is a ToR design: the cabling between the servers and the Nexus 2000 Series fabric extenders is contained within the rack, and only a small number of cables will run between the racks. From the logical network deployment point of view, this design is an EoR design. The FEX acts as a

remote line card. The server appears as if it is connected directly to the parent switch, and the hosts connected to the Nexus 2000 fabric extender appear as if they are directly connected to the parent switch. From an operation perspective, this design has the advantage of the EoR design: all configuration and maintenance tasks are done from the parent switch, and no operation tasks are required to be performed from the FEXs. Refer to Chapter 2, "Cisco Nexus Product Family," in the section "Cisco Nexus 2000 Fabric Extenders Product Family" to see the different models and specifications. Figure 3-11 shows the physical and logical view for the Nexus 2000 fabric extender deployment.

Note
You can connect a switch to the FEX host interfaces, but this is not a recommended solution. The Nexus 2000 host ports (HIFs) do not expect to receive BPDUs on the host ports and expect only end hosts like servers, which might create a loop in the network. To make this work, you need to turn off Bridge Protocol Data Units (BPDUs) from the access switch and turn on "bpdufilter" on the Nexus 2000.

Figure 3-11 Nexus 2000 Deployment Physical and Logical View

VN-Tag Overview

The Cisco Nexus 2000 Series fabric extender acts as a remote line card to a parent switch. The physical ports on the Nexus 2000 fabric extender appear as logical ports on the parent switch, which enables advanced functions and policies to be applied to them. All control and management functions are performed by the parent switch. Forwarding is also performed by the parent switch; at the time of writing this book there is no local switching happening on the Nexus 2000 fabric extender, so all traffic must be sent to the parent switch. The host connected to the Nexus 2000 fabric extender is unaware of any tags. Figure 3-10 shows the physical and the logical architecture of the FEX implementation.

A frame exchanged between the Nexus 2000 fabric extender and the parent switch will have an information tag inserted in it called VN-Tag. VN-Tag is a Network Interface Virtualization (NIV) technology. At the beginning, VN-Tag was being standardized under the IEEE 802.1Qbh working group. However, 802.1Qbh was withdrawn and is no longer an approved IEEE project; the effort was moved to IEEE 802.1BR in July 2011. Figure 3-12 shows the VN-Tag fields in an Ethernet frame.

Figure 3-12 VN-Tag Fields in an Ethernet Frame

d: Direction bit (0 is host-to-network forwarding; 1 is network-to-host)
p: Pointer bit (set if this is a multicast frame and requires egress replication)
L: Looped filter (set if sending back to source Cisco Nexus 2000 Series)
VIF: Virtual interface

Cisco Nexus 2000 FEX Packet Flow

The packet processing on the Nexus 2000 happens in two parts: when the host sends a packet to the network and when the network sends a packet to the host. Figure 3-13 shows the two scenarios.

Figure 3-13 Cisco Nexus 2000 Packet Flow

When the host sends a packet to the network (diagrams A and B), these events occur:

1. The frame arrives from the host.

2. The Cisco Nexus 2000 Series switch adds a VN-Tag; the Cisco Nexus 2000 Series switch adds a unique VN-Tag for each Cisco Nexus 2000 Series host interface, and the packet is forwarded over a fabric link using a specific VN-Tag. These are the VN-Tag field values:

The direction bit is set to 0, indicating host-to-network forwarding.
The source virtual interface is set based on the ingress host interface.
The p (pointer), l (looped), and destination virtual interface are undefined (0).

3. The packet is received over the fabric link using a specific VN-Tag. The Cisco Nexus switch extracts the VN-Tag, which identifies the logical interface that corresponds to the physical host interface on the Cisco Nexus 2000 Series. The frame is received on the physical or logical interface, and the Cisco Nexus switch applies an ingress policy that is based on the physical Cisco Nexus Series switch port and logical interface: access control and forwarding are based on frame fields and virtual (logical) interface policy, and physical link-level properties are based on the Cisco Nexus Series switch port.

4. The Cisco Nexus switch strips the VN-Tag and sends the packet to the network.

When the network sends a packet to the host (diagram C), these events occur:

1. The Cisco Nexus switch performs standard lookup and policy processing when the egress port is determined to be a logical interface (Cisco Nexus 2000 Series) port.

2. The Cisco Nexus switch inserts a VN-Tag with these characteristics:
The direction is set to 1 (network to host).
The destination virtual interface is set to be the Cisco Nexus 2000 Series port VN-Tag.
The source virtual interface is set if the packet was sourced from a Cisco Nexus 2000 Series port.
The p bit is set if this frame is a multicast frame and requires egress replication.
The l (looped) bit filter is set if sending back to a source Cisco Nexus 2000 Series switch.

3. The packet is forwarded over a fabric link using a specific VN-Tag.

4. The Cisco Nexus 2000 Series switch strips the VN-Tag, and the frame is forwarded to the host interface.

Cisco Nexus 2000 FEX Port Connectivity

You can connect the Nexus 2000 Series fabric extender to one of the following Ethernet modules that are installed in the Cisco Nexus 7000 Series. There is a best practice for connecting the Nexus 2000 fabric extender to the Nexus 7000, depending on the type and model of card installed in it.

12-port 40-Gigabit Ethernet F3-Series QSFP I/O module (N7K-F312FQ-25) for Cisco Nexus 7000 Series switches
24-port Cisco Nexus 7700 F3 Series 40-Gigabit Ethernet QSFP I/O module (N77-F324FQ-25)
48-port Cisco Nexus 7700 F3 Series 1/10-Gigabit Ethernet SFP+ I/O module (N77-F348XP-23)
32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12)
32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12L)
24-port 10-Gigabit Ethernet I/O M2 Series module XL (N7K-M224XP-23L)
48-port 1/10-Gigabit Ethernet SFP+ I/O F2 Series module (N7K-F248XP-25)
Enhanced 48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N7K-F248XP-25E)

48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N77-F248XP-23E)

Figure 3-14 shows the connectivity between the Cisco Nexus 2000 fabric extender and the N7K-M132XP-12 line card.

Figure 3-14 Cisco Nexus 2000 Connectivity to N7K-M132XP-12

There are two ways to connect the Nexus 2000 to the parent switch: either use static pinning, which makes the FEX use individual links connected to the parent switch, or use a port channel when connecting the fabric interfaces on the FEX to the parent switch. This is explained in more detail in the next paragraphs.

In static pinning, when the FEX is powered up and connected to the parent switch, its host interfaces are distributed equally among the available fabric interfaces. As a result, the bandwidth that is dedicated to each end host toward the parent switch is never changed by the switch, but instead is always specified by you. You must use the pinning max-links command to create several pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. The host interfaces are divided by the number of the max-links and distributed accordingly. The default value is max-links 1. The drawback here is that if the fabric link goes down, all the associated host interfaces go down and will remain down as long as the fabric port is down.

To provide load balancing between the host interfaces and the parent switch, you can configure the FEX to use a port channel fabric interface connection. This connection bundles 10-Gigabit Ethernet fabric interfaces into a single logical channel. Traffic is automatically redistributed across the remaining links in the port channel fabric interface; a fabric interface that fails in the port channel does not trigger a change to the host interfaces. If all links in the fabric port channel go down, all host interfaces on the FEX are set to the down state.

When you have VDCs created on the Nexus 7000, you must follow certain rules: all FEX fabric links belong to the same VDC, all FEX host ports belong to the same VDC, and FEX IDs are unique across VDCs (the same FEX ID cannot be used across different VDCs). Figure 3-15 shows the FEX connectivity to the Nexus 7000 when you have multiple VDCs.

Figure 3-15 Cisco Nexus 2000 and VDCs

Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series

How you configure the Cisco Nexus 2000 Series on the Cisco 7000 Series is different from the configuration on Cisco Nexus 5000, 6000, and 5600 Series switches. The difference comes from the VDC-based architecture of the Cisco Nexus 7000 Series switches. Use the following steps to connect the Cisco Nexus 2000 to the Cisco 7000 Series parent switch when connected to one of the supported line cards.

Example 3-9 Installing the Fabric Extender Feature Set in Default VDC
Click here to view code image

N7K-Agg(config)# install feature-set fex

Example 3-10 Enabling the Fabric Extender Feature Set
Click here to view code image

N7K-Agg(config)# feature-set fex

By default, when you install the fabric extender feature set, it is allowed in all VDCs. You can disallow the installed fabric extender feature set in a specific VDC on the device.

Example 3-11 Entering the Chassis Mode for the Fabric Extender
Click here to view code image

N7K-Agg(config)# fex 101
N7K-Agg(config-fex)# description Rack8-N2k
N7K-Agg(config-fex)# type N2232P

Example 3-12 Defining the Number of Uplinks
Click here to view code image

N7K-Agg(config-fex)# pinning max-links 1

You can use this command if the fabric extender is connected to its parent switch using one or more statically pinned fabric interfaces. There can be only one port channel connection.

Example 3-13 Disallowing Enabling the Fabric Extender Feature Set
Click here to view code image

N7K-Agg# configure terminal
N7K-Agg (config)# vdc 1
N7K-Agg (config-vdc)# no allow feature-set fex
N7K-Agg (config-vdc)# end
N7K-Agg #

Note
In Example 3-13, vdc 1 is the VDC ID, so it might be different depending on which VDC you want to apply this to.

The next step is to associate the fabric extender to a port channel interface on the parent device. In Example 3-14, we create a port channel with four member ports.

Example 3-14 Associate the Fabric Extender to a Port Channel Interface
Click here to view code image

N7K-Agg# configure terminal
N7K-Agg (config)# interface ethernet 1/28
N7K-Agg (config-if)# channel-group 4
N7K-Agg (config-if)# no shutdown
N7K-Agg (config-if)# exit
N7K-Agg (config)# interface ethernet 1/29
N7K-Agg (config-if)# channel-group 4
N7K-Agg (config-if)# no shutdown
N7K-Agg (config-if)# exit
N7K-Agg (config)# interface ethernet 1/30
N7K-Agg (config-if)# channel-group 4
N7K-Agg (config-if)# no shutdown
N7K-Agg (config-if)# exit
N7K-Agg (config)# interface ethernet 1/31
N7K-Agg (config-if)# channel-group 4
N7K-Agg (config-if)# no shutdown
N7K-Agg (config-if)# exit
N7K-Agg (config)# interface port-channel 4
N7K-Agg (config-if)# switchport
N7K-Agg (config-if)# switchport mode fex-fabric
N7K-Agg (config-if)# fex associate 101

After the FEX is associated successfully, you must do some verification to make sure everything is configured properly.

Example 3-15 Verifying FEX Association
Click here to view code image

N7K-Agg# show fex
FEX        FEX            FEX       FEX
Number     Description    State     Model             Serial
------------------------------------------------------------------------
101        FEX0101        Online    N2K-C2248TP-1GE   JAF1418AARL

Example 3-16 Display Detailed Information About a Specific FEX
Click here to view code image

N7K-Agg# show fex 101 detail
FEX: 101 Description: FEX0101 state: Online
  FEX version: 5.1(1) [Switch version: 5.1(1)]
  FEX Interim version: 5.1(0.159.6)
  Switch Interim version: 5.1(1)
  Extender Model: N2K-C2248TP-1GE, Extender Serial: JAF1418AARL
  Part No: 73-12748-05
  Card Id: 99, Mac Addr: 54:75:d0:a9:49:42, Num Macs: 64
  Module Sw Gen: 21 [Switch Sw Gen: 21]
  pinning-mode: static Max-links: 1
  Fabric port for control traffic: Po101
  Fabric interface state:
    Po101 - Interface Up. State: Active
    Eth2/1 - Interface Up. State: Active
    Eth2/2 - Interface Up. State: Active
    Eth4/1 - Interface Up. State: Active
    Eth4/2 - Interface Up. State: Active
  Fex Port       State    Fabric Port    Primary Fabric
    Eth101/1/1     Up       Po101          Po101
    Eth101/1/2     Up       Po101          Po101
    Eth101/1/3     Down     Po101          Po101
    Eth101/1/4     Down     Po101          Po101

Example 3-17 Display Which FEX Interfaces Are Pinned to Which Fabric Interfaces
Click here to view code image

N7K-Agg# show interface port-channel 101 fex-intf
Fabric        FEX
Interface     Interfaces
---------------------------------------------------
Po101         Eth101/1/2    Eth101/1/1

Example 3-18 Display the Host Interfaces That Are Pinned to a Port Channel Fabric Interface
Click here to view code image

N7K-Agg# show interface port-channel 4 fex-intf
Fabric        FEX
Interface     Interfaces
---------------------------------------------------
Po4           Eth101/1/48   Eth101/1/47   Eth101/1/46   Eth101/1/45
              Eth101/1/44   Eth101/1/43   Eth101/1/42   Eth101/1/41
              Eth101/1/40   Eth101/1/39   Eth101/1/38   Eth101/1/37
              Eth101/1/36   Eth101/1/35   Eth101/1/34   Eth101/1/33
              Eth101/1/32   Eth101/1/31   Eth101/1/30   Eth101/1/29
              Eth101/1/28   Eth101/1/27   Eth101/1/26   Eth101/1/25
              Eth101/1/24   Eth101/1/23   Eth101/1/22   Eth101/1/21
              Eth101/1/20   Eth101/1/19   Eth101/1/18   Eth101/1/17
              Eth101/1/16   Eth101/1/15   Eth101/1/14   Eth101/1/13
              Eth101/1/12   Eth101/1/11   Eth101/1/10   Eth101/1/9
              Eth101/1/8    Eth101/1/7    Eth101/1/6    Eth101/1/5
              Eth101/1/4    Eth101/1/3    Eth101/1/2    Eth101/1/1

Summary

VDCs on the Nexus 7000 switches enable true device virtualization; each VDC will have its own management domain, allowing the management plane also to be virtualized. To send traffic between VDCs you must use external physical cables, which leads to greater security of user data.

The Cisco Nexus 2000 cannot operate in a standalone mode; it needs a parent switch to connect to. The FEX technology can be extended all the way to the server hypervisor. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 3-2 lists a reference for these key topics and the page numbers on which each is found.

Table 3-2 Key Topics for Chapter 3

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

Virtual Device Context (VDC)
supervisor engine (SUP)
demilitarized zone (DMZ)
erasable programmable logic device (EPLD)
Fabric Extender (FEX)
role-based access control (RBAC)
network interface (NIF)
host interface (HIF)
virtual interface (VIF)
logical interface (LIF)
End-of-Row (EoR)
Top-of-Rack (ToR)
VN-Tag

Chapter 4. Data Center Interconnect

This chapter covers the following exam topic:

Identify Key Differentiators Between DCI and Network Interconnectivity

Cisco Data Center Interconnect (DCI) solutions enable IT organizations to meet two main objectives: business continuity in case of disaster events, and corporate and regulatory compliance to improve data security. DCI solutions in general extend LAN and SAN connectivity between geographically dispersed data centers, which enables you to have data replication, server clustering, and workload mobility. In this chapter, you learn about the latest Cisco innovations in the Data Center Extension solutions and in LAN Extension in particular, which is called Overlay Transport Virtualization (OTV).

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 4-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 4-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco products supports OTV?
a. Nexus 7000 Series
b. ASR9000 Series
c. ASR1000 Series
d. Catalyst 6500 Series

2. True or false? For silent hosts, we can set up static MAC-to-IP mappings in OTV.
a. True
b. False

3. Which version of IGMP is used on the OTV join interface?

a. IGMPv1
b. IGMPv2
c. IGMPv3
d. No need to enable IGMP on the OTV join interface

4. How many join interfaces can you create per logical overlay?
a. 1
b. 2
c. 4
d. 8

5. True or false? You need to do no shutdown for the overlay interface after adding it.
a. True
b. False

Foundation Topics

Introduction to Overlay Transport Virtualization

Data Center Interconnect (DCI) solutions have been available for quite some time; DCI is mainly used to enable the benefit of geographically separated data centers. Layer 2 extensions might be required at different layers in the data center to enable the resiliency and clustering mechanisms offered by the different applications at the web, application, and database layers. Different types of technologies provide Layer 2 extension solutions:

Ethernet over Multiprotocol Label Switching (EoMPLS): EoMPLS can be used to provision point-to-point Layer 2 Ethernet connections between two sites using MPLS as the transport.

Virtual Private LAN Services (VPLS): Similar to EoMPLS, VPLS uses an MPLS network as the underlying transport network. However, instead of point-to-point Ethernet pseudowires, VPLS delivers a virtual multiaccess Ethernet network.

Dark fiber: In some cases, dark fiber might be available to build private optical connections between data centers. Dense wavelength-division multiplexing (DWDM) or coarse wavelength-division multiplexing (CWDM) can increase the number of Layer 2 connections that can be run through the fibers. These technologies increase the total bandwidth over the same number of fibers.

Using these technologies to extend the LAN across multiple data centers creates a series of challenges:

Failure isolation: The extension of Layer 2 domains between multiple data center sites can cause the data centers to share protocols and failures that would normally have been isolated when interconnecting data centers over an IP network. A solution that provides Layer 2 connectivity, yet restricts the reach of the flood domain, is necessary to contain failures and preserve the resiliency achieved by the use of multiple data centers.

Transport infrastructure nature: The nature of the transport between data centers varies depending on the location of the data centers and the availability and cost of services in the

different areas. A cost-effective solution for the interconnection of data centers must be transport agnostic and give the network designer the flexibility to choose any transport between data centers based on business and operational preferences. An IP-capable transport is the most generalized offering, and it provides flexibility and enables long-reach connectivity; a solution capable of using an IP transport is expected to provide the most flexibility.

Multihoming with end-to-end loop prevention: LAN extension techniques should provide a high degree of resiliency; therefore, multihoming of the Layer 2 sites onto the VPN is required. Mechanisms must be provided to prevent loops that might be induced when connecting bridged networks that are multihomed.

Bandwidth utilization with replication, load balancing, and path diversity: When extending Layer 2 domains across data centers, the use of available bandwidth between data centers must be optimized to obtain the best connectivity at the lowest cost. Balancing the load across all available paths while providing resilient connectivity between the data center and the transport network requires added intelligence above and beyond that available in traditional Ethernet switching and Layer 2 VPNs. Multicast and broadcast traffic should also be replicated optimally to reduce bandwidth consumption.

Complex operations: Layer 2 VPNs can provide extended Layer 2 connectivity across data centers, but will usually involve a mix of complex protocols, distributed provisioning, and an operationally intensive hierarchical scaling model. Although the features, scale, and convergence times continue to improve, a simple overlay protocol with built-in capability and point-to-cloud provisioning is crucial to reducing the cost of providing this connectivity.

OTV provides Layer 2 extension between remote data centers by using a concept called MAC address routing, meaning a control plane protocol is used to exchange MAC address reachability information between the network devices providing the LAN extension functionality. This has a huge benefit over traditional data center interconnect solutions, which typically rely on data plane learning and the use of flooding across the transport infrastructure to learn reachability. Each Ethernet frame is encapsulated into an IP packet and delivered across the transport infrastructure; there is no need to establish virtual circuits between the data center sites, and the only requirement is to allow IP reachability between the edge devices. OTV must be deployed only at specific edge devices, and the addition or removal of an edge device does not affect other edge devices on the overlay.

There is no need to extend Spanning Tree Protocol (STP) across the transport infrastructure; Bridge Protocol Data Units (BPDUs) are not forwarded on the overlay. Each site will have its own spanning-tree domain, which is independent from the other sites even though all sites have a common Layer 2 domain. This results in failure containment within a data center: a failure doesn't propagate into other data centers and stops at the edge device.

OTV doesn't require additional configuration to offer multihoming; it is built in natively in OTV using the same control plane protocol along with loop prevention mechanisms. The loop prevention mechanisms prevent loops on the overlay and prevent loops from the sites that are multihomed to the overlay. This is built-in functionality, and no extra configuration is needed to make it work.

Cisco has two main products that currently support OTV: Cisco Nexus 7000 Series switches and Cisco ASR 1000 Series aggregation services routers. Following are the minimum requirements to run OTV on each platform:

Cisco Nexus 7000 Series and Cisco Nexus 7700 platform:

Each site can have one or more edge devices. OTV configuration is done on the edge device and is completely transparent to the rest of the infrastructure.0(3) or later (Cisco Nexus 7000 Series) or Cisco NX-OS Release 6.5 or later Advanced IP Services or Advanced Enterprise Services license Note This chapter focuses on OTV implementation using the Nexus 7000 Series. it receives the Layer 2 traffic for all VLANS that need to be extended to remote sites and dynamically encapsulates the Ethernet frames into IP packets and sends them across through the transport infrastructure. These terminologies are used throughout the chapter. Because the edge device needs to see the .2(2) or later (Cisco Nexus 7700 platform) Transport Services license Cisco ASR 1000 Series: Cisco IOS XE Software Release 3. Figure 4-1 OTV Terminology OTV edge device: The edge device performs OTV functions. you must know OTV terminology. OTV Terminology Before you learn how OTV control plane and data plane works.Any M-Series (Cisco Nexus 7000 Series) or F3 (Cisco Nexus 7000 Series or 7700 platform) line card for encapsulation Cisco NX-OS Release 5. which is shown in Figure 4-1.

It can be owned by the customer or by the service provider. It is recommended to use a dedicated VLAN as a site VLAN that is not extended across the overlay. one of the recommended designs is to have a separate VDC on the Nexus 7000 acting as the OTV edge device. starting with the 5. Authoritative edge device: OTV elects a designated forwarding edge device per site for each VLAN to provide loop-free multihoming. It uses the site VLAN to elect the authoritative edge device for the VLANs. the frame is logically forwarded to the overlay interface. and you can have one join interface shared across multiple overlays. It is part of the OTV control plane and doesn’t need additional configuration. OTV edge devices continue to use the site VLAN to . such as local switching. OTV used the Site VLAN within a site to detect and establish adjacencies with other OTV edge devices. however. The IP address of the join interface is used to advertise reachability of MAC addresses present in the site. Site: An OTV site is the same as a data center. So we create one VDC per each Nexus 7000 to have redundancy. which gets encapsulated in an IP unicast or multicast packet and sent to the remote data center.Layer 2 traffic. Transport network: This is the network that connects the OTV sites. or both. it can be a logical interface. When the OTV edge device receives a Layer 2 frame that needs to be delivered to a remote data center site. typical Layer 2 functions. To address this concern. OTV internal interface: The Layer 2 interface on the edge device that connects to the VLANS to be extended is called the internal interface. Prior to NX-OS release 5. which has VLANs that must be extended to another site—meaning another data center. It must be defined by the user. The internal interface is usually configured as an access or trunk interface and receives Layer 2 traffic. All the OTV configuration is applied to the overlay interface. it can be a physical interface or subinterface. However. each OTV device maintains dual adjacencies with other OTV edge devices belonging to the same DC site. such as a Layer 3 port channel or a Layer 3 port-channel subinterface. it is recommended to have it at the Layer 3 boundary for the site so it can see all the VLANs. are performed on the internal interfaces. The forwarder is known as the Authoritative Edge Device (AED). At the time of writing this book.2 (1) NX-OS release. the only requirement is to allow IP reachability between the edge devices. The OTV edge device can be deployed in many places. thereby ending up in an active/active mode (for the same data VLAN). Site VLAN: OTV edge devices send hellos on the site VLAN to detect other OTV edge devices in the same site. the mechanism of electing AEDs that is based only on the communication established on the site VLAN can create situations (resulting from connectivity issues or misconfiguration) where OTV edge devices belonging to the same site can fail to detect one another. The join interface “joins” the overlay network and discovers the other overlay remote edge devices. You can have only one join interface associated with a given overlay. Overlay network: This is a logical network that interconnects OTV edge devices. and flooding. OTV overlay interface: The OTV overlay interface is a logical multiaccess and multicast capable interface. OTV join interface: The OTV join interface is a Layer 3 interface. which need to be extended. The edge devices talk to each other on the internal interfaces to elect the AED. 
This could ultimately result in the creation of a loop scenario. To address this concern, starting with the 5.2(1) NX-OS release, each OTV device maintains dual adjacencies with other OTV edge devices belonging to the same DC site. OTV edge devices continue to use the site VLAN to discover and establish adjacency with other OTV edge devices in a site; this adjacency is called site adjacency.
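As a brief sketch of the commands involved (the formal procedure appears in the configuration examples later in this chapter), the site VLAN and site identifier are defined globally, and the show otv site command can then be used to check the site adjacency. The values shown here are illustrative only:

OTV-VDC-A(config)# otv site-vlan 15
OTV-VDC-A(config)# otv site-identifier 0x1
OTV-VDC-A# show otv site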

In addition to the site adjacency, OTV devices also maintain a second adjacency, named overlay adjacency, established via the join interfaces across the Layer 3 network domain. To enable this new functionality, starting from NX-OS 5.2(1), it is now mandatory to configure each OTV device also with a site-identifier value. All edge devices that are in the same site must be configured with the same site identifier. This site identifier is advertised in IS-IS hello packets sent over both the overlay and the site VLAN. The combination of the site identifier and the IS-IS system ID is used to identify a neighbor edge device in the same site.

OTV Control Plane
One of the key differentiators of OTV is the use of a control plane protocol, which runs between the OTV edge devices. The control protocol function is to advertise MAC address reachability information instead of using data plane learning, which relies on flooding of Layer 2 traffic across the transport infrastructure. To be able to exchange MAC reachability information, OTV edge devices must become adjacent to each other. This can happen in two ways, depending on the nature of the transport infrastructure:

Multicast Enabled Transport: If the transport infrastructure is enabled for multicast, we use a multicast group to exchange control protocol messages between the OTV edge devices.

Unicast Enabled Transport: If the transport infrastructure is not enabled for multicast, you can use one or more OTV edge devices as an adjacency server. All OTV edge devices belonging to a specific overlay register with the available list of the adjacency servers.

Multicast Enabled Transport Infrastructure
When the transport infrastructure is enabled for multicast, all OTV edge devices should be configured to join a specific Any Source Multicast (ASM) group. If the transport infrastructure is owned by the service provider, the enterprise must negotiate the use of this ASM group with the service provider. The OTV control protocol running on each OTV edge device generates hello packets that need to be sent to all other OTV edge devices. This is required to communicate its existence and to trigger the establishment of control plane adjacencies within the same overlay. The following steps explain the sequence, which leads to the discovery of all OTV edge devices belonging to a specific overlay:

1. Each OTV edge device sends an IGMP report to join the specific ASM group, which is used to carry control protocol exchanges. The edge devices join the group as hosts, leveraging the join interface. This happens without enabling PIM on this interface; only IGMPv3 is enabled on the interface, using the ip igmp version 3 command. You need only to specify the ASM group to be used and associate it with a given overlay interface. The edge device will be both a source and a receiver on that ASM group.

2. The OTV hello messages are sent across the logical overlay to reach all OTV remote edge devices. For this to happen, the original frames must be OTV encapsulated, adding an external IP header. The source IP address in the external header is set to the IP address of the join interface of the edge device, whereas the destination is the multicast address of the ASM group dedicated to carrying the control protocol.

3. The resulting multicast frame is then sent to the join interface toward the Layer 3 network domain.

4. The multicast frames are carried across the transport and optimally replicated to reach all the OTV edge devices that joined that multicast group.

5. The receiving OTV edge devices decapsulate the packets.

6. The hellos are passed to the control protocol process.

The use of the ASM group to transport the hello messages allows the edge devices to discover each other as if they were deployed on a shared LAN segment, and the end result is the creation of OTV control protocol adjacencies between all edge devices. The routing protocol used to implement the OTV control plane is IS-IS. It was selected because it is a standards-based protocol, originally designed with the capability of carrying MAC address information in the TLV. It doesn't require a specific configuration command to do it; after enabling the OTV overlay interface, the OTV control plane protocol is enabled.

Note
To enable OTV functionality, you must use the feature otv command; the device doesn't display any OTV commands until you enable the feature.

After the OTV devices discover each other, the next step is to exchange MAC address reachability information:

1. The OTV edge device learns new MAC addresses on its internal interface using traditional data plane learning.

2. An OTV update message is created containing information for the MAC address learned through the internal interface.

3. The message is OTV encapsulated and sent into the OTV transport infrastructure; the source IP address in the external header is set to the IP address of the join interface of the OTV edge device.

4. The OTV update is replicated in the transport infrastructure and delivered to all the remote OTV edge devices on the same overlay.

5. The OTV edge devices receiving the update decapsulate the message and deliver it to the OTV control process.

6. The MAC address reachability information is imported to the MAC address tables of the edge devices. The only difference with a traditional MAC address table entry is that a MAC address in the remote data center, instead of pointing to a physical interface, will point to the join interface of the remote OTV edge device.

The same process occurs in the opposite direction. For security, it is possible to leverage the IS-IS HMAC-MD5 authentication feature to add an HMAC-MD5 digest to each OTV control protocol message. This prevents unauthorized routing messages from being injected into the network routing domain, as sketched next.
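A minimal sketch of that IS-IS authentication, assuming an OTV-capable NX-OS release; the key-chain name and key string are hypothetical, and exact syntax can vary by release:

OTV-VDC-A(config)# key chain OTV-KEYS
OTV-VDC-A(config-keychain)# key 0
OTV-VDC-A(config-keychain-key)# key-string otv-secret
OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if-overlay)# otv isis authentication-type md5
OTV-VDC-A(config-if-overlay)# otv isis authentication key-chain OTV-KEYS

The same key chain would need to be configured on every edge device in the overlay for the adjacencies to form.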

Unicast-Enabled Transport Infrastructure
OTV can be deployed with a unicast-only transport infrastructure; the unicast deployment of OTV is used when a multicast-enabled transport infrastructure is not an option. The OTV control plane over a unicast transport infrastructure works the same way the multicast transport works. The only difference is that each OTV edge device must create a copy of the control plane packet and send it to each remote OTV edge device. To be able to send control protocol messages to all the OTV edge devices, each OTV edge device needs to know the list of the other devices that are part of the same overlay. This is achieved by having one or more OTV edge devices act as an adjacency server. Each OTV edge device that joins a specific overlay must register with the adjacency server. When the adjacency server receives hello messages from the remote OTV edge devices, it builds the list of all the OTV edge devices that are part of the same overlay. The list is called a unicast-replication list, and it is periodically sent to all the listed OTV edge devices.

Data Plane for Unicast Traffic
After all OTV edge devices that are part of an overlay establish adjacency and MAC address reachability information is exchanged, traffic can start flowing across the overlay. There are two types of Layer 2 communication: intrasite (traffic within the same site) and intersite (traffic between two different sites on the same overlay). The following process details the communication:

Intrasite unicast communication: Intrasite means that both servers that need to talk to each other reside in the same data center. As shown in Figure 4-2, MAC 1 needs to talk to MAC 2. They both belong to Site A, and they are in the same VLAN. When a Layer 2 frame is received at the aggregation device, which is the OTV edge device in this case, a normal Layer 2 lookup determines how to reach MAC 2. The information in the MAC table points to Eth1, and the frame is delivered to the destination.

Figure 4-2 Intrasite Layer 2 Unicast Communication

Intersite unicast communication: Intersite means that the servers that need to talk to each other reside in different data centers but belong to the same overlay.

As shown in Figure 4-3, MAC 1 needs to talk to MAC 3. MAC 1 belongs to Site A, and MAC 3 belongs to Site B. The following procedure details the process for MAC 1 to establish communication with MAC 3:

1. The Layer 2 frame is received at the aggregation device, which is the OTV edge device at Site A in this case, and a normal Layer 2 lookup happens. This time, MAC 3 in the MAC table in Site A points to the IP address of the remote OTV edge device in Site B.

2. The OTV edge device in Site A encapsulates the Layer 2 frame: it removes the CRC and the 802.1Q fields from the original Layer 2 frame and adds an OTV shim (containing the VLAN and the overlay information) and an external IP header. The source IP of the outer header is the IP address of the join interface of the OTV edge device in Site A, and the destination IP address in the outer header is the IP address of the join interface of the remote OTV edge device in Site B. After that, a normal IP unicast packet is sent across the transport infrastructure.

3. When the remote OTV edge device receives the IP packets, it decapsulates them and exposes the original Layer 2 packet.

4. The OTV edge device at Site B performs another Layer 2 lookup on the original Ethernet frame. This time, MAC 3 points to a local physical interface, and the frame is delivered to the MAC 3 destination.

Figure 4-3 Intersite Layer 2 Unicast Communication

Figure 4-4 shows the OTV data plane encapsulation on the original Ethernet frame. The OTV encapsulation increases the overall MTU size by 42 bytes.

Figure 4-4 OTV Data Plane Encapsulation

All OTV control and data plane packets originate from an OTV edge device with the Don't Fragment (DF) bit set. In a Layer 2 domain, the assumption is that all intermediate LAN segments support at least the configured interface MTU size of the host. This means that mechanisms like Path MTU Discovery (PMTUD) are not an option in this case.

Note
The Nexus 7000 doesn't support fragmentation and reassembly capabilities.

That means you need to accommodate the additional 42 bytes in the MTU size along the path between the source and destination OTV edge devices. There are two ways to overcome the issue of the increased MTU size (a sketch of the first option follows this list):

Configure a larger MTU on all interfaces where traffic will be encapsulated, including the join interface and any links between the data centers that are in the OTV transport.
Lower the MTU on all servers so that the total packet size does not exceed the MTU of the interfaces where traffic is encapsulated.
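As a sketch of the first option, assuming jumbo-capable transit links end to end, a jumbo MTU could be set on the join interface (the interface is taken from the later configuration example; 9216 is simply a common jumbo value):

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# mtu 9216

Every Layer 3 hop in the transport between the OTV join interfaces would need a comparable MTU for this to be effective.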

Data Plane for Multicast Traffic
You might be faced with situations in which there is a need to establish multicast communication between remote sites. This can happen when the multicast sources are deployed in one site and the multicast receivers are in another remote site. The multicast traffic needs to flow across the OTV overlay. In the OTV control plane section, we distinguished between two scenarios, where the transport infrastructure is multicast enabled or not. When the transport is multicast enabled, we use a set of Source Specific Multicast (SSM) groups in the transport to carry the Layer 2 multicast streams. These groups are different from the ASM groups used for the OTV control plane. The mapping happens between the site multicast groups and the SSM group range in the transport infrastructure.

Step 1 starts when a multicast source is activated in a given data center site, which is explained in the following steps and shown in Figure 4-5:

1. A multicast source is activated on the west side and starts streaming traffic to the multicast group Gs.

2. The local OTV edge device receives the first multicast frame and creates a mapping between the group Gs and a specific SSM group Gd available in the transport infrastructure. The range of SSM groups to be used to carry Layer 2 multicast data streams is specified during the configuration of the overlay interface.

3. The OTV device sends an OTV control protocol message to all the remote edge devices to communicate this information; in other words, the OTV control protocol is used to communicate the Gs-to-Gd mapping to all remote OTV edge devices. The mapping information specifies the VLAN (VLAN A) to which the multicast source belongs and the IP address of the OTV edge device that created the mapping.

Figure 4-5 Mapping of the Site Multicast Group to the SSM Range

Step 2 starts when a receiver in the remote site, deployed in the same VLAN as the multicast source, decides to join the multicast group Gs, which is explained in the following steps and shown in Figure 4-6:

1. The client, belonging to VLAN A, sends an IGMP report inside the east site to join the Gs group.

2. The OTV edge device snoops the IGMP message and realizes there is an active receiver in the site interested in group Gs.

3. The edge device in the east side finds the mapping information previously received from the OTV edge device in the west side, identified by the IP address IP A.

4. The east edge device, in turn, sends an IGMPv3 report to the transport to join the (IP A, Gd) SSM group. This allows building an SSM tree (group Gd) across the transport infrastructure that can be used to deliver the multicast stream Gs.

5. The remote edge device in the west side receives the GM-Update and updates its Outgoing Interface List (OIL) with the information that group Gs needs to be delivered across the OTV overlay.

Figure 4-6 Receiver Joining Multicast Group Gs

When the OTV transport infrastructure is not multicast enabled, Layer 2 multicast traffic can be sent across the overlay using head-end replication by the OTV edge device deployed in the DC where the multicast source is enabled. IGMP snooping is used to ensure that Layer 2 multicast traffic is sent to the remote DC only when an active receiver exists. This helps to reduce the amount of head-end replication.

Failure Isolation
Data center interconnect solutions should provide a Layer 2 extension between multiple data center sites without giving up resilience and stability. OTV achieves this goal by providing four main functions: Spanning Tree Protocol (STP) isolation, unknown unicast traffic suppression, ARP suppression, and broadcast policy control. This section explains each of these functions.

STP Isolation
OTV doesn't forward STP BPDUs across the overlay. This is native in OTV; you don't need to add any configuration to achieve it. This control plane feature makes each site's STP domain independent: the STP root placement configuration and parameters are decided for each site separately.

Broadcast traffic is handled the same way as the OTV hello messages. When we have a multicast-enabled transport, we leverage the same ASM multicast groups in the transport already used for the OTV control protocol. For a unicast-only transport infrastructure, head-end replication performed on the OTV device in the site originating the broadcast ensures traffic delivery to all the remote OTV edge devices that are part of the unicast-only list.

Note
Cisco NX-OS 6.2(2) introduces a new optional feature called dedicated broadcast group. This feature enables the administrator to configure a dedicated broadcast group to be used by OTV in the multicast core network. By separating the two multicast addresses, you can now enforce different QoS treatments for data traffic and control traffic.
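A minimal sketch of the dedicated broadcast group, assuming NX-OS 6.2(2) or later; the group address is illustrative, and the exact syntax may vary by release:

OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if-overlay)# otv broadcast-group 239.1.1.2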

Unknown Unicast Handling
The OTV control protocol advertises MAC address reachability information between OTV edge devices. If the MAC address is in a remote data center, the OTV control protocol maps the MAC address to the destination IP address of the join interface of the remote data center. When the OTV edge device receives a frame destined to a MAC address that is not in the MAC address table, the Layer 2 traffic is flooded out the internal interfaces. This is the normal behavior of a classical Ethernet interface; however, that doesn't happen via the overlay. By default on the Nexus 7000, all unknown unicast frames are not forwarded across the logical overlay. It is assumed that no silent hosts are available and that eventually the OTV edge device will learn the MAC address of the host and advertise it to the other OTV edge devices on the overlay. As a workaround, you can statically add the MAC address of the silent host in the OTV edge device MAC address table.

Starting with NX-OS 6.2(2), selective unknown unicast flooding based on the MAC address is supported. The command otv flood mac mac-address vlan vlan-id must be added. This feature is important, for example, in the case of Microsoft Network Load Balancing Services (NLBS), which requires the flooding of Layer 2 traffic to function. A hypothetical example is shown after this section.

ARP Optimization
ARP optimization is one of the native features in OTV that helps to reduce the amount of traffic sent across the transport infrastructure. When a device in one of the data centers, such as Site A, sends out an ARP request, which is a broadcast message, this request is sent across the OTV overlay to Site B. The ARP request eventually reaches the destination machine, which in turn creates an ARP reply message and sends it back to the originating host. The OTV edge device in Site A can snoop the ARP reply and cache the contained mapping information. When a subsequent ARP request originates for the same IP address in Site A, it will not be forwarded across the OTV overlay but will be locally answered by the OTV edge device.

Note
For the ARP caching, consider the interaction between the ARP and CAM table aging timers, because incorrect settings can lead to traffic black-holing. The ARP aging timer on the OTV edge devices should always be set lower than the CAM table aging timer. By default on the Nexus 7000, the OTV ARP aging timer is 480 seconds and the MAC aging timer is 1800 seconds. If the host's default gateway is placed on a device other than the Nexus 7000, you should set the ARP aging timer of that device to a value lower than its MAC aging timer.
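For example, to selectively flood traffic toward a hypothetical silent or NLBS host in VLAN 10 (the MAC address here is purely illustrative):

OTV-VDC-A(config)# otv flood mac 0000.1111.2222 vlan 10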

OTV Multihoming
Multihoming is a native function in OTV where two or more OTV edge devices provide a LAN extension for a given site. Having more than one OTV edge device at a given site can lead to an end-to-end loop, because OTV doesn't send spanning-tree BPDUs across the overlay. To be able to support multihoming, and to avoid creating an end-to-end loop, an Authoritative Edge Device (AED) concept is introduced. The AED role is elected on a per-VLAN basis at a given site. OTV uses a VLAN called the site VLAN within a site to detect adjacency with other OTV edge devices. In addition to the site adjacency, OTV maintains a second adjacency, named overlay adjacency, which happens via the OTV join interface across the Layer 3 network domain. To enable this functionality, the OTV device must be configured with a site-identifier value. All OTV edge devices in the same site are configured with the same site-identifier value. The site identifier is advertised through IS-IS hello packets. The combination of the system ID and the site identifier is used to identify a neighbor edge device belonging to the same site.

First Hop Redundancy Protocol Isolation
Another capability introduced by OTV is filtering First Hop Redundancy Protocol (FHRP) messages, such as HSRP and VRRP, across the overlay. Because the same VLANs exist in multiple sites, the same default gateway is available in each data center, and it is characterized by the same virtual IP and virtual MAC addresses. It is important that you enable the filtering of FHRP messages across the overlay; if we don't filter the hello messages between the sites, it will lead to the election of a single default gateway for a given VLAN, which in turn leads to a suboptimal path for the outbound traffic. Filtering is necessary to localize the gateway at a given site and optimize the outbound traffic from that site, and it allows the use of the same FHRP configuration in different sites. In this case, the outbound traffic will be able to follow the optimal and shortest path, always using the local default gateway. A hedged filtering sketch follows.
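One commonly documented approach is a VLAN ACL that drops HSRP hellos on the OTV VDC while forwarding everything else. This sketch covers HSRPv1/v2 only (VRRP uses different addresses and protocol numbers); the ACL and access-map names are illustrative, and the VLAN list matches the extended range used later in this chapter:

OTV-VDC-A(config)# ip access-list HSRP_IP
OTV-VDC-A(config-acl)# 10 permit udp any 224.0.0.2/32 eq 1985
OTV-VDC-A(config-acl)# 20 permit udp any 224.0.0.102/32 eq 1985
OTV-VDC-A(config)# ip access-list ALL_IPs
OTV-VDC-A(config-acl)# 10 permit ip any any
OTV-VDC-A(config)# vlan access-map HSRP_Localization 10
OTV-VDC-A(config-access-map)# match ip address HSRP_IP
OTV-VDC-A(config-access-map)# action drop
OTV-VDC-A(config)# vlan access-map HSRP_Localization 20
OTV-VDC-A(config-access-map)# match ip address ALL_IPs
OTV-VDC-A(config-access-map)# action forward
OTV-VDC-A(config)# vlan filter HSRP_Localization vlan-list 5-10

A complete design would typically also prevent the FHRP virtual MAC from being advertised by the OTV control plane; consult the platform documentation for the release in use.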

OTV Configuration Example with Multicast Transport
This configuration example addresses OTV deployment at the aggregation layer with a multicast transport infrastructure. VLANs 5 through 10 will be extended across the overlay, with VLAN 15 used as the site VLAN. Examples 4-1 through 4-5 use a dedicated VDC to perform the OTV function. It is assumed the VDC is already created, so you will not see steps to create the Nexus 7000 VDC. Figure 4-7 shows the topology used.

Figure 4-7 OTV Topology for Multicast Transport

Example 4-1 Sample Configuration for How to Enable OTV VDC VLANs
Click here to view code image

OTV-VDC-A(config)# vlan 5-10, 15

Example 4-2 Sample Configuration of the Join Interface
Click here to view code image

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# description [ OTV Join-Interface ]
OTV-VDC-A(config-if)# ip address 172.26.255.98/30
OTV-VDC-A(config-if)# ip igmp version 3
OTV-VDC-A(config-if)# no shutdown

Example 4-3 Sample Configuration of the Point-to-Point Interface on the Aggregation Layer Facing the Join Interface
Click here to view code image

AGG-VDC-A(config)# interface ethernet 2/1
AGG-VDC-A(config-if)# description [ Connected to OTV Join-Interface ]
AGG-VDC-A(config-if)# ip address 172.26.255.99/30
AGG-VDC-A(config-if)# ip pim sparse-mode
AGG-VDC-A(config-if)# ip igmp version 3
AGG-VDC-A(config-if)# no shutdown

Example 4-4 Sample PIM Configuration on the Aggregation VDC
Click here to view code image

AGG-VDC-A# show running pim
feature pim
interface ethernet 2/1
  ip pim sparse-mode
interface ethernet 1/10
  ip pim sparse-mode
interface ethernet 1/20
  ip pim sparse-mode
ip pim rp-address 172.26.255.101
ip pim ssm range 232.0.0.0/8

Example 4-5 Sample Configuration of the Internal Interfaces
Click here to view code image

interface ethernet 2/10
  description [ OTV Internal Interface ]
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 5-10, 15
  no shutdown

Before creating and configuring the overlay interface, certain configurations must be done. The following shows a sample configuration for the overlay interface for a multicast-enabled transport infrastructure:

1. Enable the OTV feature; otherwise, the device doesn't display any OTV commands:
Click here to view code image
OTV-VDC-A(config)# feature otv

2. Configure the OTV site identifier. (It should be the same for all the OTV edge devices belonging to the same site.)
Click here to view code image
OTV-VDC-A(config)# otv site-identifier 0x1

3. Configure the OTV site VLAN:
Click here to view code image
OTV-VDC-A(config)# otv site-vlan 15

4. Create and configure the overlay interface:
Click here to view code image
OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv join-interface ethernet 2/2

5. Configure the OTV control group:
Click here to view code image
OTV-VDC-A(config-if)# otv control-group 239.1.1.1

6. Configure the OTV data group:

Click here to view code image
OTV-VDC-A(config-if)# otv data-group 232.1.1.0/26

7. Configure the VLAN range to be extended across the overlay:
Click here to view code image
OTV-VDC-A(config-if)# otv extend-vlan 5-10

8. Create a static default route pointing out the join interface. This is required to allow communication from the OTV VDC to the Layer 3 network domain:
Click here to view code image
OTV-VDC-A(config)# ip route 0.0.0.0/0 172.26.255.99

9. Bring up the overlay interface; it is shut down by default:
Click here to view code image
OTV-VDC-A(config-if)# no shutdown

Example 4-6 Verify That the OTV Edge Device Joined the Overlay
Click here to view code image

OTV-VDC-A# sh otv overlay 1
OTV Overlay Information
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Control group       : 239.1.1.1
 Data group range(s) : 232.1.1.0/26
 Join interface(s)   : Eth2/2 (172.26.255.98)
 Site vlan           : 15 (up)

OTV-VDC-A# show otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay1 :
Hostname     System-ID       Dest Addr       Up Time   Adj-State
OTV-VDC-C    0022.5579.36c2  172.26.255.90   1w0d      UP
OTV-VDC-D    0022.5579.7c42  172.26.255.94   1w0d      UP

Example 4-7 Check the MAC Addresses Learned Through the Overlay
Click here to view code image

OTV-VDC-A# show otv route
OTV Unicast MAC Routing Table For Overlay1
VLAN  MAC-Address     Metric  Uptime    Owner     Next-hop(s)
----  --------------  ------  --------  --------  -------------
10    0000.0c07.ac64  1       00:00:55  site      ethernet 2/10
10    001b.54c2.3dc1  1       00:00:55  overlay   OTV-VDC-C
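Beyond Examples 4-6 and 4-7, the per-VLAN AED status for the extended VLANs can be checked with the show otv vlan command (shown here as a sketch; the output format varies by NX-OS release):

OTV-VDC-A# show otv vlan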

OTV Configuration Example with Unicast Transport
Examples 4-8 through 4-13 show sample configurations for an OTV deployment in a unicast-only transport. The first step is to define the role of the adjacency server on one of the OTV edge devices. It is always recommended to configure a secondary adjacency server in another site for redundancy. The second step is to do the required configuration on the OTV edge devices not acting as an adjacency server, meaning those acting as clients. The client devices are configured with the address of the adjacency server. When a new site is added, only the OTV edge devices for the new site need to be configured with adjacency server addresses. Figure 4-8 shows the topology used in Examples 4-8 through 4-13.

Figure 4-8 OTV Topology for Unicast Transport

Example 4-8 Primary Adjacency Server Configuration
Click here to view code image

feature otv
otv site-identifier 0x1
otv site-vlan 15
interface Overlay1
  otv join-interface e2/2
  otv adjacency-server unicast-only
  otv extend-vlan 5-10

Example 4-9 Secondary Adjacency Server Configuration
Click here to view code image

feature otv
otv site-identifier 0x2
otv site-vlan 15
interface Overlay1
  otv join-interface e1/2
  otv adjacency-server unicast-only
  otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 5-10

Example 4-10 Client OTV Edge Device Configuration
Click here to view code image

feature otv

otv site-identifier 0x3
otv site-vlan 15
interface Overlay1
  otv join-interface e1/1
  otv use-adjacency-server 10.1.1.1 10.2.2.2 unicast-only
  otv extend-vlan 5-10

Example 4-11 Identifying the Role of the Primary Adjacency Server
Click here to view code image

Primary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0001
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E2/2 (10.1.1.1)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : [None] / [None]

Example 4-12 Identifying the Role of the Secondary Adjacency Server
Click here to view code image

Secondary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0002
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/2 (10.2.2.2)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : 10.1.1.1 / [None]

Example 4-13 Identifying the Role of the Client OTV Edge Device
Click here to view code image

Generic_OTV# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0003
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/1 (10.3.3.3)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : No
 Adjacency Server(s) : 10.1.1.1 / 10.2.2.2

Summary
In this chapter, we went through the fundamentals of OTV and how it can be used as a data center interconnect solution. When it comes to Layer 2 extension solutions, OTV is the latest innovation. OTV uses a concept called MAC address routing. It provides native multihoming and end-to-end loop prevention, and it is simple to configure. You can run OTV over any transport infrastructure; the only requirement is IP connectivity between the sites. With a few command lines, you can extend Layer 2 between multiple sites.

Exam Preparation Tasks
Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 4-2 lists a reference of these key topics and the page numbers on which each is found.

Table 4-2 Key Topics for Chapter 4

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

overlay transport virtualization (OTV)
Data Center Interconnect (DCI)
Ethernet over Multiprotocol Label Switching (EoMPLS)
Multiprotocol Label Switching (MPLS)
Virtual Private LAN Services (VPLS)
coarse wavelength-division multiplexing (CWDM)
Spanning Tree Protocol (STP)
Authoritative Edge Device (AED)
virtual LAN (VLAN)
bridge protocol data unit (BPDU)

Media Access Control (MAC)
Address Resolution Protocol (ARP)
Virtual Router Redundancy Protocol (VRRP)
First Hop Redundancy Protocol (FHRP)
Hot Standby Router Protocol (HSRP)
Internet Group Management Protocol (IGMP)
Protocol Independent Multicast (PIM)

Chapter 5. Management and Monitoring of Cisco Nexus Devices

This chapter covers the following exam topics:

Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Management and monitoring of network devices is an important element of data center operations. This includes initial configuration of devices and ongoing maintenance of all the components. Operational efficiency is an important business priority because it helps in reducing the operational expenditure (OPEX) of the data center. An efficient and rich management and monitoring interface enables a network operations team to effectively operate and maintain the data center network. The Cisco Nexus platform provides a rich set of management and monitoring mechanisms.

This chapter provides an overview of the operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. It also identifies the mechanisms available in the Nexus platform to protect the control plane of the switch, and it provides an overview of out-of-band and in-band management interfaces. The Nexus platform provides several methods for device configuration and management. These methods are discussed with some important commands for initial setup, configuration, and verification.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 5-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 5-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following operational planes of a switch is responsible for configuring and monitoring the switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

2. What is the function of the data plane in a switch?
a. It forwards user traffic.
b. It maintains the routing table.
c. It stores user data on the switch.
d. It maintains the mac-address-table.

3. Which of the following operational planes is responsible for maintaining the routing and switching tables of a multilayer switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

4. Select the protocols that belong to the control plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

5. Select the protocols that belong to the management plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

6. The initial setup script of a Nexus switch performs the startup configuration of a system. Which of the following parameters are configurable by the initial setup script?
a. Switch name
b. mgmt0 IP address and subnet mask
c. Admin user password
d. IP address of loopback 0
e. Default route in the global routing table

7. When you execute the write erase command, it deletes the startup configuration of the switch. Which of the following parameters are not deleted by this command?
a. Boot variable
b. IPv4 configuration on mgmt0 interface
c. Switch name
d. Admin user password

8. To upgrade Nexus switch software, which of the following methods can be used?
a. From CLI using the install all command
b. From a web browser using HTTP protocol
c. Using DCNM software
d. From NMS server using SNMP

9. When using ISSU to upgrade the Nexus 7000 switch software, which of the following components can be upgraded? (Choose all that apply.)
a. Kickstart image
b. Supervisor BIOS
c. System image
d. FEX image
e. I/O module BIOS and image

10. Identify the conditions in which In Service Software Upgrade (ISSU) cannot be performed on a Nexus 5000 switch.
a. FEX is connected to the switch.
b. vPC is configured on the switch.
c. Link Aggregation Control Protocol (LACP) fast timers are configured.
d. An application on the switch has Cisco Fabric Services (CFS) lock enabled.

11. You are configuring a Cisco Nexus switch and would like to create a VLAN interface for VLAN 10. However, when you type the interface vlan 10 command, you get an error message. What could be the possible reason for this error?
a. The command syntax is incorrect.
b. First, you must create VLAN 10 on the switch.
c. First, enable the feature interface-vlan, and then create the SVI.
d. The appropriate software license is missing.

12. Which of the following are examples of logical interfaces on a Nexus switch?
a. mgmt0
b. E1/1
c. Loopback
d. Port channel
e. SVI

13. You want to check the current configuration of a Nexus switch, with all the default configuration parameters. Which of the following commands is used to perform this task?
a. show running-config default
b. show startup-config
c. show running-config all
d. show running-config

14. You want to check the last 10 entries of the log file. Which command do you use to perform this task?
a. show logging
b. show logging last 10
c. show last 10 logging
d. show logging 10

15. You want to see the list of interfaces that are currently in the up state. Which command will show you a brief list of interfaces in the up state?
a. show interface up brief
b. show interface brief | include up
c. show interface status
d. show interface connected

Foundation Topics
Operational Planes of a Nexus Switch
The data center network is built using multiple network nodes or devices, connected to each other and configured in different topologies. These network nodes are switches and routers. Typically, these network devices talk to each other using different control protocols to find the best network path. The purpose of building such network controls is to provide reliable data transport for the network endpoints. These network endpoints are servers and workstations. When a network endpoint sends a packet (data), it is switched by these network devices based on the best path they learned using the control protocols. In a large network with hundreds of these devices, it is important to build a management framework to monitor and manage them. This simple explanation of a network node leads to its three operational components: the data plane, the control plane, and the management plane. Figure 5-1 shows the operational planes of a Nexus switch in a data center.

Figure 5-1 Operational Planes of Nexus

Data Plane
The data plane is also called the forwarding plane. This component of a network device receives endpoint traffic from a network interface and decides what to do with this traffic. The action could be to forward the traffic to an outgoing interface, drop the traffic, or send it to the control plane for further processing. It processes incoming traffic by looking at the destination address in its forwarding table and deciding on an appropriate action. In a typical Layer 2 switch, the forwarding decisions are based on the destination MAC address of data packets. The advancements in integrated circuit technologies allowed network device vendors to move the Layer 2 forwarding decision from Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) processors to Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA). The earlier method requires the packet to be stored in memory before a forwarding decision can be made; however, with ASICs and FPGAs, packets can be switched without storing them in memory, thereby improving the data plane latency of network switches to tens of microseconds. This approach is helpful in reducing the packet-handling time within the network device.

Store-and-Forward Switching
In store-and-forward switches, data packets received from an incoming interface must be completely stored in switch memory or ASIC before being sent to an outgoing interface. Following are a few characteristics of store-and-forward switching:

Error Checking: Store-and-forward switches receive a frame completely and analyze it using the frame-check-sequence (FCS) calculation to make sure the frame is free from data-link errors. The switch then performs the forwarding process.
Buffering: The process of storing and then forwarding enables the switch to handle various networking conditions.

Access Control List: Because a store-and-forward switch stores the entire packet, the switch can examine the necessary portions to permit or deny that frame.

The store-and-forward switch architecture is simple because it is easier to forward a stored packet.

Cut-Through Switching
In the cut-through switching mode, switches start forwarding a frame before the frame has been completely received. The forwarding decision is made as soon as the switch sees the portion of the frame with the destination address. This technique reduces the network latency. Following are a few characteristics of cut-through switching:

Low Latency Switching: A cut-through switch can make a forwarding decision as soon as it has looked up the DMAC address of the data packet. The switch does not have to wait for the rest of the packet to make its forwarding decision.
Invalid Packets: A cut-through switch starts forwarding the packet before receiving it completely and making sure it is valid. Therefore, packets with errors are forwarded to other segments of the network; cut-through switching only flags but does not drop invalid packets.
Egress Port Congestion: If a cut-through forwarding decision is made but the egress port is busy transmitting a frame coming in from another interface, the switch needs to buffer this packet. In this case, the frame is not forwarded in a cut-through fashion.

Nexus 5500 Data Plane Architecture
The data plane architecture of Nexus switches varies based on the hardware platform; however, the basic principles of data plane operation are the same. The Cisco Nexus 5500 switch data plane architecture is primarily based on two custom-built ASICs developed by Cisco:

Unified Port Controller (UPC): It provides data plane packet processing on ingress and egress interfaces. Each UPC manages eight ports, and each port has a dedicated data path. Each data path connects to the Unified Crossbar Fabric (UCF) via a dedicated fabric interface at 12 Gbps; for a 10 Gbps port, this provides a 20 percent over-speed rate, which helps ensure line-rate throughput regardless of the internal packet overhead imposed by the ASICs. The ingress buffering process provides the flexibility to support any mix of Ethernet speeds.
Unified Crossbar Fabric (UCF): It provides a switching fabric by cross-connecting the UPCs. It is a single-stage, high-performance, nonblocking crossbar with an integrated scheduler. The scheduler coordinates the use of the crossbar for a contention-free match between input and output pairs. In the Nexus 5500, an enhanced iSLIP algorithm is used for scheduling, which ensures high throughput, low latency, and weighted fairness across all inputs. The UCF is always used to schedule and switch packets between the ports of the UPCs.

Figure 5-2 shows the Nexus 5500 data plane architecture.

Figure 5-2 Nexus 5500 Data Plane

The Nexus 5500 utilizes both cut-through and store-and-forward switching. Cut-through switching can be performed only when the ingress data rate is equivalent to or faster than the egress data rate. The switch fabric is designed to forward 10 G packets in cut-through mode, whereas 1 G to 1 G switching is performed in store-and-forward mode. Table 5-2 shows the interface types and modes of switching.

Table 5-2 Nexus 5500 Switching Modes

Control Plane

The control plane maintains the information necessary for the data plane to operate. This information is collected and computed using complex protocols and algorithms, and it is then processed to create the tables that help the data plane operate. The control plane of a network node can speak to the control plane of its neighbor to share routing and switching information. Routing protocols such as OSPF, EIGRP, or BGP are examples of control plane protocols; however, the control plane is not limited to only routing protocols. The control plane performs several other functions, including:

Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
FabricPath

Figure 5-3 shows the Layer 2 and Layer 3 control plane protocols of a Nexus switch.

Figure 5-3 Control Plane Protocols

Nexus 5500 Control Plane Architecture
The control plane architecture is similar across almost all Nexus switches; however, the configuration of these components differs based on the platform and hardware version. To understand the control plane of a Nexus switch, let's look at the Cisco Nexus 5548P switch. The control plane of this switch runs Cisco NX-OS software on a dual-core 1.7-GHz Intel Xeon Processor C5500/C3500 Series with 8 GB of DRAM. The supervisor complex is connected to the data plane in-band through two internal ports running 1-Gbps Ethernet, and the system is managed in-band or through the out-of-band 10/100/1000-Mbps management port. Table 5-3 summarizes the control plane specifications of a Nexus 5500 switch.

Figure 5-4 Control Plane of Nexus 5500 Cisco NX-OS software provides isolation between control and data forwarding planes within the device.Table 5-3 Nexus 5500 Control Plane Specifications Figure 5-4 shows the control plane of a Nexus 5500 switch. Control Plane Policing . This isolation means that a disruption within one plane doe not disrupt the other.

The Control Plane Policing (CoPP) feature prevents unnecessary traffic from overwhelming the control plane resources. The purpose of CoPP is to protect the control plane of the switch from anomalies within the data plane, such as a broadcast storm, or from a denial of service (DoS) attack. The supervisor module of the Nexus switch performs both the management plane and the control plane functions; therefore, it is a critical element for overall network stability and availability. Hence, protecting it ensures network stability, reachability, and packet delivery. Any disruption or attack on the supervisor module can bring down the control and management planes of the switch, resulting in serious network outages. For example, a high volume of traffic from the data plane to the supervisor module could overload and slow down the performance of the entire switch. Another example is a denial of service (DoS) attack on the supervisor module: the attacker can generate IP traffic streams to the control plane or management plane at a very high rate. In this case, the supervisor module spends a large amount of time handling these malicious packets, preventing the control plane from processing genuine traffic. Some examples of DoS attacks are outlined next:

Internet Control Message Protocol (ICMP) echo requests
IP fragments
TCP SYN flooding

If the control plane of the switch is impacted by excessive traffic, you will observe the following symptoms on the network:

High CPU utilization
Route flaps due to loss of the routing protocol neighbor relationship, updates, or keepalives
Unstable Layer 2 topology
Reduced service quality, such as poor voice, video, or a slow response for applications
Slow or unresponsive interactive sessions with the CLI
Processor resource exhaustion, such as the memory and buffers
Indiscriminate drops of incoming packets

Different types of packets can reach the control plane:

Receive packets: Packets that have the destination address of a router. These packets include router updates and keepalive messages.
Exception packets: Packets that need special handling by the supervisor module. This includes an ICMP unreachable packet and a packet with IP options set.
Redirected packets: Packets that are redirected to the supervisor module. Features such as Dynamic Host Configuration Protocol (DHCP) snooping or dynamic Address Resolution Protocol (ARP) inspection redirect some packets to the supervisor module.
Glean packets: If a Layer 2 MAC address for a destination IP address is not present in the FIB, the supervisor module receives the packet and sends an ARP request to the host.

CoPP classifies the packets into different classes and provides a mechanism to individually control the rate at which the supervisor module receives these packets. The CoPP feature enables a policy map to be applied to the supervisor-bound traffic. This policy map is similar to a normal QoS policy and is applied to all traffic entering the switch from a nonmanagement port.

Packet classification can be done using the following parameters:

Source and destination IP address
Source and destination MAC address
VLAN
Source and destination port
Exception cause

After the packets are classified, the rate at which packets arrive at the supervisor module can be controlled by two mechanisms. One is called policing and the other is called rate limiting. Policing is the monitoring of data rates and burst sizes for a particular class of traffic. Single-rate policers monitor the specified committed information rate (CIR) of traffic; dual-rate policers monitor both the CIR and the peak information rate (PIR) of traffic. You can configure the following parameters for policing:

Committed information rate (CIR): This is the desired bandwidth.
Peak information rate (PIR): The rate above which data traffic is negatively affected.
Committed burst (BC): The size of a traffic burst that can exceed the CIR within a given unit of time.
Extended burst (BE): The size that a traffic burst can reach before all traffic exceeds the PIR.

A policer determines three colors or conditions for traffic: conform (green), exceed (yellow), and violate (red). You can define only one action for each condition. The actions can be to transmit the packet, mark down the packet, or drop the packet.

The setup utility allows building an initial configuration file using the system configuration dialog. After the initial setup, the Cisco NX-OS software installs the default CoPP system policy to protect the supervisor module from DoS attacks. Choosing one of the following CoPP policy options from the initial setup sets the level of protection for the control plane of the switch:

Strict: This policy is 1 rate and 2 color and has a BC value of 250 ms (except for the important class, which has a value of 1000 ms).
Moderate: This policy is 1 rate and 2 color and has a BC value of 310 ms (except for the important class, which has a value of 1250 ms). These values are 25 percent greater than the strict policy.
Lenient: This policy is 1 rate and 2 color and has a BC value of 375 ms (except for the important class, which has a value of 1500 ms). These values are 50 percent greater than the strict policy.
Dense: This policy is 1 rate and 2 color. The classes critical, important, management, normal, normal-dhcp, normal-dhcp-relay-response, and monitoring have a BC value of 1000 ms. The classes exception, redirect, undesirable, l2-default, and default have a BC value of 250 ms. The class l2-unpoliced has a BC value of 5 MB.
Skip: No control plane policy is applied. (In Cisco NX-OS releases prior to 5.2, this option is named none.)

If you do not select a policy, the NX-OS software applies the strict policy. The Cisco Nexus device hardware performs CoPP on a per-forwarding-engine basis; CoPP does not support distributed policing. Therefore, you should choose rates so that the aggregate traffic does not overwhelm the supervisor module.
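On supported Nexus 7000 releases, the CoPP profile chosen at setup can later be changed and verified from the default VDC. A brief hypothetical session (profile names as documented by Cisco):

N7K(config)# copp profile strict
N7K# show copp status
N7K# show policy-map interface control-plane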

Control Plane Analyzer
Ethanalyzer is a Cisco NX-OS built-in protocol analyzer, based on the command-line version of Wireshark. You can use Ethanalyzer to troubleshoot your network by capturing and decoding the control-plane traffic. The packets captured by Ethanalyzer are only the control plane traffic destined to the supervisor CPU; it does not capture hardware-switched traffic between the data ports of the switch. This tool is useful for troubleshooting problems related to the switch itself, and network administrators can learn about the amount and nature of the control-plane traffic within the switch. Ethanalyzer enables you to perform the following functions:

Capture packets sent to and received by the switch supervisor CPU
Set the number of packets to be captured
Set the length of the packets to be captured
Display packets with either very detailed protocol information or a one-line summary
Open and save the packet data captured
Filter packet captures based on many criteria
Filter packet displays based on many criteria
Decode the internal header of the control packet

Following are the key benefits of Ethanalyzer:

It is part of the NX-OS integrated management tool set.
It improves troubleshooting and reduces time to resolution.
It preserves and improves the operational continuity of the network infrastructure.

A hypothetical capture session is sketched after this list.
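The following sketch illustrates the functions above: capturing a limited number of supervisor-bound packets on the inband interface and filtering what is captured or displayed. Capture filters use tcpdump-style syntax, while display filters use Wireshark syntax; the filter values here are illustrative:

N7K# ethanalyzer local interface inband limit-captured-frames 50
N7K# ethanalyzer local interface inband capture-filter "udp port 67" detail
N7K# ethanalyzer local interface mgmt display-filter "icmp" limit-captured-frames 10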

Management Plane
The management plane of a network node deals with the configuration and monitoring of the control plane, and it is used in the day-to-day administration of the network node. It provides an interface to the network administrators (see Figure 5-5) and NMS tools to perform actions such as the following:

Configure the network node using the command-line interface (CLI).
Configure the network node using network management protocols such as Simple Network Management Protocol (SNMP) and Network Configuration Protocol (NETCONF).
Monitor network statistics by collecting health and utilization data from the network node using network management protocols such as SNMP.
Manage the network node using an element manager such as Data Center Network Manager (DCNM). DCNM leverages the XML APIs of the Nexus platform.
Program the network node using One Platform Kit (onePK), application program interfaces (APIs), Representational State Transfer (REST), JavaScript Object Notation (JSON), and the guest shell with support for Python.

Figure 5-5 Management Plane of Nexus

Nexus Management and Monitoring Features
The Nexus platform provides a wide range of management and monitoring features. Some of these tools are discussed in the following section. Figure 5-6 shows an overview of the management and monitoring tools.

Figure 5-6 Nexus Management and Monitoring Tools

Out-of-Band Management
In a data center network, out-of-band (OOB) management means that management traffic traverses a dedicated path in the network. The purpose of creating an out-of-band network is to increase the security and availability of management capabilities within the data center.

This network is dedicated to management traffic only and is completely isolated from the user traffic. In a data center, dedicated switches, routers, and firewalls are deployed to create an OOB network. This network provides connectivity to management tools, the management ports of network devices, and the ILO ports of servers. OOB management enables a network administrator to access the devices in the data center for routine changes, monitoring, and troubleshooting without any dependency on the network built for user traffic. The advantage of this approach is that during a network outage, network operators can still reach the devices using the OOB management network. Because this network is built for management of data center devices, it is important to secure it so only administrators can access it. In Figure 5-7, network administrator 1 is using OOB network management, and network administrator 2 is using in-band network management. In-band management is discussed later in this section.

Figure 5-7 Out-of-Band Management

Cisco Nexus switches can be connected to the OOB management network using the following methods:

Console port
Connectivity Management Processor (CMP)
Management port

Console Port
The console port is an asynchronous serial port used for initial switch configuration. It provides an RS-232 connection with an RJ-45 interface. It is recommended to connect all console ports of the devices to a terminal or comm server router, such as a Cisco 2900, for remotely connecting to the console port of devices. The remote administrator connects to the terminal server router and performs a reverse telnet to connect to the console port of the device. Figure 5-8 shows remote access to the console port via the OOB network.

Figure 5-8 Console Port

Connectivity Management Processor
For management high availability, the Cisco Nexus 7000 Series switches have a dedicated management processor known as the connectivity management processor (CMP). The CMP provides OOB management and monitoring capability independent from the primary operating system. The CMP enables lights-out remote monitoring and management of the Cisco Nexus 7000 Series system, its supervisor module, and all other modules without the need for separate terminal servers. Note that the CMP is available only in the Nexus 7000 Supervisor Engine 1. Key features of the CMP include the following:

It provides a dedicated operating environment independent of the switch processor.
It provides monitoring of supervisor status and initiation of resets.
It provides the capability to initiate a complete system restart.
It provides complete visibility of the system during the entire boot process.
It provides the capability to take complete console control of the supervisor.
It provides access to critical log information on the supervisor.

You can access the CMP from the active supervisor module. Before you begin, ensure that you are in the default virtual device context (VDC). Figure 5-9 shows how to connect to the CMP from the active supervisor module.
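As a sketch of that connection, valid only on Supervisor Engine 1 systems, the CMP console is reached from the default VDC with a single command:

N7K# attach cmp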

Figure 5-9 Connectivity Management Processor

Management Port (mgmt0)
The management port of the Nexus switch is an Ethernet interface that provides OOB management connectivity. The management port is also known as the mgmt0 interface. It enables you to manage the device using an IPv4 or IPv6 address. This port is part of the management VRF on the switch and supports speeds of 10/100/1000 Ethernet. The OOB management interface (mgmt0) can be used to manage all VDCs. Each VDC has its own representation of mgmt0 with a unique IP address from a common management subnet for the physical switch, which can be used to send syslog, SNMP, and other management information. OOB connectivity via the mgmt0 port is shown earlier in Figure 5-7.

In-band Management
In-band management utilizes the same network as user data to manage network devices. In this method, switches are configured with an in-band IP address on a routed interface, SVI, or loopback interface that is reachable via the same path as user data. The network administrator manages the switch using this in-band IP address. There is no need to build a separate OOB management network; however, if the network is down, in-band management will also not work.

In-band management is done using the vty lines of a Cisco NX-OS device. A vty line is used for all remote network connections supported by the device, regardless of protocol (SSH, SCP, and Telnet are examples). The administrator uses Secure Shell (SSH) or Telnet to the vty lines to create a virtual terminal session. To help ensure that a device can be accessed only through an authorized local or remote management session, proper controls must be enforced on the vty lines. Cisco NX-OS devices have a limited number of vty lines; by default, up to 16 concurrent vty sessions are allowed. When all vty lines are in use, new management sessions cannot be established. You can configure an inactive session timeout and a maximum session limit for virtual terminals. In Cisco NX-OS, the number of configured lines available can be determined by using the show run vshd command. By default, vty lines automatically accept connections using any configured transport protocol. To disable a specific protocol from accessing vty sessions, you must globally disable that protocol. For example, to prevent Telnet sessions to the vty lines, you must disable Telnet globally using the no feature telnet command. A brief hardening sketch follows.
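A hypothetical vty hardening sketch along the lines just described; the timeout and session values are illustrative:

N7K(config)# no feature telnet
N7K(config)# line vty
N7K(config-line)# exec-timeout 15
N7K(config-line)# session-limit 5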

Role-Based Access Control

To manage a Nexus switch, users log in to the switch and perform tasks as per their role in the organization. For example, a network operations center (NOC) engineer might want to check the health of the network, but he is not permitted to make changes to the switch configuration. Role-Based Access Control (RBAC) provides a mechanism to define the amount of access that each user has when the user logs in to the switch. With RBAC, you first define the required user roles and then specify the rules for each switch management operation that the user role is allowed to perform. The roles are defined to restrict user authorization to perform switch management operations. NX-OS enables you to create local users on the switch and assign a role to these users, which then determines what the individual user is allowed to do on the switch. When you create a user account on the switch, associate the appropriate role with the user account, as shown in the sketch after Table 5-4.

User Roles

User roles contain rules that define the operations allowed on the switch. Each role has a list of rules that define the permission to perform a task on the switch. Each user can have multiple roles, and each role can contain multiple rules. If a user belongs to multiple roles, the user can execute a combination of all the commands permitted by these roles. When roles are combined, a permit rule takes priority over a deny rule. For example, a user has Role-A, which denies access to a configuration command; however, this user also has Role-B, which allows access to the configuration command. In this case, the user is allowed access to the configuration commands. Similarly, if Role-A allows access to only configuration operations, and Role-B allows access to only debug operations, users who belong to both Role-A and Role-B can access configuration and debug operations.

NX-OS has some default user roles. These roles are assigned to users so they have limited access to switch operations as per their role. However, you can define new roles and assign them to the users. You can also define roles to limit access to specific VSANs, VLANs, and interfaces of a Nexus switch. Table 5-4 shows the default roles on a Nexus 5000 switch.

Table 5-4 Default Roles on Nexus 5000
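The following lines sketch this workflow; the role name, rule, and credentials are hypothetical:

N5K-A(config)# role name noc-engineer            ! user-defined role for read-only health checks
N5K-A(config-role)# description NOC health checks
N5K-A(config-role)# rule 1 permit read           ! read-only access
N5K-A(config-role)# exit
N5K-A(config)# username jsmith password S3cur3P@ss role noc-engineer
N5K-A# show user-account jsmith                  ! verify the role assignment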

A Nexus 7000 switch supports features such as VDC, which is a virtual switch within the Nexus chassis. Therefore, a Nexus 7000 supports additional roles to administer and operate the VDCs. Table 5-5 shows the default roles on a Nexus 7000 switch.

Table 5-5 Default Roles on Nexus 7000

Rules

A rule is the basic element of a role that defines the switch operations that are permitted or denied by a user role. Rules are executed in descending order (for example, rule 2 is checked before rule 1). You can apply rules for the following parameters (a combined configuration sketch follows the RBAC guidelines below):

Command: NX-OS command or group of commands defined in a regular expression. The command is the most granular control parameter of the rule.

Feature: All the commands associated with a switch feature. The feature is the next level of control parameter, and it represents all commands associated with the feature. You can check the available feature names for this parameter using the show role feature command.

Feature group: Default or user-defined group of features. The feature group is the last level of control parameter; it combines related features for the ease of management. Enter the show role feature group command to display the default feature groups available for this parameter.

User Role Policies

The user role policies are used to limit user access to the switch resources; they allow limiting access to the interfaces, VLANs, and VSANs. Rules defined for a user role override the user role policies. For example, if you define a user role policy to limit access to a specific interface, the user does not have access to the interface unless you configure a command rule to permit the interface command.

RBAC Characteristics and Guidelines

Some characteristics and guidelines for RBAC are outlined next:

Roles are associated to user accounts in the local database or to the users in a remote Authentication, Authorization, and Accounting (AAA) database.

A user account can belong to 64 roles.

256 rules can be assigned to a single role.

64 user-defined roles can be configured per VDC.

If a user account belongs to multiple roles, that user can execute the most privileged union of all the rules in all assigned roles.

Users see only the CLI commands they are permitted to execute when using context-sensitive help.

Role changes do not take effect until the user logs out and logs in again.
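As a hedged sketch tying the rule types and user role policies together (the role name, interface, and rule numbers are hypothetical):

N7K(config)# role name limited-admin
N7K(config-role)# rule 1 deny command clear *                  ! command rule, the most granular level
N7K(config-role)# rule 2 permit read feature snmp              ! feature rule
N7K(config-role)# rule 3 permit read-write feature-group L3   ! feature-group rule
N7K(config-role)# interface policy deny                        ! user role policy: deny all interfaces
N7K(config-role-interface)# permit interface ethernet 2/1      ! except Ethernet 2/1

Remember that rules are checked in descending order, so rule 3 is evaluated before rule 1.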

Role features and feature groups should be used to simplify RBAC configuration when applicable. Role features and feature groups can be referenced only in the local user account database; they cannot be referenced in an AAA server command authorization policy (for example, AAA/TACACS+). RBAC roles can be distributed to multiple Nexus 7000 devices using Cisco Fabric Services (CFS).

Privilege Levels

Privilege levels are introduced in NX-OS to provide IOS-compatible authentication and authorization functionality. This feature is useful in a network environment that uses a common AAA policy for managing Cisco IOS and NX-OS devices. By default, the privilege level feature is disabled on the Nexus switch. To enable privilege level support on the Nexus switch, use the feature privilege configuration command, as sketched next. When this feature is enabled, the predefined privilege levels are created as roles. These predefined privilege levels cannot be deleted. Similar to the RBAC roles, the privilege levels can also be modified to meet different security requirements. A user account can be associated to a privilege level or to RBAC roles. On Nexus switches, the privilege level and RBAC roles can be configured simultaneously. Table 5-6 shows the IOS privilege levels and their corresponding NX-OS system-defined roles.

Table 5-6 NX-OS Privilege Levels
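A minimal sketch of enabling privilege levels and attaching a level to a local user follows; the username and level are hypothetical:

N7K# configure terminal
N7K(config)# feature privilege                                   ! enable IOS-style privilege levels
N7K(config)# username operator2 password S3cur3P@ss priv-lvl 5   ! associate the user with level 5
N7K(config)# exit
N7K# show privilege                                              ! display the current privilege level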

Simple Network Management Protocol

Cisco NX-OS supports Simple Network Management Protocol (SNMP) to remotely manage the Nexus devices. SNMP works at the application layer and facilitates the exchange of management information between network management tools and the network devices, such as switches and routers. Network administrators use SNMP to remotely manage network performance, troubleshoot faults, and plan for network growth. To enable SNMP, you must define the relationship between the manager and the agent. There are three components of the SNMP framework:

SNMP manager: This is a network management system (NMS) that uses the SNMP protocol to manage the network devices.

SNMP agent: A software component that resides on the device that is managed.

Management Information Base (MIB): The collection of managed objects on the SNMP agent.

Figure 5-10 shows the components of SNMP.

Figure 5-10 Nexus SNMP

The Cisco Nexus devices support the agent and MIB. The network management system (NMS) uses the Cisco MIB variables to perform set operations to configure a device or a get operation to poll devices for specific information. The information collected from polling is analyzed to help troubleshoot network problems, create network performance baselines, verify configuration, and monitor traffic loads.

SNMP Notifications

In case of a network fault or other important event, the SNMP agent can generate notifications without any polling request from the SNMP manager. NX-OS can generate the following two types of SNMP notifications:

Traps: An unacknowledged message sent from the agent to the SNMP managers listed in the host receiver table. These messages are less reliable because there is no way for an agent to discover whether the SNMP manager has received the message.

Informs: A message sent from the SNMP agent to the SNMP manager, for which the manager must acknowledge the receipt of the message. These messages are more reliable because of the acknowledgement of the message from the SNMP manager. If the Cisco Nexus device never receives a response for a message, it can send the inform request again.

You can configure Cisco NX-OS to send notifications to multiple host receivers, as sketched next.
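As a hedged sketch, the following lines register one trap receiver and one inform receiver and define an SNMPv3 user; the addresses, community string, and passwords are illustrative:

N5K-A(config)# snmp-server host 192.0.2.50 traps version 2c public     ! trap receiver
N5K-A(config)# snmp-server host 192.0.2.51 informs version 2c public   ! inform receiver
N5K-A(config)# snmp-server enable traps link                           ! generate link up/down notifications
N5K-A(config)# snmp-server user monitor auth sha AuthP@ss1 priv PrivP@ss1   ! SNMPv3 user with authentication and encryption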

SNMPv3

Security was always a concern for SNMP. The community string was sent in clear text and used for authentication between the SNMP manager and agent. SNMPv3 solves this problem by providing security features, such as authentication and encryption, for communication between the SNMP manager and agent. The security features provided in SNMPv3 are as follows:

Message integrity: Ensures that a packet has not been tampered with in transit.

Authentication: Determines that the message is from a valid source.

Encryption: Scrambles the packet contents to prevent it from being seen by unauthorized sources.

The Cisco Nexus device supports SNMPv1, SNMPv2c, and SNMPv3. Each SNMP version has different security models and levels. A security model is an authentication strategy that is set up for a user and the role in which the user resides. A security level is the permitted level of security within a security model. A combination of a security model and a security level determines which security mechanism is employed when handling an SNMP packet. Table 5-7 shows the SNMP security models and levels supported by NX-OS.

Table 5-7 NX-OS SNMP Security Models and Levels

Remote Monitoring

Remote Monitoring (RMON) is an industry-standard remote network monitoring specification, developed by the Internet Engineering Task Force (IETF). RMON allows various network console systems and agents to exchange network-monitoring data. The Cisco NX-OS supports RMON alarms, events, and logs to monitor a Cisco Nexus device. In a Cisco Nexus switch, RMON is disabled by default, and no events or alarms are configured. You can enable RMON and configure your alarms and events by using the CLI or an SNMP-based network management station.

RMON Alarms

An RMON alarm monitors a specific management information base (MIB) object for a specified interval, triggers an alarm at a specified threshold value, and resets the alarm at another threshold value. You can use alarms with RMON events to generate a log entry or an SNMP notification when the RMON alarm triggers. You can set an alarm on any MIB object that resolves into an SNMP INTEGER type. The specified object must be an existing SNMP MIB object in standard dot notation (for example, 1.3.6.1.2.1.2.2.1.17 represents ifOutOctets). When you create an alarm, you specify the following parameters (a configuration sketch follows the RMON event types below):

MIB: The object to monitor.

Sampling interval: The interval to collect a sample value of the MIB object.

Sample type: An absolute sample is the value of the MIB object; a delta sample is the difference between two consecutive sample values.

Rising threshold: The device triggers a rising alarm or resets a falling alarm at this threshold.

Falling threshold: The device triggers a falling alarm or resets a rising alarm at this threshold.

Events: The action taken when an alarm (rising or falling) is triggered.

RMON Events

RMON events are generated by RMON alarms. You can associate an event to an alarm, and different events can be generated for a falling alarm and a rising alarm. RMON supports the following types of events:

SNMP notification: Sends an SNMP notification when a rising alarm or a falling alarm is triggered.

Log: Adds an entry in the RMON log table when the associated alarm is triggered.

Both: Sends an SNMP notification and adds an entry in the RMON log table when the associated alarm is triggered.

Syslog

Syslog is a standard protocol defined in IETF RFC 5424 for logging system messages to a remote server. Nexus switches support logging of system messages. This log can be sent to the terminal sessions, a log file, and syslog servers on remote systems. By default, Cisco Nexus switches log system messages to a log file and output them to the terminal sessions. You can configure the Nexus switch to send syslog messages to a maximum of eight syslog servers. When the switch first initializes, messages are sent to syslog servers only after the network is initialized. To support the same configuration of syslog servers on all switches in a fabric, you can use Cisco Fabric Services (CFS) to distribute the syslog server configuration. Table 5-8 describes the severity levels used in system messages. When you configure the severity level, the system outputs messages at that level and lower.

Table 5-8 System Message Severity Levels
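The following hedged sketches tie the preceding sections together. First, two RMON events paired with an alarm that delta-samples ifOutOctets every 300 seconds (the alarm and event numbers, instance index, thresholds, and owner are illustrative):

N5K-A(config)# rmon event 1 log trap public description FALLING-ALARM
N5K-A(config)# rmon event 2 log trap public description RISING-ALARM
N5K-A(config)# rmon alarm 20 1.3.6.1.2.1.2.2.1.17.83886080 300 delta rising-threshold 1500 2 falling-threshold 0 1 owner admin

Second, a remote syslog destination reached through the management VRF (the server address and severity are examples); messages at severity 5 and lower are forwarded:

N5K-A(config)# logging server 192.0.2.100 5 use-vrf management
N5K-A# show logging server            ! verify the configured syslog servers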

Embedded Event Manager

Cisco NX-OS Embedded Event Manager (EEM) provides real-time network event detection, programmability, and automation within the device. EEM monitors events that occur on the device and takes action to recover or troubleshoot these events, based on the configuration. In many cases, these events are related to faults in the device, such as when an interface or a fan malfunctions. There are three major components of EEM:

Event statement

Action statement

Policies

Event Statements

An event is any device activity for which some action, such as a workaround or a notification, should be taken. Event statements specify the event that triggers a policy to run; the event statement defines the event to look for as well as the filtering characteristics for the event. EEM defines event filters so only critical events or multiple occurrences of an event within a specified time period trigger an associated action. In Cisco NX-OS releases prior to 5.2, you can configure only one event statement per policy; however, beginning in Cisco NX-OS Release 5.2, you can configure multiple event triggers.

Action Statements

Action statements describe the action triggered by a policy; the action statement defines the action EEM takes when the event occurs. These actions could be sending an e-mail or disabling an interface to recover from an event. Each policy can have multiple action statements. If no action is associated with a policy, EEM still observes events but takes no actions. NX-OS EEM supports the following actions in action statements:

Execute any CLI commands.

Update a counter.

Log an exception.

Force the shutdown of any module.

Reload the device.

Shut down specified modules because the power is over budget.

Generate a syslog message.

Generate a Call Home event.

Generate an SNMP notification.

Use the default action for the system policy.

Policies

An NX-OS EEM policy consists of an event statement and one or more action statements, as sketched next.
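As a small hedged sketch of an EEM policy configured from the CLI (the applet name, match string, and message text are hypothetical):

N7K(config)# event manager applet WATCH-CONFIG
N7K(config-applet)# event cli match "configure terminal"    ! event statement: a user enters configuration mode
N7K(config-applet)# action 1.0 syslog priority informational msg Config mode entered
N7K(config-applet)# action 2.0 event-default                ! action statement: still execute the matched command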

Figure 5-11 shows a high-level overview of NX-OS Embedded Event Management.

Figure 5-11 Nexus Embedded Event Manager

Generic Online Diagnostics (GOLD)

Cisco generic online diagnostics (GOLD) is a suite of diagnostic functions that provide hardware testing and verification. It helps in detecting hardware faults and in making sure that internal data paths are operating as designed. If a fault is detected, the switch takes corrective action to mitigate the fault and to reduce potential network outages. GOLD provides the following tests as part of the feature set:

Boot diagnostics

Continuous health monitoring

On-demand tests

Scheduled tests

Figure 5-12 shows an overview of Nexus GOLD.

Figure 5-12 Nexus GOLD

Boot diagnostics include disruptive tests and nondisruptive tests that run during system boot and system reset. GOLD tests are executed when the chassis is powered up or during an online insertion and removal (OIR) event. To enable boot diagnostics, use the diagnostic bootup level complete command. To display the diagnostic level that is currently in place, use the show diagnostic boot level command.

Runtime diagnostics (also known as health monitoring diagnostics) include nondisruptive tests that run in the background during the normal operation of the switch. Continuous health checks occur in the background, as per schedule, and on demand from the CLI. You can also start or stop an on-demand diagnostic test. For example, to start an on-demand test on module 6, you can use the following command: diagnostic start module 6 test all. It is recommended to start a disruptive diagnostic test manually during a scheduled network maintenance window. If a failure condition is observed during a test, corrective actions are taken through Embedded Event Manager (EEM) policies.

Smart Call Home

The Call Home feature provides e-mail-based notification of critical system events. You can use this feature to notify any external entity when an important event occurs on your device. This feature can be used to page a network support engineer, e-mail a network operations center, or use Cisco Smart Call Home services to automatically generate a case with the technical assistance center (TAC). Call Home includes a fixed set of predefined alerts on your switch. These alerts are grouped into alert groups, and CLI commands are assigned to execute when an alert in an alert group occurs. The switch includes the command output in the transmitted Call Home message. Call Home can deliver alerts to multiple recipients that you configure in destination profiles, and you can configure up to 50 e-mail destination addresses for each destination profile. Following are advantages of using the Call Home feature:

Automatic execution and attachment of relevant CLI command output.

Multiple message format options, such as the following:

Short Text: Suitable for pagers or printed reports.

Full Text: Fully formatted message information suitable for human reading.

XML: Machine-readable format that uses the Extensible Markup Language (XML) and the Adaptive Messaging Language (AML) XML schema definition (XSD). The XML format enables communication with the Cisco Systems Technical Assistance Center (Cisco-TAC).

Multiple concurrent message destinations.
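A minimal hedged sketch of pointing Call Home at an e-mail destination follows; profile and command names vary slightly by platform, and the contact, addresses, and SMTP server shown here are illustrative:

N7K(config)# callhome
N7K(config-callhome)# email-contact admin@example.com
N7K(config-callhome)# destination-profile full_txt email-addr noc@example.com
N7K(config-callhome)# transport email smtp-server 192.0.2.25 use-vrf management
N7K(config-callhome)# enable                    ! activate the Call Home feature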

Nexus Device Configuration and Verification

This section reviews the process of initial setup of Nexus switches, basic switch configuration, and some important CLI commands to configure and verify network connectivity.

Perform Initial Setup

When Nexus switches are shipped from the factory, there is no default configuration on these switches. Therefore, when you power on the switch the first time and there is no configuration file found in the NVRAM of the switch, it goes into an initial setup mode. This initial setup mode indicates that the switch is unable to find the configuration file at startup; therefore, this initial setup mode is also called startup configuration mode. To start a new switch configuration, first you perform the startup configuration using the console port of the switch. The startup configuration mode of the switch is an interactive system configuration dialog that guides you through the basic configuration of the system. The startup configuration enables you to build an initial configuration file with enough information to provide basic system management connectivity. After the initial configuration file is created, you can use the CLI to perform additional configurations of the switch. The initial setup script for the startup configuration is executed automatically the first time the switch is started. However, you can run the initial setup script anytime to perform a basic device configuration by using the setup command from the CLI. This setup script can have slightly different steps based on the Nexus platform. Figure 5-13 shows the initial configuration process of a Nexus switch.

Figure 5-13 Initial Configuration Procedure

During the startup configuration, the switch creates an admin user as the only initial user. The only step you cannot skip is to provide a password for the administrator. To use the default answer for a question, just press Enter. If a default answer is not available and you press Enter, the setup utility uses what was previously configured on the switch and goes to the next question. However, if a default value is displayed on the prompt and you press Enter for that step, the setup utility changes to a configuration using that default, and not the configured value. For example, if you press Enter to skip the IP address configuration of the mgmt0 interface, and you have already configured the mgmt0 interface earlier, the setup utility does not change that configuration. You can press Ctrl+C at any prompt of the script to skip the remaining configuration steps and use what you have configured up to that point.

The default option is to use strong passwords. When you configure a strong password, you must use a minimum of eight characters with a combination of uppercase and lowercase alphabetic characters and numerical characters. You can change the password strength check during the initial setup script by answering no to the enforce secure password standard question, or later using the CLI command no password strength-check. After the admin user password is configured, the startup script asks you to proceed with the basic system configuration. Answer yes if you want to enter the basic system configuration dialog. During the basic system configuration, the default answers are mentioned in square brackets [xxx]. After the initial parameters are provided to the startup script, the switch shows you the initial configuration based on your input, so you can verify it. If the configuration is correct, you do not need to edit it; you can accept this configuration, and it is saved in the switch NVRAM. Figure 5-14 shows the initial configuration steps of Nexus switches.

Click here to view code image

Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin": C1sco123
Confirm the password for "admin": C1sco123

---- Basic System Configuration Dialog VDC: 2 ----

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]: y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : N7K
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 172.16.1.20
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 172.16.1.1
Configure advanced IP options? (yes/no) [n]: n
Enable the telnet service? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure default interface layer (L3/L2) [L3]: L3
Configure default switchport interface state (shut/noshut) [shut]: shut

The following configuration will be applied:
  password strength-check
  switchname N7K
  interface mgmt0
    ip address 172.16.1.20 255.255.255.0
    no shutdown
  vrf context management
    ip route 0.0.0.0/0 172.16.1.1
  exit
  no feature telnet
  ssh key rsa 1024 force
  feature ssh
  no system default switchport
  system default switchport shutdown

Would you like to edit the configuration? (yes/no) [n]: <enter>
Use this configuration and save it? (yes/no) [y]: y

Figure 5-14 Initial Configuration Dialogue

After the initial configuration is completed, you can disconnect from the console port and access the switch using the IP address configured on the mgmt0 port. It is important to note that the initial configuration script supports only the configuration of the IPv4 address on the mgmt0 interface; you can configure IPv6 later using the CLI.

You can erase the configuration of a Nexus switch to return to the factory default. To delete the switch startup configuration from NVRAM, use the write erase command. This command will not delete the running configuration of the switch, and the switch will continue to operate normally. The write erase command erases the entire startup configuration, except for the following:

Boot variable definitions

The IPv4 configuration on the mgmt0 interface, including the IP address and subnet mask

The route address in the management VRF

To remove the boot variable definitions and the IPv4 configuration on the mgmt0 interface, you can use the write erase boot command. When you restart the switch after erasing its configuration from NVRAM, the switch will go into the startup configuration mode.

In-Service Software Upgrade

Each Nexus switch is shipped with the Cisco NX-OS software. After installing the Nexus switch and putting it into production, a time might come when you have to upgrade the NX-OS software to address software defects or to add new features to the switch. The software upgrade of a switch in the data center should not have any impact on the availability of services. In-Service Software Upgrades (ISSU) enable a network operator to upgrade NX-OS software without disrupting the operation of the switch. ISSU upgrades the software images without disrupting data-plane traffic; there is minimal disruption to control-plane traffic only. In the case where an ISSU will cause disruption to the data-plane traffic, the Cisco NX-OS software warns you before proceeding with the software upgrade. This enables you to reschedule the software upgrade to a time less critical for your services to minimize the impact of the outage. The Cisco NX-OS software consists of two images: the kickstart image and the system image. To upgrade the switch to newer software, the network administrator initiates the ISSU manually using one of the following methods:

From the CLI using a single command

From the administrative interface of Cisco Data Center Network Manager (DCNM)

On Nexus switches, when an ISSU is initiated, it updates the following components on the system as needed:

Kickstart image

Supervisor BIOS

System image

Fabric extender (FEX) image

I/O module BIOS and image

A well-designed data center considers service continuity by implementing switch-level redundancy at each layer within the data center and by dual-homing servers to redundant switches at the access layer. Cisco NX-OS software supports ISSU from release 4.2.1 and later. The ISSU process varies based on the Nexus platform; to explain ISSU, examples of the Nexus 5000 and Nexus 7000 switches are described in the following sections.

ISSU on the Cisco Nexus 5000

The Nexus 5000 switch is a single supervisor system; therefore, the ISSU on this switch causes the supervisor CPU to reset and load the new software image. During this reload, the control plane of the switch is inactive, but the data plane of the switch continues its operation and keeps forwarding packets. This separation in the control and data planes leads to a software upgrade with no service disruption. When a network administrator initiates ISSU, a multistage ISSU cycle is initiated by the installer service. This multistage ISSU cycle is designed to minimize overall system interruption with no impact on data traffic forwarding. When the system recovers from the reset, it is running an updated version of NX-OS. At this time, the runtime state of the control plane and data plane are synchronized, control plane operation is resumed, and the supervisor state is restored to the previously known configuration. It takes fewer than 80 seconds to restore control plane operation. During the upgrade, the data plane keeps forwarding packets, so existing flows for servers connected to the Cisco Nexus device should see no traffic disruption. In a topology where Cisco Nexus 2000 Series fabric extenders are used, the ISSU performed on the parent switch automatically downloads and upgrades each of the attached fabric extenders; each fabric extender is upgraded to the Cisco NX-OS software release of the parent switch after the parent switch ISSU completes. Figure 5-15 shows the ISSU process on the Nexus 5000 switch.

Figure 5-15 ISSU Process on Nexus 5000

To perform ISSU, download the required software image and copy it to your upgrade location. Typically, these images are copied to the switch bootflash. Make sure both the kickstart file and the system file have the same version. Use the dir command to verify that the required NX-OS files are present on the system. After verifying the images on the bootflash, use the show install all impact command to determine which components of the system will be affected by the upgrade. The syntax of this command is show install all impact kickstart n5000-uk9-kickstart.5.0.3.N2.2.bin system n5000-uk9.5.0.3.N2.2.bin.

Some of the important considerations for Nexus 5000 ISSU are the following:

The Nexus 5000 switch does not allow configuration changes during ISSU. All configuration change requests from the CLI and SNMP are denied.

Network topology changes, such as STP, FC fabric changes that affect zoning, FSPF, domain manager, and module insertion are not allowed during ISSU operation.

The Nexus 5000 switch undergoing ISSU must not be a root switch in the STP topology. No interface should be in a spanning tree designated forwarding state.

Link Aggregation Control Protocol (LACP) fast timers are not supported. The ISSU process will abort if the system is configured with LACP fast timers.

ISSU cannot be performed if any application has a Cisco Fabric Services (CFS) lock enabled.

If a zone merge or zone change request is in progress, ISSU will be aborted.

The Cisco Nexus 5500 platform switches support Layer 3 functionality; however, the nondisruptive software upgrade is not supported when the Layer 3 license is installed.

ISSU is supported on the Nexus 5000 in a vPC configuration. The vPC switches continue their control plane communication during the entire ISSU process, except when the ISSU process resets the CPU of the switch being upgraded. When one peer switch is undergoing an ISSU, the other peer switch locks the configuration until the ISSU is completed. The peer switch is responsible for holding on to its state until the ISSU process is completed successfully. While upgrading the switches in a vPC topology, you should start with the primary vPC switch; the secondary vPC switch should be upgraded after the ISSU process completes successfully on the primary device. In FEX virtual port channel (vPC) configurations, the primary switch is responsible for upgrading the FEX. Bridge assurance must be disabled for a nondisruptive ISSU; it is okay to have bridge assurance enabled on the vPC peer-link.

Note Always refer to the latest release notes for ISSU guidelines.

ISSU on the Cisco Nexus 7000

The Nexus 7000 switch is a modular system with supervisor and I/O modules installed in a chassis. It is a fully distributed system with nondisruptive Stateful Switch Over (SSO) and ISSU capabilities. The Nexus 7000 ISSU leverages the SSO capabilities of the switch to perform a nondisruptive software upgrade. Please note that the SSO feature is available only when dual supervisor modules are installed in the system. In this process, first the standby supervisor is upgraded to the new software, and a failover is done so the standby supervisor with the new image becomes active. After this step, the control plane of the switch is running on new software. The software is then updated on the supervisor that is now in standby mode (previously active). When both supervisors are upgraded successfully, the linecards or I/O modules are upgraded. In this process, both the data plane and the control plane are available during the entire software upgrade process. Therefore, on the Nexus 7000, the nondisruptive software upgrade keeps both planes in operation. Figure 5-16 shows the ISSU process for the Nexus 7000 platform.

Figure 5-17 shows the state of supervisor modules before and after the software upgrade. .Figure 5-16 ISSU Process on Nexus 7000 It is important to note that the ISSU process results in a change in the state of active and standby supervisors.

Figure 5-17 Nexus 7000 Supervisors Before and After ISSU

Before you start an ISSU on a Nexus 7000 switch, make sure that the upgrade process will be nondisruptive. It is recommended to use the show install all impact command to check whether the images are compatible for an upgrade. An ISSU might be disruptive if you have configured features that are not supported on the new software image. To determine ISSU compatibility, use the show incompatibility system command. The software upgrade might be disruptive under the following conditions:

A single-supervisor system with a kickstart or system image change

A dual-supervisor system with incompatible system software images

You can start the ISSU using the install all command. Following are some of the important considerations for a Nexus 7000 ISSU:

When you upgrade Cisco NX-OS software on the Nexus 7000, it is upgraded for the entire system, including all VDCs on the physical device. It is not possible to upgrade the software individually for each VDC.

Configuration mode is blocked during the Cisco ISSU to prevent any changes.

vPC peers can operate dissimilar versions of the Cisco NX-OS software only during the upgrade or downgrade process. Operating vPC peers with dissimilar versions after the upgrade or downgrade process is complete is not supported.

Prior to NX-OS Release 5.2(1), ISSU performs the linecard software upgrade serially; that is, one I/O module is upgraded at a time. This method was very time consuming; therefore, starting from NX-OS Release 6.2(1), support for simultaneous line card upgrade is added to reduce the ISSU time. To start a parallel upgrade of line cards, use the parallel keyword in the command; that is, use the following ISSU command: install all kickstart <image> system <image> parallel.

A parallel upgrade of the FEX is also supported by ISSU. For releases prior to Cisco NX-OS Release 6.1(1), only a serial upgrade of FEX is supported; even if a user types the parallel keyword in the command, the upgrade will be a serial upgrade.
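Putting these commands together, a hedged sketch of checking and then starting a parallel Nexus 7000 ISSU follows; the image filenames are hypothetical placeholders:

N7K# dir bootflash: | include bin                ! confirm the kickstart and system images are present
N7K# show incompatibility system bootflash:n7000-s1-dk9.6.2.10.bin
N7K# show install all impact kickstart bootflash:n7000-s1-kickstart.6.2.10.bin system bootflash:n7000-s1-dk9.6.2.10.bin
N7K# install all kickstart bootflash:n7000-s1-kickstart.6.2.10.bin system bootflash:n7000-s1-dk9.6.2.10.bin parallel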

Important CLI Commands

This section discusses important NX-OS CLI commands. These commands will help you in the configuration and verification of basic switch connectivity. In NX-OS, all CLI commands are organized hierarchically. The commands that perform similar functions are grouped together at the same level. For example, all the commands that display system, configuration, or hardware information are grouped under the show command, and all commands that are used to configure the switch are grouped under the configure terminal command. If you would like to use a command, you first must enter into the group hierarchy and then type the command. For example, to configure a system, you use the configure terminal command and then use the appropriate command to configure the system.

Command-Line Help

To obtain a list of available commands, type a question mark (?) at the system prompt. Pressing the Tab key displays a brief list of the commands that are available at the current branch, without any detailed description. To list the keywords or arguments of a command, enter a question mark (?) in place of a keyword or argument. This form of help is called command syntax help; in syntax help, you see full parser help and descriptions. Example 5-1 shows how to enter into switch configuration mode and command syntax help for the interface command.

Example 5-1 Command-Line Help

Click here to view code image

N5K-A# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
N5K-A(config)# interface ?
  ethernet          Ethernet IEEE 802.3z
  fc                Fiber Channel interface
  loopback          Loopback interface
  mgmt              Management interface
  port-channel      Port Channel interface
  san-port-channel  SAN Port Channel interface
  vfc               Virtual FC interface
  vlan              Vlan interface
N5K-A(config)# interface <press-tab-key>
  ethernet   loopback   port-channel      vfc
  fc         mgmt       san-port-channel  vlan
N5K-A(config)# interface

Automatic Command Completion

In any command mode, you can type an incomplete command and then press the Tab key to complete the rest of the command. Pressing the Tab key provides command completion to a unique but partially typed command string. In the Cisco NX-OS software, command completion may also be used to complete a system-defined name and, in some cases, user-defined names. In Example 5-2, the Tab key

is used to complete the vrf context command and then the name of a VRF on the command line.

Example 5-2 Automatic Command Completion

Click here to view code image

N5K-A(config)# vrf con<press-tab-key>
N5K-A(config)# vrf context
N5K-A(config)# vrf context man<press-tab-key>
N5K-A(config)# vrf context management
N5K-A(config-vrf)#

Entering and Exiting Configuration Context

To perform a configuration task, first you must switch to the appropriate configuration context in the group hierarchy. For example, to configure an interface, you enter into the interface context by typing interface ethernet 1/1. All interface-related commands for interface Ethernet 1/1 will go under the interface context. You can use the where command to display the current context and login credentials. The exit command will take you back one level in the hierarchy. The Ctrl+Z key combination exits configuration mode but retains the user input on the command line. When you type a command that does not belong to the current context, NX-OS will go out of the context to find the match; for example, you can run all show commands from the configuration context. Commands discussed in this section and their output are shown in Example 5-3.

Example 5-3 Configuration Context

Click here to view code image

N5K-A(config)# interface ethernet 1/1
N5K-A(config-if)# where
  conf; interface Ethernet1/1 admin@N5K-A
N5K-A(config-if)# exit
N5K-A(config)# show <Press CTRL-Z>
N5K-A# show

Display Current Configuration

The show running-config command displays the current running configuration of the switch. If you run this command with the command-line parameter all, it displays all the configuration, including the default configuration of the switch. If you need to view a specific feature's portion of the configuration, you can do so by running the command with the feature specified. For example, show running-config interface ethernet 1/1 shows the configuration of interface Ethernet 1/1, as shown in Example 5-4.

Example 5-4 show running-config

Click here to view code image

N5K-A# sh running-config ipqos

!Command: show running-config ipqos

!Time: Sun Sep 21 12:45:14 2014

version 5.0(3)N2(2)
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy

Command Piping and Parsing

To locate information in the output of a show command, the Cisco NX-OS software provides searching and filtering options. These searching and filtering options follow a pipe character (|) at the end of the show command. The pipe parameter can be placed at the end of all CLI commands to qualify, redirect, or filter the output of the command. Many advanced pipe options are available, including the following:

Get regular expression (grep) and extended get regular expression (egrep)

Less

No-more

Head and tail

The pipe option can also be used multiple times or recursively. The egrep parameter enables you to search for a word or expression and print a word count; it also has other options. In Example 5-5, the show spanning-tree | egrep ignore-case next 4 mstp command is executed. The show spanning-tree command's output is piped to egrep, where the qualifiers indicate to display the lines that match the regular expression mstp, and ignore-case in the command indicates that this match must not be case sensitive. When a match is found, the next 4 in the command means that 4 lines of output following the match of mstp are also to be displayed.

Example 5-5 Command Piping and Parsing

Click here to view code image

N5K-A# sh spanning-tree | egrep ?
  WORD          Search for the expression
  count         Print a total count of matching lines only
  ignore-case   Ignore case difference when comparing strings
  invert-match  Print only lines that contain no matches for <expr>
  line-exp      Print only lines where the match is a whole line
  line-number   Print each match preceded by its line number

  next          Print <num> lines of context after every matching line
  prev          Print <num> lines of context before every matching line
  word-exp      Print only lines where the match is a complete word
N5K-A# sh spanning-tree | egrep ignore-case next 4 mstp
  Spanning tree enabled protocol mstp
  Root ID    Priority    32768
             Address     0005.73cb.6d01
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Feature Command

Cisco NX-OS supports many features, protocols, and logical interfaces, such as SVI, vFC, vPC, and tunnels; however, not all features are enabled by default. NX-OS only enables those features that are required for initial network connectivity. You can enable specific features such as OSPF, PIM, vPC, and FCoE by using the feature command, or disable them if they are no longer required. If a feature is not enabled on the switch, the process for that feature is not running, and the configuration and verification commands for that feature are not available from the CLI. Interfaces and their related configuration contexts are not visible until that feature is enabled on the switch. After you enable the feature on the switch, the configuration and verification commands become available. For some features, the process will not start until you configure the features; an example of such a feature is OSPF. Because of the feature command, the Nexus platform runs only the required processes; this approach optimizes the operating system and increases the availability of the switch. It is recommended that you keep features disabled until they are required.

In Example 5-6, the show running-config | include feature command is used to determine the features that are currently enabled on the switch. You can also use the show feature command to verify the features that are currently enabled on the switch. In the default configuration, only feature telnet and feature lldp are enabled. When you use the interface command, followed by pressing the Tab key for help, you can see only Ethernet, mgmt, and Ethernet port channel interface types in the Cisco NX-OS software command output. After enabling the interface-vlan feature on the Nexus switch, the SVI interface type appears as a valid interface option. In the same manner, other features can be enabled before configuring them on the switch.

Example 5-6 feature Command

Click here to view code image

N5K-A(config)# sh running-config | include feature
feature fcoe
feature telnet
no feature http-server
feature lldp
N5K-A(config)# interface <Press-Tab-Key>
  ethernet   loopback   port-channel      vfc
  fc         mgmt       san-port-channel

N5K-A(config)# feature interface-vlan
N5K-A(config)# interface <Press-Tab-Key>
  ethernet   loopback   port-channel      vfc
  fc         mgmt       san-port-channel  vlan

Verifying Modules

The show module command provides a hardware inventory of the interfaces and modules that are installed in the chassis. Software and hardware versions of each module, their serial numbers, and the range of MAC addresses for each module are also available via this command. If fabric extenders (FEX) are attached, they are also visible in this command; each FEX is also considered an expansion module of the main system. FEX slot numbers are assigned by the administrator during FEX configuration. In a Nexus 5000 Series switch, the main system is referred to as module 1. The number of ports associated with module 1 will vary based on the model of the switch. If expansion modules are installed, the number and type of expansion modules are dependent on the model of the system. Modules that are inserted into the expansion module slots will be numbered according to the slot into which they are inserted. This number can be seen under the Module-Type column of the show module command. Example 5-7 shows the output of the show module command for the Nexus 5548 switch.

Example 5-7 Nexus 5000 Modules

Click here to view code image

N5K-A# show module
Mod Ports Module-Type                       Model            Status
--- ----- --------------------------------- ---------------- ----------
1   32    O2 32X10GE/Modular Universal Pla  N5K-C5548UP-SUP  active *
3   0     O2 Daughter Card with L3 ASIC     N55-D160L3       ok

Mod  Sw              Hw
---  --------------  ------
1    5.0(3)N2(2)     1.0
3    5.0(3)N2(2)     1.0

Mod  MAC-Address(es)                    Serial-Num
---  ---------------------------------  ----------
1    0005.73cb.6cc8 to 0005.73cb.6ce7   JAF1505BGTG
3    0000.0000.0000 to 0000.0000.000f   JAF1504CCAR

Mod  World-Wide-Name(s) (WWN)
---  --------------------------------------------------
1    20:01:00:05:73:cb:6c:c0 to 20:08:00:05:73:cb:6c:c0
3    --
N5K-A#

The show module command on the Nexus 7000 Series switches identifies which modules are inserted and operational on the switch. Example 5-8 shows the output of the show module command on the Nexus 7000 switch. This switch has a Supervisor 1 module in slot 5, a 32-port F1 module in slot 9, and a 32-port M1 module in slot 10.

Example 5-8 Nexus 7000 Modules

Click here to view code image

N7K-1# show module
Mod Ports Module-Type                    Model          Status
--- ----- ------------------------------ -------------- ----------

null. all Ethernet interfaces are named Ethernet. speed.5 9 10 0 32 32 Supervisor module-1X 1/10 Gbps Ethernet Module 10 Gbps Ethernet Module Mod --5 9 10 Sw -------------5. Unlike other switching platforms from Cisco. and rate mode (dedicated or shared) is also displayed.2(1) 5. SVI.output removed ----- . Information about the switch port mode.1 1000 1500 -------------------------------------------------------------------------------Ethernet VLAN Type Mode Status Reason Speed Port Interface Ch # -------------------------------------------------------------------------------Eth1/1 1 eth trunk up none 10G(D) -Eth1/2 1 eth trunk down SFP not inserted 10G(D) 60 Eth1/3 1 eth access down SFP not inserted 10G(D) -Eth1/4 -eth routed down SFP not inserted 10G(D) -Eth1/5 1 eth fabric up none 10G(D) 105 Eth1/6 1 eth access down SFP not inserted 10G(D) -Eth1/7 1 eth access down SFP not inserted 10G(D) -Eth1/8 1 eth access down SFP not inserted 10G(D) – ----.2(1) N7K-SUP1 N7K-F132XP-15 N7K-M132XP-12 active * ok ok Hw -----1.1 2. VLAN. on Nexus there is no differentiation in the naming convention for the different speeds.16. Example 5-9 shows the output of the show interface brief command.1.2(1) 5. subinterfaces Management: Management On the Nexus platform. where the name of the interface also indicates the speed. tunnels. loopback.8 1. Example 5-9 show interface brief Command Click here to view code image N5K-A# show interface brief -------------------------------------------------------------------------------Port VRF Status IP Address Speed MTU -------------------------------------------------------------------------------mgmt0 -up 172. For interfaces that are down. it also lists the reason why the interface is down.0 ----. The show interface brief command lists the most relevant interface parameters for all interfaces on a Cisco Nexus Series switch.output removed ----- Verifying Interfaces The Cisco Nexus switches running NX-OS software support the following types of interfaces: Physical: Ethernet (10/100/1000/10G/40G) Logical: Port channel.

The show interface command displays the operational state of any interface, including the reason why that interface might be down. It also shows details about the port speed, MTU, MAC address, description, and traffic statistics of the interface. Example 5-10 shows the output of the show interface ethernet 1/1 command.

Example 5-10 Displaying Interface Operational State

Click here to view code image

N5K-A# show interface ethernet 1/1
Ethernet1/1 is up
  Hardware: 1000/10000 Ethernet, address: 0005.73cb.6cc8 (bia 0005.73cb.6cc8)
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 3week(s) 6day(s)
  Last clearing of "show interface" counters never
  30 seconds input rate 1449184 bits/sec, 181148 bytes/sec, 237 packets/sec
  30 seconds output rate 1566080 bits/sec, 195760 bytes/sec, 252 packets/sec
----- output removed -----

Interfaces on the Cisco Nexus Series switches have many default settings that are not visible in the show running-config command. The default configuration parameters for any interface may be displayed using the show running-config interface all command. Example 5-11 shows the output of the show running-config interface ethernet 1/1 all command.

Example 5-11 Showing Default Configuration

Click here to view code image

N5K-A# show running-config interface ethernet 1/1 all

!Command: show running-config interface Ethernet1/1 all
!Time: Sun Sep 21 17:39:33 2014

version 5.0(3)N2(2)

interface Ethernet1/1
  no description
  lacp port-priority 32768
  lacp rate normal
  priority-flow-control mode auto
  lldp transmit
  lldp receive
  no switchport block unicast
  no switchport block multicast
  no hardware multicast hw-hash
  no cdp enable
----- output removed -----

The highlighted portions of the CLI include default parameters.

Viewing Logs

By default, the device outputs messages to terminal sessions and to a log file. You can display or clear messages in the log file. Use the following commands:

The show logging last number-lines command displays the last number of lines in the logging file.

The show logging logfile [start-time yyyy mmm dd hh:mm:ss] [end-time yyyy mmm dd hh:mm:ss] command displays the messages in the log file that have a time stamp within the span entered.

The clear logging logfile command clears the contents of the log file.

Example 5-12 shows the output of the show logging logfile command.

Example 5-12 Viewing Log File

Click here to view code image

N5K-A# sh logging logfile
2014 Aug 24 22:40:45 N5K-A %CDP-6-CDPDUP: CDP Daemon Up.
2014 Aug 24 22:40:45 N5K-A ntpd[4744]: ntp:precision = 1.000 usec
2014 Aug 24 22:40:45 N5K-A %STP-6-STATE_CREATED: Internal state created stateless
2014 Aug 24 22:40:45 N5K-A %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 105 that is connected with Ethernet1/5 changed its status from Created to Configured
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: pss init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: conditional action being done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ARP lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ppf lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In global cfg db
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In vlan cfg db
2014 Aug 24 22:40:46 N5K-A last message repeated 12 times
2014 Aug 24 22:40:47 N5K-A %SYSLOG-3-SYSTEM_MSG: Syslog could not be sent to server(10.14.113.4) : No such file or directory
----- output removed -----

Viewing NVRAM Logs

To view the NVRAM logs, use the show logging nvram [last number-lines] command. The clear logging nvram command clears the logged messages in NVRAM. Example 5-13 shows the output of the show logging nvram command.

Example 5-13 Viewing NVRAM Logs

Click here to view code image

N5K-A# sh logging nvram
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_FEX_ONLINE: FEX-105 On-line
2012 Jul 8 08:30:26 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online

2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 Module 1: Runtime diag detected major event: Voltage failure on power supply: 1
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 System minor alarm on power supply 1: failed
2012 Jul 10 09:24:26 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105 Recovered: System minor alarm on power supply 1: failed
2013 Jan 13 01:17:35 N5K-A %$ VDC-1 %$ Jan 13 01:17:35 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 13 12:21:59 N5K-A %$ VDC-1 %$ Mar 13 12:21:59 %KERN-2-SYSTEM_MSG: IO Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 23 06:52:08 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_ERR_TEMPMINALRM: System minor temperature alarm on module 1, Sensor Outlet crossed minor threshold 50C
----- output removed -----

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 5-9 lists a reference of these key topics and the page numbers on which each is found.

Table 5-9 Key Topics for Chapter 5

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Data Center Network Manager (DCNM)
Application Specific Integrated Circuit (ASIC)
Frame Check Sequence (FCS)
Unified Port Controller (UPC)
Unified Crossbar Fabric (UCF)
Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
Dynamic Host Configuration Protocol (DHCP)
Connectivity Management Processor (CMP)
Virtual Device Context (VDC)
Role Based Access Control (RBAC)
Simple Network Management Protocol (SNMP)
Remote Network Monitoring (RMON)
Fabric Extender (FEX)
Management Information Base (MIB)
Generic Online Diagnostics (GOLD)
Extensible Markup Language (XML)
In Service Software Upgrade (ISSU)
Field-Programmable Gate Array (FPGA)
Complex Instruction Set Computing (CISC)
Reduced Instruction Set Computing (RISC)

Part I Review

Keep track of your part review progress with the checklist shown in Table PI-1. Details on each task follow the table.

Table PI-1 Part I Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items. This exercise lists a set of key self-assessment questions, shown in the following list, which are relevant to the particular chapter and which will help you remember the concepts and technologies of these key topics as you progress with the book. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. If you're not certain, you should go back and refresh on the key topics. There will be no written answers to these questions in this book.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, number 1 is about Data Center Network Architecture, number 2 is about the Cisco Nexus Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter. For example, Virtual Port Channel (vPC) would apply to Chapter 1.

Part II: Data Center Storage-Area Networking

Exam topics covered in Part II:
Describe the Network Architecture for the Storage Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity
Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management
Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs
Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

Chapter 6: Data Center Storage Architecture
Chapter 7: Cisco MDS Product Family
Chapter 8: Virtualizing Storage
Chapter 9: Fibre Channel Storage-Area Networking

Chapter 6. Data Center Storage Architecture

This chapter covers the following exam topics:
Describe the Network Architecture for the Storage-Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity

Every day, thousands of devices are newly connected to the Internet. Devices previously only plugged into a power outlet are connected to the Internet and are sending and receiving tons of data at scale. Powerful technology trends include the dramatic increase in processing power, storage, and bandwidth at ever-lower costs; the rapid growth of cloud, social media, and mobile computing; the ability to analyze Big Data and turn it into actionable information; and an improved capability to combine technologies (both hardware and software) in more powerful ways. Gartner predicts that worldwide consumer digital storage needs will grow from 329 exabytes in 2011 to 4.1 zettabytes in 2016. Companies are searching for more ways to efficiently manage expanding volumes of data and to make that data accessible throughout the enterprise data centers. This demand is pushing the move of storage into the network. Storage area networks (SAN) are the leading storage infrastructure of today. SANs offer simplified storage management, scalability, flexibility, availability, and improved data access, movement, and backup.

This chapter discusses the function and operation of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, and network-attached storage (NAS) connectivity for remote server storage. It covers Fibre Channel protocol and operations. This chapter goes directly into the edge/core layers of the SAN design and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 6-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following options describe advantages of storage-area network (SAN)? (Choose all the correct answers.)
a. Consolidation
b. Storage virtualization
c. Business continuity
d. Secure access to all hosts
e. None

2. Which of the following options describe advantages of block-level storage systems? (Choose all the correct answers.)
a. Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
b. Block-level storage systems are well suited for bulk file storage.
c. Block-level storage systems are generally inexpensive when compared to file-level storage systems.
d. Block-level storage systems are very popular with storage area networks.
e. They can support external boot of the systems connected to them.

3. Which of the following protocols are file based? (Choose all the correct answers.)
a. CIFS
b. Fibre Channel
c. SCSI
d. NFS

4. Which of the following options are correct for Fibre Channel addressing?
a. The domain ID is an 8-bit field.
b. On a Cisco MDS switch, only 239 domains are available to the fabric.
c. The arbitrated loop physical address (AL-PA) is a 16-bit address.
d. Every HBA, array controller, switch, gateway, and Fibre Channel disk drive has a single unique nWWN.
e. A dual-ported HBA has three WWNs: one nWWN, and one pWWN for each port.

5. Which options describe the characteristics of Tier 1 storage? (Choose all the correct answers.)
a. Integrated large-scale disk array
b. Centralized controller and cache system
c. Back-up storage product
d. It is used for mission-critical applications
e. Low latency

6. Which of the following options should be taken into consideration during SAN design?
a. Port density and topology requirements
b. Device performance and oversubscription ratios
c. Traffic management
d. Fault isolation

7. Which of the following options are correct for VSANs? (Choose all the correct answers.)
a. Membership is typically defined using the VSAN ID to Fx ports.
b. An HBA or storage device can belong to multiple VSANs.
c. An HBA or a storage device can belong to only a single VSAN—the VSAN associated with the Fx port.
d. On a Cisco MDS switch, one can define 4096 VSANs.

8. Which process enables an N Port to exchange information about ULP support with its target N Port, to ensure that the initiator and target process can communicate?
a. FLOGI
b. PLOGI
c. PRLI
d. PRLO

9. Which type of port is used to create an ISL on a Fibre Channel SAN?
a. E
b. F
c. TN
d. NP

10. Which mapping table is inside the front-end array controller and determines which LUNs are advertised through which storage array ports?

a. LUN mapping
b. LUN masking
c. Zoning
d. LUN zoning
e. No correct answer exists

Foundation Topics

What Is a Storage Device?

Data is stored on hard disk drives that can be both read and written on. A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially. IBM introduced the first HDD in 1956, and HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units; most current units are manufactured by Seagate, Toshiba, and Western Digital.

The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB, where 1 gigabyte = 1 billion bytes). As of 2014, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSD). HDDs are expected to remain the dominant medium for secondary storage due to predicted continuing advantages in recording capacity, price per unit of storage, write latency, and product lifetime. However, SSDs are replacing HDDs where speed, power consumption, and durability are more important considerations. HDDs are accessed over one of a number of bus types, including as of 2011 parallel ATA (PATA, also called IDE or EIDE, described before the introduction of SATA as ATA), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel.

The basic interface for all modern drives is straightforward. The drive consists of a large number of sectors (512-byte blocks), each of which can be read or written. The sectors are numbered from 0 to n – 1 on a disk with n sectors; 0 to n – 1 is thus the address space of the drive. Multisector operations are possible; indeed, many file systems will read or write 4KB at a time (or more). Thus, we can view the disk as an array of sectors. Depending on the methods that are used to run those tasks, the read and write function can be faster or slower.

A platter is a circular hard surface on which data is stored persistently by inducing magnetic changes to it. A disk can have one or more platters; each platter has two sides, each of which is called a surface. These platters are usually made of some hard material, such as aluminum, and then coated with a thin magnetic layer that enables the drive to persistently store bits even when the drive is powered off. The platters are all bound together around the spindle, which is connected to a motor that spins the platters around (while the drive is powered on) at a constant (fixed) rate.

The rate of rotation is often measured in rotations per minute (RPM), and typical modern values are in the 7,200 RPM to 15,000 RPM range. Note that we will often be interested in the time of a single rotation; for example, a drive that rotates at 10,000 RPM means that a single rotation takes about 6 milliseconds (6 ms).

Data is encoded on each surface in concentric circles of sectors; we call one such concentric circle a track. There are usually 1024 tracks on a single disk. A specific sector must be referenced through a three-part address composed of cylinder/head/sector information; all corresponding tracks from all disks define a cylinder (see Figure 6-1).

Figure 6-1 Components of Hard Disk Drive

And cache is just some small amount of memory (usually around 8 or 16 MB), which the drive can use to hold data read from or written to the disk. For example, when reading a sector from the disk, the drive might decide to read in all of the sectors on that track and cache them in its memory; doing so enables the drive to quickly respond to any subsequent requests to the same track.

Although the internal system of cylinders, tracks, and sectors is interesting, it is also not used much anymore by the systems and subsystems that use disk drives. Cylinder, track, and sector addresses have been replaced by a method called logical block addressing (LBA), which makes disks much easier to work with by presenting a single flat address space. To a large degree, logical block addressing facilitates the flexibility of storage networks by allowing many types of disk drives to be integrated more easily into a large heterogeneous storage environment.

The most widespread standard for configuring multiple hard drives is Redundant Array of Inexpensive/Independent Disks (RAID), which comes in several standard configurations and nonstandard configurations. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (RAID 0, RAID 1). Each scheme provides a different balance between the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. In Chapter 8, “Virtualizing Storage,” all the RAID levels are discussed in great detail. JBOD (derived from “just a bunch of disks”) is an architecture involving multiple hard drives, while making them accessible either as independent hard drives or as a combined (spanned) single logical volume with no actual RAID functionality. Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager.
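Both the rotation-time arithmetic and the CHS-to-LBA flattening described above can be shown in a few lines. This is a minimal sketch: the geometry constants below are invented for illustration, since real drives report their own geometry (and modern drives virtualize it entirely).

```python
# Illustrative only: a made-up drive geometry, not any real product's numbers.
RPM = 10_000
SECONDS_PER_MIN = 60

# One full rotation at 10,000 RPM takes about 6 ms, as noted above.
rotation_time_ms = SECONDS_PER_MIN / RPM * 1000
print(f"{rotation_time_ms} ms per rotation")  # 6.0 ms

HEADS_PER_CYLINDER = 4   # hypothetical geometry
SECTORS_PER_TRACK = 64   # hypothetical geometry

def chs_to_lba(cylinder: int, head: int, sector: int) -> int:
    """Flatten a cylinder/head/sector address into one flat LBA address space.

    Sectors in CHS addressing start at 1, while LBA starts at 0.
    """
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))  # 0 -> first sector of the drive
print(chs_to_lba(2, 1, 9))  # (2*4 + 1) * 64 + 8 = 584
```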

Generally, a disk array provides increased availability, resiliency, and maintainability by using existing components (controllers, power supplies, fans, disk enclosures, cache, access ports, and so on), often up to the point where all single points of failure (SPOF) are eliminated from the design. Additionally, disk array components are often hot swappable. Disk arrays are divided into categories:

Network attached storage (NAS) arrays are based on file-level storage. In this type of storage, the storage disk is configured with a particular protocol (such as NFS, CIFS, and so on) and files are stored and accessed from it as such, in bulk. It stores files and folders and is visible as such to both the systems storing the files and the systems accessing them. The file-level storage device itself can generally handle operations such as access control, integration with corporate directories, and the like. Advantages of file-level storage systems are the following:

A file-level storage system is simple to implement and simple to use.
File-level storage systems are generally inexpensive when compared to block-level storage systems.
File-level storage systems are more popular with NAS-based storage systems (Network Attached Storage).
File-level storage systems are well suited for bulk file storage.
They can be configured with common file-level protocols like NTFS (Windows), NFS (Linux), and so on.

Storage-area network (SAN) arrays are based on block-level storage. The raw blocks (storage volumes) are created, and each block can be controlled like an individual hard drive; these blocks are controlled by the server-based operating systems. Each block or storage volume can be formatted with the file system required by the application (NFS/NTFS/SMB). The following are advantages of block-level storage systems:

Block-level storage systems offer better performance and speed than file-level storage systems.
Each block or storage volume can be treated as an independent disk drive and is controlled by external server OS.
Each block or storage volume can be individually formatted with the required file system.
Block-level storage systems are very popular with SANs.
Block-level storage systems are more reliable, and their transport systems are very efficient.
They can support external boot of the systems connected to them.
Block-level storage can be used to store files and also provide the storage required for special applications like databases, Virtual Machine File Systems (VMFS), and the like.
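The block-versus-file distinction can be sketched with ordinary Python file I/O, where a local file stands in for a volume. This is only an illustration: real block-level access would target a device, a real file-level client would speak NFS or CIFS, and demo.img is just a hypothetical name.

```python
import os

PATH = "demo.img"  # hypothetical backing file standing in for a volume
BLOCK = 512

with open(PATH, "wb") as f:
    f.write(b"\x00" * 8 * BLOCK)  # a tiny 8-block "volume"

# "Block-level" access: address data purely by block number; no names inside.
fd = os.open(PATH, os.O_RDWR)
os.lseek(fd, 3 * BLOCK, os.SEEK_SET)
os.write(fd, b"B" * BLOCK)          # write block 3 as one fixed-length unit
os.lseek(fd, 3 * BLOCK, os.SEEK_SET)
print(os.read(fd, BLOCK)[:4])       # b'BBBB'
os.close(fd)

# "File-level" access: a file system (or NAS protocol) resolves names to blocks.
with open(PATH, "rb") as f:
    f.seek(3 * BLOCK)
    print(f.read(4))                # b'BBBB'
```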

Primary vendors of storage systems include EMC Corporation, Hewlett-Packard, IBM, Hitachi Data Systems, Dell, NetApp, Infortrend, Oracle Corporation, and other companies that often act as OEMs for the previously mentioned vendors and do not themselves market the storage components they manufacture.

Servers are connected to the connection port of the disk subsystem using standard I/O techniques such as Small Computer System Interface (SCSI), Fibre Channel, or Internet SCSI (iSCSI) and can thus use the storage capacity that the disk subsystem provides (see Figure 6-2). The internal structure of the disk subsystem is completely hidden from the server, which sees only the hard disks that the disk subsystem provides to the server.

Figure 6-2 Servers Connected to a Disk Subsystem Using Different I/O Techniques

Direct-attached storage (DAS) is often implemented within a parallel SCSI implementation. DAS is commonly described as captive storage. Devices in a captive storage topology do not have direct access to the storage network and do not support efficient sharing of storage. To access data with DAS, a user must go through some sort of front-end network. DAS devices provide little or no mobility to other servers and little scalability. DAS devices limit file sharing and can be complex to implement and manage. For example, to support data backups, DAS devices require resources on the host and spare disk systems that cannot be used on other systems.

The controller of the disk subsystem must ultimately store all data on physical hard disks. Standard I/O techniques such as SCSI, Serial Attached SCSI (SAS), Fibre Channel, and increasingly Serial ATA (SATA) and Serial Storage Architecture (SSA) are being used for internal I/O channels between connection ports and controllers as well as between controllers and internal hard disks. Sometimes, however, proprietary (that is, manufacturer-specific) I/O techniques are used.

The I/O channels can be designed with built-in redundancy to increase the fault-tolerance of a disk subsystem. There are four main I/O channel designs:

Active: In active cabling the individual physical hard disks are connected via only one I/O channel. If this access path fails, it is no longer possible to access the data.

Active/passive: In active/passive cabling the individual hard disks are connected via two I/O channels. In normal operation the controller communicates with the hard disks via the first I/O channel, and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk subsystem switches from the first to the second I/O channel.

Active/active (no load sharing): In this cabling method the controller uses both I/O channels in normal operation. The hard disks are divided into two groups: in normal operation the first group is addressed via the first I/O channel and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via the other I/O channel.

Active/active (load sharing): In this approach, all hard disks are addressed via both I/O channels. In normal operation the controller divides the load dynamically between the two I/O channels so that the available hardware can be optimally utilized. If one I/O channel fails, the communication goes through the other channel only.

Magnetic tape data storage is typically used for offline, archival data storage. A tape drive is a data storage device that reads and writes data on a magnetic tape. A tape drive provides sequential access storage, unlike a disk drive, which provides random access storage. A disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times for sequential access after the tape is positioned. However, tape drives can stream data very quickly, comparable to hard disk drives. For example, as of 2010, Linear Tape-Open (LTO) supported continuous data transfer rates of up to 140 MBps.

A tape library, sometimes called a tape silo, tape robot, or tape jukebox, is a storage device that contains one or more tape drives, a number of slots to hold tape cartridges, a bar-code reader to identify tape cartridges, and an automated method for loading tapes (a robot). These devices can store immense amounts of data, currently ranging from 20 terabytes up to 2.1 exabytes of data, or multiple thousand times the capacity of a typical hard drive and well in excess of capacities achievable with network attached storage.

What Is a Storage-Area Network?

The Storage Networking Industry Association (SNIA) defines the meaning of the storage-area network (SAN) as a network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections and a management layer. This layer organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file-access services.

A SAN is a specialized, high-speed network that attaches servers and storage devices. A SAN allows an any-to-any connection across the network by using interconnect elements such as switches and directors. It eliminates the traditional dedicated connection between a server and storage, and the concept that the server effectively owns and manages the storage devices. It also eliminates any restriction to the amount of data that a server can access, currently limited by the number of storage devices that are attached to the individual server. Instead, a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. A network might include many storage devices, including disk, tape, and optical storage. In addition, the storage utility might be located far from the servers that it uses. Figure 6-3 portrays a sample SAN topology.

Figure 6-3 Storage Connectivity

The key benefits that a SAN might bring to a highly data-dependent business infrastructure can be summarized into three concepts: simplification of the infrastructure, information life-cycle management (ILM), and business continuity.

The simplification of the infrastructure consists of six main areas. Consolidation is concentrating the systems and resources into locations with fewer, but more powerful, servers and storage pools that can help increase IT efficiency and simplify the infrastructure. In addition, centralized storage management tools can help improve scalability, availability, and disaster tolerance. Storage virtualization helps in making complexity nearly transparent and can offer a composite view of storage assets, which can improve availability and responsiveness and help protect data as storage needs grow. We talk about storage virtualization in detail in Chapter 8.

Automation is choosing storage components with autonomic capabilities. As soon as day-to-day tasks are automated, storage administrators might be able to spend more time on critical, higher-level tasks that are unique to the company's business mission. Integrated storage environments simplify system management tasks and improve security. When all servers have secure access to all data, your infrastructure might be better able to respond to the information needs of your users.

Information life-cycle management (ILM) is a process for managing information through its life cycle, from conception until intentional disposal. The ILM process manages this information in a manner that optimizes storage and maintains a high level of access at the lowest cost. A SAN implementation makes it easier to manage the information life cycle because it integrates applications and data into a single-view system in which the information resides.

Business continuity (BC) is about building and improving resilience in your business; it's about identifying your key products and services and the most urgent activities that underpin them. After that analysis is complete, it is about devising plans and strategies that will enable you to continue your business operations and enable you to recover quickly and effectively from any type of disruption, whatever its size or cause. It gives you a solid framework to lean on in times of crisis and provides stability and security. In fact, embedding BC into your business is proven to bring business benefits. SANs play a key role in the business continuity: by deploying a consistent and safe infrastructure, SANs make it possible to meet any availability requirements.

A storage system consists of storage elements, storage devices, computer systems, and appliances, plus all control software, which communicates over a network. Depending on the implementation, several components can be used to build a SAN, including directors, switches, hubs, routers, gateways, and bridges. It is the deployment of these different types of interconnect entities that enables Fibre Channel networks of varying scale to be built. In smaller SAN environments you can deploy hubs for Fibre Channel arbitrated loop topologies, or switches and directors for Fibre Channel switched fabric topologies. As SANs increase in size and complexity, Fibre Channel directors can be introduced to facilitate a more flexible and fault-tolerant configuration. Each component that composes a Fibre Channel SAN provides an individual management capability and participates in an often-complex end-to-end management environment.

Fibre Channel is a serial I/O interconnect that is capable of supporting multiple protocols, including access to open system storage (FCP), access to mainframe storage (FICON), and networking (TCP/IP). Fibre Channel supports point-to-point, arbitrated loop, and switched topologies with various copper and optical links that are running at speeds from 1 Gbps to 16 Gbps. The committee that is standardizing Fibre Channel is the INCITS Fibre Channel (T11) Technical Committee. A Fibre Channel network might be composed of many types of interconnect entities, so any combination of devices that can interoperate is likely to be used; given this definition, then, it is a network. Storage subsystems, storage devices, and server systems can be attached to a Fibre Channel SAN.

How to Access a Storage Device

Each new storage technology introduced a new physical interface, electrical interface, and storage protocol.

SCSI is an American National Standards Institute (ANSI) standard that is one of the leading I/O buses in the computer industry. The first version of the SCSI standard was released in 1986. Since then, SCSI has been continuously developed. SCSI is a block-level I/O protocol for writing and reading data blocks to and from a storage device. The SCSI bus is a parallel bus, which comes in a number of variants. The intelligent interface model abstracts device-specific details from the storage protocol and decouples the storage protocol from the physical and electrical interfaces. This enables multiple storage technologies to use a common storage protocol.

In general, the details of data organization vary depending on the particular system. In this chapter we will be categorizing data in three major groups: blocks, records, and files.

Blocks, sometimes called physical records, are sequences of bytes or bits, usually containing some whole number of records having a maximum length. This type of structured data is called blocked. Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, although the block size in file systems may be a multiple of the physical block size.

Records are similar to a file system. There are several record formats; the formats can be fixed-length or variable length, with different physical organizations or padding mechanisms. Metadata may be associated with the file records to define the record length, or the data might be part of the record. Different methods to access records may be provided, for example, sequential, by key, or by record number.

Files are granular containers of data created by the system or an application. By separating the data into individual pieces and giving each piece a name, the information is easily separated and identified. The structure and logic rule used to manage the groups of information and their names is called a file system.

Block-Level Protocols

Block-oriented protocols (also known as block-level protocols) read and write individual fixed-length blocks of data. Data transfer is also governed by standards, as shown in Table 6-2.

Table 6-2 SCSI Standards Comparison

The International Committee for Information Technology Standards (INCITS) is the forum of choice for information technology developers, producers, and users for the creation and maintenance of formal daily IT standards. The Information Technology Industry Council (ITI) sponsors INCITS (see Figure 6-4). INCITS is accredited by and operates under rules approved by the American National Standards Institute (ANSI). These rules are designed to ensure that voluntary standards are developed by the consensus of directly and materially affected interests. The standard process is that a technical committee (T10, T11, T13) prepares drafts. Drafts are sent to INCITS for approval. After the drafts are approved by INCITS, drafts become standards and are published by ANSI. ANSI promotes American National Standards to ISO as a joint technical committee member (JTC-1).

Figure 6-4 Standard Groups: Storage

the SCSI protocol is half-duplex by design and is considered a command/response protocol. One SCSI device (the initiator) initiates communication with another SCSI device (the target) by issuing a command. Figure 6-5 SCSI Commands. and Block Data Between Initiators and Targets The logical unit number (LUN) identifies a specific logical unit (virtual controller) in a target. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller.As a medium. Status. Thus. LUN 0. A target must have at least one LUN. but SCSI operates as a master/slave model. whereas a subsystem might allow hundreds of LUNs to be defined. A logical unit can be thought of as a “black box” processor. to which a response is expected. All SCSI devices are intelligent. Also note that array-based replication software requires a storage controller in the initiating storage array to act as initiator both externally and internally. via a LUN. and the LUN is a way to identify SCSI black boxes. The initiating device is usually a SCSI controller. Although we tend to use the term LUN to refer to a real or virtual storage device. The process of provisioning storage in a SAN storage subsystem involves defining a LUN on a particular target port and then assigning that particular target/LUN pair to a specific logical unit. Table 6-2 lists the SCSI standards. Over time. An . SCSI targets have logical units that provide the processing context for SCSI commands. A SCSI controller in a modern storage array acts as a target externally and acts as an initiator internally. The bus can be realized in the form of printed conductors on the circuit board or as a cable. and might optionally support additional LUNs. a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. SCSI defines a parallel bus for the transmission of data with additional lines for the control of communication. a disk drive might use a single LUN. Essentially. SCSI storage devices typically are called targets (see Figure 6-5). so SCSI controllers typically are called initiators. Logical units are architecturally independent of target ports and can be accessed through any of the target’s ports. For instance. a LUN is an access point for exchanging commands and status information between initiators and targets. A so-called daisy chain can connect up to 16 devices together. numerous cable and plug types have been defined that are not directly compatible with one another.

An individual logical unit can be represented by multiple LUNs on different ports. For instance, a logical unit could be accessed through LUN 1 on Port 0 of a target and also accessed as LUN 8 on port 1 of the same target.

Depending on the version of the SCSI standard, a maximum of 8 or 16 IDs are permitted per SCSI bus. Therefore, the operating system must note three things for the differentiation of devices: controller ID, SCSI ID, and LUN. The bus, target, and LUN triad is defined from parallel SCSI technology. The bus represents one of several potential SCSI interfaces that are installed in the host. A server can be equipped with several SCSI controllers, each supporting a separate string of disks. The target represents a single disk controller on the string.

Figure 6-6 SCSI Addressing

Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they can send data through it. During the arbitration of the bus, the device that has the highest priority SCSI ID always wins. The SCSI protocol permitted only eight IDs, with the ID 7 having the highest priority. More recent versions of the SCSI protocol permit 16 different IDs. For reasons of compatibility, the IDs 7 to 0 should retain the highest priority so that the IDs 15 to 8 have a lower priority (see Figure 6-6). If the bus is heavily loaded, this can lead to devices with lower priorities never being allowed to send data. The SCSI arbitration procedure is therefore “unfair.”
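The priority ordering is easy to mis-remember, so here is a tiny sketch of the rule as stated above (7 down to 0 first, then 15 down to 8). This is an illustration only, not code from any real SCSI stack.

```python
def arbitration_rank(scsi_id: int) -> int:
    """Return a rank for a SCSI ID; lower rank wins arbitration.

    IDs 7..0 get ranks 0..7 (highest priority); IDs 15..8 get ranks 8..15.
    """
    if not 0 <= scsi_id <= 15:
        raise ValueError("Wide SCSI allows IDs 0-15 only")
    return (7 - scsi_id) if scsi_id <= 7 else (15 - scsi_id + 8)

def arbitrate(contenders):
    """Pick the winning device among the IDs contending for the bus."""
    return min(contenders, key=arbitration_rank)

# ID 6 beats ID 15 even though 15 is numerically larger; among the
# lower-priority group, 15 outranks 8.
assert arbitrate([15, 6, 3]) == 6
assert arbitrate([8, 9, 15]) == 15
```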

It is important to understand that while SCSI-1 and SCSI-2 represent actual standards that completely define SCSI (connectors, cables, signals, and command protocol), SCSI-3 is an all-encompassing term that refers to a collection of standards that were written as a result of breaking SCSI-2 into smaller, hierarchical modules that fit within a general framework called SCSI Architecture Model (SAM). In SCSI-3, even faster bus types are introduced, along with serial SCSI buses that reduce the cabling overhead and allow a higher maximum bus length.

The SCSI version 3 (SCSI-3) application client resides in the host and represents the upper-layer application, file system, and operating system I/O requests. The SCSI-3 device server sits in the target device, responding to requests. The SCSI protocol layer sits between the operating system and the peripheral resources, so it has different functional components (see Figure 6-7).

Figure 6-7 SCSI I/O Channel, Fibre Channel I/O Channel, and TCP/IP I/O Networking

Applications typically access data as files or records. Although this information might be stored on disk or tape media in the form of data blocks, retrieval of the file requires a hierarchy of functions. These functions assemble raw data blocks into a coherent file that an application can manipulate.

It is often also assumed that SCSI provides in-order delivery to maintain data integrity. In other words, the SCSI protocol does not provide its own reordering mechanism, and the network is responsible for the reordering of transmission frames that are received out of order. In-order delivery was traditionally provided by the SCSI bus and, therefore, was not needed by the SCSI protocol layer; the SCSI protocol assumes that proper ordering is provided by the underlying connection technology. This is the main reason why TCP was considered essential for the iSCSI protocol that transports SCSI commands and data transfers over IP networking equipment: TCP provided ordering while other upper-layer protocols, such as UDP, did not.

As always, the demands and needs of the market push for new technologies; there is always a push for faster communications without limitations on distance or on the number of connected devices. It is at this point where the Fibre Channel model is introduced.

The SCSI protocol is suitable for block-based, structured applications such as database applications that require many I/O operations per second (IOPS) to achieve high performance. IOPS (Input/output Operations Per Second, pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage-area networks (SAN). The specific number of IOPS possible in any system configuration will vary greatly, depending on the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and queue depth, and the data block sizes. For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle. Often, sequential IOPS are reported as a simple MB/s number as follows:

IOPS × TransferSizeInBytes = BytesPerSec (and typically this is converted to megabytes per sec)
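As a quick illustration of this conversion, here is the formula as a one-line function; the 10,000-IOPS, 8192-byte workload is a made-up example.

```python
def iops_to_mb_per_sec(iops: float, transfer_size_bytes: int) -> float:
    """IOPS x TransferSizeInBytes = BytesPerSec, converted here to MB/s."""
    return iops * transfer_size_bytes / 1_000_000

# Hypothetical workload: 10,000 sequential IOPS at an 8192-byte block size.
print(iops_to_mb_per_sec(10_000, 8192))  # 81.92 MB/s
```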

SCSI messages and data can be transported in several ways:

Parallel SCSI cable: This transport is mainly used in traditional deployments. Latency is low, but distance is limited to 25 m and the bus is half-duplex, so data can flow in only one direction at a time.

Fibre Channel cable: This transport is the basis for a traditional SAN deployment. SCSI is carried in the payload of a Fibre Channel frame between Fibre Channel ports. Latency is low and bandwidth is high (16 Gbps). Fibre Channel has a lossless delivery mechanism by using buffer-to-buffer credits (BB_Credits), which are explained in detail later in this chapter.

iSCSI (SCSI over TCP/IP): Internet Small Computer System Interface (iSCSI) is a transport protocol that carries SCSI commands from an initiator to a target. It is a data storage networking protocol that transports standard SCSI requests over the standard Transmission Control Protocol/Internet Protocol (TCP/IP) networking technology. iSCSI enables the implementation of IP-based SANs, enabling clients to use the same networking technologies for both storage and data networks. Because it uses TCP/IP, iSCSI is also suited to run over almost any physical network. By eliminating the need for a second network technology just for storage, iSCSI has the potential to lower the costs of deploying networked storage.

Fibre Channel connection (FICON) architecture: An enhancement of, rather than a replacement for, the traditional IBM Enterprise Systems Connection (ESCON) architecture. FICON is a protocol that uses Fibre Channel as its physical medium. FICON channels can achieve data rates up to 850 MBps and extend the channel distance (up to 100 km). FICON can also increase the number of control unit images per link and the number of device addresses per control unit link. The protocol can also retain the topology and switch management characteristics of ESCON. FICON is a prerequisite for IBM z/OS systems to fully participate in a heterogeneous SAN, where the SAN switch devices allow the mixture of open systems and mainframe traffic.

Fibre Channel over Ethernet (FCoE): This transport replaces the Fibre Channel cabling with 10 Gigabit Ethernet cables and provides lossless delivery over unified I/O. FCoE is discussed in detail in Chapters 10 and 11.

Fibre Channel over IP (FCIP): Also known as Fibre Channel tunneling or storage tunneling, this is a method to allow the transmission of Fibre Channel information to be tunneled through the IP network. FCIP encapsulates Fibre Channel block data and then transports it over a TCP socket. TCP/IP services are used to establish connectivity between remote SANs. Any congestion control and management, and data error and data loss recovery, is handled by TCP/IP services and does not affect Fibre Channel fabric services. The major consideration with FCIP is that it does not replace Fibre Channel with IP; it allows deployments of Fibre Channel fabrics by using IP tunneling. Because most organizations already have an existing IP infrastructure, the attraction of being able to link geographically dispersed SANs, at a relatively low cost, is enormous. The assumption that this might lead to is that the industry decided that Fibre Channel-based SANs are more than appropriate. Another possible assumption is that the only need for the IP connection is to facilitate any distance requirement that is beyond the current scope of an FCP SAN.

At the time of writing this book, a SAN is Fibre Channel-based (FC-based). Figure 6-8 portrays a networking stack comparison for all block I/O protocols.

Figure 6-8 Networking Stack Comparison for All Block I/O Protocols

Fibre Channel provides high-speed transport for SCSI payload via Host Bus Adapter (HBA). (See Figure 6-9.) HBAs are I/O adapters that are designed to maximize performance by performing protocol-processing functions in silicon. HBAs are roughly analogous to network interface cards (NIC), but HBAs are optimized for SANs and provide features that are specific to storage.

Fibre Channel overcomes many shortcomings of parallel I/O, including addressing for up to 16 million nodes, loop (shared) and fabric (switched) transport, host speeds of 100 to 1600 MBps (1–16 Gbps), and support for multiple protocols, and it combines the best attributes of a channel and a network together.

Figure 6-9 Comparison Between Fibre Channel HBA and Ethernet NIC

Figure 6-9 contrasts HBAs with NICs, illustrating that HBAs offload protocol-processing functions into silicon. With NICs, software drivers perform protocol-processing functions such as flow control, sequencing, segmentation and reassembly, and error correction. The HBA offloads these protocol-processing functions onto the HBA hardware with some combination of an ASIC and firmware. Offloading these functions is necessary to provide the performance that storage networks require.

Fibre Channel Protocol (FCP) has an ANSI-based layered architecture that can be considered a general transport vehicle for upper-layer protocols (ULP) such as SCSI command sets, HIPPI data framing, Internet Protocol (IP), and others. Figure 6-10 shows an overview of the Fibre Channel model, which is divided into four lower layers (FC-0, FC-1, FC-2, and FC-3) and one upper layer (FC-4). FC-4 is where the upper-level protocols are used, such as SCSI-3, IP, and Fibre Channel connection (FICON).

Figure 6-10 Fibre Channel Protocol Architecture

These are the main functions of each Fibre Channel layer:

FC-4 upper-layer protocol (ULP) mapping: Provides protocol mapping to identify the ULP that is encapsulated into a protocol data unit (PDU) for delivery to the FC-2 layer. FCP is the FC-4 mapping of SCSI-3 onto FC. Note that FCP does not define its own header; instead, fields within the FC-2 header are used by FCP.

FC-3 generic services: Provides the Fibre Channel Generic Services (FC-GS) that is required for fabric management, including protocols for hunt group, alias address definition, multicast management, and stacked connect-requests. Specifications exist here but are rarely implemented.

FC-2 framing and flow control: Provides the framing and flow control that is required to transport the ULP over Fibre Channel. FC-2 functions include several classes of service, frame format definition, exchange management, sequence disassembly and reassembly, address assignment, and error control.

FC-1 encoding: Defines the transmission protocol that includes the serial encoding, decoding, and error control. FC-1 variants up to and including 8 Gbps use the same encoding scheme (8B/10B) as GE fiber optic variants.

FC-0 physical interface: Provides physical connectivity, including cabling, connectors, and so on.

In the initial FC specification, rates of 12.5 MBps, 25 MBps, 50 MBps, and 100 MBps were introduced on several different copper and fiber media. Additional rates were subsequently introduced, including 200 MBps and 400 MBps. Fibre Channel throughput is commonly expressed in bytes per second rather than bits per second: 100 MBps FC is also known as 1-Gbps FC, 200 MBps as 2-Gbps, and 400 MBps as 4-Gbps.

The FC-PH specification defines baud rate as the encoded bit rate per second. The FC-PI specification redefines baud rate more accurately and states explicitly that FC encodes 1 bit per baud, which means the baud rate and raw bit rate are equal. 1-Gbps FC operates at 1.0625 GBaud, provides a raw bit rate of 1.0625 Gbps, and provides a data bit rate of 850 Mbps. 2-Gbps FC operates at 2.125 GBaud, provides a raw bit rate of 2.125 Gbps, and provides a data bit rate of 1.7 Gbps. 4-Gbps FC operates at 4.25 GBaud, provides a raw bit rate of 4.25 Gbps, and provides a data bit rate of 3.4 Gbps. Table 6-3 shows the evolution of Fibre Channel speeds.

Table 6-3 Evolution of Fibre Channel Speeds

To derive ULP throughput, the FC-2 header and interframe spacing overhead must be subtracted. The basic FC-2 header adds 36 bytes of overhead; inter-frame spacing adds another 24 bytes. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the ULP throughput rate is 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps for 1 Gbps, 2 Gbps, and 4 Gbps, respectively. These ULP throughput rates are available directly to SCSI. The 1-Gbps, 2-Gbps, 4-Gbps, and 8-Gbps FC designs all use 8B/10B encoding, and the 10G and 16GFC standards use 64B/66B encoding. Unlike the 10-Gbps FC standards, 16-Gbps FC provides backward compatibility with 4-Gbps FC and 8-Gbps FC.

File-Level Protocols

File-oriented protocols (also known as file-level protocols) read and write variable-length files. Files are segmented into blocks before being stored on disk or tape. Common Internet File System (CIFS) and Network File System (NFS) are file-based protocols that are used for reading and writing files across a network. CIFS is found primarily on Microsoft Windows servers (a Samba service implements CIFS on UNIX systems), and NFS is found primarily on UNIX and Linux servers.

The theory of client/server architecture is based on the concept that one computer has the resources that another computer requires. The system with the resources is called the server, and the system that requires the resources is called the client. Examples of resources are e-mail, database, and files. The client and the server communicate with each other through established protocols. A distributed (client/server) network might contain multiple servers and multiple clients, or multiple clients and one server. The configuration of the network depends on the resource requirement of the environment. The benefits of client/server architecture include cost reduction because of hardware and space requirements: the local workstations do not need as much disk space because commonly used data can be stored on the server. Other benefits include centralized support (backups and maintenance) that is performed on the server, to allow for easy recovery in the event of server failure.

NFS is a widely used protocol for sharing files across networks. NFS is stateless. In Figure 6-11, the server in the network is a network appliance storage system, and the client can be one of many versions of the Unix or Linux operating system. The storage system provides services to the client. These services include mount daemon (mountd), Status Monitor (sm_1_main), Network Lock Manager (nlm_main), NFS daemon (nfsd), quota daemon (rquot_1_main), and portmap (also known as rpcbind). Each service is required for successful operation of an NFS process. For example, if rpcbind is not running on the server, NFS communication cannot be established between the client and the server. Similarly, a client cannot mount a resource if mountd is not running on the server.

Figure 6-11 Block I/O Versus File I/O

Block I/O and File I/O are complementary solutions for reading and writing data to and from storage devices. Block I/O uses the SCSI protocol to transfer data in 512-byte blocks. SCSI is an efficient protocol and is the accepted standard within the industry; SCSI may be transported over many physical media. SCSI is mostly used in Fibre Channel SANs to transfer blocks of data between servers and storage arrays in a high-bandwidth, low-latency redundant SAN. Block I/O protocols are suitable for well-structured applications, such as databases that transfer small blocks of data when updating fields, records, and tables.

CIFS and NFS are verbose protocols that send thousands of messages between the servers and NAS devices when reading and writing files. Because CIFS and NFS are transported over TCP/IP, latency is high. One benefit of NAS storage is that files can be shared easily between users within a workgroup. These file-based protocols are suitable for file-based applications: Microsoft Office, Microsoft SharePoint, and File and Print services.

In the scenario portrayed in Figure 6-11, you already figured out why Fibre Channel was introduced. SCSI is limited by distance, number of storage devices per chain, lack of device sharing, speed, resiliency, and management flexibility. Fibre Channel was introduced to overcome these limitations. In contrast to NAS, Fibre Channel offers a greater sustained data rate (1 GFC–16 GFC at the time of writing this book), loop or switched networks for device sharing and low latency, virtually unlimited devices in a fabric, distances of hundreds of kilometers or greater with extension devices, nondisruptive device addition if necessary, and centralized management with local or remote access.

Putting It All Together

Fibre Channel SAN, FCoE SAN, iSCSI SAN, and NAS are four techniques with which storage networks can be realized (see Figure 6-12). In Fibre Channel, FCoE, and iSCSI the data exchange between servers and storage devices takes place in a block-based fashion, and Fibre Channel supplies optimal performance for the data exchange between server and storage device. In contrast, NAS transfers files or file fragments. NAS servers are turnkey file servers; storage networks can be realized with NAS servers by installing an additional LAN between the NAS server and the application servers. On the other hand, NAS servers have only limited suitability as data storage for databases due to lack of performance.

Figure 6-12 DAS, FCoE, iSCSI, and NAS Comparison

SANs include the back end along with front-end components. Front-end data communications are considered host-to-host or client/server; back-end data communication is host-to-storage or program-to-device. Network Attached Storage (NAS) appliances reside on the front end. Generally, file-level data access applies to IP front-end networks, and block-level data access applies to Fibre Channel (FC) back-end networks. Front-end protocols include FTP, NFS, SMB (server message block—Microsoft NT), CIFS (common Internet file system), or NCP (NetWare Core Protocol—Novell). Back-end protocols include SCSI, FCoE, and IDE, along with file-system formatting standards such as FAT and NTFS.

Storage Architectures

Storage systems are designed in a way that customers can purchase some amount of capacity initially, with the option of adding more as the existing capacity fills up with data. Increasing storage capacity is an important task for keeping up with data growth. The cost of the added capacity depends on the price of the devices, which can be relatively high with some storage systems, plus facilities and maintenance costs. This is the reason IT organizations tend to spend such a high percentage of their budget on storage every year.

Scale-up and Scale-out Storage

Storage systems increase their capacity in one of two ways: by adding storage components to an existing system or by adding storage systems to an existing group of systems. Scaling architectures determine how much capacity can be added, how quickly it can be added, the impact on application availability, and how much work and planning is involved.

An architecture that adds capacity by adding storage components to an existing system is called a scale-up architecture, and products designed with it are generally classified as monolithic storage. A scale-up solution is a way to increase only the capacity, but not the number of the storage controllers. High availability with scale-up storage is achieved through component redundancy that eliminates single points of failure, and by software upgrade processes that take several steps to complete, allowing system updates to be rolled back if necessary. Scale-up storage systems tend to be the least flexible and involve the most planning and longest service interruptions when adding capacity.

An architecture that adds capacity by adding individual storage systems, or nodes, to a group of systems is called a scale-out architecture. Products designed with this architecture are often classified as distributed; a scale-out solution is like a set of multiple arrays with a single logical view. The lifespan of storage products built on these architectures is typically four years or less.

Tiered Storage

Many storage environments support a diversity of needs and use disparate technologies that cause storage sprawl. In a large-scale storage infrastructure, this environment yields a suboptimal storage design that can be improved only with a focus on data access characteristics analysis and management to provide optimum performance. The Storage Networking Industry Association (SNIA) standards group defines tiered storage as “storage that is physically partitioned into multiple distinct classes based on price, performance or other attributes. Data may be dynamically moved among classes in a tiered storage implementation based on access activity or other considerations.”

Tiered storage is a mix of high-performing/high-cost storage with low-performing/low-cost storage, placing data based on specific characteristics, such as the performance needs, age, and importance of data availability. In Figure 6-13, you can see the concept and a cost-versus-performance relationship of a tiered storage environment.

Figure 6-13 Cost Versus Performance in a Tiered Storage Environment

Typically, an optimal design keeps the active operational data in Tier 0 and Tier 1 and uses the benefits associated with a tiered storage approach, mostly related to cost. By introducing solid-state drive (SSD) storage as Tier 0, you might more efficiently address the highest performance needs while reducing the Enterprise-class storage, footprint, and energy costs. A tiered storage approach can provide the performance you need and save significant costs associated with storage, because lower-tier storage is less expensive. Environmental savings, such as energy, system footprint, and cooling reductions, are possible. However, the overall management effort increases when managing storage capacity and storage performance needs across multiple storage classes. Table 6-4 lists the different storage tier levels.

Table 6-4 Comparison of Storage Tier Levels .

SAN Design

SAN design doesn't have to be rocket science. Modern SAN design is about deploying ports and switches in a configuration that provides flexibility and scalability. It is also about making sure the network design and topology look as clean and functional one, two, or five years later as the day they were first deployed.

In the traditional FC SAN design, each host and storage device is dual-attached to the network. This is primarily motivated by a desire to achieve 99.999 percent availability. To achieve 99.999 percent availability, the network should be built with redundant switches that are not interconnected. The FC network is built on two separate networks (commonly called path A and path B), and each end node (host or storage) is connected to both networks. Figure 6-14 illustrates typical dual-path FC SAN designs. Because the traditional FC SAN design doubles the cost of network implementation, many companies are actively seeking alternatives. Some companies take the same approach with their traditional IP/Ethernet networks, but most do not for reasons of cost. Some companies are looking to iSCSI and FCoE as the answer, and others are considering single-path FC SANs.

Figure 6-14 SAN A and SAN B FC Networks

Infrastructures for data access necessarily include options for redundant server systems.

Although server systems might not necessarily be thought of as storage elements, they are clearly key pieces of data access infrastructures. From a storage I/O perspective, servers are the starting point for most data access. Server filing systems are clearly in the domain of storage. Redundant server systems and filing systems are created through one of two approaches: clustered systems or server farms. Farms are loosely coupled individual systems that have common access to shared data, and clusters are tightly coupled systems that function as a single, fault-tolerant system.

Multipathing establishes two or more SCSI communication connections between a host system and the storage it uses. If one of these communication connections fails, another SCSI communication connection is used in its place. Multipathing software depends on having redundant I/O paths between systems and storage. This includes switches and network cables that are being used to transfer I/O data between a computer and its storage. In general, changing an I/O path involves changing the initiator used for I/O transmissions and, by extension, all downstream connections. Figure 6-15 portrays two types of multipathing: Active/Active, with balanced I/O over both paths (implementation specific), and Active/Passive, with I/O over the primary path that switches to the standby path upon failure.

Figure 6-15 Storage Multipathing Failover

A single multiport HBA can have two or more ports connecting to the SAN that can be used by multipathing software. However, although multiport HBAs provide path redundancy, most current multipathing implementations use dual HBAs to provide redundancy for the HBA adapter.
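The Active/Passive policy in Figure 6-15 can be sketched as a simple try-the-primary, fall-back-to-standby loop. This is a toy illustration of the failover behavior described above, not vendor multipathing code; the path names and the simulated failure are invented.

```python
import random

class PathDown(Exception):
    """Raised when an I/O cannot be completed on the chosen path."""

def send_io(path: str, block: int) -> str:
    # Stand-in for a real SCSI write; path "A" fails half the time here
    # to simulate a link or switch problem on the primary path.
    if path == "A" and random.random() < 0.5:
        raise PathDown(path)
    return f"block {block} written via path {path}"

def write_active_passive(block: int, paths=("A", "B")) -> str:
    """Issue I/O on the primary path; switch to the standby path on failure."""
    for path in paths:
        try:
            return send_io(path, block)
        except PathDown:
            continue  # primary failed; try the next (standby) path
    raise RuntimeError("all paths failed")

print(write_active_passive(42))
```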

Multipathing is not the only automated way to recover from a network problem in a SAN. SAN switches use the Fabric Shortest Path First (FSPF) routing protocol to converge new routes through the network following a change to the network configuration, including link or switch failures. FSPF is the standard routing protocol used in Fibre Channel fabrics. FSPF automatically calculates the best path between any two devices in a fabric by dynamically computing routes, establishing the shortest and quickest path between any two devices. It also selects an alternative path in the event of failure of the primary path. Although FSPF itself provides for optimal routing between nodes, the Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric.

Multipathing software in a host system can use new network routes that have been created in the SAN by switch routing algorithms. This depends on switches in the network recognizing a change to the network and completing their route convergence process prior to the I/O operation timing out in the multipathing software. Considering this, it could be advantageous to implement multipathing so that the timeout values for storage paths exceed the times needed to converge new network routes. As long as the new network route allows the storage path's initiator and LUN to communicate, the storage process uses the route.

SAN Design Considerations

The underlying principles of SAN design are relatively straightforward: plan a network topology that can handle the number of ports necessary now and into the future; design a network topology with a given end-to-end performance and throughput level in mind, taking into account any physical requirements of the design; and provide the necessary connectivity with remote data centers to handle the business requirements of business continuity and disaster recovery. These underlying principles fall into five general categories:

Port density and topology requirements: Number of ports required now and in the future
Device performance and oversubscription ratios: Determination of what is acceptable and what is unavoidable
Traffic management: Preferential routing or resource allocation
Fault isolation: Consolidation while maintaining isolation
Control plane scalability: Reduced routing complexity

1. Port Density and Topology Requirements

The single most important factor in determining the most suitable SAN design is determining the number of end ports required now and over the anticipated lifespan of the design. The design for a SAN that will handle a network with 100 end ports will be very different from the design for a SAN that has to handle a network with 1500 end ports. From a design standpoint, it is typically better to overestimate the port count requirements than to underestimate them. Designing for a 1500-port SAN does not necessarily imply that 1500 ports need to be purchased initially, or even at all. It is about helping ensure that a design remains functional if that number of ports is attained, rather than later finding the design is unworkable, because redesigning and reengineering a network topology become both more time-consuming and more difficult as the number of devices on a SAN expands.

As a minimum, the lifespan for any design should encompass the depreciation schedule for equipment, typically three years or more. Preferably, a design should last longer than this. Figure 6-16 portrays the major SAN design factors.

Figure 6-16 SAN Major Design Factors

For new environments, it is not difficult to plan based on an estimate of the immediate server connectivity requirements, coupled with an estimated growth rate of 30 percent per year. Where existing SAN infrastructure is present, determining the approximate port count requirements is not difficult: you can use the current number of end-port devices and the increase in the number of devices during the previous 6, 12, and 18 months as rough guidelines for the projected growth in the number of end-port devices in the future. But once again, it is more difficult to determine future port-count growth requirements.

A design should also consider physical space requirements. For example, is the data center all on one floor? Is it all in one building? Is there a desire to use lower-cost connectivity options such as iSCSI for servers with minimal I/O requirements? Do you want to use IP SAN extension for disaster recovery connectivity? Although it is difficult to predict future requirements and capabilities, any design should also consider increases in future port speeds, protocols, and densities. Unused module slots in switches that have a proven investment-protection record open the possibility of future expansion.
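As a back-of-the-envelope check, this Python snippet projects end-port counts using the 30 percent annual growth estimate mentioned above; the starting port count is an assumed example value.

ports = 400                      # assumed current end-port count
growth_rate = 0.30               # 30% per year, per the guideline above
for year in range(1, 4):         # a three-year depreciation horizon
    ports *= 1 + growth_rate
    print(f"Year {year}: ~{round(ports)} end ports")
# Year 1: ~520, Year 2: ~676, Year 3: ~879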

2. Device Performance and Oversubscription Ratios

Oversubscription, in a SAN switching environment, is the practice of connecting multiple devices to the same switch port to optimize switch use. SAN switch ports are rarely run at their maximum speed for a prolonged period. Because storage subsystem I/O resources are not commonly consumed at 100 percent all the time by a single client, multiple slower devices may fan in to a single port to take advantage of unused capacity. Oversubscription is a necessity of any networked infrastructure and directly relates to the major benefit of a network: to share common resources among numerous clients. The higher the rate of oversubscription, the lower the cost of the underlying network infrastructure and shared resources.

When considering all the performance characteristics of the SAN infrastructure and the servers and storage devices, two oversubscription metrics must be managed: the IOPS and network bandwidth capacity of the SAN. The two metrics are closely related, although they pertain to different elements of the SAN. IOPS performance relates only to the servers and storage devices and their ability to handle high numbers of I/O operations, whereas bandwidth capacity relates to all devices in the SAN, including the SAN infrastructure itself. On the server side, the required bandwidth is strictly derived from the I/O load, which is derived from factors including I/O size, percentage of reads versus writes, application I/O requests, and I/O service time from the target device. On the storage side, the supported bandwidth is again strictly derived from the IOPS capacity of the disk subsystem itself, including the system architecture, cache, disk controllers, and actual disks. Figure 6-17 portrays the major SAN oversubscription factors.

Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1. It is important to know the fan-out ratio in a SAN design so that each server gets optimal access to storage resources. Key factors in deciding the optimum fan-out ratio are server HBA queue depth, storage device input/output operations per second (IOPS), CPU capacity, and port throughput. A fan-out ratio of storage subsystem ports can be achieved based on the I/O demands of various applications and server platforms. When the fan-out ratio is high and the storage array becomes overloaded, application performance will be affected negatively. Too low a fan-out ratio, however, results in an uneconomic use of storage.

Note
A fan-out ratio is the relationship in quantity between a single port on a storage device and the number of servers that are attached to it. Fan-in is how many storage ports can be served from a single host channel.
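The fan-out arithmetic is easy to sanity-check in Python; the server and port counts below are arbitrary example values, and the 7:1-to-15:1 band is the vendor guideline range cited above.

servers_attached = 96            # hosts sharing the array
storage_ports = 8                # client-side ports on the storage subsystem
fan_out = servers_attached / storage_ports
print(f"Fan-out ratio: {fan_out:.0f}:1")           # 12:1
print("Within 7:1-15:1 guideline:", 7 <= fan_out <= 15)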

Figure 6-17 SAN Oversubscription Design Considerations

In most cases, neither application server HBAs nor disk subsystem client-side controllers can handle full wire-rate sustained bandwidth. Although ideal scenario tests can be contrived using larger I/Os, large CPUs, and sequential I/O operations to show wire-rate performance, this is far from a practical real-world implementation. In more common scenarios, I/O composition, server-side resources, and application I/O patterns do not result in sustained full-bandwidth utilization. Because of this fact, oversubscription can be safely factored into SAN design. However, you must account for burst I/O traffic, which might temporarily require high-rate I/O service.

The general principle in optimizing design oversubscription is to group applications or servers that burst high I/O rates at different time slots within the daily production cycle. This grouping can examine either complementary application I/O profiles or careful scheduling of I/O-intensive activities such as backups and batch jobs. In this case, peak-time I/O traffic contention is minimized, and the SAN design oversubscription has little effect on I/O contention.

Best practice would be to build a SAN design using a topology that derives a relatively conservative oversubscription ratio (for example, 8:1), coupled with monitoring of the traffic on the switch ports connected to storage arrays and Inter-Switch Links (ISL) to see whether bandwidth is a limiting factor. If bandwidth is not the limiting factor, application server performance is acceptable, and application performance can be monitored closely, the oversubscription ratio can be increased gradually to a level that both maximizes performance and minimizes cost.

3. Traffic Management

Are there any differing performance requirements for different application servers? Should bandwidth be reserved or preference be given to traffic in the case of congestion? Given two alternate traffic paths between data centers with differing distances, should traffic use one path in preference to the other? For some SAN designs, it makes sense to implement traffic management policies that influence traffic flow and relative traffic priorities.

4. Fault Isolation

Consolidating multiple areas of storage into a single physical fabric both increases storage utilization and reduces the administrative overhead associated with centralized storage management. The major drawback is that faults are no longer isolated within individual storage areas. Many organizations would like to consolidate their storage infrastructure into a single physical fabric, but both technical and business challenges make this difficult. Technology such as virtual SANs (VSANs; see Figure 6-18) enables this consolidation while increasing the security and stability of Fibre Channel fabrics by logically isolating devices that are physically connected to the same set of switches. Faults within one fabric are contained within a single fabric (VSAN) and are not propagated to other fabrics.

Figure 6-18 Fault Isolation with VSANs

5. Control Plane Scalability

A SAN switch can be logically divided into two parts: a data plane, which handles the forwarding of data frames within the SAN, and a control plane, which handles switch management functions, routing protocols such as Fabric Shortest Path First (FSPF), routing updates and keepalives, name server and domain-controller queries, Fibre Channel frames destined for the switch itself, and other Fibre Channel fabric services. Because the control plane is critical to network operations, any service disruption to the control plane can result in business-impacting network outages. Control plane scalability is the primary reason storage vendors set limits on the number of switches and devices they have certified and qualified for operation in a single fabric.

Control plane service disruptions (perpetrated either inadvertently or maliciously) are possible, typically through a high rate of traffic destined to the switch itself. These result in excessive CPU utilization and/or deprive the switch of CPU resources for normal processing. Control plane CPU deprivation can also occur when there is insufficient control plane CPU relative to the size of the network topology and a networkwide event (for example, the loss of a major switch or a significant change in topology) occurs. FSPF is the standard routing protocol used in Fibre Channel fabrics. The Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric. That is, doubling the number of devices in a SAN can result in a quadrupling of the CPU processing required to maintain that routing. A goal of SAN design should be to try to minimize the processing required with a given SAN topology. Attention should be paid to the CPU and memory resources available for control plane functionality and to port aggregation features such as Cisco port channels, which provide all the benefits of multiple parallel ISLs between switches (higher throughput and resiliency) but appear in the topology as only a single logical link rather than multiple parallel links.
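The quadratic relationship mentioned above can be illustrated with a short Python loop: doubling the node count quadruples the relative route-computation cost.

for nodes in (16, 32, 64):
    print(f"{nodes} nodes -> relative FSPF cost {nodes ** 2}")
# 16 -> 256, 32 -> 1024, 64 -> 4096: each doubling quadruples the cost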

SAN Topologies

The Fibre Channel standard defines three different topologies: fabric, arbitrated loop, and point-to-point (see Figure 6-19). Point-to-point defines a bidirectional connection between two devices. Arbitrated loop defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time. Finally, fabric defines a network in which several devices can exchange data simultaneously at full bandwidth. A fabric basically requires one or more Fibre Channel switches connected together to form a control center between the end devices. Furthermore, the standard permits the connection of one or more arbitrated loops to a fabric. The fabric topology is the most frequently used of all topologies, and this is why more emphasis is placed upon the fabric topology than on the two other topologies in this chapter.

Figure 6-19 Fibre Channel Topologies

Common to all topologies is that devices (servers, storage devices, and switches) must be equipped with one or more Fibre Channel ports. In servers, the port is generally realized by means of so-called HBAs. The connection between two ports is called a link. A port always consists of two channels, one input and one output channel. In the point-to-point topology and in the fabric topology, the links are always bidirectional: in this case, the input channel and the output channel of the two ports involved in the link are connected by a cross, so that every output channel is connected to an input channel. On the other hand, the links of the arbitrated loop topology are unidirectional: each output channel is connected to the input channel of the next port until the circle is closed. Bandwidth is shared equally among all connected devices, so the more active devices that are connected, the less bandwidth is available to each. The cabling of an arbitrated loop can be simplified with the aid of a hub. In this configuration, the end devices are bidirectionally connected to the hub, and the wiring within the hub ensures that the unidirectional data flow within the arbitrated loop is maintained.

A switched fabric topology allows dynamic interconnections between nodes through ports connected to a fabric. It is possible for any port in a node to communicate with any port in another node connected to the same fabric. Adding a new device increases aggregate bandwidth. Switched fabrics have theoretical address support for over 16 million connections, compared to arbitrated loop's limit of 126, but that exceeds the practical and physical limitation of the switches that make up a fabric. Also, switched FC fabrics are aware of arbitrated loop devices when attached. Like Ethernet, any port can communicate with any other port using cut-through or store-and-forward switching. Unlike Ethernet, there is a limit to the number of FC switches that can be interconnected: address space constraints limit FC SANs to a maximum of 239 switches. Cisco's virtual SAN (VSAN) technology increases the number of switches that can be physically interconnected by reusing the entire FC address space within each VSAN. FC switches can be interconnected in any manner, and they employ a routing protocol called Fabric Shortest Path First (FSPF) based on a link-state algorithm. FSPF reduces all physical topologies to a logical tree topology.

The complexity and the size of FC SANs increase as the number of connections increases. Most FC SANs are deployed in one of two designs, commonly known as the two-tier (core-edge) and three-tier (edge-core-edge) topology designs. The core-only design is a star topology. In both the core-only and core-edge designs, the redundant paths are usually not interconnected. The edge switches in the core-edge design may be connected to both core switches, but this creates one physical network and compromises resilience against networkwide disruptions (for example, FSPF convergence). Host-to-storage FC connections are generally redundant; however, single host-to-storage FC connections are common in cluster and grid environments because host-based failover mechanisms are inherent to such environments. Figure 6-20 illustrates the typical FC SAN topologies.

Figure 6-20 Sample FC SAN Topologies

The Top-of-Rack (ToR) and End-of-Row (EoR) designs represent how access switches and servers are connected to each other. Both have a direct impact over a major part of the entire data center cabling system. ToR designs are based on intra-rack cabling between servers and smaller switches, which can be installed in the same racks as the servers. Although these designs reduce the amount of cabling and optimize the space used by network equipment, they offer the storage team the challenge of managing a higher number of devices. EoR designs are based on inter-rack cabling between servers and high-density switches installed on the same row as the server racks. EoR designs reduce the number of network devices and optimize port utilization on the network devices. But EoR flexibility taxes data centers with a great quantity of horizontal cabling running under the raised floor. The best design choice leans on the number of servers per rack, the data rate for the connections, the budget, and the operational complexity.

The increase in blade server deployments and the consolidation of servers through the use of server virtualization technologies affect the design of the network. In a SAN design, the number of end devices determines the number of fabric logins needed. With the use of features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins needed has increased even more. The proliferation of NPIV-capable end devices such as host bus adapters (HBAs) and Cisco NPV-mode switches makes the number of fabric logins needed on a per-port, per-line-card, per-switch, and per-physical-fabric basis a critical consideration. The total number of hosts and NPV switches determines the number of fabric logins required on the core switch. The fabric login limits determine the design of the current SAN as well as its potential for future growth.

SAN designs that can be built using a single physical switch are commonly referred to as collapsed core designs (see Figure 6-21). This terminology refers to the fact that such a design is conceptually a core/edge design, making use of core ports (non-oversubscribed) and edge ports (oversubscribed), but that it has been collapsed into a single physical switch.

Typically, a collapsed core design on Cisco MDS 9000 family switches would utilize both non-oversubscribed (storage) and oversubscribed (host-optimized) line cards.

Figure 6-21 Sample Collapsed Core FC SAN Design

SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between the host and storage so that I/O uses both paths and there is nondisruptive failover between fabrics in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide increased physical protection (for example, protection against a sprinkler head failing above a switch) and protection against equipment failure.

Fibre Channel

Fibre Channel (FC) is the predominant architecture upon which SAN implementations are built. Fibre Channel is a technology standard that allows data to be transferred at extremely high speeds; current implementations support data transfers at up to 16 Gbps or even more. There are many products on the market that take advantage of the high-speed and high-availability characteristics of the Fibre Channel architecture.

Fibre Channel was developed through industry cooperation, unlike Small Computer System Interface (SCSI), which was developed by a vendor and submitted for standardization afterward. Many standards bodies, technical associations, vendors, and industry-wide consortiums accredit the Fibre Channel standard.

Note
Is it Fibre or Fiber? Fibre Channel was originally designed to support fiber-optic cabling only. When copper support was added, the committee decided to keep the name in principle but to use the UK English spelling (Fibre) when referring to the standard. The U.S. English spelling (Fiber) is retained when referring generically to fiber optics and cabling.

Fibre Channel is an open, technical standard for networking that incorporates the channel transport characteristics of an I/O bus with the flexible connectivity and distance characteristics of a traditional network. Because of its channel-like qualities, hosts and applications see storage devices that are attached to the SAN as though they are locally attached storage. Because of its network characteristics, it can support multiple protocols and a broad range of devices, and it can be managed as a network.

Fibre Channel is structured by a lengthy list of standards. Figure 6-22 portrays just a few of them. Detailed information on all FC standards can be downloaded from T11 at www.t11.org.

Figure 6-22 Fibre Channel Standards

The Fibre Channel protocol stack is subdivided into five layers (see Figure 6-10 in the "Block Level Protocols" section). The lower four layers, FC-0 to FC-3, define the fundamental communication techniques: the physical levels, the transmission, and the addressing. The upper layer, FC-4, defines how application protocols (upper layer protocols, or ULPs) are mapped on the underlying Fibre Channel network. The use of the various ULPs decides whether a real Fibre Channel network is used as an IP network, a Fibre Channel SAN (as a storage network), or both at the same time. The link services and fabric services are located quasi-adjacent to the Fibre Channel protocol stack. These services are required to administer and operate a Fibre Channel network.

FC-0: Physical Interface

FC-0 defines the physical transmission medium (cable, plug) and specifies which physical signals are used to transmit the bits 0 and 1. In contrast to the SCSI bus, in which each bit has its own data line plus additional control lines, Fibre Channel transmits the bits sequentially via a single line.

Fibre Channel can use either optical fiber (for distance) or copper cable links (for short distance at low cost). Copper cables tend to be used for short distances, up to 30 meters (98 feet), and can be identified by their DB-9, 9-pin connector. Mixing fiber-optic and copper components in the same environment is supported, although not all products provide that flexibility. Product flexibility needs to be considered when you plan a SAN.

Normally, fiber-optic cabling is referred to by mode, or the frequencies of light waves that are carried by a particular cable type. Single-mode fiber (SMF) allows only one pathway, or mode, of light to travel within the fiber. The core size is typically 8.3 microns. SMFs are used in applications where low signal loss and high data rates are required, such as on long spans between two system or network devices, where repeater and amplifier spacing needs to be maximized. Single-mode fiber is for longer distances.

Multimode fiber (MMF) allows more than one mode of light. Common multimode core sizes are 50 micron and 62.5 micron. Multimode cabling is used with shortwave laser light and has either a 50-micron or a 62.5-micron core with a cladding of 125 microns. The 50-micron or 62.5-micron diameter is sufficiently large for injected light waves to be reflected off the core interior. MMF fiber is better suited for shorter-distance applications. Where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable. In this case, MMF is more economical because it can be used with inexpensive connectors and laser devices, which reduces the total system cost. Multimode fiber is for shorter distances.

Overall, optical fibers provide a high-performance transmission medium, which was refined and proven over many years. Fiber-optic cables have a major advantage in noise immunity. Fibre Channel architecture supports both shortwave and longwave optical transmitter technologies in the following ways:

Short wave laser: This technology uses a wavelength of 780 nanometers and is compatible only with MMF.
Long wave laser: This technology uses a wavelength of 1300 nanometers. It is compatible with both SMF and MMF.

To connect one optical device to another, a fiber link might need to be laid, but it is not as simple as connecting two switches together in a single rack. If the distance is short, a standard fiber cable suffices. Over a slightly longer distance, for example, from one building to the next, this fiber might need to be laid underground or through a conduit. If the two units that need to be connected are in different cities, the problem is much larger. Because most businesses are not in the business of laying cable, they lease fiber-optic cables to meet their needs. When a company leases equipment, the fiber-optic cable that they lease is known as dark fiber. Dark fiber generically refers to a long, dedicated fiber-optic link that can be used without the need for any additional equipment.

FC-1: Encode/Decode

FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable (the 8b/10b data transmission code scheme patented by IBM). FC-1 also describes certain transmission words (ordered sets) that are required for the administration of a Fibre Channel connection (link control protocol).

Encoding and Decoding

To transfer data over a high-speed serial interface, the data is encoded before transmission and decoded upon reception. The encoding process ensures that sufficient clock information is present in the serial data stream. This information allows the receiver to synchronize to the embedded clock information and successfully recover the data at the required error rate. The 8b/10b encoding process converts each 8-bit byte into two possible 10-bit characters. This scheme is called 8b/10b encoding because it refers to the number of data bits input to the encoder and the number of bits output from the encoder. This 8b/10b encoding finds errors that a parity check cannot: a parity check does not find the even-numbered bit errors, only the odd numbers, whereas the 8b/10b encoding logic finds almost all errors.

The format of the 8b/10b character is D/Kxx.y, where:
D = Data
K = Special character
"xx" = Decimal value of the 5 least significant bits
"y" = Decimal value of the 3 most significant bits

Communications at 10 and 16 Gbps use 64b/66b encoding. The overhead of the 64b/66b encoding is considerably less than that of the more common 8b/10b encoding scheme. Sixty-four bits of data are transmitted as a 66-bit entity. The 66-bit entity is made by prefixing one of two possible 2-bit preambles to the 64 bits to be transmitted. If the preamble is 01, the 64 bits are entirely data. If the preamble is 10, an 8-bit type field follows, plus 56 bits of control information and data. The preambles 00 and 11 are not used and generate an error if seen. The use of the 01 and 10 preambles guarantees a bit transition every 66 bits, which means that a continuous stream of zeros or ones cannot be valid data. It also allows easier clock and timer synchronization, because a transition must be seen every 66 bits.
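A two-line Python comparison makes the difference in coding efficiency explicit: 8b/10b carries 8 data bits in every 10 transmitted bits, whereas 64b/66b carries 64 in every 66.

print(f"8b/10b efficiency:  {8 / 10:.1%}")    # 80.0% (20% line overhead)
print(f"64b/66b efficiency: {64 / 66:.1%}")   # 97.0% (~3% line overhead)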

Ordered Sets

Fibre Channel uses a command syntax, which is known as an ordered set, to move the data across the network. The information sent on a link consists of transmission words containing four transmission characters each. A transmission word consists of four continuous transmission characters treated as a unit, aligned on a word boundary. There are two categories of transmission words: data words and ordered sets. Data words occur between the start-of-frame (SOF) and end-of-frame (EOF) delimiters. Ordered sets delimit frame boundaries and occur outside of frames. The ordered sets are 4-byte transmission words that contain data and special characters that have a special meaning. Ordered sets provide the ability to obtain bit and word synchronization, which also establishes word-boundary alignment. An ordered set always begins with the special character K28.5, which is outside the normal data space.

Three major types of ordered sets are defined by the signaling protocol:

The frame delimiters, the start-of-frame (SOF) and end-of-frame (EOF) ordered sets, establish the boundaries of a frame. They immediately precede or follow the contents of a frame. They are 40 bits long. There are 11 types of SOF and 8 types of EOF delimiters defined for the fabric and N_Port sequence control.

The two primitive signals, idle and receiver ready (R_RDY), are ordered sets designated by the standard to have a special meaning. An idle is a primitive signal that is transmitted on the link to indicate that an operational port facility is ready for frame transmission and reception. The R_RDY primitive signal indicates that the interface buffer is available for receiving further frames.

A primitive sequence is an ordered set that is transmitted and repeated continuously to indicate specific conditions within a port, or conditions encountered by the receiver logic of a port. When a primitive sequence is received and recognized, a corresponding primitive sequence or idle is transmitted in response. Recognition of a primitive sequence requires consecutive detection of three instances of the same ordered set. The primitive sequences supported by the standard are illustrated in Table 6-5.

Table 6-5 Fibre Channel Primitive Sequences

FC-2: Framing Protocol

FC-2 is the most comprehensive layer in the Fibre Channel protocol stack. It determines how larger data units (for example, a file) are transmitted via the Fibre Channel network. It regulates the flow control that ensures that the transmitter sends data only at a speed that the receiver can process it. And it defines various service classes that are tailored to the requirements of various applications. Multiple exchanges are initiated between initiators (hosts) and targets (disks). Each exchange consists of one or more bidirectional sequences (see Figure 6-23).

Figure 6-23 Fibre Channel FC-2 Hierarchy

Each sequence consists of one or more frames. A sequence is a larger data unit that is transferred from a transmitter to a receiver; a sequence could represent an individual database transaction, for example. Larger sequences therefore have to be broken down into several frames. FC-2 guarantees that sequences are delivered to the receiver in the same order they were sent from the transmitter, hence the name "sequence." Furthermore, sequences are only delivered to the next protocol layer up when all frames of the sequence have arrived at the receiver. Only one sequence can be transferred after another within an exchange. For the SCSI-3 ULP, each exchange maps to a SCSI command.

A Fibre Channel network transmits control frames and data frames. Control frames contain no useful data; they signal events such as the successful delivery of a data frame. Data frames transmit up to 2,112 bytes of useful data. A Fibre Channel frame consists of a header, useful data (payload), and a cyclic redundancy checksum (CRC) (see Figure 6-24). The frame is bracketed by a start-of-frame (SOF) delimiter and an end-of-frame (EOF) delimiter. Finally, six filling words must be transmitted on a link between two frames. The CRC checking procedure is designed to recognize all transmission errors if the underlying medium does not exceed the specified error rate of 10^-12. In contrast to Ethernet and TCP/IP, Fibre Channel is an integrated whole: the layers of the Fibre Channel protocol stack are so well harmonized with one another that the ratio of payload to protocol overhead is very efficient, at up to 98%.
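The 98% figure can be reproduced from the frame sizes just given. A full-size frame carries up to 2,112 bytes of payload, bounded by a 4-byte SOF, the standard 24-byte frame header, a 4-byte CRC, and a 4-byte EOF. A quick Python check:

payload = 2112                       # maximum useful data per frame
overhead = 4 + 24 + 4 + 4            # SOF + header + CRC + EOF = 36 bytes
print(f"{payload / (payload + overhead):.1%}")   # 98.3% payload efficiency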

Figure 6-24 Fibre Channel Frame Format

Fibre Channel Service Classes

The Fibre Channel standard defines six service classes for the exchange of data between end devices. Three of these defined classes (Class 1, Class 2, and Class 3) are realized in products available on the market. Class 1 defines a connection-oriented communication connection between two node ports: a Class 1 connection is opened before the transmission of frames. Class 2 and Class 3, on the other hand, are packet-oriented services (datagram services): no dedicated connection is built up. Instead, the frames are individually routed through the Fibre Channel network. A port can thus maintain several connections at the same time, and several Class 2 and Class 3 connections can share the bandwidth.

In Class 2, the receiver acknowledges each received frame. Class 2 uses end-to-end flow control and link flow control. Class 3 achieves less than Class 2: frames are not acknowledged. This means that only link flow control takes place, not end-to-end flow control, and the higher protocol layers must notice for themselves whether a frame has been lost.

Almost all new Fibre Channel products (HBAs, switches, storage devices) support the service classes Class 2 and Class 3, with hardly any products providing the connection-oriented Class 1. In Fibre Channel SAN implementations, the end devices themselves negotiate whether they communicate by Class 2 or Class 3. In addition, Class F serves for the data exchange between the switches within a fabric. Table 6-6 lists the different Fibre Channel classes of service.

Table 6-6 Fibre Channel Classes of Service

Fibre Channel Flow Control

Flow control ensures that the transmitter sends data only at a speed that the receiver can receive it. Fibre Channel uses a credit model. Each credit represents the capacity of the receiver to receive a Fibre Channel frame. If the receiver awards the transmitter a credit of 3, the transmitter may only send the receiver three frames. The transmitter may not send further frames until the receiver has acknowledged the receipt of at least some of the transmitted frames.

FC-2 defines two mechanisms for flow control: end-to-end flow control and link flow control (see Figure 6-25). In end-to-end flow control, two end devices negotiate the end-to-end credit before the exchange of data. The end-to-end flow control is realized on the HBA cards of the end devices. The link flow control, by contrast, takes place at each physical connection. This means that the link flow control also takes place at the Fibre Channel switches. Buffer-to-buffer credits are used in Class 2 and Class 3 services between a node and a switch. Two communicating ports negotiating the buffer-to-buffer credits achieve this. Initial credit levels are established with the fabric at login and then replenished upon receipt of receiver ready (R_RDY) from the target. The sender decrements one BB_Credit for each frame sent and increments one for each R_RDY received. BB_Credits are the more practical application and are very important for switches that are communicating across an extension (E_Port) because of the inherent delays present with transport over LAN/WAN/MAN.
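The BB_Credit bookkeeping described above is easy to express in code. This Python sketch models a sending port: one credit is consumed per frame sent, one is returned per R_RDY received, and transmission stalls when the count reaches zero.

class SendingPort:
    def __init__(self, negotiated_credits):
        self.credits = negotiated_credits   # established at fabric login
    def try_send_frame(self):
        if self.credits == 0:
            return False                    # throttled until R_RDY arrives
        self.credits -= 1                   # one receive buffer consumed
        return True
    def receive_r_rdy(self):
        self.credits += 1                   # receiver freed one buffer

port = SendingPort(3)
print([port.try_send_frame() for _ in range(4)])  # [True, True, True, False]
port.receive_r_rdy()
print(port.try_send_frame())                      # True: credit replenished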

Figure 6-25 Fibre Channel End-to-End and Buffer-to-Buffer Flow Control Mechanism

End-to-end credits (EE_Credit) are used in Class 1 and Class 2 services between two end nodes. After the initial credit level is set, the sender decrements one EE_Credit for each frame sent and increments one when an acknowledgement (ACK) is received.

Buffer-to-buffer credits (BB_Credit) are negotiated between each device in an FC fabric. FC flow control is critical to the efficiency of the SAN fabric. Flow control management occurs between two connected Fibre Channel ports to prevent a target device from being overwhelmed with frames from the initiator device. There is no concept of end-to-end buffering: FC frames are buffered and queued in intermediate switches, regardless of the number of switches in the network. One buffer is used per FC frame; a small FC frame uses the same buffer as a large FC frame, regardless of its frame size. Hop-by-hop traffic flow is paced by the return of receiver ready (R_RDY) frames, and a port can transmit only up to the number of BB_Credits before traffic is throttled. Insufficient BB_Credits will throttle performance: no data will be transmitted until R_RDY is returned. As distance increases or frame size decreases, the number of BB_Credits required increases as well. Figure 6-26 portrays Fibre Channel frame buffering on an FC switched fabric.

Figure 6-26 Fibre Channel Frame Buffering

The optimal buffer credit value can be calculated by determining the transmit time per FC frame and dividing that into the total round-trip time per frame. A Fibre Channel frame is ~2 KB (2148 bytes maximum). The transmit time for 2,000 characters (a 2 KB frame) is approximately 20 μsec (2,000 bytes at the 1 Gbps, or 100 MBps, FC data rate). The speed of light over a fiber-optic link is ~5 μsec per km, so for a 10 km link, the time it takes for a frame to travel from the sender to the receiver and for the response to return to the sender is 100 μsec (50 μsec × 2). If the sender waits for a response before sending the next frame, the maximum number of frames in a second is 10,000 (1 sec / 0.0001 = 10,000 frames per sec), which implies that the effective data rate is only ~20 MBps (2,000 bytes × 10,000). If the round-trip time is 100 μsec and the transmit time is 20 μsec per frame, then BB_Credit >= 5 (100 μsec / 20 μsec = 5). This helps keep the fiber-optic link full, without running out of BB_Credits.
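The same arithmetic in Python, using the numbers from the example above (a 2 KB frame at the 1 Gbps data rate and a 10 km link at ~5 μs/km):

import math

frame_tx_us = 2000 / 100          # 2,000 bytes at ~100 MBps = 20 us per frame
round_trip_us = 10 * 5 * 2        # 10 km x 5 us/km, out and back = 100 us
bb_credit = math.ceil(round_trip_us / frame_tx_us)
print(bb_credit)                  # 5 credits keep the 10 km link full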

Table 6-7 illustrates the required BB_Credits for specific FC frame sizes at specific link speeds.

Table 6-7 Fibre Channel BB_Credits

FC-3: Common Services

FC-3 has been in its conceptual phase since 1988. In currently available products, FC-3 is empty. The following functions are being discussed for FC-3:

Striping manages several paths between multiport end devices. Striping could distribute the frames of an exchange over several ports and thus increase the throughput between the two devices.

Multipathing combines several paths between two multiport end devices to form a logical path group. Failure or overloading of a path can be hidden from the higher protocol layers.

Compression and encryption of the data to be transmitted, preferably realized in the hardware on the HBA.

Fibre Channel Addressing

Fibre Channel uses World Wide Names (WWN) and Fibre Channel IDs (FCID). WWNs are unique identifiers that are hardcoded into Fibre Channel devices. FCIDs are dynamically acquired addresses that are routable in a switch fabric. Fibre Channel ports are intelligent interface points on the Fibre Channel SAN. They are found embedded in various devices, such as an I/O adapter, array or tape controller, gateway, switch, and switched fabric.

WWNs are important for enabling Cisco Fabric Services because they have these characteristics: they are guaranteed to be globally unique and are permanently associated with devices. These characteristics ensure that the fabric can reliably identify and locate devices. Vendors buy blocks of WWNs from the IEEE and allocate them to devices in the factory. Each Fibre Channel port has at least one WWN. The two types of WWNs are node WWNs (nWWN) and port WWNs (pWWN):

The nWWNs uniquely identify devices. Every HBA, array controller, switch, gateway, and Fibre Channel disk drive has a single unique nWWN. The nWWNs are required because the node itself must sometimes be uniquely identified. For example, path failover and multiplexing software can detect redundant paths to a device by observing that the same nWWN is associated with multiple pWWNs. This capability is an important consideration for Cisco Fabric Services.

The pWWNs uniquely identify each port in a device. Ports must be uniquely identifiable because each port participates in a unique data path. The nWWNs and pWWNs are both needed because devices can have multiple ports. On a single-port device, the nWWN and pWWN may be the same. On multiport devices, the pWWN is used to uniquely identify each port. A dual-ported HBA has three WWNs: one nWWN and one pWWN for each port.

When a management service or application needs to quickly locate a specific device, the following occurs:
1. The service or application queries the switch name server service with the WWN of the target device.
2. The name server looks up and returns the current port address that is associated with the target WWN.
3. The service or application communicates with the target device, using the port address.

Figure 6-27 portrays Fibre Channel addressing on an FC fabric.

Figure 6-27 Fibre Channel Naming on FC Fabric

The Fibre Channel point-to-point topology uses a 1-bit addressing scheme. One port assigns itself an address of 0x000000 and then assigns the other port an address of 0x000001.

The FC-AL topology uses an 8-bit addressing scheme. The arbitrated loop physical address (AL-PA) is an 8-bit address, which provides 256 potential addresses. However, only a subset of 127 addresses is available because of the 8b/10b encoding requirements. One address is reserved for an FL port, so 126 addresses are available for nodes. Addresses are cooperatively chosen during loop initialization.

Switched Fabric Address Space

The 24-bit Fibre Channel address consists of three 8-bit elements. The domain ID is used to define a switch. Each switch receives a unique domain ID. Figure 6-28 portrays FCID addressing.

Figure 6-28 FC ID Addressing
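Because the 24-bit address is simply three bytes, it can be unpacked with shifts and masks. In this Python sketch the sample FCID is an arbitrary example; the byte names follow the standard domain/area/port layout portrayed in Figure 6-28.

fcid = 0x6A01EF                   # arbitrary example address
domain = (fcid >> 16) & 0xFF      # domain ID: identifies the switch
area   = (fcid >> 8) & 0xFF       # area ID: group of ports within the domain
port   = fcid & 0xFF              # port ID: individual port
print(f"domain 0x{domain:02X}, area 0x{area:02X}, port 0x{port:02X}")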

The area ID is used to identify groups of ports within a domain. Areas can be used to group ports within a switch. Areas are also used to uniquely identify fabric-attached arbitrated loops; each fabric-attached loop receives a unique area ID.

Each switch must have a unique domain ID. Although the domain ID is an 8-bit field, only 239 domains are available to the fabric: domains 01 through EF are available, and domains 00 and F0 through FF are reserved for use by switch services. So there can be no more than 239 switches in a fabric. In practice, the number of switches in a fabric is far smaller because of storage-vendor qualifications.

Fibre Channel Link Services

Link services provide a number of architected functions that are available to the users of the Fibre Channel port. There are three types of link services defined, depending upon the type of function provided and whether the frame contains a payload: basic link services, extended link services, and FC-4 link services (generic services).

Basic link service commands support low-level functions, like aborting a sequence (ABTS) and passing control bit information.

Extended link services (ELS) are performed in a single exchange. Most of the ELSs are performed as a two-sequence exchange: a request from the originator and a response from the responder. ELS service requests are not permitted prior to a port login, except the fabric login. Following are ELS command examples: N_Port Login (PLOGI), F_Port Login (FLOGI), Logout (LOGO), Process Login (PRLI), Process Logout (PRLO), State Change Notification (SCN), State Change Registration (SCR), Registered State Change Notification (RSCN), and Loop Initialize (LINIT).

Port login is required for generic services. Generic services include the following: Directory Service, Management Services, Alias Services, Time Services, and Key Distribution Services. FC-GS shares a Common Transport (CT) at the FC-4 level, and the CT provides access to the services. The FC-CT (Fibre Channel Common Transport) protocol is used as a transport medium for these services.

Fabric Login (FLOGI) is used by the N_Port to discover whether a fabric is present. N_Ports perform fabric login by transmitting the FLOGI extended link service command to the well-known address 0xFFFFFE of the fabric F_Port and exchange service parameters. A new N_Port's fabric login source address is 0x000000, and the destination address is 0xFFFFFE. If a fabric is present, it provides operating characteristics associated with the fabric (service parameters, such as BB_Credit, maximum payload size, and class of service supported). The fabric also assigns or confirms (if trying to reuse an old ID) the N_Port identifier of the port initiating the login. Figure 6-29 portrays the FC fabric and port login/logout and query-to-switch process.

Figure 6-29 Fibre Channel Fabric, Port Login, and State Change Registration

1. FLOGI extended link service (ELS): Initiator/target to login server (0xFFFFFE) for switch fabric login.
2. ACCEPT (ELS reply): Login server to initiator/target for fabric login acceptance. The Fibre Channel ID (Port_Identifier) is assigned by the login server to the initiator/target.
3. PLOGI (ELS): Initiator/target to name server (0xFFFFFC) for port login and to establish a Fibre Channel session.
4. ACCEPT (ELS reply): Name server to initiator/target, using the Port_Identifier as D_ID, for port login acceptance.
5. SCR (ELS): Initiator/target issues the State Change Registration Command (SCRC) to the fabric controller (0xFFFFFD) for notification when a change in the switch fabric occurs.
6. ACCEPT (ELS reply): Fabric controller to initiator/target acknowledging the registration command.
7. GA_NXT (Name Server Query): Initiator to name server (0xFFFFFC) requesting attributes about a specific port.
8. ACCEPT (Query reply): Name server to initiator with a list of attributes of the requested port.
9. PLOGI (ELS): Initiator to target to establish an end-to-end Fibre Channel session (see Figure 6-30).

Figure 6-30 Fibre Channel Port/Process Login to Target

10. ACCEPT (ELS reply): Target to initiator acknowledging the initiator's port login and Fibre Channel session.
11. PRLI (ELS): Initiator to target process login requesting an FC-4 session.
12. ACCEPT (ELS reply): Target to initiator acknowledging session establishment. The initiator may now issue SCSI commands to the target.

Fibre Channel Fabric Services

In a Fibre Channel topology, the switches manage a range of information that operates the fabric. This information is managed by fabric services. All FC services have in common that they are addressed via FC-2 frames, and they can be reached by well-defined addresses.

The fabric login server processes incoming fabric login requests with the address 0xFFFFFE. The name server administers a database on N-Ports under the address 0xFFFFFC. It stores information such as port WWN, node WWN, port address, supported service classes, supported FC-4 protocols, and so on. N-Ports can register their own properties with the name server and request information on other N-Ports. Servers can use this service to monitor their storage devices. Like all services, the name server appears as an N-Port to the other ports. N-Ports must log on with the name server by means of port login before they can use its services.

The fabric controller manages changes to the fabric under the address 0xFFFFFD. N-Ports can register for state changes in the fabric controller (State Change Registration, or SCR). The fabric controller then informs registered N-Ports of changes to the fabric (Registered State Change Notification, RSCN). Table 6-8 lists the reserved Fibre Channel addresses.

Table 6-8 FC-PH Has Defined a Block of Addresses for Special Functions (Reserved Addresses)
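The well-known service addresses and the order of the login conversation can be summarized in a small Python table; the entries condense the 12-step exchange shown above.

WELL_KNOWN = {
    0xFFFFFE: "fabric login server (FLOGI)",
    0xFFFFFD: "fabric controller (SCR/RSCN)",
    0xFFFFFC: "name server (PLOGI, GA_NXT)",
}

LOGIN_SEQUENCE = [
    ("FLOGI",  0xFFFFFE),   # fabric login; FC ID assigned to the port
    ("PLOGI",  0xFFFFFC),   # port login with the name server
    ("SCR",    0xFFFFFD),   # register for state-change notifications
    ("GA_NXT", 0xFFFFFC),   # query attributes of the target port
    ("PLOGI",  None),       # port login directly to the target's FC ID
    ("PRLI",   None),       # process login establishing the FC-4 session
]

for command, dest in LOGIN_SEQUENCE:
    print(f"{command:7s} -> {WELL_KNOWN.get(dest, 'target FC ID')}")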

FC-4: ULPs—Application Protocols

The layers FC-0 to FC-3 serve to connect end devices together by means of a Fibre Channel network. However, the type of data that end devices exchange via Fibre Channel connections remains open. This is where the application protocols (Upper Layer Protocols, or ULPs) come into play. The task of the FC-4 protocol mappings is to map the application protocols onto the underlying Fibre Channel network. The protocol mappings determine how the mechanisms of Fibre Channel are used in order to realize the application protocol by means of Fibre Channel. For example, they specify which service classes will be used and how the data flow in the application protocol will be projected onto the exchange-sequence-frame mechanism of Fibre Channel. A specific Fibre Channel network can serve as a medium for several application protocols—for example, SCSI and IP.

This mapping of existing protocols aims to ease the transition to Fibre Channel networks: ideally, no further modifications are necessary to the operating system except for the installation of a new device driver. This means that the FC-4 protocol mappings support the Application Programming Interface (API) of existing protocols upward in the direction of the operating system and realize these downward in the direction of the medium via the Fibre Channel network.

The application protocol for SCSI is called Fibre Channel Protocol (FCP). FCP maps the SCSI protocol onto the underlying Fibre Channel network. (See Figure 6-31.) For the connection of storage devices to servers, the SCSI cable is therefore replaced by a Fibre Channel network. The idea of the FCP protocol is that the system administrator merely installs a new device driver on the server, and this realizes the FCP protocol. The operating system recognizes storage devices connected via Fibre Channel as SCSI devices, which it addresses like "normal" SCSI devices. This emulation of traditional SCSI devices should make it possible for Fibre Channel SANs to be simply and painlessly integrated into existing hardware and software.

Figure 6-31 FCP Protocol Stack

A further application protocol is IPFC. IPFC uses a Fibre Channel connection between two servers as a medium for IP data traffic; it defines how IP packets will be transferred via a Fibre Channel network. Fibre Connection (FICON) is a further important application protocol. FICON maps the ESCON protocol (Enterprise System Connection) used in the world of mainframes onto Fibre Channel networks. Using ESCON, it has been possible to realize storage networks in the world of mainframes since the 1990s. Fibre Channel is therefore taking the old familiar storage networks from the world of mainframes into the Open System world (UNIX, Windows, OS/400, Novell, MacOS).

Fibre Channel—Standard Port Types

Fibre Channel ports are intelligent interface points on the Fibre Channel SAN. They are found embedded in various devices, such as an I/O adapter, array or tape controller, and switched fabric. These ports understand Fibre Channel. When a server or storage device communicates, the interface point acts as the initiator or target for the connection. The server or storage device issues SCSI commands, which the interface point formats to be sent to the target device. The Fibre Channel switch recognizes which type of device is attaching to the SAN and configures the ports accordingly. Figure 6-32 portrays the various Fibre Channel port types.

Figure 6-32 Fibre Channel Standard Port Types

Expansion port (E Port): In E Port mode, an interface functions as a fabric expansion port. This port connects to another E Port to create an interswitch link (ISL) between two switches. E Ports carry frames between switches for configuration and fabric management. They also serve as a conduit between switches for frames that are destined to remote node ports (N Ports) and node loop ports (NL Ports). E Ports support class 2, class 3, and class F service.

Fabric port (F Port): In F Port mode, an interface functions as a fabric port. This port connects to a peripheral device (such as a host or disk) that operates as an N Port. An F Port can be attached to only one N Port. F Ports support class 2 and class 3 service.

Fabric loop port (FL Port): In FL Port mode, an interface functions as a fabric loop port. This port connects to one or more NL Ports (including FL Ports in other switches) to form a public FC-AL. If more than one FL Port is detected on the FC-AL during initialization, only one FL Port becomes operational; the other FL Ports enter nonparticipating mode. FL Ports support class 2 and class 3 service.

Trunking expansion port (TE Port): In TE Port mode, an interface functions as a trunking expansion port. This port connects to another TE Port to create an extended ISL (EISL) between two switches. TE Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of E Ports to support virtual SAN (VSAN) trunking, transport quality of service (QoS) parameters, and the Fibre Channel Traceroute (fctrace) feature. When an interface is in TE Port mode, all frames that are transmitted are in the EISL frame format, which contains VSAN information. Interconnected switches use the VSAN ID to multiplex traffic from one or more VSANs across the same physical link.

Node-proxy port (NP Port): An NP Port is a port on a device that is in N-Port Virtualization (NPV) mode and connects to the core switch via an F Port. NP Ports function like node ports (N Ports), but in addition to providing N Port operations, they also function as proxies for multiple physical N Ports.

Trunking fabric port (TF Port): In TF Port mode, an interface functions as a trunking fabric port. This interface connects to another trunking node port (TN Port) or trunking node-proxy port (TNP Port) to create a link between a core switch and an NPV switch or a host bus adapter (HBA) to carry tagged frames. In TF Port mode, all frames are transmitted in the EISL frame format, which contains VSAN information. TF Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of F Ports to support VSAN trunking.

TNP Port: In TNP Port mode, an interface functions as a trunking node-proxy port. This interface connects to a TF Port to create a link to a core N-Port ID Virtualization (NPIV) switch from an NPV switch to carry tagged frames.

Switched Port Analyzer (SPAN) destination port (SD Port): In SD Port mode, an interface functions as a SPAN destination port. An SD Port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD Port. SD Ports cannot receive frames and transmit only a copy of the source traffic. This feature is nonintrusive and does not affect switching of network traffic for any SPAN source port. The SPAN feature is specific to Cisco MDS 9000 Series switches.

SPAN tunnel port (ST Port): In ST Port mode, an interface functions as an entry-point port in the source switch for the Remote SPAN (RSPAN) Fibre Channel tunnel. When a port is configured as an ST Port, it cannot be attached to any device and therefore cannot be used for normal Fibre Channel traffic. ST Port mode and the RSPAN feature are specific to Cisco MDS 9000 Series switches.

Fx Port: An interface that is configured as an Fx Port can operate in either F or FL Port mode, depending on the attached N or NL Port. Fx Port mode is determined during interface initialization.

Bridge port (B Port): Whereas E Ports typically interconnect Fibre Channel switches, some SAN extender devices implement a B Port model to connect geographically dispersed fabrics. This model uses B Ports as described in the T11 Standard Fibre Channel Backbone 2 (FC-BB-2).

G-Port (Generic_Port): Modern Fibre Channel switches configure their ports automatically. Such ports are called G-Ports. If, for example, a Fibre Channel switch is connected to a further Fibre Channel switch via a G-Port, the G-Port configures itself as an E-Port.

Auto mode: An interface that is configured in auto mode can operate in F Port, FL Port, E Port, TE Port, or TF Port mode, with the port mode being determined during interface initialization.

Virtual Storage-Area Network

A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

Virtual Storage-Area Network

A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

A SAN island refers to a completely physically isolated switch or group of switches that are used to connect hosts to storage devices. The reasons for building SAN islands include isolating different applications into their own fabric or raising availability by minimizing the impact of fabric-wide disruptive events. Physically separate SAN islands offer a higher degree of security because each physical infrastructure contains a separate set of Cisco Fabric Services, management access, and configuration management capability. Unfortunately, in practice this situation can become costly and wasteful in terms of fabric ports and resources.

With the introduction of VSANs, the network administrator can build a single topology containing switches, links, and one or more VSANs. Each VSAN in this topology has the same behavior and property of a SAN. VSANs provide not only a hardware-based isolation, but also a complete replicated set of Fibre Channel services for each VSAN. Therefore, when a VSAN is created, a completely separate set of Cisco Fabric Services and policies are created within the new VSAN.

VSANs increase the efficiency of a SAN fabric by alleviating the need to build multiple physically isolated fabrics to meet organizational or application needs. Instead, fewer, less-costly redundant fabrics can be built, each housing multiple applications and still providing island-like isolation. Spare ports within the fabric can be quickly and nondisruptively assigned to existing VSANs, providing a clean method of virtually growing application-specific SAN islands. Figure 6-33 portrays a VSAN topology and a representation of VSANs on an MDS 9700 Director switch by per-port allocation.

Figure 6-33 VSANs

A VSAN has the following additional features:

Multiple VSANs can share the same physical topology.

The same Fibre Channel IDs (FC IDs) can be assigned to a host in another VSAN, thus increasing VSAN scalability.

Every instance of a VSAN runs all required protocols, such as FSPF, domain manager, and zoning.

Fabric-related configurations in one VSAN do not affect the associated traffic in another VSAN.

Events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs.

VSANs offer the following advantages:

Traffic isolation: Traffic is contained within VSAN boundaries, and devices reside in only one VSAN, ensuring absolute separation between user groups, if desired.

Scalability: VSANs are overlaid on top of a single physical fabric. The ability to create several logical VSAN layers increases the scalability of the SAN.

Per VSAN fabric services: Replication of fabric services on a per VSAN basis provides increased scalability and availability.

Redundancy: Several VSANs created on the same physical SAN ensure redundancy. If one VSAN fails, redundant protection (to another VSAN in the same physical SAN) is configured using a backup path between the host and the device.

Ease of configuration: Users can be added, moved, or changed between VSANs without changing the physical structure of a SAN. Moving a device from one VSAN to another only requires configuration at the port level, not at a physical level.

Up to 256 VSANs can be configured in a switch. Of these, one is a default VSAN (VSAN 1) and another is an isolated VSAN (VSAN 4094). User-specified VSAN IDs range from 2 to 4093.

Membership to a VSAN is based on physical ports, and no physical port may belong to more than one VSAN. Using a hardware-based frame tagging mechanism on VSAN member ports and EISL links isolates each separate virtual fabric. The EISL link type has been created and includes added tagging information for each frame within the fabric. The EISL link is supported between Cisco MDS and Nexus switch products.
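As a minimal sketch of creating a VSAN and assigning a port to it (the VSAN number, name, and interface are hypothetical; syntax can vary slightly by NX-OS release):

    switch# configure terminal
    switch(config)# vsan database
    switch(config-vsan-db)# vsan 10 name Engineering
    switch(config-vsan-db)# vsan 10 interface fc1/1
    switch(config-vsan-db)# exit

Because membership is port based, moving the host attached to fc1/1 into a different VSAN is a one-line change in the vsan database, with no physical recabling.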

availability. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between . DPVM offers flexibility and eliminates the need to reconfigure the port VSAN membership to maintain fabric topology when a host or storage device connection is moved between two Cisco MDS switches or two ports within a switch. DPVM uses the Cisco Fabric Services (CFS) infrastructure to allow efficient database management and distribution. The Cisco NX-OS software checks the database during a device FLOGI and obtains the required VSAN details. The pWWN identifies the host or device and the nWWN identifies a node consisting of multiple devices.Table 6-9 VSAN and Zone Comparison Dynamic Port VSAN Membership Port VSAN membership on the switch is assigned on a port-by-port basis. DPVM configurations are based on port world wide name (pWWN) and node world wide name (nWWN) assignments. preference is given to the pWWN. A DPVM database contains mapping information for each device pWWN/nWWN assignment and the corresponding VSAN. and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. Inter-VSAN Routing VSANs improve SAN scalability. By default each port belongs to the default VSAN. This method is referred to as Dynamic Port VSAN Membership (DPVM). You can dynamically assign VSAN membership to ports by assigning VSANs based on the device WWN. You can assign any one of these identifiers or any combination of these identifiers to configure DPVM mapping. If you assign a combination. It retains the configured VSAN regardless of where a device is connected or moved.

Inter-VSAN Routing

VSANs improve SAN scalability, availability, and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between VSANs (see Figure 6-34). Fibre Channel traffic does not flow between VSANs, nor can initiators access resources across VSANs other than the designated VSAN. Data traffic isolation between the VSANs also inherently prevents sharing of resources attached to a VSAN, such as robotic tape libraries. Using IVR, you can access resources across VSANs without compromising other VSAN benefits.

IVR transports data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. It establishes proper interconnected routes that traverse one or more VSANs across multiple switches. IVR is not limited to VSANs present on a common switch. It provides efficient business continuity or disaster recovery solutions when used with Fibre Channel over IP (FCIP).

Figure 6-34 Inter-VSAN Topology
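For illustration only, a basic IVR configuration follows this general shape (the VSAN numbers and pWWNs are hypothetical, and IVR-NAT plus automatic topology discovery are assumptions that depend on the NX-OS release and licensing in use):

    switch# configure terminal
    switch(config)# feature ivr
    switch(config)# ivr nat
    switch(config)# ivr vsan-topology auto
    switch(config)# ivr zone name TAPE_SHARING
    switch(config-ivr-zone)# member pwwn 21:00:00:e0:8b:05:55:aa vsan 10
    switch(config-ivr-zone)# member pwwn 50:06:04:82:bf:d0:54:01 vsan 20
    switch(config-ivr-zone)# exit
    switch(config)# ivr zoneset name IVR_ZS
    switch(config-ivr-zoneset)# member TAPE_SHARING
    switch(config-ivr-zoneset)# exit
    switch(config)# ivr zoneset activate name IVR_ZS

Only the two members named in the IVR zone can reach each other across VSANs 10 and 20; everything else in those VSANs remains isolated.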

Fibre Channel Zoning and LUN Masking

VSANs are used to segment the physical fabric into multiple logical fabrics. Zoning provides security within a single fabric, whether physical or logical, to restrict access between initiators and targets (see Figure 6-35). LUN masking is used to provide additional security to LUNs after an initiator has reached the target device.

Figure 6-35 VSANs Versus Zoning

The zoning service within a Fibre Channel fabric is designed to provide security between devices that share the same fabric. The primary goal is to prevent certain devices from accessing other devices within the fabric. With many types of servers and storage devices on the network, the need for security is crucial. For example, if a host were to access a disk that another host is using, potentially with a different operating system, the data on the disk could become corrupted. To avoid any compromise of crucial data within the SAN, zoning allows the user to overlay a security map, reducing the risk of data loss. This process dictates which devices, namely hosts, can see which targets.

There are two main methods of zoning, hard and soft. Soft zoning restricts only the fabric name service, to show only an allowed subset of devices. The fabric name service allows each device to query the addresses of all other devices. In this way, when a server looks at the content of the fabric, it will see only the devices it is allowed to see. However, any server can still attempt to contact any device on the network by address. Therefore, soft zoning is similar to the computing concept of security through obscurity. In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but is much more secure. More recently, the differences between the two have blurred: modern switches employ hard zoning when you implement soft zoning, and all modern SAN switches then enforce soft zoning in hardware.

You can use either the existing basic zoning capabilities or the advanced (enhanced), standards-compliant zoning capabilities. Advanced zoning capabilities specified in the FC-GS-4 and FC-SW-3 standards are provided. Cisco MDS 9000 and Cisco Nexus family switches implement hard zoning.

Zoning does have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. It is a distributed service that is common throughout the fabric. Any installed changes to a zoning configuration are therefore disruptive to the entire connected fabric. Zoning was also not designed to address availability or scalability of a Fibre Channel infrastructure.

Zoning has the following features:

A zone consists of multiple zone members. Members in a zone can access each other; members in different zones cannot access each other. Zones can vary in size. Devices can belong to more than one zone.

A zone set consists of one or more zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. A zone can be a member of more than one zone set. A Cisco MDS switch can have a maximum of 500 zone sets.

Zoning can be administered from any switch in the fabric. When you activate a zone (from any switch), all switches in the fabric receive the active zone set. Additionally, full zone sets are distributed to all switches in the fabric, if this feature is enabled in the source switch. If a new switch is added to an existing fabric, zone sets are acquired by the new switch.

Zone changes can be configured nondisruptively. New zones and zone sets can be activated without interrupting traffic on unaffected ports or devices.

Zone membership criteria is based mainly on WWNs or FC IDs:

Port world wide name (pWWN): Specifies the pWWN of an N port attached to the switch as a member of the zone.

Fabric pWWN: Specifies the WWN of the fabric port (switch port's WWN). This membership is also referred to as port-based zoning.

FC ID: Specifies the FC ID of an N port attached to the switch as a member of the zone.

Interface and switch WWN (sWWN): Specifies the interface of a switch identified by the sWWN. This membership is also referred to as interface-based zoning.

Interface and domain ID: Specifies the interface of a switch identified by the domain ID.

Domain ID and port number: Specifies the domain ID of an MDS domain and additionally specifies a port belonging to a non-Cisco switch.

IPv4 address: Specifies the IPv4 address (and optionally the subnet mask) of an attached device.

IPv6 address: Specifies the IPv6 address of an attached device in 128 bits in colon-separated hexadecimal format.

Symbolic nodename: Specifies the member symbolic node name. The maximum length is 240 characters.
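The following minimal sketch defines a pWWN-based zone, places it in a zone set, and activates the zone set (all names and WWNs are hypothetical):

    switch# configure terminal
    switch(config)# zone name HOST1_ARRAY1 vsan 10
    switch(config-zone)# member pwwn 21:00:00:e0:8b:05:55:aa
    switch(config-zone)# member pwwn 50:06:04:82:bf:d0:54:01
    switch(config-zone)# exit
    switch(config)# zoneset name FABRIC_A vsan 10
    switch(config-zoneset)# member HOST1_ARRAY1
    switch(config-zoneset)# exit
    switch(config)# zoneset activate name FABRIC_A vsan 10

Activating the zone set from this switch distributes the active zone set to every switch in VSAN 10, as described above.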

Default zone membership includes all ports or WWNs that do not have a specific membership association. Access between default zone members is controlled by the default zone policy. If zoning is not activated, all devices are members of the default zone. If zoning is activated, any device that is not in an active zone (a zone that is part of an active zone set) is a member of the default zone.

The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities and the enhanced zoning functionalities. Both features are explained in detail in Chapter 9, "Fibre Channel Storage Area Networking."

The following guidelines must be considered when creating zone members:

Configuring only one initiator and one target for a zone provides the most efficient use of the switch resources; a single initiator with a single target is the most efficient approach to zoning.

Configuring the same initiator to multiple targets is accepted.

Configuring multiple initiators to multiple targets is not recommended. All members of a zone can communicate with each other, so for a zone with N members, N × (N – 1) access permissions need to be enabled. This type of configuration wastes switch resources by provisioning and managing many communicating pairs (initiator-to-initiator or target-to-target) that will never actually communicate with each other. The best practice is to avoid configuring large numbers of targets or large numbers of initiators in a single zone.

Each SCSI host uses a number of primary SCSI commands to identify its target, discover LUNs, and obtain their size. These are the commands:

Identify: Which device are you?

Report LUNs: How many LUNs are behind this storage array port?

Report capacity: What is the capacity of each LUN?

Request sense: Is a LUN online and available?

It is important to ensure that only one host accesses each LUN on the storage array at a time, unless the hosts are configured in a cluster. Many LUNs can be visible at once: LUNs can be advertised on many storage ports and discovered by several hosts at the same time. As the host mounts each LUN volume, it writes a signature at the start of the LUN to claim exclusive access. If a second host should discover and try to mount the same LUN volume, it overwrites the previous signature.

LUN masking is the most common method of ensuring LUN security. LUN masking determines which LUNs are to be advertised through which storage array ports, and which host is allowed to own which LUNs. LUN masking ensures that only one host can access a LUN; all other hosts are masked out. LUN masking is essentially a mapping table inside the front-end array controllers.

An alternative method of LUN security is LUN mapping. If LUN masking is unavailable in the storage array, LUN mapping can be used to ensure that only one host at a time can access each LUN, although both methods can be used concurrently. LUN mapping is configured in the HBA of each host, but it is the responsibility of the administrator to configure each HBA so that each host has exclusive access to its LUNs. When there are many hosts, mistakes are more likely to be made and more than one host might access the same LUN by accident.

Device Aliases Versus Zone-Based (FC) Aliases

When the port WWN (pWWN) of a device must be specified to configure different features (zoning, QoS, port security) in a Cisco MDS 9000 family switch, you must assign the correct device name each time you configure these features. An incorrect device name may cause unexpected results. You can avoid this problem if you define a user-friendly name for a port WWN and use this name in all of the configuration commands as required. These user-friendly names are referred to as device aliases.

Device alias supports two modes: basic and enhanced. When you configure the basic mode using device aliases, the application immediately expands to pWWNs. This operation continues until the mode is changed to enhanced. When device alias runs in the enhanced mode, all applications accept the device-alias configuration in the native format. The applications store the device alias name in the configuration and distribute it in the device alias format instead of expanding to pWWN. The applications track the device alias database changes and take actions to enforce it.
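A minimal device-alias sketch (the alias name and pWWN are hypothetical):

    switch# configure terminal
    switch(config)# device-alias mode enhanced
    switch(config)# device-alias database
    switch(config-device-alias-db)# device-alias name Host1_HBA0 pwwn 21:00:00:e0:8b:05:55:aa
    switch(config-device-alias-db)# exit
    switch(config)# device-alias commit

After the commit, Host1_HBA0 can be used in place of the raw pWWN in zoning and other commands, and the mapping is distributed fabric-wide through CFS.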

Table 6-10 lists the different characteristics of zone aliases and device aliases.

Table 6-10 Comparison Between Zone Aliases and Device Aliases

Reference List

Storage Network Industry Association (SNIA): http://www.snia.org
SNIA dictionary: http://www.snia.org/education/dictionary
Internet Engineering Task Force—IP Storage: http://www.ietf.org/html.charters/ips-charter.html
ANSI T11—Fibre Channel: http://www.t11.org/index.htm
ANSI T10 Technical Committee: www.T10.org
IEEE Standards site: http://standards.ieee.org
Farley, Marc. Rethinking Enterprise Storage: A Hybrid Cloud Model. Microsoft Press, 2013.
Kembel, Robert W. Fibre Channel: A Comprehensive Introduction. Northwest Learning Associates, Inc., 2000.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 6-11 lists a reference for these key topics and the page numbers on which each is found.



Table 6-11 Key Topics for Chapter 6

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

virtual storage-area network (VSAN)
zoning
fan-out ratio
fan-in ratio
logical unit number (LUN) masking
LUN mapping
Extended Link Services (ELS)
multipathing
Big Data
IoE
N_Port Login (PLOGI)
F_Port Login (FLOGI)
logout (LOGO)
process login (PRLI)
process logout (PRLO)
state change notification (SCN)
registered state change notification (RSCN)
state change registration (SCR)
inter-VSAN routing (IVR)
Fibre Channel Generic Services (FC-GS)
hard disk drive (HDD)
solid-state drive (SSD)
Small Computer System Interface (SCSI)
initiator
target
American National Standards Institute (ANSI)
International Committee for Information Technology Standards (INCITS)
Fibre Connection (FICON)
T11
Input/Output Operations per Second (IOPS)
Common Internet File System (CIFS)
Network File System (NFS)
Cisco Fabric Services (CFS)
Top-of-Rack (ToR)
End-of-Row (EoR)
Dynamic Port VSAN Membership (DPVM)

Fabric Shortest Path First (FSPF)

Chapter 7. Cisco MDS Product Family

This chapter covers the following exam topics:

Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management

Evolving business requirements highlight the need for high-density, high-speed, and low-latency networks that improve data center scalability and manageability while controlling IT costs. The Cisco MDS 9000 family provides the leading high-density, high-bandwidth storage networking solution along with integrated fabric applications to support dynamic data center requirements. A quick look at the storage-area network (SAN) marketplace and the track record of delivery reveals that Cisco is the leader among SAN providers in delivering compatible architectures and designs that can adapt to future changes in technology.

One major benefit of the Cisco MDS 9000 family architecture is investment protection: the capability of first-, second-, and third-generation modules to all coexist in both existing customer chassis and new switch configurations. With the addition of the fourth-generation modules, the Cisco MDS 9000 family of storage networking products now supports 1-, 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel along with being Fibre Channel over Ethernet (FCoE) ready, providing true investment protection.

This chapter goes directly into the key concepts of the Cisco Storage Networking product family and discusses topics relevant to the "Introducing Cisco Data Center Technologies" (DCICT) certification. It discusses the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. In this chapter, the focus is on the current generation of modules and storage services solutions.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 7-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 7-1 "Do I Know This Already?" Section-to-Question Mapping

Caution: The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco SAN switches are director models?
a. MDS 9148S
b. MDS 9706
c. MDS 9222i
d. MDS 9704

2. How many crossbar switching fabric modules exist on a Cisco MDS 9706 switch?
a. 8
b. 6
c. 4
d. 2

3. Which of the following features are NOT supported by Cisco MDS switches?
a. Multiprotocol support
b. High availability for fabric modules
c. SAN extension
d. onePK API
e. 32 Gbps Fibre Channel

4. What is head-of-line blocking?
a. It is used for high availability for fabric modules.
b. It is used for a forwarding mechanism in Cisco MDS switches.
c. It is an authentication mechanism for Fibre Channel frames.
d. Congestion on one port causes blocking of traffic to other ports.

5. Which of the following MDS switches are qualified for IBM Fiber Connection (FICON)?
a. MDS 9506
b. MDS 9520
c. MDS 9709
d. MDS 9513

6. Which of the following Cisco MDS fabric switches support Fibre Channel over IP (FCIP) and I/O Accelerator (IOA)?

a. MDS 9250i
b. MDS 9222i
c. MDS 9148S
d. MDS 9148

7. Which protocol or protocols does the Cisco MDS 9250i fabric switch NOT support?
a. Fibre Channel over Ethernet
b. IP over Fibre Channel
c. Fibre Channel over IP
d. Virtual Extensible LAN
e. IBM Fiber Connection

8. What is N-port identifier virtualization (NPIV) in the Fibre Channel world?
a. It is a new Fibre Channel login method that is used for storage arrays.
b. It is an industry standard that allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link.
c. It is an industry standard that reduces the number of Fibre Channel domain IDs in core-edge SANs.
d. It is a high-availability feature that allows managing the Fibre Channel traffic load.

9. What is Cisco Data Mobility Manager (DMM)?
a. It is a mainframe-based replication solution.
b. It is an intelligent fabric application that provides Fibre Channel link encryption.
c. It is an intelligent fabric application that does data migration between heterogeneous disk arrays.
d. It is an intelligent fabric application that provides SCSI acceleration.

10. Which of the following options are fundamental components of Cisco Prime DCNM SAN?
a. DCNM-SAN Server
b. DCNM-SAN Client
c. Performance Manager
d. DCNM Data Mobility Manager
e. DCNM traffic accelerator

Foundation Topics

Once Upon a Time There Was a Company Called Andiamo

In August 2002 Cisco Systems officially entered the storage networking business with the spin-in acquisition of Andiamo Systems. Andiamo means "Let's go" in Italian, and Cisco entered the storage business by saying "Andiamo." The MDS 9000 Multilayer Director switch family is composed of fabric and director-class Fibre Channel switches. These devices are capable of deploying highly available and scalable storage-area networks (SANs) for the most demanding data center environments.

Figure 7-1 portrays the available Cisco MDS 9000 switches (at the time of writing).

Figure 7-1 Cisco MDS 9000 Product Family

In December 2002, Cisco set the benchmark for high-performance Fibre Channel Director switches with the release of the Cisco MDS 9509 Multilayer Director. Offering a maximum system density of 224 1/2-Gbps Fibre Channel ports, first-generation Cisco MDS 9000 family line cards connected to dual crossbar switch fabrics with an aggregate of 80 Gbps full-duplex bandwidth per slot, resulting in a total system switching capacity of 1.44 Tbps.

In 2006, Cisco again set the benchmark for high-performance Fibre Channel Director switches with the introduction of second-generation line cards, crossbars (supervisors), and the flagship Cisco MDS 9513 Multilayer Director, increasing interface bandwidth per slot to 96 Gbps full duplex and resulting in a total system switching capacity of 2.2 Tbps. With the introduction of second-generation line cards, system density has grown to a maximum of 528 Fibre Channel ports operating at 1/2/4 Gbps, or 44 10-Gbps Fibre Channel ports.

At the time of writing this chapter, the available line cards for MDS reached fourth and fifth generation, so in this chapter the focus will not be on first-, second-, and third-generation line cards; the earlier generation line cards already became end of sale (EOS). On the other hand, the earlier generation line cards can still work on MDS 9500 Director switches together with the latest line cards.

The Cisco approach to Fibre Channel storage-area networking reflected its strong heritage in data networking, namely the concept of building a modular chassis that could be upgraded to incorporate future services without ripping and replacing the original chassis. Following a 10-year run with the initial MDS platform, in 2012 Cisco introduced its next generation of Fibre Channel platforms, starting with the MDS 9710 Multilayer Director and MDS 9250i Multiservice Fabric Switch. These new MDS solutions provide the following:

Enhanced availability: Built on the experience from the MDS 9500 and the Nexus product line, the MDS 9710 provides enhanced reliability, including N+1 fabric modules and three fan trays with front-to-back airflow. The N+1 concept allows eight power supplies and up to six fabric modules.

High-density, high-performance storage director: To handle the rapid scale accompanied by data center consolidation, the MDS 9710 crossbar and central arbiter design enable up to 1.5 Tbps of FC throughput per slot. This is coupled with the 384 ports of line-rate 16 Gbps FC.

Multiprotocol support: The MDS 9710 Multilayer Director supports a range of Fibre Channel speeds, 10 Gbps Fibre Channel over IP (FCIP), IBM Fiber Connection (FICON, for mainframe connectivity), and 10 Gbps Fibre Channel over Ethernet (FCoE). Being considered for future development are 32 Gbps Fibre Channel and 40 Gbps FCoE. The MDS 9250i Multiservice Fabric Switch accommodates 16 Gbps FC, 10 Gbps FCoE, 10 Gbps FCIP, or Internet Small Computer System Interface (iSCSI) and FICON. The MDS 9250i has been designed to help accelerate SAN extension, tape backup, storage, and disk replication, as well as migrate data between arrays or data centers.

Comprehensive and simplified management of the entire data center network: Organizations looking to simplify management and integrate previously disparate networking teams, storage, and data should find the Cisco Prime Data Center Network Manager (DCNM) appealing. From a single management tool, they can manage Cisco MDS and Nexus platforms. The latter allows for tighter integration with the server and network admins. A single resource should help to accelerate troubleshooting with easy-to-understand dynamic topology views and the ability to monitor the health of all the devices and filter on events that occur. Organizations should also be able to leverage and report on the data captured and plan for additional capacity more effectively. DCNM is also integrated with VMware vSphere to provide a more comprehensive VM-to-storage visibility.

Future proofing: The MDS 9710 will support the Cisco Open Network Environment (ONE) Platform Kit (onePK) API to enable future services access and programmability, essentially extending the concept of Software Defined Networks (SDN) into the SAN.

The Secret Sauce of Cisco MDS: Architecture

All Cisco MDS 9500 Series Director switches are based on the same underlying crossbar architecture. Frame forwarding logic is distributed in application-specific integrated circuits (ASIC) on the line cards themselves, resulting in a distributed forwarding architecture. All frame forwarding is performed in dedicated forwarding hardware and distributed across all line cards in the system. There is never any requirement for frames to be forwarded to the supervisor for centralized forwarding, nor is there ever any requirement for forwarding to drop back into software for processing.

Forwarding of frames on the Cisco MDS always follows the same processing sequence:

1. Starting with frame error checking on ingress, the frame is tagged with its ingress port and virtual storage-area network (VSAN).

2. Input access control lists (ACL) are used to enforce hard zoning.

3. A lookup is issued to the forwarding and frame routing tables to determine where to switch the frame to and whether to route the frame to a new VSAN and/or rewrite the source/destination of the frame if Inter VSAN Routing (IVR) is being used. IVR is an MDS NX-OS software feature that is similar to Network Address Translation (NAT) in the IP world. It is used for Fibre Channel NAT, enabling the switch to handle duplicate addresses within different fabrics separated by VSANs.

4. If there are multiple equal-cost paths for a given destination device, the switch will choose an egress port based on the load-balancing policy configured for that particular VSAN.

5. If the destination port is congested or busy, the frame will be queued on the ingress line card for delivery.

6. When a frame is scheduled for transmission, it is transferred across the crossbar switch fabric to the egress line card, where there is a further output access control list (ACL), the VSAN tag used internal to the switch is stripped from the front of the frame, and the frame is transmitted out the output port.

To ensure fairness between ports, frame switching uses a technique called virtual output queuing (VOQ), making use of QoS policy maps to determine the order in which frames are scheduled. Combined with centralized arbitration, this ensures that when multiple input ports on multiple line cards are contending for a congested output port, there is complete fairness between ports regardless of port location within the switch. It also ensures that congestion on one port does not result in blocking of traffic to other ports (head-of-line blocking). In addition, if priority is desired for a given set of traffic flows, QoS and CoS can be used to give priority to those traffic flows.
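The per-VSAN load-balancing policy mentioned in step 4 is configurable. A minimal sketch (the VSAN number is hypothetical):

    switch# configure terminal
    switch(config)# vsan database
    switch(config-vsan-db)# vsan 10 loadbalancing src-dst-ox-id

The src-dst-ox-id option hashes on source address, destination address, and exchange ID, spreading exchanges across equal-cost paths; src-dst-id hashes on only the source and destination addresses, keeping all traffic between a given pair of devices on a single path.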

From a switch architecture perspective, crossbar capacity is provided to each line card slot as a small number of high-bandwidth channels. Depending on the MDS Director switch model, the number of crossbar switch fabrics that are installed might change. One active channel and one standby channel are used on each of the crossbar switch fabrics, resulting in both crossbars being active (1:1 redundancy). In the event of a crossbar failure, the standby channel on the remaining crossbar becomes active, resulting in an identical amount of active crossbar switching capacity regardless of whether the system is operating with one or two crossbar switch fabrics. This provides significant flexibility for line card module design and architecture and makes it possible to build both performance-optimized (nonblocking, non-oversubscribed) line cards and host-optimized (nonblocking, oversubscribed), multiprotocol (Small Computer System Interface over IP [iSCSI] and Fibre Channel over IP [FCIP]), and intelligent (storage services) line cards. Figure 7-2 portrays the central arbitration in MDS 9000 switches.

Figure 7-2 Crossbar Switch with Central Arbitration

A Day in the Life of a Fibre Channel Frame in MDS 9000

All Cisco MDS 9000 family line cards use the same process for frame forwarding. The various frame-processing stages are outlined in detail in this section (see Figure 7-3).

Figure 7-3 Cisco MDS 9000 Series Switches Packet Flow

1. Ingress PHY/MAC Modules: Fibre Channel frames enter the switch front-panel optical Fibre Channel ports through Small Form-Factor Pluggable (SFP) optics for Fibre Channel interfaces or X2 transceiver optical modules for 10 Gbps Fibre Channel interfaces. Each front-panel port has an associated physical layer (PHY) device port and a media access controller (MAC). On ingress, the PHY converts optical signals received into electrical signals, sending the electrical stream of bits into the MAC. The primary function of the MAC is to decipher Fibre Channel frames from the incoming bit stream by identifying start-of-frame (SoF) and end-of-frame (EoF) markers in the bit stream, as well as other Fibre Channel signaling primitives.

Along with frames being received, the MAC communicates with the forwarding and queuing modules and issues return buffer-credits (R_RDY primitives) to the sending device to recredit the received frames. Return buffer credits are issued if the ingress-queuing buffers have not exceeded an internal threshold. If ingress rate limiting has been enabled, return buffer credits are paced to match the configured throughput on the port. The MAC also checks that the received frame contains no errors by validating its cyclic redundancy check (CRC). Before forwarding the frame, the MAC prepends an internal switch header onto the frame, providing the forwarding module with details such as ingress port, type of port, ingress VSAN, frame QoS markings (if the ingress port is set to trust the sender, blank if not), and a timestamp of when the frame entered the switch. The concept of an internal switch header is an important architectural element of what enables the multiprotocol and multitransport capabilities of Cisco MDS 9000 family switches.

2. Ingress Forwarding Module: The ingress forwarding module determines which output port on the switch to send the ingress frame to, based on the input port, VSAN, and the destination address within the Fibre Channel frame. When a frame is received by the ingress forwarding module, three simultaneous lookups are initiated, followed by forwarding logic based on the result of those lookups:

The first of these is a per-VSAN forwarding table lookup determined by VSAN and the destination address. The result from this lookup tells the forwarding module where to switch the frame to (egress port). If the frame has multiple possible egress ports (for example, if there are multiple equal-cost Fabric Shortest Path First [FSPF] routes or the destination is a Cisco port channel bundle), a load-balancing decision is made to choose a single physical egress interface from a set of interfaces. The load-balancing policy (and algorithm) can be configured on a per-VSAN basis to be either a hash of the source and destination addresses (S_ID, D_ID) or a hash also based on the exchange ID (S_ID, D_ID, OX_ID) of the frame. In this manner, all frames within the same flow (either between a single source and a single destination or within a single SCSI I/O operation) will always be forwarded on the same physical path, guaranteeing in-order delivery. This table lookup also indicates whether there is a requirement for any Inter VSAN Routing (IVR—in simple terms, Fibre Channel NAT). If traffic from a given source address to a given destination address is marked for IVR, the final forwarding step is to rewrite the VSAN ID and optionally the source and destination addresses (S_ID, D_ID) of the frame. If the lookup fails, the frame is dropped because there is no destination to forward to.

The second lookup is a statistics lookup. The switch uses this to maintain a series of statistics about what devices are communicating between one another. Statistics gathered include frame and byte counters from a given source to a given destination.

The third lookup is a per-VSAN ingress ACL lookup determined by VSAN, ingress port, source address, destination address, and a variety of other fields from the interswitch header and Fibre Channel frame header. The switch uses the result from this lookup to either permit the frame to be forwarded, drop the frame, or perform any additional inspection on the frame. This also forms the basis for the Cisco implementation of hard zoning within Fibre Channel.

3. Queuing Module: The queuing module is responsible for scheduling the flow of frames through the switch (see Figure 7-4).
drop the frame. Statistics gathered include frame and byte counters from a given source to a given destination.

head-of-line blocking is generally solved by “dropping” frames. Figure 7-5 shows a congested destination. All the packets sent from the source device to all the destination points are held up because the destination point A cannot accept any packet. So head-ofline blocking is less of an issue in the Ethernet world.Figure 7-4 Queuing Module Logic It uses distributed virtual output queuing (VOQ) to prevent head-of-line blocking and is the functional block responsible for implementing QoS and fairness between frames contending for a congested output port. Figure 7-5 Head-of-Line Blocking . which in this picture is point A. In the Fibre Channel world. In the Ethernet world. Point A is causing a head-of-line blocking. if there is congestion the ingress port is blocked. Note Head-of-line blocking is the congestion of one port resulting in blocking traffic to other ports.

the DWRR algorithm keeps track of this excess usage and subtracts it from the queue’s next transmission bandwidth allocation. however. Frames enter the queuing module and are queued in a VOQ based on the output port to which the frame is due to be sent (see Figure 7-6). and 20 for the low queue. The default weights are 50 for the high queue. 30 for the medium queue. corresponding to four QoS levels per port: one priority queue and three queues using deficit-weighted round robin (DWRR) scheduling. Figure 7-7 portrays the distributed virtual output queuing. these can be set to any ratio. Figure 7-6 Virtual Output Queues Four virtual output queues are available for each output port. It monitors utilization in the transmit queues. DWRR is a scheduling algorithm that is used for congestion management in the network scheduler. granting requests to transmit from ingress forwarding .The queuing module is also responsible for providing frame buffering for ingress queuing if an output interface is congested. Central Arbitration Module: The central arbiters are responsible for scheduling frame transmission from ingress to egress. Frames queued in the priority queue will be scheduled before traffic in any of the DWRR queues. If a queue uses excess bandwidth. Unicast Fibre Channel frames (frames with only a single destination) are queued in one virtual output queue. Multicast Fibre Channel frames are replicated across multiple virtual output queues for each output port to which a multicast frame is destined to be forwarded. Figure 7-7 Distributed Virtual Output Queuing 4a. Frames queued in the three DWRR queues are scheduled using whatever relative weightings are configured.

) The over speed helps ensure that the crossbar remains nonblocking even while sustaining peak traffic levels.modules whenever there is an empty slot in the output port buffers. If the primary arbiter fails. (Figure 7-8 portrays the crossbar switch architecture. Another feature to maximize crossbar performance under peak frame rate is a feature called superframing. with the redundant arbiter snooping all requests issued at and grants offered by the primary active arbiter. in the event of congestion on an output port. Congestion does not result in head-of-line blocking because queuing at ingress utilizes virtual output queues. the originating line card will issue a request to the central arbiters asking for a scheduling slot on the output port of the output line card. that is. the arbiter will grant transmission to the output port in a fair (round-robin) manner. multiple frames can be transferred back to back across the crossbar. the redundant arbiter can resume operation without any loss of frames or performance or suffer any delays caused by switchover. Crossbar Switch Fabric Module: Dual redundant crossbar switch fabrics are used on all Director switches to provide low-latency. Superframing improves the crossbar efficiency in the situation where there are multiple frames queued originating from one line card with a common destination. Anytime a frame is to be transferred from ingress to egress. the active central arbiter will respond to the request with a grant. Two redundant central arbiters are available on each Cisco MDS Director switch. Because switch-wide arbitration is operating in a centralized manner. it operates internally at a rate three times faster than its endpoints. and nonoversubscribed switching capacity between line card modules. They are capable of scheduling more than a billion frames per second. In this manner. . and each QoS level for each output port has its own virtual output queue. This also helps ensure that any blocking does not spread throughout the system. If there is an empty slot in the output port buffer. Rather than having to arbitrate each frame independently. the arbiter will wait until there is a free buffer. fairness is also guaranteed. nonblocking. any congestion on an output port will result in distributed buffering across ingress ports and line cards. The crossbar itself operates three times over speed. 4b. high-throughput. If there is no free buffer space.

Figure 7-9 portrays it in a flow diagram. if the frame requires IVR. Figure 7-9 Egress Frame Forwarding Logic Before the frame arrives. and outbound QoS queuing. and enforce some additional ACL checks (logical unit number [LUN] zoning and read-only zoning if configured). Egress Forwarding Module: The primary role of the egress forwarding module is to perform some additional frame checks. the egress-forwarding module will issue an ACL table lookup to see if any . There is also additional Inter VSAN routing IVR frame processing. If the frame is valid. the egress-forwarding module has signaled the central arbiter that there is output buffer space available for receiving frames. make sure that the Fibre Channel frame is valid. the first processing step is to validate that the frame is error free and has a valid CRC. When a frame arrives at the egressforwarding module from the crossbar switch fabric.Figure 7-8 Centralized Crossbar Architecture Crossbar switch fabric modules depend on clock modules for crossbar channel synchronization. 5.

Figure 7-10 portrays how the in-order delivery frames are delivered in order. The next processing step is to finalize any Fibre Channel frame header rewrites associated with IVR. slow WAN links subject to packet loss are combined with a number of Fibre Channel sources.additional ACLs need to be applied in permitting or denying the frame to flow to its destination. ACLs applied on egress include LUN zoning and read-only zoning ACLs. Under normal operating conditions. Egress PHY/MAC Modules: Frames arriving into the egress PHY/MAC module first have their switch internal header removed. 6. Note that 500 milliseconds is an extremely long time to buffer a frame and typically occurs only when long. If the output port is a Trunked E_Port (TE_Port). Dropping of expired frames is in accordance with Fibre Channel standards and sets an upper limit for how long a frame can be queued within a single switch. an Enhanced ISL (EISL) header is prepended to the frame. it is very unlikely for any frames to be expired. The default drop timer is 500 milliseconds and can be tuned using the fcdroplatency switch configuration setting. the frame is queued for transmission to the destination port MAC with queuing on a per-CoS basis matching the DWRR queuing and configured QoS policy map. and if the frame has been queued within the switch for too long it is dropped. . the frame time stamp is checked. Next. Finally.

shifting traffic from what might have been a congested link to one that is no longer congested. the frames are transmitted onto the cable of the output port. Unfortunately. This guarantee extends across an entire multiswitch fabric. provided the fabric is stable and no topology changes are occurring. The outbound MAC is responsible for formatting the frame into the correct encoding over the cable. when a topology change occurs (for example. and inserting other Fibre Channel primitives over the cable. In this case. . a link failure occurs).Figure 7-10 In-Order Delivery Finally. inserting SoF and EoF markers. frames might be delivered out of order. Reordering of frames can occur when routing changes are instantiated. newer frames transmitted down a new non-congested path might pass older frames queued in the previous congested path. Lossless Networkwide In-Order Delivery Guarantee The Cisco MDS switch architecture guarantees that frames can never be reordered within a switch.

Common Software Across All Platforms Cisco MDS 9000 NX-OS runs on all Cisco MDS 9000 family switches and Cisco Nexus family of data center Ethernet switches. Multiprotocol Support In addition to supporting Fibre Channel Protocol (FCP). Native iSCSI support in the Cisco MDS 9000 family helps customers consolidate storage for a wide range of servers into a common pool on the SAN. VSANs facilitate true hardware-based separation of FICON and open systems. isolated environments to improve Fibre Channel SAN scalability. manageability. The strict traffic segregation provided by VSANs helps ensure that the control and data traffic of a given VSAN are confined within the VSAN’s own domain. and standard Linux core. This partitioning of fabric services greatly reduces network instability by containing fabric reconfiguration settings and error conditions within an individual VSAN. Cisco MDS Software and Storage Services The foundation of the data center network is the network software that runs the switches in the network. This mechanism is a little crude because it guarantees that an ISL failure or topology change will be a disruptive event to the entire fabric. all Fibre Channel switch vendors have a fabricwide configuration setting called a networkwide in-order delivery (network-IOD) guarantee. starting from release 4. No new forwarding decisions are made during the time it takes for traffic to flush from existing interface queues. FCoE. Virtual SANs Virtual SAN (VSAN) technology partitions a single physical SAN into multiple VSANs. stable. providing a modular and sustainable base for the long term. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. Small Computer System Interface over IP (iSCSI). and Fibre Channel over IP (FCIP) in a single platform. It is based on a secure. For mainframe environments.To protect against this. Users can create SAN administrator roles that are limited in scope . availability. Each VSAN is a logically and functionally separate SAN with its own set of Fibre Channel fabric services. Cisco MDS 9000 NX-OS supports IBM Fibre Connection (FICON). Formerly known as Cisco SAN-OS. Flexibility and Scalability Cisco MDS 9000 NX-OS is a highly flexible and scalable platform for enterprise SANs. all SAN switch vendors default to disable network-IOD except where its use is mandated by channel protocols such as IBM Fiber Connection (FICON). This stops the forwarding of new frames when there has been an ISL failure or a topology change. VSAN capabilities enable Cisco MDS 9000 NX-OS to logically divide a large physical fabric into separate. The general idea here is that it is better to stop all traffic for a period of time than to allow the possibility for some traffic to arrive out of order.1 it is rebranded as Cisco MDS 9000 NX-OS Software. Because of this. and security.

For example. data acceleration. F-port trunking allows multiple VSANs on a single uplink in N-port virtualization (NPV) mode. extending VSANs to include devices at remote locations. Inter-VSAN (IVR) routing is used to transport data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. nor can initiators access any resources aside from the ones designated with IVR. Cisco DCNM-SAN is used to administer Cisco DMM. provide flexibility and meet the high-availability requirements of enterprise data centers. This approach improves the manageability of large SANs and reduces disruptions resulting from human errors by isolating the effect of a SAN administrator’s action to a specific VSAN whose membership can be isolated based on the switch ports or World Wide Names (WWN) of attached devices. The Cisco MDS 9000 family also implements trunking for VSANs. Cisco DMM offers rate-adjusted online migration to enable applications to continue uninterrupted operation while data migration is in progress. snapshots. no additional management software is required. and replication on Cisco MDS 9000 family switches. Intelligent Fabric Applications Cisco MDS 9000 NX-OS forms a solid foundation for delivery of network-based storage applications and services such as virtualization. Cisco Data Mobility Manager Cisco Data Mobility Manager (DMM) is a SAN-based. and multipath supports. Advanced capabilities. such as data verification. The feature also extends the distance for . Easy insertion and provisioning of appliance-based storage applications is achieved by removing the need to insert the appliance into the primary I/O path between servers and storage. Valuable resources such as tape libraries can be easily shared without compromise. IVR also can be used with FCIP to create more efficient business-continuity and disaster-recovery solutions. data migration.to certain VSANs. Cisco DMM is transparent to host applications and storage devices. VSANs are supported across FCIP links between SANs. a SAN administrator role can be set up to allow configuration of all platform-specific capabilities. Cisco DMM can be introduced without the need to rewire or reconfigure the SAN. and other roles can be set up to allow configuration and management within specific VSANs only. Network-Assisted Applications Cisco MDS 9000 family network-assisted storage applications offer deployment flexibility and investment protection by allowing the deployment of appliance-based storage applications for any server or storage device in the SAN without the need to rewire SAN switches or end devices. unequal-size logical unit number (LUN) migration. intelligent fabric application offering data migration between heterogeneous disk arrays. Fibre Channel control traffic does not flow between VSANs. I/O Accelerator The Cisco MDS 9000 I/O Accelerator (IOA) feature is a SAN-based intelligent fabric application that provides SCSI acceleration to significantly improve the number of SCSI I/O operations per second (IOPS) over long distances in a Fibre Channel or FCIP SAN by reducing the effect of transport latency on the processing of each operation. Trunking allows Inter-Switch Links (ISL) to carry traffic for multiple VSANs on the same physical link. continuous data protection.

XRC Acceleration IBM Extended Remote Copy (XRC). IOA includes the following features: Transport independence: IOA provides a unified solution to accelerate I/O operations over the MAN and WAN. The new Cisco MDS 9000 XRC Acceleration feature supports essentially unlimited distances. Write acceleration: IOA provides write acceleration for Fibre Channel and FCIP networks. XRC Acceleration accelerates dynamic updates from the primary to the secondary direct-access storage device (DASD) by reading ahead of the remote replication IBM System z. known as the System Data Mover (SDM). In the past. This process is sometimes referred to as XRC emulation or XRC extension. reducing or eliminating the latency effects that can otherwise reduce performance at distances of 124 miles (200 km) or greater. Network Security Cisco takes a comprehensive approach to network security with Cisco MDS 9000 NX-OS. Compression: Compression in IOA increases the effective MAN and WAN bandwidth without the need for costly infrastructure upgrades. Transparent insertion: IOA requires no fabric reconfiguration or rewiring and can be transparently turned on by enabling the IOA license. High availability and resiliency: IOA combines port channels and equal-cost multipath (ECMP) routing with disk and tape acceleration for higher availability and resiliency. Integration of data compression into IOA enables implementation of more efficient Fibre Channel. In . IOA can also be used to enable remote tape backup and restore operations without significant throughput degradation. Write acceleration significantly reduces latency and extends the distance for disk replication.disaster-recovery and business-continuity applications over WANs and metropolitan-area networks (MAN). Speed independence: IOA can accelerate 1/2/4/8/10/16-Gbps links and consolidate traffic over 8/10/16-Gbps ISLs. EMC MirrorView. Intuitive provisioning: IOA can be easily provisioned using Cisco DCNM-SAN. IOA as a fabric service: IOA service units (interfaces) can be located anywhere in the fabric and can provide acceleration service to any port. This data is buffered within the Cisco MDS 9000 family module that is local to the SDM. Cisco has supported XRC over FCIP at distances of up to 124 miles (200 km). Tape acceleration improves the performance of tape devices and enables remote tape vaulting over extended distances for data backup for disaster-recovery purposes. Tape acceleration: IOA provides tape acceleration for Fibre Channel and FCIP networks. and HDS TrueCopy to extend the distance between data centers or reduce the effects of latency. is a mainframe-based software replication solution in widespread use in financial institutions worldwide. IOA can be deployed with disk data-replication solutions such as EMC Symmetrix Remote Data Facility (SRDF).and FCIP-based business-continuity and disaster-recovery solutions without the need to add or manage a separate device. which is now officially renamed IBM z/OS Global Mirror. Service clustering: IOA delivers redundancy and load balancing for I/O acceleration.

If authentication fails. Cisco TrustSec Fibre Channel Link Encryption Cisco TrustSec Fibre Channel Link Encryption addresses customer needs for data integrity and privacy. and accounting (AAA). The encryption algorithm is 128-bit Advanced Encryption Standard (AES) and enables either AES-Galois/Counter Mode (AESGCM) or AES-Galois Message Authentication Code (AES-GMAC) for an interface. and data integrity for both FCIP and iSCSI connections on the Cisco MDS 9000 family switches. authorization. Other customers. Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) is used to perform authentication locally in the Cisco MDS 9000 family or remotely through RADIUS or TACACS+. Encryption is performed at line rate by encapsulating frames at egress with encryption using the GCM mode of AES 128-bit encryption. IP Security for FCIP and iSCSI Traffic flowing outside the data center must be protected. Starting with Cisco MDS 9000 NX-OS 4. Fibre Channel data between E-ports of Cisco MDS 9000 8-Gbps Fibre Channel switching modules can be encrypted. such as those in defense and intelligence services. Switch and Host Authentication Fibre Channel Security Protocol (FC-SP) capabilities in Cisco MDS 9000 NX-OS provide switchto-switch and host-to-switch authentication for enterprisewide fabrics. Cisco MDS 9000 family management has been certified for Federal Information Processing Standards (FIPS) 140-2 Level 2 and validated for Common Criteria (CC) Evaluation Assurance Level 3 (EAL 3). Cisco MDS 9000 NX-OS offers other significant security features. and AES-GMAC authenticates only the frames that are being passed between the two E-ports. a switch or host cannot join the fabric. Role-Based Access Control . or dense wavelength-division multiplexing (DWDM). AES-GCM encrypts and authenticates frames. There are two primary use cases for Cisco TrustSec Fibre Channel Link Encryption: Many customers will want to help ensure the privacy and integrity of any data that leaves the secure confines of their data center through a native Fibre Channel link. The proven IETF standard IP Security (IPsec) capabilities in Cisco MDS 9000 NX-OS offer secure authentication.2(1). which provide true isolation of SAN-attached devices. may be even more security focused and choose to encrypt all traffic within their data center as well. because the encryption is at full line rate with no performance penalty. such as role-based access control (RBAC) and Cisco TrustSec Fibre Channel Link Encryption. A future release of Cisco NX-OS software will enable encryption of Fibre Channel data between Eports of Cisco MDS 9000 16-Gbps Fibre Channel switching modules. coarse wavelength-division multiplexing (CWDM).addition to VSANs. and supports the industry-standard security protocol for authentication. Cisco TrustSec Fibre Channel Link Encryption is an extension of the FC-SP feature and uses the existing FC-SP architecture. data encryption for privacy. At ingress. frames are decrypted and authenticated with integrity check. Cisco MDS 9000 NX-OS uses Internet Key Exchange Version 1 (IKEv1) and IKEv2 protocols to dynamically set up security associations for IPsec using preshared keys for remote-side authentication. such as dark fiber.

Role-Based Access Control
Cisco MDS 9000 NX-OS provides RBAC for management access to the Cisco MDS 9000 family command-line interface (CLI) and Simple Network Management Protocol (SNMP). CLI and SNMP users and passwords are shared, and only a single administrative account is required for each user. In addition to the two default roles on the switch, up to 64 user-defined roles can be configured. The roles describe the access-control policies for various feature-specific commands on one or more VSANs. Applications using SNMP Version 3 (SNMPv3), such as Cisco DCNM-SAN, offer full RBAC for switch features managed using this protocol.

Port Security and Fabric Binding
Port security locks the mapping of an entity to a switch port. The entities can be hosts, targets, or switches, identified by their WWNs. This locking mechanism helps ensure that unauthorized devices connecting to the switch port do not disrupt the SAN fabric. Fabric binding extends port security to allow ISLs between only specified switches.

Zoning
Zoning provides access control for devices within a SAN. Cisco MDS 9000 NX-OS supports the following types of zoning:

N-port zoning: Defines zone members based on the end-device (host and storage) port WWN or Fibre Channel identifier (FC-ID)
Fx-port zoning: Defines zone members based on the switch port WWN, WWN plus interface index or domain ID plus interface index, domain ID plus port number (for Brocade interoperability), fabric port WWN, or interface plus domain ID or plus switch WWN of the interface
iSCSI zoning: Defines zone members based on the host zone iSCSI name or IP address

To provide strict network security, zoning is always enforced per frame using access control lists (ACLs) that are applied at the ingress switch. All zoning policies are enforced in hardware, and none of them cause performance degradation. Enhanced zoning session-management capabilities further enhance security by allowing only one user at a time to modify zones. The zoning feature complies with the FC-GS-4 and FC-SW-3 standards; both standards support the basic and enhanced zoning functionalities. In addition, the software provides support for smart zoning. Smart zoning enables customers to intuitively create fewer multimember zones at an application level, a physical cluster level, or a virtualized cluster level without sacrificing switch resources, instead of creating multiple two-member zones, which dramatically reduces operational complexity while consuming fewer resources on the switch. A configuration sketch follows.
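The following is a minimal smart-zoning sketch on Cisco MDS NX-OS. The VSAN number, device-alias names, and zone names are hypothetical, and the member syntax should be verified for your NX-OS release:

switch(config)# zone smart-zoning enable vsan 10
switch(config)# zone name APP1 vsan 10
switch(config-zone)# member device-alias HOST1 init
switch(config-zone)# member device-alias ARRAY1 target
switch(config)# zoneset name FABRIC_A vsan 10
switch(config-zoneset)# member APP1
switch(config)# zoneset activate name FABRIC_A vsan 10

Because the members are tagged as initiator or target, the switch programs ACL entries only for initiator-to-target pairs rather than a full mesh among all members, which is how smart zoning conserves switch resources.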

Availability
Cisco MDS 9000 NX-OS provides resilient software architecture for mission-critical hardware deployments.

Nondisruptive Software Upgrades
Cisco MDS 9000 NX-OS provides nondisruptive software upgrades for director-class products with redundant hardware and fabric switches. Minimally disruptive upgrades are provided for the other Cisco MDS 9000 family fabric switches that do not have redundant supervisor engine hardware.

Stateful Process Failover
Cisco MDS 9000 NX-OS automatically restarts failed software processes and provides stateful supervisor engine failover to help ensure that any hardware or software failures on the control plane do not disrupt traffic flow in the fabric.

ISL Resiliency Using Port Channels
Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E-ports), trunking E-ports (TE-ports), or F-ports connected to NP-ports can be bundled into a port channel. ISL ports can reside on any switching module, and they do not need a designated master port. Thus, if a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration. Cisco MDS 9000 NX-OS uses a protocol to exchange port channel configuration information between adjacent switches to simplify port channel management, including misconfiguration detection and autocreation of port channels among compatible ISLs. In the autoconfigure mode, ISLs with compatible parameters automatically form channel groups; no manual intervention is required. A configuration sketch follows the Port Tracking section below.

Fibre Channel, FCIP, iSCSI, and Management-Interface High Availability
The Virtual Router Redundancy Protocol (VRRP) increases the availability of Cisco MDS 9000 family management traffic routed over both Ethernet and Fibre Channel networks. VRRP dynamically manages redundant paths for external Cisco MDS 9000 family management applications, making control-traffic path failures transparent to applications. VRRP also increases IP network availability for iSCSI and FCIP connections by allowing failover of connections from one port to another. This feature facilitates the failover of an iSCSI volume from one IP services port to any other IP services port, either locally or on another Cisco MDS 9000 family switch. The autotrespass feature enables high-availability iSCSI connections to RAID subsystems, independent of host software. Trespass commands can be sent automatically when Cisco MDS 9000 NX-OS detects failures on active paths.

Port Tracking for Resilient SAN Extension
The Cisco MDS 9000 NX-OS port-tracking feature enhances SAN extension resiliency. If a Cisco MDS 9000 family switch detects a WAN or MAN link failure, it takes down the associated disk-array link when port tracking is configured, so the array can redirect a failed I/O operation to another link without waiting for an I/O timeout. Otherwise, disk arrays must wait seconds for an I/O timeout to recover from a network link failure.
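Here is the minimal port channel sketch promised in the ISL Resiliency section above, using hypothetical interface and channel numbers; syntax should be checked against the release documentation:

switch(config)# interface fc1/1-2
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown
switch(config)# interface port-channel 10
switch(config-if)# switchport mode E
switch(config-if)# channel mode active

With channel mode active, the switch uses the port channel protocol exchange mentioned above to detect misconfiguration against the neighbor before bundling the links.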

Manageability
Cisco MDS 9000 NX-OS incorporates many management features that facilitate effective management of growing storage environments with existing resources. Cisco fabric services simplify SAN provisioning by automatically distributing configuration information to all switches in a storage network. Distributed device alias services provide fabricwide alias names for host bus adapters (HBAs), storage devices, and switch ports, eliminating the need to reenter names when devices are moved. Fabric Device Management Interface (FDMI) capabilities provided by Cisco MDS 9000 NX-OS simplify management of devices such as Fibre Channel HBAs through in-band communications. With FDMI, management applications can gather HBA and host OS information without the need to install proprietary host agents.

Management interfaces supported by Cisco MDS 9000 NX-OS include the following:

CLI through a serial port or out-of-band (OOB) Ethernet management port, and in-band IP over Fibre Channel (IPFC)
SNMPv1, v2, and v3 over an OOB management port and in-band IPFC
Cisco FICON Control Unit Port (CUP) for in-band management from IBM S/390 and z/900 processors
IPv6 support for iSCSI, FCIP, and management traffic routed in band and out of band

Open APIs
Cisco MDS 9000 NX-OS provides a truly open API for the Cisco MDS 9000 family based on the industry-standard SNMP. Commands performed on the switches by Cisco DCNM-SAN use this open API extensively. Also, all major storage and network management software vendors use the Cisco MDS 9000 NX-OS management API. The Cisco DCNM-SAN Storage Management Initiative Specification (SMI-S) server provides an XML interface with an embedded agent that complies with the Web-Based Enterprise Management (WBEM), Common Information Model (CIM), and SMI-S standards.

Configuration and Software-Image Management
The CiscoWorks solution is a commonly used suite of tools for a wide range of Cisco devices such as IP switches, routers, servers, and wireless devices. The Cisco MDS 9000 NX-OS open API allows the CiscoWorks Resource Manager Essentials (RME) application to provide centralized Cisco MDS 9000 family configuration management, software-image management, intelligent system log (syslog) management, and inventory management. The open API also helps CiscoWorks Device Fault Manager (DFM) monitor Cisco MDS 9000 family device health, such as supervisor memory and processor utilization. CiscoWorks DFM can also monitor the health of important components such as fans, power supplies, and temperature.

N-Port Virtualization
Cisco MDS 9000 NX-OS supports industry-standard N-port identifier virtualization (NPIV), which allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link. HBAs that support NPIV can help improve SAN security by enabling configuration of zoning and port security independently for each virtual machine (OS partition) on a host.
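A minimal NPIV and FDMI illustration on Cisco MDS NX-OS follows; the feature is enabled fabric-side with a single command, and the show commands are read-only:

switch(config)# feature npiv
switch# show flogi database
switch# show fdmi database

After feature npiv is enabled, multiple N-port logins appear against a single F-port in the FLOGI database, and show fdmi database displays the HBA model, driver, and host OS details registered in band, as described above.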

In addition, NPIV is beneficial for connectivity between core and edge SAN switches. NPV is a complementary feature that reduces the number of Fibre Channel domain IDs in core-edge SANs. Cisco MDS 9000 family fabric switches operating in the NPV mode do not join a fabric; they just pass traffic between core switch links and end devices, which eliminates the domain IDs for these switches. NPIV is used by edge switches in the NPV mode to log in to multiple end devices that share a link to the core switch. This feature is available only for Cisco MDS 9000 family blade switches and the Cisco MDS 9100 Series Multilayer Fabric Switches.

FlexAttach
Cisco MDS 9000 NX-OS supports the FlexAttach feature. One of the main problems faced today in SAN environments is the time and effort required to install and replace servers. The process involves both SAN and server administrators, and the interaction and coordination between them can make the process time consuming. To alleviate the need for interaction between SAN and server administrators, the SAN configuration should not have to be changed when a new server is installed or an existing server is replaced. FlexAttach addresses these problems, reducing the number of configuration changes and the time and coordination required by SAN and server administrators when installing and replacing servers. Cisco MDS 9000 NX-OS 6.2(1) supports the FlexAttach feature on the Cisco MDS 9148 Multilayer Fabric Switch when the NPV mode is enabled.

Cisco Data Center Network Manager Server Federation
Cisco DCNM server federation improves management availability and scalability by load balancing fabric-discovery, performance-monitoring, and event-handling processes. Up to 10 Cisco DCNM instances can form a federation (or cluster) that can manage more than 75,000 end devices. A storage administrator can discover and move fabrics within a federation for the purposes of load balancing, high availability, and disaster recovery. Cisco DCNM provides a single management pane for viewing and managing all fabrics within a single federation. In addition, users can connect to any Cisco DCNM instance and view all reports, statistics, inventory, and logs from a single web browser.

Internet Storage Name Service
The Internet Storage Name Service (iSNS) helps existing TCP/IP networks function more effectively as SANs by automating discovery, management, and configuration of iSCSI devices. iSCSI targets presented by Cisco MDS 9000 family IP storage services and Fibre Channel device-state-change notifications are registered by Cisco MDS 9000 NX-OS, either through the highly available, distributed iSNS built in to Cisco MDS 9000 NX-OS or through external iSNS servers.

Network Boot for iSCSI Hosts
Cisco MDS 9000 NX-OS simplifies iSCSI-attached host management by providing network-boot capability.

Autolearn for Network Security Configuration
The autolearn feature allows the Cisco MDS 9000 family to automatically learn about devices and switches that connect to it. The administrator can use this feature to configure and activate network security features such as port security without having to manually configure the security for each port.
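As a sketch of the autolearn workflow for port security (the VSAN number is hypothetical; verify syntax for your release):

switch(config)# feature port-security
switch(config)# port-security activate vsan 10
switch# show port-security database vsan 10

Activating port security with auto-learning in effect lets the switch record the WWNs currently logged in on each port into the active database; the administrator can then save the learned entries and disable learning, locking the port-to-device mappings without typing each binding by hand.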

Proxy iSCSI Initiator
The proxy iSCSI initiator simplifies configuration procedures when multiple iSCSI initiators (hosts) are assigned to the same iSCSI target ports. Proxy mode reduces the number of separate times that back-end tasks such as Fibre Channel zoning and storage-device configuration must be performed.

iSCSI Server Load Balancing
Cisco MDS 9000 NX-OS helps simplify large-scale deployment and management of iSCSI servers. iSCSI server load balancing (iSLB) is available to automatically redirect servers to the next available Gigabit Ethernet port. In addition to allowing fabric-wide iSCSI configuration from a single switch, iSLB greatly simplifies iSCSI configuration and provides automatic, rapid recovery from IP connectivity problems for high availability.

IPv6
Cisco MDS 9000 NX-OS provides IPv6 support for FCIP, iSCSI, and management traffic routed in-band and out-of-band. A complete dual stack has been implemented for IPv4 and IPv6 to remain compatible with the large base of IPv4-compatible hosts, routers, and Cisco MDS 9000 family switches running previous software revisions. This dual-stack approach allows the Cisco MDS 9000 family switches to easily connect to older IP networks, transitional networks with a mixture of both versions, and pure IPv6 data networks.

Traffic Management
In addition to implementing the Fabric Shortest Path First (FSPF) Protocol to calculate the best path between two switches and providing in-order delivery features, Cisco MDS 9000 NX-OS enhances the architecture of the Cisco MDS 9000 family with several advanced traffic-management features that help ensure consistent performance of the SAN under varying load conditions.

Extended Credits
Full line-rate Fibre Channel ports provide at least 255 buffer credits as standard. Adding credits lengthens distances for Fibre Channel SAN extension. Using extended credits, up to 4095 buffer credits from a pool of more than 6000 buffer credits for a module can be allocated to ports as needed to greatly extend the distance of Fibre Channel SANs. A configuration sketch appears after the Virtual Output Queuing section below.

Quality of Service
Four distinct quality-of-service (QoS) priority levels are available: three for Fibre Channel data traffic and one for Fibre Channel control traffic. Control traffic is assigned the highest QoS priority automatically, to accelerate convergence of fabric-wide protocols such as FSPF, zone merges, and principal switch selection. Fibre Channel data traffic for latency-sensitive applications can be configured to receive higher priority than throughput-intensive applications using data QoS priority levels. Data traffic can be classified for QoS by the VSAN identifier, zone, N-port WWN, or FC-ID. Zone-based QoS helps simplify configuration and administration by using the familiar zoning concept.

Virtual Output Queuing
Virtual output queuing (VOQ) buffers Fibre Channel traffic at the ingress port to eliminate head-of-line blocking. The switch is designed so that the presence of a slow N-port on the SAN does not affect the performance of any other port on the SAN.
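Here is the extended-credits sketch referenced above. The interface and credit values are hypothetical, extended credits require the Enterprise package, and the exact command syntax varies by platform and release, so verify it against the configuration guide:

switch(config)# fcrxbbcredit extended enable
switch(config)# interface fc2/1
switch(config-if)# switchport fcrxbbcredit extended 4095

A common rule of thumb is that a long-distance link needs roughly one buffer credit per kilometer at 2 Gbps (and proportionally more at higher speeds) to keep the link continuously full, which is why multi-thousand-credit allocations matter for long-haul SAN extension.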

Fibre Channel Port Rate Limiting
The Fibre Channel port rate-limiting feature for the Cisco MDS 9100 Series Multilayer Fabric Switches controls the amount of bandwidth available to individual Fibre Channel ports within groups of four host-optimized ports. Limiting bandwidth on one or more Fibre Channel ports allows the other ports in the group to receive a greater share of the available bandwidth under high-use conditions. Port rate limiting is also beneficial for throttling WAN traffic at the source to help eliminate excessive buffering in Fibre Channel and IP data network devices.

Load Balancing of Port Channel Traffic
Port channels load balance Fibre Channel traffic using a hash of the source FC-ID and destination FC-ID, and optionally the exchange ID. Load balancing using port channels is performed over both Fibre Channel and FCIP links. Cisco MDS 9000 NX-OS also can be configured to load balance across multiple same-cost FSPF routes.

iSCSI and SAN Extension Performance Enhancements
iSCSI and FCIP enhancements address out-of-order delivery problems, optimize transfer sizes for the IP network topology, and reduce latency by eliminating TCP connection setup for most data transfers. FCIP performance is further enhanced for SAN extension by compression and write acceleration. For WAN performance optimization, Cisco MDS 9000 NX-OS includes a SAN extension tuner, which directs SCSI I/O commands to a specific virtual target and reports IOPS and I/O latency results, helping determine the number of concurrent I/O operations needed to increase FCIP throughput.

FCIP Compression
FCIP compression in Cisco MDS 9000 NX-OS increases the effective WAN bandwidth without the need for costly infrastructure upgrades. Gigabit Ethernet ports for the Cisco MDS 9222i Multiservice Modular Switch (MMS), MDS 9000 18/4-Port Multiservice Module (MSM), and MDS 9000 16-Port Storage Services Node (SSN) achieve up to a 43:1 compression ratio, with typical ratios of 4:1 over a variety of data sources. A future release of Cisco NX-OS software will support similar compression ratios on the 10 Gigabit Ethernet FCIP ports on the new Cisco MDS 9250i, the next-generation intelligent services switch in the Cisco MDS 9200 Series Multiservice Switch portfolio. By integrating data compression into the Cisco MDS 9000 family, more efficient FCIP-based business-continuity and disaster-recovery solutions can be implemented without the need to add and manage a separate device.

FCIP Tape Acceleration
Centralization of tape backup and archive operations provides significant cost savings by allowing expensive robotic tape libraries and high-speed drives to be shared. This centralization poses a challenge for remote backup media servers that need to transfer data across a WAN. High-performance streaming tape drives require a continuous flow of data to avoid write-data underruns, which dramatically reduce write throughput. Without FCIP tape acceleration, the effective WAN throughput for remote tape operations decreases exponentially as the WAN latency increases. FCIP tape acceleration helps achieve nearly full throughput over WAN links for remote tape-backup operations for both open systems and mainframe environments.
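The following is a minimal FCIP sketch that enables the compression and write/tape acceleration features discussed above. The IP addresses, profile, and interface numbers are hypothetical, and keyword availability varies by module and release:

switch(config)# feature fcip
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.0.2.1
switch(config)# interface fcip 10
switch(config-if)# use-profile 10
switch(config-if)# peer-info ipaddr 192.0.2.2
switch(config-if)# ip-compression auto
switch(config-if)# write-accelerator tape-accelerator
switch(config-if)# no shutdown

Write acceleration works by proxying the SCSI Transfer Ready locally, so the initiator can stream data without waiting a full WAN round trip per write; tape acceleration extends the same idea to the streaming-tape command sequence.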

Serviceability, Troubleshooting, and Diagnostics
Cisco MDS 9000 NX-OS is among the first storage network OSs to provide a broad set of serviceability features that simplify the process of building, expanding, and maintaining SANs. These features also increase availability by decreasing SAN disruptions for maintenance and reducing recovery time from problems. In addition, Cisco MDS 9000 NX-OS can shut down malfunctioning ports if errors exceed user-defined thresholds. This capability helps isolate problems and reduce risks by preventing errors from spreading to the whole fabric.

Cisco Switched Port Analyzer and Cisco Fabric Analyzer
The Cisco Switched Port Analyzer (SPAN) feature allows an administrator to analyze all traffic between ports (called the SPAN source ports) by nonintrusively directing the SPAN-session traffic to a SPAN destination port that has an external analyzer attached to it. SPAN sources can include Fibre Channel ports and FCIP and iSCSI virtual ports for IP services; any Fibre Channel port in the fabric can be a source. The SPAN destination port does not have to be on the same switch as the SPAN source ports. Typically, debugging errors in a Fibre Channel SAN requires the use of a Fibre Channel analyzer, which causes significant disruption of traffic in the SAN. The embedded Cisco Fabric Analyzer allows the Cisco MDS 9000 family to save Fibre Channel control traffic within the switch for text-based analysis, or to send IP-encapsulated Fibre Channel control traffic to a remote PC for decoding and display using the open source Ethereal network-analyzer application. Fibre Channel control traffic therefore can be captured and analyzed without an expensive Fibre Channel analyzer.

Fibre Channel Ping and Fibre Channel Traceroute
Cisco MDS 9000 NX-OS brings to storage networks features such as Fibre Channel ping and Fibre Channel traceroute. With Fibre Channel ping, administrators can check the connectivity of an N-port and determine its round-trip latency. With Fibre Channel traceroute, administrators can check the reachability of a switch by tracing the path followed by frames and determining hop-by-hop latency.

Call Home
Cisco MDS 9000 NX-OS offers a Call Home feature for proactive fault management. Cisco Call Home provides a notification system triggered by software and hardware events. The Call Home feature forwards the alarms and events, packaged with other relevant information in a standard format, to external entities. External entities can include, but are not restricted to, an administrator's e-mail account or pager, a server in-house or at a service provider's facility, and the Cisco Technical Assistance Center (TAC). These notification messages can be used to automatically open technical-assistance tickets and resolve problems before they become critical. Alert grouping capabilities and customizable destination profiles offer the flexibility needed to notify specific individuals or support organizations only when necessary.

SCSI Flow Statistics
LUN-level SCSI flow statistics can be collected for any combination of initiator and target. The scope of these statistics includes read, write, and control commands and error statistics. This feature is available only on the Cisco MDS 9000 family storage service modules.
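A short sketch of the Fibre Channel ping, traceroute, and SPAN tools described above; the WWN, FC-ID, VSAN, and interface numbers are hypothetical:

switch# fcping pwwn 50:06:01:60:3b:e0:4a:55 vsan 10
switch# fctrace fcid 0xef0010 vsan 10
switch(config)# interface fc1/16
switch(config-if)# switchport mode SD
switch(config)# span session 1
switch(config-span)# destination interface fc1/16
switch(config-span)# source interface fc1/1 rx

The SD (SPAN destination) port mode dedicates fc1/16 to the attached analyzer; the session then copies receive-direction traffic from fc1/1 without disturbing the production flow.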

System Log
The Cisco MDS 9000 family syslog capabilities greatly enhance debugging and management, facilitating logging and display of messages ranging from brief summaries to very detailed information for debugging. Syslog severity levels can be set individually for all Cisco MDS 9000 NX-OS functions, so messages ranging from only high-severity events to detailed debugging messages can be logged. Messages are logged internally, and they can be sent to external syslog servers. Messages can be selectively routed to a console and to log files. Syslog provides a comprehensive supplemental source of information for managing Cisco MDS 9000 family switches.

Other Serviceability Features
Additional serviceability features include the following:

Online diagnostics: Cisco MDS 9000 NX-OS provides advanced online diagnostics capabilities. Starting with Cisco MDS 9000 NX-OS 6.2, the powerful Cisco Generic Online Diagnostics (GOLD) framework replaces the Cisco Online Health Management System (OHMS) diagnostic framework on the new Cisco MDS 9700 Series Multilayer Directors. Cisco GOLD is a suite of diagnostic facilities for verifying that hardware and internal data paths are operating as designed. Boot-time diagnostics, continuous monitoring, standby fabric loopback tests, and on-demand and scheduled tests are part of the Cisco GOLD feature set. Periodically, tests are run to verify that supervisor engines, switching modules, and interconnections are functioning properly. These online diagnostics do not adversely affect normal Fibre Channel operations, allowing them to be run in production SAN environments.
Loopback testing: The Cisco MDS 9000 family uses offline port loopback testing to check port capabilities. During testing, a port is isolated from the external connection, and traffic is looped internally from the transmit path back to the receive path.
Network Time Protocol (NTP) support: NTP synchronizes system clocks in the fabric, providing a precise time base for all switches. An NTP server must be accessible from the fabric through the OOB Ethernet port. Within the fabric, NTP messages are transported using IPFC.
IPFC: The Cisco MDS 9000 family provides the capability to carry IP packets over a Fibre Channel network. With this feature, an external management station attached through an OOB management port to a Cisco MDS 9000 family switch in the fabric can manage all other switches in the fabric using the in-band IPFC Protocol.
Cisco Embedded Event Manager (EEM): Cisco EEM is a powerful device and system management technology integrated into Cisco NX-OS. Cisco EEM helps customers harness the network intelligence intrinsic to the Cisco software and enables them to customize behavior based on network events as they happen.
Enhanced event logging and reporting with SNMP traps and syslog: Cisco MDS 9000 family events filtering and remote monitoring (RMON) provide complete and exceptionally flexible control over SNMP traps. Traps can be generated based on a threshold value, switch counters, optics, or time stamps.
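A minimal sketch of the logging and NTP features above; the server addresses are hypothetical placeholders, and facility names should be verified for your release:

switch(config)# ntp server 192.0.2.10
switch(config)# logging server 192.0.2.20 6
switch(config)# logging level port 5

Here the value 6 caps messages forwarded to the remote syslog server at informational severity, while logging level port 5 limits the port facility to notifications and above; this per-facility severity tuning is the behavior described in the System Log section.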

Cisco MDS 9000 Family Software License Packages
Most Cisco MDS 9000 family software features are included in the standard package: the base configuration of the switch. However, some features are logically grouped into add-on packages that must be licensed separately, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, IOA Package, DMM Package, and XRC Acceleration Package. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series Multilayer Fabric Switches.

Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. Licenses can also be installed on the switch in the factory, if desired. Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. All licensed features may be evaluated for up to 120 days before a license must be purchased. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled; the grace period allows plenty of time to correct the licensing issue.

Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing. Table 7-2 lists the Cisco MDS 9700 family licenses, and Table 7-3 lists the Cisco MDS 9500, 9200, and 9100 family licenses.
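Installing an electronic license is a single operation; for example (the license filename is a hypothetical placeholder):

switch# install license bootflash:enterprise.lic
switch# show license usage

show license usage also reports evaluation and grace-period timers, which is how an administrator spots a feature running unlicensed before it is disabled.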

Table 7-2 Cisco MDS 9700 Family Licenses


Table 7-3 Cisco MDS 9500, 9200, and 9100 Family Licenses

Cisco MDS Multilayer Directors
Cisco MDS 9500 and 9700 Series Multilayer Directors are storage area network (SAN) switches designed for deployment in large, scalable enterprise clouds. They share the same operating system and management interface with Cisco Nexus data center switches. They enable seamless deployment of unified fabrics with high-performance Fibre Channel, FICON, and FCoE connectivity to achieve low total cost of ownership (TCO). Figure 7-11 portrays the Cisco MDS 9000 Director switches since year 2002, and Table 7-4 explains the models.

Figure 7-11 Cisco MDS 9000 Director Switches

Table 7-4 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Cisco MDS 9700
The Cisco MDS 9700 is available in 10-slot and 6-slot configurations. It supports 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel and FCoE. The Cisco MDS 9710 also supports 10 Gigabit Ethernet clocked optics carrying Fibre Channel traffic. The MDS 9710 supports up to 384 2/4/8-Gbps or 4/8/16-Gbps autosensing line-rate Fibre Channel ports in a single chassis and up to 1152 Fibre Channel ports per rack, with up to 24 terabits per second (Tbps) of Fibre Channel system bandwidth. The multilayer architecture of the Cisco MDS 9700 series enables a consistent feature set over a protocol-independent switch fabric.

The Cisco MDS 9710 architecture, based on central arbitration and crossbar fabric, provides 16-Gbps line-rate, non-blocking, predictable performance across all traffic conditions for every port in the chassis. MDS 9710 and MDS 9706 switches transparently integrate Fibre Channel, FCoE, and FICON. They support multihop FCoE, extending connectivity from FCoE and Fibre Channel fabrics to FCoE and Fibre Channel storage devices. Figure 7-12 portrays the Cisco MDS 9700 Director switches and their modular components.

Figure 7-12 Cisco MDS 9710 Director Switch

The Cisco MDS 9700 series switches enable redundancy on all major components, including the fabric card. They provide grid redundancy on the power supply and 1+1 redundant supervisors. The Cisco MDS 9710 Director has a 10-slot chassis, and it supports the following:

Two Supervisor-1 modules, which reside in slots 5 and 6.
Eight power supply bays, located at the front of the chassis. The power supplies are redundant by default, and they can be configured to be combined if desired.
Three fan trays at the back of the chassis. It has three hot-swappable rear-panel fan trays with redundant individual fans. Fabric modules are located behind the fan trays.
Six crossbar-switching modules (MDS 9710 and MDS 9706 share this design), located at the rear of the chassis; the fabric modules are numbered 1–6 from left to right when facing the rear of the chassis. Users can add an additional fabric card to enable N+1 fabric redundancy.
The combination of the 16-Gbps Fibre Channel switching module and the Fabric-1 crossbar switching modules enables up to 1.5 Tbps of front-panel Fibre Channel throughput between modules in each direction for each of the eight Cisco MDS 9710 payload slots. This per-slot bandwidth is twice the bandwidth needed to support a 48-port 16-Gbps Fibre Channel module at full line rate.

When the system is running, remove only one fan tray at a time to access the appropriate fabric modules: Fabric Modules 1–2 are behind fan tray 1, Fabric Modules 3–4 are behind fan tray 2, and Fabric Modules 5–6 are behind fan tray 3.

MDS 9710 and MDS 9706 have front-to-back airflow. Air enters through perforations in the line cards, supervisor modules, and power supplies, and exits through perforations in the fan trays. Blank panels must be installed in any empty line card or PSU slots to ensure proper airflow. Fabric modules may be installed in any slot; the best practice is one behind each fan tray. Figure 7-13 portrays the Cisco MDS 9710 Director switch details.

Figure 7-13 Cisco MDS 9710 Director Switch

The Cisco MDS 9706 Director switch has a 6-slot chassis, and it supports the following:

Two Supervisor-1 modules, which reside in slots 3 and 4.
Four power supply bays, located at the front of the chassis. The power supplies are redundant by default, and they can be configured to be combined if desired.
Three fan trays at the back of the chassis. It has three hot-swappable rear-panel fan trays with redundant individual fans.

Figure 7-14 portrays the Cisco MDS 9706 Director switch chassis details.

Figure 7-14 Cisco MDS 9706 Director Switch

Cisco MDS 9500
The Cisco MDS 9500 is available in 6- and 13-slot configurations. MDS 9500 supports up to 528 1/2/4/8-Gbps autosensing Fibre Channel ports in a single chassis and up to 1584 Fibre Channel ports per rack. It supports 1-, 2-, 4-, 8-, and 10-Gbps Fibre Channel, 10-Gbps FCoE, 1/2/4/8-Gbps and 10-Gbps FICON, 1-Gbps Fibre Channel over IP (FCIP), and 1-Gbps Small Computer System Interface over IP (iSCSI) port speeds. The Cisco MDS 9513 supports advanced FICON services, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization (NPIV) for mainframe Linux partitions. Inclusion of Cisco Control Unit Port (CUP) support enables in-band management of Cisco MDS 9000 family switches from mainframe management applications. Figure 7-15 portrays the Cisco MDS 9500 Director switch details.

cisco. It continues to be supported at the time of writing this chapter. redundant power supplies. and power supplies. and slots 7 and 8 are for Supervisor-2A modules only. where slots 1 to 6 and slots 9 to 13 are reserved for switching and services modules only. A variable-speed fan tray with 15 individual fans is located on the front-left panel of the chassis. It has one hot-swappable front panel fan tray with redundant individual fans. The Cisco MDS 9513 Director is a 13-slot Fibre Channel switch. and failure of one or more individual fans causes all other fans to speed up to compensate. Two crossbar modules are located at the rear of the chassis. The Cisco MDS 9513 Director supports the following: Two Supervisor-2A modules reside in slots 7 and 8. The front panel consists of 13 horizontal slots. A side-mounted fan tray provides active cooling. The power supplies are redundant by default and they can be configured to be combined if desired. and it supports the following: . All Cisco MDS 9500 Series Director switches are fully redundant with no single point of failure. The system has dual supervisors. and a variable-speed fan tray with two individual fans located above the crossbar modules. external crossbar modules.Figure 7-15 Cisco MDS 9500 Director Switch Note The Cisco MDS 9509 has reached End of Life. for more information. All fans are variable speed. and employs majority signal voting wherever 1:1 redundancy would not be appropriate. Two hot-swappable clock modules are located at the rear of the chassis. The rear of the chassis supports two vertical. clock modules. One hot-swappable fan module for the crossbar modules is located at the rear of the chassis. Modules exist on both sides of the plane. End of Sale status. The Cisco MDS 9506 Director has a 6-slot chassis. crossbar switch fabrics. the fan tray still provides sufficient cooling to keep the switch operational. two vertical. two clock modules. redundant.com. Two power supplies are located at the rear of the chassis. www. Although there is a single fan tray. In the unlikely event of multiple simultaneous fan failures. The Cisco MDS 9513 Director uses a midplane. Please refer to the Cisco website. there are multiple redundant fans with dual fan controllers.

Up to two Supervisor-2A modules that provide a switching fabric, with a console port, COM1 port, and a MGMT 10/100 Ethernet port on each module. Slots 5 and 6 are reserved for the supervisor modules.
The first four slots, which are for optional modules that can include up to four switching modules or three IPS modules.
Two power supplies, located in the back of the chassis. The power supplies are redundant by default and can be configured to be combined if desired. Two power entry modules (PEM) are in the front of the chassis for easy access to power supply connectors and switches.
One hot-swappable fan module with redundant fans.

Table 7-5 compares the major details of the Cisco MDS 9500 and 9700 Series Director switch models.



Table 7-5 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Note
The Cisco Fabric Module 2, Gen-3 1/2/4/8 Gbps modules, and 4-port 10 Gbps FC Module have reached End of Life, End of Sale status. These modules are still supported at the time of writing this chapter, but we will not be covering them in this book. Refer to the Cisco website, www.cisco.com, for more information.

Cisco MDS 9000 Series Multilayer Components
Several modules are available for the modular switches in the Cisco MDS 9000 Series product range. This topic describes the capacities of the Cisco MDS 9000 Series switching and supervisor modules.

Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
The Cisco MDS 9500 Series Supervisor-2A Module incorporates an integrated crossbar switching fabric. Designed to integrate multiprotocol switching and routing, intelligent SAN services, and storage applications onto highly scalable SAN switching platforms, the Cisco MDS 9500 Series Supervisor-2A Module enables intelligent, resilient, scalable, and secure high-performance multilayer SAN switching solutions. Supporting up to 528 Fibre Channel ports in a single chassis and 1584 Fibre Channel ports in a single rack, it provides 1.4 terabits per second (Tbps) of internal system bandwidth when combined with a Cisco MDS 9506 or MDS 9509 Multilayer Director chassis, or up to 2.2 Tbps of internal system bandwidth when installed in the Cisco MDS 9513 Multilayer Director chassis.

The Cisco MDS 9500 Series directors include two supervisor modules: a primary module and a redundant module. The modules occupy two slots in the Cisco MDS 9500 Series chassis, with the remaining slots available for switching modules. The active/standby configuration of the supervisor modules allows support of nondisruptive software upgrades. The supervisor module also supports stateful process restarts, allowing recovery from most process errors without a reset of the supervisor module and with no disruption to traffic.

Cisco MDS 9000 family supervisor modules provide two essential switch functions: they house the control-plane processors that are used to manage the switch and keep the switch operational, and they house the crossbar switch fabrics and central arbiters used to provide data-plane connectivity between all the line cards in the chassis. Besides providing crossbar switch fabric capacity, the supervisor modules do not handle any frame forwarding or frame processing. All frame forwarding and processing is handled within the distributed forwarding ASICs on the line cards themselves.

The Cisco MDS 9000 arbitrated local switching feature provides line-rate switching across all ports on the same module without degrading performance or increasing latency for traffic to and from other modules in the chassis. This capability is achieved through the Cisco MDS 9500 Series Multilayer Directors crossbar architecture, with a central arbiter arbitrating fairly between local traffic and traffic to and from other modules.

Although each Supervisor-2A module contains a crossbar switch fabric, this is used only on Cisco MDS 9509 and MDS 9506 chassis. On the Cisco MDS 9513 chassis, the crossbar switch fabrics are on fabric cards (refer to Figure 7-15) inserted into the rear of the chassis, and the crossbar switch fabrics housed on the supervisors are disabled. On the Cisco MDS 9513 Director, Cisco MDS NX-OS Release 5.2 supports the Cisco MDS 9513 Fabric 3 module, DS-13SLT-FAB3, which is a generation-4 type module.

Management connectivity is provided through an out-of-band Ethernet port (interface mgmt0), an RJ-45 serial console port, and optionally also a DB9 auxiliary console port suitable for modem connectivity. Table 7-6 lists the details of switching capabilities per fabric for the Cisco MDS 9710 and MDS 9706 Director switch models. Figure 7-16 portrays the Cisco MDS 9500 Series Supervisor-2A module and Cisco MDS 9700 Supervisor-1 module.

Figure 7-16 Cisco MDS 9000 Supervisor Models

Figure 7-17 portrays the crossbar switching modules for MDS 9700 and MDS 9513 platforms. which is why three is the minimum number of fabric modules that are supported on MDS 9700 Series switches.Table 7-6 Details of Switching Capabilities per Fabric for Cisco MDS 9710 and MDS 9706 Each fabric module provides 220 Gbps data bandwidth per slot. Figure 7-17 Cisco MDS 9700 Crossbar Switching Fabric-1 and MDS 9513 Crossbar Switching Fabric-3 . Three fabric modules provide 660 Gbps data bandwidth and 768 Gbps of Fibre Channel bandwidth per slot. Table 7-7 lists the details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A. equivalent to 256 Gbps of Fibre Channel bandwidth. The line-rate on 48-port 16 Gbps Fibre Channel module needs only three Fabric modules.

Table 7-7 Details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A

Both the MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A modules have two USB 2.0 ports provided on the front panel for simplified configuration-file uploading and downloading using common USB memory stick products.

Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module provides high port density and is ideal for connection of high-performance virtualized servers. For large-scale storage networking environments, this module supports 1.5:1 oversubscription at the 8-Gbps Fibre Channel (FC) rate across all ports. For traffic switched across the backplane, the 48-port 8-Gbps Advanced Fibre Channel switching module delivers full-duplex aggregate backplane switching performance of 512 Gbps. With Arbitrated Local Switching, the 48-port 8-Gbps Advanced module delivers full-duplex aggregate performance of 768 Gbps across locally switched ports on the module. Figure 7-18 portrays the Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module.

. and switch cascading. 2-. It provides multiprotocol capabilities. distributed intelligent fabric services. The 32-port 8-Gbps Advanced switching module delivers full-duplex aggregate performance of 512 Gbps. and 4-Gbps Fibre Channel ports and four 1-Gigabit Ethernet IP storage services ports. Fibre Channel. The modules can support 8-Gbps or 10-Gbps and/or a combined (8. in a single-form-factor. IBM Fibre Connection (FICON). Cisco MDS 9000 I/O Accelerator (IOA). and the ports are grouped in different orders depending on the 8-Gbps and 10Gbps modes. The Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is optimized for deployment of high-performance SAN extension solutions. Cisco Storage Media Encryption (SME). Figure 7-19 Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module Cisco MDS 9000 18/4-Port Multiservice Module (MSM) The Cisco MDS 9000 18/4-Port Multiservice Module (Figure 7-20) offers 18 1-. and cost-effective IP storage and mainframe connectivity in enterprise storage networks. Figure 7-19 portrays the Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module. making this module well suited for the high-performance 8-Gbps storage subsystems.and 10-Gbps) InterSwitch Link (ISL). The 10-Gbps ports can be used for long-haul DWDM connections. FICON Control Unit Port (CUP) management. integrating. Small Computer System Interface over IP (iSCSI). Fibre Channel over IP (FCIP).Figure 7-18 Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module The MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module delivers line-rate performance across all ports and is ideal for high-end storage subsystems and for Inter-Switch Link (ISL) connectivity. Cisco Storage Services Enabler (SSE). Cisco MDS 9000 Extended Remote Copy (XRC) Acceleration.

Figure 7-20 Cisco MDS 9000 18/4-Port Multiservice Module

Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
The Cisco MDS 9000 16-Port Storage Services Node provides a high-performance, flexible, unified platform for deploying enterprise-class disaster recovery, business continuance, and intelligent fabric applications. The 16-Port Storage Services Node offers FCIP for remote SAN extension; Cisco I/O Accelerator (IOA) to optimize the utilization of metropolitan- and wide-area network (MAN/WAN) resources for backup and replication using hardware-based compression, FC write acceleration, and FC tape read-and-write acceleration; and Cisco Storage Media Encryption (SME) to secure mission-critical data stored on heterogeneous disk, tape, and VTL drives. The Cisco MDS 9000 16-Port Storage Services Node hosts four independent service engines, which can each be individually and incrementally enabled to scale as business requirements change, or be configured to consolidate up to four fabric application instances to dramatically decrease hardware footprint and free valuable slots in the MDS 9500 Director chassis. Table 7-8 lists the details of the Multiservice Module (MSM) and Storage Services Node (SSN). Figure 7-21 portrays the Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module.

Figure 7-21 Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node Module

Table 7-8 Details of Multiservice Module (MSM) and Storage Services Node (SSN)

Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
The Cisco MDS 9000 Series supports 10-Gbps Fibre Channel switching with the Cisco MDS 9000 4-Port 10-Gbps Fibre Channel switching module. This module supports hot-swappable X2 optical SC interfaces. Modules can be configured with either short-wavelength or long-wavelength X2 transceivers for connectivity up to 300 meters and 10 kilometers, respectively. Figure 7-22 portrays the MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module.

Figure 7-22 Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module

Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9500 Series of Multilayer Directors supports the 10-Gbps 8-port FCoE switching module with full line-rate FCoE connectivity to extend the benefits of FCoE beyond the access layer into the core of the data center network. The Cisco MDS 9000 10-Gbps FCoE module (see Figure 7-23) provides the industry's first multihop-capable FCoE module.

Figure 7-23 Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic-management attributes of Fibre Channel while preserving investments in Fibre Channel tools, training, and SANs. FCoE enables the preservation of Fibre Channel as the storage protocol in the data center while giving customers a viable solution for I/O consolidation. Not only does the Cisco MDS 9000 10-Gbps 8-Port FCoE Module take advantage of the migration of FCoE into the core layer, but it also extends enterprise-class Fibre Channel services to Cisco Nexus 7000 and 5000 Series Switches and FCoE initiators. This capability:

Allows FCoE initiators to access remote Fibre Channel resources connected through Fibre Channel over IP (FCIP) for enterprise backup solutions
Supports virtual SANs (VSANs) for resource separation and consolidation
Supports inter-VSAN routing (IVR) to use resources that may be segmented

Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
The Cisco MDS 9700 Series Fibre Channel Switching Module is hot swappable and compatible with 2/4/8-Gbps, 4/8/16-Gbps, and 10-Gbps FC interfaces. Individual ports can be configured with Cisco 16-Gbps, 8-Gbps, or 10-Gbps SFP+ transceivers, and the module supports hot-swappable Enhanced Small Form-Factor Pluggable (SFP+) transceivers. Each port supports 500 buffer credits without requiring additional licensing. With the Cisco Enterprise Package, up to 4095 buffer credits can be allocated to an individual port, enabling full link bandwidth over long distances with no degradation in link utilization. The 16-Gbps Fibre Channel switching module continues to provide previously available features such as predictable performance, high availability, comprehensive security frameworks, integrated VSANs and IVR, resilient high-performance ISLs, fault detection and isolation of error packets, advanced traffic-management capabilities, and sophisticated diagnostics. Figure 7-24 portrays the Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module.

Figure 7-24 Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module

Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9700 48-Port 10-Gbps FCoE Module provides the scalability of 384 10-Gbps full line-rate autosensing ports in a single chassis. The MDS 9700 module also supports 10 Gigabit Ethernet clocked optics carrying 10-Gbps Fibre Channel traffic. This module also supports connectivity to FCoE initiators and targets that send only FCoE traffic, thus providing a way to deploy a dedicated FCoE SAN without requiring any convergence. Figure 7-25 portrays the Cisco MDS 9700 48-Port 10-Gbps FCoE Module.

cisco. Enhanced Small Form-Factor Pluggable (SFP+). . The most up-to-date information can be found for pluggable transceivers in the following link: www. Cisco MDS family Storage Modules offer a wide range of options for various SAN architectures. They allow you to choose different cabling types and distances on a port-by-port basis.html.Figure 7-25 Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module Cisco MDS 9000 Family Pluggable Transceivers The Cisco Small Form-Factor Pluggable (SFP). along with the chassis compatibility associated with them. Table 7-9 lists all the hardware modules available at the time of writing this chapter. and X2 devices are hot-swappable transceivers that plug in to Cisco MDS 9000 family director switching modules and fabric switch ports.com/c/en/us/products/collateral/storage-networking/mds-9000-seriesmultilayer-switches/product_data_sheet09186a00801bc698.


Table 7-9 Hardware Modules and the Chassis Compatibility Associated with Them

Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9200 and MDS 9100 Series switches are fabric switches designed for deployment in small to medium-sized enterprise SANs. The Cisco MDS 9200 Series switches deliver state-of-the-art multiprotocol and distributed multiservice convergence, offering high-performance SAN extension and disaster-recovery solutions, intelligent fabric services, and cost-effective multiprotocol connectivity for both open systems and mainframe environments. With a compact form factor and advanced capabilities normally available only on director-class switches, the Cisco MDS 9200 Series switches provide an ideal solution for departmental and remote branch-office SANs.

The Cisco MDS 9100 Series switches are cost-effective, easy-to-install, and highly configurable Fibre Channel switches that are ideal for small to medium-sized businesses. With high port densities, the Cisco MDS 9100 Series switches easily scale from an entry-level departmental switch to a top-of-the-rack switch, providing edge-switch connectivity to enterprise core SANs. Cisco MDS NX-OS provides services optimized for virtual machines and blade servers that let IT managers dynamically respond to changing business needs in virtual environments. The Cisco MDS 9200 and 9100 Series switches are FIPS compliant, as mandated by the U.S. federal government, and support IPv6.

Cisco MDS 9148
The MDS 9148 is a 1RU Fibre Channel switch with 48 line-rate 8-Gbps ports, also supporting 1/2/4-Gbps connectivity. The Cisco MDS 9148 can be used as the foundation for small, standalone SANs, as a top-of-rack switch, or as an edge switch in larger, core-edge SAN infrastructures. The Cisco MDS 9148 Multilayer Fabric Switch is designed to support quick configuration with zero-touch plug-and-play features and task wizards that allow it to be deployed quickly and easily in networks of any size. The following additional capabilities are included:

Ports per chassis: The On-Demand Port Activation license allows the addition of eight 8-Gbps ports at a time, all the way to 48 ports, by adding these licenses. Customers have the option of purchasing preconfigured models of 16, 32, or 48 ports and upgrading the 16- and 32-port models onsite.
Buffer credits: Each port group consisting of four ports has a pool of 128 buffer credits, with a default of 32 buffer credits per port. When extended distances are required, up to 125 buffer credits can be allocated to a single port within the port group. This extensibility is available without additional licensing.
Port channels: Port channels enable users to aggregate up to 16 physical Inter-Switch Links (ISLs) into a single logical bundle, providing load-balancing bandwidth use across all links. The bundle can consist of any port from the switch, helping ensure that the bundle remains active even in the event of a port group failure.

Figure 7-26 portrays the Cisco MDS 9148 switch.

Figure 7-26 Cisco MDS 9148

Cisco MDS 9148S
The Cisco MDS 9148S 16G Multilayer Fabric Switch (see Figure 7-27) is a compact one-rack-unit (1RU) switch that scales from 12 to 48 line-rate 16-Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports.

Figure 7-27 Cisco MDS 9148S

The Cisco MDS 9148S offers In-Service Software Upgrades (ISSU); this means that Cisco NX-OS Software can be upgraded while the Fibre Channel ports carry traffic. The Cisco MDS 9148S also supports Power On Auto Provisioning (POAP) to automate software image upgrades and configuration file installation on newly deployed switches, port channels for Inter-Switch Link (ISL) resiliency, and F-port channeling for resiliency on uplinks from a Cisco MDS 9148S operating in NPV mode. The Cisco MDS 9148S includes dual redundant hot-swappable power supplies and fan trays. Table 7-10 lists the chassis comparison between the MDS 9148 and MDS 9148S.

Table 7-10 Chassis Comparison Between MDS 9148 and MDS 9148S .Figure 7-27 Cisco MDS 9148S The Cisco MDS 9148S offers In-Service Software Upgrades (ISSU). Table 7-10 lists the chassis comparison between MDS 9148 and MDS 9148S. The Cisco MDS 9148S also supports Power On Auto Provisioning (POAP) to automate software image upgrades and configuration file installation on newly deployed switches. This means that Cisco NX-OS Software can be upgraded while the Fibre Channel ports carry traffic. port channels for Inter-Switch Link (ISL) resiliency. and F-port channeling for resiliency on uplinks from a Cisco MDS 9148S operating in NPV mode. The Cisco MDS 9148S includes dual redundant hot-swappable power supplies and fan trays.
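On-Demand Port Activation in practice is just a license install plus per-port acquisition; a minimal sketch follows (the license filename and interface number are hypothetical):

switch# install license bootflash:port-activation.lic
switch(config)# interface fc1/13
switch(config-if)# port-license acquire

The port-license acquire command assigns one of the newly licensed port slots to the interface, and show port-license displays which ports currently hold licenses.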

Cisco MDS 9222i
The Cisco MDS 9222i provides multilayer and multiprotocol functions in a compact three-rack-unit (3RU) form factor. It provides 18 fixed 4-Gbps Fibre Channel ports and 4 fixed Gigabit Ethernet ports for Fibre Channel over IP (FCIP) and Small Computer System Interface over IP (iSCSI) storage services. The Cisco MDS 9222i is designed to address the needs of medium-sized and large enterprises with a wide range of SAN capabilities. It can be used as a cost-effective SAN extension over IP switch for midrange and midsize customers in support of IT simplification, replication, and disaster-recovery and business-continuity solutions. It can also provide remote-site device aggregation and SAN extension connectivity to large data center directors.

The Cisco MDS 9222i supports the Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized Fibre Channel Switching Module in the expansion slot. Thus, the Cisco MDS 9222i scales up to 66 ports for both open systems and IBM Fiber Connection (FICON) mainframe environments, with four of the Fibre Channel ports capable of running at 8 Gbps. The Cisco MDS 9222i also supports a Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module in the expansion slot. Figure 7-28 portrays the Cisco MDS 9222i switch.

Figure 7-28 Cisco MDS 9222i

The following additional capabilities are included:

High-performance ISLs: The Cisco MDS 9222i supports up to 16 Fibre Channel interswitch links (ISLs) in a single port channel. Links can span any port on any module in a chassis for added scalability and resilience. Up to 4095 buffer-to-buffer credits can be assigned to a single Fibre Channel port to extend storage networks over long distances.
Intelligent application services engines: The Cisco MDS 9222i includes as standard a single application services engine in the fixed slot that enables the included SAN Extension over IP software solution package to run on the four fixed 1 Gigabit Ethernet storage services ports. The Cisco MDS 9222i can add a second services engine when a Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is installed in the expansion slot. This additional engine can run a second advanced software application solution package, such as Cisco MDS 9000 I/O Accelerator (IOA), to further improve the SAN Extension over IP performance and throughput

running on the standard services engine. If more intelligent services applications need to be deployed, the Cisco MDS 9000 16-Port Storage Services Node (SSN) can be installed in the expansion slot to provide four additional services engines for simultaneously running application software solution packages such as Cisco Storage Media Encryption (SME). Overall, the Cisco MDS 9222i switch platform scales from a single application service solution in the standard offering to a maximum of five heterogeneous or homogeneous application services, depending on the choice of optional service module that is installed in the expansion slot.
Advanced FICON services: The Cisco MDS 9222i supports System z FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N-port ID virtualization (NPIV) for mainframe Linux partitions. IBM Control Unit Port (CUP) support enables in-band management of Cisco MDS 9200 Series switches from the mainframe management console. FICON tape acceleration reduces latency effects for FICON channel extension over FCIP for FICON tape read and write operations to mainframe physical or virtual tape. This feature is sometimes referred to as tape pipelining. The Cisco MDS 9000 XRC Acceleration feature enables acceleration of dynamic updates for IBM z/OS Global Mirror, formerly known as XRC.
In-Service Software Upgrade (ISSU) for Fibre Channel interfaces: The Cisco MDS 9222i provides high serviceability by allowing Cisco MDS 9000 NX-OS Software to be upgraded while the Fibre Channel ports are carrying traffic. Also, IPv6 support is provided for FCIP, iSCSI, and management traffic routed in-band and out-of-band.

Table 7-11 lists the technical details of the Cisco MDS 9200 and 9100 Series models.

Cisco MDS 9250i
The Cisco MDS 9250i offers up to forty 16-Gbps Fibre Channel ports, two 1/10-Gigabit Ethernet IP storage services ports, and eight 10-Gigabit Ethernet FCoE ports in a fixed two-rack-unit (2RU) form factor. The Cisco MDS 9250i connects to existing native Fibre Channel networks. Also, using the eight 10-Gigabit Ethernet FCoE ports, the Cisco MDS 9250i platform attaches to directly connected FCoE and Fibre Channel storage devices and supports multitier unified network fabric connectivity directly over FCoE. The Cisco SAN Extension over IP application package license is enabled as standard on the two fixed 1/10 Gigabit Ethernet IP storage services ports, enabling features such as Fibre Channel over IP (FCIP) and compression on the switch without the need for additional licenses. Figure 7-29 portrays the Cisco MDS 9250i two-rack-unit switch.

Figure 7-29 Cisco MDS 9250i


Table 7-11 Summary of the Cisco MDS 9200 and 9100 Series Models

The following additional capabilities are included:

SAN consolidation with integrated multiprotocol support: The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through the On-Demand Port Activation license) of 16-Gbps Fibre Channel, 2 ports of 10-Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

High-performance ISLs: The Cisco MDS 9250i supports up to 16 Fibre Channel ISLs in a single port channel (a hedged configuration sketch follows this list). Links can span any port on any module in a chassis. Up to 256 buffer-to-buffer credits can be assigned to a single Fibre Channel port.

Intelligent application services engine: The Cisco MDS 9250i includes as standard a single application services engine that enables the included Cisco SAN Extension over IP software solution package to run on the two fixed 1/10 Gigabit Ethernet storage services ports.

Advanced FICON services: The Cisco MDS 9250i supports FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, FICON tape acceleration, and IBM Extended Remote Copy (XRC) features like the MDS 9222i. It also supports IBM CUP. The Cisco MDS 9250i supports In Service Software Upgrade (ISSU) for Fibre Channel interfaces.

Cisco Data Mobility Manager (DMM) as a distributed fabric service: Cisco DMM is a fabric-based data migration solution that transfers block data nondisruptively across heterogeneous storage volumes and across distances, whether the host is online or offline.

Platform for intelligent fabric applications: The Cisco MDS 9250i provides an open platform that delivers the intelligence and advanced features required for multilayer intelligent SANs.

FIPS compliance: The Cisco MDS 9250i is FIPS 140-2 compliant, and it is IPv6 capable.
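The ISL port channels called out for both the MDS 9222i and MDS 9250i are built with standard Cisco MDS NX-OS commands. The following sketch is illustrative only and is not from the original text; the port channel number, interface ranges, and credit value are hypothetical, and exact limits vary by platform, port mode, and license:

! Hypothetical example: bundle four E-port ISLs into one port channel
switch(config)# interface port-channel 1
switch(config-if)# switchport mode E
switch(config)# interface fc1/1-4
switch(config-if)# channel-group 1 force
switch(config-if)# no shutdown
! Buffer-to-buffer credits can also be tuned per Fibre Channel port
switch(config)# interface fc1/5
switch(config-if)# switchport fcrxbbcredit 256

Reaching the extended figure of 4095 buffer-to-buffer credits quoted for the MDS 9222i additionally requires the appropriate license and long-distance port configuration.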

Cisco MDS Fibre Channel Blade Switches

Cisco offers Fibre Channel blade switch solutions for the HP c-Class BladeSystem and IBM BladeCenter. Cisco Fibre Channel Blade switches are built using the same hardware and software platform as Cisco MDS 9000 family Fibre Channel switches and hence are consistent in features and performance. Architecturally, blade switches share the same design as the Cisco MDS 9000 family fabric switches. The Cisco MDS Fibre Channel Blade Switch functions are equivalent to the Cisco MDS 9124 switch, in the form factor to fit IBM BladeCenter (BladeCenter, BladeCenter T, and BladeCenter H) and HP c-Class BladeSystem (see Figure 7-30). There is also the MDS 8-Gbps Fibre Channel Blade Switch for HP c-Class BladeSystem. It runs Cisco MDS 9000 NX-OS Software, includes advanced storage networking features and functions, and is compatible with Cisco MDS 9500 Series Multilayer Directors and Cisco MDS 9200 Series Multilayer Fabric Switches. The Cisco MDS Fibre Channel Blade Switch is capable of speeds of 4, 2, and 1 Gbps. With its flexibility to expand ports in increments, it offers the densities required from entry level to advanced, providing transparent, end-to-end service delivery in core-edge deployments. Figure 7-30 shows the Cisco MDS Fibre Channel Blade Switch for HP and IBM.

Figure 7-30 Cisco MDS Fibre Channel Blade Switches

The Cisco MDS Fibre Channel Blade Switch for IBM BladeCenter has two models: the 10-port switch and the 20-port switch. In the 10-port model, 7 ports are for internal connections to the server and 3 ports are external. In the 20-port model, 14 ports are for internal connections to the server and 6 ports are external. There is a 10-port upgrade license to upgrade from 10 ports to 20 ports. Supported IBM BladeCenter chassis are IBM BladeCenter E, BladeCenter T, BladeCenter H, BladeCenter HT, IBM BladeCenter S (10-port model only), and MSIM (20-port model only). The management of the module is integrated with the IBM Management Suite.

The Cisco MDS Fibre Channel Blade Switch for HP c-Class BladeSystem has two models: the base 12-port model and the 24-port model. In the 12-port model, 8 ports are for internal connections to the server, and 4 ports are external or storage area network (SAN) facing. In the 24-port model, 16 ports are internal for server connections, and 8 ports are external or SAN facing. There is a 12-port license upgrade to upgrade the 12-port model from 12 ports to 24 ports. The management of the module is integrated with the HP OpenView Management Suite.

The switch modules have the following features:

Six external autosensing Fibre Channel ports that operate at 4, 2, or 1 Gbps

Internal autosensing Fibre Channel ports that operate at 4 or 2 Gbps

Two internal full-duplex 100 Mbps Ethernet interfaces

Power-on diagnostics and status reporting

External ports can be configured as F_ports (fabric ports), FL_ports (fabric loop ports), or E_ports (expansion ports); internal ports are configured as F_ports at 2 Gbps or 4 Gbps

Figure 7-31 shows the Cisco MDS Fibre Channel Blade Switch for HP.

Figure 7-31 Cisco MDS 8-Gbps Fibre Channel Blade Switch

Cisco Prime Data Center Network Manager

Cisco Prime Data Center Network Manager (DCNM) is a management system for the Cisco Data Center Unified Fabric, which is composed of Nexus and MDS 9000 switches. It enables you to provision, monitor, and troubleshoot the data center network infrastructure. It provides visibility and control of the unified data center so that you can optimize for the quality of service (QoS) required to meet service-level agreements. Cisco Prime DCNM 7.0 (at the time of writing this chapter) includes the infrastructure necessary to operate and automate a Cisco Dynamic Fabric Automation (DFA) network fabric. DCNM software provides ready-to-use, large-scale, standards-based, extensible management capabilities for Cisco DFA. Cisco Prime DCNM 7.0 policy-based deployment helps reduce labor and operating expenses (OpEx). Cisco Prime DCNM 7.0 also integrates with external hypervisor solutions and third-party applications using a Representational State Transfer (REST) API. Figure 7-32 illustrates Cisco Prime DCNM as the central point of integration for NX-OS platforms, with visibility across networking, computing, and storage resources.

Figure 7-32 Cisco Prime DCNM as Central Point of Integration

Cisco Prime DCNM provisions and optimizes the overall uptime and reliability of the data center fabric. It provides a robust framework and comprehensive feature set that meets the routing, switching, and storage administration needs of data centers. Cisco DCNM provides a high level of visibility and control through a single web-based management console for Cisco Nexus, Cisco MDS, and Cisco Unified Computing System products. This advanced management product

Provides self-service provisioning of intelligent and scalable fabrics.

Proactively monitors the SAN and LAN, reports performance, and detects performance degradation.

Eases diagnosis and troubleshooting of data center outages.

Simplifies operational management of virtualized data centers.

Opens northbound application programming interfaces (API) for Cisco and third-party management and orchestration platforms.

Centralizes fabric management to facilitate resource moves, adds, and changes.

Tracks port utilization by tier, and through trending capacity manager predicts when an individual tier will be consumed.

The VMware vCenter plug-in brings the Cisco Prime DCNM computing dashboard into VMware vCenter for dependency mapping, inventory, configuration, performance, and events. VMpath analysis provides the capability to view performance for every switch hop all the way to the individual VMware ESX server and virtual machine.

This management solution can be deployed in Windows and Linux, or as a virtual service blade on the Nexus 1010 or 1100. More Cisco DCNM servers can be added to a cluster. Cisco DCNM authenticates administration through the following protocols: TACACS+, RADIUS, and LDAP (a hedged switch-side AAA sketch follows this section). The Cisco Fabric Manager product for Cisco DCNM Release 5.2 is retitled with the name Cisco DCNM for SAN, and the former Cisco DCNM product for Release 5.2 is retitled with the name Cisco DCNM for LAN. Cisco DCNM also supports the installation of the Cisco DCNM for SAN and Cisco DCNM for LAN components with a single installer.
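DCNM's own administrator authentication against TACACS+, RADIUS, or LDAP is configured within the DCNM application itself. For comparison, a minimal switch-side AAA configuration on a managed NX-OS device might look like the following sketch; the server address, key, and group name are hypothetical values, not from the original text:

switch(config)# feature tacacs+
switch(config)# tacacs-server host 10.10.10.20 key MyTacacsKey
switch(config)# aaa group server tacacs+ DCNM-ADMINS
switch(config-tacacs+)# server 10.10.10.20
switch(config)# aaa authentication login default group DCNM-ADMINS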

Cisco DCNM SAN Fundamentals Overview

Cisco Prime DCNM SAN has the following fundamental components: DCNM-SAN Server, DCNM-SAN Web Client, DCNM-SAN Client, Device Manager, Performance Manager, Cisco Traffic Analyzer, Network Monitoring, Performance Monitoring, and Authentication in DCNM-SAN Client. Figure 7-33 shows how Cisco Prime DCNM manages highly virtualized data centers.

Figure 7-33 Cisco Prime DCNM Managing Virtualized Topology

DCNM-SAN Server

DCNM-SAN Server is a platform for advanced MDS monitoring, troubleshooting, and configuration capabilities. DCNM-SAN Server provides centralized MDS management services and performance monitoring. SNMP operations are used to collect fabric information (a hedged switch-side SNMP sketch follows the feature list below). The Cisco DCNM-SAN software, including the server components, requires about 100 GB of hard disk space. Cisco DCNM-SAN Server runs on Windows 2008 Standard SP2 or Windows 2008 R2 SP1, Red Hat Enterprise Linux 5.x and 6.x, and VMware ESXi (supported combinations vary between Cisco Prime DCNM releases such as 6.3, 6.4, and 7.0). Each computer configured as a Cisco DCNM-SAN Server can monitor multiple Fibre Channel SAN fabrics. Up to 16 clients (by default) can connect to a single Cisco DCNM-SAN Server concurrently. The Cisco DCNM-SAN Clients can also connect directly to an MDS switch in fabrics that are not monitored by a Cisco DCNM-SAN Server, which ensures that you can manage any of your MDS devices from a single console.

Cisco DCNM-SAN Server has the following features:

Multiple fabric management: DCNM-SAN Server monitors multiple physical fabrics under the same user interface. This facilitates managing redundant fabrics. A licensed DCNM-SAN Server maintains up-to-date discovery information on all configured fabrics, so device status and interconnections are immediately available when you open the DCNM-SAN Client.

Continuous health monitoring: MDS health is monitored continuously, so any events that occurred since the last time you opened the DCNM-SAN Client are captured.

Roaming user profiles: The licensed DCNM-SAN Server uses the roaming user profile feature to store your preferences and topology map layouts on the server, so that your user interface will be consistent regardless of what computer you use to manage your storage networks.
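Because DCNM-SAN Server discovers and monitors fabrics through SNMP, each managed MDS switch needs a usable SNMP credential. The following is a minimal sketch; the username, role, and passwords are hypothetical, and SNMPv3 options vary by NX-OS release:

! Hypothetical SNMPv3 user for DCNM-SAN discovery
switch(config)# snmp-server user dcnmadmin network-admin auth sha MyAuthPass priv MyPrivPass
switch# show snmp user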

Licensing Requirements for Cisco DCNM-SAN Server

When you install DCNM-SAN, the basic unlicensed version of Cisco DCNM-SAN Server is installed with it. To get the licensed features, such as Performance Manager, remote client support, and continuously monitored fabrics, you need to buy and install the Cisco DCNM-SAN Server package. However, trial versions of these licensed features are available. To enable the trial version of a feature, you run the feature as if you had purchased the license. You see a dialog box explaining that this is a demo version of the feature and that it is enabled for a limited time.

DCNM-SAN Web Client

With DCNM-SAN Web Client you can monitor Cisco MDS switch events, performance, and inventory from a remote location using a web browser.

Device Manager

Device Manager provides a graphical representation of a Cisco MDS 9000 family switch chassis, Cisco Nexus 5000 Series switch chassis, or Cisco Nexus 7000 Series switch chassis (FCoE only), along with the installed switching modules, the supervisor modules, the status of each port within each module, the power supplies, and the fan assemblies. The tables in the DCNM-SAN information pane basically correspond to the dialog boxes that appear in Device Manager. However, although DCNM-SAN tables show values for one or more switches, a Device Manager dialog box shows values for a single switch. Device Manager also provides more detailed information for verifying or troubleshooting device-specific configuration than DCNM-SAN.

Performance Manager

The primary purpose of DCNM-SAN is to manage the network. A key management capability is network performance monitoring. Performance Manager gathers network device statistics historically and provides this information graphically using a web browser. Both tabular and graphical reports are available for all interconnections monitored by Performance Manager. Performance Manager provides the following:

Performance Manager drill-down reports: Performance Manager can analyze daily, weekly, monthly, and yearly trends. You can also view the results for specific time intervals using the interactive zooming functionality. These reports are available only if you create a collection using Performance Manager and start the collector.

Performance Manager summary reports: The Performance Manager summary report provides a high-level view of the network performance. These reports list the average and peak throughput and provide hot links to additional performance graphs and tables with additional statistics.

Zero maintenance databases for statistics storage: No maintenance is required to maintain Performance Manager's round-robin database, because its size does not increase over time. A full two days of raw samples are saved for maximum resolution. At prescribed intervals the oldest samples are averaged (rolled up) and saved. Gradually the resolution is reduced as groups of the oldest samples are rolled up together.

Performance Manager has three operational stages:

Definition: The Flow Wizard sets up flows in the switches. Flows are defined based on a host-to-storage (or storage-to-host) link.

Collection: The Web Server Performance Collection screen collects information on desired fabrics. Performance Manager gathers statistics from across the fabric based on collection configuration files. These files determine which SAN elements and SAN links Performance Manager gathers statistics for. Based on this configuration, Performance Manager communicates with the appropriate devices (switches, hosts, or storage elements) and collects the appropriate information at fixed five-minute intervals. Performance Manager can collect statistics for ISLs, hosts, storage elements, and configured flows.

Presentation: Generates web pages to present the collected data through DCNM-SAN Web Server. Performance Manager presents recent statistics in detail and older statistics in summary.

Performance Manager also integrates with external tools such as Cisco Traffic Analyzer.

Cisco Traffic Analyzer

Cisco Traffic Analyzer provides real-time analysis of SPAN traffic, or analysis of captured traffic, through a web browser user interface. It is based on ntop, a public domain software enhanced by Cisco for Fibre Channel traffic analysis. Cisco Traffic Analyzer monitors round-trip response times, SCSI I/Os per second, SCSI read or traffic throughput and frame counts, SCSI session status, and management task information. Additional statistics are also available on Fibre Channel frame sizes and network management protocols. Traffic encapsulated by one or more Port Analyzer Adapter products can be analyzed concurrently with a single workstation running Cisco Traffic Analyzer. (A hedged SPAN configuration sketch follows at the end of this section.)

Authentication in DCNM-SAN Client

Administrators launch DCNM-SAN Client and select the seed switch that is used to discover the fabric. The username and password are passed to DCNM-SAN Server and are used to authenticate to the seed switch. If the switch recognizes the username and password, in either the local switch authentication database or through a remote AAA server, the switch creates a temporary SNMP username that is used by DCNM-SAN Client and DCNM-SAN Server. If this username and password are not a recognized SNMP username and password, either DCNM-SAN Client or DCNM-SAN Server opens a CLI session to the switch (SSH or Telnet) and retries the username and password pair.
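Cisco Traffic Analyzer consumes SPAN traffic, so a SPAN session must be configured on the MDS switch feeding it. The following is a hedged sketch; the interface numbers and session ID are hypothetical, and the destination port must be placed in SPAN destination (SD) mode:

! Hypothetical SPAN session mirroring fc1/1 receive traffic to an SD-mode port
switch(config)# interface fc1/16
switch(config-if)# switchport mode SD
switch(config-if)# no shutdown
switch(config)# span session 1
switch(config-span)# source interface fc1/1 rx
switch(config-span)# destination interface fc1/16
switch(config-span)# no suspend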

Network Monitoring

DCNM-SAN provides extensive SAN discovery, topology mapping, and information viewing capabilities. DCNM-SAN collects information on the fabric topology through SNMP queries to the switches connected to it. DCNM-SAN re-creates a fabric topology, presents it in a customizable map, and provides inventory and configuration information in multiple viewing options such as fabric view, device view, summary view, and operation view. Using information polled from a seed Cisco MDS 9000 family switch, including Name Server registrations, Fibre Channel Generic Services (FC-GS), Fabric Shortest Path First (FSPF), and SCSI-3, DCNM-SAN automatically discovers all devices and interconnects on one or more fabrics. All available switches, host bus adapters (HBAs), and storage devices are discovered. The Cisco MDS 9000 family switches use Fabric-Device Management Interface (FDMI) to retrieve the HBA model, serial number and firmware version, and host operating-system type and version, without host agents. The device information discovered includes device names, software revision levels, vendor, ISLs, SAN elements, and SAN links. DCNM-SAN gathers this information through SNMP queries to each switch.

Performance Monitoring

DCNM-SAN and Device Manager provide multiple tools for monitoring the performance of the overall fabric, SAN elements, and SAN links. These tools provide real-time statistics as well as historical performance monitoring. Real-time statistics gather data on parts of the fabric in user-configurable intervals and display these results in DCNM-SAN and Device Manager. Device Manager provides an easy tool for monitoring ports on the Cisco MDS 9000 family switches. This tool gathers statistics at a configurable interval and displays the results in tables or charts. You can set the polling interval from ten seconds to one hour. For a selected port, you can monitor any of a number of statistics, including traffic in and out, errors, class 2 traffic, and FICON data, and display the results based on a number of selectable options, including absolute value, value per second, and minimum or maximum value per second. These statistics show the performance of the selected port in real time and can be used for performance monitoring and troubleshooting. Real-time performance statistics are a useful tool in dynamic troubleshooting and fault isolation within the fabric. Historical performance monitoring covers interconnections such as ISLs, port channels, and VSANs.

Reference List

The landing page for the Cisco SAN portfolio is available at http://www.cisco.com/c/en/us/products/storage-networking/index.html

Cisco MDS 9700 Data Sheets: http://www.cisco.com/c/en/us/products/storage-networking/mds-9700-series-multilayer-directors/datasheet-listing.html

Cisco MDS 9500 Data Sheets: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9500-series-multilayer-directors/data_sheet_c78_605104.html

Cisco MDS 9000 Family Pluggable Transceivers Data Sheet: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html

More information on the Cisco MDS 9000 Family intelligent fabric applications: http://www.cisco.com/en/US/products/ps6028/index.html

More information on the Cisco Prime DCNM: http://www.cisco.com/go/dcnm

More information on the Cisco I/O Acceleration: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10531/data_sheet_c78538860.pdf

More information on the Data Mobility Manager (DMM): http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/product_data_sheet0900ae

More information on the XRC: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10526/data_sheet_c78538834.html

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 7-12 lists a reference of these key topics and the page numbers on which each is found.

Table 7-12 Key Topics for Chapter 7

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

storage-area network (SAN)
Fibre Channel over Ethernet (FCoE)
Data Center Network Manager (DCNM)
Internet Small Computer System Interface (iSCSI)
Fibre Connection (FICON)
Fibre Channel over IP (FCIP)
buffer credits
crossbar arbiter
virtual output queuing (VOQ)
virtual SAN (VSAN)
zoning
N-port virtualization (NPV)
N-port identifier virtualization (NPIV)
Data Mobility Manager (DMM)
I/O Accelerator (IOA)
Dynamic Fabric Automation (DFA)
multiprotocol support
XRC Acceleration
inter-VSAN routing (IVR)
head-of-line blocking

Chapter 8. Virtualizing Storage

This chapter covers the following exam topics:

Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs

In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, or network, where the framework divides the resource into one or more execution environments. Devices, applications, and human users are able to interact with the virtual resource as if it were a real single logical resource. A number of computing technologies benefit from virtualization, including the following:

Storage virtualization: Combining multiple network storage devices into what appears to be a single storage unit.

Server virtualization: Partitioning a physical server into smaller virtual servers.

Network virtualization: Using network resources through a logical segmentation of a single physical network.

Application virtualization: Decoupling the application and its data from the operating system.

This chapter goes directly into the key concepts of storage virtualization and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification. This chapter guides you through storage virtualization concepts, answering basic what, why, and how questions.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 8-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 8-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you

do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following options are reasons to use storage virtualization? (Choose three.)
a. Power and cooling consumption in the data center
b. Exponential data growth and disruptive storage upgrades
c. Virtualization trends
d. Data protection regulations
e. Achieving cost-effective business continuity and data protection

2. Which of the following organizations defines the storage virtualization standards?
a. INCITS (International Committee for Information Technology Standards)
b. IETF (Internet Engineering Task Force)
c. SNIA (Storage Networking Industry Association)
d. DNS
e. Storage virtualization has no standard measure defined by a reputable organization.

3. Which of the following options can be virtualized according to SNIA storage virtualization taxonomy? (Choose all that apply.)
a. Disks
b. Blocks
c. Tape systems
d. File systems
e. Switches

4. Where does the storage virtualization occur? (Choose three.)
a. The host
b. The storage array
c. In the SAN fabric
d. The control plane
e. DNS

5. Which of the following options explains RAID 6?
a. It is an exact copy (or mirror) of a set of data on two disks.
b. It comprises block-level striping with distributed parity.
c. It comprises block-level striping with double distributed parity.
d. It creates a striped set from a series of mirrored drives.

6. Which of the following options explains RAID 1+0?
a. It is an exact copy (or mirror) of a set of data on two disks.
b. It comprises block-level striping with distributed parity.
c. It comprises block-level striping with double distributed parity.
d. It creates a striped set from a series of mirrored drives.

7. Which of the following options explains the LUN masking? (Choose two.)
a. It is a feature of storage arrays.
b. It should be configured on HBAs.
c. It provides basic LUN-level security by allowing LUNs to be seen by selected servers.
d. It is a feature of Fibre Channel HBA.
e. It is a proprietary technique that Cisco MDS switches offer.

8. Which of the following options explains the Logical Volume Manager (LVM)? (Choose two.)
a. The LVM manipulates LUN representation to create virtualized storage to the file system.
b. LVM combines multiple hard disk drive components into a logical unit to provide data redundancy.
c. LVMs can be used to divide large physical disk arrays into more manageable virtual disks.
d. The LVM is a collection of ports from a set of connected Fibre Channel switches.

9. Which of the following options are the advantages of host-based storage virtualization? (Choose three.)
a. It uses the operating system's built-in tools.
b. It is independent of SAN transport.
c. It is close to the file system.
d. It uses the array controller's CPU cycles.
e. It is licensed and managed per host.

10. Select the correct order of operations for completing asynchronous array-based replication.
I. Write operation is received by the primary storage array from the host.
II. Write operation to the primary array is acknowledged locally; the primary storage array does not require a confirmation from the secondary storage array to acknowledge the write operation to the server.
III. The primary storage array maintains new data queued to be copied to the secondary storage array at a later time, and it initiates a write to the secondary storage array.
IV. An acknowledgement is sent to the primary storage array by the secondary storage array after the data is stored on the secondary storage array.
a. I, II, III, IV
b. II, III, I, IV
c. I, III, II, IV
d. II, I, III, IV
e. Asynchronous mirroring operation was not explained correctly.

Foundation Topics

What Is Storage Virtualization?

Storage virtualization is a way to logically combine storage capacity and resources from various heterogeneous, external storage systems into one virtual pool of storage. This virtual pool can then be more easily managed and provisioned as needed. According to the SNIA dictionary, storage virtualization is the act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, computer servers, or general network resources for the purpose of enabling application and network independent management of storage or data.

The beginning of virtualization goes a long way back. Virtual memory operating systems evolved during the 1960s. The original mass storage device used a tape cartridge, which was a helical-scan cartridge that looked like a disk. The late 1970s witnessed the introduction of the first solid-state disk, which was a box full of DRAM chips that appeared to be rotating magnetic disks. Virtualization achieved a new level when the first Redundant Array of Inexpensive Disk (RAID) virtual disk array was announced in 1992 for mainframe systems, and virtualization began to move into storage and some disk subsystems. The first virtual tape systems appeared in 1997 for mainframe systems. Virtualization gained momentum in the late 1990s as a result of virtualizing SANs and storage networks in an effort to tie together all the computing platforms from the 1980s. In 2003, virtual tape architectures moved away from the mainframe and entered new markets, increasing the demand for virtual tape. In 2005, virtual tape was the most popular storage initiative in many storage management pools. Much of the pioneering work in virtualization began on mainframes and has since moved into the non-mainframe, open-system world.

Interestingly, storage virtualization has no standard measure defined by a reputable organization such as INCITS (International Committee for Information Technology Standards) or the IETF (Internet Engineering Task Force). The closest vendor-neutral attempt to make storage virtualization concepts comprehensible has been the work of the Storage Networking Industry Association (SNIA), which has produced useful tutorial content on the various flavors of virtualization technology.

Software Defined Storage (SDS) was proposed in 2013 as a new category of storage software products. The term Software Defined Storage is a follow-up of the technology trend Software Defined Networking, which was first used to describe an approach in network technology that abstracts various elements of networking and creates an abstraction or virtualized layer in software. In networking, the control plane and the data plane have been intertwined within the traditional switches that are deployed today, making abstraction and virtualization more difficult to manage in complex virtual environments. Similar to server virtualization, SDS refers to the abstraction of the physical elements. SDS delivers automated, policy-driven, application-aware storage services through orchestration of the underlining storage infrastructure in support of an overall software defined environment. A single set of tools and processes performs everything from online any-to-any data migrations to heterogeneous replication. SDS represents a new evolution for the storage industry for how storage will be managed and deployed in the future.

How Storage Virtualization Works

Storage virtualization works through mapping. The storage virtualization layer creates a mapping from the logical storage address space (used by the hosts) to the physical address of the storage device. Such mapping information, also referred to as metadata, is stored in huge mapping tables. When a host requests I/O, the virtualization layer looks at the logical address in that I/O request and, using the mapping table, translates the logical address into the address of a physical storage device. The virtualization layer then performs I/O with the underlying storage device using the physical address, and when it receives the data back from the physical device, it returns that data to the application as if it had come from the logical address. The application is unaware of the mapping that happens beneath the covers. Figure 8-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices.

Figure 8-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage

Why Storage Virtualization?

The business drivers for storage virtualization are much the same as those for server virtualization. The major economic driver is to reduce costs without sacrificing data integrity or performance. CIOs and IT managers must cope with shrinking IT budgets and growing client demands. They must simultaneously improve asset utilization, ensure business continuity, use IT resources more efficiently, and become more agile. In addition, they are faced with ever-increasing constraints on power, cooling, and space. Organizations can use the technology to resolve these issues and create a more adaptive, flexible service-based infrastructure to reflect ongoing changes in business requirements.

The top seven reasons to use storage virtualization are

Exponential data growth and disruptive storage upgrades
Low utilization of existing assets
Growing management complexity with flat or decreasing budgets for IT staff head count
Increasing hard costs and environmental costs to acquire, run, and manage storage
Ensuring rapid, high-quality storage service delivery to application and business owners
Achieving cost-effective business continuity and data protection
Power and cooling consumption in the data center

What Is Being Virtualized?

The Storage Networking Industry Association (SNIA) taxonomy for storage virtualization is divided into three basic categories: what is being virtualized, where the virtualization occurs, and how it is implemented. This is illustrated in Figure 8-2.

Figure 8-2 SNIA Storage Virtualization Taxonomy Separates Objects of Virtualization from Location and Means of Execution

What is being virtualized may include blocks, disks, tape systems, file systems, and file or record virtualization. Where virtualization occurs may be on the host, in storage arrays, or in the network via intelligent fabric switches or SAN-attached appliances. How the virtualization occurs may be via in-band or out-of-band separation of control and data paths. Storage virtualization provides the means to build higher-level storage services that mask the complexity of all underlying components and enable automation of data storage operations. The ultimate goal of storage virtualization should be to simplify storage administration. This can be achieved by a layered approach, binding multiple levels of technologies on a foundation of logical abstraction.

The abstraction layer that masks physical from logical storage may reside on host systems such as servers, within the storage network in the form of a virtualization appliance or within SAN switches, or on storage array or tape subsystem targets. In common usage, these alternatives are referred to as host-based, network-based, or array-based virtualization, respectively. The most important reasons for storage virtualization were covered in the "Why Storage Virtualization?" section. Figure 8-3 illustrates all components of an intelligent SAN.

Figure 8-3 Possible Alternatives Where the Storage Virtualization May Occur

In addition to differences between where storage virtualization is located, vendors have different methods for implementing virtualized storage transport. The in-band method places the virtualization engine directly in the data path, so that both block data and the control information that govern its virtual appearance transit the same link. The out-of-band method provides separate paths for data and control, presenting an image of virtual storage to the host by one link and allowing the host to directly retrieve data blocks from physical storage on another. In-band and out-of-band virtualization techniques are sometimes referred to as symmetrical and asymmetrical, respectively. In the in-band method the appliance acts as an I/O target to the hosts and acts as an I/O initiator to the storage. It has the opportunity to work as a bridge; for example, it can present an iSCSI target to the host while using standard FC-attached arrays for the storage. This capability enables end users to preserve investment in storage resources while benefiting from the latest host connection mechanisms such as Fibre Channel over Ethernet (FCoE).

Block Virtualization—RAID

RAID is another data storage virtualization technology that combines multiple hard disk drive components into a logical unit to provide data redundancy and improve performance or capacity. Originally, in 1988, it was defined as Redundant Arrays of Inexpensive Disks (RAID) in a paper written by David A. Patterson, Gary A. Gibson, and Randy Katz from the University of California. Now it is commonly used as Redundant Array of Independent Disks (RAID). Data is distributed across the hard disk drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (for example, RAID 1). Originally, there were five RAID levels, but many variations have evolved, notably several nested levels and many nonstandard levels (mostly proprietary). The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the common RAID Disk Drive Format (DDF) standard.

RAID is a staple in most storage products. It can be found in disk subsystems, host bus adapters, volume management software, device driver software, system motherboards, and virtually any processor along the I/O path. RAID has been implemented in adapters/controllers for many years, starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. Disk subsystems have been the most popular product for RAID implementations and probably will continue to be for many years to come. There is an enormous range of capabilities offered in disk subsystems across a wide range of prices. Because RAID works with storage address spaces and not necessarily storage devices, it can be used on multiple levels recursively. Host volume management software typically has several RAID variations and, in some cases, offers many sophisticated configuration options. Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes.

The most common RAID levels that are used today are RAID 0 (striping), RAID 1 and variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity). Many RAID levels employ an error protection scheme called "parity," a widely used method in information technology to provide fault tolerance in a given set of data. Most of the RAID levels use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois Field or Reed-Solomon error correction, which we will not be covering in this chapter.

A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped), without parity information. It provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical disk out of two or more physical ones. A RAID 0 can be created with disks of different sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if an 800 GB disk is striped together with a 500 GB disk, the size of the array will be 1 TB (500 GB × 2).

A RAID 1 is an exact copy (or mirror) of a set of data on two disks. A classic RAID 1 mirrored pair contains two disks. Such an array can only be as big as the smallest member disk. For example, if a 500 GB disk is mirrored together with a 400 GB disk, the size of the array will be 400 GB. This is useful when read performance or reliability is more important than data storage capacity.

A RAID 5 comprises block-level striping with distributed parity. RAID 5 requires a minimum of three disks. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity so that no data is lost. As with RAID 5 generally, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced.

A RAID 6 comprises block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, because large-capacity drives take longer to restore. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure.

Many storage controllers allow RAID levels to be nested (hybrid). The final array of the RAID level is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the "+", which yields RAID 10 and RAID 50, respectively. RAID arrays are rarely nested more than one level deep. The most common examples of nested RAIDs are the following:

RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.

RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses as long as no mirror loses all its drives.

Table 8-2 lists the different types of RAID levels.

Table 8-2 Overview of Standard RAID Levels

Each storage array vendor has its own management and proprietary techniques for LUN masking in the array. known as a Logical Block Address (LBA). leading to potential loss of data or security. In most SAN environments. each individual LUN must be discovered by only one server host bus adapter (HBA). one physical disk is broken down into smaller subsets in multiple megabytes or gigabytes of disk space. a feature of enterprise storage arrays. provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by their port worldwide name (pWWN). LUN mapping: LUN mapping. as shown in Figure 8-13: LUN masking: LUN masking. LUN mapping must be configured on every HBA. allows the administrator to selectively map some LUNs that have been discovered by the HBA. In a heterogeneous environment with arrays from different vendors. LUN masking in heterogeneous environments or where Just a Bunch of Disks (JBOD) are installed. LUN zoning can be used instead of. In a block-based storage environment. They then perform LUN management in the array (LUN masking) or in the network (LUN zoning). . LUN management becomes more difficult.is performed depends on the implementation. LUN zoning: LUN zoning. allows LUNs to be selectively zoned to their appropriate host port. a proprietary technique that Cisco MDS switches offer. a feature of Fibre Channel HBAs. the same volume will be accessed by more than one file system. one block of information is addressed by using a LUN and an offset within that LUN. Most administrators configure the HBA to automatically map all LUNs that the HBA discovers. Otherwise. In a large SAN. this mapping is a large management task. There are three ways to prevent this multiple access. or in combination with. Typically.

Thin-provisioning is a LUN virtualization feature that helps reduce the waste of block storage resources. which may be heterogeneous in nature. a virtualizer element is positioned between a host and its associated target disk array. This process is fully transparent to the applications using the storage infrastructure. consequently demanding special attention to the ratio between “declared” and actually used resources. The physical storage resources are aggregated into storage pools from which the logical storage is created. and LUN Zoning on a Storage Area Network (SAN) Note JBODs do not have a management function or controller and so do not support LUN masking. This feature brings the concept of oversubscription to data storage. Each storage vendor has different LUN virtualization solutions and techniques. and the virtual storage space will scale up by the same amount.Figure 8-13 LUN Masking. In this . only blocks that are present in the vLUN are saved on the storage pool. can be added as and when needed. disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited because there is no guarantee of what resides on the areas removed). More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion). although a server can detect a vLUN with a “full” size. This virtualizer generates a virtual LUN (vLUN) that proxies server I/O operations while hiding specialized data block processes that are occurring in the pool of disk arrays. Similarly. Using this technique. More storage systems. In most LUN virtualization deployments. LUN Mapping. Utilizing vLUNs disk expansion and shrinking can be managed easily.

or D2D2T. and name services. A VSAN. accessed through NFS and CIFS protocols over IP networks) not requiring tape library emulation at all. Ports within a single switch can be partitioned into multiple VSANs. With tape media virtualization the amount of mounts are reduced. and iSCSI. VSANs can also be configured separately and independently. Most current VTL solutions use SAS or SATA disk arrays as the primary storage component because of their relatively low cost. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives. Restore processes are found to be faster than backup regardless of implementations. modeled after the virtual local area network (VLAN) concept in Ethernet networking. By backing up data to disks instead of tapes. Tape media virtualization resolves the problem of underutilized tape media. The use of VSANs allows traffic to be isolated within specific portions of the network. . events. Figure 8-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. zones. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. If a problem occurs in one VSAN. tape libraries. that problem can be handled with a minimum of disruption to the rest of the network. In October 2004. VTL often increases performance of both backup and recovery operations. Each VSAN is a separate self-contained fabric using distinctive security policies. it saves tapes. FICON. (This scheme is called disk-to-disk-to-tape. and floor space. Tape Storage Virtualization Tape storage virtualization can be divided into two categories: the tape media virtualization and the tape drive and library virtualization (VTL). like each FC fabric. They also often offer a disk staging feature: moving the data from disk to a physical tape for long-term storage. and applied to a SAN. such as physical tapes.chapter we have briefly covered a subset of these features. Conversely. multiple switches can join a number of ports to form a single VSAN. VSANs were designed by Cisco. for disaster recovery purposes. A virtual tape library (VTL) is used typically for backup and recovery purposes. The benefits of such virtualization include storage consolidation and faster data restore processes. the data stored on the VTL’s disk array is exported to other media. despite sharing hardware resources. because disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. In some cases. most contemporary backup software products introduced also direct usage of the file system storage (especially network-attached storage. FCIP. the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI). The geographic location of the switches and the attached devices are independent of their segmentation into logical VSANs.) Alternatively. Virtualizing Storage-Area Networks A virtual storage-area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches. memberships. 
Virtualizing Storage-Area Networks

A virtual storage-area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches, which form a virtual fabric. VSANs were designed by Cisco, modeled after the virtual local area network (VLAN) concept in Ethernet networking, and applied to a SAN. Ports within a single switch can be partitioned into multiple VSANs, despite sharing hardware resources. Conversely, multiple switches can join a number of ports to form a single VSAN. In October 2004, the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI).

A VSAN, like each FC fabric, can offer different high-level protocols such as FCP, FCIP, FICON, and iSCSI. Each VSAN is a separate self-contained fabric using distinctive security policies, zones, events, memberships, and name services. VSANs can also be configured separately and independently. The use of VSANs allows traffic to be isolated within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. The geographic location of the switches and the attached devices is independent of their segmentation into logical VSANs. Figure 8-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches.

Figure 8-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure
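On Cisco MDS switches, creating VSANs and assigning interfaces to them takes only a few commands. The following is a minimal sketch; the VSAN numbers, names, and interfaces are hypothetical:

switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name ENGINEERING
switch(config-vsan-db)# vsan 20 name FINANCE
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# vsan 20 interface fc1/2
switch# show vsan membership

Each VSAN then runs its own instance of fabric services (zoning, name server, and so on), which is what provides the isolation described above.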

Since December 2002, Cisco has enabled multiple virtualization features within intelligent SANs. Several examples are Inter-VSAN Routing (IVR), aliasing, N-Port ID Virtualization (NPIV), and N-Port Virtualizer.

N-Port ID Virtualization (NPIV)

NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCID) over a single link. All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Different applications can be used in conjunction with NPIV. In a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives.

N-Port Virtualizer (NPV)

An extension to NPIV is the N-Port Virtualizer feature. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. Whereas NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches to scale the size of your fabric. There is, therefore, an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. NPV is intended to address this problem.
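Both features are enabled with a single command on NX-OS, although their scope differs: NPIV is enabled on the core switch that grants multiple FCIDs per port, whereas NPV changes the operating mode of the edge device. A hedged sketch follows; note that switching a device into NPV mode is disruptive and typically erases the configuration and reloads the switch:

! On the core director: allow multiple logins per F-port
core-switch(config)# feature npiv
! On the edge blade or top-of-rack device: operate as an NPV edge
edge-switch(config)# feature npv
! Verification
edge-switch# show npv flogi-table
core-switch# show flogi database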

Virtualizing File Systems

A virtual file system is an abstraction layer that allows client applications using heterogeneous file-sharing network protocols to access a unified pool of storage resources. These systems are based on a global file directory structure that is contained on dedicated file metadata servers. Whenever a file system client needs to access a file, it must retrieve information about the file from the metadata servers first (see Figure 8-15).

Figure 8-15 File System Virtualization

The purpose of a virtual file system (VFS) is to allow client applications to access different types of concrete file systems in a uniform way. It can be used to bridge the differences in Windows, Mac OS, and Unix file systems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference.

A VFS specifies an interface (or a "contract") between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might bring incompatibility from release to release, which would require that the concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system. Or the supplier of the operating system might make only backward-compatible changes to the contract, so that concrete file system support built for a given release of the operating system would work with future versions of the operating system. Sun Microsystems introduced one of the first VFSes on Unix-like systems. The VMware Virtual Machine File System (VMFS), Linux's Global File System (GFS), and the Oracle Clustered File System (OCFS) are all examples of virtual file systems.

File/Record Virtualization

File virtualization unites multiple storage devices into a single logical pool of files. It is a vital part of both file area network (FAN) and network file management (NFM) concepts. File virtualization is similar to the Domain Name System (DNS), which removes the requirement to know the IP address of a website; you simply type in the domain name. In a similar fashion, file virtualization eliminates the need to know exactly where a file is physically located. Users look to a single mount point, often called a "global" file system, that shows all their files.

There are two types of file virtualization methods available today. The first is one that's built in to the storage system or the NAS itself, which could solve the file stub and manual look-up problems. In a common scenario, the primary NAS could be a high-speed, high-performance system storing production data. Data can be moved from a tier-one NAS to a tier-two or tier-three NAS or network file server automatically. Because these files are less frequently accessed, they can be moved to lower-cost, more power-efficient systems that bring down capital and operational costs. (In Chapter 6, we discussed different types of tiered storage.) There may be some advantage here because the file system itself is managing the metadata that contains the file locations. The challenge, however, is that this metadata is unique to the users of that specific file system, meaning the same hardware must be used to expand these systems, and it is often available from only one manufacturer, a situation equivalent to not being able to mix brands of servers in a server virtualization project. Often global file systems are not granular to the file level. In some instances, you could lose the ability to store older data on a less expensive storage system. This eliminates one of the key capabilities of a file virtualization system, transparent data migration.

The second file virtualization method uses a standalone virtualization engine, typically an appliance. These systems can either sit in the data path (in-band) or outside the data path (out-of-band) and can move files to alternate locations based on a variety of attribute-related policies. In-band solutions typically offer better performance, whereas out-of-band solutions are simpler to implement. Both offer file-level granularity. Standalone file virtualization systems do not have to use all the same hardware, giving them the flexibility to mix hardware in a system.

Where Does the Storage Virtualization Occur?

Storage virtualization functionality, including aggregation (pooling), heterogeneous replication, and device emulation, can be implemented at the host server, in the data path on a network switch blade or on an appliance, as well as in a storage system. In some cases, volume management is built in to the operating system, and in other instances it is offered as a separate product. The best location to implement storage virtualization functionality depends in part on the preferences, existing technologies, and objectives for deploying storage virtualization.

Host-Based Virtualization

A host-based virtualization requires additional software running on the host as a privileged task or process. A physical device driver handles the volumes (LUNs) presented to the host system. A software layer, the logical volume manager, residing above the disk device driver, intercepts the I/O requests (see Figure 8-16) and provides the metadata lookup and I/O mapping.

Figure 8-16 Logical Volume Manager on a Server System

The LVM manipulates LUN representation to create virtualized storage for the file system or database manager. Most modern operating systems have some form of logical volume management built in (in Linux, it is called Logical Volume Manager, or LVM; in Solaris and FreeBSD, it is ZFS's zpool layer; in Windows, it is Logical Disk Manager, or LDM) that performs virtualization tasks. LVMs can be used to divide large physical disk arrays into more manageable virtual disks or to create large virtual disks from multiple physical disks.

The logical volume manager treats any LUN as a vanilla resource for storage pooling; it will not differentiate the RAID levels. So the storage administrator must ensure that the appropriate classes of storage under LVM control are paired with individual application requirements. With host-based virtualization, heterogeneous storage systems can be used on a per-server basis, but manual administration of hundreds or thousands of servers in a large data center can be very costly.

Host-based storage virtualization may require more CPU cycles. For smaller data centers, this may not be a significant issue, but use of host-resident virtualization must be balanced against the processing requirements of the upper-layer applications so that overall performance expectations can be met. RAID can be implemented without a special controller, but the software that performs the striping calculations often places a noticeable burden on the host CPU. For higher performance requirements, logical volume management may therefore be complemented by hardware-based virtualization; RAID controllers for internal DAS and JBOD external chassis are good examples of hardware-based virtualization.
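To make the host-based approach concrete, here is a brief illustrative sketch (not from the Cisco material) using the Linux LVM commands mentioned previously; the device names /dev/sdb and /dev/sdc, the volume group name datavg, and the sizes are hypothetical:

pvcreate /dev/sdb /dev/sdc                  # mark two physical disks as LVM physical volumes
vgcreate datavg /dev/sdb /dev/sdc           # pool them into a single volume group
lvcreate --size 100G --name datalv datavg   # carve a 100 GB virtual disk from the pool
mkfs.ext4 /dev/datavg/datalv                # place a file system on the logical volume

The file system sees only /dev/datavg/datalv; the LVM layer performs the metadata lookup and maps its I/O onto the underlying physical disks, exactly the role described previously.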

Network-Based (Fabric-Based) Virtualization
A network-based virtualization is a fabric-based switch infrastructure. The SAN fabric provides connectivity to heterogeneous server platforms and heterogeneous storage arrays and tape subsystems. A host-based (server-based) virtualization provides independence from vendor-specific storage, and storage-based virtualization provides independence from vendor-specific servers and operating systems. Network-based virtualization provides independence from both. It can be either in-band or out-of-band, referring to whether the virtualization engine is in the data path.

Multivendor fabric-based virtualization provides high availability, centralized within the switch architecture. The fabric may be composed of multiple switches from different vendors, so the virtualization capabilities require standardized protocols to ensure interoperability between fabric-hosted virtualization engines. Fabric-based storage virtualization dynamically interacts with virtualizers on arrays, servers, or appliances. Fabric-based virtualization requires processing power and memory on top of a hardware architecture that is already providing processing power for fabric services, switching, routing I/O requests to the proper physical locations, and other tasks. The fabric-based virtualization engine can be hosted on an appliance, an application-specific integrated circuit (ASIC) blade, or an auxiliary module.

The ANSI/INCITS T11.5 task group initiated the fabric application interface standard (FAIS). FAIS is an open systems project that defines a set of common application programming interfaces (APIs) to be implemented within fabrics. The APIs are a means to more easily integrate storage applications that were originally developed as host-, array-, or appliance-based utilities to now be supported within fabric switches and directors. FAIS development is thus being driven by the switch manufacturers and by companies who have developed storage virtualization software and virtualization hardware-assist components.

The FAIS initiative separates control information from the data path, while maintaining high-performance switching of storage data, and aims to provide fabric-based services such as snapshots, mirroring, and data replication. There are two types of processors, the CPP and the DPC. The control path processor (CPP) includes some form of operating system, the FAIS application interface, and the storage virtualization application. Allocation of virtualized storage to individual servers and management of the storage metadata is the responsibility of the storage application running on the CPP. The CPP is therefore a high-performance CPU with auxiliary memory. The data path controller (DPC) may be implemented at the port level in the form of an application-specific integrated circuit (ASIC) or dedicated CPU. The DPC is optimized for low latency and high bandwidth to execute basic SCSI read/write transactions under the management of one or more CPPs. The FAIS specifications do not define a particular hardware design for CPP and DPC entities, so vendor implementations may differ.

Network-based virtualization can be done with one of the following approaches:

Symmetric or in-band: With an in-band approach, the virtualization device sits in the path of the I/O data flow. Hosts send I/O requests to the virtualization device, which performs I/O with the actual storage device on behalf of the host. In-band solutions intercept I/O requests from hosts, map them to the physical storage locations, and regenerate those I/O requests to the storage systems on the back end. Caching for improving performance and other storage management features, such as replication and migration, can be supported. In-band solutions do require that the virtualization engine handle both control traffic and data, which requires the processing power and internal bandwidth to ensure that they don't add too much latency to the I/O process for host servers.

Asymmetric or out-of-band: The virtualization device in this approach sits outside the data path between the host and the storage device. Out-of-band solutions typically run on the server or on an appliance in the network and essentially process the control traffic, but they don't handle data traffic. This can result in less latency than in-band storage virtualization because the virtualization engine doesn't handle the data; it is also less disruptive if the virtualization engine goes down. What this means is that special software is needed on the host, which knows to first request the location of data from the virtualization device and then use that mapped physical address to perform the I/O.

Hybrid split-path: This method uses a combination of in-band and out-of-band approaches, taking advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed. Specialized software running on a dedicated, highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed. Whereas in typical in-band solutions the CPU is susceptible to being overwhelmed by I/O traffic, in the split-path approach the I/O-intensive work is offloaded to dedicated port-level ASICs on the SAN switch. These ASICs can look inside Fibre Channel frames and perform I/O mapping and reroute the frames without introducing any significant amount of latency.

Figure 8-17 portrays the three architecture options for implementing network-based storage virtualization.

Figure 8-17 Possible Architecture Options for How the Storage Virtualization Is Implemented

There are many debates regarding the architecture advantages of in-band (in the data path) vs. out-of-band (out-of-data path with agent and metadata controller) or split-path (hybrid of in-band and out-of-band) designs. Some applications and their storage requirements are best suited for a combination of technologies to address specific needs and requirements.

Cisco SANTap is a good example of network-based virtualization. Cisco SANTap is one of the Intelligent Storage Services features supported on the Storage Services Module (SSM), MDS 9222i Multiservice Modular Switch, and MDS 9000 18/4-Port Multiservice Module (MSM-18/4). The Cisco MDS 9000 SANTap service enables customers to deploy third-party appliance-based storage applications without compromising the integrity, availability, or performance of the data path between the server and disk. The protocol-based interface that is offered by SANTap allows easy and rapid integration of the data storage service application because it delivers a loose connection between the application and an SSM, which reduces the effort needed to integrate applications with the core services being offered by the SSM.

SANTap has a control path and a data path. The control path handles requests that create and manipulate replication sessions sent by an appliance. The control path is implemented using a SCSI-based protocol. An appliance sends requests to a Control Virtual Target (CVT), which the SANTap process creates and monitors. Responses are sent to the control logical unit number (LUN) on the appliance. SANTap also allows LUN mapping to Appliance Virtual Targets (AVT).

Array-Based Virtualization
In the array-based approach, the virtualization layer resides inside a storage array controller. That controller, called the primary storage array controller, is responsible for providing the mapping functionality, and multiple other storage devices from the same vendor or from a different vendor can be added behind it. The primary storage array controller also provides replication, migration, and other storage management services across the storage devices that it is virtualizing. The communication between separate storage array controllers enables the disk resources of each system to be managed collectively, either through a distributed management scheme or through a hierarchical master/slave relationship. The communication protocol between storage subsystems may be proprietary or based on the SNIA SMI-S standard.

As one of the first forms of array-to-array virtualization, data replication requires that a storage system function as a target to its attached servers and as an initiator to a secondary array. This is commonly referred to as disk-to-disk (D2D) data replication. Array-based data replication may be synchronous or asynchronous. In a synchronous replication operation (see Figure 8-18), each write to a primary array must be completed on the secondary array before the SCSI transaction acknowledges completion. The SCSI I/O is therefore dependent on the speed at which both writes can be performed, and any latency between the two systems affects overall performance. For this reason, synchronous data replication is generally limited to metropolitan distances (~150 kilometers) to avoid performance degradation due to speed-of-light propagation delay.
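To put the distance limit in perspective, here is a rough, back-of-the-envelope calculation (assuming light propagates through optical fiber at about 200,000 km/s, roughly two-thirds of its speed in a vacuum): a 150-km replication link adds about 150 / 200,000 = 0.75 ms of delay in each direction, so every synchronous write waits an extra 1.5 ms or more for its round trip before it can be acknowledged. Stretch the same link to 1,000 km and each write stalls for roughly 10 ms of propagation delay alone, which is why longer distances are typically served by asynchronous replication instead.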

Figure 8-18 Synchronous Mirroring

In asynchronous array-based replication (see Figure 8-19), individual write operations to the primary array are acknowledged locally, while one or more write transactions may be queued to the secondary array. This solves the problem of local performance but may result in loss of the most current writes to the secondary array if the link between the two systems is lost. To minimize this risk, some implementations require the primary array to temporarily buffer each pending transaction to the secondary so that transient disruptions are recoverable.

Figure 8-19 Asynchronous Mirroring

Array-based virtualization services often include automated point-in-time copies, or snapshots, that may be implemented as a complement to data replication. Point-in-time copy is a technology that permits you to make copies of large sets of data a common activity. Like photographic snapshots that capture images of physical action, point-in-time copies, or snapshots, are virtual or physical copies of data that capture the state of data set contents at a single instant. These copies can be used for a backup, a checkpoint to restore the state of an application, test data, data mining, and a kind of off-host processing. Both virtual (copy-on-write) and physical (full-copy) snapshots protect against corruption of data content; additionally, full-copy snapshots can protect against physical destruction.
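As a purely illustrative sketch of the virtual (copy-on-write) variety, reusing the hypothetical Linux LVM names from the earlier host-based example rather than an array controller's interface, a point-in-time copy can be created and mounted read-only for backup or test use:

# Reserve 10 GB for copy-on-write deltas; unchanged blocks are shared with the origin
lvcreate --size 10G --snapshot --name datalv_snap /dev/datavg/datalv
# Mount the frozen image for off-host-style processing
mount -o ro /dev/datavg/datalv_snap /mnt/snap

Because only changed blocks consume the reserved space, the snapshot is cheap to create, but unlike a full-copy snapshot it cannot survive destruction of the underlying physical disks.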

Solutions need to be able to scale in terms of performance, functionality, connectivity, resiliency, and ease of management without introducing instability. There are many approaches to implement and deploy storage virtualization functionality to meet various requirements. The best solution is going to be the one that meets customers' specific requirements and may vary by customers' different tiers of storage and applications. Table 8-3 provides a basic comparison between the various approaches, based on SNIA's virtualization tutorial.

Table 8-3 Comparison of Storage Virtualization Levels

Reference List
"Common RAID Disk Drive Format (DDF) standard," SNIA.org: http://www.snia.org/tech_activities/standards/curr_standards/ddf/
SNIA Dictionary: http://www.snia.org/education/dictionary
SNIA Storage Virtualization tutorial I and II: http://www.snia.org/education/storage_networking_primer/stor_virt
"N-Port Virtualization in the Data Center," Cisco: http://www.cisco.com/c/en/us/products/collateral/storagenetworking/mds-9000-nx-os-software-release-4-1/white_paper_c11-459263.html
TechTarget, 2006.
Moore, Fred G. Storage Virtualization for IT Flexibility. Sun Microsystems, 2006.
Santana, Gustavo A. A. Data Center Virtualization Fundamentals. Indianapolis: Cisco Press, 2013.

Clark, Tom. Storage Virtualization: Technologies for Simplifying Data Storage and Management. Indianapolis: Addison-Wesley Professional, 2005.
Farley, Marc. Storage Networking Fundamentals. Indianapolis: Cisco Press, 2004.
Long, James. Storage Networking Protocol Fundamentals, Volume 1 and 2. Indianapolis: Cisco Press, 2006.

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 8-4 lists a reference of these key topics and the page numbers on which each is found.

Table 8-4 Key Topics

Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms
Define the following key terms from this chapter, and check your answers in the Glossary:

storage-area network (SAN)
Redundant Array of Inexpensive Disks (RAID)
virtual storage-area network (VSAN)
operating system (OS)
Logical Unit Number (LUN)
virtual LUN (vLUN)
application-specific integrated circuit (ASIC)
host bus adapter (HBA)
thin provisioning
Storage Networking Industry Association (SNIA)
software-defined storage (SDS)
metadata
in-band
out-of-band
virtual tape library (VTL)
disk-to-disk-to-tape (D2D2T)
N-Port virtualizer (NPV)
N-Port ID virtualization (NPIV)
virtual file system (VFS)
asynchronous and synchronous mirroring
host-based virtualization
network-based virtualization
array-based virtualization

Chapter 9. Fibre Channel Storage-Area Networking

This chapter covers the following exam topics:
Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

The foundation of the data center network is the network software that runs the switches in the network. Cisco NX-OS Software is the network software for Cisco MDS 9000 family and Cisco Nexus family data center switching products. Formerly known as Cisco SAN-OS, starting from release 4.1 it is rebranded as Cisco MDS 9000 NX-OS Software. It is based on a secure, stable, and standard Linux core, providing a modular and sustainable base for the long term. Chapter 7, "Cisco MDS Product Family," discussed Cisco MDS 9000 Software features. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification. In this chapter we build our storage-area network starting from the initial setup configuration on Cisco MDS 9000 switches. The chapter discusses how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. It also describes how to verify virtual storage-area networks (VSAN), the fabric login, zoning, and the fabric domain using the command-line interface.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 9-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which MDS switch models support the PowerOn Auto Provisioning with NX-OS 6.2(9)?
a. MDS 9710
b. MDS 9148
c. MDS 9513
d. MDS 9148S
e. MDS 9706

2. During the initial setup script of MDS 9500, which interfaces can be configured with the IPv6 address?
a. MGMT 0 interface
b. All FC interfaces
c. VSAN interface
d. Out-of-band management interface
e. None

3. During the initial setup script of MDS 9706, how can you configure in-band management?
a. Via enabling SSH service.
b. Via answering yes to Configuring Advanced IP Options.
c. There is no configuration option for in-band management on MDS family switches.
d. MDS 9706 family director switch does not support in-band management.

4. Which of the following switches support Console port, COM1, and MGMT interface? (Choose all the correct answers.)
a. MDS 9250i
b. MDS 9508
c. MDS 9148S
d. MDS 9706

5. How does the On-Demand Port Activation license work on the Cisco MDS 9148S 16G FC switch?
a. The base configuration of 20 ports of 16 Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

b. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port activation license to support 24, 36, or 48 ports.
c. The base switch model comes with 8 ports and can be upgraded to models of 16, 32, or 48 ports.
d. The base switch model comes with 24 ports enabled and can be upgraded as needed with a 12-port activation license to support 36 or 48 ports.

6. Which is the correct option for the boot sequence?
a. System—Kickstart—BIOS—Loader
b. BIOS—Loader—Kickstart—System
c. System—BIOS—Loader—Kickstart
d. BIOS—Loader—System—Kickstart

7. Which of the following options is the correct configuration for in-band management of MDS 9250i?
a. Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ip address 10.2.2.2 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.2.2.1
b. Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ipv6 enable
switch(config-if)# ipv6 address 2001:0db8:800:200c::417a/64
switch(config-if)# no shutdown
c. Click here to view code image
switch(config)# interface vsan 1
switch(config-if)# no shutdown
d. Click here to view code image
switch(config-if)# (no) switchport mode F
switch(config-if)# (no) switchport mode auto
e. None

8. Which NX-OS CLI command suspends a specific VSAN traffic?
a. vsan 4 name SUSPEND
b. no suspend vsan 4
c. vsan 4 suspend

d. vsan no suspend vsan 4

9. In which of the following conditions for two MDS switches is the trunking state (ISL) and port mode E?
a. Switch 1: Switchport mode E, trunk mode off. Switch 2: Switchport mode E, trunk mode off.
b. Switch 1: Switchport mode E, trunk mode off. Switch 2: Switchport mode E, trunk mode on.
c. Switch 1: Switchport mode E, trunk mode auto. Switch 2: Switchport mode E, trunk mode auto.
d. Switch 1: Switchport mode E, trunk mode on. Switch 2: Switchport mode F, trunk mode auto.
e. No correct answer exists.

10. Which of the following options are valid member types for a zone configuration on MDS 9222i switch?
a. pWWN
b. IPv6 address
c. MAC address of the MGMT 0 interface
d. FC ID

Foundation Topics

Where Do I Start?
The Cisco MDS 9000 family of storage networking products supports 1-, 2-, 4-, 8-, 16-, and 10-Gbps Fibre Channel, 10-Gbps Fibre Channel over Ethernet (FCoE), and 1- and 10-Gbps Fibre Channel over IP (FCIP) on Gigabit Ethernet ports. In Chapter 7 we described all the details about the MDS 9000 product family; if needed, you may want to quickly review the chapter. For each MDS product family switch there is a specific hardware installation guide, which explains in detail how and where to install specific components of the MDS switches. Before starting any configuration, the equipment should be installed and mounted onto the racks. This chapter starts with explaining what to do after the switch is installed and powered on following the hardware installation guide.

The Cisco MDS family switches provide the following types of ports:

Console port (supervisor modules): The console port is available for all the MDS product family. The MDS multilayer directors have dual supervisor modules, and each of the supervisor modules has one management and one console port. The MDS multiservice and multilayer switches have one management and one console port as well. The console port, labeled "Console," is an RS-232 port with an RJ-45 interface. This port is used to create a local management connection to set the IP address and other initial configuration settings before connecting the switch to the network for the first time. It is an asynchronous (async) serial port; any device connected to this port must be capable of asynchronous transmission.

To connect the console port to a computer terminal, follow these steps:

1. Connect the supplied RJ-45 to DB-9 female adapter or RJ-45 to DP-25 female adapter (depending on your computer) to the computer serial port.
2. Connect the console cable (a rollover RJ-45 to RJ-45 cable) to the console port and to the RJ-45 to DB-9 adapter or the RJ-45 to DP-25 adapter (depending on your computer) at the computer serial port.
3. Configure the terminal emulator program to match the following default port characteristics: 9600 baud, 8 data bits, no parity, 1 stop bit.

COM1 port: This is an RS-232 port that you can use to connect to an external serial communication device such as a modem. The COM1 port (labeled "COM1") has a DB-9 interface. It is available on each supervisor module of MDS 9500 Series switches, and it is available for all the MDS product family except MDS 9700 director switches. To connect the COM1 port to a modem, follow these steps:

1. Connect the modem to the COM1 port using the adapters and cables. We recommend using the adapter and cable provided with the switch.
a. Connect the DB-9 serial adapter to the COM1 port.
b. Connect the RJ-45 to DB-25 modem adapter to the modem.
c. Connect the adapters using the RJ-45-to-RJ-45 rollover cable (or equivalent crossover cable).

2. The default COM1 settings are as follows:

line Aux:
Speed: 9600 bauds
Databits: 8 bits per byte
Stopbits: 1 bit(s)
Parity: none
Modem In: Enable
Modem Init-String - default: ATE0Q1&D2&C1S0=1\015
Statistics: tx:17 rx:0
Register Bits: RTS|DTR

MGMT 10/100/1000 Ethernet port (supervisor module): This is an Ethernet port that you can use to access and manage the switch by IP address, such as through Cisco Data Center Network Manager (DCNM). The MGMT Ethernet port is available for all the MDS product family. The supervisor modules support an autosensing MGMT 10/100/1000 Ethernet port (labeled "MGMT 10/100/1000") with an RJ-45 interface. You can connect the MGMT 10/100/1000 Ethernet port to an external hub, switch, or router. You need to use a modular, straight-through UTP cable to connect the MGMT 10/100/1000 Ethernet port to an Ethernet switch port or hub, or use a crossover cable to connect to a router interface.

Note
For high availability, connect the MGMT 10/100/1000 Ethernet port on the active supervisor module and on the standby supervisor module to the same network or VLAN. The active supervisor module owns the IP address used by both of these Ethernet connections. On a switchover, the newly activated supervisor module takes over this IP address. This process requires an Ethernet connection to the newly activated supervisor module.

Fibre Channel ports (switching modules): These are Fibre Channel (FC) ports that you can use to connect to the SAN or for in-band management.

Fibre Channel over Ethernet ports (switching modules): These are Fibre Channel over Ethernet (FCoE) ports that you can use to connect to the SAN or for in-band management.

Gigabit Ethernet ports (IP storage ports): These are IP Storage Services ports that are used for the FCIP or iSCSI connectivity.

USB drive/port: It is a simple interface that allows you to connect to different devices supported by NX-OS. In the supervisor module, there are two USB drives, Slot 0 and LOG FLASH. The LOG FLASH and Slot 0 USB ports use different formats for their data. The Cisco MDS 9700 Series switch has two USB drives (in each Supervisor-1 module). The USB drive is not available for MDS 9100 Series switches.

Let's also review the transceivers before we continue with the configuration. The Cisco SFP, SFP+, and X2 devices are hot-swappable transceivers that plug into Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The Cisco MDS 9000 family supports both Fibre Channel and FCoE protocols for SFP+ transceivers. You can use any combination of SFP and SFP+ transceivers that are supported by the switch. The only restrictions are that short wavelength (SWL) transceivers must be paired with SWL transceivers and long wavelength (LWL) transceivers with LWL transceivers; each transceiver must match the transceiver on the other end of the cable, and the cable must not exceed the stipulated cable length for reliable communications. The SFP hardware transmitters are identified by their acronyms when displayed in the show interface brief command. If the related SFP has a Cisco-assigned extended ID, the show interface and show interface brief commands display the ID instead of the transmitter type. The show interface transceiver command and the show interface fc slot/port transceiver command display both values for Cisco-supported SFPs. The most up-to-date information for pluggable transceivers can be found here: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayerswitches/product_data_sheet09186a00801bc698.html.

What are the differences between SFP, SFP+, and XFP? SFP, SFP+, and XFP are all terms for a type of transceiver that plugs into a special port on a switch or other network device to convert the port to a copper or fiber interface. SFP and SFP+ specifications are developed by the Multiple Source Agreement (MSA) group. They both have the same size and appearance but are based on different standards: SFP is based on the INF-8074i specification, and SFP+ is based on the SFF-8431 specification. SFP+ is in compliance with the IEEE 802.3ae protocol and the SFF-8431 and SFF-8432 specifications. XFP is based on the standard of the XFP MSA. XFP and SFP+ are both 10G fiber optical modules and can connect with other types of 10G modules; over time, XFP was replaced by the SFP+ 10G modules, and SFP+ became the mainstream design. The size of SFP+ is smaller than XFP, and an advantage of the SFP+ modules is that SFP+ has a more compact form factor package than X2 and XFP because it moves some functions to the motherboard, including the signal modulation function, MAC, CDR, and EDC. It can connect with the same type of XFP, X2, and XENPAK directly. The cost of SFP+ is lower than XFP, and SFP has the lowest cost among all three.

To get the most up-to-date technical specifications for fiber optics per the current standards and specs, including maximum supportable distances and attenuation for optical fiber applications by fiber type, check the Fiber Optic Association (FOA) page at http://www.thefoa.org/tech/Linkspec.htm.

The Cisco MDS product family uses the same operating system, NX-OS, as the Cisco Nexus product line. In Chapter 5, "Management and Monitoring of Cisco Nexus Devices," we discussed Nexus management and monitoring features; the Nexus product line and MDS family switches use mostly the same management features. The Cisco MDS 9000 family switches can be accessed and configured in many ways, and they support standard management protocols. Figure 9-1 portrays the tools for configuring the Cisco NX-OS software.

Figure 9-1 Tools for Configuring Cisco NX-OS Software

The different protocols that are supported in order to access, monitor, and configure the Cisco MDS 9000 family of switches are described in Table 9-2.
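For a concrete feel of two common access paths from Table 9-2 (the serial device name and IP address below are placeholders, not values from this chapter), an administrator might first reach the switch over the console port and later open a CLI session over SSH once mgmt0 is configured:

# Serial console session at the default settings (9600 baud, 8 data bits, no parity, 1 stop bit)
screen /dev/ttyUSB0 9600

# CLI session over the out-of-band management interface
ssh admin@192.0.2.10

Any terminal emulator (PuTTY, minicom, and so on) works equally well for the serial connection, as long as it matches the port characteristics listed earlier.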

Table 9-2 Protocols to Access, Monitor, and Configure the Cisco MDS Family

The Cisco MDS NX-OS Setup Utility—Back to Basics
The Cisco MDS NX-OS setup utility is an interactive command-line interface (CLI) mode that guides you through a basic (also called a startup) configuration of the system. The setup utility allows you to configure only enough connectivity for system management. The setup starts automatically when a device has no configuration file in NVRAM, and the dialog guides you through initial configuration. The setup utility allows you to build an initial configuration file using the System Configuration dialog; after the file is created, you can use the CLI to perform additional configuration. If you want to skip answers to any questions, press Enter; every answer can be skipped except for the administrator password. If a default answer is not available (for example, the device hostname), the device uses what was previously configured and skips to the next question. You can press Ctrl+C at any prompt to skip the remaining configuration options and proceed with what you have configured up to that point. Figure 9-2 portrays the setup script flow.

Figure 9-2 Setup Script Flow

You use the setup utility mainly for configuring the system initially, when no configuration is present. However, you can use the setup utility at any time for basic device configuration. The setup utility keeps the configured values when you skip steps in the script. For example, if you have already configured the mgmt0 interface, the setup utility does not change that configuration if you skip that step. However, if there is a default value for the step, the setup utility changes the configuration using that default, not the configured value. Be sure to carefully check the configuration changes before you save the configuration.

Note
Be sure to configure the IPv4 route, the default network IPv4 address, and the default gateway IPv4 address to enable SNMP access. If you enable IPv4 routing, the device uses the IPv4 route and the default network IPv4 address. If IPv4 routing is disabled, the device uses the default gateway IPv4 address. The setup script only supports IPv4.

Before starting the setup utility, you must execute the following steps:

Step 1. Have a password strategy for your network environment.
Step 2. Connect the console port on the supervisor module to the network. If you have dual supervisor modules, connect the console ports on both supervisor modules to the network.
Step 3. Connect the Ethernet management port on the supervisor module to the network. If you have dual supervisor modules, connect the Ethernet management ports on both supervisor modules to the network.

The first time you access a switch in the Cisco MDS 9000 family, when no configuration is present, it runs a setup program that

prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. This information is required to configure and manage the switch. The IP address can only be configured from the CLI. When you power up the switch for the first time, assign the IP address. After you perform this step, the Cisco MDS 9000 Family Cisco Prime DCNM can reach the switch through the console port. Switches in the Cisco MDS 9000 family boot automatically.

There are two types of management through the MDS NX-OS CLI. You can configure out-of-band management on the mgmt0 interface, and the in-band management logical interface is VSAN 1. The in-band management interface uses the Fibre Channel infrastructure to transport IP traffic. An interface for VSAN 1 is created on every switch in the fabric. Each switch should have its VSAN 1 interface configured with either an IPv4 address or an IPv6 address in the same subnetwork. A default route that points to the switch providing access to the IP network should be configured on every switch in the Fibre Channel fabric.

The following are the initial setup procedure steps for configuring out-of-band management on the mgmt0 interface:

Step 1. Power on the switch.
Step 2. Enter yes to enter the setup mode.

Click here to view code image
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.

Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

The setup utility guides you through the basic configuration process. Press Ctrl+C at any prompt to end the configuration process.

Step 3. Enter the password for the administrator.

Click here to view code image
Enter the password for admin: admin-password
Confirm the password for admin: admin-password

Step 4. Enter yes (yes is the default) to enforce a secure password standard. A secure password should contain characters from at least three of the classes: lowercase letters, uppercase letters, digits, and special characters.

Click here to view code image
Do you want to enforce secure password standard (yes/no): yes

You can also enable a secure password standard using the password strength-check command.

Step 5. Enter yes (no is the default) to create an additional account. While configuring your initial setup, you can create an additional user account (in the network-admin role) besides the administrator's account.

Click here to view code image
Create another login account (yes/no) [no]: yes
Enter the user login ID: user_name
Enter the password for user_name: user-password
Confirm the password for user_name: user-password
Enter the user role [network-operator]: network-admin

By default, two roles exist in all switches:
a. Network operator (network-operator): Has permission to view the configuration only. The operator cannot make any configuration changes.
b. Network administrator (network-admin): Has permission to execute all commands and make configuration changes. The administrator can also create and customize up to 64 additional roles. One (of these 64 additional roles) can be configured during the initial setup process.

Step 6. Configure the read-only or read-write SNMP community string.
a. Enter yes (no is the default) to configure the read-only SNMP community string.

Click here to view code image
Configure read-only SNMP community string (yes/no) [n]: yes

b. Enter the SNMP community string.

Click here to view code image
SNMP community string: snmp_community

Step 7. Enter a name for the switch. The switch name is limited to 32 alphanumeric characters; the default switch name is "switch".

Click here to view code image
Enter the switch name: switch_name

Step 8. Enter yes (yes is the default) at the configuration prompt to configure out-of-band management. IP version 6 (IPv6) is supported in Cisco MDS NX-OS Release 4.1(x) and later; however, the setup script supports only IP version 4 (IPv4) for the management interface.

Click here to view code image
Continue with out-of-band (mgmt0) management configuration? [yes/no]: yes

Enter the mgmt0 IPv4 address.

Click here to view code image
Mgmt0 IPv4 address: ip_address

Enter the mgmt0 IPv4 subnet mask.

Click here to view code image
Mgmt0 IPv4 netmask: subnet_mask

Step 9. Enter yes (yes is the default) to configure the default gateway.

Click here to view code image
Configure the default-gateway: (yes/no) [y]: yes

Enter the default gateway IP address.

Click here to view code image
IP address of the default gateway: default_gateway

Step 10. Enter yes (no is the default) to configure advanced IP options, such as in-band management, static routes, default network, DNS, and domain name.

Click here to view code image
Configure Advanced IP options (yes/no)? [n]: yes

a. Enter no (no is the default) at the in-band management configuration prompt.

Click here to view code image
Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no

b. Enter yes (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image
Enable ip routing capabilities? (yes/no) [y]: yes

c. Enter yes (yes is the default) to configure a static route.

Click here to view code image
Configure static route: (yes/no) [y]: yes

Enter the destination prefix.

Click here to view code image
Destination prefix: dest_prefix

Enter the destination prefix mask.

Click here to view code image
Destination prefix mask: dest_mask

Enter the next hop IP address.

Click here to view code image
Next hop ip address: next_hop_address

d. Enter yes (yes is the default) to configure the default network.

Click here to view code image
Configure the default-network: (yes/no) [y]: yes

Enter the default network IPv4 address.

Click here to view code image
Default network IP address [dest_prefix]: dest_prefix

e. Enter yes (yes is the default) to configure the DNS IPv4 address.

Click here to view code image

Click here to view code image Type the SSH key you would like to generate (dsa/rsa)? rsa Enter the number of key bits within the specified range. Enter yes (no is the default) to disable the Telnet service. Click here to view code image Enter number of milliseconds for congestion/no_credit drop[100 1000] or [d/default] for default:100 . Enter yes (no is the default) to skip the default domain name configuration. Click here to view code image Enable the telnet service? (yes/no) [n]: yes Step 13. DNS IP address: name_server f. Click here to view code image Enter the number of key bits? (768-2048) [1024]: 2048 Step 12. Enter con (con is the default) to configure congestion or no_credit drop.Configure the DNS IP address? (yes/no) [y]: yes Enter the DNS IP address. Enter yes (yes is the default) to enable the SSH service Click here to view code image Enabled SSH service? (yes/no) [n]: yes Enter the SSH key type. Click here to view code image Configure the default domain name? (yes/no) [n]: yes Enter the default domain name. Enter yes (yes is the default) to configure congestion or no_credit drop for FC interfaces. Click here to view code image Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]:con Step 15. Enter a value from 100 to 1000 (d is the default) to calculate the number of milliseconds for congestion or no_credit drop. Click here to view code image Default domain name: domain_name Step 11. Click here to view code image Configure congestion or no_credit drop for fc interfaces? (yes/ no) [q/quit] to quit [y]: yes Step 14.

Click here to view code image Configure NTP server? (yes/no) [n]: yes Enter the NTP server IPv4 address. NTP server IP address: ntp_server_IP_address Step 18. Enter permit (deny is the default) to deny a default zone policy configuration. Click here to view code image Configure default switchport mode F (yes/no) [n]: y Step 21. Enter enhanced (basic is the default) to configure default-zone mode as enhanced. Click here to view code image Enter mode for congestion/no_credit drop[E/F]: Step 17. Step 24. Enter shut (shut is the default) to configure the default switch port interface to the shut (disabled) state. Enter on (off is the default) to configure the switch port trunk mode. Click here to view code image Configure default port-channel auto-create state (on/off) [off]: on Step 22. Click here to view code image Enable full zoneset distribution (yes/no) [n]: yes Overrides the switch-wide default for the full zone set distribution feature. You see the new configuration. Enter on (off is the default) to configure the port-channel auto-create state. Click here to view code image . Enter yes (no is the default) to configure the NTP server. Enter yes (no is the default) to disable a full zone set distribution.Step 16. FCIP. Click here to view code image Configure default switchport trunk mode (on/off/auto) [off]: on Step 20. and Gigabit Ethernet interfaces are shut down. Only the Fibre Channel. Step 19. iSCSI. Click here to view code image Configure default zone policy (permit/deny) [deny]: permit Permits traffic flow to all members of the default zone. Step 23. Click here to view code image Configure default switchport interface state (shut/noshut) [shut]: shut The management Ethernet interface is not shut down at this point. Enter yes (yes is the default) to configure the switchport mode F. Enter a mode for congestion or no_credit drop. Review and edit the configuration that you have just entered.

Click here to view code image
username admin password admin_pass role network-admin
username user_name password user_pass role network-admin
snmp-server community snmp_community ro
switchname switch
interface mgmt0
ip address ip_address subnet_mask
no shutdown
ip routing
ip route dest_prefix dest_mask dest_address
ip default-network dest_prefix
ip default-gateway default_gateway
ip name-server name_server
ip domain-name domain_name
telnet server disable
ssh key rsa 2048 force
ssh server enable
ntp server ipaddr ntp_server
system default switchport shutdown
system default switchport trunk mode on
system default switchport mode F
system default port-channel auto-create
zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
system default zone mode enhanced

Enter no (no is the default) if you are satisfied with the configuration.

Click here to view code image
Would you like to edit the configuration? (yes/no) [n]: n

Step 26. Enter yes (yes is the default) to use and save this configuration. If you do not save the configuration at this point, none of your changes are updated the next time the switch is rebooted. Type yes to save the new configuration; this ensures that the kickstart and system images are also automatically configured.

Click here to view code image
Use this configuration and save it? (yes/no) [y]: yes

Step 27. Log in to the switch using the new username and password.
Step 28. Verify that the switch is running the latest Cisco NX-OS 6.2(x) software, depending on which you installed, by issuing the show version command. Example 9-1 portrays the show version display output.
Step 29. Verify that the required licenses are installed in the switch using the show license command.
Step 30. Verify the status of the modules on the switch using the show module command (see Example 9-2).

Example 9-1 Verifying NX-OS Version on an MDS 9710 Switch

Click here to view code image
MDS-9710-1# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac

Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. Certain components of this software are licensed under the GNU General Public License (GPL) version 2.0 or the GNU Lesser General Public License (LGPL) Version 2.1. A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

Software
  BIOS: version 3.1.0
  kickstart: version 6.2(3)
  system: version 6.2(3)
  BIOS compile time: 02/27/2013
  kickstart image file is: bootflash:///kickstart
  kickstart compile time: 7/10/2013 2:00:00 [07/31/2013 10:10:16]
  system image file is: bootflash:///system
  system compile time: 7/10/2013 2:00:00 [07/31/2013 11:59:38]

Hardware
  cisco MDS 9710 (10 Slot) Chassis ("Supervisor Module-3")
  Intel(R) Xeon(R) CPU with 8120784 kB of memory.
  Processor Board ID JAE171504KY
  Device name: MDS-9710-1
  bootflash: 3915776 kB
  slot0: 0 kB (expansion flash)

Kernel uptime is 0 day(s), 0 hour(s), 39 minute(s), 20 second(s)

Last reset
  Reason: Unknown
  System version: 6.2(3)
  Service: Core Plugin, Ethernet Plugin

Example 9-2 Verifying Modules on an MDS 9710 Switch

Click here to view code image
MDS-9710-1# show module
Mod  Ports  Module-Type                          Model            Status
---  -----  -----------------------------------  ---------------  ----------
1    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
2    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
5    0      Supervisor Module-3                  DS-X97-SF1-K9    active *
6    0      Supervisor Module-3                  DS-X97-SF1-K9    ha-standby

Mod  Sw      Hw
---  ------  ---
1    6.2(3)  1.1
2    6.2(3)  1.1
5    6.2(3)  1.1
6    6.2(3)  1.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  -----------
1    1c-df-0f-78-cf-d8 to 1c-df-0f-78-cf-db  JAE171308VQ
2    1c-df-0f-78-d8-50 to 1c-df-0f-78-d8-53  JAE1714019Y
5    1c-df-0f-78-df-05 to 1c-df-0f-78-df-17  JAE171504KY

6    1c-df-0f-78-df-2b to 1c-df-0f-78-df-3d  JAE171504E6

Mod  Online Diag Status
---  ------------------
1    Pass
2    Pass
5    Pass
6    Pass

Xbar  Ports  Module-Type      Model          Status
----  -----  ---------------  -------------  ------
1     0      Fabric Module 1  DS-X9710-FAB1  ok
2     0      Fabric Module 1  DS-X9710-FAB1  ok
3     0      Fabric Module 1  DS-X9710-FAB1  ok

Xbar  Sw  Hw
----  --  ---
1     NA  1.0
2     NA  1.0
3     NA  1.0

Xbar  MAC-Address(es)  Serial-Num
----  ---------------  -----------
1     NA               JAE1710085T
2     NA               JAE1710087T
3     NA               JAE1709042X

The following are the initial setup procedure steps for configuring the in-band management logical interface on VSAN 1. You can configure both in-band and out-of-band configuration together by entering yes in both step 10c and step 10d in the following procedure:

Step 10. Enter yes (no is the default) to configure advanced IP options, such as in-band management, static routes, default network, DNS, and domain name.

Click here to view code image
Configure Advanced IP options (yes/no)? [n]: yes

a. Enter yes (no is the default) at the in-band management configuration prompt.

Click here to view code image
Continue with in-band (VSAN1) management configuration? (yes/no) [no]: yes

Enter the VSAN 1 IPv4 address.

Click here to view code image
VSAN1 IPv4 address: ip_address

Enter the IPv4 subnet mask.

Click here to view code image
VSAN1 IPv4 net mask: subnet_mask

b. Enter no (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image
Enable ip routing capabilities? (yes/no) [y]: no

c. Enter no (yes is the default) to configure a static route.

Click here to view code image
Configure static route: (yes/no) [y]: no

d. Enter no (yes is the default) to configure the default network.

Click here to view code image
Configure the default-network: (yes/no) [y]: no

e. Enter no (yes is the default) to configure the DNS IPv4 address.

Click here to view code image
Configure the DNS IP address? (yes/no) [y]: no

f. Enter no (no is the default) to skip the default domain name configuration.

Click here to view code image
Configure the default domain name? (yes/no) [n]: no

In the final configuration, the following CLI commands will be added to the configuration:

Click here to view code image
interface vsan1
ip address ip_address subnet_mask
no shutdown
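Once the switch is up, the in-band interface can be checked with ordinary NX-OS commands. The following is a minimal sketch (the gateway address is a placeholder, and the output varies by release):

switch# show interface vsan 1
switch# ping 192.0.2.1

The show interface vsan 1 command should report the interface up with the address you assigned, and a successful ping of the default gateway confirms that IP over the Fibre Channel fabric is reachable.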

The Power On Auto Provisioning
POAP (Power On Auto Provisioning) automates the process of upgrading software images and installing configuration files on Cisco MDS and Nexus switches that are being deployed in the network. Starting with NX-OS 6.2(9), the POAP capability is available on Cisco MDS 9148 and MDS 9148S multilayer fabric switches (at the time of writing this book).

When you power up a switch for the first time, during startup it loads the software image that is installed at manufacturing and tries to find a configuration file from which to boot. When a configuration file is not found, POAP mode starts. When a Cisco MDS switch with the POAP feature boots and does not find the startup configuration, the switch enters POAP mode, locates the DCNM DHCP server, and bootstraps itself with its interface IP address, gateway, and DCNM DNS server IP addresses. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file. A prompt appears, asking if you want to abort POAP and continue with a normal setup. You can choose to exit or continue with POAP. If you exit POAP mode, you enter the normal interactive setup script. If you continue in POAP mode, all the front-panel interfaces are set up in the default configuration.

The POAP setup requires the following network infrastructure: a DHCP server to bootstrap the interface IP address, gateway address, and DNS (Domain Name System) server; a TFTP server that contains the configuration script used to automate the software image installation and configuration process; and one or more servers that contain the desired software images and configuration files. The USB device on MDS 9148 or on MDS 9148S does not contain the required installation files. Figure 9-3 portrays the network infrastructure for POAP.

Figure 9-3 POAP Network Infrastructure

Here are the steps to set up the network environment using POAP:

Step 1. (Optional) Put the POAP configuration script and any other desired software image and switch configuration files on a USB device that is accessible to the switch.
Step 2. Deploy a DHCP server and configure it with the interface, gateway, and TFTP server IP addresses and a boot file with the path and name of the configuration script file. This information is provided to the switch when it first boots.
Step 3. Deploy a TFTP server to host the configuration script.
Step 4. Deploy one or more servers to host the software images and configuration files.

After you set up the network environment for POAP setup, follow these steps to configure the switch using POAP:

Step 1. Install the switch in the network.
Step 2. Power on the switch.
Step 3. (Optional) If you want to exit POAP mode and enter the normal interactive setup script, enter y (yes).

To verify the configuration after bootstrapping the device using POAP, use one of the following commands:

show running-config: Displays the running configuration
show startup-config: Displays the startup configuration
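The chapter does not mandate a particular DHCP implementation; purely as an illustrative sketch, an ISC DHCP server could supply the pieces listed in Step 2 with a scope along these lines (every address and the script filename here are hypothetical):

subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.200;        # pool for the switch management interface
  option routers 10.10.10.1;              # gateway address
  option tftp-server-name "10.10.10.5";   # TFTP server hosting the script
  filename "poap_script.py";              # boot file: path and name of the configuration script
}

Whatever server you use, the essential point is the same: the DHCP offer must carry the interface address, the gateway, the TFTP server, and the boot file name so the switch can fetch and run the configuration script.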

Licensing Cisco MDS 9000 Family NX-OS Software Features
Licenses are available for all switches in the Cisco MDS 9000 family. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that feature. Some features are logically grouped into add-on packages that must be licensed separately, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. Any feature not included in a license package is bundled with the Cisco MDS 9000 family switches. Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required.

The licensing model defined for the Cisco MDS product line has two options:

Feature-based licenses allow features that are applicable to the entire switch. The cost varies based on a per-switch usage.
Module-based licenses allow features that require additional hardware modules. The cost varies based on a per-module usage. An example is the IPS-8 or IPS-4 module using the FCIP feature.

On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series multilayer fabric switches. You can also obtain licenses to activate ports on the Cisco MDS 9124 Fabric Switch, the Cisco MDS 9134 Fabric Switch, the Cisco Fabric Switch for HP c-Class Blade System, and the Cisco Fabric Switch for IBM BladeCenter.

Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. Devices with dual supervisors have the following additional high-availability features: the license software runs on both supervisor modules and provides failover protection, and the license key file is mirrored on both supervisor modules. Even if both supervisor modules fail, the license file continues to function from the version that is available on the chassis.

All licensed features may be evaluated for up to 120 days before a license is expired. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled; the grace period allows plenty of time to correct the licensing issue. Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing.

License Installation
You can either obtain a factory-installed license (only applies to new device orders) or perform a manual license installation (applies to existing devices in your network).

Obtaining a Factory-Installed License: You can obtain factory-installed licenses for a new Cisco NX-OS device; licenses can be installed on the switch in the factory. The first step is to contact the reseller or Cisco representative and request this service. The second step is using the device and the licensed features.

Performing a Manual Installation: If you have existing devices or if you want to install the licenses on your own, you must first obtain the license key file and then install that file in the device.

Obtaining the License Key File consists of the following steps:

Step 1. Obtain the serial number for your device by entering the show license host-id command.

The host ID is also referred to as the device serial number.

Click here to view code image
mds9710-1# show license host-id
License hostid: VDH=JAF1710BBQH

Step 2. Obtain your software license claim certificate document. If you cannot locate your software license claim certificate, contact Cisco Technical Support at this URL: http://www.cisco.com/en/US/support/tsd_cisco_worldwide_contacts.html.
Step 3. Locate the website URL from the software license claim certificate document. You can access the Product License Registration website from the Software Download website at this URL: http://www.cisco.com/cisco/web/download/index.html.
Step 4. Locate the product authorization key (PAK) from the software license claim certificate document.
Step 5. Follow the instructions on the Product License Registration website to register the license for your device.

Installing the License Key File consists of the following steps:

Step 1. Log in to the device through the console port of the active supervisor.
Step 2. Copy the license to the switch. You can use the file transfer service (tftp, ftp, sftp, or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools to copy a license to the switch.
Step 3. Perform the installation by using the install license command on the active supervisor module from the device console.

Click here to view code image
switch# install license bootflash:license_file.lic
Installing license ..done

Step 4. (Optional) Back up the license key file. Use the copy licenses command to save your license file to either the bootflash: directory or a slot0: device.
Step 5. Exit the device console and open a new terminal session to view all license files installed on the device using the show license command, as shown in Example 9-3.

Example 9-3 show license Output on MDS 9222i Switch

Click here to view code image
MDS-9222i# show license
MDS20090713084124110.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT ENTERPRISE_PKG cisco 1.0 permanent uncounted \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-INTRL</SKU> \
HOSTID=VDH=FOX1244H0U4 \
NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID> \
<PAK></PAK>" SIGN=D2ABC826DB70

. To uninstall the installed license file. Example 9-4 show license brief Output on MDS 9222i Switch Click here to view code image MDS-9222i# show license brief MDS20090713084124110. it can activate a license grace period. where filename is the name of the installed license key file.lic: SERVER this_host ANY VENDOR cisco .0 permanent 1 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSESINTRL</ SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>3</LicLineID> \ <PAK></PAK>" SIGN=C03E97505672 <snip> ..NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID> \ <PAK></PAK>" SIGN=D2ABC826DB70 INCREMENT STORAGE_SERVICES_ENABLER_PKG cisco 1. Click here to view code image switch# clear license Enterprise. you can use clear license filename command.lic Clearing license Enterprise.lic MDS-9222i# show license file MDS201308120931018680.lic: SERVER this_host ANY VENDOR cisco INCREMENT SAN_EXTN_OVER_IP_18_4 cisco 1.0 permanent 1 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALLLICENSES-INTRL</SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>2</LicLineID> \ <PAK></PAK>" SIGN=9287E36C708C INCREMENT SAN_EXTN_OVER_IP cisco 1.lic MDS201308120931018680. Click here to view code image switch# show license usage ENTERPRISE_PKG Application ----------ivr qos_manager ----------- You can use the show license usage command to identify all the active features.lic MDS201308120931018680.0 permanent 2 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-M92EXT1AK9=</SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20130812093101868</LicFileID><LicLineID>1</LicLineID> \ <PAK></PAK>" SIGN=DD075A320298 When you enable a Cisco NX-OS software feature. You can use the show license brief command to display a list of license files installed on the device (see Example 9-4).

Example 9-5 Verifying License Usage on an MDS 9148 Switch Click here to view code image MDS-9148# show license usage Feature Ins Lic Status Expiry Date Comments Count ---------------------------------------------------------------------------FM_SERVER_PKG No Unused ENTERPRISE_PKG No Unused Grace expired . or 48 ports and upgrading the 16. or 48 enabled ports. The PORT_ACTIVATION_PKG does not appear as installed if you have only the default license installed. You can use the show license usage command (see Example 9-5) to view any licenses assigned to a switch. show license usage: Displays the usage information for installed licenses. If a license is installed but no ports have acquired a license. 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services. where url specifies the bootflash: slot0:. 36. You can enable the grace period feature by using the license grace-period command: Click here to view code image switch# configure terminal switch(config)# license grace-period Verifying the License Configuration To display the license configuration. the status displayed is Unused. show license brief: Displays summary information for all installed license files.If the license is time bound. and 8 ports of 10 Gigabit Ethernet for FCoE connectivity. show license host-id: Displays the host ID for the physical device. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24. you can update the license file by using the update license url command. On the Cisco MDS 9148.and 32-port models onsite all the way to 48 ports by adding these licenses. The Cisco MDS 9148S 16G Multilayer Fabric Switch compact one rack-unit (1RU) switch scales from 12 to 48 line-rate 16 Gbps Fibre Channel ports. the On-Demand Port Activation license allows the addition of eight 8-Gbp ports. 32. the status displayed is In use. you must obtain and install an updated license. usb1:. On-Demand Port Activation Licensing The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through On-Demand Port Activation license) of 16 Gbps Fibre Channel. use one of the following commands: show license: Displays information for all installed license files. show license file: Displays information for a specific license file. Customers have the option of purchasing preconfigured models of 16. If a license is in use. or usb2: location of the updated license file. After obtaining the license file. You need to contact technical support to request an updated license.

Table 9-3 Port Activation License Status Definitions Making a Port Eligible for a License By default. 2. [no] port-license . interface fc slot/port 3. 1. you must make that port eligible by using the port-license command.PORT_ACTIVATION_PKG No 16 In use never - Example 9-6 shows the default port license configuration for the Cisco MDS 9148 switch: Example 9-6 Verifying Port-License Usage on an MDS 9148 Switch Click here to view code image bdc-mds9148-2# show port-license Available port activation licenses are 0 ----------------------------------------------Interface Cookie Port Activation License ----------------------------------------------fc1/1 16777216 acquired fc1/2 16781312 acquired fc1/3 16785408 acquired fc1/4 16789504 acquired fc1/5 16793600 acquired fc1/6 16797696 acquired fc1/7 16801792 acquired fc1/8 16805888 acquired fc1/9 16809984 acquired fc1/10 16814080 acquired fc1/11 16818176 acquired fc1/12 16822272 acquired fc1/13 16826368 acquired fc1/14 16830464 acquired fc1/15 16834560 acquired fc1/16 16838656 acquired fc1/17 16842752 eligible <snip> fc1/46 16961536 eligible fc1/47 16965632 eligible fc1/48 16969728 eligible There are three different statuses for port licenses. However. if a port has already been made ineligible and you prefer to activate it. which are described in Table 9-3. Configure terminal. all ports are eligible to receive a license.

Configure terminal 2. (Optional) copy running-config startup-config Acquiring a License for a Port If you prefer not to accept the default on-demand port license assignments.4. . The images and variables are important factors in any install procedure. (Optional) show port-license 6. To select the kickstart image. no port-license 5. the switch returns the message “port activation license not available. Shutdown 8. interface fc slot/port 3. (Optional) show port-license 12. (Optional) show port-license 6. Exit 5. interface fc slot/port 7. To select the system image. Exit 5. Configure terminal. If you attempt to move a license to a port and no license is available. (Optional) copy running-config startup-config Cisco MDS 9000 NX-OS Software Upgrade and Downgrade Each switch is shipped with the Cisco MDS NX-OS operating system for Cisco MDS 9000 family switches. you will need to first acquire licenses for ports to which you want to move the license. [no] port-license acquire 4. Both images are not always required for each install. Exit 11. (Optional) copy running-config startup-config Moving Licenses Among Ports You can move a license from a port (or range of ports) at any time. use the KICKSTART variable. The Cisco MDS NX-OS software consists of two images: the kickstart image and the system image. 2. You must specify the variable and the respective image to upgrade or downgrade your switch. use the SYSTEM variable. 1. port-license acquire 9. No shutdown 10. interface fc slot/port 3.” 1. Exit 6. Shutdown 4.

issue the show incompatibility system bootflash:filename command. but existing devices will not experience any disruption of traffic via the data plane. To view the results of a dynamic compatibility check. the incompatibility is strict. During the upgrade. the software reports the incompatibility. Image version: Each image file has a version. you may decide to proceed with this installation. The image to be installed is considered incompatible with the running image if one of the following statements is true: An incompatible feature is enabled in the image to be installed and it is not available in the running image and may cause the switch to move into an inconsistent state. In the Release Notes. So new devices will be unable to log in to the fabric via the control plane. In this case. If the running image and the image you want to install are incompatible. On switches with dual supervisor modules. Supervisor modules: There are single or dual supervisor modules. or if a failure occurs. the software upgrade may be disruptive. both supervisor modules must have Ethernet connections on the management interfaces (mgmt 0) to maintain connectivity when switchovers occur during upgrades and downgrades. Compatibility is established based on the image and configuration: Image incompatibility: The running image and the image to be installed are not compatible. there are specific sections that explain compatibility between different software versions and hardware modules. An incompatible feature is enabled in the image to be installed and it is not available in the running image and does not cause the switch to move into an inconsistent state. Note What is nondisruptive NX-OS upgrade or downgrade? Nondisruptive upgrades on the Cisco MDS fabric switches take down the control plane for not more than 80 seconds. In some cases.The software image install procedure is dependent on the following factors: Software images: The kickstart and system image files reside in directories or folders that can be accessed from the Cisco MDS 9000 family switch prompt. Use this command to obtain further information when the install all . the control plane is down. In some cases. but the data plane remains up. both images may be High Availability (HA) compatible in some cases and incompatible in others. In some cases. because they are not supported in the image to be installed. when the upgrade has progressed past the point at which it cannot be stopped gracefully. the user may need to take a specific path to perform nondisruptive software upgrades or downgrades. Configuration incompatibility: There is a possible incompatibility if certain features in the running image are turned off. Flash disks on the switch: The bootflash: resides on the supervisor module and the Compact Flash disk is inserted into the slot0: device. Before starting any software upgrade or downgrade. If the active and the standby supervisor modules run different versions of the image. the user should review NX-OS Release Notes. In this case. the incompatibility is loose.

the boot parameters in NVRAM should point to the location and name of the system image. When the Cisco MDS 9000 Series multilayer switch is first switched on or during reboot. and proceeds to load the startup configuration that contains the switch configuration from NVRAM. some resources might become unavailable after a switchover. the boot process fails at the last stage. Finally. copy the correct kickstart and system images to the correct location and change the boot commands in your configuration. Figure 9-4 portrays the boot sequence. The kickstart loads the Linux kernel and device drivers and then needs to load the system image. the administrator must recover from the error and reload the switch. . Again. If this error happens. Do you wish to continue? (y/ n) [y]: n You can upgrade any switch in the Cisco MDS 9000 Family using one of the following methods: Automated. Quick. You can exit if you do not want to proceed with these changes. one-step upgrades using the install all command. one-step upgrade using the reload command. The boot parameters are held in NVRAM and point to the location and name of both the kickstart and system images. The install all command launches a script that greatly simplifies the boot procedure and checks for errors and the upgrade impact before proceeding. as a result. usually on bootflash. The loader obtains the location of the kickstart file. usually on bootflash. If the boot parameters are missing or have an incorrect name or location. This upgrade is disruptive. The kickstart then verifies and loads the system image. Before running the reload command. the system image loads the Cisco NX-OS Software. The install all command compares and presents the results of the compatibility before proceeding with the installation.command returns the following message: Click here to view code image Warning: The startup config contains commands not supported by the standby supervisor. checks the file systems. The BIOS then runs the loader bootstrap function. and verifies the kickstart image before loading it. the system BIOS on the supervisor module first runs power-on self-test (POST) diagnostics. This upgrade is nondisruptive for director switches.

you must have a working connection to both supervisors. and follow these steps: Step 1. In this section we discuss firmware upgrade steps for a director switch with dual supervisor modules. the boot sequence steps are as follows: 1. switch. 4.Figure 9-4 Boot Sequence For the MDS Director switches with dual supervisor modules. Verify the following physical connections for the new Cisco MDS 9500 family switch: The console port is physically connected to a computer terminal (or terminal server). 2. Step 2. Be aware that if you are upgrading through the management interface. 6.txt command. Bring up standby supervisor module with the new kickstart and system images. To upgrade the switch. 3. Install licenses (if necessary) to ensure that the required features are available on the . use the console connection. Perform non-disruptive image upgrade for each data module (one at a time). because this process causes a switchover and the current standby supervisor will be active after the upgrade. Upgrade the BIOS on the active and standby supervisor modules and the data modules. Steps 7 and 8 will not be executed. Upgrading to Cisco NX-OS on an MDS 9000 Series Switch During any firmware upgrade. 5. You can also create a backup of your existing configuration to a file by issuing the copy running-config bootflash:backup_config. The management 10/100 Ethernet port (mgmt0) is connected to an external hub. use the latest Cisco MDS NX-OS Software on the Cisco MDS 9000 director series switch. or router. Bring up the old active supervisor module with the new kickstart and system images. Step 3. For MDS switches with only one supervisor module. Switch over from the active supervisor module to the upgraded supervisor module. Issue the copy running-config startup-config command to store your current running configuration. Upgrade complete.

bin Step 6.switch.com/MDS/m9500-sf2ek9kickstart-mz.x.2. Verify that your switch is running compatible hardware by checking the Release Notes.2.2.2.bin bootflash:m9500-sf2ek9-kickstartmz.2. If you need more space on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.6.cisco. delete unnecessary files to make space available.6. Click here to view code image switch(standby)# del bootflash: m9500-sf2ek9-kickstart-mz. Verify that space is available on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.6.6. depending on which you are installing. Verify that the switch is running the required software version by issuing the show version command.2(x) image file. Ensure that the required space is available in the bootflash: directory for the image file(s) to be copied using the dir bootflash: command.bin switch# del m9500-sf2ek9-mz-npe.2.x. Step 10. Click here to view code image switch# del m9500-sf2ek9-kickstart-mz.6.5c.cisco. Click here to view code image switch# attach mod x (where x is the module number of the standby supervisor) switch(standby)s# dir bootflash: 12288 Aug 26 19:06:14 2011 lost+found/ 16206848 Jul 01 10:54:49 2011 m9500-sf2ek9-kickstart-mz.6.6.6. Step 9. Click here to view code image switch# copy tftp://tftpserver. Step 5.6. Download the files to an FTP or TFTP server.5.2.com/MDS/m9500-sf2ek9mz.bin bootflash:m9500-sf2ek9-mz. delete unnecessary files to make space available.27.5.5.bin switch# copy tftp://tftpserver.2. and select the required Cisco MDS NX-OS Release 6. Use the delete bootflash: filename command to remove unnecessary files.6.bin 16604160 Jul 01 10:20:07 2011 m9500-sf2ek9-kickstart-mz.bin switch(standby)# del bootflash: m9500-sf2ek9-mz.bin Usage for bootflash://sup-local 122811392 bytes used 61748224 bytes free 184559616 bytes total switch(standby)# exit ( to return to the active supervisor ) Step 7. If you need more space on the active supervisor module bootflash. .2.bin Step 8.2.bin Step 11.x.5. Step 12.x. Copy the Cisco MDS NX-OS kickstart and system images to the active supervisor module bootflash using FTP or TFTP. Step 4.6. Access the Software Download Center.

6. (Refer to Steps 6 and 7 in the upgrade section.bin switch# copy tftp://tftpserver.9.com/MDS/m9700-sf3ek9mz. In this section we discuss firmware upgrade steps for an MDS director switch with dual supervisor modules.Step 13. delete unnecessary files to make space available. You should first read the Release Notes and follow the steps for specific NX-OS downgrade instructions. For MDS switches with only one supervisor module. Here we outline the major steps for the downgrade: Step 1.6. download it from an FTP or TFTP server to the active supervisor module bootflash. Steps 7 and 8 will not be executed.bin Step 3. (Refer to Step 5 in the upgrade section.5. because this process causes a switchover and the current standby supervisor will be active after the downgrade. Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch During any firmware upgrade.2. The process checks configuration compatibility.bin system m9500-sf2ek9-mz. If you need more space on the active supervisor module bootflash: use the delete command to remove unnecessary files and ensure that the required space is available on the active supervisor.2. the upgrade might already be complete. The output displays the status only after the switch has rebooted with the new image.bin ssi m9000-ek9ssi-mz.9. When you can reconnect to the switch through a Telnet session.2. the session is disconnected when the switch reboots.2. Perform the upgrade by issuing the install all command. Be aware that if you are downgrading through the management interface.cisco. avoid using the reload command. use the console connection. .6. Step 2. When downgrading any switch in the Cisco MDS 9000 family.6. Ensure that the required space is available on the active supervisor. Click here to view code image switch# install all kickstart m9500-sf2ek9-kickstartmz.6.) If you need more space on the standby supervisor module bootflash.) Click here to view code image switch# copy tftp://tftpserver. the script will check whether you want to continue with the upgrade. in which case the output will display the status of the upgrade.cisco. All actions preceding the reboot are not captured in this output because when you enter the install all command using a Telnet session. Verify that the system image files for the downgrade are present on the active supervisor module bootflash with dir bootflash command. Use the install all command to downgrade the switch and handle configuration conversions.6.6.bin bootflash:m9700-sf3ek9-kickstart-mz.bin bootflash:m9700-sf3ek9-mz. Do you want to continue with the installation (y/n)? [n] y If the input is entered as y the install will continue.5.com/MDS/m9700-sf3ek9mz.2. You can display the status of a nondisruptive upgrade by using the show install all status command.9.2. If the software image file is not present.5.2. After information is provided about the impact of the upgrade before it takes place.bin The install all process verifies all the images before installation and detects incompatibilities. you must have a working connection to both supervisors.5.

each interface may be configured in auto or Fx port modes.bin. the management interface (mgmt0).1. Each physical Fibre Channel interface in a switch may operate in one of several port modes: E port.5.2. one per line.. End with CNTL/Z.switch# dir bootflash: Step 4. the characteristics of the interfaces through which the frames are received and sent must be defined. Click here to view code image switch# install all kickstart bootflash:m9500-sf2ek9-kickstartmz. Interfaces are created in VSAN 1 by default.2. and B port (see Figure 9-5). Capability : CAP_FEATURE_AUTO_CREATED_41_PORT_CHANNEL Description : auto create enabled ports or auto created port-channels are present Capability requirement : STRICT Disable command : 1. 2.Convert autocreated port channels to be persistent (port-channel 1 persistent) .5.Disable autocreate on interfaces (no channel-group auto). Besides these modes.bin The following configurations on active are incompatible with the system image 1) Service : port-channel . SD port.5. When a module is removed and replaced with the same type of module. To relay the frames. Click here to view code image switch# configure terminal Enter configuration commands.S74 Step 8. FL port. switch(config)# interface fcip 31 switch(config-if)# no channel-group auto switch(config-if)# end switch# port-channel 127 persistent switch# Step 6. F port. TE port. Verify the status of the modules on the switch using the show module command. Gigabit Ethernet interfaces. Disable any features that are incompatible with the downgrade system image. ST port. Step 5. the configuration is retained. Issue the show incompatibility system command (see Example 9-7) to determine whether you need to disable any features not supported by the earlier release. These two modes determine the port type during interface initialization.S74 system bootflash:m9500-sf2ek9-mz. TL port.bin.1.. The configured interfaces can be Fibre Channel interfaces. Example 9-7 show incompatibility system Output on MDS 9500 Switch Click here to view code image switch# show incompatibility system bootflash:m9500-sf2ek9-mz. Save the configuration using the copy running-config startup-config command.2. or VSAN interfaces. Click here to view code image switch# copy running-config startup-config Step 7.1. Configuring Interfaces The main function of a switch is to relay frames from one data link to another. If a different type of module is inserted. the original configuration is no . Issue the install all command to downgrade the software.

“Data Center Storage Architecture. The Cisco NX-OS software implicitly performs a graceful shutdown in response to either of the following actions for interfaces operating in the E port mode: If you shut down an interface. . Figure 9-5 Cisco MDS 9000 Family Switch Port Modes The interface state depends on the administrative configuration of the interface and the dynamic state of the physical link. In Table 9-4 the interface states are summarized.longer retained. Table 9-4 Interface States Graceful Shutdown Interfaces on a port are shut down by default (unless you modified the initial configuration). In Chapter 6.” we discussed the different Fibre Channel port types.

Frame Encapsulation The switchport encap eisl command applies only to SD (SPAN destination) port interfaces. 4 Gbps of bandwidth is reserved. By default. If the encapsulation is set to EISL. This enhancement reduces the chance of frame loss. the switches connected to the shutdown link coordinate with each other to ensure that all frames in the ports are safely sent through the link before shutting down. When a shutdown is triggered either by you or the Cisco NX-OS software. even if the port negotiates at an operating speed of 1 Gbps or 2 Gbps. An SD port monitors network traffic that passes through a Fibre Channel interface. an interface functions as a SPAN. which allows traffic to be switched directly with a local crossbar when the traffic is directed from one port to another on the same line card. an extra switching step is avoided. the switch disables the interface when the threshold is reached. Local Switching Local switching can be enabled in Generation 4 modules. and 8 Gbps on the 8-Gbps switching modules. note the following guidelines: . Autosensing speed is enabled on all 4-Gbps and 8-Gbps switching module interfaces by default. Monitoring is performed using a standard Fibre Channel analyzer (or a similar Switch Probe) that is attached to the SD Port. which decreases the latency. Bit Error Thresholds The bit error rate threshold is used by the switch to detect an increased error rate before performance degradation seriously affects traffic. The bit errors can occur for the following reasons: Faulty or bad cable Faulty or bad GBIC or SFP GBIC or SFP is specified to operate at 1 Gbps but is used at 2 Gbps GBIC or SFP is specified to operate at 2 Gbps but is used at 4 Gbps Short haul cable is used for long haul or long haul cable is used for short haul Momentary sync loss Loose cable connection at one or both ends Improper GBIC or SFP connection at one or both ends A bit error rate threshold is detected when 15 error bursts occur in a 5-minute period. When autosensing is enabled for an interface operating in dedicated rate mode. By using local switching. regardless of the SPAN sources. This configuration enables the interfaces to operate at speeds of 1 Gbps. When using local switching. or 4 Gbps on the 4-Gbps switching modules. This command determines the frame format for all frames transmitted by the interface in SD port mode. 2 Gbps. You can enter a shutdown and no shutdown command sequence to re-enable the interface. A graceful shutdown ensures that no frames are lost when the interface is shutting down. all outgoing frames are transmitted in the EISL frame format. Port Administrative Speeds By default. the port administrative speed for an interface is automatically calculated by the switch. In SD port mode.If a Cisco NX-OS software application executes a port shutdown as part of its function.

When planning port bandwidth requirements. storage ports that have higher fan-out ratios). allocation of the bandwidth within the port group is important. When configuring the ports. Ports in the port group can have bandwidth dedicated to them. such as ISL ports. To place a port in shared mode. you can share the bandwidth in a port group by using the switchport rate-mode shared command. For other ports. and ports on highbandwidth servers. Note Local switching is not supported on the Cisco MDS 9710 switch. The Cisco MDS 9000 Family allows for the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. Table 9-5 lists the bandwidth and port group configuration for Fibre Channel modules. E ports are not allowed in the module because they must be in dedicated mode. be sure not to exceed the available bandwidth in a port group. or ports can share a pool of bandwidth. which usually is the default state. For ports that require high-sustained bandwidth.All ports need to be in shared mode. you can have bandwidth dedicated to them in a port group by using the switchport rate-mode dedicated command. enter the switchport rate-mode shared command. Dedicated and Shared Rate Modes Ports on Cisco MDS 9000 Family line cards are grouped into port groups that have a fixed amount of bandwidth per port group. typically servers that access shared storage-array ports (that is. storage and tape array ports. Table 9-5 Bandwidth and Port Group Configurations for Fibre Channel Modules .

You can. In some cases. subnet mask.4 Gbps of bandwidth. you must configure either the IP version 4 (IPv4) parameters (IP address. Each port group has 32. which can be used by other unrelated flows that do not experience the slow drain condition. the end devices do not accept the frames at the configured or negotiated rate. Management Interfaces You can remotely configure the switch through the management interface (mgmt0). a Cisco MDS 9513 Multilayer Director with a Fabric 3 module installed. No-credit timeout drops all packets after the slow drain is detected using the configured thresholds. These recommendations are often in the range of 7:1 to 15:1. we discussed the fan-out ratio. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. The lesser frame timeout value helps to alleviate the slow drain condition that affects the fabric by dropping the packets on the edge ports sooner than the time they actually get timed out (500 mms). When there are slow devices attached to the fabric. This feature provides various enhancements to detect slow drain devices that are causing congestion in the network and also provides a congestion avoidance function. and the port group has only 32. These classes of service do not support end-to-end flow control. and default gateway). Note This feature is used mainly for edge ports that are connected to slow edge devices. This oversubscription rate is well below the oversubscription rate of the typical storage array port (fan-out ratio) and does not affect performance. To configure a connection on the mgmt0 interface. configure all 6 ports in shared rate mode. has eight port groups of 6 ports each.48:1 (6 ports × 8 Gbps = 48 Gbps/32. configure a lesser frame timeout for the ports.4 Gbps). To avoid or minimize the stuck condition. You can also mix dedicated and shared rate ports in a port group. so that the ports run at 8 Gbps and are oversubscribed at a rate of 1. or the IP version 6 (IPv6) parameters so that the switch is reachable. This feature is focused mainly on the edge ports that are connected to slow drain devices. You cannot configure all 6 ports of a port group at the 8-Gbps dedicated rates because that would require 48 Gbps of bandwidth. The goal is to avoid or minimize the frames being stuck in the edge ports due to slow drain devices that are causing ISL blockage. we recommend that you apply this feature only for edge F ports and retain the default configuration for ISLs as E and TE ports. however. even though destination devices do not experience slow drain. Slow Drain Device Detection and Congestion Avoidance All data traffic between end devices in a SAN fabric is carried by Fibre Channel Class 3. In Chapter 6. The slow devices lead to ISL credit shortage in the traffic destined for these devices and they congest the links. The management port (mgmt0) is autosensing and operates in full-duplex mode at a speed of . the traffic is carried by Class 2 services that use link-level. The credit shortage affects the unrelated flows in the fabric that use the same ISL link. This function frees the buffer space in ISL. and buffer-tobuffer flow control. Even though this feature can be applied to ISLs as well.4 Gbps of bandwidth available. per-hop-based. and using a 48-port 8-Gbps Advanced Fibre Channel module.For example.

the attached interface is automatically deleted. the interface cannot be created. Configure each interface only in one VSAN. the default speed is 100 Mbps and the default duplex mode is auto. You can create an IP interface on top of a VSAN and then use this interface to send frames to this VSAN. If a VSAN does not exist. Autosensing supports both the speed and the duplex mode. To use this feature. you must configure the IP address for this VSAN. Here are the guidelines when creating or deleting a VSAN interface: Create a VSAN before creating the interface for that VSAN. . the default speed is auto and the default duplex mode is auto. On a Supervisor-2 module. On a Supervisor-1 module. VSAN interfaces cannot be created for nonexisting VSANs. Create the interface VSAN. If you delete the VSAN. All the NX-OS CLI commands for interface configuration and verification are summarized in Table 9-6.10/100/1000 Mbps. it is not created automatically. VSAN Interfaces VSANs apply to Fibre Channel fabrics and enable you to configure multiple isolated SAN topologies within the same physical infrastructure.

.

.

.

In this section we discuss the configuration and verification steps to build a Fibre Channel SAN topology. Fibre Channel Services. zoning. and SAN topologies. In this chapter we build up the following sample SAN topology together to practice what we have learned so far (see Figure 9-6).Table 9-6 Summary of NX-OS CLI Commands for Interface Configuration and Verification Verify Initiator and Target Fabric Login In Chapters 6 and 7 we discussed a theoretical explanation of Virtual SAN (VSAN). . Fibre Channel concepts.

VSANs have the following attributes: VSAN ID: The VSAN ID identifies the VSAN as the default VSAN (VSAN 1). State: The administrative state of a VSAN can be configured to an active (default) or . Nexus 6000. and the isolated VSAN (VSAN 4094). Nexus 7000 Series switches. and UCS Fabric Interconnect. This state indicates that traffic can pass through this VSAN. With VSANs you can create multiple logical SANs over a common physical infrastructure. user-defined VSANs (VSAN 2 to 4093). VSANs provide isolation among devices that are physically connected to the same fabric. A VSAN is in the operational state if the VSAN is active and at least one port is up.Figure 9-6 Our Sample SAN Topology Diagram VSAN Configuration You can achieve higher security and greater stability in Fibre Channel fabrics by using virtual SANs (VSANs) on Cisco MDS 9000 family switches and Cisco Nexus 5000. Each VSAN can contain up to 239 switches and has an independent address space that allows identical Fibre Channel IDs (FC IDs) to be used simultaneously in different VSANs. This state cannot be configured.

Cisco NX-OS software supports the following four interop modes: Mode 1: Standards based interop mode that requires all other vendors in the fabric to be in interop mode Mode 2: Brocade native mode (Core PID 0) Mode 3: Brocade native mode (Core PID 1) Mode 4: McData native mode Port VSAN Membership Port VSAN membership on the switch is assigned on a port-by-port basis.) In Example 9-8. we will be configuring and verifying VSANs on our sample topology that was shown in Figure 9-6. it is disabled. All ports in a suspended VSAN are disabled. they may exist in various conditions or states. VSAN name: This text string identifies the VSAN for management purposes. By default. Fibre Channel standards guide vendors toward common external Fibre Channel interfaces.suspended state. the default) for load balancing path selection. This method is referred to as dynamic port VSAN membership (DPVM). By default. which specifically turns off advanced or proprietary features and provides the product with a more amiable standards-compliant implementation. Dynamically: By assigning VSANs based on the device WWN. you activate the services for that VSAN. The active state of a VSAN indicates that the VSAN is configured and enabled. you can preconfigure all the VSAN parameters for the whole fabric and activate the VSAN immediately. If a port is configured in this VSAN. the default name for VSAN 3 is VSAN0003. By suspending a VSAN. By enabling a VSAN. The name can be from 1 to 32 characters long and it must be unique across all VSANs. Interop mode: Interoperability enables the products of multiple vendors to interact with each other. Use this state to deactivate a VSAN without losing the VSAN’s configuration. You can assign VSAN membership to ports using one of two methods: Statically: By assigning VSANs to ports. The suspended state of a VSAN indicates that the VSAN is configured but not enabled. each port belongs to the default VSAN. (DPVM is explained in Chapter 6. Example 9-8 VSAN Creation and Verification on MDS-9222i Switch Click here to view code image . Load balancing attributes: These attributes indicate the use of the source-destination ID (srcdst-id) or the originator exchange OX ID (src-dst-ox-id. the VSAN name is a concatenation of VSAN and a four-digit string representing the VSAN ID. For example. Each vendor has a regular mode and an equivalent interoperability mode. After VSANs are created.

FL mode: Arbitrated loop mode—storage connections that allow multiple storage targets to log in to a single switch port. fc1/3 ! Change the state of the fc interfaces MDS-9222i(config-if)# no shut ! Verifying status of the interfaces MDS-9222i(config-if)# show int fc 1/1.fc1/3 MDS-9222i(config-vsan-db)# interface fc1/1. MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A interop loadbalancing ? src-dst-id Src-id/dst-id for loadbalancing src-dst-ox-id Ox-id/src-id/dst-id for loadbalancing(Default) ! Load balancing attributes of a VSAN ! To exit configuration mode issue command end MDS-9222i(config-vsan-db)# end ! Verifying the VSAN state. End with CNTL/Z. the Admin Mode is FX (the default). . which allows the port to come up in either F mode or FL mode: F mode: Point-to-point mode for server connections and most storage connections (storage ports in F mode have only one target logging in to that port. associated with the storage array port). We have done this just so that we can see multiple storage targets for zoning. fc1/3 brief ------------------------------------------------------------------------------Interface Vsan Admin Admin Status SFP Oper Oper Port Mode Trunk Mode Speed Channel Mode (Gbps) fc1/1 47 FX on up swl F 2 -fc1/3 47 FX on up swl FL 4 -- Note. The last steps for the VSAN configuration will be verifying the VSAN membership and usage (see Example 9-9). MDS-9222i(config)# vsan database ! Creating VSAN 47 MDS-9222i(config-vsan-db)# vsan 47 ? MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A ! Configuring VSAN name for VSAN 47 MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A ? <CR> interop Interoperability mode value loadbalancing Configure loadbalancing scheme suspend Suspend vsan ! Other attributes of a VSAN configuration can be displayed with a question mark after !VSAN name. the operational state is down since no FC port is up MDS-9222i# show vsan 47 vsan 47 information name:VSAN-A state:active interoperability mode:default loadbalancing:src-id/dst-id/oxid operational state:down MDS-9222i(config)# vsan database ! Interface fc1/1 and fc 1/3 are included in VSAN 47 MDS-9222i(config-vsan-db)# vsan 47 interface fc1/1. as highlighted.!Initial Configuration and entering the VSAN configuration database MDS-9222i# conf t Enter configuration commands. one per line.

each host or disk requires a Fibre Channel ID. In a multiswitch fabric configuration. the fabric login is successful. Examine the FLOGI database on a switch that is directly connected to the host HBA and connected ports. The name server stores name entries for all hosts in the FCNS database. Name servers allow a database entry to be modified by a device that originally registered the information. These attributes are deregistered when the Nx port logs out either explicitly or implicitly.Example 9-9 Verification Steps for VSAN Membership and Usage Click here to view code image ! Verifying the VSAN membership MDS-9222i# show vsan membership vsan 1 interfaces: fc1/1 fc1/2 fc1/3 fc1/5 fc1/6 fc1/7 fc1/9 fc1/10 fc1/11 fc1/13 fc1/14 fc1/15 fc1/17 fc1/18 vsan 47 interfaces: fc1/1 fc1/3 vsan 4079(evfp_isolated_vsan) interfaces: vsan 4094(isolated_vsan) interfaces: ! Verifying VSAN usage MDS-9222i# show vsan usage 1 vsan configured configured vsans: 1. FCID is 0x410100 . The name server permits an Nx port to register attributes during a PLOGI (to the name server) to obtain attributes of other hosts. In Example 9-10. One instance of the name server process runs on each switch. the name server instances running on each switch share information in a distributed database. trunk mode is on snmp link state traps are enabled Port mode is F. SFP is short wave laser w/o OFC (SN) Port WWN is 20:01:00:05:9b:77:0d:c0 Admin port mode is FX. If the required device is displayed in the FLOGI table. we continue verifying FCNS and FLOGI databases on our sample SAN topology.48-4093 fc1/4 fc1/8 fc1/12 fc1/16 In a Fibre Channel fabric. 47 vsans available for configuration: 2-46. The name server functionality maintains a database containing the attributes for all hosts and storage devices in each VSAN. Example 9-10 Verifying FCNS and FLOGI Databases on MDS-9222i Switch Click here to view code image ! Verifying fc interface status MDS-9222i(config-if)# show interface fc 1/1 fc1/1 is up Hardware is Fibre Channel. Use the show flogi command to verify if a storage device is displayed in the FLOGI table as in the next section.

3632 bytes 0 discards. ! Verifying name server database ! The assigned FCID 0x410100 and device WWN information are registered in the FCNS and attain fabric-wide visibility. MDS-9222i(config-if)# show fcns database VSAN 47: -------------------------------------------------------------------------FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE -------------------------------------------------------------------------0x4100e8 NL c5:9a:00:06:2b:04:fa:ce scsi-fcp:target 0x4100ef NL c5:9a:00:06:2b:01:01:04 scsi-fcp:target 0x410100 N 21:00:00:e0:8b:0a:f0:54 (Qlogic) scsi-fcp:init Total number of entries = 3 MDS-9222i(config-if)# show fcns database detail -----------------------VSAN:47 FCID:0x4100e8 -----------------------port-wwn (vendor) :c5:9a:00:06:2b:04:fa:ce node-wwn :c5:9a:00:06:2b:01:00:04 class :3 node-ip-addr :0. 0 errors 0 input OLS. which is attached to fc 1/1. fc1/3 47 0x4100e8 c5:9a:00:06:2b:04:fa:ce c5:9a:00:06:2b:01:00:04 fc1/3 47 0x4100ef c5:9a:00:06:2b:01:01:04 c5:9a:00:06:2b:01:00:04 Total number of flogi = 3. 1396 bytes 0 discards.0 ipa :ff ff ff ff ff ff ff ff fc4-types:fc4_features :scsi-fcp:target symbolic-port-name :SANBlaze V4. ! Verifying flogi database . the initiator is connected to fc 1/1 the domain id is x41 MDS-9222i(config-if)# show flogi database -------------------------------------------------------------------------------INTERFACE VSAN FCID PORT NAME NODE NAME -------------------------------------------------------------------------------fc1/1 47 0x410100 21:00:00:e0:8b:0a:f0:54 20:00:00:e0:8b:0a:f0:54 ! JBOD FC target contains 2 NL_Ports (disks) and is connected to domain x041 ! FC Initiator with the pWWN 21:00:00:e0:8b:0a:f0:54 is attached to fc interface 1/1.0. 0 unknown class 0 too long. The buffer-to-buffer credits are negotiated during the port login. 2 loop inits 1 output OLS.0.0 Port . 0 NOS. 0 LRR. 0 NOS. 4 bytes/sec. 0 errors 0 CRC. 0 frames/sec 5 minutes output rate 32 bits/sec. 0 too short 18 frames output. the VSAN domain manager 47 has assigned FCID 0x410100 to the device during FLOGI. 1 loop inits 16 receive B2B credit remaining 3 transmit B2B credit remaining 3 low priority transmit B2B credit remaining Interface last changed at Mon Sep 22 09:55:15 2014 ! The port mode is F mode and there is automatically FCID is assigned to the FC Initiator.Port vsan is 47 Speed is 2 Gbps Rate mode is shared Transmit B2B Credit is 3 Receive B2B Credit is 16 Receive data field Size is 2112 Beacon is turned off 5 minutes input rate 88 bits/sec. 0 frames/sec 18 frames input. 11 bytes/sec. 1 LRR.

0.0 ipa :ff ff ff ff ff ff ff ff fc4-types:fc4_features :scsi-fcp:target symbolic-port-name :SANBlaze V4.0.0 fabric-port-wwn :20:01:00:05:9b:77:0d:c0 hard-addr :0x000000 permanent-port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic) connected interface :fc1/1 switch name (IP address) :MDS-9222i (10.8.0.0 Port symbolic-node-name : port-type :NL port-ip-addr :0.0.25 DVR:v9.0 ipa :ff ff ff ff ff ff ff ff fc4-types:fc4_features :scsi-fcp:init symbolic-port-name : symbolic-node-name :QLA2342 FW:v3.8.0.0.03. .44) Total number of entries = 3 The DCNM Device Manager for MDS 9222i will be displaying the ports in two tabs: one is Summary and the other one is a GUI display of the physical ports in the Device tab (see Figure 9-7).symbolic-node-name : port-type :NL port-ip-addr :0.44) -----------------------VSAN:47 FCID:0x410100 ! Initiator connected to fc 1/1 ! The vendor of the host bus adapter (HBA) is Qlogic where we can see in the vendor pWWN -----------------------port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic) node-wwn :20:00:00:e0:8b:0a:f0:54 class :3 node-ip-addr :0.0.2.0.6 port-type :N port-ip-addr :0.0 fabric-port-wwn :20:03:00:05:9b:77:0d:c0 hard-addr :0x000000 permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00 connected interface :fc1/3 switch name (IP address) :MDS-9222i (10.0.0.2.1.8.8.44) -----------------------VSAN:47 FCID:0x4100ef -----------------------port-wwn (vendor) :c5:9a:00:06:2b:01:01:04 node-wwn :c5:9a:00:06:2b:01:00:04 class :3 node-ip-addr :0.0 fabric-port-wwn :20:03:00:05:9b:77:0d:c0 hard-addr :0x000000 permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00 connected interface :fc1/3 switch name (IP address) :MDS-9222i (10.2.

.Figure 9-7 Cisco DCNM Device Manager for MDS 9222i Switch In Table 9-7 we summarize NX-OS CLI commands that are related to VSAN configuration and verification.

security is crucial. With many types of servers and storage devices on the network. if a host were to gain access to a disk that was being used by another host. the data on this disk could become corrupted.Table 9-7 Summary of NX-OS CLI Commands for VSAN Configuration and Verification Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch Zoning is used to provide security and to restrict access between initiators and targets on the SAN fabric. For example. potentially with a different operating system. To avoid any compromise of .

You cannot make changes to an existing zone set and activate it if the full zone set is lost or is not propagated. we will continue building our sample topology with zoning configuration. You can copy an active zone set from the bootflash: directory. A zone can be a member of more than one zone set. volatile: directory. SCP. to one of the following areas: To the full zone set To a remote location (using FTP. thus reducing the risk of data loss. Only one zone set can be activated at any time. namely hosts. A zone set consists of one or more zones: A zone set can be activated or deactivated as a single entity across all switches in the fabric. port-number fcalias Add fcalias to zone fcid Add FCID member to zone fwwn Add Fabric Port WWN member to zone interface Add member based on interface ip-address Add IP address member to zone pwwn Add Port WWN member to zone symbolic-nodename Add Symbolic Node Name member to zone ! Inserting a disk (FC target) into zoneA MDS-9222i(config-zone)# member pwwn c5:9a:00:06:2b:01:01:04 ! Inserting pWWN of an initiator which is connected to fc 1/1 into zoneA MDS-9222i(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:54 ! Verifying zone zoneA if the correct members are configured or not MDS-9222i(config-zone)# show zone name zoneA zone name zoneA vsan 47 pwwn c5:9a:00:06:2b:01:01:04 pwwn 21:00:00:e0:8b:0a:f0:54 ! Creating zone set in VSAN 47 MDS-9222i(config-zone)# zoneset name zonesetA vsan 47 . Zone Set Distribution: You can distribute full zone sets using one of two methods: one-time distribution or full zone set distribution. Example 9-11 Configuring Zoning on MDS-9222i Switch Click here to view code image ! Creating zone in VSAN 47 MDS-9222i(config-if)# zone name zoneA vsan 47 MDS-9222i(config-zone)# member ? ! zone member can be configured with one of the below options device-alias Add device-alias member to zone domain-id Add member based on domain-id. zoning allows the user to overlay a security map. or TFTP) The active zone set is not part of the full zone set. Zone Set Duplication: You can make a copy of a zone set and then edit it without altering the original zone set. Members in different zones cannot access one another. SFTP. can see which targets. This map dictates which devices.crucial data within the SAN. Members in a zone can access one another. or slot0. A zone consists of multiple zone members. In Example 9-11.

but not initiator-initiator. DCNM is really best at viewing and operating on existing fabrics (that is. MDS-9222i(config)# show zoneset brief zoneset name zonesetA vsan 47 zone zoneA Setting up multiple switches independently and then connecting them into one fabric can be done in DCNM. In this chapter we are just doing the CLI versions of these commands. The traditional zoning method allows each device in a zone to communicate with every other device in the zone. The device type information of each device in a smart zone is automatically populated from the Fibre Channel Name Server (FCNS) database as host. However. For example.MDS-9222i(config-zoneset)# member zoneA MDS-9222i(config-zoneset)# exit ! Inserting a disk (FC target) into zoneA MDS-9222i(config)# zoneset activate name zonesetA vsan 47 Zoneset activation initiated. and then select ZonesetA (see Figure 9-8). The administrator is required to manage the individual zones according to the zone configuration guidelines. check zone status ! Verifying the active zone set status in VSAN 47 MDS-9222i(config)# show zoneset active zoneset name zonesetA vsan 47 zone name zoneA vsan 47 * fcid 0x4100ef [pwwn c5:9a:00:06:2b:01:01:04] * fcid 0x410100 [pwwn 21:00:00:e0:8b:0a:f0:54] The asterisks (*) indicate that a device is currently logged into the fabric (online). possibly a mistyped pWWN. and the combinations that are not used are ignored. This information allows more . just about everything except making initial connections of multiple switches into one fabric). By analyzing device-type information in the FCNS. initiator-target pairs are configured. useful combinations can be implemented at the hardware level by the Cisco MDS NX-OS software. Select VSAN 47. Figure 9-8 Cisco DCNM Zone Members for VSAN 47 About Smart Zoning Smart zoning implements hard zoning of large zones with fewer hardware resources than was previously required. A missing asterisk may indicate an offline device or an incorrectly configured zone. Smart zoning eliminates the need to create a single initiator to single target zones. target. or both. members can be displayed via Logical Domains (on the left pane). On the DCNM SAN active zone set.

INQUIRY) is supported. Second. all zone and zone set modifications for enhanced zoning include activation. the LUN number for each host bus adapter (HBA) from the storage subsystem should be configured with the LUN-based zone procedure. Last. LUN masking and mapping restricts server access to specific LUNs. follow these steps: Step 1. About LUN Zoning and Assigning LUNs to Storage Subsystems Logical unit number (LUN) zoning is a feature specific to switches in the Cisco MDS 9000 family. Enhanced zoning can be turned on per VSAN as long as each switch within that VSAN is enhanced zoning capable. and if you want to perform additional LUN zoning in a Cisco MDS 9000 family switch. such as a disk controller that needs to communicate with another disk controller. control traffic to LUN 0 (for example. with a few added commands. At the time enhanced zoning is enabled. If LUN masking is enabled on a storage subsystem. switch(config)# zone mode enhanced vsan 222 Zoning mode is changed to enhanced for VSAN 200. A storage device can have multiple LUNs behind it. The zone status can be verified with the show zone status command. differs from that of basic zoning. WRITE) is denied. With LUN zoning. By default. if available. For one thing. To enable enhanced zoning in a VSAN. but data traffic to LUN 0 (for example. If the device port is part of a zone. you can restrict access to specific LUNs associated with a device. is implicitly obtained for enhanced zoning.efficient utilization of switch hardware by identifying initiator-target pairs and configuring those only in hardware. The zone mode will display enhanced for enhanced mode and the default zone distribution policy is full for . In the event of a special situation. You need to check the zone status to verify the status. The default zoning is basic mode. Smart zoning can be enabled at the VSAN level but can also be disabled at zone level. however. as per standards requirements. the command will be propagated to the other switches within the VSAN automatically. Step 3. The default zoning mode can be selected during the initial setup. Click here to view code image switch(config)# no zone mode enhanced vsan 222 The preceding command disables the enhanced zoning mode for a specific VSAN. READ. smart zoning defaults can be overridden by the administrator to allow complete control. REPORT_LUNS. changes are either committed or aborted with enhanced zoning. a member of the zone can access any LUN in the device. switch# config t (enter the configuration mode) Step 2. About Enhanced Zoning The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Enhanced zoning uses the same techniques and tools as basic zoning. then. Enhanced zoning needs to be enabled on only one switch within the VSAN in the existing SAN topology. Both standards support the basic zoning functionalities explained in Chapter 6 and the enhanced zoning functionalities described in this section. a VSANwide lock. The flow of enhanced zoning. the enhanced zoning feature is disabled in all switches in the Cisco MDS 9000 family. When LUN 0 is not included within a zone.

In Table 9-8. Click here to view code image switch# show zone status vsan 222 VSAN: 222 default-zone: deny distribute: full Interop: default mode: enhanced merge-control: allow session: none hard-zoning: enabled broadcast: enabled smart-zoning: disabled rscn-format: fabric-address Default zone: qos: none broadcast: disabled ronly: disabled Full Zoning Database : DB size: 44 bytes Zonesets:0 Zones:0 Aliases: 0 Attribute-groups: 1 Active Zoning Database : Database Not Available The rules for enabling enhanced zoning are as follows: Enhanced zoning needs to be enabled on only one switch in the VSAN of an existing converged SAN fabric. Enabling enhanced zoning does not perform a zone set activation.enhanced mode. In basic zone mode the default zone distribution policy is active. we discuss the advantages of using enhanced zoning. thereby overwriting the destination switches’ full zone set database. Enabling it on multiple switches within the same VSAN can result in failure to activate properly. The switch that is chosen to initiate the migration to enhanced zoning will distribute its full zone database to the other switches in the VSAN. .

Table 9-8 Advantages of Enhanced Zoning .

fc1/3. (See Example 9-12.) Example 9-12 Verifying Initiator and Target Logins Click here to view code image MDS-9222i(config-if)# show interface fc1/1.fc1/4. Figure 9-9 SAN topology with CLI Configuration On MDS 9222i we configured two VSANs: VSAN 47 and VSAN 48. so far we managed to build only separate SAN islands (see Figure 9-9). Both VSANs are active and their operational states are up.fc1/12 brief ------------------------------------------------------------------------------Interface Vsan Admin Admin Status SFP Oper Oper Port Mode Trunk Mode Speed Channel Mode (Gbps) ------------------------------------------------------------------------------fc1/1 47 FX on up swl F 2 -fc1/3 47 FX on up swl FL 4 -fc1/4 48 FX on up swl FL 4 -MDS-9222i(config-if)# show fcns database VSAN 47: -------------------------------------------------------------------------FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE -------------------------------------------------------------------------0x4100e8 NL c5:9a:00:06:2b:04:fa:ce scsi-fcp:target 0x4100ef NL c5:9a:00:06:2b:01:01:04 scsi-fcp:target 0x410100 N 21:00:00:e0:8b:0a:f0:54 (Qlogic) scsi-fcp:init Total number of entries = 3 VSAN 48: -------------------------------------------------------------------------- .Let’s return to our sample SAN topology.

and zone-based FC QoS. are LUN zoning. zone-based traffic prioritizing.FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE -------------------------------------------------------------------------0x4100e8 NL c5:9b:00:06:2b:14:fa:ce scsi-fcp:target 0x4100ef NL c5:9b:00:06:2b:02:01:04 scsi-fcp:target Total number of entries = 2 MDS-9148(config-if)# show flogi database -------------------------------------------------------------------------------INTERFACE VSAN FCID PORT NAME NODE NAME -------------------------------------------------------------------------------fc1/1 48 0x420000 21:01:00:e0:8b:2a:f0:54 20:01:00:e0:8b:2a:f0:54 Total number of flogi = 1. The zoning features. . read-only zones. which require the Enterprise package license. MDS-9148(config-if)# show fcns database VSAN 48: -------------------------------------------------------------------------FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE -------------------------------------------------------------------------0x420000 N 21:01:00:e0:8b:2a:f0:54 (Qlogic) scsi-fcp:init Total number of entries = 1 Table 9-9 summarizes NX-OS CLI commands for zone configuration and verification.

.

.

.

.

.

Trunking is supported on E ports and F ports. Figure 9-10 represents the possible trunking scenarios in a SAN with MDS core switches. over the same physical link. third-party core switches. Trunking enables interconnect ports to transmit and receive frames in more than one VSAN.Table 9-9 Summary of NX-OS CLI Commands for Zone Configuration and Verification VSAN Trunking and Setting Up ISL Port VSAN trunking is a feature specific to switches in the Cisco MDS 9000 Family. . NPV switches. and HBAs.

Figure 9-10 Trunking F Ports

The trunking feature includes the following key concepts:

TE port: If trunk mode is enabled in an E port and that port becomes operational as a trunking E port, it is referred to as a TE port.

TF port: If trunk mode is enabled in an F port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking F port, it is referred to as a TF port.

TN port: If trunk mode is enabled (not currently supported) in an N port (see the link 1b in Figure 9-10) and that port becomes operational as a trunking N port, it is referred to as a TN port.

TNP port: If trunk mode is enabled in an NP port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking NP port, it is referred to as a TNP port.

TF PortChannel: If trunk mode is enabled in an F port channel (see the link 4 in Figure 9-10) and that port channel becomes operational as a trunking F port channel, it is referred to as a TF port channel. Cisco Port Trunking Protocol (PTP) is used to carry tagged frames.

TF-TN port link: A single link can be established to connect an F port to an HBA to carry tagged frames (see the links 1a and 1b in Figure 9-10) using Exchange Virtual Fabrics Protocol (EVFP). A server can reach multiple VSANs through a TF port without inter-VSAN routing (IVR).

TF-TNP port link: A single link can be established to connect a TF port to a TNP port using the PTP protocol to carry tagged frames (see the link 2 in Figure 9-10). PTP is used because PTP also supports trunking port channels.

A Fibre Channel VSAN is called a Virtual Fabric and uses a VF_ID in place of the VSAN ID. By default, the VF_ID is 1 for all ports. When an N port supports trunking, a pWWN is defined for each VSAN and called a logical pWWN. In the case of MDS core switches, the pWWNs for which the N port requests additional FC_IDs are called virtual pWWNs.
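As an illustrative sketch that is not taken from the chapter's sample topology (the interface and VSAN numbers are hypothetical), trunk mode and the trunk-allowed VSAN list discussed in the following sections are configured at the interface level:

switch# configure terminal
switch(config)# interface fc1/5
switch(config-if)# switchport trunk mode auto
switch(config-if)# switchport trunk allowed vsan 47-48
switch(config-if)# end
switch# show interface fc1/5 trunk vsan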

. ST. we summarize supported trunking protocols. The trunk mode configuration at the two ends of an ISL. trunk mode is disabled. Fx. The TE port continues to function in trunk mode. Also. The ISL is always in a trunking disabled state. if the third-party core switches ACC’s physical FLOGI with the EVFP bit is configured. by default. In Table 9-11. off (disabled). In the case of F ports. On NPV switches. and SD) on non-NPV switches. then EVFP protocol enables trunking on the link. we summarize the trunk mode status between switches. If the trunking protocol is disabled on a switch. no port on that switch can apply new trunk configurations. When connected to a third-party switch. the trunking protocol is enabled on E ports and disabled on F ports. If so. disable the trunking protocol. other switches that are directly connected to this switch are similarly affected on the connected interfaces.In Table 9-10. but only supports traffic in VSANs that it negotiated with previously (when the trunking protocol was enabled). In some cases. The preferred configuration on the Cisco MDS 9000 family switches is one side of the trunk set to auto and the other side set to on. FL. Trunk Modes By default. trunk mode is enabled on all Fibre Channel interfaces (Mode: E. determines the trunking state of the link and the port modes at both ends. the trunk mode configuration on E ports has no effect. You can configure trunk mode as on (enabled). Table 9-10 Supported Trunking Protocols By default. F. between two switches. or auto (automatic). you may need to merge traffic from different port VSANs across a nontrunking ISL. Existing trunk configurations are not affected.

Table 9-11 Trunk Mode Status Between Switches

Each Fibre Channel interface has an associated trunk-allowed VSAN list. In TE-port mode, frames are transmitted and received in one or more VSANs specified in this list. By default, the VSAN range (1 through 4093) is included in the trunk-allowed list. The trunk-allowed VSANs configured for TE, TF, and TNP links are used by the trunking protocol to determine the allowed active VSANs in which frames can be received or transmitted.

There are a couple of important guidelines and limitations for the trunking feature:

F ports support trunking in Fx mode.

MDS does not enforce the uniqueness of logical pWWNs across VSANs.

If a trunking-enabled E port is connected to a third-party switch, the trunking protocol ensures seamless operation as an E port.

Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E ports) or trunking E ports (TE ports) can be bundled into a PortChannel. ISL ports can reside on any switching module, and they do not need a designated master port. If a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration.

Port Channel Modes

You can configure each port channel with a channel group mode parameter to determine the port channel protocol behavior for all member ports in this channel group. The possible values for a channel group mode are as follows:

ON (default): The member ports only operate as part of a port channel or remain inactive. In this mode, the port channel protocol is not initiated. However, if a port channel protocol frame is received from a peer port, the software indicates its nonnegotiable status. If the peer port, while configured in a channel group, does not support the PortChannel protocol, or responds with a nonnegotiable status, it will default to the ON mode behavior.

ACTIVE: The member ports initiate port channel protocol negotiation with the peer port(s) regardless of the channel group mode of the peer port. The ACTIVE port channel mode allows automatic recovery without explicitly enabling and disabling the port channel member ports at either end.
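As a brief illustrative sketch (the port channel number is hypothetical), the channel group mode is set under the port channel interface; removing the ACTIVE setting returns the member ports to the default ON mode behavior:

switch(config)# interface port-channel 10
switch(config-if)# channel mode active
switch(config-if)# no channel mode active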

On our sample SAN topology, we will connect the two SAN islands by creating a port channel using ports 5 and 6 on both MDS switches. Figure 9-11 portrays the minimum required configuration steps to set up an ISL port channel with mode active between two MDS switches.

Figure 9-11 Setting Up ISL Port on Sample SAN Topology

On the 9222i, ports need to be put into dedicated rate mode to be ISLs (or be set to mode auto). Ports in auto mode will automatically come up as ISLs (E ports) after they are connected to another switch. Ports are also in trunk on mode, so when the ISL comes up, it will automatically carry all VSANs in common between the switches. On the MDS 9222i there are three port groups available, and each port group consists of six Fibre Channel ports. Groups of six ports have a total of 12.8 Gbps of bandwidth. When we set these two ports to dedicated rate mode, they will each reserve a dedicated 4 Gbps of bandwidth (leaving 4.8 Gbps shared between the other group members). The show port-resources module command shows the default and configured interface rate mode, buffer-to-buffer credits, and bandwidth. In Example 9-13, we show the verification and configuration steps on the NX-OS CLI for port channel configuration on our sample SAN topology.

Example 9-13 Verifying Port Channel on Our Sample SAN Topology

Click here to view code image

! Before changing the ports to dedicated mode, the interface Admin Mode is FX
MDS-9222i# show interface fc1/5, fc1/6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin   Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     FX     on      down    swl  --    --      --
fc1/6      1     FX     on      down    swl  --    --      --

! On 9222i ports need to be put into dedicated rate mode in order to be
! configured as ISL (or be set to mode auto)
MDS-9222i(config)# show port-resources module 1
Module 1
  Available dedicated buffers are 3987
 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 12.8 Gbps
  Allocated dedicated bandwidth is 0.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              16            4.0          shared
  fc1/6                              16            4.0          shared
  <snip>

! Changing the rate-mode to dedicated
MDS-9222i(config)# inte fc1/5-6
MDS-9222i(config-if)# switchport rate-mode dedicated
MDS-9222i(config-if)# switchport mode auto
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin   Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on      down    swl  --    --      --
fc1/6      1     auto   on      down    swl  --    --      --

! Interfaces fc1/5-6 are configured for dedicated rate mode
MDS-9222i(config-if)# show port-resources module 1
Module 1
  Available dedicated buffers are 3543
 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 4.8 Gbps
  Allocated dedicated bandwidth is 8.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              250           4.0          dedicated
  fc1/6                              250           4.0          dedicated
  <snip>

! Configuring the port-channel interface
MDS-9222i(config-if)# channel-group 56
fc1/5 fc1/6 added to port-channel 56 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
MDS-9222i(config-if)# no shut
MDS-9222i(config-if)# int port-channel 56
MDS-9222i(config-if)# channel mode active
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin   Status     SFP  Oper  Oper    Port
                 Mode   Trunk                   Mode  Speed   Channel
                        Mode                          (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on      trunking   swl  TE    4       56
fc1/6      1     auto   on      trunking   swl  TE    4       56

! Verifying the port-channel interface
MDS-9222i(config-if)# show int port-channel 56
port-channel 56 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:38:00:05:9b:77:0d:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 1
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (1,47-48)
    Trunk vsans (up)                       (1,48)
    Trunk vsans (isolated)                 (47)
    Trunk vsans (initializing)             ()
    5 minutes input rate 1024 bits/sec, 128 bytes/sec, 1 frames/sec
    5 minutes output rate 856 bits/sec, 107 bytes/sec, 1 frames/sec
      363 frames input, 36000 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      361 frames output, 29776 bytes
        0 discards, 0 errors
      2 input OLS, 4 LRR, 2 NOS, 22 loop inits
      2 output OLS, 0 LRR, 0 NOS, 2 loop inits
    Member[1] : fc1/5
    Member[2] : fc1/6

! VSAN 1 and VSAN 48 are up, but VSAN 47 is in isolated status because VSAN 47
! is not configured on the MDS 9148 switch; MDS 9148 is only carrying SAN-B
! traffic. During ISL bring-up there will be a principal switch election; for
! VSAN 1, the switch with the lowest priority won the election, which is
! MDS 9222i in our sample SAN topology.
MDS-9222i# show fcdomain vsan 1
The local switch is the Principal Switch.
Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:00:05:9b:77:0d:c1
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 2
        Current domain ID: 0x51(81)
Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)
Principal switch run time information:
        Running priority: 2
 Interface               Role          RCF-reject
 ----------------        ----------    ------------
 port-channel 56         Downstream    Disabled
 ----------------        ----------    ------------

MDS-9148(config-if)# show fcdomain vsan 1
The local switch is a Subordinated Switch.
Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:54:7f:ee:05:07:89
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 128
        Current domain ID: 0x52(82)
Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)
Principal switch run time information:
        Running priority: 2
 Interface               Role          RCF-reject
 ----------------        ----------    ------------
 port-channel 56         Upstream      Disabled
 ----------------        ----------    ------------

MDS-9222i# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

MDS-9148(config-if)# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

! The asterisk (*) indicates which interface came online first.

To bring up the rest of our sample SAN topology, we need to provide access to our initiator, which is connected to fc 1/1 of MDS 9148 via VSAN 48, to a LUN that resides on a storage array connected to fc 1/14 of MDS 9222i (VSAN 48). Zoning through the switches allows traffic on the switches to connect only chosen sets of servers and storage end devices on a particular VSAN. (Zoning on separate VSANs is completely independent.) Usually members of a zone are identified using the end device's World Wide Port Name (WWPN), although they could be identified using switch physical port numbers. Figure 9-12 portrays the necessary zoning configuration.
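To complement the figure, the following informal sketch shows what the SAN B zoning could look like (the zone and zone set names are hypothetical; the pWWNs come from the flogi and fcns database output shown earlier):

MDS-9148(config)# zone name Z_INITIATOR_TARGET vsan 48
MDS-9148(config-zone)# member pwwn 21:01:00:e0:8b:2a:f0:54
MDS-9148(config-zone)# member pwwn c5:9b:00:06:2b:14:fa:ce
MDS-9148(config-zone)# exit
MDS-9148(config)# zoneset name ZS_VSAN48 vsan 48
MDS-9148(config-zoneset)# member Z_INITIATOR_TARGET
MDS-9148(config-zoneset)# exit
MDS-9148(config)# zoneset activate name ZS_VSAN48 vsan 48
Zoneset activation initiated. check zone status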

Figure 9-12 Zoning Configuration on Our Sample SAN Topology

Our sample SAN topology is up with the preceding zone configuration. The FC initiator, via its dual host bus adapters (HBA), can access the same LUNs from SAN A (VSAN 47) and SAN B (VSAN 48). In Table 9-12 we summarize NX-OS CLI commands for trunking and port channels.


Table 9-12 Summary of NX-OS CLI Commands for Trunking and Port Channels

Reference List

Cisco MDS 9700 Series Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9700/mds_9700_hig/overview

Cisco MDS 9250i Multiservice Fabric Switch Hardware installation guide is available at http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/hw/9250i/installation/guide/connect.html#pgfId-419386

Cisco MDS 9148S Multilayer Switch Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148S/install/guide/9148S_h

Cisco MDS Install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-series-multilayerswitches/products-installation-guides-list.html

Cisco MDS NX-OS install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-ossoftware/products-installation-guides-list.html

Cisco MDS 9000 NX-OS Software upgrade and downgrade guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/upgrade/guides/nxos/upgrade.html

Cisco MDS 9000 NX-OS and SAN OS software configuration guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-and-configuration-guides-list.html

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 9-13 lists a reference for these key topics and the page numbers on which each is found.

Table 9-13 Key Topics for Chapter 9

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

virtual storage-area networks (VSAN)
zoning
fport-channel-trunk
enhanced small form-factor pluggable (SFP+)
small form-factor pluggable (SFP)
10 gigabit small form factor pluggable (XFP)
host bus adapter (HBA)
Exchange Peer Parameters (EPP)
Exchange Virtual Fabrics Protocol (EVFP)
Extensible Markup Language (XML)
common information model (CIM)
Power On Auto Provisioning (POAP)
switch worldwide name (sWWN)
World Wide Port Name (pWWN)
slow drain
basic input/output system (BIOS)

random-access memory (RAM)
Port Trunking Protocol (PTP)
fan-out ratio
fan-in ratio

Part II Review

Keep track of your part review progress with the checklist shown in Table PII-1. Details on each task follow the table.

Table PII-1 Part II Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items, which are relevant to the particular chapter. This exercise lists a set of key self-assessment questions, which will help you remember the concepts and technologies of these key topics as you progress with the book. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book. If you're not certain, you should go back and refresh on the key topics.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, number 6 is about Data Center Storage Architecture, number 7 is about Cisco MDS Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter. For example, Block Virtualization would apply to Chapter 8.

Part III: Data Center Unified Fabric

Exam topics covered in Part III:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric
Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization
Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath

Chapter 10: Unified Fabric Overview
Chapter 11: Data Center Bridging and FCoE
Chapter 12: Multihop Unified Fabric

Chapter 10. Unified Fabric Overview

This chapter covers the following exam topics:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric

Unified Fabric is a holistic network architecture comprising switching, storage, security, and services infrastructure components, which are designed to operate in physical, virtual, and cloud environments. It uniquely integrates with servers, storage, and orchestration platforms for more efficient operations and greater scalability. Cisco Unified Fabric delivers flexibility and consistent architecture across a diverse set of environments. By unifying and consolidating infrastructure components, Unified Fabric supports new concepts, such as IEEE Data Center Bridging enhancements, which improve the robustness of Ethernet and its responsiveness. In this chapter we discuss how Unified Fabric promises to be the architecture that fulfills the requirements for the next generation of Ethernet networks in the data center.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 10-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 10-1 “Do I Know This Already?” Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. What is Unified Fabric?

a. It consolidates multiple types of traffic.
b. It leverages the Ethernet network.
c. It supports virtual, physical, and cloud environments.
d. It leverages IEEE data center bridging.

2. Which of the following is one challenge of today's data center networks?
a. Too low power consumption is not good for the electric companies.
b. Multilayer design is not optimal for east-west network traffic flows.
c. Administrators have a hard time doing unified cabling.
d. Today's networks as they are can accommodate any application demand for network and storage.

3. What is one possible way to solve the problem of different application demands in today's data centers?
a. Deploy separate application-specific dedicated environments.
b. Rewrite applications to conform to the network on which they are running.
c. Allow the network to discover application requirements.

4. What are the two types of traffic that are most commonly consolidated with Unified Fabric?
a. Voice traffic
b. SAN storage traffic
c. Infiniband traffic
d. Network traffic

5. How does vPC differ from traditional spanning tree topologies?
a. vPC blocks redundant paths toward the STP root bridge.
b. vPC topology causes loss of 50% of bandwidth because of blocked ports.
c. vPC leverages port-channels and has no blocked ports.
d. vPC leverages IS-IS protocol to construct network topology.
e. Spanning tree is automatically disabled when you enable vPC.

6. What Cisco virtual product is used to secure VM-to-VM communication within the same tenant space?
a. Virtual Supervisor Module
b. Virtual Adaptive Security Appliance
c. Virtual Security Gateway
d. Virtual Application Security

7. What is vPath?
a. It is a virtual path selected between two virtual machines in the data center.
b. It is a feature of Cisco Nexus 7000 series switches to steer the traffic toward physical service nodes.
c. It is a feature that allows virtual switches to get their basic configuration during boot.

d. It is a feature of Cisco Nexus 1000v virtual switches to steer the traffic toward virtual service nodes.

8. What happens when OTV receives a unicast frame and the destination MAC address is not in the MAC address table?
a. The frame is flooded through the overlay to all remote sites.
b. The frame is dropped.
c. The frame is dropped and the ICMP message is sent back to the source.
d. The frame is turned into multicast and sent only to some of the remote sites.

9. True or false? In LISP, ITR stands for Ingress Transformation Router.
a. True
b. False

10. Which parts of the infrastructure can DCNM products manage?
a. LAN
b. SAN
c. Wireless
d. vPC

Foundation Topics

Challenges of Today's Data Center Networks

Ethernet is by far the predominant choice for a single, converged fabric, which can simultaneously support multiple traffic types. Over the years it has withstood the test of time against other technologies trying to displace it as the popular option for data center network environments. It is well understood by network engineers and developers worldwide. Ethernet enjoys a broad base of engineering and operational expertise resulting from its ubiquitous presence in the data center environments for many years.

IT departments of today's organizations are under increasing pressure to deliver technological solutions, which enable strategic business growth and provide a foundation for the continuing trends of the private/public cloud deployments. At the same time, IT departments are also under ever-increasing pressure of delivering more for less, where decreasing budgets have to accommodate for innovative technology and process adoption. Operational silos and isolated infrastructure solutions are not optimized for virtualized and cloud environments. This prevents the data centers of today from becoming the engine of enablement for the business growth.

Modern application environments no longer rely on simple client/server transactions; a single client request can cause several network transactions between the servers in the data center. Traffic patterns have shifted from being predominantly north-south to being predominantly east-west. Such traffic patterns also facilitated the transition from traditional multilayer "stove pipe" designs of access, aggregation, and core layers into a flatter data center switching fabric architecture leveraging Spine-Leaf Clos topology. Figure 10-1 depicts traditional north-south and the new east-west data center switching architectures.

Figure 10-1 Multilayer and Spine-Leaf Clos Architecture

Increased volumes of data led to significant network and storage traffic growth across the infrastructure. This fundamental shift in traffic volumes and traffic patterns has resulted in greater reliance on the network. Now application performance is measured along with performance of the network, which means that network available bandwidth and latency numbers are both important. Client-to-server and server-to-server transactions usually involve short and bursty in nature transmissions, whereas most of the server-to-storage traffic consists of long-lasting and steady flows. All these require the network infrastructure to be flexible and intelligent to discover and accommodate the changes in network traffic dynamics.

Ultimately, growing application demands require additional capabilities from the underlying networking infrastructure. One method to solve this is to deploy multiple, separate, application-specific dedicated environments. It is not uncommon for the data centers of today to deploy an Ethernet network for IP traffic and a Fibre Channel storage-area network (SAN) for block-based storage connectivity. The demands for high performance and cluster computing, as well as services such as Remote Direct Memory Access (RDMA), are at times addressed by leveraging the low-latency InfiniBand network, even though this is not a popular technology in the majority of the modern data centers. Networks deployed in high-frequency trading environments have to accommodate ultra-low latency packet forwarding where traditional rules of typical Enterprise application deployments do not quite suffice.

Although bandwidth and latency matter, differences also exist in the capability of applications to handle packet drops. Different types of traffic carry different characteristics, and different protocols respond differently to packet drops: some accept packet drops as part of a normal network behavior, whereas others absolutely must have guaranteed no-drop packet delivery. Packet drops can have severely adverse effects on overall application performance. The best example is the FCoE protocol, which is a predominant use case behind the Unified Fabric deployment. FCoE protocol requires a lossless underlying fabric where packet loss is not tolerated. In the converged infrastructure of the Unified Fabric, both of the behaviors must be accommodated.

Cisco Unified Fabric creates a platform of a true multiprotocol environment on a single unified network infrastructure. It provides foundational connectivity service, while greatly increasing network business value. It enables optimized resource utilization, greater application performance, faster application rollout, and lower operating costs.

Cisco Unified Fabric Principles

Cisco Unified Fabric is a multifaceted approach where architectures, features, and capabilities are combined with concepts of convergence, scalability, intelligence, high availability, security, and so on, unifying storage, data networking, security, and network services onto consistent networking infrastructure across physical, virtual, and cloud environments. The following sections examine in more detail the key properties of the Unified Fabric network.

Convergence of Network and Storage

Convergence properties of the Cisco Unified Fabric architecture are the melding of the storage-area network (SAN) with a local-area network (LAN) delivered on top of enhanced Ethernet fabric. Figure 10-2 depicts a traditional concept of distinct networks purposely built for specific requirements. Some organizations even build separate Ethernet networks for IP-based storage. This highly segmented deployment results in high capital expenditures (CAPEX), high operational expenses (OPEX), and high demands for management.

Figure 10-2 Typical Distinct Data Center Networks

When these types of separate networks are evaluated against the capabilities of the Ethernet technology, we can see that Ethernet has a promise of meeting the majority of the requirements for all the network types. Such Ethernet fabric would be operationally simpler to provision and operate, resulting in fewer management points with shorter bring-up time. All these combined create an opportunity for consolidation, which breaks organizational silos while reducing technological complexity. This is where Cisco Unified Fabric serves as a key building block for ubiquitous infrastructure to deploy applications on top.

Although end-to-end converged Unified Fabric provides the utmost advantages to the organizations, it also allows the flexibility of integrating into existing non-converged infrastructure, and by that providing investment protection for the current network equipment and technologies. Both the Cisco MDS 9000 family of storage switches and the Cisco Nexus 5000/7000 family of Unified Fabric network switches have features that facilitate network convergence. At the time of writing this book, Cisco Nexus 5000 and Cisco MDS 9000 family switches can provide full, bidirectional bridging between FCoE and traditional Fibre Channel fabric, allowing FCoE servers to connect to older Fibre Channel-only storage arrays. Similarly, servers attached through traditional Fibre Channel HBAs can access newer storage arrays connected through FCoE ports. Figure 10-3 depicts bridging between FCoE and traditional Fibre Channel fabric, where FCoE servers connect to Fibre Channel storage arrays leveraging the principle behind bidirectional bridging on Cisco Nexus 5000 series switches.

Figure 10-3 FCoE and Fibre Channel Bidirectional Bridging

The Cisco Nexus 5000 series switches can leverage either native Fibre Channel ports or the unified ports to bridge between FCoE and the traditional Fibre Channel environment. For instance, unified ports can be software configured to operate in 1G/10G Ethernet, 10G FCoE, or 1/2/4/8G Fibre Channel mode. Figure 10-4 depicts unified port operation.

Figure 10-4 Unified Port Operation

Note
Unified port functionality is not supported on Cisco Nexus 5010 and 5020 switches. At the time of writing this book, Cisco Nexus 7000 series switches do not support unified port functionality.

With such great flexibility, you can deploy Cisco Nexus Unified Fabric switches initially connected to traditional systems with HBAs and convert to FCoE or other Ethernet- and IP-based storage protocols as the time progresses. Example 10-1 shows a configuration example for defining unified ports as port type Fibre Channel on Cisco Nexus 5500 switches.

Example 10-1 Configuring Unified Port on the Cisco Nexus 5000 Series Switch

Click here to view code image

nexus5500# configure terminal
nexus5500(config)# slot 1
nexus5500(config-slot)# port 32 type fc
nexus5500(config-slot)# copy running-config startup-config
nexus5500(config-slot)# reload

Consolidation can also translate to very significant savings. For example, a typical server requires at least five connections: two redundant LAN connections, two redundant SAN connections, and an out-of-band management connection. If you add on top separate connections for backup, clustering, and so on, for each server in the data center, you can quickly see how it translates into a significant investment.

With consolidation in place, all these connections can be replaced by converged network adapters, providing at least a 2:1 saving in cabling. From the perspective of a large-size data center, this cabling reduction translates to fewer ports, adapters, and switches. Fewer cables have implications on cooling by improving the airflow in the data center, and fewer switches and adapters also mean less power consumption. Figure 10-5 depicts an example of how network adapter consolidation can greatly reduce the number of server adapters used in the Unified Fabric topology.

Figure 10-5 Server Adapter Consolidation Using Converged Network Adapters

If network adapter virtualization in the form of Cisco Adapter-FEX or Cisco VM-FEX technologies is leveraged, it can yield even further significant savings, with fewer switches and fewer network layers, which contribute to lower over-subscription and eventually high application performance. Along with reduction in cabling, consolidation also carries operational improvements. All switches composing the Unified Fabric environment run the same Cisco NX-OS switch operating system. Such consistency allows IT staff to leverage operational practices of the isolated environments and apply them to the converged environment of the Unified Fabric.

At the same time, a converged data center network provides all the functionality of existing disparate network and storage environments. For Fibre Channel, this includes provisioning of all the Fibre Channel services, such as zoning and name servers, inter-VSAN routing (IVR), support for virtual SANs (VSAN), and so on. For Ethernet, this includes support for multicast and broadcast traffic, VLANs, link aggregation, equal cost multipathing, and the like.

With FCoE, Fibre Channel frames are encapsulated in outer Ethernet headers for hop-by-hop delivery between initiators and targets where lossless behavior is expected. To enforce Fibre Channel characteristics at each hop along the path between initiators and targets, intermediate Unified Fabric Ethernet switches operate in either FCoE-NPV or FCoE switching mode. This sometimes is referred to as FCoE Dense Mode. Figure 10-6 depicts the logical concept behind having all FCoE intermediate switches operating in either FCoE-NPV or FCoE switching mode.

FCoE switching mode implies running the FCoE Fibre Channel Forwarder, the FCF.

Figure 10-6 FCoE Dense Mode

At the time of writing this book, Cisco does not support FCoE topologies where intermediate Ethernet switches do not have FCoE awareness, sometimes referred to as FCoE Sparse Mode. The only partial exception is Dynamic FCoE, where FCoE traffic is forwarded over Cisco FabricPath fabric core links. With the Dynamic FCoE feature, Cisco FabricPath Spine Nodes do not act as FCoE Fibre Channel Forwarders; however, they still perform certain FCoE functions, for example, making sure frame hashing does not result in out-of-order packets.

Running FCoE requires additional capabilities added to the Ethernet Unified Fabric network. IEEE defines a set of standards augmenting standard Ethernet protocol behavior. This set of standards is collectively called Data Center Bridging (DCB). These additional standards make sure that Unified Ethernet fabric provides lossless, no-drop, in-order delivery of packets end-to-end, and they are required for successful FCoE operation. DCB increases overall reliability of the data center networks, which is increasingly necessary not only for the storage traffic, but also other types of multiprotocol communication across the Unified Fabric. Cisco Nexus 5000/7000 series Unified Fabric switches, as well as the MDS 9500/9700 series storage switches with FCoE line cards, support these standards for enhanced Ethernet.

Convergence with Cisco Unified Fabric also extends to management. Cisco Data Center Network Manager (DCNM), Cisco's primary management tool, is optimized to work with both the Cisco MDS 9000 series storage switches and Cisco Nexus 5000/7000 series Unified Fabric network switches. Cisco DCNM allows managing network and storage environments as a single entity, as well as monitoring and automating common network administration tasks. Cisco DCNM provides a secure environment, which can manage a fully converged network. For smaller size deployments, the Cisco MDS 9250i Multiservice Fabric Switch offers support for the Unified Fabric needs of running Fibre Channel, FCoE, iSCSI, FICON, and Fibre Channel over IP.

Ultimately, SAN and LAN convergence does not have to happen overnight, and Cisco Unified Fabric architecture is flexible to allow gradual, and many times non-disruptive, migration from disparate isolated environments to unified network infrastructure.

In environments where storage connectivity has less stringent requirements, Small Computer System Interface over IP (iSCSI) can be considered as an alternative to FCoE. iSCSI leverages TCP protocol to deliver lossless, no-drop, in-order delivery. Contrary to FCoE, which relies on IEEE DCB standards for efficient operation, the underlying network infrastructure cannot differentiate between iSCSI and any other TCP/IP application, and it simply delivers IP packets between source and destination. iSCSI does not support native Fibre Channel services and tools, which are built around traditional Fibre Channel fabrics. iSCSI can also introduce increased CPU cycles resulting from TCP/IP overhead on the server. TCP/IP offload can be leveraged on the network adapter to help with higher CPU utilization, but that increases the cost of network adapters.

For environments where storage arrays lack iSCSI functionality, organizations can leverage an iSCSI gateway device, such as a Cisco MDS 9250i storage switch, to terminate the iSCSI TCP/IP session, extract the payload, and reencapsulate it in the FCoE protocol for transmission over the Unified Fabric. Figure 10-7 depicts iSCSI deployment options with and without the gateway service.

Figure 10-7 iSCSI Deployment Options

Traditionally, server network interface cards (NIC) and host bus adapters (HBA) are used to provide connectivity into isolated LAN and SAN environments. With convergence of the two, NICs and HBAs make way for a new type of adapter called the converged network adapter (CNA). Migration from NICs and HBAs to the CNA can occur either during the regular server hardware refresh cycles, or it can be carried out by proactively swapping them during planned maintenance windows. Even though DCB is not mandatory for iSCSI operation, some NICs and CNAs connected to Cisco Nexus 5000 series switches can be configured to accept configuration values sent by the switches leveraging the Data Center Bridging Exchange (DCBX) protocol.

In that case, DCBX negotiates configuration and settings between the switch and the network adapter by leveraging type-length-value (TLV) and sub-TLV fields. This allows distributing configuration values, such as Class of Service (CoS) markings, to the connected network adapters in a scalable manner. Potential use of other DCB standards coded in TLV format adds even further flexibility to the iSCSI deployment by allowing the underlying Unified Fabric infrastructure to separate iSCSI storage traffic from the rest of the IP traffic. Example 10-2 shows the relevant command-line interface commands to enable a no-drop class for iSCSI traffic on Nexus 5000 series switches for guaranteed traffic delivery.

Example 10-2 Configuring No-Drop Behavior for iSCSI Traffic

Click here to view code image

nexus5000# configure terminal
nexus5000(config)# class-map type qos match-all c1
nexus5000(config-cmap-qos)# match protocol iscsi
nexus5000(config-cmap-qos)# match cos 6
nexus5000(config-cmap-qos)# exit
nexus5000(config)# policy-map type qos c1
nexus5000(config-pmap-qos)# class c1
nexus5000(config-pmap-c-qos)# set qos-group 2
nexus5000(config-pmap-c-qos)# exit
nexus5000(config)# class-map type network-qos c1
nexus5000(config-cmap-nq)# match qos-group 2
nexus5000(config-cmap-nq)# exit
nexus5000(config)# policy-map type network-qos p1
nexus5000(config-pmap-nq)# class type network-qos c1
nexus5000(config-pmap-c-nq)# pause no-drop

Although iSCSI continues to be a popular option in many storage applications, particularly with small and medium-sized businesses (SMB), it is not quite replacing the wide adoption of the Fibre Channel protocol for critical storage environments.

Scalability and Growth

Scalability is defined as the capability to grow the environment based on changing needs. The capability to grow the network in a manner that causes the least amount of disruption to the ongoing operation is a crucial aspect of scalability. Unified Fabric is often characterized as having multidimensional scalability, where individual device performance, as well as the overall fabric and system scalability, is achieved without compromising manageability and cost. Unified Fabric leveraging connectivity speeds of 10G, 40G, and 100G offers substantially higher useable bandwidth for the server and storage connectivity, which contributes to fewer ports, fewer switches, and subsequently fewer network tiers and fewer management points, allowing overall larger scale. Scalability parameters can also stretch beyond the boundary of a single data center, allowing a true geographic span.

Each network deployment has unique characteristics, which cannot be addressed by a rigid architecture.

Unified Fabric caters to the needs of ever-expanding scalability by providing flexible architecture. Cisco's entire data center switching portfolio adheres to the principles of supporting incremental growth and demands for scalability, while also following the investment protection philosophy.

For example, Cisco Nexus 7000 series switches offer expandable pay-as-you-grow scalable switch fabric architecture and non-oversubscribed behavior for high-speed Ethernet fabrics. Through an evolution of the Fabric Modules and the I/O Modules, Cisco Nexus 7000 series switches offer highly scalable M-series I/O modules for large capacities and feature richness, and F-series I/O modules for lower latency, higher port capacity, and several differentiating features. Figure 10-8 depicts the Cisco Nexus 7000 series switch fabric architecture.

Figure 10-8 Cisco Nexus 7000 Series Switch Fabric Architecture

Note
Cisco Nexus 7004 switch has no fabric modules, and the line cards are directly interconnected through the switch backplane.

Cisco Nexus 5000 series switches offer a highly scalable access layer platform with 10G and 40G Ethernet capabilities and FCoE support for Unified Fabric convergence. Its architecture does not include fabric modules, but instead uses a unified port controller and Unified Fabric crossbar. Figure 10-9 depicts Cisco Nexus 5000 series switch fabric architecture.

Figure 10-9 Cisco Nexus 5000 Series Switch Fabric Architecture

The most recent addition to the Cisco Nexus platforms is the Cisco Nexus 9000 family of switches. These switches provide fixed and modular switch architecture with high port density and the capability to run Application Centric Infrastructure. Cisco Nexus 9000 series switches are based on a combination of Cisco's own ASICs and Broadcom ASICs. Cisco Nexus 9500 series modular switches have no backplane. Figure 10-10 depicts Cisco Nexus 9500 series modular switch architecture.

Figure 10-10 Cisco Nexus 9500 Series Modular Switch Architecture

On the storage switch side, Cisco MDS switches offer a highly available platform for storage-area networks with expandable and resilient architecture supporting increased Fibre Channel speeds and services.

Cisco NX-OS operating system, which powers data center network and storage switching platforms, runs modular software architecture based on an underlying Linux kernel. Figure 10-11 depicts NX-OS software architecture components.

Figure 10-11 NX-OS Software Architecture Components

Cisco NX-OS leverages an innovative approach in software development by offering stateful process restart, persistent storage space (PSS) for process runtime information, component patching, in-service software upgrade (ISSU), and so on, coupled with Role-Based Access Control (RBAC) and a rich set of remote APIs. Cisco NX-OS software enjoys frequent updates to keep up with the latest feature demands.

Modern data centers observe different trends in packet forwarding, where traditional client/server applications driving primarily north-south traffic patterns are replaced by a new breed of applications with predominantly east-west traffic patterns. This shift has significant implications on the data center switching fabric designs. In the past, traditional growth implied adding capacity in the form of switch ports or uplink bandwidth; this is no longer the case. In addition to the changing network traffic patterns, many applications reside on virtual machines with inherent characteristics of workload mobility and any workload anywhere. This generates additional requirements on the data center switching fabric side around extensibility of Layer 2 connectivity, fabric scale, fabric capacities, and service insertion methodologies.

Legacy spanning tree–driven topologies lack the fast convergence required for the stringent demands of modern applications. Spanning tree Layer 2 loop-free topologies do not allow backup links and, as such, cause a "waste" of 50% of provisioned uplink bandwidth on access switches by blocking redundant uplinks. Figure 10-12 depicts spanning tree Layer 2 operational logic.

Figure 10-12 Spanning Tree Layer 2 Operational Logic

The evolution of spanning tree topologies led to the use of multichassis link-aggregation solutions, where EtherChannels or port channels enable all-forwarding uplink topologies. Cisco implements virtual port channel (vPC) technology to help alleviate the deficiencies of traditional spanning tree–driven topologies, allowing faster convergence and full utilization of the access switch provisioned uplink bandwidth. vPC still runs Spanning Tree Protocol underneath as a safeguard mechanism to break Layer 2 bridging loops should they occur. Figure 10-13 depicts vPC operational logic.
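To make the contrast concrete, here is a minimal vPC sketch on NX-OS (the domain number, keepalive address, and port channel numbers are hypothetical, and this is not a complete design). The same vPC number is configured on both peer switches, so the downstream device sees one logical port channel with no blocked ports:

nexus(config)# feature vpc
nexus(config)# vpc domain 10
nexus(config-vpc-domain)# peer-keepalive destination 192.168.10.2
nexus(config-vpc-domain)# exit
nexus(config)# interface port-channel 1
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc peer-link
nexus(config-if)# exit
nexus(config)# interface port-channel 20
nexus(config-if)# vpc 20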

Figure 10-13 vPC Operational Logic

Even with vPC topologies, the scalability of solutions requiring large Layer 2 domains in support of virtual machine mobility can be limited. Can we do better? Yes, by using IS-IS routing protocol instead of Spanning Tree Protocol. Cisco FabricPath takes a step further by completely eliminating the dependency on the Spanning Tree Protocol in constructing a loop-free Layer 2 topology. Cisco FabricPath combines the simplicity of Layer 2 with the proven scalability of Layer 3. It leverages Spine-Leaf Clos architecture to construct a data center switching fabric with predictable end-to-end latency and vast east-west bisectional bandwidth. It employs the principle of conversational learning to further scale switch hardware forwarding resources. It also allows equal cost multipathing for both unicast and multicast traffic to maximize network resource utilization. Figure 10-14 depicts Cisco FabricPath topology and forwarding.
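A minimal Cisco FabricPath sketch on NX-OS illustrates how little configuration the IS-IS-based fabric requires (the switch ID, VLAN, and interface numbers are hypothetical):

nexus(config)# install feature-set fabricpath
nexus(config)# feature-set fabricpath
nexus(config)# fabricpath switch-id 11
nexus(config)# vlan 100
nexus(config-vlan)# mode fabricpath
nexus(config-vlan)# exit
nexus(config)# interface ethernet 1/1
nexus(config-if)# switchport mode fabricpath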

Figure 10-14 Cisco FabricPath Topology and Forwarding

By taking data center switching from PODs to fabric, Cisco FabricPath breaks the boundaries and enables large-scale virtual machine mobility domains. Use of Cisco Nexus 2000 series Fabric Extenders in conjunction with Cisco FabricPath architecture enables us to build even larger-scale switching fabrics without increasing the administrative domain.

Principles of Spine-Leaf topologies can also be applied with other encapsulation technologies, such as virtual extensible LANs, or VXLAN. Similar to Cisco FabricPath, VXLAN leverages the underlying Layer 3 topology to extend Layer 2 services on top (MAC-in-IP), providing similar advantages to Cisco FabricPath while enjoying broader switching platform support. Figure 10-15 depicts VXLAN topology and forwarding.

Figure 10-15 VXLAN Topology and Forwarding

VXLAN also assists in breaking through the boundary of ~4,000 VLANs supported by the IEEE 802.1Q standard widely deployed between switches. This is an important consideration with multitenant Unified Fabric, which requires a large number of virtual segments. Figure 10-16 depicts the VXLAN packet format, where you can see the 24-bit VXLAN ID field, sometimes also known as the Virtual Network Identifier (VNID) field. The 24-bit field potentially enables you to create more than 16 million virtual segments.

Figure 10-16 VXLAN Packet Fields
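The VNID math works out to 2^24 = 16,777,216 possible segments, against 2^12 = 4,096 VLAN IDs in 802.1Q. As a hedged sketch of how a VLAN maps to a VXLAN segment on an NX-OS switch (the VLAN, VNI, loopback, and multicast group values are hypothetical, and platform support for these commands varies):

nexus(config)# feature nv overlay
nexus(config)# feature vn-segment-vlan-based
nexus(config)# vlan 100
nexus(config-vlan)# vn-segment 30000
nexus(config-vlan)# exit
nexus(config)# interface nve1
nexus(config-if-nve)# source-interface loopback0
nexus(config-if-nve)# member vni 30000 mc-group 239.1.1.1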

VXLAN fabric allows building large-span virtual machine mobility domains with significant scale and bandwidth.

Security and Intelligence

Network intelligence coupled with security is an essential component of Cisco Unified Fabric. For the Cisco Nexus and Cisco MDS families of switches, the intelligence comes from a common switch operating system, Cisco NX-OS. It provides consistency, a common feature set, and intelligent services delivered to applications residing on physical or virtual network infrastructure alike.

The use of consistent network policies is essential for successful Unified Fabric operation. In the past, deployment of applications required a considerable effort of sequentially configuring various physical infrastructure components. Policies can be templatized and applied in a consistent, repeatable manner. Cisco Unified Fabric policies around security, differentiated service, high availability, performance, and the like can be consistently set across the environment, allowing you to tune those for special application needs if necessary. Consistent use of policies also enables faster application deployment cycles, and policies greatly reduce the risk of human error. Policies are applied across the infrastructure and, most importantly, in the virtualized environment, where workloads tend to be rapidly created, removed, and moved around, so that creation, removal, or migration of individual workloads has no impact on the overall environment operation.

As multiple types of traffic are consolidated on top of the Unified Fabric infrastructure, it raises legitimate security concerns. Cisco Unified Fabric contains a complete portfolio of security products that are virtualization aware. These products run in the hypervisor layer to secure virtual workloads inside a single tenant or across tenants.

Cisco Unified Fabric also contains a foundation for traffic switching in the hypervisor layer in the form of the Cisco Nexus 1000v product. Cisco Nexus 1000v is a cross-hypervisor virtual software switch that allows delivering consistent network, security, and Layer 4 through Layer 7 policies for the virtual workloads. It operates in a manner similar to a modular chassis, where switch supervisor modules, called Virtual Supervisor Modules (VSM), control the switch line cards, called Virtual Ethernet Modules (VEM). VSMs should be deployed in pairs to provide the needed redundancy. Figure 10-17 depicts Cisco Nexus 1000v components.

Figure 10-17 Cisco Nexus 1000v Components

Cisco Nexus 1000v runs NX-OS software on the VSMs and communicates with Virtual Machine Managers, such as VMware vCenter, for easy single pane of glass operation.

Cisco Virtual Security Gateway (VSG) is deployed on top of the Cisco Nexus 1000v virtual switch, catering to the case of intra-tenant security. It provides logical isolation and policy enforcement based on both simple networking constructs and more elaborate virtual machine attributes. Virtual services deployed as part of the Cisco Unified Fabric solution can secure intra-tenant communication between the virtual machines belonging to the same tenant, or they can secure inter-tenant communication between the virtual machines belonging to different tenants. Figure 10-18 depicts an example of Cisco VSG deployment to secure a three-tier application within a single tenant space.

Figure 10-18 Three-Tier Application Secured by VSG

The VSG product offers a very scalable mode of operation leveraging Cisco vPath technology for traffic steering. With Cisco vPath, the first packet of each flow is redirected to VSG for inspection and policy enforcement. Following inspection, a policy decision is made and programmed into the Cisco Nexus 1000v switch residing in hypervisor kernel space. All the subsequent packets of the flow are switched in a fast-path in the hypervisor kernel, which results in high traffic throughput. This model contributes to consistent policies across the environment. Figure 10-19 depicts vPath operation with VSG.

Figure 10-19 vPath Operation with VSG

VSG security policies can be deployed in tandem with switching policies enforced by the Cisco Nexus 1000v. Example 10-3 shows how VSG is inserted into the Cisco Nexus 1000v port-profile for specific tenant use.

Example 10-3 VSG Port-Profile Configuration

Click here to view code image

nexus1000v# configure terminal
nexus1000v(config)# port-profile host-profile
nexus1000v(config-port-prof)# org root/Tenant-A
nexus1000v(config-port-prof)# vn-service ip 10.10.10.1 vlan 10 profile vnsp-1

Inter-tenant space can be secured by either a virtual firewall in the form of the virtual Cisco ASA product deployed in the hypervisor layer or by physical appliances deployed in the upstream physical network. It is also common to use two layers of security, with virtual firewalls securing east-west traffic between the virtual machines and physical firewalls securing the north-south traffic in and out of the data center. Physical ASA appliances are many times positioned at the Unified Fabric aggregation layer. In the Spine-Leaf Clos architecture, physical ASAs are connected to the services leaf nodes. Figure 10-20 depicts typical Cisco ASA topology in traditional multilayer and Spine-Leaf architectures.

Figure 10-20 Typical Cisco ASA Deployment Topologies

Both physical and virtual ASA together provide a comprehensive set of security services for Cisco Unified Fabric, and they can also provide security services for the physical non-virtualized workloads to create consistent policy across virtual and physical infrastructures.

Another component of intelligent service delivery comes in the form of application acceleration. This is where Cisco Wide Area Application Services (WAAS) offer a solution for application acceleration across the WAN. Cisco WAAS offers the following main advantages:

Transparent and standards-based secure application delivery
LAN-like application performance with application-specific protocol acceleration

IT agility through reduction of branch and data center footprint
Enterprisewide deployment scalability and design flexibility
Enhanced end-user experience

Geographically dispersed organizations leverage significant WAN capacities and incur significant operational expenses for maintaining circuits and service levels. Delivering applications over WAN is often accompanied by jitter, packet loss, and bandwidth starvation. If left unaddressed, these factors can adversely impact application performance, influence user productivity, and grow to be business inhibitors. Cisco WAAS reduces consumption of WAN bandwidth and leverages generic network traffic optimization, as well as application-specific acceleration techniques, to counter the adverse impact of delivering applications across WAN.

First, Cisco WAAS employs TCP flow optimization (TFO) to better manage TCP window size and maximize network bandwidth use in a way superior to common TCP/IP stack behavior on the host. Second, Cisco WAAS leverages context-aware data redundancy elimination (DRE) to reduce traffic load on WAN circuits. Some of the traffic transmitted across the WAN is repetitive and can be locally cached. With DRE, both ends of the WAN-optimized connection can signal each other with very small signatures, which cause the content to be served out of the locally maintained cache, rather than transmitting the entire data. Finally, Cisco WAAS leverages adaptive persistent session-based Lempel-Ziv (LZ) compression to shrink the size of network data traveling across, subsequently optimizing application performance across long-haul circuits.

Cisco WAAS technology has two main deployment scenarios. First, it can be deployed either at the edge of the environment in the case of a branch office or at the aggregation layer in case of a data center; here, Cisco WAAS commonly leverages WCCP protocol traffic redirection to steer the traffic off path toward the WAAS appliance cluster. Second, it can be deployed in the hypervisor leveraging Cisco vPath redirection mechanisms. The hypervisor deployment model brings the advantages of application acceleration close to the virtual workloads. In both cases, Cisco WAAS can be delivered in the form of a physical appliance, a virtual appliance, a router module, or a Cisco IOS software feature. Figure 10-21 depicts use of WAN acceleration between the branch office and the data center, leveraging deployments of WAAS components in various parts of the topology.
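As a hedged illustration of the WCCP redirection mentioned above, on a Cisco IOS router the two WAAS service groups are enabled globally, and redirection is applied per interface (the interface names are hypothetical):

router(config)# ip wccp 61
router(config)# ip wccp 62
router(config)# interface GigabitEthernet0/0
router(config-if)# ip wccp 61 redirect in
router(config-if)# exit
router(config)# interface GigabitEthernet0/1
router(config-if)# ip wccp 62 redirect in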

Figure 10-21 depicts use of WAN acceleration between the branch office and the data center.

Figure 10-21 WAN Acceleration Between Data Center and Branch Office

Cisco WAAS technology deployment requires positioning of the relevant application acceleration engine on either side of the WAN connection. The selection of an engine is based on performance characteristics, types of workloads, and deployment model. Refer to Chapter 20, “Cisco WAAS and Application Acceleration,” for a more detailed discussion about Cisco WAAS technology.

Management of the network is at the core of its intelligence. Cisco Prime Data Center Network Manager (DCNM) manages the Cisco Nexus and Cisco MDS families of switches through a single pane of glass. It provides visibility and control of the unified data center to uphold stringent service-level agreements. DCNM provides a robust set of features by streamlining the provisioning aspects of the Unified Fabric components. DCNM can set and automatically provision a policy on converged LAN and SAN environments. It dynamically monitors and takes actions when needed on both physical and virtual infrastructures. These factors contribute to the overall data center infrastructure reliability and uptime, thereby improving business continuity. Figure 10-22 depicts the Cisco Prime DCNM dashboard.

Figure 10-22 Cisco Prime DCNM Dashboard

Cisco DCNM leverages multiple dashboards for ease of use and representation of the overall network topology, and software features provided by Cisco NX-OS can be deployed with Cisco DCNM. Cisco DCNM provides automated discovery of the network components, keeping track of all physical and logical network device information. This discovery data can be imported into change management systems for easier operations. DCNM can also handle provisioning and monitoring aspects of FCoE deployment, including an end-to-end path containing a mix of traditional Fibre Channel and FCoE. Cisco DCNM has extensive reporting features that allow building custom reports specific to the operating environment. Organizations can also leverage reports available through the preconfigured templates.

Inter-Data Center Unified Fabric

Geographically dispersed data centers provide added application resiliency and enhanced application reliability. At the same time, each data center must remain an autonomous unit of operation. The network foundation to enable multi-data-center deployment must extend the network properties across the data centers participating in the distributed setup. Such properties are the extension of Layer 2, Layer 3, and storage connectivity between the data centers powered by the end-to-end Unified Fabric approach. Figure 10-23 depicts data center interconnect technology solutions.

Figure 10-23 Data Center Interconnect Methods

Multiple technologies, which lay the foundation for inter-data center Unified Fabric extension, are in use today. Layer 1 technologies in the form of DWDM or CWDM allow running any data, storage, or converged protocol on top. These could be long haul point-to-point links, as well as private or public MPLS networks. Layer 3 networks allow extending storage connectivity between disparate data center environments by leveraging the Fibre Channel over IP protocol (FCIP), while Layer 2 bridged connectivity is achieved by leveraging Overlay Transport Virtualization (OTV).

Fibre Channel over IP

Disparate storage-area networks are not uncommon. These could be artifacts of building isolated environments to address local needs, deploying storage environments with different administrative boundaries, or outcomes of acquisitions. As organizations leverage ubiquitous storage services, a need for interconnecting isolated storage networks while maintaining characteristics of performance, availability, and fault tolerance becomes increasingly important. Disparate storage environments are most often located in topologies segregated from each other by Layer 3 routed networks. Fibre Channel over IP extends the reach of the Fibre Channel protocol across geographic distances by encapsulating Fibre Channel frames in outer IP headers for transport across interconnecting Layer 3 routed networks. Figure 10-24 depicts the principle of transporting Fibre Channel frames across IP transport.

Figure 10-24 FCIP Operation Principle

Leveraging TCP/IP protocol provides FCIP the guaranteed traffic delivery required for healthy storage network operation. Should packet drop occur, any packets carrying Fibre Channel payload are retransmitted following standard TCP/IP protocol operation. This lossless storage traffic behavior allows FCIP to extend native Fibre Channel connectivity across greater distances. This, however, cannot come at the expense of storage network performance. Cisco FCIP-capable platforms allow several features to maximize utilization of the intermediate Layer 3 links, such as leveraging TCP extensions for high performance (IETF RFC 1323) and scaling TCP window size up to 1 GB. As TCP window size is scaled up, the sustained network throughput increases, but so do the chances of packet drops. Intelligent control over TCP window size is needed to make sure long haul links are fully utilized while preventing excessive traffic loss and subsequent retransmission.

TCP/IP protocol traditionally achieves its reliability for traffic delivery by leveraging the acknowledgement mechanism for the received data segments. Acknowledgments are used to signal from the receiver back to the sender the receipt of contiguous in-sequence TCP segments. In case of packet loss, segments received after the packet loss occurs are not acknowledged and must be retransmitted. This causes unnecessary traffic load, network capacity “waste,” and increased delays, which in turn have a detrimental effect on FCIP storage traffic performance. Cisco implements a selective acknowledgement (SACK) mechanism where the receiver can selectively acknowledge TCP segments received after packet loss occurs. In this case the sender only needs to retransmit the TCP segments lost in transit, rather than all packets beyond the acknowledgement. The SACK mechanism is most effective in environments with long and large transmissions. SACK is not enabled by default on Cisco devices; it must be manually enabled if required by issuing the sack-enable CLI command under the FCIP profile configuration, and it must be enabled and supported on both ends of the FCIP circuit.

Large Maximum Transmission Unit size, or MTU, in the transit IP network can also greatly influence the performance characteristics of the storage traffic carried by FCIP. Support for jumbo frames can allow transmission of full-size 2148-byte Fibre Channel frames without requiring fragmentation and reassembly at the IP level into numerous TCP/IP segments carrying a single Fibre Channel frame. The Path MTU Discovery (PMTUD) mechanism can be invoked on both sides of the FCIP connection to make sure the effective maximum supported path MTU is automatically discovered. FCIP PMTUD is not automatically enabled on Cisco devices; it must be manually enabled if required by issuing the pmtu-enable CLI command under the FCIP profile configuration.
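Both options mentioned above live under the FCIP profile. The following is a minimal sketch of an FCIP tunnel on a Cisco MDS platform, using the commands named in this section; the IP addresses and numbering are illustrative only:

switch(config)# feature fcip
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.0.2.1
switch(config-profile)# tcp sack-enable
switch(config-profile)# tcp pmtu-enable
switch(config-profile)# exit
switch(config)# interface fcip 10
switch(config-if)# use-profile 10
switch(config-if)# peer-info ipaddr 198.51.100.1
! Tunnels such as this one can also be bound into a port channel
! (channel-group), as discussed later in this section.
switch(config-if)# no shutdown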

Deployment of Quality of Service features in the transit Layer 3 routed network is not a requirement for FCIP operation; however, such features are highly advised to make sure that FCIP traffic is subjected to the absolute minimum influence from transient network delays and congestion. Some of the storage traffic types encapsulated inside the FCIP packets can be very sensitive to degraded network conditions, and even though FCIP can handle such conditions, overall storage performance will gain benefits from QoS policies operating at the network level.

Along with performance characteristics, FCIP must also be designed following high availability and redundancy principles. Utmost attention should be paid to make sure the transit Layer 3 routed network between the sites interconnected by FCIP circuits conforms to a high degree of availability, reliability, and efficiency. Leveraging device, circuit, and path redundancy eliminates the single point of failure, while proper capacity management helps alleviate performance bottlenecks. FCIP allows provisioning numerous tunnels interconnecting disparate storage networks. These tunnels contribute to connectivity resiliency and allow failover. These redundant tunnels are represented as virtual E_port or virtual TE_port types, which are considered by the Fabric Shortest Path First (FSPF) protocol for equal-cost multipathing purposes. Just like regular inter-switch links, FCIP tunnels can be bound into port channels for simpler FSPF topology and efficient traffic load-balancing operation. Figure 10-25 depicts the principle behind provisioning multiple redundant FCIP tunnels between disparate storage network environments leveraging port channeled connections.

Figure 10-25 Redundant FCIP Tunnels Leveraging Port Channeled Connections

FCIP protocol allows maintaining VSAN traffic segregation by building logically isolated storage environments on top of shared physical storage infrastructure. Each VSAN, either local or extended over the FCIP tunnel, exhibits the same characteristics in regard to routing, zoning, fabric services, and fabric management for ubiquitous deployment methodology. VSAN segregation is traditionally achieved through data-plane frame tagging. This frame tagging is transparently carried over FCIP tunnels to extend the isolation. At the time of writing this book, FCIP protocol is available on Cisco MDS 9500 modular switches by leveraging the IP Storage Services Module (IPS) and on the Cisco MDS 9250i Multiservice Fabric Switch.

Overlay Transport Virtualization

Certain mechanisms for Layer 2 extension across data centers, or in some cases across PODs of the same data center, introduce challenges that can inhibit their practical use. Spanning Tree Protocol–based extension topologies do not scale well and can cause cascading failures that can propagate beyond the boundaries of a single availability domain.

They fall short of making use of multipathing topologies by logically blocking redundant links. A more efficient protocol is needed to leverage all available paths for traffic forwarding, as well as provide intelligent traffic load-balancing across the data center interconnect links. Other Layer 2 data center interconnect technologies offer a more efficient model than Spanning Tree Protocol–based topologies; however, they depend on specific transport network characteristics. The best example is MPLS-based technologies that require label switching between the data center networks for Layer 2 extension. Such a solution can be challenging for organizations not willing to develop operational knowledge around those technologies. Let’s now look at a possible alternative.

Cisco Overlay Transport Virtualization (OTV) technology provides nondisruptive, transport agnostic, multihomed, and multipathed Layer 2 data center interconnect technology that preserves isolation and fault domain properties between the disparate data centers. OTV leverages a MAC-in-IP forwarding paradigm to extend Layer 2 connectivity over any IP transport. Figure 10-26 depicts the basic OTV traffic forwarding principle where Layer 2 frames are forwarded between the sites attached to the OTV overlay.

Figure 10-26 OTV Packet Forwarding Principle

OTV simulates the behavior of traditional bridged environments while at the same time introducing improvements, which greatly reduces the risk often associated with large-scale Layer 2 extensions. OTV leverages a control plane protocol in the form of the IS-IS routing protocol to disseminate MAC address reachability across the OTV edge devices participating in the overlay. Example 10-4 shows IS-IS neighbor relations established between the OTV edge devices through the overlay.

Example 10-4 IS-IS Neighbor Relationships Between OTV Edge Devices

OTV_EDGE1_SITEA# show otv isis adjacency
OTV-IS-IS process: default VPN: OTV-IS-IS
OTV-IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
NEXUS7K-EDGE2   001b.54c2.43c1  1      UP     00:00:25   Overlay1
OTV_EDGE2_SITE  001b.54c2.43c3  1      UP     00:00:27   Overlay1
OTV_EDGE_SITE3  001b.54c2.43c4  1      UP     00:00:07   Overlay1

This approach is different from the traditional bridged environments, which utilize data-plane learning or flood-and-learn behavior. Control plane protocol offers intelligence above and beyond the rudimentary behavior of flood-and-learn techniques and thus is instrumental in implementing OTV value-added features to enhance and safeguard the inter-data-center Layer 2 environment. Such features stretch across loop-prevention, multipathing, and multihoming mechanisms, and enable First Hop Routing Protocol localization and Address Resolution Protocol (ARP) proxying without creating additional operational overhead. Control plane protocol leveraged by OTV includes other Layer 2 semantics, such as VLAN ID, which allow maintaining traffic segregation across the data center boundaries.

Local OTV edge devices hold the mapping between remote host MAC addresses and the IP address of the remote OTV edge device. Example 10-5 shows a MAC address table on an OTV edge device, where you can see locally learned MAC addresses as well as MAC addresses available through the overlay.

Example 10-5 MAC Address Table Showing Local and OTV Learned Entries

OTV_EDGE1_SITEA# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports
---------+-----------------+--------+---------+------+----+------------------
G    -      001b.54c2.43c2    static       -      F    F   sup-eth1(R)
*  110      0000.0c07.ac6e    dynamic      0      F    F   Po1
*  110      0000.6e01.010a    dynamic      0      F    F   Po1
*  110      0000.6e01.016c    dynamic      0      F    F   Po1
*  110      0000.6e01.016d    dynamic      0      F    F   Po1
O  110      0000.6e02.020a    dynamic      0      F    F   Overlay1
O  110      0000.6e02.020b    dynamic      0      F    F   Overlay1
O  110      0000.6e02.026c    dynamic      0      F    F   Overlay1
O  110      0000.6e02.026d    dynamic      0      F    F   Overlay1

After MAC addresses are learned, OTV edge devices leverage the underlying IP transport to carry original Ethernet frames encapsulated in IP protocol between the data centers. The actual hosts on both sides of the connection are unaware of OTV, because the transport network is transparently forwarding their traffic. The use of overlay in OTV creates point-to-cloud type connectivity rather than a collection of fully meshed point-to-point links, which are difficult to set up, manage, and scale. As mentioned earlier, OTV can safeguard the Layer 2 environment across the data centers or across IP-separated PODs of the same data center by employing proven Layer 3 techniques. An efficient network core is essential to deliver optimal application performance across the OTV overlay.

Let’s now look a little bit deeper into the main benefits achieved with OTV technology.

Optimal bandwidth utilization and scalability: OTV can make use of multiple edge devices, better utilizing all available ingress and egress points in the data center through the built-in multipathing behavior. OTV employs the active-active mode of operation at the edge of the overlay where all ingress and egress points forward traffic simultaneously. Use of multiple ingress and egress points does not require a complex deployment and operational model. Multipathing is an important characteristic of OTV to maximize the utilization of available bandwidth. Multipathing also applies to the traffic as it crosses the IP transit network, where ECMP behavior can utilize multiple paths and links between the data centers.

Little to no effect on existing network design: OTV leverages the IP network to transport original MAC encapsulated frames. Because OTV is transport agnostic, it can be seamlessly deployed on top of any existing IP infrastructure. Any network capable of carrying IP packets can be used for the OTV core, with the exception being multicast-based deployment, where a multicast-enabled core is required. OTV can operate over a unicast-only or multicast-enabled core. In case of OTV operating over a multicast-enabled core, there is a requirement to run multicast routing in the underlying transit network. OTV also leverages dynamic encapsulation, rather than tunneling, so OTV edge devices do not need to maintain a state of tunnels interconnecting data centers. This is a huge advantage for OTV. One of the flexibility attributes of OTV comes from its deployment model at the data center aggregation layer, which provides a very convenient topological placement to address the needs of the Layer 2 extension across the data centers. Another flexibility attribute of OTV comes from its incremental deployment characteristics. OTV can even be deployed over existing Layer 2 data center interconnect solutions, such as vPC. If deployed over vPC, OTV can, in time, be migrated from vPC to routed links for more streamlined infrastructure.

Fast failover and flexibility: OTV control plane protocol, IS-IS, dictates remote MAC address reachability. IS-IS can leverage its own hello/timeout timers, or it can leverage BFD for very rapid failure detection. Link failure detection in the core is based on routing protocols, which can rapidly reroute around failed sections of the network.
Fault isolation and site independence: Even though local spanning tree topologies are unaware of OTV, OTV does introduce a boundary for the Spanning Tree Protocol, so spanning tree domains in different data centers keep operating independently of each other. Spanning Tree Bridged Protocol Data Units (BPDU) are not forwarded over the OTV overlay. OTV does not allow unknown unicast traffic across the overlay, unless specifically allowed by the administrator, and ARP traffic is forwarded only in a controlled manner by offering local ARP response behavior. These behaviors create site containment similar to one available with Layer 3 routing, rather than extending the failure domain, as with some of the other Layer 2 data center interconnect technologies. OTV’s built-in loop prevention mechanisms avoid loops in the overlay and limit the scope of a loop, should one be formed in the locally attached Layer 2 environments. OTV is also fully transparent to the local Layer 2 domains, where hosts and switches not participating in OTV do not have any knowledge of OTV. Local spanning tree topologies are unaffected too.

Optimized operations: OTV control plane protocol allows the simple addition and removal of OTV sites from the overlay. OTV edge devices express their desire to join the overlay by sending IS-IS hello messages to the multicast control group, which are replicated by the transit network and delivered to all other OTV edge devices that are part of the same overlay (listening to the same multicast group). This behavior automatically establishes IS-IS neighbor relations across the shared medium without the need to define all remote OTV edge devices. Finally, the convenience of operations comes from the greatly simplified command-line interface needed to enable and configure OTV service on the device, as the sketch that follows illustrates. This great simplicity does not come at the expense of the deep-level troubleshooting that is still available on the platforms supporting OTV functionality.
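To give a sense of that simplicity, the following is a minimal sketch of an OTV edge device on a Cisco Nexus 7000 over a multicast-enabled core; the site identifier, VLAN ranges, interfaces, and group addresses are all illustrative:

switch(config)# feature otv
switch(config)# otv site-identifier 0x1
switch(config)# otv site-vlan 99
switch(config)# interface Overlay1
switch(config-if-overlay)# otv join-interface Ethernet1/1
switch(config-if-overlay)# otv control-group 239.1.1.1
switch(config-if-overlay)# otv data-group 232.1.1.0/28
switch(config-if-overlay)# otv extend-vlan 100-110
switch(config-if-overlay)# no shutdown

Over a unicast-only core, the control and data groups are replaced by pointing the edge devices at an adjacency server (the otv use-adjacency-server command), consistent with the unicast-only mode of operation mentioned earlier.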

At the time of writing this book, OTV technology is available on Cisco Nexus 7000 switches, Cisco ASR 1000 routers, and Cisco CSR 1000v software router platforms.

Locator ID Separation Protocol

Connectivity requirements between data center locations depend on the nature of the applications running in these data centers. Cross-data center workload mobility, dynamic resourcing, distributed application and server clustering, and simplified disaster recovery are some of the main use cases behind Layer 2 extension. Most commonly deployed applications fall into one of the following categories: routable applications, nonroutable applications, clustered applications, distributed applications, and virtualized applications. For routable applications, nothing more than an IP network is required to make sure application components can communicate with each other. Layer 2 data center interconnect is not a requirement; however, it can be very beneficial in numerous cases. Nonroutable applications typically expect to be placed on a shared transmission medium, such as Ethernet, and as such require data center interconnect to exhibit Layer 2 connectivity properties.
Clustered applications are a “mixed bag,” with some being able to operate over an IP-routed network and some requiring Layer 2 adjacency, so data center interconnect requirements can be different based on specific clustered application needs. Distributed applications can typically make use of the IP network, with one example being Hadoop for Big Data distributed computation tasks. Finally, virtualized applications reside on the virtual hypervisors, such as Xen, Hyper-V, KVM, or ESXi. These applications make use of server virtualization technologies and leverage server virtualization properties, such as virtual machine mobility, fault tolerance, and so on. As modern data centers become increasingly virtualized, the need to cater to virtualized application needs also increases, even though at the time of writing the book only about 25% of total servers are estimated to be running a hypervisor virtualization layer.

The traditional network forwarding paradigm does not distinguish between the identity of an endpoint (virtual machine) and its location in the network topology. As virtual machines move around the network due to administrative actions, various failure conditions, or resource constraints, the network must provide ubiquitous and optimized connectivity services across the entire mobility domain. Endpoint mobility can cause inefficient traffic-forwarding patterns as a traditional network struggles to deliver packets across the shortest path toward an endpoint that changes its location. Figure 10-27 depicts a traffic tromboning case where a virtual machine migrates from Site A to Site B, yet ingress traffic is still delivered to Site A because this is where the virtual machine’s IP subnet is still advertised out of.

Figure 10-27 Traffic Tromboning Case

A better mechanism is needed to leverage network intelligence. LISP, part of Cisco Unified Fabric holistic architecture, provides a scalable method for network traffic routing, which, as the name suggests, is based on a true endpoint location in the topology. By decoupling endpoint identifiers (EID) from the Routing Locators (RLOC), LISP accommodates virtual machine mobility while maintaining shortest path forwarding through the infrastructure. EIDs are the identifiers for the endpoints in the form of either a MAC or IP address, while RLOCs are the identifiers of the edge devices on both sides of a LISP-optimized environment (most likely a loopback interface IP address on the edge router or data center aggregation switch). These edge devices are called tunnel routers. They perform LISP encapsulation, sending the traffic from the Ingress Tunnel Router (ITR) to the Egress Tunnel Router (ETR) on its way toward the destination endpoint. LISP routers providing both ITR and ETR functionality are called xTRs. To integrate a non-LISP environment into the LISP-enabled network, proxy tunnel routers are used (PITR and PETR). LISP routers providing both PITR and PETR functionality are called PxTRs.
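A minimal NX-OS sketch of an xTR follows; the EID prefix, RLOC, and mapping-system addresses are illustrative, and exact feature support depends on platform and license:

switch(config)# feature lisp
switch(config)# ip lisp itr-etr
! Register the local EID prefix, reachable through this device's RLOC
switch(config)# ip lisp database-mapping 10.1.0.0/16 192.0.2.1 priority 1 weight 100
switch(config)# ip lisp itr map-resolver 203.0.113.10
switch(config)# ip lisp etr map-server 203.0.113.10 key example-key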

Figure 10-28 shows ITR, ETR, and the proxy tunnel router topology.

Figure 10-28 LISP Tunnel Routers

To derive the true location of an endpoint, a mapping is needed between the EID that represents the endpoint and the RLOC that represents the tunnel router that this endpoint is “behind.” This mapping is called an EID-to-RLOC mapping database. The full mapping database is distributed among all tunnel routers in the LISP environment. Unlike the approach in conventional databases, LISP mapping entries are distributed rather than centralized, with each tunnel router only maintaining entries needed at a particular given time for communication. These entries are derived from the LISP query process and are cached for a certain period of time before being flushed, unless refreshed.

Catering to the virtual machine mobility case, LISP enables a virtual machine to change its network location, while keeping its assigned IP addresses. Endpoint mobility events can occur within the same subnet, known as LISP Extended Subnet Mode (ESM), or across subnets, known as LISP Across Subnet Mode (ASM). ESM leverages Layer 2 data center interconnect to extend VLANs and subsequently subnets across multiple data centers. This is where OTV can be leveraged. ASM assumes that there is no VLAN and subnet extension between the data centers. This mode is best suited for cold migration and disaster recovery scenarios where there is no Layer 2 data center interconnect between the sites.

The endpoint that moves to another subnet, or across the same subnet extended between the data centers leveraging technology such as OTV, is called a roaming device. When endpoints roam, the LISP first-hop router must detect such an event. It compares the source IP address of the received packet to the range of EIDs defined for the interface on which this data packet was received. If matched, the first-hop router updates the LISP map server with a new RLOC for the EID. It also updates the tunnel and proxy tunnel routers that have cached this information.

LISP provides an exciting new mechanism for organizations to improve routing system scalability and introduce new capabilities to their networks. Versatility of LISP allows it to be leveraged in a variety of solutions:

Simplified and cost-effective multihoming, including ingress traffic engineering

IP address portability, including no renumbering when changing providers or adding multihoming

IP address (endhost) mobility, including session persistence across mobility events

IPv6 transition simplification, including incremental deployment of IPv6 using existing IPv4 infrastructure (or IPv4 over IPv6)

Simplified multitenancy and large-scale VPNs

Operation and network simplification

At the time of writing this book, LISP is supported on Nexus 7000 Series switches with F3 and N7K-M132XP-12 line cards, ISR routers, ASR 1000 routers, and CSR 1000v software routers.

Reference List

Unified Fabric: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/unified-fabric/unified_fabric.html

Fibre Channel over IP: http://www.cisco.com/c/en/us/tech/storage-networking/fiber-channel-over-ip-fcip/index.html

Overlay Transport Virtualization: http://www.cisco.com/go/otv

Locator ID Separation Protocol: http://www.cisco.com/go/lisp

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 10-2 lists a reference of these key topics and the page numbers on which each is found.

Table 10-2 Key Topics for Chapter 10

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Nexus
Multilayer Director Switch (MDS)
Unified Fabric
Clos
Spine-Leaf
Fibre Channel over Ethernet (FCoE)
local-area network (LAN)
storage-area network (SAN)
Fibre Channel
Unified Port
converged network adapter (CNA)
Internet Small Computer System Interface (iSCSI)
NX-OS
Spanning Tree

virtual port channel (vPC)
FabricPath
Virtual Extensible LAN (VXLAN)
Virtual Network Identifier (VNID)
Virtual Supervisor Module (VSM)
Virtual Ethernet Module (VEM)
Virtual Security Gateway (VSG)
virtual ASA
vPath
Wide Area Application Services (WAAS)
TCP flow optimization (TFO)
data redundancy elimination (DRE)
Web Cache Communication Protocol (WCCP)
Data Center Network Manager (DCNM)
Overlay Transport Virtualization (OTV)
Locator ID Separation Protocol (LISP)

Chapter 11. Data Center Bridging and FCoE

This chapter covers the following exam topics:

Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization

Running Fibre Channel protocol on top of Data Center Bridging enhanced Ethernet is one of the key principles of data center Unified Fabric architecture. It has a profound impact on the data center economics in a form of lower capital expenditure (CAPEX) and operational expense (OPEX) by consolidating traditionally disparate physical environments of network and storage into a single, highly performing and robust fabric. FCoE preserves current operational models where particular technology domain expertise is leveraged to achieve deployments conforming to best practices for both network and storage environments. In this chapter we also observe how coupling of FCoE with Cisco Fabric Extender technologies allows building even larger fabrics for ever-increasing scale demands.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 11-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 11-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which two of the following are part of the Data Center Bridging standards? (Choose two.)
a. ETS
b. LDP
c. FCF
d. PFC
e. SCSI

2. What function does Priority Flow Control provide?
a. It provides lossless Ethernet service by pausing traffic based on Class of Service value.
b. It provides lossless Ethernet service by pausing traffic based on DSCP value.
c. It provides lossless Ethernet service by pausing traffic based on MTU value.
d. It is a configuration exchange protocol to negotiate Class of Service value for the FCoE traffic.
e. None of the above.

3. What is a VN_Port?
a. It is a virtual port used to internally attach the FCoE Fibre Channel Forwarder to the Ethernet switch.
b. It is a virtual port on the FCoE endpoint connected to the VF_Port on the switch.
c. It is a port on the FCoE Fabric Extenders toward the FCoE server.
d. It is an invalid port type.

4. What is FPMA?
a. It is a MAC address of the FCoE Fibre Channel Forwarder.
b. It is a MAC address assigned to the Converged Network Adapter by the manufacturer.
c. It is a MAC address dynamically assigned to the Converged Network Adapter by the FCoE Fibre Channel Forwarder during the fabric login.
d. It is First Priority Multicast Algorithm used by the FCoE endpoints to select the FCoE Fibre Channel Forwarder for a fabric login.

5. What is an FET?
a. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
b. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to either the upstream parent switches or the servers.
c. It is a Fault-Tolerant Extended Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches in the most reliable manner.
d. It is a Fully Enabled Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.

6. Which of the following allows 40 Gb Ethernet connectivity by leveraging a single pair of multimode fibers?
a. QSFP-40G-SR4
b. QSFP-40G-SR-BD
c. FET-40G
d. QSFP-H40G-CU

7. What is Adapter-FEX?
a. It is a feature for virtualizing the network adapter on the server.
b. It is the name of the Cisco next-generation Fabric Extender.
c. It is the name of the transceiver used to connect Fabric Extender to the parent switch.
d. It is required to run FCoE on the Fabric Extender.

8. Which two of the following Cisco Nexus 2000 Series Fabric Extenders support FCoE? (Choose two.)
a. 2248TP
b. 2224TP
c. 2232PP
d. 2232TM-E

9. Is FCoE supported in Enhanced vPC topology?
a. Yes
b. No
c. Yes, but only when FCoE traffic on Cisco Nexus 2000 Series FCoE Fabric Extender is pinned to a single Cisco Nexus 5000 Series parent switch
d. Yes, but only when FIP is not used

10. What type of port profile is used to dynamically instantiate a vEth interface on the Cisco Nexus 5000 Series switch?
a. dynamic-interface
b. vethinterface
c. vethernet
d. profile-veth

Foundation Topics

Data Center Bridging

Due to the lack of Ethernet protocol support for transport characteristics required for successful storage-area network implementation, consolidation of network and storage traffic over the same Ethernet network, part of the Unified Fabric solution, requires special considerations. Specifically, it requires the capability to provide lossless traffic delivery, low latency, as well as necessary bandwidth and traffic priority services. The lack of those characteristics poses no concerns for the network traffic; however, to make Ethernet a viable transport for storage traffic, the Ethernet protocol must be enhanced. Enhancements required for a successful Unified Fabric implementation, where network and storage traffic are carried over the same Ethernet links, are defined under the Data Center Bridging standards defined by the IEEE. These are Priority Flow Control (PFC) defined by IEEE 802.1Qbb, Enhanced

Transmission Selection (ETS) defined by IEEE 802.1Qaz, Quantized Congestion Notification (QCN) defined by IEEE 802.1Qau, and Data Center Bridging Exchange (DCBX) defined by IEEE 802.1AB (LLDP) with TLV extensions.

Priority Flow Control

Ethernet protocol provides no guarantees for traffic delivery; that is, traffic is forwarded as best effort, and it cannot differentiate between different types of traffic forwarded through the link. If any Ethernet frame carrying data is dropped, it is up to the higher-level protocols, such as TCP, to retransmit the lost portions. Ethernet defines the flow control mechanism under the IEEE 802.3X standard, in which flow control operates on the entire physical link level, pausing either all or none of the traffic types. When such per-physical-link PAUSE frames are triggered, they indiscriminately cause buffering for types of traffic that otherwise would rely on TCP protocol for congestion management and might be better off dropped.

PFC enables the Ethernet flow control mechanism to operate on the per-802.1p Class of Service level, thus selectively pausing at Layer 2 only traffic requiring guaranteed delivery, such as FCoE. All other traffic is subject to regular forwarding and queuing rules. PFC provides lossless Ethernet transport services for FCoE traffic, which are essential because Fibre Channel frames encapsulated within FCoE packets expect lossless fabric behavior. With PFC, PAUSE frames are signaled back to the previous hop DCB-capable Ethernet switch for the negotiated FCoE Class of Service (CoS) value (Class of Service value of 3 by default). PAUSE frames are transmitted when the receive buffer allocated to a no-drop queue on the current DCB-capable switch is exhausted, that is, the queue becomes full, and hence lossless behavior can no longer be guaranteed. Upon receipt of a PAUSE frame, the previous hop DCB-capable switch pauses its transmission of the FCoE frames, enabling the current DCB-capable switch to “drain” its receive buffer. This process is known as back pressure, and it can be triggered hop-by-hop from the point of congestion toward the source of the transmission (FCoE initiator or target). Figure 11-1 demonstrates the principle of how PFC leverages PAUSE frames to pause transmission of FCoE traffic over a DCB-enabled Ethernet link.

Figure 11-1 Priority Flow Control Operation

Enhanced Transmission Selection

Enhanced Transmission Selection (ETS) enables optimal QoS strategy for links between DCB-capable switches and bandwidth management of virtual links. ETS creates priority groups by allowing differentiated treatment for traffic belonging to the same Priority Flow Control class of service based on bandwidth allocation. For example, such bandwidth allocation allows assigning 8 Gb to the storage traffic and 2 Gb to the network traffic on the 10 Gb shared Ethernet link.
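As a hedged illustration of how such an 8 Gb/2 Gb style split might be expressed on a Cisco Nexus 5000 Series switch with a queuing policy (the policy name and percentages are illustrative; default policies already reserve bandwidth for class-fcoe):

switch(config)# policy-map type queuing ets-example
switch(config-pmap-que)# class type queuing class-fcoe
switch(config-pmap-c-que)# bandwidth percent 80
switch(config-pmap-c-que)# class type queuing class-default
switch(config-pmap-c-que)# bandwidth percent 20
switch(config-pmap-c-que)# exit
switch(config)# system qos
switch(config-sys-qos)# service-policy type queuing output ets-example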

Figure 11-2 demonstrates traffic bandwidth allocation carried out by the ETS feature.

Figure 11-2 Enhanced Transmission Selection Operation

The default behavior of Cisco switches supporting DCB standards, such as Cisco Nexus 5000 Series switches, is to assign 50% of the interface bandwidth to the FCoE no-drop class. Bandwidth allocations made by ETS are not hard allocations, meaning that traffic classes are allowed to use bandwidth allocated to a different traffic class, until that traffic class demands the bandwidth. This approach allows supporting bursty traffic patterns while still maintaining bandwidth guarantees. At times, traffic conditions called incasts can be encountered. With incast, traffic arriving on multiple ingress interfaces on a given switch is attempted to be sent out through a single switch egress interface. If the amount of ingress traffic exceeds the capacity of the switch egress interface, special care is needed to maintain lossless link characteristics. ETS leverages advanced traffic scheduling features by providing guaranteed minimum bandwidth to the FCoE and high-performance computing traffic.

Data Center Bridging Exchange

Data Center Bridging Exchange (DCBX) is a negotiation and exchange protocol between an FCoE initiator/target and a DCB-capable switch or between two DCB-capable switches. It is an extension of the Link Layer Discovery Protocol (LLDP) defined under the IEEE 802.1AB standard. Parameters exchanged using DCBX protocol are encoded in Type-Length-Value (TLV) format.
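The no-drop treatment for class-fcoe mentioned above is expressed through a network-qos policy. The following is a minimal sketch for a Cisco Nexus 5000 Series switch, mirroring the default values shown later in Example 11-1 (the policy name is illustrative):

switch(config)# policy-map type network-qos nodrop-example
switch(config-pmap-nq)# class type network-qos class-fcoe
switch(config-pmap-nq-c)# pause no-drop
switch(config-pmap-nq-c)# mtu 2240
switch(config-pmap-nq-c)# exit
switch(config)# system qos
switch(config-sys-qos)# service-policy type network-qos nodrop-example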

Figure 11-3 demonstrates how DCBX messages are exchanged hop-by-hop across the DCB-enabled links.

Figure 11-3 Data Center Bridging Exchange Operation

DCBX performs the discovery of adjacent DCB-capable switches, detects configuration mismatches, and negotiates DCB link parameters. Successful FCoE operation relies on Ethernet enhancements introduced with DCB, and as such, DCBX plays a pivotal role in making sure behaviors such as Priority Flow Control, Enhanced Transmission Selection, logical link up/down, and Network Interface Virtualization are successfully negotiated between two adjacent DCB-capable switches (multihop FCoE behavior is discussed in Chapter 12, “Multihop Unified Fabric”).

Quantized Congestion Notification

As discussed earlier, PFC enables you to throttle down transmission for no-drop traffic classes. As PFC is invoked, a back-pressure is applied from the point of congestion toward the source of transmission (FCoE initiator or target) in a hop-by-hop fashion. The Quantized Congestion Notification mechanism, defined in the IEEE 802.1Qau standard, takes a different approach of applying back-pressure from the point of congestion directly toward the source of transmission without involving intermediate switches, where notification messages are sent from the point of congestion in the FCoE fabric directly to the source of transmission. This approach maintains feature isolation and affects only the parts of the network causing the congestion. Figure 11-4 demonstrates the principle behind the QCN feature operation.

Figure 11-4 Quantized Congestion Notification Operation

FCoE protocol operates at Layer 2 as Ethernet encapsulated Fibre Channel frames are forwarded between initiators and targets across the FCoE Fibre Channel Forwarders along the path. When the FCoE endpoint sends FCoE frames to the network, these frames include the source MAC address of the endpoint and the destination MAC address of the FCoE Fibre Channel Forwarder. When these frames reach the FCoE FCF, frames are deencapsulated, Ethernet headers are removed, and a Fibre Channel forwarding decision is taken on the exposed Fibre Channel frame. Subsequently, the Fibre Channel frames are reencapsulated with Ethernet headers, while the source and destination MAC addresses assigned are the MAC addresses of the current and next-hop FCoE FCF or the frame target. Even though Layer 2 operation implies bridging, the MAC address swapping behavior occurring at FCoE Fibre Channel Forwarders makes its operation resemble routing more than bridging. Figure 11-5 demonstrates the swap of outer Ethernet encapsulation MAC addresses as the FCoE frame is forwarded between initiator and target.

Fibre Channel Over Ethernet Over the years. it is impossible for the intermediate FCoE Fibre Channel Forwarders to generate QCN messages directly to the source of transmission to throttle down its FCoE transmission into the network. FCoE is . Figure 11-6 Fibre Channel Encapsulated in Ethernet The FC-BB-5 working group of INCITS Technical Committee T11 developed the FCoE standard. Figure 11-6 demonstrates the principle of encapsulating the original Fibre Channel frame with Ethernet headers for transmission across the Ethernet network. while allowing the flexibility of running it on top of the ever-increasing high-speed Ethernet fabric. The idea behind FCoE is simple: run Fibre Channel protocol frames encapsulated in Ethernet headers. Fibre Channel protocol has established itself as the most reliable and best performing protocol for storage traffic transport. It leverages the same fundamental characteristics of the traditional Fibre Channel protocol. Participation of all FCoE Fibre Channel Forwarders in the QCN process brings little value beyond Priority Flow Control and as such.Figure 11-5 FCoE Forwarding MAC Address Swap This MAC address swap occurs for all FCoE frames passing through the FCoE Fibre Channel Forwarders along the path. unless all FCoE Fibre Channel Forwarders along the path from the point of congestion toward the source of transmission participate in the QCN process. Because those FCoE frames no longer contain the original MAC address of the initiator. Cisco Nexus 5000 Series DCB-capable switches do not implement the QCN feature.

one of the options available for storage traffic transport. Figure 11-7 depicts the breadth of storage transport technologies.

Figure 11-7 Storage Transport Technologies Landscape

FCoE protocol is the essential building block of the Cisco Unified Fabric architecture for converging network and storage traffic over the same Ethernet network. The capability to converge LAN and SAN traffic over a single Ethernet network is not trivial, because network and storage environments are governed by different principles and have different characteristics, which must be preserved.

FCoE Logical Endpoint

The FCoE Logical Endpoint (FCoE_LEP) is responsible for the encapsulation and deencapsulation functions of the FCoE traffic. FCoE_LEP has the standard Fibre Channel layers, starting with FC-2 and continuing up the Fibre Channel Protocol stack. Figure 11-8 depicts the mapping between the layers of a traditional Fibre Channel protocol and the FCoE.

Figure 11-8 Fibre Channel and FCoE Layers Mapping

As you can see in Figure 11-8, FCoE does not alter higher levels and sublevels of the Fibre Channel protocol stack. To higher-level system functions, the FCoE network appears as a standard Fibre Channel network. This compatibility enables all the tools used in the native Fibre Channel environment to be used in the FCoE environment, which enables it to be ubiquitously deployed while maintaining operational, administrative, and management models of the traditional Fibre Channel protocol.

FCoE Port Types

Similar to the traditional Fibre Channel protocol, FCoE operation defines several types of ports. Each port type plays a specific role in the overall FCoE solution. Figure 11-9 depicts FCoE and traditional Fibre Channel port designation for the single hop storage topology.

Figure 11-9 FCoE and Fibre Channel Port Types in Single Hop Storage Topology

FCoE ENodes

FCoE ENodes are the combination of FCoE Logical Endpoints (FCoE_LEP) and the Fibre Channel protocol stack on the Converged Network Adapter. You can think of them as equivalent to Host Bus Adapters (HBA) in the traditional Fibre Channel world. FCoE ENodes could be either initiators in the form of servers or targets in the form of storage arrays. Each one presents a VN_Port port type connected to the VF_Port port type on the first hop upstream FCoE switch. This compares closely with the traditional Fibre Channel environment where initiators and targets present the N_Port port type connected to the F_Port port type on the first-hop upstream Fibre Channel switch, such as the Cisco MDS 9000 Series switch. The FCoE ENode creates at least one VN_Port toward the first-hop FCoE access switch, such as the Cisco Nexus 5000 Series switch. These first hop switches can operate in either full Fibre Channel switching mode or the FCoE-NPV mode (more on that in Chapter 12).

An ENode has a globally unique burnt-in MAC address, which is assigned to it by the CNA vendor. Each VN_Port also has a unique ENode MAC address, which is dynamically assigned to it by the FCoE Fibre Channel Forwarder on the upstream FCoE switch during the fabric login process. The dynamically assigned MAC address is also known as the fabric-provided MAC address (FPMA). Remember, although Ethernet is used as a transport for the Fibre Channel traffic, it does not change the Fibre Channel characteristics, including Fibre Channel addressing. FPMA is 48 bits in length and consists of two concatenated 24-bit fields. The first field is called FCoE MAC address prefix, or FC-MAP, and it is defined by the FC-BB-5 standard as a range of 256 prefixes. The second field is called FC_ID, and it is basically a Fibre Channel ID assigned to the CNA during the fabric login process. Figure 11-10 depicts the structure of the FPMA MAC address.

Figure 11-10 Fabric-Provided MAC Address

Cisco has established simple best practices that make manipulation of FC-MAP values beyond the default value of 0E-FC-00 unnecessary. With only the default FC-MAP, a challenging situation can occur if each fabric allocates the same FC_ID to its respective FCoE ENodes. For example, if two Fibre Channel fabrics are mapped to the same Ethernet FCoE VLAN, FCoE ENodes across the two fabrics will end up with identical FPMA values. In essence, it's like having two servers with the same MAC address in the same VLAN. To resolve this situation, you can leverage two different FC-MAP values to make sure that the overall FPMA address remains unique across the FCoE fabric. Cisco strongly recommends not mapping multiple Fibre Channel fabrics into the same Ethernet FCoE VLAN. This will ensure the uniqueness of an FPMA address without the need for multiple FC-MAP values. The 256 FC-MAP values are still available in case of a nonunique FC_ID address.

Fibre Channel Forwarder

Simply put, the FCoE Fibre Channel Forwarder (FCF) is a logical Fibre Channel switch entity residing in the FCoE-capable Ethernet switch. The FCoE Fibre Channel Forwarder is responsible for the host fabric login, as well as the encapsulation and deencapsulation of the FCoE traffic. One or more FCoE_LEPs are used to attach the FCoE Fibre Channel Forwarder internally to the Ethernet switch. Figure 11-11 depicts a relationship between the FCoE Fibre Channel Forwarder, the FCoE_LEP, and the Ethernet switch.

Figure 11-11 Fibre Channel Forwarder, FCoE_LEP, and Ethernet Switch

FCoE Encapsulation

When Fibre Channel frames are encapsulated and transported over the Ethernet network in a converged fashion, FCoE-capable Ethernet switches must have the proper intelligence to enforce Fibre Channel protocol characteristics along the forwarding path. As fundamentally Ethernet traffic, FCoE packets must be exchanged in the context of a VLAN.
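Returning briefly to the FC-MAP discussion: should two fabrics nevertheless need distinct values, the FC-MAP can be changed from its 0E-FC-00 default on Cisco Nexus 5000 Series switches. A one-line sketch, with an illustrative value from the defined range:

switch(config)# fcoe fcmap 0e.fc.2a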

Just as virtual LANs create logical separation for the network traffic, so do virtual SANs by creating logical separation for the storage traffic. Cisco Nexus 5000 Series switches enable mapping between FCoE VLANs and Fibre Channel VSANs, where all FCoE packets tagged with a specific VLAN ID are assumed to belong to the corresponding Fibre Channel VSAN. Both ends of the FCoE link must agree on the same FCoE VLAN ID, and incorrectly tagged frames are simply discarded. The Cisco Nexus 5000 Series switches implementation also expects all VLANs carrying FCoE traffic to be exclusively used for this purpose; that is, no other types of traffic, such as IP network traffic, should be carried over these VLANs. Figure 11-12 depicts the FCoE frame format.

Figure 11-12 FCoE Frame Format

Let’s take a closer look at some of the frame header fields.

Source and Destination MAC Address defines Layer 2 connectivity addressing for FCoE frames. Each field is 48 bits in length. With FCoE, IEEE 802.1Q tags are used to segregate between network and FCoE VLANs on the wire, as FCoE frames traverse the Ethernet links. The IEEE 802.1Q tag is 4 bytes in length, and it serves the same purpose as in a traditional Ethernet network; that is, it enables creation of multiple virtual network topologies across a single physical network infrastructure. Inside the IEEE 802.1Q tag, the priority field is used to indicate the Class of Service marking to provide an adequate Quality of Service. FCoE leverages the EtherType value of 0x8906, specially assigned by the IEEE for the protocol to indicate Fibre Channel payload. Start of Frame (SOF) and End of Frame (EOF) fields indicate the start and end of the encapsulated Fibre Channel frame, respectively. Each field is 8 bits in length. Reserved fields have been inserted to allow future protocol extensibility. The IEEE 802.3 standard defines the minimum-length Ethernet frame as 64 bytes (including Ethernet headers). If the Fibre Channel payload and all the encapsulating headers result in an Ethernet frame size smaller than 64 bytes, reserved fields can also be leveraged for padding.

The encapsulated Fibre Channel frame, in its entirety, including the Fibre Channel cyclic redundancy check (CRC), is included in the payload. Ethernet Frame Check Sequence (FCS) represents the last 4 bytes of the Ethernet frame. This is a standard FCS field in any Ethernet frame.

Figure 11-13 depicts the overall length of the FCoE frame.

Figure 11-13 FCoE Frame Length

As you can see, the total size of an Ethernet frame carrying a Fibre Channel payload can be up to 2180 bytes. Because of this, Jumbo Frame support must be enabled to allow successful transmission of all FCoE frame sizes. Cisco Nexus 5000 Series switches define by default the Maximum Transmission Unit (MTU) of 2240 bytes for FCoE traffic in class-fcoe. Non-FCoE traffic is set up by default for the Ethernet standard 1500 byte MTU (excluding Ethernet headers). FCoE traffic is assigned to the class based on the FCoE EtherType value (0x8906). This class provides no-drop service to facilitate guaranteed delivery of FCoE frames end-to-end. These MTU settings are observed for FCoE traffic arriving directly on the physical switch interfaces or through the host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches. On the Cisco Nexus 5010 and 5020 switches, the class-fcoe default system class is automatically created at the system startup and cannot be deleted. Before enabling FCoE on the Cisco Nexus 5500 Series switches, you must manually create the class-fcoe default system class. Example 11-1 depicts the MTU assigned to FCoE traffic under class class-fcoe.

Example 11-1 Displaying FCoE MTU

Nexus5000# show policy-map

  Type qos policy-maps
  ====================

  policy-map type qos default-in-policy
    class type qos class-fcoe
      set qos-group 1

  Type network-qos policy-maps
  ===============================

  policy-map type network-qos default-nq-policy
    class type network-qos class-fcoe
      pause no-drop
      mtu 2240
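Tying together the VLAN-to-VSAN mapping and the class-fcoe service described in this section, the following is a minimal single-hop FCoE sketch for a Cisco Nexus 5000 Series switch; the VLAN, VSAN, and interface numbers are illustrative:

switch(config)# feature fcoe
! Map FCoE VLAN 100 to VSAN 100 (dedicated to FCoE only)
switch(config)# vlan 100
switch(config-vlan)# fcoe vsan 100
switch(config-vlan)# exit
switch(config)# vsan database
switch(config-vsan-db)# vsan 100
switch(config-vsan-db)# exit
! Create a virtual Fibre Channel interface bound to the server-facing port
switch(config)# interface vfc 10
switch(config-if)# bind interface ethernet 1/10
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# vsan database
switch(config-vsan-db)# vsan 100 interface vfc 10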

FCoE Initialization Protocol

FCoE Initialization Protocol (FIP) is the FCoE control plane protocol responsible for establishing and maintaining Fibre Channel virtual links between pairs of adjacent FCoE devices (FCoE ENodes or FCoE Fibre Channel Forwarders), potentially across multiple Ethernet segments. Establishment of the virtual link carries three main steps: discovering the FCoE VLAN, discovering the MAC address of an upstream FCoE Fibre Channel Forwarder, and ultimately performing the fabric login/discovery operation (FLOGI or FDISC). After the virtual link has been established, FIP continues operating in the background performing virtual link maintenance functions: specifically, continuously verifying reachability between the two virtual Fibre Channel (vFC) interfaces on the Ethernet network and offering primitives to delete the virtual link in response to administrative actions to that effect. FIP also differentiates from the FCoE protocol messages by using EtherType value 0x8914 in the encapsulating Ethernet header fields. Figure 11-14 depicts the phases of FIP operation.

Figure 11-14 FCoE Initialization Protocol Operation

FIP VLAN discovery relies on the use of the native VLAN leveraged by the initiator (for example, server) or the target (for example, storage array). The FCoE ENode discovers the VLAN to which it needs to send the FCoE traffic by sending an FIP VLAN discovery request to the All-FCF-MAC multicast MAC address. All FCoE Fibre Channel Forwarders listen to the All-FCF-MAC MAC

. they still require two sets of drivers. in which case FIP VLAN discovery functionality is not invoked. Converged Network Adapter Converged Network Adapter (CNA) is in essence an FCoE ENode. From the operating system perspective. which can accept and process fabric login requests. Note FIP VLAN discovery is not a mandatory component of the FC-BB-5 standard. Cisco Nexus 5000 Series switches support FIP VLAN Discovery functionality and will respond to the querying FCoE ENode. one for the Ethernet and the other for Fibre Channel. The FCoE ENode can also be manually configured for the FCoE VLAN it should use. Converged Network Adapters are also available in the mezzanine form factor for the use in server blade systems. These messages are sent out periodically. VLAN ID 1002 is commonly assumed in such case. which contains the functions of a traditional Host Bus Adapter (HBA) and a network interface card (NIC) all in one. The converged nature of CNAs requires them to operate at 10 Gb speeds or higher. as such. which could delay FCoE Fibre Channel Forwarder discovery for the new FCoE ENodes joining the fabric during the send interval. The FC-BB-5 standard enables FCoE ENodes to send out advertisements to the All-FCF-MAC multicast MAC address to solicit unicast discovery advertisement back from FCoE Fibre Channel Forwarder to the requesting ENode. After FCoE VLAN has been discovered. FCoE Fibre Channel Forwarder discovery messages are sent to the All-ENode-MAC multicast MAC address over the FCoE VLAN discovered earlier by the FIP protocol. This is the phase when the FCoE ENode MAC address used for FCoE traffic is assigned and virtual link establishment is complete. just like in the case of physically segregated adapters. CNAs are represented as two separate devices. Figure 11-15 depicts the fundamental principle behind leveraging a separate set of operating system drivers when converging NIC and HBA with Converged Network Adapter. FIP performs fabric login or discovery by sending FLOGI or FDISC unicast frames from VN_Port to VF_Port. Following the discovery of FCoE VLAN and the FCoE Fibre Channel Forwarder.address on the native VLAN. FIP performs the function of discovering the upstream FCoE Fibre Channel Forwarder. Discovery messages are also used by the FCoE Fibre Channel Forwarder to inform any potential FCoE ENode attached through the FCoE VLAN that its VF_Ports are available for virtual link establishment with the ENode’s VN_Ports. CNA is usually a PCI express (PCIe) type adapter.

Figure 11-15 Separate Ethernet and Fibre Channel Drivers for NICs, HBAs, and CNAs

With CNAs, both network and storage traffic is sent over a single unified wire connection from the server to the upstream first hop FCoE Access Layer switch, such as a Cisco Nexus 5000 Series switch.

FCoE Speed

Rapid progress in the speed increase of Ethernet networks versus a slower pace of speed increase in the traditional Fibre Channel networks makes FCoE a compelling technology for high throughput storage-area networks. At the same time, rarely do Fibre Channel connections from the endpoints to the first hop FCoE switches exceed the speed of 8 Gb. During the time of writing this book, server CNAs and FCoE storage arrays offer at least 10 Gb FCoE connectivity speeds. The difference might not seem significant; however, due to different encoding schemes, 10 Gb FCoE achieves 50% more throughput than 8 Gb Fibre Channel, despite appearing to be only 2 Gb faster. Table 11-2 depicts the throughputs and encoding for the 8 Gb Fibre Channel and 10 Gb FCoE traffic.

Table 11-2 8 Gb Fibre Channel and 10 Gb FCoE

When comparing higher speeds, we discover even greater differences. Although 16 Gb Fibre Channel is only 33% faster than 10 Gb FCoE, 40 Gb FCoE is four times faster than 10 Gb FCoE! Table 11-3 depicts the throughputs and encoding for the 16 Gb Fibre Channel and 40 Gb FCoE traffic.

Table 11-3 16 Gb Fibre Channel and 40 Gb FCoE
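To see roughly where the 50% figure comes from, consider an approximate, back-of-the-envelope calculation: 8 Gb Fibre Channel signals at about 8.5 Gbps on the wire with 8b/10b encoding, so only 8 of every 10 bits carry data, yielding roughly 8.5 x 0.8, or about 6.8 Gbps, of usable throughput. 10 Gb Ethernet uses the far more efficient 64b/66b encoding and delivers close to a full 10 Gbps of data; 10 divided by 6.8 is about 1.47, or roughly 50% more usable throughput despite the nominally similar link speeds.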

Speed is definitely not the only factor in storage-area networks; however, given the similar characteristics between Fibre Channel and FCoE, you can clearly see the very distinct throughput advantages in favor of the FCoE.

FCoE Cabling Options for the Cisco Nexus 5000 Series Switches

This section reviews the options for physically connecting Converged Network Adapters (CNA) to Cisco Nexus 5000 Series switches, and connecting the switches to an upstream LAN and Fibre Channel SAN environments.

Optical Cabling

An optical fiber is a flexible filament of very clear glass capable of carrying information in the form of light. Optical fibers are hair-thin structures created by forming pre-forms, which are glass rods drawn into fine threads of glass protected by a plastic coating. Figure 11-16 depicts the fiber optic core and coating.

Figure 11-16 Fiber Optic Core and Coating

Fiber manufacturers use various vapor deposition processes to make the pre-forms. The fibers drawn from these pre-forms are then typically packaged into cable configurations, which are then placed into an operating environment for decades of reliable performance.

Multimode fiber type has a much larger core diameter compared to single-mode fiber, allowing for a larger number of modes. Multimode fiber refers to the fact that numerous modes, or light rays, are simultaneously carried through. Such modes result from the fact that light will propagate in the fiber core only at discrete angles within the cone of acceptance. Current multimode cabling standards include 50 microns and 62.5 microns core diameter. Step-Index and Graded Index offer two different fiber designs, with Graded Index being more popular because it offers hundreds of times more bandwidth than Step-Index fiber does. Figure 11-17 depicts Step-Index multimode fiber design.

Figure 11-17 Step-Index Multimode Fiber Design

Figure 11-18 depicts Graded Index multimode fiber design.

Figure 11-18 Graded Index Multimode Fiber Design

Single-mode (or monomode) fiber enjoys lower fiber attenuation than multimode fiber and retains better fidelity of each light pulse because it exhibits no dispersion caused by multiple modes. Consequently, information can be transmitted over longer distances. Current single-mode cabling standards include 9 microns core diameter. Figure 11-19 depicts a single-mode fiber design.

Figure 11-19 Single-Mode Fiber

Transceiver Modules

Cisco Transceiver Modules support Ethernet, Sonet/SDH, and Fibre Channel applications across all Cisco switching and routing platforms.

SFP Transceiver Modules

A Small Form-Factor Pluggable (SFP) is a compact, hot swappable, input/output device that is used to link the ports on different network devices together. SFP is responsible for converting between the electrical and optical signals, while allowing both receive and transmit operations to take place. SFP is also sometimes referred to as mini Gigabit Interface Converter (GBIC) because both provide similar functionality. Due to its small form factor, SFP type transceivers can be leveraged in a variety of high-speed Ethernet and Fibre Channel deployments.

There are many fiber optic connector types; LC is the most popular one for both multimode and single-mode fiber optic connections. Figure 11-20 depicts SFP module LC connector, as an example.

Figure 11-20 Fiber Optic SFP Module LC Connector

Let's now review various transceiver module options available at the time of writing this book for deploying 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet connectivity. Following are the different types of SFP transceivers existing at the time of writing this book:

1000BASE-T SFP 1 Gigabit Ethernet copper transceiver for copper networks operates on standard Category 5 unshielded twisted-pair copper cabling for links of up to 100 meters (~330 feet) in length. These modules support 10 Megabit, 100 Megabit, and 1000 Megabit (1 Gigabit) auto-negotiation and auto-MDI/MDIX. MDI is Medium Dependent Interface; MDI-X is Medium Dependent Interface Crossover.

1000BASE-SX SFP 1 Gigabit Ethernet fiber transceiver is used only for multimode fiber networks. It is compatible with the IEEE 802.3z 1000BASE-SX standard, and it operates over either 50 microns multimode fiber links for a distance of up to 550 meters (~1800 feet) or over 62.5 microns Fiber Distributed Data Interface (FDDI)-grade multimode fiber for a distance of up to 220 meters (~720 feet). It can also support distances of up to 1 kilometer (~0.62 miles) over laser-optimized 50 microns multimode fiber cable.

1000BASE-LX/LH SFP 1 Gigabit Ethernet long-range fiber transceiver operates over standard single-mode fiber-optic link spanning distances of up to 10 kilometers (~6.2 miles) and up to 550 meters (~1800 feet) over any multimode fiber. It is compatible with the IEEE 802.3z 1000BASE-LX standard. 1000BASE-LX/LH transceivers transmitting in the 1300 nanometers wavelength over multimode FDDI-grade OM1/OM2 fiber require mode conditioning patch cables.

1000BASE-EX SFP 1 Gigabit Ethernet extended-range fiber transceiver operates over standard single-mode fiber-optic link spanning long-reaching distances of up to 40 kilometers (~25 miles).

1000BASE-ZX SFP 1 Gigabit Ethernet extra long-range fiber transceiver operates over standard single-mode fiber-optic link spanning extra long-reaching distances of up to approximately 70 kilometers (~43.5 miles). This SFP provides an optical link budget of 21 dB, but the precise link span length depends on multiple factors, such as fiber quality, number of splices, and connectors. A 5dB inline optical attenuator should be inserted between the fiber-optic cable and the receiving port on the SFP at each end of the link for back-to-back connectivity.

1000BASE-BX10-D and 1000BASE-BX10-U 1 Gigabit Ethernet transceivers, compatible with the IEEE 802.3ah 1000BASE-BX10-D and 1000BASE-BX10-U standards, operate over a single strand of standard single-mode fiber for a transmission range of up to 10 kilometers (~6.2 miles). 1000BASE-BX10-D and 1000BASE-BX10-U transceivers represent two sides of the same fiber link. The communication over a single strand of fiber is achieved by separating the transmission wavelength of the two devices. 1000BASE-BX10-D transmits a 1490 nanometer wavelength and receives a 1310 nanometer signal, whereas 1000BASE-BX10-U transmits at a 1310 nanometer wavelength and receives a 1490 nanometer signal. Note the presence of a wavelength-division multiplexing (WDM) splitter integrated into the SFP to split the 1310 nanometer and 1490 nanometer light paths. Figure 11-21 depicts the principle of wavelength multiplexing to achieve transmission over a single strand fiber.

Figure 11-21 Bidirectional Transmission over a Single Strand, Single-Mode Fiber

SFP+ Transceiver Modules

The SFP+ 10 Gigabit Ethernet transceiver module is a bidirectional device with a transmitter and receiver in the same physical package. This module has a 20-pin connector on the electrical interface and duplex Lucent Connector (LC) on the optical interface. The following options exist for 10 Gigabit Ethernet SFP transceivers during the time of writing this book:

SFP-10G-SR SFP 10 Gigabit Ethernet short-range transceiver supports a link length of 26 meters (~85 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. Connectivity up to 300 meters (~1000 feet) is possible when leveraging 2,000MHz/km OM3 multimode fiber with a core size of 50 microns. Connectivity up to 400 meters (~0.25 miles) is possible when leveraging 4,700MHz/km OM4 multimode fiber with a core size of 50 microns. It operates on 850 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-SR SFP.

SFP-10G-LRM SFP 10 Gigabit Ethernet short-range transceiver supports link lengths of 220 meters (~720 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. SFP-10G-LRM Module also supports link lengths of 300 meters (~1000 feet) over standard single-mode fiber (G.652). To ensure that specifications are met over FDDI-grade OM1 and OM2 fibers, the transmitter should be coupled through a mode conditioning patch cord. No mode conditioning patch cord is required for applications over OM3 or OM4. It operates on 1310 nanometer wavelength, leveraging an LC type connector. At the time of writing this book, neither Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LRM optics.

SFP-10G-LR SFP 10 Gigabit Ethernet long-range transceiver supports a link length of 10 kilometers (~6.2 miles) over standard G.652 single-mode fiber. It operates on 1310 nanometer wavelength, leveraging an LC type connector. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LR SFP.

SFP-10G-ER SFP 10 Gigabit Ethernet extended range transceiver supports a link length of up to 40 kilometers (~25 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength, leveraging an LC type connector. A 5dB 1550 nanometer fixed loss attenuator is required for distances greater than 20 kilometers (~12.5 miles). Cisco Nexus 5500 switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-ER SFP. Cisco Nexus 5000 switches do not support SFP-10G-ER SFP.

SFP-10G-ZR 10 Gigabit Ethernet very long-range transceiver supports link lengths of up to about 80 kilometers (~50 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength. At the time of writing this book, the SFP-10G-ZR SFP transceiver is neither supported on Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders.

FET-10G Fabric Extender Transceiver (FET) 10 Gigabit Ethernet cost-effective short-range transceiver supports link lengths of 25 meters (~80 feet) over OM2 multimode fiber and the link length of up to 100m (~330 feet) over laser-optimized OM3 or OM4 multimode fiber. It operates over a core size of 50 microns and at 850 nanometers wavelength. This interface is not specified as part of the 10 Gigabit Ethernet standard and is instead built according to Cisco specifications. FET-10G is supported only on fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.

SFP+ Copper Twinax direct-attach 10 Gigabit Ethernet pre-terminated cables are suitable for very short distances and offer a cost-effective way to connect within racks and across adjacent racks. Connectivity is typically between the server and a Top-of-Rack switch, such as Cisco Nexus 5000 Series switches, or between a server and a Cisco Nexus 2000 Series FCoE Fabric Extender. These cables can also be used for fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch.

QSFP Transceiver Modules

The Cisco 40GBASE QSFP (Quad Small Form-Factor Pluggable) hot-swappable input/output device portfolio (see Figure 11-22) offers a wide variety of high-density and low-power 40 Gigabit Ethernet connectivity options for the data center.

Figure 11-22 Cisco 40GBASE QSFP Transceiver

It offers flexibility of interface choice for different distance requirements and fiber types. The following options exist for 40 Gigabit Ethernet QSFP transceivers during the time of writing this book:

QSFP 40Gbps Bidirectional (BiDi) 40 Gigabit Ethernet short-range cost-effective transceiver is a pluggable optical transceiver with a duplex LC connector interface over 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. BiDi transceiver complies with the QSFP MSA specification, enabling its use on all QSFP 40 Gigabit Ethernet platforms, and its high-speed electrical interface is compliant with the IEEE 802.3ba standard, while being interoperable with other IEEE-compliant 40GBASE interfaces. Each QSFP 40 Gigabit Ethernet BiDi transceiver consists of two 20 Gigabit Ethernet transmit and receive channels enabling an aggregated 40 Gigabit Ethernet link over a two-strand multimode fiber connection, which enables reuse of existing 10 Gigabit Ethernet duplex multimode cabling infrastructure for migration to 40 Gigabit Ethernet connectivity. BiDi transceivers offer a compelling solution. Figure 11-23 depicts the use of wavelength multiplexing technology allowing 40 Gigabit Ethernet connectivity across just a pair of multimode fiber strands.

Figure 11-23 Cisco QSFP BiDi Wavelength Multiplexing

QSFP-40G-SR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. It primarily enables high-bandwidth 40 Gigabit Ethernet optical links over 12-fiber parallel fiber terminated with MPO/MTP multifiber connectors. It can also be used in a 4x10 Gigabit Ethernet mode for interoperability with 10GBASE-SR interfaces up to 100 meters (~330 feet) and 150 meters (~500 feet) over OM3 and OM4 fibers, respectively. The 4x10 Gigabit Ethernet connectivity is achieved using an external 12-fiber parallel to 2-fiber duplex breakout cable, which connects the 40GBASE-SR4 module to four 10GBASE-SR optical interfaces.

QSFP-40G-CSR4 QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It extends the reach of the IEEE 40GBASE-SR4 interface to 300 meters (~1000 feet) and 400 meters (~0.25 miles) over laser-optimized OM3 and OM4 multimode parallel fiber, respectively. Each 10 Gigabit lane of this module is compliant with IEEE 10GBASE-SR specifications. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber parallel cables with MPO/MTP connectors or in a 4x10 Gigabit Ethernet mode with parallel to duplex fiber breakout cables for connectivity to four 10GBASE-SR interfaces.

40GBASE-CR4 QSFP to QSFP direct-attach 40 Gigabit Ethernet short-range pre-terminated cables offer a highly cost-effective way to establish a 40 Gigabit Ethernet link between QSFP ports of Cisco switches within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. Cisco offers both active and passive copper cables. Figure 11-24 depicts short range pre-terminated 40 Gigabit Ethernet QSPF to QSFP copper cable.

Figure 11-24 Cisco 40GBASE-CR4 QSFP Direct-Attach Copper Cables

Active optical cables (AOC) are much thinner and lighter than copper cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. Figure 11-25 depicts short range pre-terminated 40Gb QSPF to QSFP active optical cable.

Figure 11-25 Cisco 40GBASE-CR4 QSFP Active Optical Cables

QSFP-40G-LR4 QSFP 40 Gigabit Ethernet long-range transceiver module supports link lengths of up to 10 kilometers (~6.2 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths, and multiplexing and de-multiplexing of the four wavelengths are managed within the device. Both QSFP-40G-LR4 and QSFP-40GE-LR4 operate over 1310 nanometer single-mode fiber. QSFP-40GE-LR4 QSFP 40 Gigabit Ethernet long-range transceiver supports 40 Gigabit Ethernet rate only, whereas the QSFP-40G-LR4 supports OTU3 data rate in addition to 40 Gigabit Ethernet rate.

WSP-Q40GLR4L QSFP 40 Gigabit Ethernet long-range transceiver supports link lengths of up to 2 kilometers (~1.25 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths, and multiplexing and de-multiplexing of the four wavelengths are managed within the device. It is interoperable with QSFP-40G-LR4 for distances up to 2 kilometers (~1.25 miles).

FET-40G QSFP 40 Gigabit Ethernet cost-effective short-range transceiver connects links between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco parent switch, such as Cisco Nexus 5000 Series switches. The interconnect will work over parallel 850 nanometer, 50 micron core size laser-optimized OM3 and OM4 multimode fiber across distances of up to 100 meters (~330 feet) and 150 meters (~500 feet), respectively. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber ribbon cables with MPO/MTP connectors or in 4x10Gb mode with parallel to duplex fiber breakout cables for connectivity to four FET10G interfaces.

QSFP to four SFP+ Direct-Attach 40 Gigabit Ethernet pre-terminated breakout cables are suitable for very short distances and offer a highly cost-effective way to connect within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. These breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-26 depicts the 40 Gigabit Ethernet copper breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSPF switch interface.

Figure 11-26 Cisco 40Gb QSFP to Four SFP+ Copper Breakout Cable

Active optical breakout cables (AOC) are much thinner and lighter than copper breakout cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. These active optical breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-27 depicts the 40 Gigabit Ethernet active optical breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSPF switch interface.

Figure 11-27 Cisco 40Gb QSFP to Four SFP+ Active Optical Breakout Cable

Unsupported Transceiver Modules

Cisco highly discourages the use of non-officially certified and approved transceivers inside Cisco network devices. When the switch detects a non-Cisco certified transceiver, the following warning message will be displayed on the terminal:

Caution: When Cisco determines that a fault or defect can be traced to the use of third-party transceivers installed by a customer or reseller, then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program, such as SMARTnet service.

When a product fault or defect occurs in the network, and Cisco concludes that the fault or defect is not attributable to the use of third-party transceivers (or other non-Cisco components) installed inside Cisco network devices, Cisco will continue to provide support for the affected product under warranty or covered by a Cisco support program. If a product fault or defect is detected and Cisco believes the fault or defect can be traced to the use of third-party transceivers (or other non-Cisco components), then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program. In the course of providing support for a Cisco networking product, Cisco may require that the end user install Cisco transceivers if Cisco determines that removing third-party parts will assist Cisco in diagnosing the cause of a support issue.

Delivering FCoE Using Cisco Fabric Extender Architecture

Cisco Fabric Extender (FEX) technology delivers an extensible and scalable fabric solution, which complements Cisco Unified Fabric strategy. It is based on the IEEE 802.1BR (Bridge Port Extension) standard and enables the following main benefits:

Common, scalable, and adaptive server and server-adapter agnostic architecture across data center racks and points of delivery (PoD)

Simplified operations with investment protection by delivering new features and functionality on the parent switches with automatic inheritance for physical and virtual server-attached infrastructure

Single point of management and policy enforcement across thousands of network access ports

Scalable 1 Gigabit Ethernet and 10 Gigabit Ethernet server access, with no reliance on Spanning Tree Protocol for logical loop-free topology

Low and predictable latency at scale for any-to-any traffic patterns

Extended and expanded switching fabric access layer all the way to the server hypervisor for virtualized servers or to the server converged network interface card in case of nonvirtualized servers

Reduced power and cooling needs

Overall reduced operating expenses and capital expenditures

Cisco Fabric Extender technology is delivered in the four following architectural choices depicted in Figure 11-28.

Figure 11-28 Cisco Fabric Extender Architectural Choices

1. Cisco Nexus 2000 Series Fabric Extenders (FEX): Extends the fabric to the top of rack

2. Cisco Adapter Fabric Extender (Adapter-FEX): Extends the fabric into the server

3. Cisco Virtual Machine Fabric Extender (VM-FEX): Extends the fabric into the hypervisor for virtual machines

4. Cisco UCS 2000 Series Fabric Extender (I/O module): Extends the fabric into the Cisco UCS Blade Chassis

Cisco Fabric Extender architectural choices are not mutually exclusive, and it is possible to have cascading Fabric Extender topologies deployed; for example, leverage Adapter-FEX on the server to connect to Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch. More on that later in this chapter.

FCoE and Cisco Nexus 2000 Series Fabric Extenders

Extending FCoE to the top of rack by leveraging Cisco Nexus 2000 Series Fabric Extenders creates a scale-out Unified Fabric solution. At the time of writing this book, the following Cisco Nexus 2000 Series Fabric Extender models support FCoE: 2232PP, 2232TM, 2232TM-E, 2248PQ, 2348UPQ, and 2348TQ. Collectively, they are referred to as Cisco Nexus 2000 Series FCoE Fabric Extenders, or FCoE Fabric Extenders for simplicity.

Cisco Nexus 2000 Series FCoE Fabric Extenders operate as remote line cards to the Cisco Nexus 5000 Series parent switch. All operation, administration, and management functions are performed on the parent switch. Physical host interfaces on a Cisco Nexus 2000 Series FCoE Fabric Extender are represented as logical interfaces on the Cisco Nexus 5000 Series parent switch. FCoE Fabric Extenders have no local switching capabilities, so any traffic, even between the two ports on the same FCoE Fabric Extender, must be forwarded to the parent switch for forwarding.

Note: To reduce the load on the control plane of the parent device, Cisco NX-OS provides the capability to offload link-level protocol processing to the FCoE Fabric Extender CPU. The following protocols are supported: Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol (CDP), Data Center Bridging Exchange (DCBX), and Link Aggregation Control Protocol (LACP).

Cisco Nexus 2000 Series FCoE Fabric Extenders support optimal cabling strategy that simplifies network operations:

Short intra-rack server cable runs: Twinax or fiber cables connect servers to top-of-rack Cisco Nexus 2000 Series FCoE Fabric Extenders.

Longer inter-rack horizontal runs of fiber: Cisco Nexus 2000 Series FCoE Fabric Extenders in each rack are connected to parent switches that are placed at the end or middle of the row.

This model allows simple cabling and easier rollout of racks and PODs. Figure 11-29 depicts the optimal cabling strategy made available by leveraging Cisco Fabric Extender architecture for Top-of-Rack deployment.

Figure 11-29 FCoE Fabric Extenders Cabling Strategy

Each rack could have two FCoE Fabric Extenders to provide in-rack server connectivity redundancy. If the rack server density is low enough, FCoE Fabric Extenders can also be deployed every few racks to accommodate the required switch interface capacity, which can influence power consumption and cooling, and as such should be carefully considered. In that case, server cabling will be extended beyond a single rack (for the racks where FCoE Fabric Extenders are absent).

Two attachment topologies exist when connecting Cisco Nexus 2000 Series FCoE Fabric Extenders to Cisco Nexus 5000 Series parent switch: straight-through and vPC.

FCoE and Straight-Through Fabric Extender Attachment Topology

In the straight-through Fabric Extender attachment topology, each FCoE Fabric Extender is connected to a single Cisco Nexus parent switch. This parent switch exclusively manages the ports on the FCoE Fabric Extender and provides its features. Connection could be in a form of a single link or a port channel, up to a maximum number of uplinks available on the FCoE Fabric Extender. For example, Cisco Nexus 2232 FCoE Fabric Extender has eight 10 Gigabit Ethernet fabric uplinks, also known as network interfaces (NIF). Figure 11-30 depicts straight-through FCoE Fabric Extender attachment topology.

Figure 11-30 Straight-Through FCoE Fabric Extender Attachment Topology
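As a minimal configuration sketch of a straight-through attachment over a port channel (interface ranges and the FEX number 100 are illustrative), the association is performed on the single parent switch along these lines:

nexus5000(config)# fex 100
nexus5000(config-fex)# description FEX0100
nexus5000(config)# interface ethernet 2/1-4
nexus5000(config-if-range)# channel-group 100
nexus5000(config)# interface port-channel 100
nexus5000(config-if)# switchport mode fex-fabric
nexus5000(config-if)# fex associate 100

This mirrors the vPC variant shown later in Example 11-4, minus the vpc command, because only one parent switch owns the Fabric Extender in this topology.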

As long as a single uplink between the FCoE Fabric Extender and the parent switch exists, FCoE Fabric Extender will keep being registered, and its front panel interfaces, also known as host interfaces (HIF), will keep being up on the parent switch. Example 11-2 shows FCoE Fabric Extender registration information as displayed on the Cisco Nexus 5000 Series parent switch.

Example 11-2 Displaying FCoE Fabric Extender Registration Info on the Parent Switch

Click here to view code image

nexus5000# show fex 100
FEX: 100 Description: FEX0100 state: Online
FEX version: 6.0(2)N1(1) [Switch version: 6.0(2)N1(1)]
Extender Serial: FOC1616R00Q
Extender Model: N2K-C2248PQ-10GE, Part No: 73-14775-02
Pinning-mode: static    Max-links: 1
Fabric port for control traffic: Eth2/3
FCoE Admin: false
FCoE Oper: true
FCoE FEX AA Configured: false
Fabric interface state:
  Po100 - Interface Up. State: Active
  Eth2/1 - Interface Up. State: Active
  Eth2/2 - Interface Up. State: Active
  Eth2/3 - Interface Up. State: Active
  Eth2/4 - Interface Up. State: Active

The number of uplinks also determines the oversubscription levels introduced by the FCoE Fabric Extender into server traffic.
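As a quick sketch of the arithmetic behind such oversubscription figures (assuming 10 Gb host interfaces on a Cisco Nexus 2232): 32 host interfaces x 10 Gb = 320 Gb of potential server traffic. With all eight 10 Gb fabric uplinks active, 320 Gb / 80 Gb works out to 4:1 oversubscription; with only four uplinks, 320 Gb / 40 Gb gives 8:1, and so on.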

For example, if all host interfaces on the Cisco Nexus 2232 FCoE Fabric Extender are connected to 1 Gb servers, Table 11-4 shows the oversubscription levels that each server will experience as its traffic passes through the Fabric Extender.

Table 11-4 Number of Fabric Extender Uplinks and Oversubscription Levels

It is recommended that you provision as many fabric uplinks as possible to limit the potential adverse effects of the oversubscription. For FCoE traffic, higher oversubscription levels will translate into more PAUSE frames and overall slower storage connectivity.

Note: It is recommended to use a power of 2 to determine the amount of FCoE Fabric Extender fabric uplinks. This facilitates better port channel hashing and more even traffic distribution across the FCoE Fabric Extender fabric uplinks.

With straight-through FCoE Fabric Extender attachment topology, you can choose to leverage either static or dynamic pinning. With static pinning mode, a range of host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender toward the servers are statically pinned to a given fabric uplink port. In the case of Cisco Nexus 2232 FCoE Fabric Extenders, static-pinning mapping looks like what's shown in Table 11-5.

Table 11-5 Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

It is also possible to display the static pinning mapping by issuing the show interface ethernet X/Y fex-intf command on the Cisco Nexus 5000 Series parent switch command-line interface, where X/Y indicates the interface toward the Cisco Nexus 2000 Series FCoE Fabric Extender, as shown in Example 11-3.

Example 11-3 Display Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

Click here to view code image

nexus5000# show interface ethernet 1/1 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/1          Eth100/1/4    Eth100/1/3    Eth100/1/2
                Eth100/1/1

nexus5000# show interface ethernet 1/8 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/8          Eth100/1/32   Eth100/1/31   Eth100/1/30
                Eth100/1/29

The essence of the static pinning operation is that if Cisco Nexus 2000 Series FCoE Fabric Extender's fabric uplink interface toward the Cisco Nexus 5000 Series parent switch fails, FCoE Fabric Extender will disable the server facing host interfaces, which are pinned to that specific fabric uplink interface. For example, if the second fabric uplink interface on the Cisco Nexus 2232 FCoE Fabric Extender fails, host interfaces 5 to 8 pinned to that fabric uplink interface will be brought down, and the servers, which are connected to these host interfaces, will see their Ethernet link go down. If a server is singly connected, it will lose its network connectivity until FCoE Fabric Extender uplink is restored or until the server is physically replugged into an operational Ethernet port. If the servers are dual-homed to two different Cisco Nexus 2000 Series FCoE Fabric Extenders, or if the servers are dual-homed to the different pinning groups on the same Cisco Nexus 2000 Series FCoE Fabric Extender, a failover event will be triggered on the server, in which case the server will failover to the redundant network connection. Figure 11-31 depicts server connectivity failover in Cisco Nexus 2232 FCoE Fabric Extender static pinning deployment.

Figure 11-31 FCoE Fabric Extender Static Pinning and Server NIC Failover
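The pinning mode and the number of pinned uplinks visible in Example 11-2 (Pinning-mode: static, Max-links: 1) are controlled under the Fabric Extender configuration. A minimal sketch with illustrative values follows; note that changing max-links is disruptive to traffic passing through the Fabric Extender:

nexus5000(config)# fex 100
nexus5000(config-fex)# pinning max-links 8

With eight uplinks pinned on a Cisco Nexus 2232, each uplink serves a group of four host interfaces, which is the mapping shown in Table 11-5.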

Static pinning configuration on the Cisco Nexus 2232 FCoE Fabric Extender results in a 4:1 oversubscription ratio, where each group of four 10 Gb FCoE Fabric Extender host interfaces shares the bandwidth of one 10 Gb FCoE Fabric Extender uplink. The possible downside of static pinning is that it relies on healthy server network connectivity failover operation, which should be regularly verified.

With dynamic pinning mode, a port channeled fabric uplink interface is leveraged to pass the traffic between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco Nexus 5000 Series parent switch, and as such, a failure of a single port channel member interface does not bring down any host facing interfaces on the FCoE Fabric Extender. If an individual port channel fabric uplink member interface failure occurs, server traffic crossing the FCoE Fabric Extender is hashed onto the remaining member interfaces of the uplink port channel. This results in decreased uplink bandwidth, which translates to higher oversubscription levels for the server traffic. Figure 11-32 depicts dynamic pinning behavior on the Cisco Nexus 2232 FCoE Fabric Extender.

Figure 11-32 FCoE Fabric Extender Dynamic Pinning and Server NIC Failover

The most significant difference between static and dynamic pinning modes is that static pinning mode allows maintaining the same oversubscription levels for the traversing server traffic, even in the case of FCoE Fabric Extender uplink interface failure, while in case of dynamic pinning, the oversubscription levels will rise with any failed individual FCoE Fabric Extender uplink interface.

FCoE and vPC Fabric Extender Attachment Topology

In the vPC FCoE Fabric Extender attachment topology, each FCoE Fabric Extender is connected to both Cisco Nexus parent switches. This connection could be in a form of a single link or a port channel, up to a maximum number of uplinks available on the FCoE Fabric Extender.

Both parent switches manage the ports on that FCoE Fabric Extender and provide its features. As long as a single uplink exists between the FCoE Fabric Extender and any one of its two parent switches, FCoE Fabric Extender will keep being registered, and its front panel interfaces will keep being up on the parent switches. As in the case of straight-through FCoE Fabric Extender attachment, the number of uplinks determines the oversubscription levels introduced by the FCoE Fabric Extender for server traffic. Figure 11-33 depicts FCoE Fabric Extender attachment topology leveraging vPC topology with either single or port-channeled fabric uplink interfaces toward each of the Cisco Nexus 5000 Series parent switches.

Figure 11-33 vPC FCoE Fabric Extender Attachment Topology

Because in case of vPC FCoE Fabric Extender attachment topology both Cisco Nexus 5000 Series parent switches control the FCoE Fabric Extender, they must be placed in a vPC domain. Example 11-4 shows an example of Cisco Nexus 5000 Series parent switch configuration of the FCoE Fabric Extender uplink port channel for vPC attached FCoE Fabric Extender.

Example 11-4 Fabric Extender Uplink Port Channel for vPC Attached FCoE Fabric Extender

Click here to view code image

nexus5000-1(config)# interface e1/1-4
nexus5000-1(config-if-range)# channel-group 100
nexus5000-1(config)# interface po100
nexus5000-1(config-if)# vpc 100
nexus5000-1(config-if)# switchport mode fex-fabric
nexus5000-1(config-if)# fex associate 100

Relevant configuration parameters must be identical on both parent switches. Cisco Nexus 5000 Series parent switches automatically check for compatibility of some of these parameters on the vPC interfaces.

The per-interface parameters must be consistent per interface, and the global parameters must be consistent globally.

Port channel mode (on, off, or active)

Link speed per port channel

Duplex mode per port channel

Trunk mode per port channel (native VLAN)

Spanning Tree Protocol mode

Spanning Tree Protocol region configuration for Multiple Spanning Tree (MST) Protocol

Enable or disable state per VLAN

Spanning Tree Protocol global settings

Spanning Tree Protocol interface settings

Quality of service (QoS) configuration and parameters (PFC, DWRR, MTU)

Note: The two parent switches are configured independently; therefore, configuration changes that are made to host interfaces on a vPC attached FCoE Fabric Extender must be manually synchronized between the two parent switches. To simplify the configuration tasks, Cisco Nexus 5000 Series parent switches support a configuration-synchronization feature, which allows the configuration for host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender to be synchronized between the two parent switches.
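The configuration-synchronization feature mentioned in the Note is driven through a switch profile. A minimal sketch, assuming the illustrative profile name fex-hosts and peer management address 10.1.1.2, with a matching profile created on the peer switch (depending on the release, CFS distribution over IP may also need to be enabled first):

nexus5500-1# configure sync
nexus5500-1(config-sync)# switch-profile fex-hosts
nexus5500-1(config-sync-sp)# sync-peers destination 10.1.1.2

Host interface configuration entered under the switch profile is then verified and committed to both parent switches, rather than being typed twice.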

vPC FCoE Fabric Extender attachment topology does not support static pinning, so dynamic pinning is automatically leveraged instead.

High availability is an essential requirement in most data center designs, whether it is accomplished at the port level, the supervisor level, or even at the physical network level. To achieve connectivity redundancy, FCoE servers can use a port channeled connection toward a pair of upstream Cisco Nexus 2000 Series FCoE Fabric Extenders attached to two different Cisco Nexus parent switches, such as Cisco Nexus 5500 switches, put in a vPC domain. This setup creates a dual-layer vPC between servers, FCoE Fabric Extenders, and Cisco Nexus parent switches. It is referred to as Enhanced vPC.

Note: Cisco Nexus 5000 (5010 and 5020) switches do not support Enhanced vPC.

In this case, even though network traffic is dual homed between the FCoE Fabric Extender and a parent switch pair, the FCoE traffic must be single homed to maintain SAN Side A / Side B isolation. Figure 11-34 depicts Enhanced vPC topology where each FCoE Fabric Extender is pinned to only a single Cisco Nexus 5500 parent switch for the FCoE traffic, while network traffic is allowed to use both parent switches.

Figure 11-34 FCoE Traffic Pinning in Enhanced vPC Topology

With Enhanced vPC topology, restricting FCoE traffic to a single Cisco Nexus parent switch is called Fabric Extender pinning. This way, SAN A and SAN B separation is achieved. Example 11-5 shows the command-line interface configuration required to pin the FCoE Fabric Extender to only a single Cisco Nexus 5500 parent switch for the FCoE traffic in Enhanced vPC topology.

Example 11-5 FCoE Traffic Pinning in Enhanced vPC Topology Configuration

Click here to view code image

nexus5500-1(config)# fex 101
nexus5500-1(config-fex)# fcoe

nexus5500-2(config)# fex 102
nexus5500-2(config-fex)# fcoe

In this configuration, although FEX 101 and FEX 102 are connected and registered to both N5500-1 and N5500-2, the FCoE traffic from FEX 101 is sent only to N5500-1, and the FCoE traffic from FEX 102 is sent only to N5500-2. This is also true for the return FCoE traffic from the Cisco Nexus 5500 parent switches toward the Cisco Nexus 2000 Series FCoE Fabric Extenders where FCoE servers are connected. As a result, traffic load on the FCoE Fabric Extender uplink ports is not evenly distributed, because FCoE traffic from a Fabric Extender is pinned to only a single Cisco Nexus parent switch for SAN side separation.
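One way to confirm the FCoE state of a Fabric Extender is the same show fex output seen in Example 11-2, which includes the FCoE Admin and FCoE Oper fields reporting the administrative and operational FCoE state. A sketch of what might be checked after applying Example 11-5 (output abbreviated and illustrative):

nexus5500-1# show fex 101
FEX: 101 Description: FEX0101 state: Online
...
FCoE Admin: true
FCoE Oper: true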

To prevent such an imbalance of traffic distribution across FCoE Fabric Extender uplinks, you can leverage straight-through topology for FCoE deployments, as discussed earlier in this chapter. Straight-through FCoE Fabric Extender topology also allows easier control of how the bandwidth of FCoE Fabric Extender uplink is shared between the network and FCoE traffic by configuring an ingress and egress queuing policy. The default QoS template allocates half of the bandwidth to each one, so network and FCoE traffic each get half of the guaranteed bandwidth in case of congestion. When there is no congestion, each type of traffic can take all available bandwidth. Enough FCoE Fabric Extender uplink bandwidth should be provisioned to prevent the undesirable oversubscription of links that carry both FCoE and network traffic.
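On the Cisco Nexus 5000 Series, such a bandwidth split is expressed through queuing policies that assign bandwidth percentages to the FCoE and default traffic classes. The following is a minimal sketch of such a policy; the policy name is illustrative, the percentages simply mirror the default half-and-half template described above, and the attach point (for example, under system qos) should be verified for the platform and release:

nexus5000(config)# policy-map type queuing unified-uplink
nexus5000(config-pmap-que)# class type queuing class-fcoe
nexus5000(config-pmap-c-que)# bandwidth percent 50
nexus5000(config-pmap-que)# class type queuing class-default
nexus5000(config-pmap-c-que)# bandwidth percent 50

Because the percentages are guarantees rather than caps, either class can still consume the full link bandwidth while the other is idle.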

FCoE and Server Fabric Extenders

Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. We are now going to further extend those advantages into the servers, leveraging Adapter-FEX technology.

Adapter-FEX

Cisco Adapter-FEX technology virtualizes the PCIe bus, presenting a server operating system with a set of virtualized adapters for network and storage connectivity. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment.

Figure 11-35 Adapter-FEX Server Topologies

The Cisco Adapter-FEX technology addresses these main challenges:

Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. This results in a reduced number of network and storage switch ports, as well as the server physical adapters. Virtualization technologies and the increased number of CPU cores, however, require servers to be equipped with a large number of network connections. This increase has a tremendous impact on CapEx and OpEx because of the large number of adapters, cables, and switch ports, which directly affects power, cooling, and cable management costs.

In virtualized environments, network administrators experience lack of visibility into the traffic that is exchanged among virtual machines residing on the same host. They also often encounter a dramatic increase in the number of management points, disparate provisioning, management, and operational models, and inconsistency between the physical and virtual data center access layers.

With a consolidated infrastructure, it is challenging to provide guaranteed bandwidth, latency, and isolation of traffic across multiple cores and virtual machines. Network administrators face challenges in establishing policies, enforcing policies, and maintaining policy consistently across mobility events. Network administrators struggle to link the virtualized network infrastructure and virtualized server platforms to the physical network infrastructure without degrading performance and with a consistent feature set and operational model.

Virtual Ethernet Interfaces

By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual network interface card (vNIC) server interface mapped to the virtual Ethernet (vEth) interface on the upstream Cisco Nexus 5500 switch. Features, such as ACLs, private VLANs, QoS, and so on, available on the Cisco Nexus 5500 switches can be equally applied on the vEth interfaces representing vNICs. The server can either be directly connected to the Cisco Nexus 5500 switch or it can be connected through the Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch.

No licensing requirement exists on the Cisco Nexus 5500 switch to run Adapter-FEX; however, it does require feature installation and activation, as shown in Example 11-6.

Example 11-6 Installing and Activating the Cisco Adapter-FEX Feature on Cisco Nexus 5500 Switches

Click here to view code image

nexus5500(config)# install feature-set virtualization
nexus5500(config)# feature-set virtualization

Note: Cisco Nexus 5000 (5010 and 5020) switches do not support Adapter-FEX.

Before vEth interfaces can be created, the physical Cisco Nexus 5500 switch interface the server is connected to must be configured with VN-Tag mode. Data Center Bridging Exchange (DCBX) protocol must also run between the Cisco Nexus 5500 switch interface and the connected server. Before VN-Tag mode is activated, the network adapter on the server operates in a Classical Ethernet mode, in which case it has full network connectivity via its physical (nonvirtualized) interface to the upstream Cisco Nexus 5500 switch or Cisco Nexus 2000 Series FCoE Fabric Extender. Example 11-7 depicts command-line interface configuration required to enable VN-Tag mode on the switch interface or the FCoE Fabric Extender host interface.

Example 11-7 Switch Interface VN-Tag Mode Configuration

Click here to view code image

nexus5500-1(config)# int eth101/1/1
nexus5500-1(config-if)# switchport mode vntag

nexus5500-1(config)# int eth1/1
nexus5500-1(config-if)# switchport mode vntag

After VN-Tag mode has been activated, the server starts to include Network Interface Virtualization (NIV) capability type-length-value (TLV) fields in its Data Center Bridging Exchange (DCBX) advertisements. Be sure to reboot the server to switch Converged Network Adapter operation from Classical Ethernet to Network Interface Virtualization. For the Cisco UCS C-series servers, to validate that the Converged Network Adapter, that is, the UCS Virtual Interface Card, is operating in NIV mode (supporting Converged Network Adapter virtualization), you can inspect the Cisco UCS Integrated Management Controller (CIMC) tool and observe the Encap type of NIV listed for the physical adapter, as depicted in Figure 11-36.

Figure 11-36 Displaying Network Adapter Mode of Operation on Cisco UCS C-Series VIC

It is important to distinguish between static and dynamic vEth interface provisioning, as well as between fixed and floating vEth interfaces. With a static vEth switch interface, the network administrator manually configures interface parameters. When creating a static vEth interface, the network administrator specifies which channel a given vEth interface uses. It is possible to associate configuration to the interface before it is even brought up, that is, before the server administrator completes all the necessary configuration tasks on the server side to bring up the vEth-to-vNIC relationship. The server administrator must define a vNIC on the Converged Network Adapter with the same channel number.

Example 11-8 shows vEth configuration and the channel binding. In this configuration, the server is connected to a Cisco Nexus 2000 Series FCoE Fabric Extender host interface Ethernet 101/1/1, which is bound to a vEth interface 10 on the upstream Cisco Nexus 5500 switch, leveraging channel 50. The vNIC on the Converged Network Adapter also must be configured to use channel 50 for successful binding.

Example 11-8 Static vEth Interface Provisioning

Click here to view code image

nexus5500(config)# interface veth10
nexus5500(config-if)# bind interface ethernet 101/1/1 channel 50
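Once the server-side vNIC is configured with the matching channel, the binding can be checked from the switch. A sketch follows, with the output abbreviated and illustrative:

nexus5500# show interface vethernet 10
Vethernet10 is up
  Bound Interface is Ethernet101/1/1

The vEth interface stays down until a vNIC using channel 50 appears on the bound interface, which is the server-side half of the relationship described above.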

It is required to have a unique channel number given to each network interface. Such manual provisioning process could be prone to human error and, as such, the dynamic provisioning model is preferred.

The dynamic provisioning model of Adapter-FEX is based on the concept of port profiles of type vethernet. Port profiles provide a template configuration, which can be applied identically to multiple switch interfaces. Adapter-FEX uses a vethernet type port profile to interface with the server administrative provisioning tool. Example 11-9 depicts an example of a vethernet port profile setting port mode to access and assigning it to VLAN 10.

Example 11-9 vethernet Port Profile for Adapter-FEX

Click here to view code image

port-profile type vethernet NIC-VLAN10
  switchport access vlan 10
  switchport mode access
  state enabled

Port profiles are configured on the Cisco Nexus 5500 switch and are communicated to the Converged Network Adapter on the server through the VIC protocol. The server administrator can also leverage Converged Network Adapter management tools, such as UCS Cisco Integrated Management Controller, to associate port profiles to vNICs, which in turn triggers the dynamic creation of the vEth interface on the upstream Cisco Nexus 5500 switch. Example 11-10 shows a dynamic provisioning model to instantiate two vEth interfaces on the Cisco Nexus 5500 switch leveraging port profiles.

Example 11-10 Dynamic vEth Interface Provisioning

Click here to view code image

nexus5500(config)# interface vethernet32769
nexus5500(config-if)# inherit port-profile NIC-VLAN50
nexus5500(config-if)# bind interface Ethernet105/1/2 channel 1

nexus5500(config)# interface vethernet32770
nexus5500(config-if)# inherit port-profile NIC-VLAN60
nexus5500(config-if)# bind interface Ethernet105/1/2 channel 2

The interface numbering for dynamically instantiated vEth interfaces is always above 32768. This enables the network administrator to use the lower range for statically provisioned vEth interfaces without being concerned about interface number overlap. To preserve this dynamically instantiated vEth interface across reboots, switch running configuration must be saved by issuing the copy running-config startup-config CLI command. This way, upon reboot of the Cisco Nexus 5500 switch, the association of the vEth interface to the physical switch interface and the channel will be already present.

A fixed vEth interface is a virtual interface, which does not support migration across physical switch interfaces. This could present an issue in server virtualization environments, where each vNIC in the Converged Network Adapter might be associated with the vNIC of the virtual machine. Virtual machines often migrate from one physical server to another, causing them to be connected through different physical switch interfaces.

A virtual vEth interface that "migrates" across the different physical switch interfaces along with a virtual machine is called a floating vEth interface.

FCoE and Virtual Fibre Channel Interfaces

By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual Host Bus Adapter (vHBA) server interface mapped to the virtual Fibre Channel (vFC) interface on the upstream Cisco Nexus 5500 switch. All Fibre Channel properties, such as addressing, zoning, and so on, apply to virtual Fibre Channel interfaces just as they apply to physical Fibre Channel interfaces. The vFC interfaces on the Cisco Nexus 5500 switches are equivalent to the regular Fibre Channel interfaces, so they can be shut down, resulting in shutting down of the corresponding server vHBA interface. Shutting down the server vHBA interface has no effect on the server vNIC interfaces. Remember, vNICs and vHBAs are separate logical entities, even when instantiated on the same Converged Network Adapter on the server side.

A virtual Fibre Channel interface must be bound before it can be used. The binding is to a physical switch interface when the server is directly connected to the Cisco Nexus 5500 switch or the Cisco Nexus 2000 Series FCoE Fabric Extender, to a port channel interface when the server is connected to the Cisco Nexus 5500 switches over a vPC, or to a MAC address when the server is remotely connected over a Layer 2 bridge.

Example 11-11 shows configuration of binding virtual Fibre Channel interface for Side A fabric, mapping the vHBA defined on the server to the vEth interface defined on Cisco Nexus 5500 switch. Side B fabric configuration applied on the other Cisco Nexus 5500 switch is identical. VN-Tag channel 10 is leveraged on both sides.

Example 11-11 virtual Fibre Channel Interface Binding

Click here to view code image

nexus5500-SideA# configure terminal
nexus5500-SideA(config)# interface ethernet 1/1
nexus5500-SideA(config-if)# switchport mode vntag

nexus5500-SideA(config)# interface veth 10
nexus5500-SideA(config-if)# bind interface eth 1/1 channel 10
nexus5500-SideA(config-if)# switchport mode trunk
nexus5500-SideA(config-if)# switchport trunk allowed vlan 50

nexus5500-SideA(config)# interface vfc 10
nexus5500-SideA(config-if)# bind interface veth 10
nexus5500-SideA(config-if)# no shutdown

In the previous example, interface Ethernet1/1 is connected to a server with virtualized Converged Network Adapter. The vEth interface is in turn bound to the vFC interface, so any FCoE traffic generated by the server is sent from vHBA to the vFC.

Virtual Interface Redundancy

Given that the server can have two physical interfaces, interface 0 and interface 1, on its Converged Network Adapter, with network interface virtualization powered by Adapter-FEX, a single instantiated vNIC might be able to use both for the failover.

Traditionally, to achieve server network connectivity redundancy, server administrators had a choice between the Active/Standby and Active/Active models. With the Active/Standby model, one of the server NICs was chosen to actively pass network traffic to and from the network. Loss of network connectivity on the active server NIC forced a failover event, in which case the second server NIC became active, passing network traffic to and from the network. With the Active/Active model, all server NICs were typically bundled into a port channel, which was terminated on either one or two upstream switches. In the latter case, upstream switches had to support multilink aggregation features, such as virtual PortChannel. Figure 11-37 contrasts the two traditional server NIC connectivity redundancy models, one leveraging NIC failover and the other making use of vPC.

Figure 11-37 Server NIC Connectivity Redundancy Models

The use of the Adapter-FEX feature allows a different approach to network connectivity failover, one that does not involve the server operating system. This approach is known as fabric failover, and it is available on Cisco UCS blade and rack servers. With fabric failover, each vNIC can be associated with both physical interface 0 and the physical interface 1 on the virtualized Converged Network Adapter. The VN-Tag/channel provides the virtual connection between the vNIC in the Converged Network Adapter and the vEth interface on the upstream Cisco Nexus 5500 switch. There is also no need to run vPC between the upstream Cisco Nexus 5500 switches, which simplifies the setup even further. Figure 11-38 depicts fabric failover operation.

Figure 11-38 Fabric Failover Operation

For FCoE connectivity, instantiated vHBAs do not leverage fabric failover functionality. With Adapter-FEX, each vHBA is mapped to a separate server physical interface, and failover is handled in a more traditional way between the vHBA adapters. SAN best practices dictate physical separation of Side A/Side B fabrics for proper redundancy. In case of Side A failure, storage traffic fails over to Side B, and vice versa. This also implies that at least two vHBAs should be instantiated in the Converged Network Adapter to achieve FCoE connectivity redundancy. The Adapter-FEX feature instantiates two vHBA interfaces, each mapped to one of the two server physical interfaces in the Converged Network Adapter.

FCoE and Cascading Fabric Extender Topologies

So far, we discussed solutions for delivering scale-out FCoE deployment leveraging either Cisco Nexus 2000 Series FCoE Fabric Extenders or the Server Fabric Extenders in the form of Adapter-FEX. As mentioned earlier, these two solutions are not mutually exclusive, and they can be leveraged at the same time in a cascading fashion. Cisco Nexus 5500 switches enable the choice of topology for connecting FCoE servers with the Adapter-FEX feature through the Cisco Nexus 2000 Series FCoE Fabric Extenders. The following sections describe three of these choices:

Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in vPC topology

Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in straight-through topology with vPC domain

Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in straight-through topology (without vPC domain)

Adapter-FEX and FCoE Fabric Extenders in vPC Topology

In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a vPC topology to the upstream Cisco Nexus 5500 parent switches. At the same time, because FCoE Fabric Extenders are attached to both Cisco Nexus 5500 parent switches in vPC topology, they are in fact connected to both Side A and Side B SAN fabrics.

To follow SAN best practices with Side A/Side B separation, each Cisco Nexus 2000 Series FCoE Fabric Extender must belong to just one of the two fabrics, even though it is physically connected to both. This is achieved by pinning the FCoE Fabric Extender (for FCoE and FIP traffic) to only a single Cisco Nexus 5500 parent switch. The use of IEEE 802.1Q tags enables you to segregate network and storage traffic over the same link, and using FCoE initialization protocol (FIP) facilitates negotiation of the VLAN used for FCoE traffic. FCoE VLANs should also not be trunked over a vPC peer-link between the two Cisco Nexus 5500 parent switches to maintain Side A/Side B separation. Figure 11-39 depicts the use of Adapter-FEX and FCoE in a vPC Fabric Extender attachment topology.

Figure 11-39 Adapter-FEX and FCoE in vPC Fabric Extender Topology

The configuration is carried out by defining the Cisco Nexus 2000 Series FCoE Fabric Extender host interface as mode VN-Tag. vEth interface on the Cisco Nexus 5500 switch is then associated with the vHBA channel defined in the server Converged Network Adapter settings. The vEth interface used for sending Ethernet encapsulated Fibre Channel traffic (FCoE) must be defined as trunk. The vEth interface is then bound to the vFC interface. Example 11-12 shows the Virtual Fibre Channel interface configuration and binding for servers connected to Cisco Nexus 2000 Series FCoE Fabric Extender host interfaces in Side A fabric. Configuration of Side B fabric is practically identical, with the exception of using a different Cisco Nexus 2000 Series FCoE Fabric Extender and Cisco Nexus 5500 switch.

Example 11-12 Virtual Fibre Channel Interface Configuration and Binding

Click here to view code image

nexus5500# configure terminal
nexus5500(config)# fex 101
nexus5500(config-fex)# fcoe

nexus5500(config)# interface ethernet 101/1/1
nexus5500(config-if)# switchport mode vntag

nexus5500(config)# interface veth 10
nexus5500(config-if)# bind interface eth 101/1/1 channel 10
nexus5500(config-if)# switchport mode trunk
nexus5500(config-if)# switchport trunk allowed vlan 50

nexus5500(config)# interface vfc 10
nexus5500(config-if)# bind interface veth 10
nexus5500(config-if)# no shutdown

Server vNICs can leverage fabric failover for network traffic. For traffic load distribution, server Converged Network Adapter physical interface 0 and physical interface 1 can be utilized by vNICs in a round-robin fashion, meaning half of the vNICs can be bound to physical interface 0 as the primary path toward the network, and the other half of the vNICs can be bound to physical interface 1 as their primary path toward the network. Adapter-FEX instantiates two vHBA interfaces, each mapped to one of the two physical interfaces on the server Converged Network Adapter, but not both.

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC

In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while switches participate in a vPC domain. There is no need to use Fabric Extender pinning for Side A/Side B separation. Figure 11-40 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology with vPC on the upstream Cisco Nexus 5500 switches.

Server vNICs can leverage fabric failover for network traffic. For traffic load distribution, server Converged Network Adapter physical interface 0 and physical interface 1 can be utilized by vNICs in a round-robin fashion, meaning half of the vNICs can be bound to physical interface 0 as the primary path toward the network, and the other half of the vNICs can be bound to physical interface 1 as their primary path toward the network.

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while the switches participate in a vPC domain. Each FCoE Fabric Extender is connected to a single parent switch and therefore belongs to either the Side A or the Side B fabric, but not both. There is no need to use Fabric Extender pinning for Side A/Side B separation. Adapter-FEX instantiates two vHBA interfaces, each mapped to one of the two physical interfaces on the server Converged Network Adapter. Figure 11-40 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology with vPC on the upstream Cisco Nexus 5500 switches.

Figure 11-40 Adapter-FEX and FCoE in Straight-Through Fabric Extender Topology with vPC

FCoE VLANs should also not be trunked over a vPC peer-link between the two Cisco Nexus 5500 parent switches to maintain Side A/Side B separation. Server vNICs can leverage fabric failover for network traffic. For traffic load distribution, server Converged Network Adapter physical interface 0 and physical interface 1 can be utilized by vNICs in a round-robin fashion; that is, half of the vNICs can be bound to physical interface 0 as the primary path toward the network, and the other half of the vNICs can be bound to physical interface 1 as their primary path toward the network.

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology
In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while those switches do not participate in a vPC domain. Figure 11-41 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology without vPC on the upstream Cisco Nexus 5500 switches.

Figure 11-41 Adapter-FEX and FCoE in Straight-Through Fabric Extender Topology Without vPC

This case is similar to the previous one, with two distinct differences. First, the lack of a vPC peer link between Cisco Nexus 5500 switches simplifies configuration tasks and minimizes the risk of human error in the sense that Side A and Side B are isolated by design. Second, the lack of a vPC domain between upstream Cisco Nexus 5500 switches disallows the use of a port-channeled vPC connection from server vNICs. The server administrator should be aware of this condition, which is not a major concern because fabric failover provides superior redundancy and high availability compared to using port channels. Fabric failover does not rely on the upstream vPC domain.

Reference List
Data Center Bridging: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/ieee-802-1-data-center-bridging/index.html
Cisco Fabric Extender Technology: http://www.cisco.com/c/en/us/products/switches/nexus-2000-series-fabric-extenders/index.html
Adapter-FEX feature: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/adapter-fabric-extender-adapter-fex/index.html
Fibre Channel over Ethernet: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/fibre-channel-over-ethernet-fcoe/index.html

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 11-6 lists a reference for these key topics and the page numbers on which each is found.

Table 11-6 Key Topics for Chapter 11

Define Key Terms
Define the following key terms from this chapter, and check your answers in the Glossary:

Data Center Bridging
Adapter-FEX
Virtual Machine Fabric Extender (VM-FEX)
Priority Flow Control (PFC), IEEE 802.1Qbb
Enhanced Transmission Selection (ETS), IEEE 802.1Qaz
Quantized Congestion Notification (QCN), IEEE 802.1Qau
Data Center Bridging Exchange (DCBX)
Link Layer Discovery Protocol (LLDP), IEEE 802.1AB

Type-Length-Value (TLV)
Fibre Channel over Ethernet (FCoE)
virtual network interface card (vNIC)
virtual Host Bus Adapter (vHBA)
virtual Ethernet (vEth)
port profile
vEthernet
network interface card (NIC)
Converged Network Adapter (CNA)
VN-Tag

Chapter 12. Multihop Unified Fabric

This chapter covers the following exam topics:

Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath

Stretching Unified Fabric beyond the boundary of a single hop allows building a scalable, robust, and flexible converged infrastructure. Multihop deployment carries the advantages introduced by the Unified Fabric across the entire environment. In this chapter we look closer into the characteristics of the multihop Unified Fabric and examine various multihop topologies you can leverage for such deployment.

“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. Table 12-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 12-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter.

1. Which of the following Cisco Nexus 7000 Series switches I/O modules support FCoE multihop?
a. F2
b. F2e
c. F1
d. M2

2. What are the main drivers for access layer consolidation in the Unified Fabric environment?
a. Number of server adapters
b. Number of total IP phones
c. Number of discrete switching systems
d. Number of storage I/O operations

3. What do FIP snooping bridges do?
a. They snoop CDP and LLDP packets to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
b. They snoop FCoE data packets to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
c. They snoop FIP packets during the fabric login process to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
d. They snoop FIP packets to provide lossless transport over multiple Ethernet segments.

4. True or false? A switch running in FCoE-NPV mode proxies fabric logins toward the upstream FCoE Fibre Channel Forwarder.
a. True
b. False

5. Can Cisco Nexus 7000 Series switches be put into FCoE-NPV mode?
a. Yes
b. No
c. Yes, but only if they have 16 Gb Fibre Channel I/O modules
d. No, unless you issue the system fcoe-npv force command-line interface command

6. What do Cisco Nexus 7000 Series switches running FCoE-NPIV mode do?
a. Accept multiple fabric logins on the same physical interface
b. Proxy fabric logins toward the upstream Fibre Channel Forwarder switch
c. Have interfaces of a type VF_Port
d. Do not run Fibre Channel services

7. What is an FCoE port type on the aggregation layer switch running in full Fibre Channel switching mode toward the access layer switch also running in full Fibre Channel switching mode?
a. VF_Port
b. VE_Port
c. VNP_Port
d. VN_Port

8. What is the reason to leverage the dedicated wire approach in multihop Unified Fabric deployment between access and aggregation layer switches?
a. Better traffic oversubscription management for the storage traffic with lesser dependency on features such as Priority Flow Control
b. Decoupling of topological dependencies between network and storage traffic; that is, the data network can employ any technology or protocol of choice without influencing the storage network
c. Gradual introduction of FCoE into the larger environment while providing investment protection
d. All of the above

9. With Dynamic FCoE, the FCoE traffic is forwarded over which of the following?
a. FabricPath core port
b. vPC Peer-Link
c. vPC orphan port
d. FabricPath FCoE port

10. True or false? In Dynamic FCoE topology, FabricPath spine nodes run in full Fibre Channel switching mode.
a. True
b. False

Foundation Topics
First Hop Access Layer Consolidation
Multihop Unified Fabric carries network and storage traffic in a converged manner across an Ethernet medium beyond a single-hop topology. Various topologies exist for deploying multihop Unified Fabric. These topologies offer consolidation in the different tiers of the data center switching design. The most common case occurs with consolidation at the access layer, where switching elements can be deployed in the Top-of-Rack (TOR) or in the Mid/End-of-Row (MOR/EOR) fashion with or without Cisco Nexus 2000 Series Fabric Extenders. Access layer consolidation offers advantages in the form of reducing the number of adapters and the amount of discrete switching platforms needed to accommodate network and storage connectivity.

The reduction in server adapters results from the use of FCoE and Converged Network Adapters, which replace network interface cards and host bus adapters by converging network and storage traffic over a single 10G (or faster) physical connection from the server to its first hop switch, such as Cisco Nexus 5000 Series switches. Server traffic leverages a unified wire approach to consolidate network and storage traffic over the same physical cable. FCoE is also supported over Cisco Nexus 2000 Series Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches.
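As a brief illustration of the unified wire concept at the first hop, the following minimal sketch shows a server-facing FCoE configuration on a Cisco Nexus 5000 Series switch; the VSAN/VLAN numbers and interface IDs are hypothetical values chosen only for this example:

nexus5000# configure terminal
nexus5000(config)# feature fcoe
! Create a hypothetical VSAN and map an FCoE VLAN to it
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 50
nexus5000(config)# vlan 50
nexus5000(config-vlan)# fcoe vsan 50
! Server-facing unified wire: one trunk carries LAN VLANs and the FCoE VLAN
nexus5000(config)# interface ethernet 1/1
nexus5000(config-if)# switchport mode trunk
nexus5000(config-if)# switchport trunk allowed vlan 1,50
! Virtual Fibre Channel interface bound to the same physical port
nexus5000(config)# interface vfc 1
nexus5000(config-if)# bind interface ethernet 1/1
nexus5000(config-if)# no shutdown
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 50 interface vfc 1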

In larger access layer environments, Cisco Nexus 7000 Series switches can be leveraged for direct server connectivity. In this case, the interface that connects to the FCoE server is defined as a shared interface, which means the switch examines the EtherType value in the incoming Ethernet frame header to determine whether the frame carries FCoE or FIP payload. If the EtherType value matches 0x8906 (FCoE) or 0x8914 (FIP), the frame is forwarded to the storage VDC defined ahead of time on the switch. For any other EtherType value, the frame is forwarded to one of the network VDCs. Figure 12-1 depicts the logic behind storage and network frame forwarding over the shared interfaces on the Cisco Nexus 7000 Series switches, based on the EtherType field value in the Ethernet header of incoming frames from the directly attached FCoE servers.

Figure 12-1 Shared Interface Operation on Cisco Nexus 7000 Series Switches

At the time of writing this book, Cisco Nexus 7000 F3 line cards do not yet support FCoE functionality. F1 line cards support FCoE with Sup1; in case of F2 or F2e line cards, Sup2 or Sup2e are required. FCoE is also not supported over Cisco Nexus 2000 Series Fabric Extenders attached to the parent Cisco Nexus 7000 Series switches.

Example 12-1 shows a configuration example where the Ethernet interface is defined as a shared interface.

Example 12-1 Shared Interface Configuration Example

N7K(config)# vdc fcoe type storage
N7K(config-vdc)# allocate shared interface e2/1
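Extending Example 12-1 slightly, a storage VDC would typically also need FCoE VLAN ranges allocated to it. The sketch below is an assumption-laden illustration, not part of the book's example: the VLAN range 50-60, the owning network VDC name n7k-lan, and the verification step are all hypothetical.

! Illustrative sketch: allocate a hypothetical FCoE VLAN range to the storage VDC
N7K(config)# vdc fcoe type storage
N7K(config-vdc)# allocate fcoe-vlan-range 50-60 from vdcs n7k-lan
N7K(config-vdc)# allocate shared interface e2/1
! Verify which VDC owns which interfaces
N7K# show vdc membership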

The reduction in discrete switching platforms results from the use of Cisco Nexus 5000/7000 Series switches participating in the multihop Unified Fabric topology while allowing connectivity to the traditional Fibre Channel SAN fabrics, which have not yet converged into the Unified Fabric. Cisco Nexus 5000 Series switches leverage native Fibre Channel interfaces or the unified ports that are configured for native Fibre Channel operation. At the time of writing this book, Cisco Nexus 7000 Series switches do not currently support native Fibre Channel connectivity.

Aggregation Layer FCoE Multihop
The next level of consolidation occurs at the aggregation layer, where aggregation layer switches process both network and storage traffic coming from the access layer. Aggregation layer consolidation is typically achieved by leveraging a separate set of cables between the access layer and aggregation layer Unified Fabric switches. A separate set of cables assists in operational transition from disparate fabrics to the Unified Fabric; however, this option is not always optimal in a scale-out Unified Fabric deployment. In this case, access layer and aggregation layer switches work in tandem to deliver a scale-out Unified Fabric solution. To support multihop FCoE behavior, access layer switches can employ one of the three modes of operation: FIP snooping, N_Port Virtualization (NPV), or full Fibre Channel switching. Each one comes with its own set of characteristics, which we review next.

FIP Snooping
In a traditional Fibre Channel Storage-Area Network (SAN), nodes (ENodes) that want to communicate through the fabric must log in to the fabric first. Because Fibre Channel links are point-to-point links, the Fibre Channel switch has complete control of the traffic received from the ENodes into the fabric, including Fabric Login (FLOGI) messages. This prevents erroneous or malicious behavior resulting from spoofing Fibre Channel addressing. With FCoE, physical point-to-point relationships between the ENodes and FCoE Fibre Channel Forwarders are replaced by the FIP protocol, which enables the ENodes and the FCoE Fibre Channel Forwarders to be separated by several DCB-enabled Ethernet segments. It is still possible to make sure that each ENode communicates with the FCoE Fibre Channel Forwarder as if it were physically connected to it, while building logical point-to-point links between the two.

FIP snooping bridges (FSB) are lossless Ethernet switches typically deployed at the access layer of the Unified Fabric environment that are capable of inspecting, or "snooping," FIP messages. After an FIP snooping bridge sees a CNA log in to the fabric through a specific FCoE Fibre Channel Forwarder, it dynamically creates an access control list to guarantee that the communication between that CNA and the FCoE Fibre Channel Forwarder remains point-to-point, preventing man-in-the-middle attacks or erroneous operation. The FIP snooping feature helps secure the logical point-to-point relationships between ENodes and FCoE Fibre Channel Forwarders, which prevents man-in-the-middle attacks where FCF-MAC (the MAC address of the FCoE Fibre Channel Forwarder) spoofing can occur. This is a particularly important functionality in a shared medium like Ethernet to make sure that rogue devices do not insert themselves into the fabric pretending to be FCoE Fibre Channel Forwarders. Figure 12-2 depicts FIP snooping behavior.
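To make the dynamically created filter concrete, the following pseudo-configuration sketches the kind of access control an FSB installs automatically after observing a successful fabric login. It is illustrative only: the MAC addresses are hypothetical (an FPMA assigned to the ENode and the FCF-MAC of the forwarder), and a real FSB builds this state itself rather than requiring an administrator to configure it.

! Illustrative only; an FSB programs equivalent filtering automatically.
! Hypothetical addresses: ENode FPMA 0efc.0001.0001, FCF-MAC 000d.ecaa.bbcc
mac access-list FSB-ENODE-FILTER
  ! Allow FCoE frames (EtherType 0x8906) only between this ENode and its FCF
  permit 0efc.0001.0001 0000.0000.0000 000d.ecaa.bbcc 0000.0000.0000 0x8906
  ! Block FCoE frames sourced from this ENode toward any other destination
  deny 0efc.0001.0001 0000.0000.0000 any 0x8906

In practice, the FSB similarly constrains FIP frames (EtherType 0x8914) so that only discovered FCoE Fibre Channel Forwarders can source FCF advertisements toward the ENodes.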