About This eBook
ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many
features varies across reading devices and applications. Use your device or app settings to customize
the presentation to your liking. Settings that you can customize often include font, font size, single or
double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For
additional information about the settings and features on your reading device or app, visit the device
manufacturer’s Web site.
Many titles include programming code or configuration examples. To optimize the presentation of
these elements, view the eBook in single-column, landscape mode and adjust the font size to the
smallest setting. In addition to presenting code and configurations in the reflowable text format, we
have included images of the code that mimic the presentation found in the print book; therefore, where
the reflowable format may compromise the presentation of the code listing, you will see a “Click here
to view code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.

CCNA Data Center
DCICT 640-916 Official Cert Guide

NAVAID SHAMSEE, CCIE No. 12625
DAVID KLEBANOV, CCIE No. 13791
HESHAM FAYED, CCIE No. 9303
AHMED AFROSE
OZDEN KARAKOK, CCIE No. 6331

Cisco Press
800 East 96th Street
Indianapolis, IN 46240

CCNA Data Center DCICT 640-916 Official Cert Guide
Navaid Shamsee, David Klebanov, Hesham Fayed, Ahmed Afrose, Ozden Karakok
Copyright © 2015 Pearson Education, Inc.
Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, recording, or by any information storage
and retrieval system, without written permission from the publisher, except for the inclusion of brief
quotations in a review.
Printed in the United States of America
First Printing February 2015
Library of Congress Control Number: 2015930189
ISBN-13: 978-1-58714-422-6
ISBN-10: 1-58714-422-0

Warning and Disclaimer
This book is designed to provide information about the 640-916 DCICT exam for CCNA Data Center
certification. Every effort has been made to make this book as complete and as accurate as possible,
but no warranty or fitness is implied.
The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc.
shall have neither liability nor responsibility to any person or entity with respect to any loss or
damages arising from the information contained in this book or from the use of the discs or programs
that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco
Systems, Inc.

Trademark Acknowledgements
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this
information. Use of a term in this book should not be regarded as affecting the validity of any
trademark or service mark.

Special Sales
For information about buying this title in bulk quantities, or for special sales opportunities (which
may include electronic versions; custom cover designs; and content particular to your business,
training goals, marketing focus, or branding interests), please contact our corporate sales department
at corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact international@pearsoned.com.

Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each
book is crafted with care and precision, undergoing rigorous development that involves the unique
expertise of members from the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any comments regarding how
we could improve the quality of this book, or otherwise alter it to better suit your needs, you can
contact us through email at feedback@ciscopress.com. Please make sure to include the book title and
ISBN in your message.
We greatly appreciate your assistance.
Publisher: Paul Boger
Associate Publisher: Dave Dusthimer
Business Operation Manager, Cisco Press: Jan Cornelssen
Executive Editor: Mary Beth Ray
Managing Editor: Sandra Schroeder
Development Editor: Ellie Bru
Senior Project Editor: Tonya Simpson
Copy Editor: Barbara Hacha
Technical Editor: Frank Dagenhardt
Editorial Assistant: Vanessa Evans
Cover Designer: Mark Shirar
Composition: Mary Sudul
Indexer: Ken Johnson
Proofreader: Debbie Williams

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA
Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.
Singapore
Europe Headquarters
Cisco Systems International BV
Amsterdam, The Netherlands
Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed
on the Cisco Website at www.cisco.com/go/offices.
CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco Nexus, Cisco
StadiumVision, Cisco TelePresence, Cisco WebEx, DCE, and Welcome to the Human Network are
trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks;
and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP,
CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo,
Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me
Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort,
the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound,
MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective
owners. The use of the word partner does not imply a partnership relationship between Cisco and any
other company. (0812R)

About the Authors
Navaid Shamsee, CCIE No. 12625, is a senior solutions architect in the Cisco Services organization.
He holds a master’s degree in telecommunication and a bachelor’s degree in electrical engineering.
He also holds a triple CCIE in routing and switching, service provider, and data center technologies.
Navaid has extensive experience in designing and implementing many large-scale enterprise and
service provider data centers. At Cisco, Navaid focuses on the security of data center, cloud, and
software-defined networking technologies. You can reach Navaid on Twitter: @NavaidShamsee.
David Klebanov, CCIE No. 13791 (Routing and Switching), is a technical solutions architect with
Cisco Systems. David has more than 15 years of diverse industry experience architecting and
deploying complex network environments. In his work, David influences strategic development of the
industry-leading data center switching platforms, which lay the foundation for the next generation of
data center fabrics. David also takes great pride in speaking at industry events, releasing
publications, and working on patents. You can reach David on Twitter: @DavidKlebanov.
Hesham Fayed, CCIE No. 9303 (Routing and Switching/Data Center), is a consulting systems
engineer for data center and virtualization based in California. Hesham has been with Cisco for more
than 9 years and has 18 years of experience in the computer industry, working with service providers
and large enterprises. His main focus is working with customers in the western region of the United
States to address their challenges by designing end-to-end data center architectures.
Ahmed Afrose is a solutions architect with the Cisco Cloud and IT Transformation (CITT) services
organization. He is responsible for providing architectural design guidance and leading complex
multitech service deliveries. Furthermore, he is involved in demonstrating the Cisco value
propositions in cloud software, application automation, software-defined data centers, and Cisco
Unified Computing System (UCS). He is experienced in server operating systems and virtualization
technologies and holds various certifications. Ahmed has a bachelor’s degree in Information Systems.
He started his career with Sun Microsystems-based technologies and has 15 years of diverse
experience in the industry. He was also directly responsible for setting up Cisco UCS Advanced
Services delivery capabilities while evangelizing the product in the region.
Ozden Karakok, CCIE No. 6331, is a technical leader on the data center products and
technologies team in the Technical Assistance Center (TAC). Ozden has been with Cisco Systems for
15 years and specializes in storage area and data center networks. Prior to joining Cisco, Ozden spent
five years working for a number of Cisco’s large customers in various telecommunication roles.
Ozden is a Cisco Certified Internetwork Expert in routing and switching, SNA/IP, and storage. She
holds VCP and ITIL certifications and is a frequent speaker at Cisco and data center events. She
holds a degree in computer engineering from Istanbul Bogazici University. Currently, she works on
Application Centric Infrastructure (ACI) and enjoys being a mother of two wonderful kids.

About the Technical Reviewer
Frank Dagenhardt, CCIE No. 42081, is a systems engineer for Cisco, focusing primarily on data
center architectures in commercial select accounts. Frank has more than 19 years of experience in
information technology and holds certifications from HP, Microsoft, Citrix, and Cisco. A Cisco
veteran of more than eight years, he works with customers daily, designing, implementing, and
supporting end-to-end architectures and solutions, such as workload mobility, business continuity,
and cloud/orchestration services. In recent months, he has been focusing on Tier 1 applications,
including VDI, Oracle, and SAP. He presented on the topic of UCS/VDI at Cisco Live 2013. When
not thinking about data center topics, Frank can be found backpacking, kayaking, or at home spending
time with his wife Sansi and sons Frank and Everett.

Dedications
Navaid Shamsee: To my parents, for their guidance and prayers. To my wife, Hareem, for her love
and support, and to my children, Ahasn, Rida, and Maria.
David Klebanov: This book is dedicated to my beautiful wife, Tanya, and our two wonderful
daughters, Lia and Maya. You fill my life with great joy and make everything worth doing! In the
words of a song: “Everything I do, I do it for you.” This book is also dedicated to all of you who
relentlessly pursue the path of becoming Cisco-certified professionals. Through your dedication you
build your own success. You are the industry leaders!
Hesham Fayed: This book is dedicated to my lovely wife, Eman, and my beautiful children, Ali,
Laila, Hamza, Fayrouz, and Farouk. Thank you for your understanding and support, especially when I
was working on the book during our vacation to meet deadlines. Without your support and
encouragement I would never have been able to finish this book. I can’t forget to acknowledge my
parents; your guidance, education, and making me always strive to be better is what helped in my
journey.
Ahmed Afrose: I dedicate this book to my parents, Hilal and Faiziya. Without them, none of this
would have been possible, and also to those people who are deeply curious and advocate
self-advancement.
Ozden Karakok: To my loving husband, Tom, for his endless support, encouragement, and love.
Merci beaucoup, Askim.
To Remi and Mira, for being the most incredible miracles of my life and being my number one source
of happiness.
To my wonderful parents, who supported me in every chapter of my life and are an inspiration for
life.
To my awesome sisters, Gulden and Cigdem, for their unconditional support and loving me just the
way I am.
To the memory of Steve Ritchie, for being the best mentor ever.

Acknowledgements
To my co-authors, Afrose, David, Hesham, and Ozden: Thank you for your hard work and dedication.
It was my pleasure and honor to work with such a great team of co-authors. Without your support, this
book would not have been possible.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback and suggestions. Your input helped us improve the quality and accuracy of the content.
To Mary Beth Ray, Ellie Bru, Tonya Simpson, and the Cisco Press team: A big thank you to the entire
Cisco Press team for all their support in getting this book published. Special thanks to Mary Beth and
Ellie for keeping us on track and guiding us in the journey of writing this book.
—Navaid Shamsee
My deepest gratitude goes to all the people who have shared my journey over the years and who have
inspired me to always strive for more. I have been truly blessed to have worked with the most
talented group of individuals.
—David Klebanov
I’d like to thank my co-authors Navaid, Afrose, Ozden, and David for working as a team to complete
this project. A special thanks goes to Navaid for asking me to be a part of this book. Afrose, thank
you for stepping in to help me complete Chapter 18 so we could meet the deadline; really, thank you.
Mary Beth and Ellie, thank you both for your patience and support through my first publication. It has
been an honor working with you both, and I have learned a lot during this process.
I want to thank my family for their support and patience while I was working on this book. I know it
was not easy, especially on holidays and weekends, when I had to stay in to work on the book rather
than taking you out.
—Hesham Fayed
The creation of this book, my first publication, has certainly been a tremendous experience, a source
of inspiration, and a whirlwind of activity. I do hope to impart more knowledge and experience in the
future on different pertinent topics in this ever-changing world of data center technologies and
practices.
Navaid Shamsee has been a good friend and colleague. I am so incredibly thankful to him for this
awesome opportunity. He has helped me achieve a milestone in my career by offering me the
opportunity to be associated with this publication and the world-renowned Cisco Press.
I’m humbled by this experienced and talented team of co-authors: Navaid, David, Hesham, and
Ozden. I enjoyed working with a team of like-minded professionals.
I am also thankful to all our professional editors, especially Mary Beth Ray and Ellie Bru for their
patience and guidance every step of the way. A big thank you to all the folks involved in production,
publishing, and bringing this book to the shelves.
I would also like to thank our technical reviewer, Frank Dagenhardt, for his keenness and attention to
detail; his feedback helped gauge depth, ensure consistency, and improve the overall quality of the
material for the readers.
And last, but not least, a special thanks to my girlfriend, Anna, for understanding all the hours I
needed to spend at my keyboard, almost ignoring her. You still cared, kept me energized, and
encouraged me with a smile. You were always in my mind; thanks for the support, honey.
—Ahmed Afrose
This book would never have become a reality without the help, support, and advice of a great number
of people.
I would like to thank my great co-authors Navaid, David, Afrose, and Hesham. Thank you for your
excellent collaboration, hard work, and priceless time. I truly enjoy working with each one of you. I
really appreciate Navaid taking the lead and being the glue of this diverse team. It was a great
pleasure and honor working with such professional engineers. It would have been impossible to
finish this book without your support.
To our technical reviewer, Frank Dagenhardt: Thank you for providing us with your valuable
feedback, suggestions, hard work, and quick turnaround. Your excellent input helped us improve the
quality and accuracy of the content. It was a great pleasure for all of us to work with you.
To Mary Beth, Ellie, and the Cisco Press team: A big thank you to the entire Cisco Press team for all
their support in getting this book published. Special thanks to Mary Beth and Ellie for keeping us on
track and guiding us in the journey of writing this book.
To the extended teams at Cisco: Thank you for responding to our emails and phone calls even during
the weekends. Thank you for being patient while our minds were in the book. Thank you for covering
the shifts. Thank you for believing in and supporting us on this journey. Thank you for the world-class
organization at Cisco.
To my colleagues Mike Frase, Paul Raytick, Mike Brown, Robert Hurst, Robert Burns, Ed Mazurek,
Matthew Winnett, Matthias Binzer, Stanislav Markovic, Serhiy Lobov, Barbara Ledda, Levent
Irkilmez, Dmitry Goloubev, Roland Ducomble, Gilles Checkroun, Walter Dey, Tom Debruyne, and
many wonderful engineers: Thank you for being supportive in different stages of my career. I learned
a lot from you.
To our Cisco customers and partners: Thank you for challenging us to be the number one IT company.
To all extended family and friends: Thank you for your patience and giving us moral support in this
long journey.
—Ozden Karakok

Contents at a Glance
Introduction
Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
Chapter 2 Cisco Nexus Product Family
Chapter 3 Virtualizing Cisco Network Devices
Chapter 4 Data Center Interconnect
Chapter 5 Management and Monitoring of Cisco Nexus Devices
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
Chapter 7 Cisco MDS Product Family
Chapter 8 Virtualizing Storage
Chapter 9 Fibre Channel Storage-Area Networking
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
Chapter 11 Data Center Bridging and FCoE
Chapter 12 Multihop Unified Fabric
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
Chapter 14 Cisco Unified Computing System Manager
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
Chapter 18 Cisco Nexus 1000V and Virtual Switching
Part V Review

Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
Chapter 20 Cisco WAAS and Application Acceleration
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Contents
Introduction
Getting Started
A Brief Perspective on the CCNA Data Center Certification Exam
Suggestions for How to Approach Your Study with This Book
This Book Is Compiled from 20 Short Read-and-Review Sessions
Practice, Practice, Practice—Did I Mention: Practice?
In Each Part of the Book You Will Hit a New Milestone
Use the Final Review Chapter to Refine Skills
Set Goals and Track Your Progress
Other Small Tasks Before Getting Started
Part I Data Center Networking
Chapter 1 Data Center Network Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Modular Approach in Network Design
The Multilayer Network Design
Access Layer
Aggregation Layer
Core Layer
Collapse Core Design
Spine and Leaf Design
Port Channel
What Is Port Channel?
Benefits of Using Port Channels
Port Channel Compatibility Requirements
Link Aggregation Control Protocol
Port Channel Modes
Configuring Port Channel
Port Channel Load Balancing
Verifying Port Channel Configuration
Virtual Port Channel
What Is Virtual Port Channel?
Benefits of Using Virtual Port Channel
Components of vPC

vPC Data Plane Operation
vPC Control Plane Operation
vPC Limitations
Configuration Steps of vPC
Verification of vPC
FabricPath
Spanning Tree Protocol
What Is FabricPath?
Benefits of FabricPath
Components of FabricPath
FabricPath Frame Format
FabricPath Control Plane
FabricPath Data Plane
Conversational Learning
FabricPath Packet Flow Example
Unknown Unicast Packet Flow
Known Unicast Packet Flow
Virtual Port Channel Plus
FabricPath Interaction with Spanning Tree
Configuring FabricPath
Verifying FabricPath
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 2 Cisco Nexus Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Describe the Cisco Nexus Product Family
Cisco Nexus 9000 Family
Cisco Nexus 9500 Family
Chassis
Supervisor Engine
System Controller
Fabric Modules
Line Cards
Power Supplies

Fan Trays
Cisco QSFP Bi-Di Technology for 40 Gbps Migration
Cisco Nexus 9300 Family
Cisco Nexus 7000 and Nexus 7700 Product Family
Cisco Nexus 7004 Series Switch Chassis
Cisco Nexus 7009 Series Switch Chassis
Cisco Nexus 7010 Series Switch Chassis
Cisco Nexus 7018 Series Switch Chassis
Cisco Nexus 7706 Series Switch Chassis
Cisco Nexus 7710 Series Switch Chassis
Cisco Nexus 7718 Series Switch Chassis
Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Cisco Nexus 7000 Series Supervisor 1 Module
Cisco Nexus 7000 Series Supervisor 2 Module
Cisco Nexus 7000 and Nexus 7700 Fabric Modules
Cisco Nexus 7000 and Nexus 7700 Licensing
Cisco Nexus 7000 and Nexus 7700 Line Cards
Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW DC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Module
Cisco Nexus 7000 Series 6.0-KW DC Power Supply Module
Cisco Nexus 6000 Product Family
Cisco Nexus 6001P and Nexus 6001T Switches and Features
Cisco Nexus 6004 Switches Features
Cisco Nexus 6000 Switches Licensing Options
Cisco Nexus 5500 and Nexus 5600 Product Family
Cisco Nexus 5548P and 5548UP Switches Features
Cisco Nexus 5596UP and 5596T Switches Features
Cisco Nexus 5500 Products Expansion Modules
Cisco Nexus 5600 Product Family
Cisco Nexus 5672UP Switch Features
Cisco Nexus 56128P Switch Features
Cisco Nexus 5600 Expansion Modules
Cisco Nexus 5500 and Nexus 5600 Licensing Options
Cisco Nexus 3000 Product Family
Cisco Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
Cisco Dynamic Fabric Automation
Summary
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 3 Virtualizing Cisco Network Devices
“Do I Know This Already?” Quiz
Foundation Topics
Describing VDCs on the Cisco Nexus 7000 Series Switch
VDC Deployment Scenarios
Horizontal Consolidation Scenarios
Vertical Consolidation Scenarios
VDCs for Service Insertion
Understanding Different Types of VDCs
Interface Allocation
VDC Administration
VDC Requirements
Verifying VDCs on the Cisco Nexus 7000 Series Switch
Describing Network Interface Virtualization
Cisco Nexus 2000 FEX Terminology
Nexus 2000 Series Fabric Extender Connectivity
VN-Tag Overview
Cisco Nexus 2000 FEX Packet Flow
Cisco Nexus 2000 FEX Port Connectivity
Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 4 Data Center Interconnect
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to Overlay Transport Virtualization
OTV Terminology

OTV Control Plane
Multicast Enabled Transport Infrastructure
Unicast-Enabled Transport Infrastructure
Data Plane for Unicast Traffic
Data Plane for Multicast Traffic
Failure Isolation
STP Isolation
Unknown Unicast Handling
ARP Optimization
OTV Multihoming
First Hop Redundancy Protocol Isolation
OTV Configuration Example with Multicast Transport
OTV Configuration Example with Unicast Transport
Summary
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 5 Management and Monitoring of Cisco Nexus Devices
“Do I Know This Already?” Quiz
Foundation Topics
Operational Planes of a Nexus Switch
Data Plane
Store-and-Forward Switching
Cut-Through Switching
Nexus 5500 Data Plane Architecture
Control Plane
Nexus 5500 Control Plane Architecture
Control Plane Policing
Control Plane Analyzer
Management Plane
Nexus Management and Monitoring Features
Out-of-Band Management
Console Port
Connectivity Management Processor
Management Port (mgmt0)
In-band Management

Role-Based Access Control
User Roles
Rules
User Role Policies
RBAC Characteristics and Guidelines
Privilege Levels
Simple Network Management Protocol
SNMP Notifications
SNMPv3
Remote Monitoring
RMON Alarms
RMON Events
Syslog
Embedded Event Manager
Event Statements
Action Statements
Policies
Generic Online Diagnostics (GOLD)
Smart Call Home
Nexus Device Configuration and Verification
Perform Initial Setup
In-Service Software Upgrade
ISSU on the Cisco Nexus 5000
ISSU on the Cisco Nexus 7000
Important CLI Commands
Command-Line Help
Automatic Command Completion
Entering and Exiting Configuration Context
Display Current Configuration
Command Piping and Parsing
Feature Command
Verifying Modules
Verifying Interfaces
Viewing Logs
Viewing NVRAM logs
Exam Preparation Tasks
Review All Key Topics

Complete Tables and Lists from Memory
Define Key Terms
Part I Review
Part II Data Center Storage-Area Networking
Chapter 6 Data Center Storage Architecture
“Do I Know This Already?” Quiz
Foundation Topics
What Is a Storage Device?
What Is a Storage-Area Network?
How to Access a Storage Device
Block-Level Protocols
File-Level Protocols
Putting It All Together
Storage Architectures
Scale-up and Scale-out Storage
Tiered Storage
SAN Design
SAN Design Considerations
1. Port Density and Topology Requirements
2. Device Performance and Oversubscription Ratios
3. Traffic Management
4. Fault Isolation
5. Control Plane Scalability
SAN Topologies
Fibre Channel
FC-0: Physical Interface
FC-1: Encode/Decode
Encoding and Decoding
Ordered Sets
FC-2: Framing Protocol
Fibre Channel Service Classes
Fibre Channel Flow Control
FC-3: Common Services
Fibre Channel Addressing
Switched Fabric Address Space
Fibre Channel Link Services

Fibre Channel Fabric Services
FC-4: ULPs—Application Protocols
Fibre Channel—Standard Port Types
Virtual Storage-Area Network
Dynamic Port VSAN Membership
Inter-VSAN Routing
Fibre Channel Zoning and LUN Masking
Device Aliases Versus Zone-Based (FC) Aliases
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 7 Cisco MDS Product Family
“Do I Know This Already?” Quiz
Foundation Topics
Once Upon a Time There Was a Company Called Andiamo
The Secret Sauce of Cisco MDS: Architecture
A Day in the Life of a Fibre Channel Frame in MDS 9000
Lossless Networkwide In-Order Delivery Guarantee
Cisco MDS Software and Storage Services
Flexibility and Scalability
Common Software Across All Platforms
Multiprotocol Support
Virtual SANs
Intelligent Fabric Applications
Network-Assisted Applications
Cisco Data Mobility Manager
I/O Accelerator
XRC Acceleration
Network Security
Switch and Host Authentication
IP Security for FCIP and iSCSI
Cisco TrustSec Fibre Channel Link Encryption
Role-Based Access Control
Port Security and Fabric Binding
Zoning

Availability
Nondisruptive Software Upgrades
Stateful Process Failover
ISL Resiliency Using Port Channels
iSCSI, FCIP, and Management-Interface High Availability
Port Tracking for Resilient SAN Extension
Manageability
Open APIs
Configuration and Software-Image Management
N-Port Virtualization
Autolearn for Network Security Configuration
FlexAttach
Cisco Data Center Network Manager Server Federation
Network Boot for iSCSI Hosts
Internet Storage Name Service
Proxy iSCSI Initiator
iSCSI Server Load Balancing
IPv6
Traffic Management
Quality of Service
Extended Credits
Virtual Output Queuing
Fibre Channel Port Rate Limiting
Load Balancing of Port Channel Traffic
iSCSI and SAN Extension Performance Enhancements
FCIP Compression
FCIP Tape Acceleration
Serviceability, Troubleshooting, and Diagnostics
Cisco Switched Port Analyzer and Cisco Fabric Analyzer
SCSI Flow Statistics
Fibre Channel Ping and Fibre Channel Traceroute
Call Home
System Log
Other Serviceability Features
Cisco MDS 9000 Family Software License Packages
Cisco MDS Multilayer Directors
Cisco MDS 9700

Cisco MDS 9500
Cisco MDS 9000 Series Multilayer Components
Cisco MDS 9700–MDS 9500 Supervisor and Crossbar Switching Modules
Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
Cisco MDS 9000 Family Pluggable Transceivers
Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9148
Cisco MDS 9148S
Cisco MDS 9222i
Cisco MDS 9250i
Cisco MDS Fibre Channel Blade Switches
Cisco Prime Data Center Network Manager
Cisco DCNM SAN Fundamentals Overview
DCNM-SAN Server
Licensing Requirements for Cisco DCNM-SAN Server
Device Manager
DCNM-SAN Web Client
Performance Manager
Authentication in DCNM-SAN Client
Cisco Traffic Analyzer
Network Monitoring
Performance Monitoring
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 8 Virtualizing Storage
“Do I Know This Already?” Quiz
Foundation Topics

What Is Storage Virtualization?
How Storage Virtualization Works
Why Storage Virtualization?
What Is Being Virtualized?
Block Virtualization—RAID
Virtualizing Logical Unit Numbers
Tape Storage Virtualization
Virtualizing Storage-Area Networks
N-Port ID Virtualization (NPIV)
N-Port Virtualizer (NPV)
Virtualizing File Systems
File/Record Virtualization
Where Does the Storage Virtualization Occur?
Host-Based Virtualization
Network-Based (Fabric-Based) Virtualization
Array-Based Virtualization
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 9 Fibre Channel Storage-Area Networking
“Do I Know This Already?” Quiz
Foundation Topics
Where Do I Start?
The Cisco MDS NX-OS Setup Utility—Back to Basics
The Power On Auto Provisioning
Licensing Cisco MDS 9000 Family NX-OS Software Features
License Installation
Verifying the License Configuration
On-Demand Port Activation Licensing
Making a Port Eligible for a License
Acquiring a License for a Port
Moving Licenses Among Ports
Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Upgrading to Cisco NX-OS on an MDS 9000 Series Switch

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
Configuring Interfaces
Graceful Shutdown
Port Administrative Speeds
Frame Encapsulation
Bit Error Thresholds
Local Switching
Dedicated and Shared Rate Modes
Slow Drain Device Detection and Congestion Avoidance
Management Interfaces
VSAN Interfaces
Verify Initiator and Target Fabric Login
VSAN Configuration
Port VSAN Membership
Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch
About Smart Zoning
About LUN Zoning and Assigning LUNs to Storage Subsystems
About Enhanced Zoning
VSAN Trunking and Setting Up ISL Port
Trunk Modes
Port Channel Modes
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part II Review
Part III Data Center Unified Fabric
Chapter 10 Unified Fabric Overview
“Do I Know This Already?” Quiz
Foundation Topics
Challenges of Today’s Data Center Networks
Cisco Unified Fabric Principles
Convergence of Network and Storage
Scalability and Growth
Security and Intelligence

Inter-Data Center Unified Fabric
Fibre Channel over IP
Overlay Transport Virtualization
Locator ID Separation Protocol
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 11 Data Center Bridging and FCoE
“Do I Know This Already?” Quiz
Foundation Topics
Data Center Bridging
Priority Flow Control
Enhanced Transmission Selection
Data Center Bridging Exchange
Quantized Congestion Notification
Fibre Channel Over Ethernet
FCoE Logical Endpoint
FCoE Port Types
FCoE ENodes
Fibre Channel Forwarder
FCoE Encapsulation
FCoE Initialization Protocol
Converged Network Adapter
FCoE Speed
FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
Optical Cabling
Transceiver Modules
SFP Transceiver Modules
SFP+ Transceiver Modules
QSFP Transceiver Modules
Unsupported Transceiver Modules
Delivering FCoE Using Cisco Fabric Extender Architecture
FCoE and Cisco Nexus 2000 Series Fabric Extenders
FCoE and Straight-Through Fabric Extender Attachment Topology
FCoE and vPC Fabric Extender Attachment Topology
FCoE and Server Fabric Extenders

Adapter-FEX
Virtual Ethernet Interfaces
FCoE and Virtual Fibre Channel Interfaces
Virtual Interface Redundancy
FCoE and Cascading Fabric Extender Topologies
Adapter-FEX and FCoE Fabric Extenders in vPC Topology
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 12 Multihop Unified Fabric
“Do I Know This Already?” Quiz
Foundation Topics
First Hop Access Layer Consolidation
Aggregation Layer FCoE Multihop
FIP Snooping
FCoE-NPV
Full Fibre Channel Switching
Dedicated Wire Versus Unified Wire
Cisco FabricPath FCoE Multihop
Dynamic FCoE FabricPath Topology
Dynamic FCoE Virtual Links
Dynamic Virtual Fibre Channel Interfaces
Redundancy and High Availability
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part III Review
Part IV Cisco Unified Computing
Chapter 13 Cisco Unified Computing System Architecture
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Server Computing in a Nutshell

What Is a Socket?
What Is a Core?
What Is Hyperthreading?
Understanding Server Processor Numbering
Value of Cisco UCS in the Data Center
One Unified System
Unified Fabric
Unified Management and Operations
Cisco UCS Stateless Computing
Intelligent Automation
Cloud-Ready Infrastructure
Describing the Cisco UCS Product Family
Cisco UCS Computer Hardware Naming
Cisco UCS Fabric Interconnects
What Is a Unified Port?
Cisco UCS Fabric Interconnect—Expansion Modules
Cisco UCS 5108 Blade Server Chassis
Cisco UCS 5108 Blade Server Chassis—Power Supply and Redundancy Modes
Cisco UCS I/O Modules (FEX)
Cisco UCS B-series Blade Servers
Cisco UCS B-series Best Practices for Populating DRAM
Cisco Adapters (Mezzanine Cards) for UCS B-series Blade Servers
Cisco Converged Network Adapters (CNA)
Cisco Virtual Interface Cards
Cisco UCS Port Expander for VIC 1240
What Is Cisco Adapter FEX?
What Is Cisco Virtual Machine-Fabric Extender (VM-FEX)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS C-series Rackmount Servers
Cisco UCS C-series Best Practices for Populating DRAM
Cisco Adapters for UCS C-series Rackmount Servers
Cisco UCS C-series RAID Adapter Options
Cisco UCS C-series Virtual Interface Cards and OEM CNA Adapters
What Is SR-IOV (Single Root-IO Virtualization)?
Cisco UCS C-series Network Interface Card
Cisco UCS C-series Host Bus Adapter
What Is N_Port ID Virtualization (NPIV)?
Cisco UCS Storage Accelerator Adapters
Cisco UCS E-series Servers
Cisco UCS “All-in-One”
Cisco UCS Invicta Series—Solid State Storage Systems
Cisco UCS Software
Cisco UCS Manager
Cisco UCS Central
Cisco UCS Director
Cisco UCS Platform Emulator
Cisco goUCS Automation Tool
Cisco UCS Connectivity Architecture
Cisco UCS 5108 Chassis to Fabric Interconnect Physical Connectivity
Cisco UCS C-series Rackmount Server Physical Connectivity
Cisco UCS 6200 Fabric Interconnect to LAN, SAN Connectivity
Ethernet Switching Mode
Fibre Channel Switching Mode
Cisco UCS I/O Module and Fabric Interconnect Connectivity
Cisco UCS I/O Module Architecture
Fabric and Adapter Port Channel Architecture
Cisco Integrated Management Controller Architecture
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 14 Cisco Unified Computing System Manager
“Do I Know This Already?” Quiz
Foundation Topics
Cabling a Cisco UCS Fabric Interconnect HA Cluster
Cisco UCS Fabric Interconnect HA Architecture
Cisco UCS Fabric Interconnect Cluster Connectivity
Initial Setup Script for Primary Fabric Interconnect
Initial Setup Script for Subordinate Secondary Fabric Interconnect
Verify Cisco UCS Fabric Interconnect Cluster Setup
Changing Cluster Addressing via Command-Line Interface
Command Modes
Changing Cluster IP Addresses via Command-Line Interface
Changing Static IP Addresses via Command-Line Interface
Cisco UCS Manager Operations
Cisco UCS Manager Functions
Cisco UCS Manager GUI Layout
Cisco UCS Manager: Navigation Pane—Tabs
Equipment Tab
Servers Tab
LAN Tab
SAN Tab
VM Tab
Admin Tab
Device Discovery in Cisco UCS Manager
Basic Port Personalities in the Cisco UCS Fabric Interconnect
Verifying Device Discovery in Cisco UCS Manager
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 15 Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Service Profiles
Cisco UCS Hardware Abstraction and Stateless Computing
Contents of a Cisco UCS Service Profile
Other Benefits of Cisco UCS Service Profiles
Ease of Incremental Deployments
Plan and Preconfigure Cisco UCS Solutions
Ease of Server Replacement
Right-Sized Server Deployments
Cisco UCS Templates
Organizational Awareness with Service Profile Templates
Cisco UCS Service Profile Templates
Cisco UCS vNIC Templates
Cisco UCS vHBA Templates
Cisco UCS Logical Resource Pools
UUID Identity Pools
MAC Address Identity Pools
WWN Address Identity Pools
WWNN Pools
WWPN Pools
Cisco UCS Physical Resource Pools
Server Pools
Cisco UCS Server Qualification and Pool Policies
Server Auto-Configuration Policies
Creation of Policies for Cisco UCS Service Profiles and Service Profile Templates
Commonly Used Policies Explained
Equipment Tab—Global Policies
BIOS Policy
Boot Policy
Local Disk Configuration Policy
Maintenance Policy
Scrub Policy
Host Firmware Packages
Adapter Policy
Cisco UCS Chassis and Blade Power Capping
What Is Power Capping?
Cisco UCS Power Capping Strategies
Static Power Capping (Explicit Blade Power Cap)
Dynamic Power Capping (Implicit Chassis Power Cap)
Power Cap Groups (Explicit Group)
Utilizing Cisco UCS Service Profile Templates
Creating Cisco UCS Service Profile Templates
Modifying a Cisco UCS Service Profile Template
Utilizing Cisco UCS Service Profile Templates for Different Workloads
Creating Service Profiles from Templates
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 16 Administration, Management, and Monitoring of Cisco Unified Computing System
“Do I Know This Already?” Quiz
Foundation Topics
Cisco UCS Administration
Organization and Locales
Role-Based Access Control
Authentication Methods
LDAP Providers
LDAP Provider Groups
Domain, Realm, and Default Authentication Settings
Communication Management
UCS Firmware, Backup, and License Management
Firmware Terminology
Firmware Definitions
Firmware Update Guidelines
Host Firmware Packages
Cross-Version Firmware Support
Firmware Auto Install
UCS Backups
Backup Automation
License Management
Cisco UCS Management
Operational Planes
In-Band Versus Out-of-Band Management
Remote Accessibility
Direct KVM Access
Advanced UCS Management
Cisco UCS Manager XML API
goUCS Automation Toolkit
Using the Python SDK
Multi-UCS Management
Cisco UCS Monitoring
Cisco UCS System Logs
Fault, Event, and Audit Logs (Syslogs)
System Event Logs
UCS SNMP
UCS Fault Suppression
Collection and Threshold Policies
Call Home and Smart Call Home
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Part IV Review
Part V Data Center Server Virtualization
Chapter 17 Server Virtualization Solutions
“Do I Know This Already?” Quiz
Foundation Topics
Brief History of Server Virtualization
Server Virtualization Components
Hypervisor
Virtual Machines
Virtual Switching
Management Tools
Types of Server Virtualization
Full Virtualization
Paravirtualization
Operating System Virtualization
Server Virtualization Benefits and Challenges
Server Virtualization Benefits
Server Virtualization Challenges
Reference List
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Chapter 18 Cisco Nexus 1000V and Virtual Switching
“Do I Know This Already?” Quiz
Foundation Topics
Evolution of Virtual Switching
Before Server Virtualization
Server Virtualization with Static VMware vSwitch
Virtual Network Components
Virtual Access Layer
Standard VMware vSwitch Overview
Standard VMware vSwitch Operations
Standard VMware vSwitch Configuration
VMware vDS Overview
VMware vDS Configuration
VMware vDS Enhancements
VMware vSwitch and vDS
Cisco Nexus 1000V Virtual Networking Solution
Cisco Nexus 1000V System Overview
Cisco Nexus 1000V Salient Features and Benefits
Cisco Nexus 1000V Series Virtual Switch Architecture
Cisco Nexus 1000V Virtual Supervisory Module
Cisco Nexus 1000V Virtual Ethernet Module
Cisco Nexus 1000V Component Communication
Cisco Nexus 1000V Management Communication
Cisco Nexus 1000V Port Profiles
Types of Port Profiles in Cisco Nexus 1000V
Cisco Nexus 1000V Administrator View and Roles
Cisco Nexus 1000V Verifying Initial Configuration
Cisco Nexus 1000V VSM Installation Methods
Initial VSM Configuration Verification
Verifying VMware vCenter Connectivity
Verify Nexus 1000V Module Status
Cisco Nexus 1000V VEM Installation Methods
Initial VEM Status on ESX or ESXi Host
Verifying VSM Connectivity from vCenter Server
Verifying VEM Agent Running
Verifying VEM Uplink Interface Status
Verifying VEM Parameters with VSM Configuration
Validating VM Port Groups and Port Profiles
Verifying Port Profile and Groups
Key New Technologies Integrated with Cisco Nexus 1000V
What Is the Difference Between VN-Link and VN-Tag?
What Is VXLAN?
What Is vPath Technology?
Reference List
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part V Review
Part VI Data Center Network Services
Chapter 19 Cisco ACE and GSS
“Do I Know This Already?” Quiz
Foundation Topics
Server Load-Balancing Concepts
What Is Server Load Balancing?
Benefits of Server Load Balancing
Methods of Server Load Balancing
ACE Family Products
ACE Module
Control Plane
Data Plane
ACE 4710 Appliances
Global Site Selector
Understanding the Load-Balancing Solution Using ACE
Layer 3 and Layer 4 Load Balancing
Layer 7 Load Balancing
TCP Server Reuse
Real Server
Server Farm
Server Health Monitoring
In-band Health Monitoring
Out-of-band Health Monitoring (Probes)
Load-Balancing Algorithms (Predictor)
Stickiness
Sticky Groups
Sticky Methods
Sticky Table
Traffic Policies for Server Load Balancing
Class Maps
Policy Map
Configuration Steps
ACE Deployment Modes
Bridge Mode
Routed Mode
One-Arm Mode
ACE Virtualization
Contexts
Domains
Role-Based Access Control
Resource Classes
ACE High Availability
Management of ACE
Remote Management
Simple Network Management Protocol
XML Interface
ACE Device Manager
ANM
Global Load Balancing Using GSS and ACE
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Chapter 20 Cisco WAAS and Application Acceleration
“Do I Know This Already?” Quiz
Foundation Topics
Wide-Area Network Optimization
Characteristics of Wide-Area Networks
Latency
Bandwidth
Packet Loss
Poor Link Utilization
What Is WAAS?
WAAS Product Family
WAVE Appliances
Router-Integrated Form Factors
Virtual Appliances (vWAAS)
WAAS Express (WAASx)
WAAS Mobile
WAAS Central Manager
WAAS Architecture
WAAS Solution Overview
Key Features and Benefits of Cisco WAAS
Cisco WAAS WAN Optimization Techniques
Transport Flow Optimization
TCP Window Scaling
TCP Initial Window Size Maximization
Increased Buffering
Selective Acknowledgement
Binary Increase Congestion TCP
Compression
Data Redundancy Elimination
LZ Compression
Application-Specific Acceleration
Virtualization
Exam Preparation Tasks
Review All Key Topics
Complete Tables and Lists from Memory
Define Key Terms
Part VI Review
Part VII Final Preparation
Chapter 21 Final Review
Advice About the Exam Event
Learn the Question Types Using the Cisco Certification Exam Tutorial
Think About Your Time Budget Versus the Number of Questions
A Suggested Time-Check Method
Miscellaneous Pre-Exam Suggestions
Exam-Day Advice
Exam Review
Take Practice Exams
Practicing Taking the DCICT Exam
Advice on How to Answer Exam Questions
Taking Other Practice Exams
Find Knowledge Gaps Through Question Review
Practice Hands-On CLI Skills
Other Study Tasks
Final Thoughts
Part VIII Appendices
Appendix A Answers to the “Do I Know This Already?” Questions
Glossary
Index
On the CD:
Appendix B Memory Tables
Appendix C Memory Tables Answer Key
Appendix D Study Planner

Icons Used in This Book

Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:
Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
Italic indicates arguments for which you supply actual values.
Vertical bars (|) separate alternative, mutually exclusive elements.
Square brackets ([ ]) indicate an optional element.
Braces ({ }) indicate a required choice.
Braces within brackets ([{ }]) indicate a required choice within an optional element.
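As a simple illustration (a generic example for this section only, not a command taken from any particular chapter), consider the following two syntax entries:

switchport mode {access | trunk}
show interface [ethernet slot/port]

In the first entry, switchport mode is entered literally as shown, and the braces with the vertical bar require a choice between access and trunk. In the second entry, the square brackets mark ethernet slot/port as optional, and slot/port is an argument for which you supply an actual value, such as 1/1.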

Introduction

About the Exam

Congratulations! If you are reading far enough to look at this book’s Introduction, you’ve probably already decided to pursue your Cisco CCNA Data Center certification. Cisco dominates the networking marketplace, and after a few short years of entering the server marketplace, Cisco has achieved significant market share and has become one of the primary vendors for server hardware. If you want to succeed as a technical person in the networking industry in general, and in data centers in particular, you need to know Cisco. Getting your CCNA Data Center certification is a great first step in building your skills and becoming a recognized authority in the data center field.

Exams That Help You Achieve CCNA Data Center Certification

Cisco CCNA Data Center is an entry-level Cisco data center certification that is also a prerequisite for other Cisco Data Center certifications. CCNA Data Center itself has no other prerequisites. To achieve the CCNA Data Center certification, you must pass two exams: 640-911 Introduction to Cisco Data Center Networking (DCICN) and 640-916 Introduction to Cisco Data Center Technologies (DCICT), as shown in Figure I-1.

Figure I-1 Path to Cisco CCNA Data Center Certification

The DCICN and DCICT exams differ quite a bit in terms of the topics covered. DCICN focuses on networking technology. In fact, it overlaps quite a bit with the topics in the ICND1 100-101 exam, which leads to the Cisco Certified Entry Network Technician (CCENT) certification. DCICN explains the basics of networking, focusing on Ethernet switching and IP routing. The only data center focus in the DCICN exam is that all the configuration and verification examples use Cisco Nexus data center switches. The DCICT exam instead focuses on technologies specific to the data center. These technologies include storage networking, Unified Fabric, Unified Computing (the term used by Cisco for its server products and services), network services, and data networking features unique to the Cisco Nexus series of switches.

Types of Questions on the Exams

Cisco certification exams follow the same general format. At the testing center, you sit in a quiet room in front of the PC. Before the exam timer begins, you can complete a few other tasks on the PC; for example, you can take a sample quiz just to get accustomed to the PC and the testing engine. Anyone who has basic skills in getting around a PC should have no problems with the testing environment.

After the exam starts, you will be presented with a series of questions, one at a time, on the PC screen. The questions typically fall into one of the following categories:

Multiple choice, single answer
Multiple choice, multiple answers
Testlet
Drag-and-drop (DND)
Simulated lab (sim)
Simlet

The first three items in the list are all multiple-choice questions. The multiple-choice format requires you to point to and click a circle or square beside the correct answer(s). Cisco traditionally tells you how many answers you need to choose, and the testing software prevents you from choosing too many answers. The testlet asks you several multiple-choice questions, all based on the same larger scenario.

DND questions require you to move some items around in the graphical user interface (GUI). You left-click the mouse to hold the item, move it to another area, and release the mouse button to place the item in its destination (usually into a list). For some questions, to get the question correct, you might need to put a list of items in the proper order or sequence.

The last two types, sim and simlet questions, both use a network simulator to ask questions. Interestingly, the two types enable Cisco to assess two very different skills. First, sim questions generally describe a problem, while your task is to configure one or more routers and switches to fix it. The exam then grades the question based on the configuration you changed or added. Basically, these questions begin with a broken configuration, and you must fix it to answer the question correctly.

Simlet questions also use a network simulator, but instead of answering the question by changing or adding the configuration, they include one or more multiple-choice questions. These questions require you to use the simulator to examine network behavior by interpreting the output of show commands you decide to leverage to answer the question. Whereas sim questions require you to troubleshoot problems related to configuration, simlets require you to analyze both working and broken networks, correlating show command output with your knowledge of networking theory and configuration commands. You can watch and even experiment with these command types using the Cisco Exam Tutorial. To find the Cisco Certification Exam Tutorial, go to www.cisco.com and search for “exam tutorial.”

What’s on the DCICT Exam?

Everyone wants to know what is on the test, for any test, ever since the early days of school. Cisco openly publishes the topics of each of its certification exams. Cisco wants the candidates to know the variety of topics and get an idea about the kinds of knowledge and skills required for each topic.

Exam topics are very specific, and the verb used in their description is very important. The verb tells us to what degree the topic must be understood and what skills are required. For example, one topic might begin with “Describe...,” another with “Configure...,” another with “Verify...,” and another with “Troubleshoot....” Questions beginning with “Troubleshoot” require the highest skills level, because to troubleshoot, you must understand the topic, be able to configure it (to see what’s wrong with the configuration), and be able to verify it (to find the root cause of the problem). Pay attention to the question verbiage.

Cisco’s posted exam topics, however, are only guidelines. Cisco’s disclaimer language mentions that fact. Cisco makes an effort to keep the exam questions within the confines of the stated exam topics, and we know from talking to those involved that every question is analyzed for whether it fits the stated exam topic.

Over time, Cisco has begun making two stylistic improvements to the posted exam topics. In the past, the topics were simply listed as bullets with indentation to imply subtopics under a major topic. More often today, including for the DCICN and DCICT exam topics, Cisco also numbers the exam topics, making it easier to refer to specific ones. Additionally, Cisco lists the weighting for each of the major topic headings. The weighting tells the percentage of points from your exam, which should come from each major topic area. The DCICT contains six major headings with their respective weighting, shown in Table I-1.

Table I-1 Six Major Topic Areas in the DCICT 640-916 Exam

Note that while the weighting of each topic area tells you something about the exam, in the authors’ opinion, the weighting probably does not change how you study. All six topic areas hold enough weighting so that if you completely ignore an individual topic, you probably will not pass. Furthermore, data center technologies require you to put many concepts together, so you need all the pieces before you can understand the holistic view. The weighting might indicate where you should spend a little more time during the last days before taking the exam, but otherwise, plan to study all the exam topics.

DCICT 640-916 Exam Topics

The exam topics for both the DCICN and DCICT exams can be easily found at Cisco.com by searching. Alternatively, you can go to www.cisco.com/go/ccna, which gets you to the page for CCNA Routing and Switching, where you can easily navigate to the nearby CCNA Data Center page.

Tables I-2 through I-7 list the details of the exam topics, with one table for each of the major topic areas listed in Table I-1. Note that these tables also list the book chapters that discuss each of the exam topics.

Table I-2 Exam Topics in the First Major DCICT Exam Topic Area . Note that these tables also list the book chapters that discuss each of the exam topics.areas listed in Table I-1.

Table I-3 Exam Topics in the Second Major DCICT Exam Topic Area
Table I-4 Exam Topics in the Third Major DCICT Exam Topic Area
Table I-5 Exam Topics in the Fourth Major DCICT Exam Topic Area
Table I-6 Exam Topics in the Fifth Major DCICT Exam Topic Area

Table I-7 Exam Topics in the Sixth Major DCICT Exam Topic Area

Note
Because it is possible that the exam topics may change over time, it might be worth the time to double-check the exam topics as listed on the Cisco website (www.cisco.com/go/certifications; navigate to the CCNA Data Center page).

About the Book
This book discusses the content and skills needed to pass the 640-916 DCICT certification exam, which is the second and final exam to achieve CCNA Data Center certification. This book's companion title, CCNA Data Center DCICN 640-911 Official Cert Guide, discusses the content needed to pass the 640-911 DCICN certification exam. We strongly recommend that you plan and structure your learning to align with both exam requirements.

Book Features
The most important and somewhat obvious objective of this book is to help you pass the DCICT exam and help you achieve the CCNA Data Center certification. In fact, if the primary objective of this book were different, the book's title would be misleading! At the same time, the methods used in this book to help you pass the exam are also designed to make you much more knowledgeable in the general field of the data center and help you in your daily job responsibilities.

CCNA entry-level certification is the foundation for many of the Cisco professional-level certifications, and it would be a disservice to you if this book did not help you truly learn the material. Therefore, this book does not try to help you pass the exams only by memorization, but by truly learning and understanding the topics. This book uses several tools to help you discover your weak topic areas, to help you improve your knowledge and skills with those topics, and to prove that you have retained your knowledge of those topics.

This book helps you pass the CCNA exam by using the following methods:

Helping you discover which exam topics you have not mastered
Providing explanations and information to fill in your knowledge gaps
Supplying exercises that enhance your ability to grasp topics and deduce the answers to subjects related to the exam
Providing practice exercises on the topics and the testing process via test questions on the CD

Chapter Features
To help you customize study time using this book, the core chapters have several features that help you make the best use of your time:

"Do I Know This Already?" Quizzes: Each chapter begins with a quiz that helps you determine the amount of time you need to spend studying the chapter.

Foundation Topics: These are the core sections of each chapter. They explain the protocols, concepts, and configuration for the topics in the chapter.

Exam Preparation Tasks: At the end of the "Foundation Topics" section of each chapter, the "Exam Preparation Tasks" section lists a series of study activities that should be completed at the end of the chapter. Each chapter includes the activities that make the most sense for studying the topics in that chapter. The activities include the following:

Review Key Topics: The Key Topic icon is shown next to the most important items in the "Foundation Topics" section of the chapter. The "Review Key Topics" activity lists the key topics from the chapter and their corresponding page numbers. Although the content of the entire chapter could appear on the exam, you should definitely know the information listed in each key topic.

Complete Tables and Lists from Memory: To help you exercise your memory and memorize certain lists of facts, many of the more important lists and tables from the chapter are included in a document on the CD. This document lists only partial information, allowing you to complete the table or list.

Define Key Terms: Although the exam might be unlikely to ask a question such as, "Define this term," the CCNA exams require that you learn and know a lot of networking terminology. This section lists the most important terms from the chapter, asking you to write a short definition and compare your answer to the Glossary at the end of this book.

References: Some chapters contain a list of reference links for additional information and details on the topics discussed in that particular chapter.

Part Review
The part review tasks help you prepare to apply all the concepts you learned in that part of the book. Each book part contains several related chapters. The part reviews list tasks, along with checklists, so that you can track your progress. The following list explains the most common tasks you will see in the part review:

Review "Do I Know This Already?" (DIKTA) Questions: Although you have already seen the DIKTA questions from the chapters in a part, answering those questions again can be a useful way to review facts. The "Part Review" section suggests that you repeat the DIKTA questions, but use the Pearson IT Certification Practice Test (PCPT) exam software that comes with the book, for extra practice in answering multiple-choice questions on a computer.

Answer Part Review Questions: The PCPT exam software includes several exam databases. One exam database holds part review questions, written specifically for part reviews. These questions purposefully include multiple concepts in each question, sometimes from multiple chapters, to help build the skills needed for the more challenging analysis questions on the exams.

Review Key Topics: Yes, again! They are indeed the most important topics in each chapter.

Open Self-Assessment Questions: The exam is unlikely to ask a question such as, "Define this term," but the CCNA exams require that you learn and know a lot of technology concepts and architectures. This section asks you some open questions that you should try to describe or explain in your own words. This will help you develop a thorough understanding of important exam topics pertaining to that part.

Other Features
In addition to the features in each of the core chapters, this book, as a whole, has additional study resources, including the following:

CD-based practice exam: The companion CD contains the powerful PCPT exam engine. You can answer the questions in study mode or take simulated DCICT exams with the CD and activation code included in this book.

Companion website: The website www.ciscopress.com/title/9781587144226 posts up-to-the-minute material that further clarifies complex exam topics. Check this site regularly for new and updated postings written by the authors that provide further insight into the more troublesome topics on the exam.

eBook: If you are interested in obtaining an e-book version of this title, we have included a special offer on a coupon card inserted in the CD sleeve in the back of the book. This offer enables you to purchase the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition e-book and practice test at a 70 percent discount off the list price. In addition to three versions of the e-book—PDF (for reading on your computer), EPUB (for reading on your tablet, mobile device, or Nook or other e-reader), and Mobi (the native Kindle version)—you will receive additional practice test questions and enhanced practice test features.

PearsonITCertification.com: The website www.pearsonitcertification.com is a great resource for all things IT-certification related. Check out the great CCNA articles, videos, blogs, and other certification preparation tools from the industry's best authors and trainers.

Final Prep Tasks
Chapter 21, "Final Review," lists a series of tasks that you can use for your final preparation before taking the exam.

Book Organization, Chapters, and Appendixes
This book contains 20 core chapters—Chapters 1 through 20—with Chapter 21 including some suggestions for how to approach the actual exams. Each core chapter covers a subset of the topics on the 640-916 DCICT exam. The core chapters are organized into sections. The core chapters cover the following topics:

Part I: Data Center Networking

Chapter 1, "Data Center Network Architecture": This chapter provides an overview of the data center networking architecture and design practices relevant to the exam. It goes into the detail of multilayer data center network design and technologies, such as port channel, virtual port channel, and Cisco FabricPath. Basic configuration and verification commands for these technologies are also included in this chapter.

Chapter 2, "Cisco Nexus Product Family": This chapter provides an overview of the Nexus switches product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. The chapter explains the different specifications of each product.

Chapter 3, "Virtualizing Cisco Network Devices": This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000, using Virtual Device Contexts (VDC) and Network Interface Virtualization (NIV). It details the VDC concept and the VDC deployment scenarios. The different VDC types and commands used to configure and verify the setup are also included in the chapter. Also covered in this chapter is the NIV—what it is, how it works, and how it is configured.

Chapter 4, "Data Center Interconnect": This chapter covers the latest Cisco innovations in the data center extension solutions and in LAN extension in particular, which is called Overlay Transport Virtualization (OTV). It goes into detail about OTV, how it works, and how it is deployed. Commands to configure and how to verify the operation of the OTV are also covered in this chapter.

Chapter 5, "Management and Monitoring of Cisco Nexus Devices": This chapter provides an overview of operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of data plane, control plane, and management plane. This chapter also identifies the mechanism available in the Nexus platform to protect the control plane of the switch. It also provides an overview of out-of-band and in-band management interfaces. The Nexus platform provides several methods for device configuration and management. These methods are also discussed with some important commands for initial setup, configuration, and verification.

Part II: Data Center Storage-Area Networking

Chapter 6, "Data Center Storage Architecture": This chapter provides an overview of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, network-attached storage (NAS) connectivity for remote server storage, and storage-area network (SAN). It covers Fibre Channel protocol and operations in detail. The edge/core layer of the SAN design is also included.

Chapter 7, "Cisco MDS Product Family": This chapter provides an overview of the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management. This chapter goes directly into the key concepts of the Cisco Storage Networking product family, and the focus is on the current generation of modules and storage services solutions.

Chapter 8, "Virtualizing Storage": Storage virtualization is the process of combining multiple network storage devices into what appears to be a single storage unit. Server virtualization is partitioning a physical server into smaller virtual servers. Network virtualization is using network resources through a logical segmentation of a single physical network. Application virtualization is decoupling the application and its data from the operating system. This chapter guides you through storage virtualization concepts, answering basic what, why, and how questions. It goes directly into the key concepts of storage virtualization.

Chapter 9, "Fibre Channel Storage-Area Networking": This chapter provides an overview of how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. It also covers the power-on auto provisioning (POAP) and setting up an ISL port using the command-line interface. It also describes how to verify virtual storage-area networks (VSAN), VSAN trunking, zoning, the fabric login, and fabric domain. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to Introducing Cisco Data Center Technologies (DCICT) certification.

Part III: Data Center Unified Fabric

Chapter 10, "Unified Fabric Overview": This chapter provides an overview of challenges faced by today's data centers. It focuses on how Cisco Unified Fabric architecture addresses those challenges by converging traditionally disparate network and storage environments while providing a platform for scalable, secure, and intelligent services. It also takes a look at some of the technologies allowing extension of the Unified Fabric environment beyond the boundary of a single data center.

Chapter 11, "Data Center Bridging and FCoE": This chapter introduces principles behind IEEE data center bridging standards and explains how they enhance traditional Ethernet environments to support convergence of network and storage traffic over a single Unified Fabric. It also takes a look at different cabling options to support a converged Unified Fabric environment.

Chapter 12, "Multihop Unified Fabric": This chapter discusses various options for multihop FCoE topologies to extend the reach of Unified Fabric beyond the single-hop boundary. It discusses FCoE protocol and its operation over numerous switching topologies leveraging a variety of Cisco Fabric Extension technologies. It also discusses the use of converged Unified Fabric leveraging Cisco FabricPath technology. It takes a look at characteristics of each option, as well as their benefits and challenges.

Part IV: Cisco Unified Computing

Chapter 13, "Cisco Unified Computing System Architecture": This chapter begins with a quick fly-by on the evolution of server computing, followed by an introduction to the Cisco UCS value proposition, hardware, and software portfolio. Then the chapter explains UCS architecture in terms of component connectivity options and unification of blade and rackmount server connectivity. It also details the Cisco Integrated Management Controller architecture and purpose.

Chapter 14, "Cisco Unified Computing System Manager": This chapter starts by describing how to set up, configure, and verify the Cisco UCS Fabric Interconnect cluster. It then describes the process of hardware and software discovery in Cisco UCS. It also explains how to monitor and verify this process.

Chapter 15, "Cisco Unified Computing System Pools, Policies, Templates, and Service Profiles": The chapter starts by explaining the hardware abstraction layer in more detail and how it relates to stateless computing, followed by explanations of logical and physical resource pools and the essentials to create templates and aid rapid deployment of service profiles.

Chapter 16, "Administration, Management, and Monitoring of Cisco Unified Computing System": This chapter starts by explaining some of the important features used when administering and monitoring Cisco UCS. It also gives you an introduction to Cisco UCS XML, the goUCS automation toolkit, and the well-documented UCS XML API using the Python SDK. As a bonus, you also see notes, tips, and most relevant features.

Part V: Data Center Server Virtualization

Chapter 17, "Server Virtualization Solutions": This chapter takes a peek into the history of server virtualization and discusses fundamental principles behind different types of server virtualization technologies. It evaluates the benefits and challenges of server virtualization while offering approaches to mitigate performance and security concerns.

Chapter 18, "Cisco Nexus 1000V and Virtual Switching": The chapter starts by describing the challenges of current virtual switching layers in data centers, and then introduces the distributed virtual switches and, in particular, the Cisco vision—Cisco Nexus 1000V. The chapter explains installation options, commands to verify initial configuration of virtual Ethernet modules, virtual supervisory modules, and integration with VMware vCenter Server.

Part VI: Data Center Network Services

Chapter 19, "Cisco ACE and GSS": This chapter gives you an overview of server load balancing concepts required for the DCICT exam, which helps in choosing the correct solution for the load balancing requirements in the data center. It describes Cisco Application Control Engine (ACE) and Global Site Selector (GSS) products and their key features. It also details different deployment modes of ACE and their application within the data center design. GSS and ACE integration is also explained using an example.

Chapter 20, "Cisco WAAS and Application Acceleration": This chapter gives you an overview of the challenges associated with a wide-area network. The methods used to optimize the WAN traffic are also discussed in detail. It explains the Cisco WAAS solution and describes the concepts and features that are important for the DCICT exam. Cisco supports a range of hardware and software platforms that are suitable for various wide-area network optimization and application acceleration requirements. An overview of these products is also provided in this chapter.

Part VII: Final Preparation

Chapter 21, "Final Review": This chapter suggests a plan for exam preparation after you have finished the core parts of the book, in particular explaining the many study options available in the book.

Appendix A, "Answers to the 'Do I Know This Already?' Quizzes": Includes the answers to all the questions from Chapters 1 through 20.

The Glossary contains definitions for all the terms listed in the "Definitions of Key Terms" section at the conclusion of Chapters 1 through 20.

Appendixes on the CD

Appendix B, "Memory Tables": Holds the key tables and lists from each chapter, with some of the content removed. You can print this appendix and, as a memory exercise, complete the tables and lists. The goal is to help you memorize facts that can be useful on the exams.

Appendix C, "Memory Tables Answer Key": Contains the answer key for the exercises in Appendix B.

Appendix D, "Study Planner": A spreadsheet with major study milestones enabling you to track your progress through your study.

Reference Information
This short section contains a few topics available for reference elsewhere in the book. You may read these when you first use the book, but you may also skip these topics and refer to them later. In particular, make sure to note the final page of this introduction, which lists several contact details, including how to get in touch with Cisco Press.

Install the Pearson IT Certification Practice Test Engine and Questions
The CD in the book includes the Pearson IT Certification Practice Test (PCPT) engine (software that displays and grades a set of exam-realistic multiple-choice, drag-and-drop, fill-in-the-blank, and testlet questions). Using the PCPT engine, you can either study by going through the questions in study mode, or you can take a simulated DCICT exam that mimics real exam conditions.

The installation process requires two major steps. The CD in the back of this book has a recent copy of the PCPT engine. The practice exam (the database of DCICT exam questions) is not on the CD. After you install the software, the PCPT software will use your Internet connection and download the latest versions of both the software and the question databases for this book.

Note
The cardboard CD case in the back of this book includes both the CD and a piece of thick paper. The paper lists the activation code for the practice exam associated with this book. Do not lose the activation code.

Note
Also on this same piece of paper, on the opposite side from the exam activation code, you will find a one-time-use coupon code that will give you 70% off the purchase of the CCNA Data Center DCICT 640-916 Official Cert Guide, Premium Edition eBook and Practice Test.

Install the Software from the CD
The software installation process is routine as compared with other software installation processes. If you have already installed the PCPT software from another Pearson product, you do not need to reinstall the software. Simply launch the software on your desktop and proceed to activate the practice exam from this book by using the activation code included in the CD sleeve. The following steps outline the installation process:

Step 1. Insert the CD into your PC.
Step 2. The software that automatically runs is the Cisco Press software to access and use all CD-based features, including the exam engine. From the main menu, click the option to Install the Exam Engine.
Step 3. Respond to windows prompts as with any typical software installation process.

The installation process gives you the option to activate your exam with the activation code supplied on the paper in the CD sleeve. This process requires that you establish a Pearson website login. You will need this login to activate the exam, so please do register when prompted. If you already have a Pearson website login, there is no need to register again. Just use your existing login.

Activate and Download the Practice Exam
When the exam engine is installed, you should then activate the exam associated with this book (if you did not do so during the installation process) as follows:

Step 1. Start the PCPT software from the Windows Start menu or from your desktop shortcut icon.
Step 2. To activate and download the exam associated with this book, from the My Products or Tools tab, click the Activate button.
Step 3. At the next screen, enter the activation key from the paper inside the cardboard CD holder in the back of the book. Then click the Activate button.
Step 4. The activation process will download the practice exam. Click Next, and then click Finish.

When the activation process completes, the My Products tab should list your new exam. If you do not see the exam, make sure that you have selected the My Products tab on the menu. At this point, the software and practice exam are ready to use. Just select the exam and click the Open Exam button.

To update a particular product's exams that you have already activated and downloaded, select the Tools tab and click the Update Products button. Updating your exams will ensure that you have the latest changes and updates to the exam data.

If you want to check for updates to the PCPT software, select the Tools tab and click the Update Application button. This will ensure that you are running the latest version of the software engine.

Activating Other Products
The exam software installation process and the registration process have to happen only once. Then for each new product, only a few steps are required. For instance, if you buy another new Cisco Press Official Cert Guide or Pearson IT Certification Cert Guide, extract the activation code from the CD sleeve in the back of that book; you do not even need the CD at this point. From there, all you have to do is start PCPT (if not still up and running) and perform Steps 2 through 4 from the previous list.

PCPT Exam Databases with This Book
This book includes an activation code that allows you to load a set of practice questions. The questions come in different exams or exam databases. When you install the PCPT software and type in the activation code, the PCPT software downloads the latest version of all these exam databases. With the DCICT book alone, you get four different "exams," or four different sets of questions.

You can choose to use any of these exam databases at any time, both in study mode and practice exam mode. However, many people find it best to save some of the exams until exam review time, after they have finished reading the entire book. Here is a suggested study plan:

During part review, use PCPT to review the DIKTA questions for that part, using study mode.
During part review, use the questions built specifically for part review (the part review questions) for that part of the book, using study mode.
Save the remaining exams to use with Chapter 21, "Final Review," using practice exam mode.

The two modes inside PCPT give you better options for study versus practicing a timed exam event. In study mode, you can see the answers immediately, so you can study the topics more easily. Practice exam mode creates an event somewhat like the actual exam. It gives you a preset number of questions, from all chapters, with a timer event. Practice exam mode also gives you a score for that timed event.

How to View Only DIKTA Questions by Part
Each "Part Review" section asks you to repeat the Do I Know This Already (DIKTA) quiz questions from the chapters in that part. Although you could simply scan the book pages to review these questions, it is slightly better to review these questions from inside the PCPT software, just to get a little more practice in how to read questions from the testing software. To view these DIKTA (book) questions inside the PCPT software, follow these steps:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window that appears should list some exams. Select the box beside DCICT Book Questions, and deselect the other boxes. This selects the "book" questions (that is, the DIKTA questions from the beginning of each chapter).
Step 4. On this same window, click at the bottom of the screen to deselect all objectives (chapters), and then select the box beside each chapter in the part of the book you are reviewing.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

How to View Only Part Review Questions by Part
The exam databases you get with this book include a database of questions created solely for study during the part review process. DIKTA questions focus more on facts, with basic application. The part review questions instead focus more on application and look more like real exam questions. To view these questions, follow the same process as you did with DIKTA/book questions, but select the part review database instead of the book database, as follows:

Step 1. Start the PCPT software.
Step 2. From the main (home) menu, select the item for this product, with a name like DCICT 640-916 Official Cert Guide, and click Open Exam.
Step 3. The top of the next window should list some exams. Select the box beside Part Review Questions, and deselect the other boxes. This selects the questions intended for part-ending review.
Step 4. On this same window, click at the bottom of the screen to deselect all objectives, and then select (check) the box beside the book part you want to review. This tells the PCPT software to give you part review questions from the selected part.
Step 5. Select any other options on the right side of the window.
Step 6. Click Start to start reviewing the questions.

For More Information
If you have any comments about the book, submit them via www.ciscopress.com. Just go to the website, select Contact Us, and type your message. Cisco might make changes that affect the CCNA data center certification from time to time. You should always check www.cisco.com/go/certification for the latest details.

The CCNA Data Center DCICT 640-916 Official Cert Guide helps you attain the CCNA data center certification. This is the DCICT exam prep book from the only Cisco-authorized publisher. We at Cisco Press believe that this book certainly can help you achieve CCNA data center certification, but the real work is up to you! We trust that your time will be well spent.

Getting Started

It might look like this is yet another certification exam preparation book—indeed, it is. But that's not the point. This "Getting Started" chapter guides you through building a strategy toward success, but at the same time it also encourages you to continue your journey to become a better data center networking professional. A team of five data center professionals came together to carefully pick the content for this book to send you on your CCNA data center certification journey. We welcome your decision to become a Cisco Certified Network Associate Data Center professional, and today you are making a very significant step in achieving your goal. You are probably wondering how to get started. Strategizing about your studying is a key element to your success. This book offers a solid foundation for the knowledge required to pass the CCNA data center certification exam. Take time to review it before starting your CCNA data center certification journey. Spend time thinking about key milestones you want to reach, and plan accordingly to review material for exam topics you are less comfortable with.

A Brief Perspective on the CCNA Data Center Certification Exam

Cisco sets the bar somewhat high for passing its certification exams, including all the exams related to CCNA certifications, such as CCNA data center. These exams challenge you from many angles, such as your ability to analyze, configure, and troubleshoot networking features, understand the relevant technology landscape, and analyze emerging technology trends. The CCNA data center exam puts your knowledge in network, storage, compute, virtualization, and security to the test.

Many questions in the CCNA data center certification exam apply to connectivity between the data center devices. For example, a question might give you a data center topology diagram and then ask what needs to be configured to make that setup work, what needs to be considered when designing that topology, or what is the missing ingredient of the solution. You will need certain analytical problem-solving skills. Such skills require you to prepare by doing more than just reading and memorizing this book's material.

Suggestions for How to Approach Your Study with This Book

Although Cisco certification exams are challenging, many people pass them every day. So, what do you need to do to be ready to pass the exam? We encourage you to build your theoretical knowledge by thoroughly reading this book and taking time to remember the most critical facts; we also encourage you to develop your practical knowledge by having hands-on experience in the topics covered in the CCNA data center certification exam.

Think about studying for the CCNA data center certification exam as preparing for a marathon. It requires a bit of practice every day. What I Talk About When I Talk About Running is a memoir by Haruki Murakami in which he writes about his interest and participation in long-distance running. Murakami started running in the early 1980s; since then, he has competed in more than 20 marathons and an ultramarathon. On running as a metaphor, he wrote in his book: "For me, running is both exercise and a metaphor. Running day after day, piling up the races, bit-by-bit I raise the bar, and by clearing each level I elevate myself. At least that's why I've put in the effort day after day: to raise my own level. I'm no great runner, by any means. I'm at an ordinary—or perhaps more like mediocre—level. But that's not the point. The point is whether or not I improved over yesterday. In long-distance running the only opponent you have to beat is yourself, the way you used to be."

Similar principles apply here. Take time to review this book chapter by chapter and topic by topic. Plan your time accordingly and make progress every day. Remember, the only opponent you have to beat is yourself.

This Book Is Compiled from 20 Short Read-and-Review Sessions

First, look at your study as a series of read-and-review tasks, each on a relatively small set of related topics. This book has 20 content chapters covering the topics you need to know to pass the exam. These topics are structured to cover the main data center infrastructure technologies, such as Network, Compute, Server Virtualization, and Storage Area Networks in a Data Center. This book organizes the content into topics of a more manageable size to give you something more digestible to build your knowledge.

Each chapter consists of numerous "Foundation Topics" sections and "Exam Preparation Tasks" at the end. Each chapter marks key topics you should pay extra attention to. Review those sections carefully and make sure you are thoroughly familiar with their content.

Practice—Did I Mention: Practice?

Second, plan to use the practice tasks at the end of each chapter. Each chapter ends with practice and study tasks under the heading "Exam Preparation Tasks." Doing these tasks, and doing them at the end of the chapter, really helps you get ready. Do not put off using these tasks until later! The chapter-ending "Exam Preparation Tasks" section helps you with the first phase of deepening your knowledge and skills of the key topics, remembering terms, and linking the concepts in your brain so that you can remember how it all fits together.

The following list describes the majority of the activities you will find in "Exam Preparation Tasks" sections:

Review key topics
Complete memory tables
Define key terms

Approach each chapter with the same plan. Based on your score in the "Do I Know This Already?" (DIKTA) quiz, you can choose to read the entire core (Foundation Topics) section of the chapter or just skim the chapter. DIKTA is a self-assessment quiz appearing at the beginning of each content chapter. Remember, regardless of whether you skim or read thoroughly, do the study tasks in the "Exam Preparation Tasks" section at the end of the chapter. Table 1 shows the suggested study approach for each content chapter.

Table 1 Suggested Study Approach for the Content Chapter

In Each Part of the Book You Will Hit a New Milestone

Third, view the book as having six major milestones, one for each major topic.

Beyond the more obvious organization into chapters, this book also organizes the chapters into six major topic areas called book parts. Completing each part means that you have completed a major area of study. At the end of each part, take extra time to complete "Part Review" tasks and ask yourself where your weak and strong areas are. Acknowledge yourself for completing major milestones. Table 2 lists the six parts in this book.

Table 2 Six Major Milestones: Book Parts

Note that the "Part Review" directs you to use the Pearson Certification Practice Test (PCPT) software to access the practice questions. Each "Part Review" instructs you to repeat the DIKTA questions while using the PCPT software. It also instructs you how to access a specific set of questions reserved for reviewing concepts at the "Part Review." Note that the PCPT software and exam databases included with this book also give you the rights to additional questions. Chapter 21, "Final Review," gives some recommendations on how to best use those questions for your final exam preparation.

Consider setting a goal date for finishing each part of the book and reward yourself for achieving this goal! Plan breaks—some personal time out—to get refreshed and motivated for the next part.

Use the Final Review Chapter to Refine Skills

Fourth, do the tasks outlined at the end of the book in Chapter 21, "Final Review." Chapter 21 has two major goals. First, it helps you further develop the analytical skills you need to answer the more complicated questions in the exam. Many questions require that you cross-connect ideas about design, configuration, verification, and troubleshooting. More reading will not necessarily develop these skills; this chapter's tasks give you activities to make further progress. The tasks in the chapter also help you find your weak areas, uncovering any gaps in your knowledge. This final element gives you repetition with high-challenge exam questions, helping you avoid some of the pitfalls people experience with the actual exam. Many of the questions are purposely designed to test your knowledge in areas where the most common mistakes and misconceptions occur.

Set Goals and Track Your Progress

Finally, before you start reading the book, take the time to make a plan, set some goals, and be ready to track your progress. Although making lists of tasks might or might not appeal to you, depending on your personality, goal setting can help everyone studying for these exams.

To set the goals, you need to know what tasks you plan to do. The list of tasks does not have to be very detailed, such as putting down every single task in the "Exam Preparation Tasks" section, the "Part Review" tasks section, or the "Final Review" chapter; listing only the major tasks can be sufficient. You should track at least two tasks for each typical chapter: reading the "Foundation Topics" section and doing the "Exam Preparation Tasks" section at the end of the chapter. Of course, do not forget to list tasks for "Part Reviews" and the "Final Review." Table 3 shows a sample of a task planning table for Part I of this book.

Table 3 Sample Excerpt from a Planning Table

Use your goal dates as a way to manage your study, and do not get discouraged if you miss a date. Pick reasonable dates that you can meet. When setting your goals, think about how fast you read and the length of each chapter's "Foundation Topics" section, as listed in the Table of Contents, and when you have time to read. If you finish a task sooner than planned, move up the next few goal dates. If you miss a few dates, do not start skipping the tasks listed at the ends of the chapters! Instead, think about what is impacting your schedule—family commitments, work commitments, and so on—and either adjust your goals or work a little harder on your study.

Other Small Tasks Before Getting Started

You will need to complete a few overhead tasks, such as install software, find some PDFs, and so on. You can complete these tasks now or do them in your spare time when you need a study break during the first few chapters of the book. Do not delay, so that, should you encounter software installation problems, you have sufficient time to resolve them before you need the tool.

Register (for free) at the Cisco Learning Network (CLN, http://learningnetwork.cisco.com) and join the CCNA data center study group. This mailing list allows you to lurk and participate in discussions about CCNA data center topics, for both the DCICN and DCICT exams. Register, join the group, and set up an e-mail filter to redirect the messages to a separate folder. Even if you do not spend time reading all the posts yet, later, you can browse through the posts to find relevant topics. You can also search the posts from the CLN website. There are many recorded CCNA data center sessions from the technology experts, which you can listen to as much as you want. All the recorded sessions last a maximum of one hour.

Install the PCPT exam software and activate the exams. For more details on how to load the software, refer to the "Introduction," under the heading "Install the Pearson Certification Practice Test Engine and Questions."

Keep calm, and enjoy the ride!

Part I: Data Center Networking

Exam topics covered in Part I:
Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath
Describe the Cisco Nexus Product Family
Describe FEX Products
Describe Device Virtualization
Identify Key Differentiator Between DCI and Network Interconnectivity
Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Chapter 1: Data Center Network Architecture
Chapter 2: Cisco Nexus Product Family
Chapter 3: Virtualizing Cisco Network Devices
Chapter 4: Data Center Interconnect
Chapter 5: Management and Monitoring of Cisco Nexus Devices

Chapter 1. Data Center Network Architecture

This chapter covers the following exam topics:
Modular Approach in Network Design
Core, Aggregation, and Access Layer
Collapse Core Model
Port Channel
Virtual Port Channel
FabricPath

Data centers are designed to host critical computing resources in a centralized place. These resources can be utilized from a campus building of an enterprise network, from a branch office over the wide-area network (WAN), and from the remote locations over the public Internet. Network architecture is an important subject that addresses many functional areas of the data center to ensure that data center services are running around the clock. Therefore, the network plays an important role in the availability of data center resources.

This chapter discusses the data center networking architecture and design practices relevant to the exam. It goes directly into the key concepts of data center networking and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification. It is assumed that you are familiar with the basics of networking concepts outlined in Introducing Cisco Data Center Networking (DCICN). Basic network configuration and verification commands are also included in this chapter.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 1-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 1-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. What are the benefits of modular approach in network design?
a. Scalability
b. Security
c. High availability
d. Simplicity

2. Where will you connect a server in a data center?
a. Access layer
b. Aggregation layer
c. Core layer
d. Services layer

3. Which of the following statements about the data center aggregation layer is correct?
a. The purpose of the aggregation layer is to provide high-speed packet forwarding.
b. It provides policy-based connectivity.
c. It provides connectivity to network-based services (load balancer and firewall).
d. This layer is optional in data center design.

4. Which of the following are key characteristics of data center core layer?
a. Provides connectivity to aggregation layer
b. Provides connectivity to servers
c. High availability
d. Used to enforce network security

5. What is a Collapse Core data center network design?
a. Collapse Core is a design where only one core switch is used.
b. In Collapse Core design, distribution and core layers are collapsed.
c. In Collapse Core design, access layer, distribution layer, and core layers are collapsed.
d. Collapse Core design is a nonredundant design.

6. Which of the following are benefits of port channels?
a. Increased capacity
b. High availability
c. Load balancing
d. Quality of Service

7. Which of the following attributes must match to bundle ports into a port channel?
a. Speed
b. Duplex
c. Flow control

8. Which of the following are valid port channel modes?
a. Active
b. Progressive
c. Passive
d. ON
e. OFF

9. LACP is enabled on a port. The port initiates negotiations with a peer switch by sending LACP packets. Which LACP mode is used?
a. ON
b. Standby
c. Passive
d. Active

10. LACP is enabled on a port. The port responds to LACP packets that it receives but does not initiate LACP negotiation. Which LACP mode is used?
a. ON
b. Progressive
c. Passive
d. Active

11. Cisco Nexus switches support which of the following port channel load-balancing methods?
a. Source and destination IP address
b. Source and destination MAC address
c. Source and destination TCP/UDP port number

12. Which of the following are benefits of using vPC?
a. It eliminates Spanning Tree Protocol (STP) blocked ports.
b. It uses all available uplink bandwidth.
c. It provides fast convergence if either the link or the device fails.
d. It is supported on all Cisco switches.

13. Identify the Cisco switching platform that supports vPC.
a. Catalyst 6500 switches
b. Nexus 7000 Series switches
c. Nexus 5000 Series switches
d. All Cisco switches

14. Cisco Fabric Services over Ethernet (CFSoE) performs which of the following operations?
a. It is used to exchange Layer 2 forwarding tables between vPC peers.
b. It performs consistency and compatibility checks.
c. It monitors the status of vPC member ports.
d. It performs load balancing of traffic between two vPC switches.

15. In vPC setup, Cisco Fabric Service over Ethernet (CFSoE) uses which of the following links?
a. vPC peer keepalive link
b. vPC peer link
c. vPC ports
d. vPC peer link and vPC peer keepalive link

16. How many vPC domains can be configured on a switch or VDC?
a. Only one
b. Only two
c. 255
d. There is no limit; you can configure as many as you want.

17. In a FabricPath network, which protocol is used to automatically assign a unique switch ID to each FabricPath switch?
a. Dynamic Host Configuration Protocol (DHCP)
b. Cisco Fabric Services over Ethernet (CFSoE)
c. Dynamic Resource Allocation Protocol (DRAP)
d. In FabricPath, the switch ID must be allocated manually.

18. In a FabricPath network, the root of the first multidestination tree is elected based on which of the following parameters?
a. Root priority
b. System ID
c. Switch ID
d. MAC address

19. FabricPath routes the frames based on which of the following parameters?
a. Source switch ID in outer source address (OSA)
b. Destination MAC address
c. Destination switch ID in outer destination address (ODA)
d. IP address of packet

20. What is the name of the FabricPath control plane protocol?
a. IS-IS
b. OSPF
c. BGP
d. EIGRP

Foundation Topics

Modular Approach in Network Design

In a modular design, the network is divided into distinct building blocks based on features and behaviors. These building blocks are used to construct a network topology within the data center. These blocks or modules can be added and removed from the network without redesigning the entire data center network. Modularity also implies isolation of building blocks, so these blocks are separated from each other and communicate through specific network connections between the blocks.

This approach has several advantages. Modular networks are scalable: modularity allows for the considerable growth or reduction in the size of a network without making drastic changes or needing any redesign. Because these blocks are independent from each other, the main advantage occurs during changes in the network; in other words, a change in one block does not affect other blocks. Modularity also enables faster moves, adds, and changes (MACs), and incremental changes in the network. Modular designs are simple to design, configure, and troubleshoot. A network must always be kept as simple as possible; the concept seems obvious, but it is very often forgotten. Because of the hierarchical topology of a modular design, the addressing is also simplified within the data center network, and modular design provides easy control of traffic flow and improved security. Scalable data center network design is achieved by using the principle of hierarchy and modularity.

The Multilayer Network Design

When you design a data center using a modular approach, the network is divided into three functional layers: Core, Aggregation, and Access. These layers can be physical or logical. This section describes the function of each layer in the data center. Figure 1-1 shows functional layers of a multilayer network design.

Figure 1-1 Functional Layers in the Hierarchical Model

The hierarchical network model provides a modular framework that enables flexibility in network design and facilitates ease of implementation and troubleshooting. The hierarchical model divides networks or their modular blocks into three layers. Table 1-2 describes the key features of each layer within the data center.

Table 1-2 Data Center Functional Layers

The multilayer network design consists of several building blocks connected across a network backbone. The multilayer network design takes maximum advantage of many Layer 3 services, including segmentation, load balancing, and failure recovery. In the most general model, Layer 2 switching is used in the access layer, Layer 2 and Layer 3 switching in the distribution layer, and Layer 3 switching in the core. An advantage of the multilayer network design is scalability: new buildings and server farms can be easily added or removed without changing the design. The redundancy of the building block is extended with redundancy in the backbone. Figure 1-2 shows a sample topology of a multilayer network design.

Figure 1-2 Multilayer Network Design

Access Layer

The access layer is the first point of entry into the network for edge devices, end stations, and servers. The data center access layer provides Layer 2, Layer 3, and mainframe connectivity. The access layer in the data center is typically built at Layer 2, which allows better sharing of service devices across multiple servers. This design also enables the use of Layer 2 clustering, which requires the servers to be Layer 2 adjacent. This design lowers TCO and reduces complexity by reducing the number of components that you need to configure and manage. The design of the access layer varies, depending on whether you use Layer 2 or Layer 3 access. A mix of both Layer 2 and Layer 3 access models permits a flexible solution and enables application environments to be optimally positioned.

With Layer 2 access, you need two access layer switches with the same VLAN extended across both access switches using a trunk port. The switches in the access layer are connected to two separate distribution layer switches for redundancy. With a dual-homing network interface card (NIC), the VLAN or trunk supports the same IP address allocation on the two server links to the two separate switches. This topology provides access layer high availability. The default gateway would be implemented at the aggregation layer switch.

Aggregation Layer

The aggregation (or distribution) layer aggregates the uplinks from the access layer to the data center core. This layer is the critical point for control and application services. Security and application service devices (such as load-balancing devices, SSL offloading devices, firewalls, and IPS devices) are often deployed as a module in the aggregation layer. With Layer 2 at the aggregation layer, there are physical loops in the topology that must be managed by the STP. Rapid Per-VLAN Spanning Tree Plus (Rapid PVST+) is recommended to ensure a logically loop-free topology over the physical topology. With Layer 2 at the aggregation layer, the default gateway for the servers can be configured at the aggregation layer. Although Layer 2 at the aggregation layer is tolerated for traditional designs, new designs try to confine Layer 2 to the access layer.
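To make the Layer 2 access design concrete, the following minimal NX-OS sketch shows one possible configuration. It is not taken from the book's figures; the VLAN, interface numbers, and IP addressing are illustrative assumptions, and HSRP is shown as one common way to provide a redundant default gateway at the aggregation layer.

! Access switch: trunk the server VLAN toward both aggregation switches
access-sw(config)# vlan 10
access-sw(config)# interface ethernet 1/1
access-sw(config-if)# switchport
access-sw(config-if)# switchport mode trunk
access-sw(config-if)# switchport trunk allowed vlan 10

! Aggregation switch: terminate Layer 3 for the VLAN and act as the
! default gateway (the second aggregation switch would carry a similar
! configuration with its own interface address)
agg-sw1(config)# feature interface-vlan
agg-sw1(config)# feature hsrp
agg-sw1(config)# interface vlan 10
agg-sw1(config-if)# ip address 10.1.10.2/24
agg-sw1(config-if)# hsrp 10
agg-sw1(config-if-hsrp)# ip 10.1.10.1

The servers would then use the HSRP virtual address (10.1.10.1 in this sketch) as their default gateway, which keeps the gateway reachable if one aggregation switch fails.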

Depending on the requirements, the boundary between Layer 2 and Layer 3 at the aggregation layer can be in the multilayer switches, the firewalls, or the content switching devices. The aggregation layer typically provides Layer 3 connectivity to the core. If the same VLAN is required across multiple access layer devices, the aggregation layer might also need to support a large Spanning Tree Protocol (STP) processing load.

Note
In traditional multilayer data center design, service devices that are deployed at the aggregation layer are shared among all the servers. Service devices that are deployed at the access layer provide benefit only to the servers that are directly attached to the specific access switch.

Core Layer

Implementing a data center core is a best practice for large data centers. When the core is implemented in an initial data center design, it helps ease network expansion and avoid disruption to the data center environment. The data center typically connects to the campus core using Layer 3 links. The data center network is summarized, and the core injects a default route into the data center network. The following drivers are used to determine whether a core solution is appropriate:

40 and 100 Gigabit Ethernet density: Are there requirements for higher bandwidth connectivity, such as 40 or 100 Gigabit Ethernet? With the introduction of the Cisco Nexus 7000 M-2 Series of modules, the Cisco Nexus 7000 Series switches can now support much higher bandwidth densities.

10 Gigabit Ethernet density: Without a data center core, will there be enough 10 Gigabit Ethernet ports on the campus core switch pair to support both the campus distribution and the data center aggregation modules?

Administrative domains and policies: Separate cores help to isolate campus distribution layers from data center aggregation layers for troubleshooting, administration, and implementation of policies (such as quality of service, ACLs, troubleshooting, and maintenance).

Anticipation of future development: The impact that can result from implementing a separate data center core layer at a later date might make it worthwhile to install the core layer at the beginning.

Key core characteristics include the following:

Distributed forwarding architecture
Low-latency switching
10 Gigabit Ethernet scalability
Scalable IP multicast support

Collapse Core Design

Although there are three functional layers in a network design, these layers do not have to be physical layers; they can be logical layers. This topic describes the reasons for combining the core and aggregation layers.

The three-tier architecture previously defined is traditionally used for a larger enterprise campus. For the campus of a small or medium-sized business, the three-tier architecture might be excessive. An alternative would be to use a two-tier architecture in which the distribution and the core layer are combined. In the collapse core design, the core switches would provide direct connections to the access-layer switches. With this option, the small or medium-sized business can reduce costs because the same switch performs both distribution and core functions. As the small campus begins to scale, such as with the addition of buildings, server farms, and edge modules, a three-tier architecture should be considered. Figure 1-3 shows a sample topology of a collapse core network design.

Figure 1-3 Collapse Core Design

Spine and Leaf Design

Most large-scale modern data centers are now using spine and leaf design. This design can have only two layers, which are spine and leaf. In the spine and leaf design, a leaf switch acts as the access layer, and it connects to all the spine switches. The end hosts, such as servers and other edge devices, connect to leaf switches. The spine provides connectivity between the leaf switches. One disadvantage of two-tier architecture is scalability; adding more spine switches can increase the capacity of the network, and adding more leaf switches can increase the port density on the access layer.

Spine and leaf design can also be mixed with traditional multilayer data center design. In this mixed approach you can build a scalable access layer using spine and leaf architecture, while keeping other layers of the data center intact. Examples of spine and leaf architectures are FabricPath and Application Centric Infrastructure (ACI). Figure 1-4 shows a sample spine and leaf topology.

Figure 1-4 Spine and Leaf Architecture

The following are key characteristics of spine and leaf design:

Multiple design and deployment options enabled using a variable-length spine with ECMP to leverage multiple available paths between leaf and spine.

Future proofing for higher-performance network through the selection of higher port count switches at the spine.

Lower power consumption using multiple 10 Gbps links between spine and leaf instead of a single 40-Gbps link. The current power consumption of a 40-Gbps optic is more than ten times that of a single 10-Gbps optic.

Reduced network congestion using a large, shared buffer-switching platform for the leaf node. Network congestion results when multiple packets inside a switch request the same outbound port because of application architecture.

Note
At the time of writing this book, spine and leaf topology is supported only by Cisco Nexus switches.

Port Channel

In this section, you learn about the important concepts of a port channel.

What Is Port Channel?

As you learned in the previous section, data center network topology is built using multiple switches. These switches are connected to each other using physical network links. When you connect two devices using multiple physical links, you can group these links to form a logical bundle. This logical group is called EtherChannel, or port channel, and physical interfaces participating in this group are called member ports of the group. Therefore, a port channel is an aggregation of multiple physical interfaces that creates a logical interface. Figure 1-5 shows how multiple physical interfaces are combined to create a logical port channel between two switches.

Figure 1-5 Port Channel

When a port channel is created, you will see a new interface in the switch configuration. This new interface is a logical representation of all the member ports of the port channel. The port channel interface can be configured with its own speed, bandwidth, delay, duplex, flow control, maximum transmission unit (MTU), IP address, and so on. The two devices connected via this logical group of ports see it as a single, big network pipe between the two devices. You can also shut down the port channel interface, which will result in shutting down all member ports. On the Nexus 7000 Series switch, you can bundle eight active ports on an M-Series module, and up to 16 ports on the F-Series module. On the Nexus 5000 Series switch, you can bundle up to 16 links into a port channel.

Benefits of Using Port Channels

There are several benefits of using port channels, and because of these benefits you will find them commonly used within data center networks. Some of these benefits are as follows:

Increased capacity: By combining multiple Ethernet links into one logical link, the capacity of the link can be increased. For example, eight 10-Gbps links can be combined to create a logical link with bandwidth of 80 Gbps.

High availability: In a port channel, multiple Ethernet links are bundled together to create a logical link. In case of a physical link failure, the port channel continues to operate even if a single member link is alive. Therefore, it automatically increases availability of the network.

Load balancing: The switch distributes traffic across all operational interfaces in the port channel. This enables you to distribute traffic across multiple physical interfaces, increasing the efficiency of your network. Port channel load balancing configuration is discussed later in this chapter.

Simplified network topology: In some cases, port channels can be used to simplify network topology by aggregating multiple links into one logical link. For example, if you have multiple links between a pair of switches, to avoid network loops, STP blocks all the links except one. In the case of port channels, multiple links are combined into one logical link; therefore, it simplifies the network topology by avoiding the STP calculation and reducing network complexity by reducing the number of links between switches.

Port Channel Compatibility Requirements

To bundle multiple switch interfaces into a port channel, these interfaces must meet the compatibility requirements. For example, when you add an interface to a port channel, its speed must match with other member interfaces in this port channel. In other words, you cannot put a 1 G interface and a 10 G interface into the same port channel. Other interface attributes are checked by switch software to ensure that the interface is compatible with the channel group. Some of these attributes are Speed, Duplex, Flow-control, Port mode, Media type, MTU, VLANs, and interface description. In case of an incompatible attribute, port channel creation will fail. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel. On the Nexus platform, you can use the show port-channel compatibility-parameters command to see the full list of compatibility checks. You can force ports with incompatible parameters to join the port channel if the following parameters are the same: Speed, Duplex, and Flow-control.

As an example, when creating a port channel on a modular network switch, it is recommended that you use member ports from different line cards. This will help you increase network availability in the case of line card failure.
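As a rough illustration of the compatibility checks just described, the following NX-OS commands could be used; the interface and channel-group numbers are illustrative, and the exact output of the show command varies by platform and release.

! List the parameters NX-OS checks before bundling an interface
switch# show port-channel compatibility-parameters

! Attempt to bundle two interfaces; an incompatible attribute
! (for example, a speed mismatch) causes the port to be suspended
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 10

! Force ports whose speed, duplex, and flow control already match
switch(config-if-range)# channel-group 10 force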

Link Aggregation Control Protocol

The Link Aggregation Control Protocol (LACP) provides a standard mechanism to ensure that peer switches exchange information and agree on the necessary details to bundle ports into a port channel. Network devices use LACP to negotiate an automatic bundling of links by sending LACP packets to the peer. This protocol ensures that both sides of the link have a compatible configuration (Speed, Duplex, Flow-control, allowed VLAN) to form a bundle. LACP was initially part of the IEEE 802.3ad standard and later moved to the IEEE 802.1AX standard. LACP must be enabled on switches at both ends of the link.

To run LACP, you first enable the LACP feature globally on the device. Then you enable LACP for each channel by setting the channel mode for each interface to active or passive. You can configure channel mode active or passive for individual links in the LACP channel group when you are adding the links to the channel group.

Starting from NX-OS Release 5.1, LACP enables you to configure up to 16 interfaces into a port channel. On M Series modules, a maximum of 8 interfaces can be in active state, and a maximum of 8 interfaces can be placed in a standby state. On Nexus 7000 switches, you can bundle up to 16 active links into a port channel on the F Series module.

Note
Nexus switches do not support the Cisco proprietary Port Aggregation Protocol (PAgP) to create port channels.

Port Channel Modes

Member ports in a port channel are configured with a channel mode. Nexus switches support three port channel modes: on, active, and passive. When you configure a static port channel without specifying a channel mode, the channel mode is set to on. Table 1-3 shows the port channel modes supported by Nexus switches.
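For instance, a minimal sequence to bring up LACP on a Nexus switch might look like the following sketch. The interface range and channel number are illustrative assumptions, and LACP must also be enabled and configured on the peer switch.

! LACP must be enabled globally before active/passive modes are accepted
switch(config)# feature lacp

! Add two member ports using LACP active mode
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 20 mode active

! A peer that should only respond to LACP (not initiate) would instead use:
! channel-group 20 mode passive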

Table 1-3 shows the port channel modes supported by Nexus switches.

Table 1-3 Port Channel Modes

If you attempt to change the channel mode to active or passive before enabling the LACP feature on the switch, the device returns an error message. After LACP is enabled globally, you can enable LACP on ports by configuring each interface in the channel for the channel mode as either active or passive. Both the passive and active modes enable LACP to negotiate between ports to determine whether they can form a port channel, based on criteria such as the port speed and the trunking state. The passive mode is useful when you do not know whether the remote system, or partner, supports LACP. When LACP attempts to negotiate with an interface in the on state, it does not receive any LACP packets; therefore, that interface does not join the LACP channel group, and it becomes an individual link. Ports can form an LACP port channel when they are in different LACP modes as long as the modes are compatible, as shown in Figure 1-6.

Figure 1-6 Port Channel Mode Compatibility

Configuring Port Channel
Port channel configuration on the Cisco Nexus switches includes the following steps:

1. Enable the LACP feature. This step is required only if you are using active mode or passive mode.

2. Configure the physical interface of the switch with the channel-group command and specify the channel number. You can also specify the channel mode on, active, or passive within the channel-group command. This command automatically creates an interface port channel with the number that you specified in the command.

3. Configure the newly created interface port channel with the appropriate configuration, such as description, trunk configuration, allowed VLANs, and so on.

Figure 1-7 shows the configuration of a port channel between two switches.

Figure 1-7 Port Channel Configuration Example

Port Channel Load Balancing
Nexus switches distribute the traffic across all operational ports in a port channel. This load balancing is done using a hashing algorithm that takes addresses in the frame as input and generates a value that selects one of the links in the channel. This provides load balancing across all member ports of a port channel, but it also ensures that frames from a particular address are always forwarded on the same port; hence, they are always received in order by the remote device. You can configure the device to use one of the following methods to load balance across the port channel:

Destination MAC address
Source MAC address
Source and destination MAC address
Destination IP address
Source IP address
Source and destination IP address
Source TCP/UDP port number
Destination TCP/UDP port number
Source and destination TCP/UDP port number

Verifying Port Channel Configuration
Several show commands are available on the Nexus switch to check the port channel configuration. These commands are helpful to verify the port channel configuration and troubleshoot port channel issues.
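Tying these two topics together, the following hedged sketch sets a load-balancing method and then verifies it. The exact keywords of the port-channel load-balance command vary by Nexus platform and NX-OS release, so treat this Nexus 7000-style form as an example only:

port-channel load-balance src-dst ip

show port-channel load-balance
show port-channel summary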

Table 1-4 provides some useful commands.

Table 1-4 Verify Port Channel Configuration

Virtual Port Channel
In this section, you learn about important concepts of virtual port channel (vPC).

What Is Virtual Port Channel?
You learned in the previous section that a port channel bundles multiple physical links into a logical link, and that all the member ports of a port channel belong to the same network switch. A vPC enables the extension of a port channel across two physical switches. It means that member ports in a virtual port channel can be from two different network switches. These two switches work together to create a virtual domain, so a port channel can be extended across the two devices within this virtual domain. In Layer 2 network design, Cisco vPC technology allows dual-homing of a downstream device to two upstream switches. The upstream switches present themselves to the downstream device as one switch from the port channel and Spanning Tree Protocol (STP) perspective. Figure 1-8 shows a downstream switch connected to two upstream switches in a vPC configuration.

Figure 1-8 Virtual Port Channel

In a conventional network design, building redundancy is an important design consideration to protect against device or link failure. However, for an Ethernet network this redundant topology can result in a Layer 2 loop. A Layer 2 loop is a condition where packets circulate within the redundant network topology, causing high CPU and link utilization. These loops can cause a meltdown of network resources; therefore, it was necessary to develop protocols and control mechanisms to limit the disastrous effects of a Layer 2 topology loop in the network. Spanning Tree Protocol (STP) was the primary solution to this problem because it provides loop detection and loop management capability for Layer 2 Ethernet networks. Over a period of time, and after going through a number of enhancements and extensions, STP became very mature and stable. Although STP can support very large network environments, it still has a fundamental issue that makes it suboptimal for the network. This issue is related to the STP principle that allows only one active network path from one device to another to break the topology loops in a network. It means that no matter how many connections exist between network devices, only one will be active at any given time. STP does not provide an efficient way to manage large Layer 2 networks; therefore, smaller Layer 2 segments are preferred. However, with the current data center virtualization trend, this preference is changing. A virtualized server running hypervisor software, such as VMware ESX Server, Microsoft Hyper-V, and Citrix XenServer, requires Layer 2 Ethernet connectivity between the physical servers to support seamless workload mobility within and across the data center locations. There are some other applications, such as high availability clusters, that require member servers to be on the same Layer 2 Ethernet network to support private interprocess communication and public client-facing communication using a virtual IP of the cluster. These requirements lead to the use of large Layer 2 networks. As a result, data center network architecture is shifting from a highly scalable Layer 3 network model to a highly scalable Layer 2 model. The innovation of new technologies to manage large Layer 2 network environments is the result of this shift in the data center architecture. A large Layer 2 network is built using many network switches connected to each other in a redundant topology using multiple network links. It is important to migrate away from STP as a primary loop management technology toward new technologies such as vPC and Cisco FabricPath. FabricPath technology is discussed later in this chapter.

Note: A brief summary of Spanning Tree is provided later in the FabricPath section. Detailed discussion on STP is available in Introducing Cisco Data Center Networking (DCICN).

Port channel technology, discussed earlier in this chapter, was a key enhancement to Layer 2 Ethernet networks. This enhancement enables the use of multiple Ethernet links to forward traffic between two network devices while avoiding loops. Layer 2 loops are managed by bundling multiple physical links into one logical link. This configuration keeps the switches from forwarding broadcast and multicast traffic back to the same link, resulting in efficient use of network capacity. Port channel technology has another great benefit: it can recover from a link loss in the bundle within a second, with minimal loss of traffic and no effect on the active network topology. It also provides the capability to equally balance traffic by sending packets to all available links using a load-balancing algorithm. The limitation of the classic port channel is that it operates between only two devices. It means that if you configure a port channel between two network devices, there is only one logical path between the two switches. In large networks with redundant devices, the alternative path is often connected to a different network switch in a topology that would cause a loop. Virtual Port Channel (vPC) addresses this limitation by allowing a pair of switches to act as a virtual endpoint, so it looks like a single logical entity to port channel–attached devices. This environment combines the benefits of hardware redundancy with the benefits of port channel. The two switches that act as the logical port channel endpoint are still two separate devices. These switches collaborate only for the purpose of creating a vPC. Figure 1-9 shows the comparison between the STP and vPC topologies using the same devices and network links. It is important to observe that there is no blocking port in the vPC topology.

Figure 1-9 Virtual Port Channel Bi-Section Bandwidth

Benefits of Using Virtual Port Channel
Virtual Port Channel provides the following benefits:

data.Enables a single device to use a port channel across two upstream switches Eliminates STP blocked ports Provides a loop-free topology Uses all available uplink bandwidth Provides fast convergence in the case of link or device failure Provides link-level resiliency Helps ensure high availability Components of vPC In vPC configuration. The control planes of these switches also exchange information so it appears as a single logical Layer 2 switch to a downstream device. and management planes. data. “Management and Monitoring of Cisco Nexus Devices. it is important to understand the components of vPC and the function of these components. These two upstream Nexus switches operate independently with their own control.” Before going into the detail of vPC control and data plane operation. This subject is covered in Chapter 5. and management planes. However. the data planes of these switches are modified to ensure optimal packet forwarding in vPC configuration. . Note The operational plane of a switch includes control. a downstream network device that connects to a pair of upstream Nexus switches sees them as a single Layer 2 switch. Figure 1-10 shows the components of a virtual port channel.

Figure 1-10 Virtual Port Channel Components

The vPC architecture consists of the following components:

vPC peers: The two Cisco Nexus switches combined to build a vPC architecture are referred to as vPC peers. This pair of switches acts as a single logical switch, which enables other devices to connect to the two chassis using vPC.

vPC peer link: This is a link between the two vPC peers to synchronize state. The vPC peer link is the most important connectivity element in the vPC system. Typically, this link is built using two physical interfaces in a port channel configuration. This link creates the illusion of a single control plane by forwarding bridge protocol data units (BPDUs) and Link Aggregation Control Protocol (LACP) packets to the primary vPC switch from the secondary vPC switch. The peer link is also used to synchronize MAC address tables between the vPC peers and to synchronize Internet Group Management Protocol (IGMP) entries for IGMP snooping. The peer link provides the necessary transport for data plane traffic, such as multicast traffic, and for the traffic of orphaned ports. If a vPC device is also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP) packets.

vPC peer keepalive link: This is a Layer 3 communication path between the vPC peer switches. The peer keepalive link is used to monitor the status of the vPC peer when the vPC peer link goes down. It helps the vPC switch to determine whether only the peer link has failed or whether the vPC peer switch has failed entirely. The vPC peer keepalive link is not used for any data or synchronization traffic; it only sends IP packets to make sure that the originating switch is operating and running vPC. In a normal network condition, peer keepalive is used as a secondary test to make sure that the remote peer switch is operating properly. Peer keepalive can also use the management port of the switch and often runs over an out-of-band (OOB) network.

vPC domain: This is a pair of vPC peer switches combined using a vPC configuration. A vPC domain is created using two vPC peer devices that are connected together using a vPC peer keepalive link and a vPC peer link. A numerical domain ID is assigned to the vPC domain. A peer switch can join only one vPC domain.

Cisco Fabric Services: Cisco Fabric Services is a reliable messaging protocol used between the vPC peer switches to synchronize control and data plane information. When you enable the vPC feature, the device automatically enables Cisco Fabric Services over Ethernet (CFSoE). CFSoE carries messages and packets for many features linked with vPC, such as STP and IGMP. It uses the vPC peer link to exchange information between the peer switches. No CFSoE configuration is required for vPC to function properly. To help ensure that the vPC peer link communication for the CFSoE protocol is always available, Spanning Tree has been modified to keep the peer-link ports always forwarding.

vPC member port: This port is a member of a virtual port channel on the vPC peer switch.

vPC: This is a Layer 2 port channel that spans the two vPC peer switches. The device on the other end of the vPC sees the vPC peer switches as a single logical switch. There is no need for this device to support vPC itself; it connects to the vPC peer switches using a regular port channel. This device can be configured to use LACP or a static port channel configuration.

vPC VLANs: The VLANs that are allowed on the vPC are called vPC VLANs. These VLANs must also be allowed on the vPC peer link.

Non-vPC VLANs: These VLANs are not part of any vPC and are not present on the vPC peer link.

Orphan device: The term orphan device is used for any device that is connected to only one switch within a vPC domain. It means that the orphan device is not using a port channel configuration, and for this device there is no vPC configured on the peer switches.

Orphan port: This is a switch port that is connected to an orphan device. This term is also used for the vPC member ports that are connected to only one vPC switch. This situation can happen in a failure condition where a device connected to a vPC loses all its connections to one of the vPC peer switches.

vPC Data Plane Operation
vPC technology is designed to always forward traffic locally when possible. The vPC peer link is not used for the regular vPC traffic and is considered to be an extension of the control plane between the vPC peer switches.

It means that the vPC peer link does not typically forward data packets; it is used specifically for switch management traffic and occasionally for the data packets from failed network ports. This behavior of vPC enables the solution to scale, because the bandwidth requirement for the vPC peer link is not directly related to the total bandwidth of all vPC ports. The vPC peer link carries the following types of traffic:

vPC control traffic, such as CFSoE, BPDUs, and LACP messages
Flooding traffic, such as broadcast, multicast, and unknown unicast traffic
Traffic from orphan ports

To implement the specific vPC forwarding behavior, Cisco Fabric Services over Ethernet (CFSoE) is used to synchronize the Layer 2 forwarding tables between the vPC peer switches. Therefore, there is no dependency on the regular MAC address learning between the vPC peer switches. CFSoE-based MAC address learning is applicable only to the vPC ports. This method of learning is not used for the ports that are not part of the vPC configuration; for those ports, normal switch forwarding rules are applied.

vPC performs loop avoidance at the data plane by implementing forwarding rules in the hardware. One of the most important forwarding rules for vPC is that a packet that enters the vPC peer switch via a vPC member port, and then goes to the other peer switch via the peer link, is not allowed to exit the switch on a vPC member port. This packet can exit on any other type of port, such as an L3 port or an orphan port. This rule prevents the packets that are received on a vPC from being flooded back onto the same vPC by the other peer switch. The vPC loop avoidance rule is depicted in Figure 1-11.

Figure 1-11 vPC Loop Avoidance

It is important to understand the vPC data plane operation for the orphan ports. The first type of orphan port is one that is connected to an orphan device and is not part of any vPC configuration. For this type of orphan port, normal switch forwarding rules are applied. The traffic for this type of orphan port can use the vPC peer link as a transit link to reach the devices on the other vPC peer switch. The second type of orphan port is one that is a member of a vPC configuration, but where the other peer switch has lost all the associated vPC member ports. For this type of orphan port, the vPC loop avoidance rule is disabled. In this special case, the vPC peer switch is allowed to forward the traffic that is received on the peer link to one of the remaining active vPC member ports.

vPC Control Plane Operation

As discussed earlier, the vPC peer link is used for control plane messaging, such as CFSoE, BPDUs, and LACP messages. CFSoE is used as the primary control plane protocol for vPC. CFSoE performs the following vPC control plane operations:

Exchange the Layer 2 forwarding table between the vPC peers: It means that if one vPC peer learns a new MAC address, both vPC peer switches know that MAC address, and this MAC is programmed on the Layer 2 Forwarding (L2F) table of both peer devices. This new method of MAC address learning using CFSoE replaces the conventional method of MAC address learning for vPC. It is helpful in reducing data traffic on the vPC peer link by forwarding the traffic to the local vPC member ports. It also helps to stabilize the vPC operation during failure conditions.

Synchronize the Address Resolution Protocol (ARP) table: This feature enables a faster convergence time upon reload of a vPC switch. The ARP table is synchronized between the vPC peer devices using CFSoE. On the Nexus 5000 Series switch, this feature is implemented in Cisco NX-OS Software Release 5.0(2), and for the Nexus 7000 it is implemented in Cisco NX-OS Software Version 4.0(2).

Synchronize the IGMP snooping status: In a vPC configuration, IGMP snooping behavior is modified to synchronize the IGMP entries between the peer switches. When IGMP traffic enters a vPC peer switch, it triggers the synchronization of the multicast entry on both vPC peer switches.

Monitor the status of the vPC member ports: For smooth vPC operation during failure conditions, CFSoE monitors the status of the vPC member ports. When all member ports of a vPC go down on one of the peer switches, the other peer switch is notified that its ports are now orphan ports. The switch modifies the forwarding behavior so that all traffic received on the peer link for that vPC is now forwarded locally on the vPC port.

Determine primary and secondary vPC devices: When you combine the vPC peer switches in a vPC domain, an election is held to determine the primary and secondary roles of the peer switches. This role is a control plane function, and it defines which switch will be primarily responsible for the generation and processing of spanning tree BPDUs. This vPC role is nonpreemptive. If the primary vPC peer device fails, the secondary vPC peer device takes the primary role, and it remains primary even if the original device is restored. In the case of a peer link failure, the peer keepalive mechanism is used to find out whether the peer switch is still operational. This mechanism protects from split brain between the vPC peer switches. If both switches are operational and the peer link is down, the secondary switch shuts down all vPC member ports and all switch virtual interfaces (SVI) that are associated with the vPC VLANs.

Perform consistency and compatibility checks: The peer switches exchange information using CFSoE to ensure that the vPC configuration is consistent across both switches. It checks the switch configuration for vPC as well as switch-wide parameters that need to be configured consistently on both peer switches. Similar to the conventional port channel, vPC is also subject to the port compatibility checks. CFSoE is used to validate that vPC member ports are compatible and can be combined to form a port channel.

Agree on LACP and STP parameters: In a vPC domain, the two peer switches agree on LACP and STP parameters to present themselves as a single logical switch to the device that is connected on a vPC.
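To observe the outcome of the role election and the shared LACP system ID described above, the following verification sketch can be used on either peer; show vpc role displays the local switch's vPC role and the vPC system MAC address, while show vpc summarizes the overall peer status:

show vpc role
show vpc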

For LACP, a common LACP system ID is generated from a reserved pool of MAC addresses, which are combined with the vPC domain ID. For STP, the behavior depends on the peer-switch option, which must be enabled on both peer switches. If the peer-switch option is not used, only the primary vPC switch sends and processes BPDUs, and it uses its own bridge ID. In this case, the secondary switch only relays BPDUs and does not generate any BPDUs. When the peer-switch option is used, both switches send and process BPDUs. In this case, both primary and secondary switches use the same bridge ID to present themselves as a single switch.

vPC Limitations
It is important to know the limitations of vPC technology. Some of the key points about vPC limitations are outlined next:

Only two switches per vPC domain: A vPC domain by definition consists of a pair of switches identified by a shared vPC domain ID. It is not possible to add more than two switches or Virtual Device Contexts (VDC) to a vPC domain.

Only one vPC domain ID per switch: Only one vPC domain ID can be configured on a single switch or VDC. It is not possible for a switch or VDC to participate in more than one vPC domain.

Each VDC is a separate switch: vPC is a per-VDC function on the Cisco Nexus 7000 Series switches. vPCs can be configured in multiple virtual device contexts (VDC), but the configuration is entirely independent. A separate vPC peer link and vPC peer keepalive link is required for each of the VDCs. vPC domains cannot be stretched across multiple VDCs on the same switch, and all ports for a given vPC must be in the same VDC.

vPC is a Layer 2 port channel: A vPC is a Layer 2 port channel. The vPC technology does not support the configuration of Layer 3 port channels. Dynamic routing from the vPC peers to routers connected on a vPC is not supported. It is recommended that routing adjacencies be established on separate routed links.

Peer link is always 10 Gbps or more: Only 10 Gigabit Ethernet ports can be used for the vPC peer link. It is recommended that you use at least two 10 Gigabit Ethernet ports in dedicated mode on two different I/O modules.

Configuration Steps of vPC
When you are configuring a vPC, the configuration must be done on both peer switches. Before you start the vPC configuration, make sure that the peer switches are connected to each other via physical links that can support the requirements of a vPC peer link, and that a Layer 3 path is available between the two peer switches. vPC configuration on the Cisco Nexus switches includes the following steps:

1. Enable the vPC feature. Use the following command to enable the vPC feature:

feature vpc

2. Create the vPC domain and assign a domain ID to this domain. This domain ID must be unique and should be configured on both vPC switches. Use the following command to create the vPC domain:

vpc domain <domain-id>

3. Establish peer keepalive connectivity. A peer keepalive link must be configured before you set up the vPC peer link. The peer keepalive link can be in any VRF, including the management VRF. Use the following command under the vPC domain to configure the peer keepalive link:

peer-keepalive destination <remote peer IP> source <local IP> vrf mgmt

4. Create a peer link. The vPC peer link is configured in a port channel configuration. This port channel must be trunked to enable multiple VLANs over the vPC. Use the following commands under the port channel to configure the peer link:

interface port-channel <peer link number>
switchport mode trunk
vpc peer-link

5. Create the vPC itself. Create a regular port channel and add the vpc command under the port channel configuration to create a vPC. This configuration must be the same on both peer switches:

interface port-channel <vPC number>
vpc <vPC number>
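Putting these steps together, the following is a minimal single-switch sketch; the domain ID, IP addresses, and interface numbers are hypothetical, and a mirror configuration (with its own keepalive addresses) is required on the peer switch:

feature vpc
vpc domain 100
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
interface port-channel 1
  switchport mode trunk
  vpc peer-link
interface ethernet 1/10
  channel-group 10 mode active
interface port-channel 10
  switchport mode trunk
  vpc 10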

Figure 1-12 shows the configuration of a vPC to connect an access layer switch to a pair of distribution layer switches.

Figure 1-12 vPC Configuration Example

Verification of vPC
To verify the status of a vPC, several show commands are available. These commands are helpful in verifying the vPC configuration and troubleshooting vPC issues. In addition, you can use all the port channel show commands discussed earlier.
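For example, a quick health check on either peer might use the following commands; show vpc brief summarizes the peer status, consistency status, and vPC member ports, while the other two focus on the keepalive path and the parameters that must match between the peers:

show vpc brief
show vpc peer-keepalive
show vpc consistency-parameters global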

Table 1-5 shows some useful vPC commands.

Table 1-5 Verify Port Channel Configuration

FabricPath
As mentioned earlier in this chapter, server virtualization and seamless workload mobility is a leading trend in the data center. A large-scale, highly scalable, and flexible Layer 2 fabric is needed in most data centers to support distributed and virtualized application workloads. To understand FabricPath, it is important to discuss the limitations of STP.

Spanning Tree Protocol
To support high availability within a Layer 2 domain, switches are normally interconnected using redundant links. STP is a Layer 2 control plane protocol that runs on switches to ensure that you do not create topology loops when you have these redundant paths in the network. To create a loop-free topology, STP builds a treelike structure by blocking certain ports in the network. Switches exchange information using bridge protocol data units (BPDUs), and this information is used to elect a root bridge and perform subsequent network configuration by selecting only the shortest path to the root bridge. After a root bridge is elected, STP ensures that only one network path to the root bridge exists. All network ports connecting to the root bridge via an alternative network path are blocked. This tree topology implies that certain links are unused, and traffic does not necessarily take the optimal path. Figure 1-13 shows the STP topology.

Figure 1-13 Spanning Tree Topology

Key limitations of STP are as follows:

Lack of multipathing: In STP, all the switches build only one shortest path to the root switch, and all the redundant links are blocked. Therefore, equal-cost multipath cannot be used to efficiently load balance traffic across multiple network links. This leads to inefficient utilization of network resources.

Inefficient path selection: STP does not always provide the shortest path between two switches. A simple example of inefficient path selection is shown in Figure 1-13, where host A and host B cannot use the direct link between the two switches because it is blocked by STP.

Slow convergence: A link failure or any other small change in the network topology can cause large changes in the spanning tree that must be propagated to all switches, and this could take about 30 seconds in older versions of STP. Rapid Spanning Tree Protocol (RSTP) has improved the STP convergence to a few seconds; however, certain failure scenarios are complex and can destabilize the network for longer periods of time. Furthermore, when there are changes in the spanning tree topology, the switches clear and rebuild their forwarding tables. This results in more flooding to learn new MAC addresses.

Protocol safety: A Layer 2 packet header does not have a time to live (TTL), so in a Layer 2 loop condition, packets can continue circulating in the network with no time out. Such failures can be extremely dangerous for the switched network because a topology loop can consume all network bandwidth, causing a major network outage. There is no safety against STP failure or misconfiguration.

Limited scalability: Several factors limit the scalability of classic Ethernet. A switch has limited space to store Layer 2 forwarding information. This limit is increasing with improvements in silicon technology; however, the demand to store more addresses is also growing with computing capacity and the introduction of server virtualization. As you know, packet flooding for broadcast and unknown unicast packets populates all end host MAC addresses in all switches in the network. In addition to the large forwarding table, the Layer 2 MAC addressing is nonhierarchical, and it is not possible to optimize Layer 2 forwarding tables by summarizing the Layer 2 addresses.

What Is FabricPath?
Cisco FabricPath is an innovation to build a simplified and scalable Layer 2 fabric. It combines the benefits of Layer 3 routing with the advantages of Layer 2 switching. Layer 3 routing provides scalability, multipathing, fast convergence, and shortest path selection from any host to any other host in the network. However, Layer 3 creates network segmentation, which makes it less flexible for workload mobility within a virtualized data center. FabricPath brings the stability and performance of Layer 3 routing to the Layer 2 switched network and builds a highly resilient, massively scalable, and flexible Layer 2 fabric, offering a much more scalable and flexible network fabric. A FabricPath network is created by connecting a group of switches into an arbitrary topology, combining them in a fabric using a few simple commands. There is no need to run Spanning Tree Protocol within the FabricPath network. The Intermediate System-to-Intermediate System (IS-IS) Protocol is used to provide fabric-wide intelligence and tie the elements together. FabricPath can be deployed in any arbitrary topology; however, a typical spine and leaf topology is used in the data center. Figure 1-14 shows the FabricPath topology.

Figure 1-14 FabricPath Topology

Benefits of FabricPath
FabricPath converts the entire data center into one big switching fabric. Benefits of Cisco FabricPath technology are as follows:

Operational simplicity: Cisco FabricPath is simple to configure and easy to operate. The switch control plane protocol for FabricPath starts automatically, as soon as the feature is enabled and ports are assigned to the FabricPath network. A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. This protocol requires less configuration than STP in the conventional Layer 2 network; therefore, the overall complexity of the solution is reduced significantly.

Flexibility: FabricPath provides a single-domain Layer 2 fabric to connect all the servers. Because all servers within this fabric reside in one common Layer 2 network, they can be moved around within the data center in a nondisruptive manner without any constraints on the design. This level of enhanced flexibility translates into lower operational costs.

High performance: High performance in a FabricPath network is achieved using the equal-cost multipath (ECMP) feature. The Ethernet fabric can use all the available paths in the network, and traffic can be load balanced across these paths. In addition to ECMP, FabricPath supports the shortest path to the destination; hence, network latency is reduced and higher performance can be achieved.

Reliability based on proven technologies: Cisco FabricPath uses a reliable and proven control plane mechanism, which is based on the IS-IS routing protocol.

The role of . FabricPath provides increased redundancy and high availability with the multiple paths between the end hosts. FabricPath data plan also provides robust controls for loop prevention and mitigation. As the network size grows. Figure 1-15 shows components of a FabricPath network. allowing scaling the network beyond the limits of the MAC address table of individual switches. FabricPath learns MAC addresses selectively at the edge. The FabricPath frames includes a time-to-live (TTL) field similar to the one used in IP. which is required as computing and virtualization density continue to grow. Spine switches act as the backbone of the switching fabric. which is based on the IS-IS routing protocol. Figure 1-15 Components of FabricPath Spine switches: In a two-tier architecture. large networks with many hosts require a large forwarding table. and a Reverse Path Forwarding (RPF) check is applied for multidestination frames. Furthermore. This protocol provides fast convergence that has been proven in large service provider networks. High scalability and high availability: FabricPath delivers highly scalable bandwidth with capabilities such as ECMP and multi-topology forwarding. It means incremental levels of resiliency and bandwidth capacity.control plane mechanism. In addition. Efficiency: Conventional Ethernet switches learn the MAC addresses from the frames they receive on the network. enabling IT architects to start with a small Layer 2 network and build a much bigger data center fabric. it is important to understand the components of FabricPath. and it increases the cost of switches. you might require equipment upgrades. Therefore. spine switches are deployed to provide connectivity between leaf switches. Components of FabricPath To understand FabricPath control and data plane operation. driving scale requirements up in the data center. the Ethernet MAC address cannot be summarized to reduce the size of the forwarding table.

Leaf switches: Leaf switches provide access layer connectivity for servers and other access layer devices. The hosts on two leaf switches use the spine switches to communicate with each other.

FabricPath network: A network that connects the FabricPath switches is called a FabricPath network. In a data center, a typical FabricPath network is built using a spine and leaf topology. In this FabricPath network, packets are forwarded based on a new header, which includes information such as the switch ID and hop count. The FabricPath header is added to the Layer 2 packet when a packet enters the FabricPath network at the ingress switch, and it is stripped when the packet leaves the FabricPath network at the egress switch.

Classic Ethernet (CE): Conventional Ethernet using transparent bridging and running STP is referred to as classic Ethernet. Nexus switches can be part of both a FabricPath network and a classic Ethernet network at the same time, with some ports participating in FabricPath and some ports in a classic Ethernet network.

Edge port: Edge ports are switch interfaces that are part of classic Ethernet. These interfaces behave like normal Ethernet ports, and you can connect any Ethernet device to these ports. You can configure an edge port as an access port or as an IEEE 802.1Q trunk. Because these ports are part of classic Ethernet, MAC learning is performed as usual, and packets are transmitted using standard IEEE 802.3 Ethernet frames. An edge port can carry both CE as well as FabricPath VLANs; therefore, devices connected on an edge port in a FabricPath VLAN can send traffic to a remote host over the FabricPath network.

Core port: Ports that are part of a FabricPath network are called core ports. FabricPath core ports always forward Ethernet frames encapsulated in a FabricPath header. Ethernet frames transmitted on core ports always carry an IEEE 802.1Q VLAN tag, so a core port can be considered a trunk port. Only FabricPath VLANs are allowed on core ports.

FabricPath VLANs: The VLANs that are allowed on the FabricPath network are called FabricPath VLANs. When you create a VLAN, by default it operates in Classic Ethernet (CE) mode. To allow a VLAN over the FabricPath network, you must go to the VLAN configuration and change the mode to FabricPath.

FabricPath IS-IS: FabricPath switches run a Shortest Path First (SPF) routing protocol based on standard IS-IS to build their forwarding tables, similar to a Layer 3 network. This link state protocol is the core of the FabricPath control plane.

Dynamic Resource Allocation Protocol: Dynamic Resource Allocation Protocol (DRAP) is an extension to FabricPath IS-IS that ensures networkwide unique and consistent Switch IDs and FTAG values.

MAC address table: This table contains MAC address entries and the source from where these MAC addresses are learned. The source could be a local switch interface or a remote switch.

Switch table: The switch table provides Layer 2 routing information based on the Switch ID. It contains Switch IDs and the next-hop interfaces to reach each switch. If a MAC address table entry points to a local switch interface, the packet is delivered using the classic Ethernet format. In a case where a MAC address table entry points to a remote switch, a lookup is done in the switch table to find the next-hop interface. The packet is then encapsulated in the FabricPath header and sent to the next hop within the FabricPath network.

FabricPath topology: Similar to STP, in FabricPath a set of VLANs that have the same forwarding behavior is mapped to a topology or forwarding instance, where a set of VLANs could belong to one or a different instance, and different topologies could be constructed for different sets of VLANs. This could be used for traffic engineering, administrative purposes, or security.

FabricPath Frame Format
When an Ethernet frame enters the FabricPath network, the switch encapsulates the frame with a 16-byte FabricPath header. This header includes a 48-bit outer MAC destination address (ODA), a 48-bit outer MAC source address (OSA), and a 32-bit FabricPath tag. The ODA and OSA contain a 12-bit unique identifier called the SwitchID that is used to forward packets within the FabricPath network. The FabricPath tag contains the new Ethertype for the FabricPath header, a forwarding tag called FTAG that uniquely identifies a forwarding graph, and the TTL field, which has the same functionality as in the IP header. Figure 1-16 shows the FabricPath frame format.

Figure 1-16 FabricPath Frame Format

Endnode ID: This field is currently not in use. The purpose of this field is to provide future capabilities, where a virtual or physical end host located behind a single port can uniquely identify itself to the FabricPath network.

OOO/DL: This field is currently not in use. The function of the OOO (out of order)/don't learn (DL) bit varies depending on whether it is used in the outer source address (OSA) or the outer destination address (ODA). The purpose of this field is to provide future capabilities for per-packet load sharing when multiple equal-cost paths are available.

U/L bit: FabricPath switches set this bit in all unicast OSA and ODA fields, indicating that the MAC address is a locally administered address. This is required because the OSA and ODA fields are not true MAC addresses and do not uniquely identify a particular hardware component the way a standard MAC address would.

I/G bit: The I/G bit serves the same function in FabricPath as in standard Ethernet. Any multidestination address has this bit set.

Switch ID: The switch ID is a 12-bit unique identifier for a switch within the FabricPath network. In the outer source address (OSA), this field is used to identify the switch that originated the packet. In the outer destination address (ODA), this field identifies the destination switch for the FabricPath packet. This switch ID can be assigned manually within the switch configuration or allocated automatically using the Dynamic Resource Allocation Protocol (DRAP).

Sub-SwitchID: The Sub-SwitchID is a unique identifier assigned to an emulated switch within the FabricPath SwitchID. This emulated switch represents a vPC+ port-channel interface associated with a pair of FabricPath switches. vPC+ is discussed later in this chapter. In the absence of vPC+, this field is always set to 0.

LID: The Local ID (LID) is also known as the port ID. The LID is used to identify the specific physical or logical edge port from which the packet originated or to which it is destined. The value of this field in the FabricPath header is locally significant to each switch. The forwarding decision at the destination switch can use the LID to send the packet to the correct edge port without any additional lookups on the MAC table.

Ethertype: An Ethertype value of 0x8903 is used for FabricPath.

FTAG: The FabricPath forwarding tag (FTAG) is a 10-bit traffic identification tag within the FabricPath header. For unicast traffic, it identifies the FabricPath topology the frame is traversing; the system selects a unique FTAG for each topology configured within FabricPath. In the case of multicast and broadcast traffic, the FTAG identifies a multidestination forwarding tree within a given topology; the system selects the FTAG value for each multidestination tree.

TTL: This field represents the Time to Live (TTL) of the FabricPath frame. Each FabricPath switch hop decrements the TTL by 1, and frames with an expired TTL are discarded. In the case of temporary loops during protocol convergence, the TTL prevents Layer 2 frames from circulating endlessly within the network. TTL operates in the same manner within FabricPath as it does in traditional IP forwarding.

FabricPath Control Plane
Cisco FabricPath uses a single control plane that functions for unicast, broadcast, and multicast packets by leveraging the Layer 2 Intermediate System-to-Intermediate System (IS-IS) Protocol. The Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 IS-IS. There is no need to run STP within the FabricPath network. FabricPath IS-IS provides the following benefits:

No IP dependency: No need for IP reachability to form adjacency between devices.
Provides Shortest Path First (SPF) routing: Excellent topology building and reconvergence characteristics.
Easily extensible: Using custom type, length, value (TLV) settings, IS-IS devices can exchange information about virtually anything.

The FabricPath control plane uses the IS-IS Protocol to perform the following functions:

Switch ID allocation: FabricPath switches establish IS-IS adjacency to the directly connected neighboring switches on the core ports. These switches then start a negotiation process that ensures all FabricPath switches have unique switch IDs.

Dynamic Resource Allocation Protocol (DRAP) automatically performs this task. It also ensures that the type and number of FTAG values in use are consistent. While the initial negotiation is in progress, the core interfaces are brought up but not added to the FabricPath topology, and no data plane traffic is passed on these interfaces. However, a configuration command is also provided for the network administrator to statically assign a switch ID to a FabricPath switch.

Unicast routing: After IS-IS adjacencies are built and routing information is exchanged, FabricPath calculates its unicast routing table. This routing table stores the best path to each destination FabricPath switch, which includes the FabricPath switch ID and its next hop. If multiple equal-cost paths to a particular switch ID exist, up to 16 next hops are stored in the FabricPath routing table. You can modify the number of next hops installed using the maximum-paths command in FabricPath-domain configuration mode.
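As a hedged sketch of that tuning (the exact keyword placement may vary by NX-OS release, so verify against the platform documentation), limiting the routing table to four next hops per switch ID might look as follows:

fabricpath domain default
  maximum-paths 4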

Multidestination trees: To ensure a loop-free topology for broadcast, multicast, and unknown unicast traffic, multidestination trees are built within the FabricPath network. FabricPath IS-IS automatically creates multiple multidestination trees connecting all FabricPath switches and uses them to forward multidestination traffic. By default, two such trees are supported per topology as of NX-OS release 6.2(2). Figure 1-17 shows a network with two multidestination trees, where switch S1 is the root for tree number 1 and switch S4 is the root for tree number 2.

Figure 1-17 FabricPath Multidestination Trees

Within the FabricPath domain, first a root switch is elected for tree number 1 in a topology. After a switch becomes the root for the first tree, this root switch selects the root for each additional multidestination tree in the topology and assigns each multidestination tree a unique FTAG value. FabricPath switches use the following three parameters to elect a root for the first tree and to select the root for any additional trees:

Root priority: An 8-bit value. The default root priority is 64. You can influence the root election by using the root-priority command.
System ID: A 48-bit value, composed of the VDC MAC address.
Switch ID: The unique 12-bit switch ID.

A higher value of these parameters is preferred. It is recommended to configure a centrally connected switch as the root switch; therefore, spine switches are the best candidates for the root.

FabricPath uses IS-IS to propagate multicast receiver locations within the FabricPath network. For optimized IP multicast forwarding, the multidestination trees are further pruned based on IGMP snooping at the edges of the FabricPath network. This allows switches to prune off subtrees that do not have downstream receivers.

FabricPath Data Plane
The Layer 2 packets forwarded through a FabricPath network are of the following types: known unicast, multicast, unknown unicast, or broadcast packets. Known unicast forwarding is done based on the mapping between the unicast MAC address and the switch ID that is already learned by the network. Multiple equal-cost paths could exist to a destination switch ID; in this case, the packets are load balanced per flow to distribute traffic over the multiple equal-cost paths. Multicast forwarding is done based on the multidestination trees computed by IS-IS. Broadcast and unknown unicast forwarding is also done based on a multidestination tree computed by IS-IS for each topology. The FTAG uniquely identifies a topology, because different topologies could have different broadcast trees.

When a unicast packet enters the FabricPath network, it goes through different lookups and forwarding logic as it passes different switches. This forwarding logic is based on the role that a particular switch plays relative to the traffic flow within the FabricPath network. Following are three general roles and the forwarding logic for FabricPath traffic:

Ingress FabricPath switch
Receives the classic Ethernet (CE) frame from a FabricPath edge port.
Performs a MAC table lookup to identify the FabricPath switch ID.
Performs a switch ID table lookup to determine the FabricPath next-hop interface.
Encapsulates the frame with the FabricPath header and sends it to the core port for the next-hop FabricPath switch.

Core FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs a switch ID table lookup to determine the next-hop interface for which the frame is destined.
Decrements the TTL and sends the frame to the core port for the next-hop FabricPath switch.

Egress FabricPath switch
Receives a FabricPath-encapsulated frame on a FabricPath core port.
Performs a switch ID table lookup to determine whether it is the egress FabricPath switch.
Uses the LID value in the outer destination address (ODA), or a MAC address table lookup, to determine on which FabricPath edge port the frame should be forwarded.

Conversational Learning
FabricPath switches learn MAC addresses selectively; FabricPath core switches generally do not learn any MAC addresses, allowing a significant reduction in the size of the MAC address table. FabricPath VLANs use conversational MAC address learning, and classic Ethernet VLANs use traditional MAC address learning. In a FabricPath switch, remote MAC addresses are learned only if the remote device is having a bidirectional "conversation" with a locally connected device. This selective MAC address learning is based on the bidirectional conversation of end hosts; therefore, it is called conversational learning. The following are the MAC learning rules FabricPath uses:

Only FabricPath edge switches populate the MAC table and use MAC table lookups to forward frames.
For Ethernet frames received from a directly connected access or trunk port, the switch unconditionally learns the source MAC address as a local MAC entry. This MAC learning rule is the same as in a classic Ethernet switch.
For unicast frames received with FabricPath encapsulation, the switch learns the source MAC address of the frame as a remote MAC entry only if the destination MAC address matches an already learned local MAC entry.
Broadcast frames do not trigger learning on edge switches; however, broadcast frames are used to update any existing MAC entries already in the table. For example, if a host moves from one switch to another and sends a gratuitous ARP to update the Layer 2 forwarding tables, FabricPath switches receiving that broadcast will update an existing entry for the source MAC address.
Multicast frames trigger learning on FabricPath switches (both edge and core), because several key LAN protocols (such as HSRP) rely on source MAC learning from multicast frames to facilitate proper forwarding.
The unknown unicast frames being flooded in the FabricPath network do not necessarily trigger learning on edge switches.

FabricPath Packet Flow Example
In this section, you review an example of packet flow within a FabricPath network. There are two spine switches and three leaf switches in this sample topology. The spine switches are S1 and S2, and the leaf switches are S10, S14, and S15. Host A, connected to the leaf switch S10, would like to communicate with Host B, connected to the leaf switch S15.

Unknown Unicast Packet Flow
In the beginning, FabricPath is configured correctly and the network is fully converged; however, the MAC address tables of switches S10 and S14 are empty. The MAC address table of switch S15 has only one local entry, for Host B, pointing to port 1/2. The following steps explain the unknown unicast packet flow originated from Host A:

The first entry is local for Host B pointing toward port 1/2. Figure 1-18 FabricPath Unknown Unicast Known Unicast Packet Flow After the unknown unicast packet flow from switch S10 to S14 and S15. and the Host A MAC address is added to the MAC table on switch S15. This flow is known unicast because switch S15 knows how to reach the destination MAC address of Host A. 7. The MAC address table on S15 shows Host A pointing to switch S10. Switch S14 has no entry in its MAC address table. Host A sends an unknown unicast packet to switch S10 on port 1/1. The MAC address lookup is missed. Switch S15 has two entries in its MAC address table. therefore. S15 receives the packet and performs a lookup for Host B in its MAC address table. Figure 1-18 illustrates these steps for an unknown unicast packet flow. and the second entry is remote for Host A pointing toward switch S10. 3. S14 receives the packet and performs a lookup for Host B in its MAC address table. 8. Host B sends a known unicast packet to Host A on switch S15 port 1/2. the Host A MAC address is not learned on switch S14. some MAC address tables are populated.unknown unicast packet flow originated from Host A: 1. A lookup is performed for the destination MAC address of Host B. This packet has switch S10 switch ID in its outer source address (OSA). The following steps explain known unicast packet flow originated from Host B: 1. This flow is unknown unicast because switch S10 does not know how to reach destination Host B. The MAC address lookup finds a hit. S10 updates the classic Ethernet MAC address table with the entry for the source MAC address of Host A and the port 1/1 where the host is connected. 2. . 6. This MAC address is learned on FabricPath and the switch ID in OSA is S10. This lookup is missed because no entry exists for Host B in the switch S10 MAC address table. 5. S10 encapsulates the packet into the FabricPath header and floods it on the FabricPath multidestination tree. The packet is delivered to port 1/2 on switch S15. Switch S10 has only one entry for Host A connected on port 1/1. The FabricPath frame is received by switch S14 and switch S15. 4.

2. The packet is encapsulated in FabricPath header. the MAC address table is updated with the switch ID of the second vPC switch. In a vPC configuration. The MAC address table on S10 shows Host B pointing to switch S15. When the remote FabricPath switch receives a packet for the same MAC address with the switch ID of the second vPC switch in its OSA. the next traffic flow can go via a second vPC switch. which is switch S10. On this remote FabricPath switch. The packet is received by switch S10 and a MAC address of Host B is learned by switch S10. When a packet is received at a remote FabricPath switch. it sees it as MAC move. S15 performs the MAC address lookup and finds a hit pointing toward switch S10. 4. Because the virtual port channel performs load balancing. This behavior is called MAC flapping. This MAC address is learned on FabricPath and the switch ID in OSA is S15. Figure 1-19 FabricPath Known Unicast Virtual Port Channel Plus A virtual port channel plus (vPC+) enables a classical Ethernet (CE) vPC domain and Cisco FabricPath to interoperate. This emulated switch is a logical way of combining two or more FabricPath switches to emulate a single switch to the rest of . To address this problem. In a FabricPath network. Switch S10 performs the MAC address table lookup for Host A. and it is performed in the data plane. the Layer 2 MAC addresses are learned only by the edge switches. a dual-connected host can send a packet into the FabricPath network from two edge switches. 5. 3. with the switch ID of the first vPC switch in OSA. The learning consists of mapping the Layer 2 MAC address to a switch ID. with S15 in the OSA and S10 in the ODA. Switch S15 performs the switch ID lookup and selects a FabricPath core port to send the packet. The packet traverses the FabricPath network to reach the destination switch. Figure 1-19 illustrates these steps for a known unicast packet flow. FabricPath implements an emulated switch. the MAC address table of this remote FabricPath switch is updated with the switch ID of the first vPC switch. and it can destabilize the network. and the packet is delivered on the port 1/1.

The emulated switch implementation in FabricPath where two FabricPath edge switches provide a vPC to a third-party device is called vPC+. It means that the two emulating switches must be directly connected via a peer link, and there should be a peer keepalive path between the two switches. Figure 1-20 shows an example of a vPC+ configuration.

Figure 1-20 FabricPath with vPC+

In this example, Host B is connected to switches S14 and S15 in a vPC configuration. The packets are load-balanced across the vPC link, coming from S14 as well as from S15, and these packets are seen at the remote switch S10. This results in MAC flapping at switch S10. To solve this problem, configure switches S14 and S15 with a FabricPath emulated switch using a vPC+ configuration. To configure vPC+, simply configure the vPC peer link as a FabricPath core port, and assign an emulated switch ID to the vPC. The FabricPath packets originated by the two emulating switches are sourced with the emulated switch ID. The other FabricPath switches are not aware of this emulation and simply see the emulated switch reachable through both of these emulating switches. As a result of this configuration, remote switch S10 sees the Host B MAC address entry pointing to the emulated switch.
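As a minimal sketch of that idea on one of the emulating switches (the domain ID, emulated switch ID, and interface numbers are hypothetical, and the same emulated switch ID must be configured on both peers):

vpc domain 20
  fabricpath switch-id 100
interface port-channel 1
  switchport mode fabricpath
  vpc peer-link
interface port-channel 10
  switchport mode trunk
  vpc 10

Here the fabricpath switch-id command under the vPC domain defines the emulated switch, and placing the peer-link port channel in fabricpath mode makes it a FabricPath core port, as described previously.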

FabricPath Interaction with Spanning Tree
FabricPath edge ports support direct classic Ethernet host connections and connections to traditional Spanning Tree switches. FabricPath switches transmit and process STP BPDUs on FabricPath edge ports. However, FabricPath isolates STP domains. BPDUs, including TCNs, are not transmitted on FabricPath core ports; as a result, no BPDUs are forwarded or tunneled through the FabricPath network. The entire FabricPath network appears as a single STP bridge to any connected STP domains. To appear as one STP bridge, all FabricPath switches share a common bridge ID, or BID (C84C.75FA.6000). This BID is statically defined and is not user configurable. If you are connecting STP devices to the FabricPath network, make sure you configure all edge switches as STP root. Each FabricPath edge switch must be configured as root for all FabricPath VLANs. However, if multiple FabricPath edge switches connect to the same STP domain, make sure that those edge switches use the same bridge priority value. To ensure that the FabricPath network acts as STP root, all FabricPath edge ports have the STP rootguard function enabled implicitly. If a superior BPDU is received on a FabricPath edge port, the port is placed in the L2 Gateway Inconsistent state until the condition is cleared.

Configuring FabricPath
FabricPath can be configured using a few simple steps. These steps involve enabling the FabricPath feature, identifying FabricPath interfaces, and identifying FabricPath VLANs. You perform these steps on all FabricPath switches. Before you start the FabricPath configuration, ensure the following:

You have Nexus devices that support FabricPath.
You have the appropriate license for FabricPath. The Enhanced Layer 2 license is required to run FabricPath. If required, install the license using the following command: install license <filename>
The system is running a minimum of the NX-OS 5.1 (Nexus 7000) / NX-OS 5.3 (Nexus 5500) software release.

The FabricPath configuration steps are as follows:

1. Enable the FabricPath feature. The FabricPath feature set must be enabled before you start configuring FabricPath on a switch. Install the FabricPath feature set by issuing install feature-set fabricpath. Then, to enable the FabricPath feature set, use the following command in switch global configuration mode:

feature-set fabricpath

2. Define FabricPath VLANs. Identify the VLANs that will be extended across the FabricPath network. By default, all the VLANs are in Classic Ethernet mode. You can change the VLAN mode to FabricPath by going into the VLAN configuration. Use the following commands to change the VLAN mode for a range of VLANs:

vlan <range>
mode fabricpath

3. Identify FabricPath interfaces. Identify the FabricPath core ports. These ports connect to other FabricPath switches to establish IS-IS adjacency. There is no need to configure the IS-IS protocol for FabricPath; configuring a port in FabricPath mode automatically enables the IS-IS protocol on the port. The following commands are used to configure an interface in FabricPath mode:

interface <name>
switchport mode fabricpath

After performing this configuration on all FabricPath switches in the topology, FabricPath devices will form IS-IS adjacencies. The unicast and multicast routing information is exchanged, and the switches will start forwarding traffic. FabricPath Dynamic Resource Allocation Protocol (DRAP) automatically assigns a switch ID to each switch within the FabricPath network. You can also configure a static switch ID using the fabricpath switch-id <ID number> command.
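Putting the steps together, a minimal sketch for one switch might look as follows; the VLAN range, interface names, and switch ID are hypothetical:

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 10
vlan 100-110
  mode fabricpath
interface ethernet 2/1-2
  switchport mode fabricpath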

Figure 1-21 FabricPath Configuration Example

Verifying FabricPath
The show mac address-table command can be used to verify that MAC addresses are learned on Cisco FabricPath edge devices. The command shows local addresses with a pointer to the interface on which the address was learned. For remote addresses, it provides a pointer to the remote switch from which the address was learned.

The show fabricpath route command can be used to view the Cisco FabricPath routing table that results from the Cisco FabricPath IS-IS SPF calculations. The Cisco FabricPath routing table shows the best paths to all the switches in the fabric. If multiple equal paths are available between two switches, all paths will be installed in the Cisco FabricPath routing table to provide ECMP. Table 1-6 lists some useful commands to verify FabricPath.
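For example, a quick health check on an edge switch might use the following standard NX-OS show commands (output omitted here); none of them changes the configuration:

show fabricpath isis adjacency
show fabricpath route
show mac address-table dynamic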

Table 1-6 Verify FabricPath Configuration

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 1-7 lists a reference for these key topics and the page numbers on which each is found.

Table 1-7 Key Topics for Chapter 1

Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

Virtual LAN (VLAN)
Virtual Port Channel (vPC)
Network Interface Card (NIC)
Application Centric Infrastructure (ACI)
Maximum Transmit Unit (MTU)
Link Aggregation Control Protocol (LACP)
Spanning Tree Protocol (STP)

Virtual Device Context (VDC)
Cisco Fabric Services over Ethernet (CFSoE)
Hot Standby Router Protocol (HSRP)
Virtual Router Redundancy Protocol (VRRP)
Bridge Protocol Data Unit (BPDU)
Internet Group Management Protocol (IGMP)
Time To Live (TTL)
Dynamic Resource Allocation Protocol (DRAP)

Chapter 2. Cisco Nexus Product Family

This chapter covers the following exam topics:
Describe the Cisco Nexus Product Family
Describe FEX Products

The Cisco Nexus product family offers a broad portfolio of LAN and SAN switching products, spanning software switches that integrate with multiple types of hypervisors to hardware data center core switches and campus core switches. Cisco Nexus products provide the performance, scalability, and availability to accommodate diverse data center needs, including different types of data center protocols, such as FCoE, FC, iSCSI, and so on. This chapter focuses on the Cisco Nexus product family, which includes Nexus 9000, Nexus 7000, Nexus 6000, Nexus 5000, Nexus 3000, and Nexus 2000. You learn the different specifications of each product.

Note
Nexus 1000V is not covered in this chapter; it is covered in detail in Chapter 18, "Cisco Nexus 1000V and Virtual Switching."

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 2-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 2-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Nexus 9000 hardware can work as a spine in a Spine-Leaf topology? (Choose three.)

a. Nexus 9508
b. Nexus 9516
c. Nexus 9396PX
d. Nexus 93128TX
e. Nexus 9336PQ

2. True or false? The Nexus N9K-X9736PQ can be used in standalone mode when the Nexus 9000 modular switches use NX-OS.
a. True
b. False

3. True or false? By upgrading SUP1 to the latest software and using the correct license, you can use FCoE with F2 cards.
a. True
b. False

4. What is the current bandwidth per slot for the Nexus 7700?
a. 550 Gbps
b. 220 Gbps
c. 1.3 Tbps
d. 2.6 Tbps

5. How many VDCs can a SUP2 have?
a. 2+1
b. 4+1
c. 8+1
d. 16+1

6. True or false? The F3 modules can be interchanged between the Nexus 7000 and Nexus 7700.
a. True
b. False

7. How many strands do the 40G BiDi optics use?
a. 2
b. 8
c. 12
d. 16

8. How many unified ports does the Nexus 6004 currently support?
a. 20
b. 24
c. 48
d. 64

9. True or false? The 40-Gbps ports on the Nexus 6000 can be used as 10 Gbps.
a. True
b. False

10. Which of the following statements are correct? (Choose two.)
a. The Nexus 5548P has 48 fixed ports, and the Nexus 5548UP has 32 fixed ports and 16 unified ports.
b. The Nexus 5548P has 32 fixed ports, which are Ethernet ports only, and the Nexus 5548UP has 32 fixed unified ports.
c. The Nexus 5548P has 32 fixed ports and can have 16 unified ports, whereas the Nexus 5548UP has all ports unified.
d. The Nexus 5548P has 32 fixed ports, which are Ethernet only, and the Nexus 5548UP has 32 fixed FCoE-only ports.

11. How many unified ports does the Nexus 5672 have?
a. 16
b. 12
c. 8
d. 24

12. Which vendors support the B22 FEX?
a. HP
b. IBM
c. Dell
d. Fujitsu
e. All the above

13. Which models from the following list of switches support FCoE? (Choose two.)
a. Nexus 2232PP
b. Nexus 2248PQ
c. Nexus 2232TM
d. Nexus 2248TP-E

Foundation Topics

Describe the Cisco Nexus Product Family
The Cisco Nexus product family is a key component of the Cisco unified data center architecture, which is the Unified Fabric; it offers high-density 10 G, 40 G, and 100 G options across the portfolio. The objective of the Unified Fabric is to build highly available, highly secure network fabrics. Using the Cisco Nexus products, you can build end-to-end data center designs based on three-tier architecture or based on Spine-Leaf architecture. Modern data center designs need the following properties:

Effective use of available bandwidth in designs where multiple links exist between the source and destination and one path is active while the other is blocked by spanning tree, or where the design limits us to Active/Standby NIC teaming. This is addressed today using Layer 2 multipathing technologies such as FabricPath and virtual port channels (vPC).

Hosts, especially virtual hosts, must move without the need to change the topology or require an address change.

Power and cooling are key problems in the data center today. Reducing cabling creates efficient airflow, which in turn reduces cooling requirements. Ways to address this problem include using Unified Fabric (converged SAN and LAN), using Cisco virtual interface cards, and using technologies such as VM-FEX and Adapter-FEX. Rather than using, for example, 8 × 10 G links, you can use 2 × 40 G links, and so on.

Computing resources must be optimized, which happens by building a computing fabric and dealing with CPU and memory as resources that are utilized when needed. Doing capacity planning for all the workloads and identifying candidates to be virtualized help reduce the number of compute nodes in the data center. Using the concept of a service profile and booting from SAN in the Cisco Unified Computing System will reduce the time to instantiate new servers.

Improved reliability during software updates, configuration changes, or adding components to the data center environment; that should happen with minimum disruption.

The concept of hybrid clouds can benefit your organization. Hybrid clouds extend your existing data center to public clouds as needed, with consistent network and security policies. That will make it easy to build and tear down test and development environments. Cisco is helping customers utilize this concept using Intercloud Fabric.

In this chapter you will come to understand the Cisco Nexus product family. Figure 2-1 shows the different product types available at the time this chapter was written. These products are explained and discussed in the following sections.

Figure 2-1 Cisco Nexus Product Family

Cisco Nexus 9000 Family
There are two types of switches in the Nexus 9000 Series: the Nexus 9500 modular switches and the Nexus 9300 fixed-configuration switches. They can run in two modes. When they run in NX-OS mode and use the enhanced NX-OS software, they function as a classical Nexus switch. In this case, the design follows the standard three-tier architecture. When they run in ACI mode and in combination with the Cisco Application Policy Infrastructure Controller (APIC), they provide an application-centric infrastructure. In this case, the design follows the Spine-Leaf architecture shown in Figure 2-2.

Figure 2-2 Nexus 9000 Spine-Leaf Architecture

Cisco Nexus 9500 Family
The Nexus 9500 family consists of three types of modular chassis, as shown in Figure 2-3: the 4-slot Nexus 9504, the 8-slot Nexus 9508, and the 16-slot Nexus 9516.

Figure 2-3 Nexus 9500 Chassis Options

The Cisco Nexus 9500 Series switches have a modular architecture that consists of the following:

Switch chassis
Supervisor engine
System controllers
Fabric modules
Line cards
Power supplies
Fan trays
Optics

Among these parts, the supervisors, system controllers, line cards, and power supplies are common components that can be shared among the entire Nexus 9500 product family. Table 2-2 shows the comparison between the different models of the Nexus 9500 switches.

Table 2-2 Nexus 9500 Modular Platform Comparison

Chassis
The Nexus 9500 chassis doesn't have a midplane. Midplanes tend to block airflow, which results in reduced cooling efficiency. Because there is no midplane, fabric cards and line cards align together with a precise alignment mechanism, as shown in Figure 2-4.

Figure 2-4 Nexus 9500 Chassis

Supervisor Engine
The Nexus 9500 modular switch supports two redundant half-width supervisor engines, as shown in Figure 2-5. The supervisor engine is responsible for the control plane function, and the supervisor modules manage all switch operations. Each supervisor module consists of a Romley 1.8-GHz CPU (4 cores) and 16 GB of RAM, upgradable to 48 GB of RAM, plus 64 GB of SSD storage. There are multiple ports for management, including two USB ports, an RS-232 serial port (RJ-45), and a 10/100/1000-Mbps network port (RJ-45). The supervisor has an external clock source, that is, pulse per second (PPS).

Figure 2-5 Nexus 9500 Supervisor Engine

System Controller
There is a pair of redundant system controllers at the back of the Nexus 9500 chassis, as shown in Figure 2-6. They offload chassis management functions from the supervisor modules. The system controller is responsible for managing power supplies and fan trays. The system controllers host two main control and management paths, the Ethernet Out-of-Band Channel (EOBC) and the Ethernet Protocol Channel (EPC), between the supervisor engines, line cards, and fabric modules. The EOBC provides the intrasystem management communication across modules, and the EPC channel handles the intrasystem data plane protocol communication.

Figure 2-6 Nexus 9500 System Controllers

Fabric Modules
The platform supports up to six fabric modules, as shown in Figure 2-7. The packet lookup and forwarding functions involve both the line cards and the fabric modules; both contain multiple network forwarding engines (NFE). The NFE is a Broadcom Trident II ASIC (T2), and the T2 uses 24 40GE ports to guarantee the line rate. All fabric modules are active, and each fabric module consists of multiple NFEs. The Nexus 9504 has one NFE per fabric module, the Nexus 9508 has two, and the Nexus 9516 has four.

Figure 2-7 Nexus 9500 Fabric Module

When you use the 1/10G + 4 40GE line cards, you need a minimum of three fabric modules to achieve line-rate speeds. When you use the 36-port 40GE line cards, you will need six fabric modules to achieve line-rate speeds.

Note
The fabric modules are behind the fan trays, so to install them you must remove the fan trays.

Line Cards
It is important to understand that there are multiple types of Nexus 9500 line cards. There are cards that can be used in standalone mode when used with enhanced NX-OS, in a classical design. There are also line cards that can be used in both modes: standalone mode using NX-OS and ACI mode. In addition, the ACI-ready leaf line cards contain an additional ASIC called the Application Leaf Engine (ALE); the ALE performs the ACI leaf function when the Nexus 9500 is used as a leaf node deployed in ACI mode. Finally, there are line cards that can be used in application-centric infrastructure (ACI) mode only. The ACI-only line cards contain an additional ASIC called the Application Spine Engine (ASE); the ASE performs ACI spine functions when the Nexus 9500 is used as a spine in ACI mode. All line cards have multiple NFEs for packet lookup and forwarding. Figure 2-8

shows the high-level positioning of the different cards available for the Nexus 9500 Series network switches.

Figure 2-8 Nexus 9500 Line Cards Positioning

Nexus 9500 line cards are also equipped with dual-core CPUs. This CPU is used to speed up some control functions, such as programming the hardware table resources, collecting and sending line card counters and statistics, and offloading BFD protocol handling from the supervisors. Table 2-3 shows the different types of cards available for the Nexus 9500 Series switches and their specifications.

Table 2-3 Nexus 9500 Modular Platform Line Card Comparison

Power Supplies
The Nexus 9500 platform supports up to 10 power supplies; they are accessible from the front and are hot swappable. The 3000W AC power supply shown in Figure 2-9 is 80 Plus Platinum rated and provides more than 90% efficiency. Two 3000W AC power supplies can operate a fully loaded chassis, and they support N+1 and N+N (grid) redundancy.

Figure 2-9 Nexus 9500 AC Power Supply

Note
The additional four power supply slots are not needed with the existing line cards shown in Table 2-3; however, they offer headroom for future port densities, bandwidth, and optics.

Fan Trays
The Nexus 9500 consists of three fan trays, as shown in Figure 2-10; each tray consists of three fans. Dynamic fan speed is driven by temperature sensors, and airflow is front to back with N+1 redundancy per tray. Fan trays are installed after the fabric module installation.

Figure 2-10 Nexus 9500 Fan Tray

Note
To service the fabric modules, the fan tray must be removed first. If one of the fan trays is removed, the other two fan trays will speed up to compensate for the loss of cooling.

Cisco QSFP Bi-Di Technology for 40 Gbps Migration
As data center designs evolve from 1 G to 10 G at the access layer, the access-to-aggregation layer and the spine layer of Spine-Leaf designs will move to 40 G. The 40 G adoption is slow today because of multiple barriers: the first is the cost barrier of the 40 G port itself, and the second is that when you migrate from 10 G to 40 G, you must replace the cabling. 10 G operates on what is referred to as two strands of fiber, whereas 40 G operates on eight strands of fiber. Bi-Di optics are standards based, and they enable customers to take the current 10 G cabling plant and use it for 40 G connectivity without replacing the cabling. Figure 2-11 shows the difference between the QSFP SR and the QSFP Bi-Di.

Table 2-4 Nexus 9500 Fixed-Platform Comparison . There are currently four chassis-based models in the Nexus 9300 platform.Figure 2-11 Cisco Bi-Di Optics Cisco Nexus 9300 Family The previous section discussed the Nexus 9500 Series modular switches. Table 2-4 summarizes the different specifications of each chassis. The Nexus 9300 is designed for top-of-rack (ToR) and mid-of-row (MoR) deployments. This section discusses details of the Cisco Nexus 9300 fixed configuration switches.

As shown in Table 2-4, the Nexus 9396PX, 9396TX, and 93128TX can operate in NX-OS mode and in ACI mode (acting as a leaf node). The Nexus 9336PQ can operate in ACI mode only and act as a spine node. The 40-Gbps ports for the Cisco Nexus 9396PX, 9396TX, and 93128TX are provided on an uplink module that can be serviced and replaced by the user. The uplink module is the same for all switches; if used with the Cisco Nexus 93128TX, 8 out of the 12 × 40 Gbps QSFP+ ports will be available. Figure 2-12 shows the different models available today from the Nexus 9300 switches.

Figure 2-12 Cisco Nexus 9300 Switches

Cisco Nexus 7000 and Nexus 7700 Product Family
The Nexus 7000 Series switches form the core data center networking fabric. The Nexus 7000 and the Nexus 7700 switches offer a comprehensive set of features for the data center network. The modular design of the Nexus 7000 and Nexus 7700 enables them to offer different types of network interfaces (1 G, 10 G, 40 G, and 100 G) in a high-density, scalable way, with a switching capacity beyond 15 Tbps for the Nexus 7000 Series and 83 Tbps for the Nexus 7700 Series. The Nexus 7000 hardware architecture offers redundant supervisor engines, redundant power supplies, and redundant fabric cards for high availability. It is very reliable, supporting in-service software upgrades (ISSU) with zero packet loss. It is easy to manage using the command line or through Data Center Network Manager, and you can utilize NX-OS APIs to manage it. There are multiple chassis options in the Nexus 7000 and Nexus 7700 product family, as shown in Table 2-5.

Table 2-5 Nexus 7000 and Nexus 7700 Modular Platform Comparison

The Nexus 7000 and Nexus 7700 product family is modular in design, with great focus on the redundancy of all the critical components; this has been applied across the physical, environmental, power, and system software aspects of the chassis.

Supervisor module redundancy: The chassis can have up to two supervisor modules operating in active and standby modes. State and configuration are in sync between the two supervisors, which provides seamless and stateful switchover in the event of a supervisor module failure.

Note
There are dedicated slots for the supervisor engines in all Nexus chassis; the supervisor modules are not interchangeable between the Nexus 7000 and Nexus 7700 chassis.

Switch fabric redundancy: The fabric modules support load sharing. You can have multiple fabric modules; the Nexus 7000 supports up to five fabric modules, and the Nexus 7700 supports up to six fabric modules. With the current shipping fabric cards and current I/O modules, the switches support N+1 redundancy.

Note
The fabric modules between the Nexus 7000 and Nexus 7700 are not interchangeable.

Power subsystem availability features: The system supports the following power redundancy options:

Combined mode, where the total power available is the sum of the outputs of all the power supplies installed. (This is not redundant.)
PSU redundancy, where the total power available is the sum of all power supplies minus 1, otherwise commonly called N+1 redundancy. PSU redundancy is the default.
Grid redundancy, where the total power available is the sum of the power from only one input on each PSU. Each PSU has two supply inputs, allowing it to be connected to separate isolated A/C supplies. In the event of an A/C supply failure, 50% of power is secure.
Full redundancy, which is the combination of PSU redundancy and grid redundancy. In most cases this will be the same as grid mode but will assure customers that they are protected against either a PSU or a grid failure, but not both at the same time.

Modular software upgrades: The NX-OS software is designed with a modular architecture, which helps to address specific issues and minimize the overall system impact. Each running service is an individual memory-protected process, including multiple instances of a particular service, which provides effective fault isolation between services and makes each service individually monitored and managed. Most of the services allow stateful restart, enabling a service experiencing a failure to be restarted and resume operation without affecting other services.

Cisco NX-OS In-Service Software Upgrade (ISSU): With the NX-OS modular architecture, you can support ISSU, which enables you to do a complete system upgrade without disrupting the data plane and achieve zero packet loss.

Cooling subsystem: The system has redundant fan trays. There are multiple fans on the fan trays, and any failure of one of the fans will not result in loss of service.

Cable management: The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch, allowing maximum flexibility. The cable management cover and optional front module doors provide protection from accidental interference with both the cabling and the modules that are installed in the system. The transparent front door allows observation of cabling and module indicator and status lights.

System-level LEDs: A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. These LEDs report the power supply, fan, fabric, supervisor, and I/O module status. The LEDs alert operators to the need to conduct further investigation.

Nexus 7000 and Nexus 7700 have multiple models with different specifications. Figure 2-13 shows these switching models, and Table 2-5 shows their specifications.
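As a rough illustration, the redundancy mode is selected globally in NX-OS; the keywords below map to the modes just described (this is a sketch, so verify the options available on your platform and release):

power redundancy-mode combined          ! no redundancy
power redundancy-mode ps-redundant      ! N+1, the default
power redundancy-mode insrc-redundant   ! grid redundancy
power redundancy-mode redundant         ! full redundancy

The show environment power command then displays the configured mode along with the actual power draw.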

Cisco Nexus 7004 Series Switch Chassis
The Cisco Nexus 7004 switch chassis shown in Figure 2-14 has two supervisor module slots and two I/O module slots. It has one fan tray and four 3KW power supplies. It is worth mentioning that the Nexus 7004 doesn't have any fabric modules; the local I/O module fabrics are connected back-to-back to form a two-stage crossbar that interconnects the I/O modules and the supervisor engines. The complete specification for the Nexus 7004 is shown in Table 2-5.

Note
When you go through Table 2-5 you might wonder how the switching capacity was calculated. For example, the Nexus 7010 has 8 line card slots, so the Cisco Nexus 7010 bandwidth is calculated like so: [(550 Gbps/slot) × (8 payload slots) = 4400 Gbps; (4400 Gbps) × (2 for full duplex operation) = 8800 Gbps = 8.8 Tbps system bandwidth]. In some bandwidth calculations for the chassis, the supervisor slots are taken into consideration because each supervisor slot has a single channel of connectivity to each fabric card; for example, [(550 Gbps/slot) × (9 payload slots) = 4950 Gbps; (4950 Gbps) × (2 for full duplex operation) = 9900 Gbps = 9.9 Tbps system bandwidth]. Using this type of calculation, a 9-slot chassis will have 8.8 Tbps, and the 18-slot chassis will have 18.7 Tbps. For the Nexus 7700 Series, we'll use the Nexus 7718 as an example; the calculation is [(2.6 Tbps/slot) × (16 payload slots) = 41.6 Tbps; (41.6 Tbps) × (2 for full duplex operation) = 83.2 Tbps system bandwidth]. Although the bandwidth/slot shown in the table gives us 1.32 Tbps, the Nexus 7700 is capable of more than double that capacity, which can be achieved by upgrading the fabric cards for the chassis.

Figure 2-14 Cisco Nexus 7004 Switch

Cisco Nexus 7009 Series Switch Chassis
The Cisco Nexus 7009 switch chassis shown in Figure 2-15 has two supervisor module slots and seven I/O module slots; the complete specification for the Nexus 7009 is shown in Table 2-5. The Nexus 7009 switch has a single fan tray.

Figure 2-15 Cisco Nexus 7009 Switch

Although the Nexus 7009 has side-to-side airflow, there is a solution for hot aisle-cold aisle design with the 9-slot chassis. The fan redundancy has two parts: individual fans in the fan tray and fan tray controllers. The fans in the fan trays are individually wired to isolate any failure and are fully redundant, so that other fans in the tray can take over when one or more fans fail. If an individual fan fails, other fans automatically run at higher speeds, and the system will continue to function, giving you time to get a spare and replace the fan tray. The fan controllers are fully redundant, reducing the probability of a total fan tray failure. So, there is redundancy within the cooling system due to the number of fans.

Cisco Nexus 7010 Series Switch Chassis
The Cisco Nexus 7010 switch chassis shown in Figure 2-16 has two supervisor engine slots and eight I/O module slots; the complete specification is shown in Table 2-5.

Figure 2-16 Cisco Nexus 7010 Switch

There are two system fan trays in the 7010, one for the upper half and one for the lower half of the system. There are multiple fans on the 7010 system fan trays. Any single fan failure will not result in degradation in service: the failure of a single fan will result in the other fans increasing speed to compensate, and they will continue to cool the system; the fan tray will then need replacing to restore N+1 redundancy. If either of the fan trays fails, the box does not overheat; the system will keep running without overheating, as long as the operating environment is within specifications, until the fan tray is replaced. Fan tray replacement restores N+1 redundancy. There are two fabric fan modules in the 7010; if either fabric fan fails, the remaining fabric fan will continue to cool the fabric modules until the fan is replaced.

Cisco Nexus 7018 Series Switch Chassis
The Cisco Nexus 7018 switch chassis shown in Figure 2-17 has two supervisor engine slots and 16 I/O module slots; the complete specification is shown in Table 2-5. For the 7018 system there are two system fan trays, one for the upper half and one for the lower half of the system. Both fan trays must be installed at all times (apart from maintenance), and both are required for normal operation. Each fan tray contains 12 fans arranged in three rows of four, and each row cools three module slots (I/O and supervisor). The fans on the trays are N+1 redundant. The system should not be operated with either the system fan tray or the fabric fan components removed, apart from the hot-swap period of up to 3 minutes. The fan tray should be replaced to restore the N+1 fan resilience.

Integrated into the system fan tray are the fabric fans, located at the rear of the system fan tray. The two fans are in series so that the air passes through both to leave the switch and cool the fabric modules. Failure of a single fabric fan will not result in a failure; the remaining fan will cool the fabric modules.

Figure 2-17 Cisco Nexus 7018 Switch

Note
Although both system fan trays are identical, the fabric fans operate only when installed in the upper fan tray position. Fan trays are fully interchangeable, but when inserted in the lower position the fabric fans are inoperative.

Cisco Nexus 7706 Series Switch Chassis
The Cisco Nexus 7706 switch chassis shown in Figure 2-18 has two supervisor module slots and four I/O module slots; the complete specification is shown in Table 2-5. It provides 192 10-G ports, 96 40-G ports, and 48 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-18 Cisco Nexus 7706 Switch

Cisco Nexus 7710 Series Switch Chassis
The Cisco Nexus 7710 switch chassis shown in Figure 2-19 has two supervisor engine slots and eight I/O module slots; the complete specification is shown in Table 2-5. It provides 384 10-G ports, 192 40-G ports, and 96 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-19 Cisco Nexus 7710 Switch

Cisco Nexus 7718 Series Switch Chassis
The Cisco Nexus 7718 switch chassis shown in Figure 2-20 has two supervisor engine slots and 16 I/O module slots; the complete specification is shown in Table 2-5. It provides 768 10-G ports, 384 40-G ports, and 192 100-G ports, with true front-to-back airflow, redundant fans, and redundant fabric cards.

Figure 2-20 Cisco Nexus 7718 Switch

Cisco Nexus 7000 and Nexus 7700 Supervisor Module
Nexus 7000 and Nexus 7700 Series switches have two slots that are available for supervisor modules. Redundancy is achieved by having both supervisor slots populated. Table 2-6 describes the different options and specifications of the supervisor modules.

Table 2-6 Nexus 7000 and Nexus 7700 Supervisor Modules Comparison

Cisco Nexus 7000 Series Supervisor 1 Module
The Cisco Nexus 7000 supervisor module 1 shown in Figure 2-21 is the first-generation supervisor module for the Nexus 7000. The operating system runs on a dedicated dual-core Xeon processor. As shown in Table 2-6, dual supervisor engines run in active-standby mode, with stateful switchover (SSO) and configuration synchronization between both supervisors. There are dual redundant Ethernet Out-of-Band Channels (EOBC) to each I/O and fabric module to provide resiliency for the communication between control and line card processors. An embedded packet analyzer reduces the need for a dedicated packet analyzer and provides faster resolution for control plane problems. The USB ports allow access to USB flash memory devices for software image loading and recovery.

Figure 2-21 Cisco Nexus 7000 Supervisor Module 1

The Connectivity Management Processor (CMP) provides an independent remote system management and monitoring capability. It removes the need for separate terminal server devices for OOB management.

Administrators must authenticate to get access to the system through the CMP. The CMP has the capability to initiate a complete system restart and shutdown, it allows access to supervisor logs and full console control on the supervisor engine, and it offers complete visibility during the entire boot process.

The Cisco Nexus 7000 supervisor module 1 incorporates highly advanced analysis and debugging capabilities. The Power-On Self-Test (POST) and Cisco Generic Online Diagnostics (GOLD) provide proactive health monitoring, both at startup and during system operation. This is useful in detecting hardware faults; if a fault is detected, corrective action can be taken to mitigate the fault and reduce the risk of a network outage.

Cisco Nexus 7000 Series Supervisor 2 Module
The Cisco Nexus 7000 supervisor module 2 shown in Figure 2-22 is the next-generation supervisor module. It has a quad-core CPU and 12 GB of memory, compared to supervisor module 1, which has a single-core CPU and 8 GB of memory. Supervisor module 2E is the enhanced version of supervisor module 2, with two quad-core CPUs and 32 GB of memory.

Figure 2-22 Cisco Nexus 7000 Supervisor 2 Module

Supervisor module 2 and supervisor module 2E have more powerful CPUs, larger memory, and next-generation ASICs that together result in improved performance, such as an enhanced user experience, faster boot and switchover times, and a higher control plane scale, such as higher VDC and FEX counts. As shown in Table 2-6, Sup2 scale is the same as Sup1, and it will support 4+1 VDCs; Sup2E supports 8+1 VDCs. In addition, they support CPU shares, which enable you to carve out CPU for higher-priority VDCs. Both supervisor module 2 and supervisor module 2E support FCoE when choosing the proper line card.
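As a rough sketch of how a VDC is carved out (the VDC name and interface range here are hypothetical), from the default VDC an administrator would do something like the following:

configure terminal
  vdc Production
    allocate interface ethernet 2/1-8
    exit
switchto vdc Production

Each VDC then runs its own independent configuration and control plane processes on the shared supervisor.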

Note
You cannot mix Sup1 and Sup2 in the same chassis; migrating from Sup1 to Sup2 is a disruptive migration requiring removal of supervisor 1. Sup2 and Sup2E can be mixed for migration only, and this will be a nondisruptive migration.

Cisco Nexus 7000 and Nexus 7700 Fabric Modules
The Nexus 7000 and Nexus 7700 fabric modules provide interconnection between line cards and provide fabric channels to the supervisor modules. The Nexus 7000 has five fabric modules, and the Nexus 7700 has six. All fabric modules support load sharing, and the architecture supports lossless fabric failover. In case of a failure or removal of one of the fabric modules, the remaining fabric modules will load balance the remaining bandwidth to all the remaining line cards. Adding fabric modules increases the available bandwidth per I/O slot because all fabric modules are connected to all slots. Figure 2-23 shows the different fabric modules for the Nexus 7000 and Nexus 7700 products.

Figure 2-23 Cisco Nexus 7000 Fabric Module

In the case of the Nexus 7000, when using Fabric Module 1 you can deliver a maximum of 230 Gbps per slot using five fabric modules, which is 46 Gbps per module. When using Fabric Module 2, you can deliver a maximum of 550 Gbps per slot, which is 110 Gbps per module. In the Nexus 7700, by using Fabric Module 2 you can deliver a maximum of 1.32 Tbps per slot, which is 220 Gbps per module. The Nexus 7000 implements a three-stage crossbar switch: fabric stage 1 and fabric stage 3 are implemented on the line card module, and stage 2 is implemented on the fabric module. The Nexus 7000 supports virtual output queuing (VOQ) and credit-based arbitration to the crossbar to increase performance; VOQ and credit-based arbitration allow fair sharing of resources when a speed mismatch exists, to avoid head-of-line (HOL) blocking. Figure 2-24

shows how these stages are connected to each other.

Figure 2-24 Cisco Nexus 7700 Crossbar Fabric

There are four connections from each fabric module to the line cards, and each one of these connections is 55 Gbps. When all the fabric modules are installed, the total number of connections from the fabric cards to each line card is 24, providing an aggregate bandwidth of 1.32 Tbps per slot. There are two connections from each fabric module to the supervisor module; these connections are also 55 Gbps. When populating the chassis with six fabric modules, there are 12 connections from the switch fabric to the supervisor module, providing an aggregate bandwidth of 275 Gbps.

Note
Cisco Nexus 7000 fabric 1 modules provide two 23 Gbps traces to each I/O module, providing 230 Gbps of switching capacity per I/O slot for a fully loaded chassis. Each supervisor module has a single 23 Gbps trace to each fabric module.

Cisco Nexus 7000 and Nexus 7700 Licensing
Different types of licenses are required for the Nexus 7000 and the Nexus 7700. Table 2-7 describes each license and the features it enables.


Table 2-7 Nexus 7000 and Nexus 7700 Software Licensing Features

It is worth mentioning that Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license. Enabling a licensed feature that does not have a license key starts a counter on the grace period. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature. If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).

Note
To manage the Nexus 7000, two types of licenses are needed: the DCNM LAN and DCNM SAN, each of which is a separate license.

To get the license file, you must obtain the serial number for your device by entering the show license host-id command, as shown in Example 2-1. The host ID is also referred to as the device serial number.

Example 2-1 NX-OS Command to Obtain the Host ID

Click here to view code image
switch# show license host-id
License hostid: VDH=FOX064317SQ

Tip
Use the entire ID that appears after the equal sign (=). In this example, the host ID is FOX064317SQ.

After obtaining the license file, save it to one of four locations: the bootflash: directory, the slot0: device, the usb1: device, or the usb2: device. Perform the installation by using the install license command on the active supervisor module from the device console, as shown in Example 2-2.

Example 2-2 Command Used to Install the License File
Click here to view code image
switch# install license bootflash:license_file.lic
Installing license ..done

You can check what licenses are already installed by issuing the command shown in Example 2-3.

Example 2-3 Command Used to Obtain Installed Licenses
Click here to view code image
switch# show license usage
Feature                       Ins  Lic    Status  Expiry Date  Comments
                                   Count
--------------------------------------------------------------------------------
LAN_ENTERPRISE_SERVICES_PKG   Yes         In use  Never
--------------------------------------------------------------------------------

Cisco Nexus 7000 and Nexus 7700 Line Cards
Nexus 7000 and Nexus 7700 support various types of I/O modules. There are two types of I/O modules: the M-I/O modules and the F-I/O modules. Each has different performance metrics and features. Table 2-8 shows the comparison between M-Series modules.

Table 2-8 Nexus 7000 and Nexus 7700 M-Series Modules Comparison

Note
From the table, you see that the M1 32-port I/O module is the only card that supports LISP.

Table 2-9 shows the comparison between F-Series modules. .LISP.

Table 2-9 Nexus 7000 and Nexus 7700 F-Series Modules Comparison

Cisco Nexus 7000 and Nexus 7700 Series Power Supply Options
The Nexus 7000 and Nexus 7700 use power supplies with greater than 90% efficiency, reducing power wasted as heat and reducing associated data center cooling requirements. They offer visibility into the actual power consumption of the total system, enabling accurate power consumption monitoring for the right sizing of power supplies, UPSs, and environmental cooling. The switches offer different types of redundancy modes. Key power supply features include the following:

Fully hot swappable: Continuous system operations; no downtime when replacing power supplies.
Internal fault monitoring: Detects component defects and shuts down the unit.
Temperature measurement: Prevents damage due to overheating (every ASIC on the board has a temperature sensor).
Real-time power draw: Shows real-time power consumption.
Variable fan speed: Automatically adjusts to changing thermal characteristics; lower fan speeds use lower power. Variable-speed fans adjust dynamically to lower power consumption and optimize system cooling for the true load.
Power redundancy: Multiple system-level options for maximum data center availability.

Cisco Nexus 7000 and Nexus 7700 Series 3.0-KW AC Power Supply Module
The 3.0-KW AC power supply shown in Figure 2-25 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. It is a single 20-ampere (A) AC input power supply. When connected to high-line nominal voltage (220 VAC), it produces a power output of 3000W; connecting to low-line nominal voltage (110 VAC) produces a power output of 1400W.

0-KW AC Power Supply Note Although the Nexus 7700 chassis and the Nexus 7004 use a common power supply architecture.0-kW DC power supply has two isolated input stages. . the system will log an error complaining about the wrong power supply in the system. each delivering up to 1500W of output power.Figure 2-25 Cisco Nexus 7000 3.0-KW DC power supply shown in Figure 2-26 is designed only for the Nexus 7004 chassis and is used across all the Nexus 7700 Series chassis. The unit will deliver 1551W when only one input is active and 3051W when two inputs are active. if you interchange the power supplies. The Nexus 3. although technically this might work.0-KW DC Power Supply Module The 3. it is not officially supported by Cisco. Each stage uses a -48V DC connection. Cisco Nexus 7000 and Nexus 7700 Series 3. Therefore. different PIDs are used on each platform.

Figure 2-26 Cisco Nexus 7000 3.0-KW DC Power Supply

Cisco Nexus 7000 Series 6.0-KW and 7.5-KW AC Power Supply Modules
The 6.0-KW and 7.5-KW power supplies shown in Figure 2-27 are common across the Nexus 7009, 7010, and 7018 chassis. They allow mixed-mode AC and DC operation, enabling migration without disruption and providing support for dual environments with unreliable AC power, with battery backup capability.

0-KW DC Power Supply Module The 6-KW DC power supply shown in Figure 2-28 is common to the 7009.5-KW Power Supply Specifications Cisco Nexus 7000 Series 6.Figure 2-27 Cisco Nexus 7000 6. . The power supply can be used in combination with AC units or as an all DC setup. Table 2-10 Nexus 7000 and Nexus 7700 6. It supports the same operational characteristics as the AC units: redundancy modes (N+1 and N+N).0-KW and 7.5-KW Power Supply Table 2-10 shows the specifications of both power supplies with different numbers of inputs and input types. each delivering up to 1500W of power (6000W total on full load) with peak efficiency of 91% (high for a DC power supply). and 7018 systems.0-KW and 7. 7010. The 6-KW has four isolated input stages.

so it is recommended. Full redundancy provides the highest level of redundancy.Real-time power—actual power levels Single input mode (3000W) Online insertion and removal Integrated Lock and On/Off switch (for easy removal) Figure 2-28 Cisco Nexus 7000 6. allowing them to be connected to separate isolated A/C supplies. In the event of an A/C supply failure. it is always better to choose the mode of power supply operation based on the requirements and needs. Grid redundancy. otherwise commonly called N+1 redundancy. 50% of power is secure. where the total power available is the sum of the power from only one input on each PSU.) PSU redundancy. Each PSU has two supply inputs. You can lose one power supply or one grid. in most cases this will be the same as grid redundancy. An example of each mode is shown in Figure 2-29. where the total power available is the sum of all power supplies – 1. However. Full redundancy.0-KW DC Power Supply Multiple power redundancy modes can be configured by the user: Combined mode. which is the combination of PSU redundancy and grid redundancy. . (This is not redundant. where the total power available is the sum of the outputs of all the power supplies installed.

Figure 2-29 Nexus 6.0-KW Power Redundancy Modes

To help with planning for the power requirements, Cisco has made a power calculator that can be used as a starting point; it can be located at http://www.cisco.com/go/powercalculator. It is worth mentioning that the power calculator cannot be taken as a final power recommendation.

Cisco Nexus 6000 Product Family
The Cisco Nexus 6000 product family is a high-performance, high-density, low-latency 1G/10G/40G/FCoE Ethernet switch family. Multiple models are available: Nexus 6001T, Nexus 6001P, Nexus 6004EF, and Nexus 6004X. When using the unified module on the Nexus 6004, you can get native FC as well. Table 2-11 describes the differences between the Nexus 6000 models.

Table 2-11 Nexus 6000 Product Specification

Note
At the time this chapter was written, the Nexus 6004X was renamed the Nexus 5696Q.

Cisco Nexus 6001P and Nexus 6001T Switches and Features
The Nexus 6001P is a fixed-configuration 1RU Ethernet switch. It provides 48 1G/10G SFP+ ports and four 40 G Ethernet QSFP+ ports. It offers two choices of airflow: front-to-back (port-side exhaust) and back-to-front (port-side intake). Each 40 GE port can be split into 4 × 10 GE ports; the split can be done by using a QSFP+ break-out cable (Twinax or fiber). It provides integrated Layer 2 and Layer 3 at wire speed. Port-to-port latency is around 1 microsecond.

The Nexus 6001T is a fixed-configuration 1RU Ethernet switch. It provides 48 1G/10G BASE-T ports and four 40 G Ethernet QSFP+ ports. It offers two choices of airflow: front-to-back (port-side exhaust) and back-to-front (port-side intake). Each 40 GE port can be split into 4 × 10 GE ports; this can be done by using a QSFP+ break-out cable (Twinax or fiber). It provides integrated Layer 2 and Layer 3 at wire speed, and port-to-port latency is around 3.3 microseconds. The Nexus 6001T supports FCoE on the RJ-45 ports for up to 30m when CAT 6, 6a, or 7 cables are used. Both switches are shown in Figure 2-30.

Figure 2-30 Cisco Nexus 6001 Switches

Cisco Nexus 6004 Switches Features
The Nexus 6004 is shown in Figure 2-31. Currently, there are two models: the Nexus 6004EF and the Nexus 6004X. The main difference is that the Nexus 6004X supports Virtual Extensible LAN (VXLAN). The Nexus 6004 is a 4RU 10 G/40 G Ethernet switch, and it offers eight line card expansion module (LEM) slots. There are two choices of LEMs: a 12-port 10 G/40 G QSFP module and a 20-port unified-port module offering either 1 G/10 G SFP+ or 2/4/8 G FC. It offers up to 96 40 G QSFP ports and up to 384 10 G SFP+ ports. The Nexus 6004 offers integrated Layer 2 and Layer 3 at wire rate. Port-to-port latency is approximately 1 microsecond for any packet size.

Table 2-12 describes each license and the features it enables. This converts each QSFP+ port to one SFP+ port.Figure 2-31 Nexus 6004 Switch Note The Nexus 6004 can support 1 G ports using a QSA converter (QSFP to SFP adapter) on a QSFP interface. The SFP+ can support both 10 G and 1 GE transceivers. disable the use of that feature. You then have 120 days to install the appropriate license keys. which is the amount of time the features in a license package can continue functioning without a license. or disable the grace . enabling a licensed feature that does not have a license key to start a counter on the grace period. It is worth mentioning that Nexus switches have a grace period. Cisco Nexus 6000 Switches Licensing Options Different types of licenses are required for the Nexus 6000.

If at the end of the 120-day grace period the device does not have a valid license key for the feature.period feature. the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. .

Table 2-12 Nexus 6000 Licensing Options

There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).

Note
To manage the Nexus 6000, two types of licenses are needed: the DCNM LAN and DCNM SAN. Each of them is a separate license.

Cisco Nexus 5500 and Nexus 5600 Product Family
The Cisco Nexus 5000 product family is a Layer 2 and Layer 3 1G/10G Ethernet family with unified ports; it includes the Cisco Nexus 5500 and Cisco Nexus 5600 platforms. Table 2-13 shows the comparison between the different models.


Table 2-13 Nexus 5500 and Nexus 5600 Product Specification

Note
The Nexus 5010 and 5020 switches are considered end-of-sale, so they are not covered in this book. Refer to www.cisco.com for more information.

Cisco Nexus 5548P and 5548UP Switches Features
The Nexus 5548P and the Nexus 5548UP are 1/10-Gbps switches with one expansion module. The Nexus 5548P has all 32 ports as 1/10-Gbps Ethernet only. The Nexus 5548UP has 32 unified ports, meaning that the ports can run 1/10 Gbps Ethernet or 8/4/2/1 Gbps native FC, or a mix of both. Figure 2-32 shows the layout for them.

Figure 2-32 Nexus 5548P and Nexus 5548UP Switches

Cisco Nexus 5596UP and 5596T Switches Features
The Nexus 5596T shown in Figure 2-33 is a 2RU 1/10-Gbps Ethernet, native Fibre Channel, and FCoE switch. It has 32 fixed ports of 10 G BASE-T and 16 fixed ports of SFP+; the 10 G BASE-T ports support FCoE up to 30m with Category 6a and Category 7 cables. The switch has three expansion modules and supports unified ports on all SFP+ ports.
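Because port personality on the unified-port models is set in software, converting ports to native FC is a configuration change. The following is a minimal sketch (the slot number and port range are hypothetical); note that FC ports are allocated from the highest-numbered ports downward, and a reload is required for the new port types to take effect:

configure terminal
  feature fcoe
  slot 1
    port 25-32 type fc
copy running-config startup-config
reload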

Figure 2-33 Nexus 5596UP and Nexus 5596T Switches

Cisco Nexus 5500 Products Expansion Modules
You can have additional Ethernet and FCoE ports or native Fibre Channel ports with the Nexus 5500 products by adding expansion modules. The Nexus 5548P/5548UP has one expansion module, and the Nexus 5596UP/5596T has three.

The Cisco N55-M16P module shown in Figure 2-34 has 16 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces.

Figure 2-34 Cisco N55-M16P Expansion Module

The Cisco N55-M8P8FP module shown in Figure 2-35 is a 16-port module. It has 8 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces, and 8 ports of 8/4/2/1-Gbps native Fibre Channel using

SFP+ and SFP interfaces.

Figure 2-35 Cisco N55-M8P8FP Expansion Module

The Cisco N55-M16UP shown in Figure 2-36 is a 16-unified-port module. It has 16 ports of 1/10-Gbps Ethernet and FCoE using SFP+ interfaces, or up to 16 ports of 8/4/2/1-Gbps native Fibre Channel using SFP+ and SFP interfaces.

Figure 2-36 Cisco N55-M16UP Expansion Module

The Cisco N55-M4Q shown in Figure 2-37 is a 4-port 40-Gbps Ethernet module. Each QSFP 40-GE port can work only in 4×10G mode and supports DCB and FCoE.

Figure 2-37 Cisco N55-M4Q Expansion Module

The Cisco N55-M12T shown in Figure 2-38 is a 12-port 10-Gbps BASE-T module; it supports FCoE up to 30m on Category 6a and Category 7 cables. This module is supported only in the Nexus 5596T.

Figure 2-38 Cisco N55-M12T Expansion Module

The Cisco 5500 Layer 3 daughter card shown in Figure 2-39 is used to enable Layer 3 on the Nexus 5548P and 5548UP; it can be ordered with the system, or it is field upgradable as a spare. This daughter card provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]), which is shared among all 48 ports.

Figure 2-39 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card

To install the Layer 3 module, you must replace the Layer 2 I/O module: power off the switch and follow the steps shown in Figure 2-40. There is no need to remove the switch from the rack.

Figure 2-40 Cisco Nexus 5548P and 5548UP Layer 3 Daughter Card Upgrade Procedure

To enable Layer 3 on the Nexus 5596P and 5596UP, you must have a Layer 3 expansion module, which can be ordered with the system or as a spare. This module provides 160 Gbps of Layer 3 forwarding (240 million packets per second [mpps]), which is shared among all 48 ports. Currently, you can have only one Layer 3 expansion module per Nexus 5596P and 5596UP. Figure 2-41 shows the Layer 3 expansion module.
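Once the Layer 3 hardware is installed, routing is turned on through the standard NX-OS feature model. A minimal sketch follows; the VLAN, addressing, and OSPF process values are hypothetical:

configure terminal
  feature interface-vlan
  feature ospf
  router ospf 1
  interface vlan 100
    ip address 10.100.0.1/24
    ip router ospf 1 area 0.0.0.0
    no shutdown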

Figure 2-41 Cisco Nexus 5596P and 5596UP Layer 3 Daughter Card Upgrade Procedure

Enabling Layer 3 affects the scalability limits for the Nexus 5500. For example, the maximum number of FEXs per Cisco Nexus 5500 Series switch is 24 with Layer 2 only; enabling Layer 3 reduces the supported number per Nexus 5500 to 16. Verify the scalability limits based on the NX-OS release you will be using before creating a design.

Cisco Nexus 5600 Product Family
The Nexus 5600 is the new generation of the Nexus 5000 switches. The Nexus 5600 has two models, the Nexus 5672UP and the Nexus 56128P. Both models bring integrated L2 and L3, true 40-Gbps flows, cut-through switching for 10/40 Gbps, 40-Gbps FCoE, 1-microsecond port-to-port latency with all frame sizes, and a 25 MB buffer per port ASIC. Table 2-14 shows the summary of the features.

Table 2-14 Nexus 5600 Product Switches Features

Cisco Nexus 5672UP Switch Features
The Nexus 5672UP shown in Figure 2-42 has 48 fixed 1/10-Gbps SFP+ ports, of which 16 ports are unified, meaning that those ports can run 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. True 40 Gbps ports use QSFP+ for Ethernet/FCoE. The switch has two redundant power supplies and three fan modules, and it supports both port-side exhaust and port-side intake airflow.

Figure 2-42 Cisco Nexus 5672UP

Cisco Nexus 56128P Switch Features
The Cisco Nexus 56128P shown in Figure 2-43 is a 2RU switch. It has 48 fixed 1/10-Gbps Ethernet SFP+ ports and four 40-Gbps QSFP+ ports. The 48 fixed SFP+ ports and the four 40-Gbps QSFP+ ports support FCoE as well. The Cisco Nexus 56128P has two expansion modules that support 24 unified ports.

Figure 2-43 Cisco Nexus 56128P

The 24 unified ports provide 8/4/2 Gbps Fibre Channel as well as 10 Gigabit Ethernet and FCoE connectivity options. The switch has four N+N redundant, hot-swappable power supplies; four N+1 redundant, hot-swappable independent fans; and a management and console interface on the fan side of the switch.

Cisco Nexus 5600 Expansion Modules
Expansion modules enable the Cisco Nexus 5600 switches to support unified ports with native Fibre Channel connectivity. The Nexus 56128P currently supports one expansion module, the N56-M24UP2Q, shown in Figure 2-44. That module provides 24 ports of 10 G Ethernet/FCoE or 2/4/8 G Fibre Channel, plus two 40 Gigabit QSFP+ Ethernet/FCoE ports.

Figure 2-44 Cisco Nexus 56128P Unified Port Expansion Module

Cisco Nexus 5500 and Nexus 5600 Licensing Options
Different types of licenses are required for the Nexus 5500 and Nexus 5600. Table 2-15 describes each license and the features it enables. Nexus switches have a grace period, which is the amount of time the features in a license package can continue functioning without a license. Enabling a licensed feature that does not have a license key starts a counter on the grace period. You then have 120 days to install the appropriate license keys, disable the use of that feature, or disable the grace period feature. If at the end of the 120-day grace period the device does not have a valid license key for the feature, the Cisco NX-OS software automatically disables the feature and removes the configuration from the device. There is also an evaluation license, which is a temporary license. Evaluation licenses are time bound (valid for a specified number of days) and are tied to a host ID (device serial number).


Table 2-15 Nexus 5500 Product Licensing

Note
To manage the Nexus 5500 and Nexus 5600, two types of licenses are needed: the DCNM LAN and DCNM SAN. Each is a separate license.

Cisco Nexus 3000 Product Family
The Nexus 3000 switches are part of the Cisco Unified Fabric architecture; they offer high performance and high density at ultra-low latency. This product family consists of multiple switch models: the Nexus 3000 Series, the Nexus 3100 Series, and the Nexus 3500 switches. The Nexus 3000 switches are 1RU server access switches. The Nexus 3000 Series consists of five models: Nexus 3064X, Nexus 3064-32T, Nexus 3064T, Nexus 3016Q, and Nexus 3048. These are compact 1RU 1/10/40-Gbps Ethernet switches, shown in Figure 2-45. All of them offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-16 gives a comparison and specifications of these different models.

Figure 2-45 Cisco Nexus 3000 Switches .


Table 2-16 Nexus 3000 Product Model Comparison

The Nexus 3100 switches have four models: Nexus 3132Q, Nexus 3164Q, Nexus 3172Q, and Nexus 3172TQ. These are compact 1RU 1/10/40-Gbps Ethernet switches, shown in Figure 2-46. All of them offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-17 gives a comparison and specifications of the different models.

Figure 2-46 Cisco Nexus 3100 Switches

Table 2-17 Nexus 3100 Product Model Comparison

The Nexus 3500 switches have two models, the Nexus 3524 and the Nexus 3548. These are compact 1RU 1/10-Gbps Ethernet switches, shown in Figure 2-47. Both offer wire-rate Layer 2 and Layer 3 on all ports and ultra-low latency. Table 2-18 gives a comparison and specifications of the different models.

Figure 2-47 Cisco Nexus 3500 Switches

Table 2-18 Nexus 3500 Product Model Comparison

Note
The Nexus 3524 and the Nexus 3548 support a forwarding mode called warp mode, which uses a hardware technology called Algo Boost. Algo Boost provides ultra-low latency, as low as 190 nanoseconds (ns). These switches are very well suited for high-performance trading and high-performance computing environments.

Cisco Nexus 3000 Licensing Options
There are different types of licenses for the Nexus 3000; Table 2-19 shows the license options. It is worth mentioning that the Nexus 3000 doesn't support the grace period feature.
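As a rough sketch, warp mode is enabled globally on the Nexus 3548; a reload may be required for the forwarding-mode change to take effect, so verify the exact behavior for your NX-OS release:

configure terminal
  hardware profile forwarding-mode warp
copy running-config startup-config
reload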


Table 2-19 Nexus 3000 Licensing Options

Cisco Nexus 2000 Fabric Extenders Product Family
The Cisco Nexus 2000 Fabric Extenders behave as remote line cards; they appear as an extension to the parent switch to which they connect. The parent switch can be a Nexus 5000, Nexus 6000, Nexus 7000, or Nexus 9000 Series switch. All Nexus 2000 Fabric Extenders connected to the same parent switch are managed from a single point, so no configuration or software maintenance tasks need to be done on the FEXs. Using the Nexus 2000 gives you great flexibility when it comes to the type of connectivity and the physical topology, and it enables a highly scalable server access design without the dependency on spanning tree.

This type of architecture provides the flexibility and benefits of both top-of-rack (ToR) and end-of-row (EoR) architectures. Figure 2-48 shows both types.

Figure 2-48 Cisco Nexus 2000 Top-of-Rack and End-of-Row Design

As shown in Figure 2-48, Rack-01 is using dual redundant Cisco Nexus 2000 Series Fabric Extenders, placed at the top of the rack. The cabling between the servers and the Cisco Nexus 2000 Series Fabric Extenders is contained within the rack; only the cables between the Nexus 2000 and the parent switches run between the racks, reducing cabling between racks. The uplink ports on the Cisco Nexus 2000 Series Fabric Extenders can be connected to a Cisco Nexus 5000, Nexus 6000, Cisco Nexus 7000, or Nexus 9000 Series switch installed in the EoR position as the parent switch. This is a ToR design from a cabling point of view, but from an operational point of view it looks like an EoR design, because all these Nexus 2000 Fabric Extenders are managed from the parent switch. Multiple connectivity options, which can be 10 Gbps or 40 Gbps, can be used to connect the FEX to the parent switch, as shown in Figure 2-49 and explained shortly.

You must use the pinning max-links command to create pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. Table 2-20 shows the different models of the 1/10-Gbps fabric extenders. the host port will be statically pinned to one of the uplinks between the FEX and the parent switch. Destination MAC. Straight-through. and Source IP Destination IP. using dynamic pinning (port channel): You can use this method to load balance between the down link ports connected to the server and the fabric ports connected to the parent switch. . Static pinning and active-active FEX are currently supported only on the Cisco Nexus 5000 Series switches. Note The Cisco Nexus 2248PQ Fabric Extender does not support the static pinning fabric interface connection. This method is used when the FEX is connected straight through to the parent switch. The traffic is being distributed using a hashing algorithm.Figure 2-49 Cisco Nexus 2000 Connectivity Options Straight-through. The default value is maxlinks equals 1. using static pinning: To achieve a deterministic relationship between the host port on the FEX to which the server connects and the fabric interfaces that connect to the parent switch. The host interfaces are divided by the number of the max-links and distributed accordingly. for Layer 2 it uses Source MAC and Destination MAC. for Layer 3 it uses Source MAC. This method bundles multiple uplink interfaces to one logical port channel. The server port will always use the same uplink port. the FEX is dual homed using vPC to different parent switches. Active-active FEX using vPC: In this scenario. Note The Cisco Nexus 7000 Series switches currently support only straight-through deployment using dynamic pinning.

.Table 2-20 Nexus 2000 1/10-Gbps Model Comparison Table 2-21 shows the different models of the 100M and 1 Gbps fabric extenders.

Table 2-21 Nexus 2000 100 Mbps/1 Gbps Model Comparison As shown in Figure 2-50. There is a separate hardware model for each vendor. and IBM. Dell. . Fujitsu. the Cisco Nexus B22 Fabric Extender is part of the fabric extenders family. They simplify the operational model by making the blade switch management part of the parent switch and make it appear as a remote line card to the parent switch. It extends the FEX to third-party blade centers from HP. similar to the Nexus 2000 fabric extenders.

Similar to the Nexus 2000 Series. This architecture gives the benefit of centralized management through the Nexus parent switch. . which are called fabric ports. host ports for blade server attachments and uplink ports. the B22 consists of two types of ports. as shown in Figure 2-51. It is similar to the Nexus 2000 Series management architecture.Figure 2-50 Cisco Nexus B22 Fabric Extender The B22 topology shown in Figure 2-51 is creating a highly scalable server access design with no spanning tree running. The uplink ports are visible and are used for the connectivity to the upstream parent switch.

and Nexus 5000 to build standard three-tier data center architectures. you can build a Spine-Leaf architecture called Dynamic Fabric Automation (DFA). and this is where the host is connected. Table 2-22 Nexus B22 Specifications Cisco Dynamic Fabric Automation Besides using the Nexus 7000. The DFA architecture includes fabric management using Central Point of Management (CPoM) and the automation of configuration of the fabric components. as well as the L4 to L7 services.Figure 2-51 Cisco Nexus B22 Access Topology The number of host ports and fabric ports for each model is shown in Table 2-22. Using open application programming interfaces (API). and nothing connects to the spines except the leaf switches. . Nexus 6000. The spines are at the core of the network. as shown in Figure 2-52. you can integrate automation and orchestration tools to provide control for provisioning of the fabric components. The leaf nodes are at the edge.

Nexus modular switches go from 1 Gbps ports up to 100 Gbps ports. spanning from 100 Mbps per port and going up to 100 Gbps per port. Summary In this chapter. you learned about the different Cisco Nexus family of switches. The Nexus 5600 Series is the latest addition to the Nexus 5000 family of switches. The Nexus product family is designed to meet the stringent requirements of the next-generation data center. and resource efficiency. budget. and the Nexus 9000 switches. The Nexus product family offers different types of connectivity options that are needed in the data center. Nexus 7700. It supports true 40 Gbps ports with 40 Gbps flows. Following are some points to remember: With the Nexus product family you can have an infrastructure that can be scaled cost effectively.84 Tbps per slot. true .32 Tbps per slot. One of the key features it supports is VXLAN and true 40 Gbps ports. and that helps you increase energy. If you want 100 Mbps ports you must use Nexus 2000 fabric extenders. the Nexus 7700 can scale up to 1.Figure 2-52 Cisco Nexus B22 Topology Cisco DFA enables creation of tenant-specific virtual fabrics and allows these virtual fabrics to be extended anywhere within the physical data center fabric. supervisor module 2. The Nexus family provides operational continuity to meet your needs for an environment where system availability is assumed and maintenance windows are rare. Multiple supervisor modules are available for the Nexus 7000 switches and the Nexus 7700 switches: supervisor module 1. and the Nexus 9000 can scale up to 3. It supports unified ports. It uses a 24-bit (16 million) segment identifier to support a large-scale virtual fabric that can scale beyond the traditional 4000 VLANs. if not totally extinct. The Nexus 7000 can scale up to 550 Gbps per slot. The Nexus 6004X is the latest addition to the Nexus 6000 family. Nexus modular switches are the Nexus 7000. and supervisor module 2E.

Nexus 3000 is part of the unified fabric architecture. and Nexus 5000 switches. The Nexus 2000 can be connected to Nexus 7000. Table 2-23 lists a reference for these key topics and the page numbers on which each is found. They act as the parent switch for the Nexus 2000. and is the only switch in the Nexus 6000 family that supports VXLAN. noted with the key topic icon in the outer margin of the page. It is an ultra-low latency switch that is well suited to the high-performance trading and high-performance computing environments.40 Gbps. . Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. Nexus 6000. Nexus 9000. Nexus 2000 fabric extenders must have a parent switch connected to them.

“Memory Tables Answer Key. Define Key Terms Define the following key terms from this chapter. or at least the section for this chapter.Table 2-23 Key Topics for Chapter 2 Complete Tables and Lists from Memory Print a copy of Appendix B. “Memory Tables” (found on the disc).” also on the disc. and complete the tables and lists from memory. Appendix C. includes completed tables and lists to check your work. and check your answers in the glossary: Application Centric Infrastructure (ACI) port channel (PC) virtual port channel (vPC) Fabric Extender (FEX) Top of Rack (ToR) End of Row (EoR) Dynamic Fabric Automation (DFA) Virtual Extensible LAN (VXLAN) .

” Table 3-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. That approach has its own benefits and business requirements. How many VDCs are supported on Nexus 7000 SUP 1 and SUP 2E (do not count admin VDC)? a. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. too.) a. Default VDC b. If you do not know the answer to a question or are only partially sure of the answer. The first is how to partition one physical switch and make it appear as multiple switches. 1. “Answers to the ‘Do I Know This Already?’ Quizzes. (Choose three. SUP1 support 4 * SUP 2E support 8 2. This chapter covers the virtualization capabilities of the Nexus 7000 and Nexus 5000 switches using Virtual Device Context (VDC) and Network Interface Virtualization (NIV). Table 3-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. you should mark that question as wrong for purposes of the self-assessment. read the entire chapter. each with its own administrative domain and security policy. Choose the different types of VDCs that can be created on the Nexus 7000. You can find the answers in Appendix A.Chapter 3. SUP1 support 4 * SUP 2E support 4 b. Admin VDC . The other option is how to make multiple switches appear as if they are one big modular switch. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. SUP1 support 2 * SUP 2E support 8 d. That approach has its own benefits and business requirements. SUP1 support 4 * SUP 2E support 6 c. Virtualizing Cisco Network Devices This chapter covers the following exam topic: Describe Device Virtualization There are two types of device virtualization when it comes to Nexus devices.

Without creating any VDC on the Nexus 7000. Within a VDC. switchto {vdc name} 5. connectTo b. This VDC is always active and cannot be deleted. 802.1BR d. within this VDC are multiple processes and Layer 2 and Layer 3 services running. . Virtual interface c. what does the acronym VIF stand for? a. VDC {id} command and press Enter d. VN-Tag initial frame b.c. the control plane runs a single VDC. OTV VDC 3. False 6. VN-Tag interface Foundation Topics Describing VDCs on the Cisco Nexus 7000 Series Switch The Virtual Device Context (VDC) is used to virtualize the Nexus 7000 switch.1qpg b. 802. True or false? The Nexus 2000 Series switch can be used as a standalone access switch. The VN-Tag is being standardized under which IEEE standard? a. True or false? We can assign physical interfaces to an admin VDC. a. True b. a. 802. switchto vdc{vdc name} c. True b. Which command is used to access a nondefault VDC from the default VDC? a. Each VDC will have its own administrative domain allowing the management plane as well to be virtualized.1Qbh c. as shown in Figure 3-1.Qbr 7. Storage VDC d. False 4. 802. You can assign a line card to a VDC or a set of ports to the VDC. Virtualized interface d. meaning presenting the Nexus 7000 switch as multiple logical switches. you can have a unique set of VLANs and VRFs allowing the virtualization of the control plane. In the VN-Tag fields.

When enabling multiple VDCs. and hardware partitioning. When enabling Role-Based Access Control (RBAC) for each VDC. Within each VDC you can have duplicate VLAN names and VRF names.Figure 3-1 Single VDC Operation Mode Structure Virtualization using VLANS and VRFs within this VDC can still be used. You can have fault isolation. separate administration per VDC. meaning you can have VRF production in VDC1 and VRF production in VDC2. . Figure 3-2 shows what the structure looks like when creating multiple VDCs. these processes are replicated for each VDC. separate data traffic. each VDC administrator will interface with the process for his or her own VDC.

and Common Criteria Evaluation and Validation Scheme (CCEVS) 10349 certification with EAL4 conformance. VDC Deployment Scenarios There are multiple deployment scenarios using the VDCs. extranet.Figure 3-2 Multiple VDC Operation Mode Structure The following are typical use cases for the Cisco Nexus 7000 VDC. development. it has NSS labs for Payment Card Industry (PCI) compliant environments. and test environments Creation of intranet. power. which enable the reduction of the physical footprint in the data center. Horizontal Consolidation Scenarios . and DMZ Creation of separate organizations on the same physical switch Creation of separate application environments Creation of separate departments for the same organization due to different administration domains VDC separation is industry certified. which can be used in the design of the data center: Creation of separate production. which in turn saves space. Federal Information Processing Standards (FIPS 140-2) certification. and cooling.

Aggregation 1 and Aggregation 2 are completely isolated with a separate role-based access and separate configuration file. you can create multiple VDCs called Aggregation 1 and Aggregation 2. you can consolidate core functions and aggregation functions. Figure 3-3 Horizontal Consolidations for Core Layer Aggregation layers can also be consolidated. Figure 3-4 shows multiple aggregation layers created on the same physical devices. You can allocate ports or line cards to the VDCs. .VDCs can be used to consolidate multiple physical devices that share the same function role. This can happen by creating two VDCs. using a pair of Nexus 7000 switches you can build two redundant data center cores. For example. which can be useful in facilitating migration scenarios. VDC1 will accommodate Core A and VDC2 will accommodate Core B. so rather than building a separate aggregation layer for test and development. and after the migration you can reallocate the old core ports to the new core. as shown in Figure 3-3. In horizontal consolidation.

For example. you can consolidate core functions and aggregation functions. as shown in Figure 3-5. . This can happen by creating two VDCs. You can allocate ports or line cards to the VDCs. it can be useful because it reduces the number of physical switches. you can build a redundant core and aggregation layer. VDC2 accommodates Core B and Aggregation B.Figure 3-4 Horizontal Consolidation for Aggregation Layers Vertical Consolidation Scenarios When using VDCs in vertical consolidation. VDC1 accommodates Core A and Aggregation A. Although the port count is the same. using a pair of Nexus 7000 switches.

Figure 3-6 shows that scenario. as shown in Figure 3-7. Figure 3-6 Vertical Consolidation for Core. Aggregation. The only way is through the firewall. we used a pair of Nexus 7000 switches to build a three-tier architecture. By creating two separate VDCs for two logical areas inside and outside. and Access Layers VDCs for Service Insertion VDCs can be used for service insertion and policy enforcement. Each VDC has its own administrative domain. for traffic to cross VDC-A to go to VDC-B it must pass through the firewall. as you can see. . This design can make service insertion more deterministic and secure.Figure 3-5 Vertical Consolidation for Core and Aggregation Layers Another scenario is consolidation of core aggregation and access while you maintain the same hierarchy. and traffic must go out VDC-A to reach VDC-B.

CPU shares are supported only with SUP2/2e. and FCoE Control Plane Policing (CoPP) Port channel load balancing Hardware IDS checks control ACL capture feature enabled Note With CPU shares. Changes in the nondefault VDC affect only that VDC. the admin always uses it to manage the physical device and other VDCs created. You must be in the default VDC or the Admin VDC (discussed later in the chapter) to do certain tasks. which can be done only in these two VDC types. and management plane. The default VDC is a full-function VDC. The default VDC has certain characteristics and tasks that can only be done in the default VDC. Connecting two VDCs on the same physical switch must be done through external cables. memory NX-OS upgrade across all VDCs EPLD upgrade—as directed by TAC or to enable new features Ethanalyzer captures—control plane traffic Feature-set installation for Nexus 2000. which can be summarized in the following: VDC creation/deletion/suspend Resource allocation—interfaces. Nondefault VDC: The nondefault VDC is a fully functional VDC that can be created from the default VDC or the Admin VDC. The following list describes the different types of VDCs that can be created on the Nexus 7000: Default VDC: When you first log in to the Nexus 7000. There is a separate . An independent process is started for each protocol per VDC created. VDCs simply partition the physical Nexus device into multiple logical devices with a separate control plane. data plane. FabricPath. you log in to the default VDC. you don’t dedicate a CPU to a VDC rather than giving access and assigning priorities to the CPU.Figure 3-7 Using VDCs for Firewall Insertion Understanding Different Types of VDCs As explained earlier in the chapter. the NX-OS on the Nexus 7000 supports Virtual Device Contexts (VDC). you cannot connect VDCs together from inside the chassis.

In that case. Enter the system admin-vdc command after boot. dedicated FCoE interfaces or shared interfaces. The Admin VDC has certain characteristics that are unique to the Admin VDC. you will be prompted to select an Admin VDC. which can carry both Ethernet and FCoE traffic. . The storage VDC counts toward the total number of VDCs. You cannot assign any interface to the Admin VDC. You must be careful or you will lose the nonglobal configuration. The storage VDC relies on the FCoE license and doesn’t need a VDC license.2(2) for SUP1 module. and all nonglobal configuration in the default VDC will be lost. you can create the admin VDC in different ways: During a fresh switch boot. You can enable starting from NX-OS 6. it replaces the default VDC. and each VDC has its own RBAC. When you enable the Admin VDC. it cannot be deleted or changed back to the default VDC. After the Admin VDC is created. and starting from the NX-OS 6. only mgmt0 can be assigned to the Admin VDC. After the creation of the storage VDC. Note Creating the Admin VDC doesn’t require the advanced package license or VDC license. the default VDC becomes the Admin VDC. and so on. SNMP. To change back to the default VDC you must erase the configuration and do a fresh boot script.1 for SUP2/2E modules. Enter the system admin-vdc migrate new vdc name command. we assign interfaces to it. Storage VDC: The storage VDC is a nondefault VDC that helps maintain the operation model when the customer has a LAN admin and a SAN admin.configuration file per VDC. All nonglobal configuration on the default VDC will be migrated to the new VDC. There are two types of interfaces. We can create only one storage VDC on the physical device. ADMIN VDC: The Admin VDC is used for administration functions. Features and feature sets cannot be enabled in an Admin VDC. Figure 3-8 shows the two types of ports.

After you create a VDC. With Supervisor engine 2E you can have up to eight VDCs and one Admin VDC. Beyond four VDCs on Supervisor 2E we require a license to increment the VDCs with an additional four. all interfaces belong to the default VDC. and within a VDC.Figure 3-8 Storage VDC Interface Types Note To run FCoE. When you allocate the physical interface to a VDC. Supervisor engine 2 can have up to four VDCs and one Admin VDC. the physical interface belongs to one Ethernet VDC and one storage VDC at the same time. physical and logical interfaces can be assigned to VLANs or VRFs. When you create a shared interface. such as tunnel interfaces and switch virtual interfaces. Sup2/2e is required. For example. all configurations that existed on it get erased. you start assigning interfaces to that VDC.2. Logical interfaces. Interface Allocation The only thing you can assign to a VDC are the physical ports. The physical interface gets configured within the VDC. A physical interface can belong to only one VDC at a given time. With Supervisor 1 you can have up to four VDCs and one Admin VDC. Note All members of a port group are automatically allocated to the VDC when you allocate an interface. At the beginning and before creating a VDC. are created within the VDC. in the N7K-M132XP-12 (4 interfaces × 8 port groups = 32 interfaces) and N7K-M132XP-12L (same as non-L M132) (1 interface × 8 port groups = . which requires 8 GB of RAM because you must upgrade to NX-OS 6.

A user with the network-operator role is given the VDC-operator role in the nondefault VDC. When a network-admin or network-operator user accesses a nondefault VDC by using the switchto command. Network-operator: The second default role that exists on Cisco Nexus 7000 Series switches is the network-operator role. the first user account on that VDC is admin.8 interfaces). However. See the example for this module in Figure 3-9. This role gives a user complete control over the specific nondefault VDC. This role has no rights to any other VDC. Figure 3-9 VDC Interface Allocation Example VDC Administration The operations allowed for a user in VDC depend on the role assigned to that user. This role grants the user read-only rights in the default VDC. All M132 cards require allocation in groups of four ports. The network-operator role includes the right to issue the switchto command. or change nondefault VDCs. VDC-admin: When a new VDC is created. Note You can create custom roles for a user. which can be used to access a nondefault VDC from the default VDC. and it is available only in the default VDC. Interfaces belonging to the same port group must belong to the same VDC. no users are assigned to this role. and you can configure eight port groups. but you cannot change the default user roles for a VDC. This role includes the ability to create. By default. delete. The network-admin role is automatically assigned to this user. that user is mapped to a role of the same level in the nondefault VDC. That means a user with the network-admin role is given the VDC-admin role in the nondefault VDC. this user does not have any rights in any other VDC and cannot access other VDCs through the switchto command. each VDC maintains its user role . The VDC-admin role is automatically assigned to this admin user on a nondefault VDC. User roles are not shared between VDCs. and the NX-OS provides four default user roles: Network-admin: The first user account that is created on a Cisco Nexus 7000 Series switch in the default VDC is admin. The role must be assigned specifically to a user by a user who has network-admin rights. VDC-operator: The VDC-operator role has read-only rights for a specific VDC. The network-admin role gives a user complete read and write access to the device. Users can have multiple roles assigned to them.

the countdown for the 120 days doesn’t stop if a single feature of the license package is already enabled. 100 days before. The show vdc command displays different outputs depending on where you are executing it from. If you are executing the command from the default VDC. When the grace period expires. As shown in Example 3-1. so if you enable a feature to try it. Example 3-1 Displaying the Status of a Feature Set on a Switch Click here to view code image N7K-VDC-A# show feature-set Feature Set Name ID State -------------------. the show feature-set command is used to verify and show which feature has been enabled or disabled in a VDC. an advanced license is required. When you enable a feature. The storage VDC is enabled using the storage license associated with a specific line card. this command displays information about all VDCs on the physical device. but another feature was enabled. The license is associated with the serial number of a Cisco Nexus 7000 switch chassis.-------fabricpath 2 enabled fex 3 disabled N7K-VDC-A# The show feature-set command in Example 3-1 shows that the FabricPath feature is enabled and the FEX feature is disabled. Verifying VDCs on the Cisco Nexus 7000 Series Switch The next few lines show certain commands to display useful information about the VDC. To stop the grace period. all the features of the license package must be stopped. Physical interfaces also are assigned to nondefault VDCs from the default VDC. VDC Requirements To create VDCs. You need network admin privileges to do that. VDCs are created or deleted from the default VDC only. As stated before. you are left with 20 days to install the license.-------. any nondefault VDCs will be removed from the switch configuration. Example 3-2 Displaying Information About All VDCs (Running the Command from the Default VDC) Click here to view code image . the license package can contain several features. You can try the feature for the 120-day grace period. Note The grace period operates across all features in a license package.database. for example.

Example 3-3 Displaying Information About the Current VDC (Running the Command from the Nondefault VDC) Click here to view code image N7K-Core-Prod# show vdc vdc_id -----1 vdc_name -------prod state ----active mac ---------00:18:ba:d8:3f:fd To display the detailed information about all VDCs. execute show vdc detail from the default VDC. Example 3-4 Displaying Detailed Information About All VDCs Click here to view code image N7K-Core# show vdc detail vdc id: 1 vdc name: N7K-Core vdc state: active vdc mac address: 00:22:55:79:a4:c1 vdc ha policy: RELOAD vdc dual-sup ha policy: SWITCHOVER vdc boot Order: 1 vdc create time: Thu May 14 08:14:39 2012 vdc restart count: 0 vdc vdc vdc vdc vdc vdc vdc vdc vdc id: 2 name: prod state: active mac address: 00:22:55:79:a4:c2 ha policy: RESTART dual-sup ha policy: SWITCHOVER boot Order: 1 create time: Thu May 14 08:15:22 2012 restart count: 0 vdc vdc vdc vdc vdc vdc vdc vdc id: 3 name: dev state: active mac address: 00:22:55:79:a4:c3 ha policy: RESTART dual-sup ha policy: SWITCHOVER boot Order: 1 create time: Thu May 14 08:15:29 2012 . which in our case is the N7K-Core-Prod. as shown in Example 3-4.N7K-Core# show vdc vdc_id -----1 2 3 vdc_name -------prod dev MyVDC state ----active active active mac ---------00:18:ba:d8:3f:fd 00:18:ba:d8:3f:fe 00:18:ba:d8:3f:ff If you are executing the show vdc command within the nondefault VDC. the output displays information about the current VDC.

vdc restart count: 0 You can display the detailed information about the current VDC by executing show vdc {vdc name} detail from the nondefault VDC. which is executed from the . as shown in Example 3-6. You can save all the running configurations to the startup configuration for all VDCs using one command. Use the copy running-config startup-config vdc-all command. it will show the interface membership only for that VDC. Example 3-6 Displaying Interface Allocation per VDC Click here to view code image N7K-Core# show vdc membership vdc_id: 1 vdc_name: N7K-Core interfaces: Ethernet2/1 Ethernet2/2 Ethernet2/4 Ethernet2/5 Ethernet2/7 Ethernet2/8 Ethernet2/10 Ethernet2/11 Ethernet2/13 Ethernet2/14 Ethernet2/16 Ethernet2/17 Ethernet2/19 Ethernet2/20 Ethernet2/22 Ethernet2/23 Ethernet2/25 Ethernet2/26 Ethernet2/28 Ethernet2/29 Ethernet2/31 Ethernet2/32 Ethernet2/34 Ethernet2/35 Ethernet2/37 Ethernet2/38 Ethernet2/40 Ethernet2/41 Ethernet2/43 Ethernet2/44 Ethernet2/48 vdc_id: 2 vdc_name: prod interfaces: Ethernet2/47 vdc_id: 3 vdc_name: dev interfaces: Ethernet2/46 Ethernet2/3 Ethernet2/6 Ethernet2/9 Ethernet2/12 Ethernet2/15 Ethernet2/18 Ethernet2/21 Ethernet2/24 Ethernet2/27 Ethernet2/30 Ethernet2/33 Ethernet2/36 Ethernet2/39 Ethernet2/42 Ethernet2/45 If you execute show VDC membership from the VDC you are currently in. as shown in Example 3-5. Example 3-5 Displaying Detailed Information About the Current VDC When You Are in the Nondefault VDC Click here to view code image N7K-Core-prod# show vdc prod detail vdc id: 2 vdc name: prod vdc state: active vdc mac address: 00:22:55:79:a4:c2 vdc ha policy: RESTART vdc dual-sup ha policy: SWITCHOVER vdc boot Order: 1 vdc create time: Thu May 14 08:15:22 2012 vdc restart count: 0 To verify the interface allocation per VDC. execute show vdc membership from the default VDC.

and several . after that.php N7K-Core-Prod# To switch back to the default VDC. Example 3-8 Navigating Between VDCs on the Nexus 7000 Switch Click here to view code image N7K-Core# switchto vdc Prod TAC support: http://www.php and http://www. A copy of each such license is available at http://www. The FEX technology can be extended all the way to the server hypervisor. when you create the user accounts and configure the IP connectivity.opensource. you must do switchback to the default VDC first before going to another VDC.default VDC. you use Secure Shell (SSH) and Telnet to connect to the desired VDC.cisco. Describing Network Interface Virtualization You can deliver an extensible and scalable fabric solution using the Cisco FEX technology. executed from the default VDC. VLAN. as shown in Example 3-8. Virtual Interface (VIF): Logical construct consisting of a front panel port. You cannot see the configuration for other VDCs except from the default VDC. use the show running-config vdc-all command. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch.org/licenses/lgpl-2.1. Cisco Systems.0.com/tac Copyright (c) 2002-2008. You cannot execute the switchto command from the nondefault VDC.org/licenses/gpl-2. All rights reserved. Cisco Nexus 2000 FEX Terminology The following terminology and acronyms are used with the Nexus 2000 fabric extenders: Network Interface (NIF): Port on FEX. You can navigate from the default VDC to the nondefault VDC using the switchto command. connecting to parent switch. Certain components of this software are licensed under the GNU General Public License (GPL) version 2.opensource.1. use the switchback command. Example 3-7 Saving All Configurations for All VDCs Click here to view code image N7K-Core# copy running-config startup-config vdc-all [########################################] 100% To display the running configurations for all VDCs. Use these commands for the initial setup.0 or the GNU Lesser General Public License (LGPL) Version 2. The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. Inc. connecting to server. Host Interface (HIF): Front panel port on FEX.

this design is a ToR design. From a cabling point of view. Figure 3-10 Nexus 2000 Interface Types and Acronyms Nexus 2000 Series Fabric Extender Connectivity The Cisco Nexus 2000 Series cannot run as a standalone switch. FEX A/A or dual-attached FEX: Scenario in which the fabric links of FEX are connected. From the logical network deployment point of view. This switch can be a Nexus 9000. this design is an EoR design. The FEX acts as a . Fabric Port: N7K side of the link connected to FEX. Fabric Extender (FEX): NEXUS 2000 is the instantiation of FEX. it needs a parent switch. FEX port: Server-facing port on FEX. actively forwarding traffic to two parent switches. Parent switch: Switch (N7K/N5K) where FEX is connected. or Nexus 5000 Series.other parameters. Nexus 6000. as stated before. The uplink ports from the Nexus 2000 will be connected to the parent switch. dual redundant Nexus 2000 fabric extenders are placed at the top of the rack. The FEX uplink is also called a Network Interface (NIF). The cabling between the servers and the Nexus 2000 Series fabric extenders is contained within the rack. Nexus 5600. Fabric links (Fabric Port Channel or FPC): Links that connect FEX NIF and FEX fabric ports on parent switch. Figure 3-10 shows the ports with reference to the physical hardware. also referred to as server port or host port in this presentation. Nexus 7000. which will be installed at the EoR. Logical interface (LIF): Presentation of a front panel port in parent switch. Fabric Port Channel: Port channel between N7K and FEX. This type of design combines the benefit of the Top-of-Rack (ToR) design with the benefit of the End-of-Row (EoR) design. which will be 10/40 Gbps. Based on the type and the density of the access ports required. FEX port is also called a Host Interface (HIF). Only a small number of cables will run between the racks. FEX uplink: Network-facing port on the FEX side of the link connected to N7K.

1Qbh working group. However. VN-Tag is a Network Interface Virtualization (NIV) technology. 802. so all traffic must be sent to the parent switch. At the time of writing this book there is no local switching happening on the Nexus 2000 fabric extender.remote line card. The effort was moved to the IEEE 802. Note You can connect a switch to the FEX host interfaces. At the beginning. To make this work. all configuration and maintenance tasks are done from the parent switch.1BR in July 2011. All control and management functions are performed by the parent switch. and no operation tasks are required to be performed from the FEXs. as stated earlier. The host connected to the Nexus 2000 fabric extender is unaware of any tags. The server appears as if it is connected directly to the parent switch. Figure 3-10 shows the physical and the logical architecture of the FEX implementation. The Nexus 2000 host ports (HIFs) do not expect to receive BPDUs on the host ports and expect only end hosts like servers. you need to turn off Bridge Protocol Data Units (BPDUs) from the access switch and turn on “bpdufilter” on the Nexus 2000. “Cisco Nexus Product Family” in the section “Cisco Nexus 2000 Fabric Extenders Product Family” to see the different models and specifications. Refer to Chapter 2. From an operation perspective. which might create a loop in the network.1Qbh was withdrawn and is no longer an approved IEEE project. Figure 3-11 shows the physical and logical view for the Nexus 2000 fabric extender deployment. this design has the advantage of the EoR design. but this is not a recommended solution. Forwarding is performed by the parent switch. Figure 3- . and the hosts connected to the Nexus 2000 fabric extender appear as if they are directly connected to the parent switch. The physical ports on the Nexus 2000 fabric extender appear as logical ports on the parent switch. VN-Tag was being standardized under the IEEE 802. A frame exchanged between the Nexus 2000 fabric extender and the parent switch will have an information tag inserted in it called VN-Tag. which enables advanced functions and policies to be applied to it. Figure 3-11 Nexus 2000 Deployment Physical and Logical View VN-Tag Overview The Cisco Nexus 2000 Series fabric extender acts as a remote line card to a parent switch.

these events occur: 1. The Cisco Nexus 2000 Series switch adds a unique VN-Tag for each Cisco Nexus 2000 Series host interface. The frame arrives from the host.12 shows the VN-Tag fields in an Ethernet frame. Figure 3-13 shows the two scenarios. These are the VN-Tag field values: . Figure 3-12 VN-Tag Fields in an Ethernet Frame d: Direction bit (0 is host-to-network forwarding. 2. The Cisco Nexus 2000 Series switch adds a VN-Tag. and the packet is forwarded over a fabric link using a specific VN-Tag. Figure 3-13 Cisco Nexus 2000 Packet Flow When the host sends a packet to the network (diagrams A and B).1 is network-to-host) p: Pointer bit (set if this is a multicast frame and requires egress replication) L: Looped filter (set if sending back to source Cisco Nexus 2000 Series) VIF: Virtual interface Cisco Nexus 2000 FEX Packet Flow The packet processing on the Nexus 2000 happens in two parts: when the host send a packet to the network and when the network sends a packet to the host.

When the network sends a packet to the host (diagram C). Cisco Nexus 2000 FEX Port Connectivity You can connect the Nexus 2000 Series fabric extender to one of the following Ethernet modules that are installed in the Cisco Nexus 7000 Series. and the frame is forwarded to the host interface. and destination virtual interface are undefined (0). The Cisco Nexus switch inserts a VN-Tag with these characteristics: The direction is set to 1 (network to host). The l (looped) bit filter is set if sending back to a source Cisco Nexus 2000 Series switch. which identifies the logical interface that corresponds to the physical host interface on the Cisco Nexus 2000 Series. The frame is received on the physical or logical interface. The p (pointer). The destination virtual interface is set to be the Cisco Nexus 2000 Series port VN-Tag. The Cisco Nexus switch performs standard lookup and policy processing when the egress port is determined to be a logical interface (Cisco Nexus 2000 Series) port.The direction bit is set to 0. The Cisco Nexus 2000 Series switch strips the VN-Tag. 2. Physical link-level properties are based on the Cisco Nexus Series switch port. The packet is received over the fabric link using a specific VN-Tag. The Cisco Nexus switch extracts the VN-Tag. 3. depending on the type and model of card installed in it. The source virtual interface is set based on the ingress host interface. indicating host-to network forwarding. The packet is forwarded over a fabric link using a specific VN-Tag. these events occur: 1. The source virtual interface is set if the packet was sourced from a Cisco Nexus 2000 Series port. 4. The Cisco Nexus switch strips the VN-Tag and sends the packet to the network. 3. The Cisco Nexus switch applies an ingress policy that is based on the physical Cisco Nexus Series switch port and logical interface: Access control and forwarding are based on frame fields and virtual (logical) interface policy. There is a best practice for connecting the Nexus 2000 fabric extender to the Nexus 7000. The p bit is set if this frame is a multicast frame and requires egress replication. 12-port 40-Gigabit Ethernet F3-Series QSFP I/O module (N7K-F312FQ-25) for Cisco Nexus 7000 Series switches 24-port Cisco Nexus 7700 F3 Series 40-Gigabit Ethernet QSFP I/O module (N77-F324FQ-25) 48-port Cisco Nexus 7700 F3 Series 1/10-Gigabit Ethernet SFP+ I/O module (N77-F348XP23) 32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12) 32-port 10-Gigabit Ethernet SFP+ I/O module (N7K-M132XP-12L) 24-port 10-Gigabit Ethernet I/O M2 Series module XL (N7K-M224XP-23L) 48-port 1/10-Gigabit Ethernet SFP+ I/O F2 Series module (N7K-F248XP-25) Enhanced 48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N7K-F248XP-25E) . l (looped).

The host interfaces are divided by the number of the max-links and distributed accordingly. which makes the FEX use individual links connected to the parent switch. A fabric interface that fails in the port channel does not trigger a change to the host interfaces. If all links in the fabric port channel go down. Figure 3-14 Cisco Nexus 2000 Connectivity to N7K-M132XP-12 There are two ways to connect the Nexus 2000 to the parent switch: either use static pinning. You must use the pinning max-links command to create several pinned fabric interface connections so that the parent switch can determine a distribution of host interfaces. or use a port channel when connecting the fabric interfaces on the FEX to the parent switch. all the associated host interfaces go down and will remain down as long as the fabric port is down. all host interfaces on the FEX are set to the down state. The default value is max-links 1. To provide load balancing between the host interfaces and the parent switch. Traffic is automatically redistributed across the remaining links in the port channel fabric interface. This connection bundles 10-Gigabit Ethernet fabric interfaces into a single logical channel. the bandwidth that is dedicated to each end host toward the parent switch is never changed by the switch. when the FEX is powered up and connected to the parent switch.48-port 1/10-Gigabit Ethernet SFP+ I/O module (F2e Series) (N77-F248XP-23E) Figure 3-14 shows the connectivity between the Cisco Nexus 2000 fabric extender and the N7KM132XP-12 line card. This is explained in more detail in the next paragraphs. . In static pinning. but instead is always specified by you. The drawback here is that if the fabric link goes down. its host interfaces are distributed equally among the available fabric interfaces. As a result. you can configure the FEX to use a port channel fabric interface connection.

6000. You can disallow the installed fabric extender feature set in a specific VDC on the device. Figure 3-15 Cisco Nexus 2000 and VDCs Cisco Nexus 2000 FEX Configuration on the Nexus 7000 Series How you configure the Cisco Nexus 2000 Series on the Cisco 7000 Series is different from the configuration on Cisco Nexus 5000. Use the following steps to connect the Cisco Nexus 2000 to the Cisco 7000 Series parent switch when connected to one of the supported line cards. Figure 3-15 shows the FEX connectivity to the Nexus 7000 when you have multiple VDCs.When you have VDCs created on the Nexus 7000. Example 3-11 Entering the Chassis Mode for the Fabric Extender Click here to view code image N7K-Agg(config)# fex 101 . All FEX fabric links belong to the same VDC. The difference comes from the VDC-based architecture of the Cisco Nexus 7000 Series switches. you must follow certain rules. FEX IDs are unique across VDCs (the same FEX ID cannot be used across different VDCs). Example 3-9 Installing the Fabric Extender Feature Set in Default VDC Click here to view code image N7K-Agg(config)# N7K-Agg(config)# install feature-set fex Example 3-10 Enabling the Fabric Extender Feature Set Click here to view code image N7K-Agg(config)# N7K-Agg(config)# feature-set fex By default. when you install the fabric extender feature set. and 5600 Series switches. it is allowed in all VDCs. all FEX host ports belong to the same VDC.

we create a port channel with four member ports. so it might be different depending on which VDC you want to connect to.N7K-Agg(config-fex)# description Rack8-N2k N7K-Agg(config-fex)# type N2232P Example 3-12 Defining the Number of Uplinks Click here to view code image N7K-Agg(config-fex)# pinning max-links 1 You can use this command if the fabric extender is connected to its parent switch using one or more statically pinned fabric interfaces. Example 3-13 Disallowing Enabling the Fabric Extender Feature Set Click here to view code image N7K-Agg# configure terminal N7K-Agg (config)# vdc 1 N7K-Agg (config-vdc)# no allow feature-set fex N7K-Agg (config-vdc)# end N7K-Agg # Note The N7k-agg# VDC1 is the VDC_ID. In Example 3-14. The next step is to associate the fabric extender to a port channel interface on the parent device. Example 3-14 Associate the Fabric Extender to a Port Channel Interface Click here to view code image N7K-Agg# configure terminal N7K-Agg (config)# interface ethernet 1/28 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/29 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/30 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface ethernet 1/31 N7K-Agg (config-if)# channel-group 4 N7K-Agg (config-if)# no shutdown N7K-Agg (config-if)# exit N7K-Agg (config)# interface port-channel 4 . There can be only one port channel connection.

Num Macs: 64 Module Sw Gen: 21 [Switch Sw Gen: 21] pinning-mode: static Max-links: 1 Fabric port for control traffic: Po101 Fabric interface state: Po101 .159.N7K-Agg (config-if)# switchport N7K-Agg (config-if)# switchport mode fex-fabric N7K-Agg (config-if)# fex associate 101 After the FEX is associated successfully. Example 3-15 Verifying FEX Association Click here to view code image N7K-Agg# show fex FEX FEX FEX FEX Number Description State Model Serial -----------------------------------------------------------------------101 FEX0101 Online N2K-C2248TP-1GE JAF1418AARL Example 3-16 Display Detailed Information About a Specific FEX Click here to view code image N7K-Agg# show fex 101 detail FEX: 101 Description: FEX0101 state: Online FEX version: 5. Extender Serial: JAF1418AARL Part No: 73-12748-05 Card Id: 99.Interface Up.Interface Up.1(0. State: Active Eth2/1 .Interface Up. State: Active Eth4/2 .Interface Up. State: Active Eth4/1 .1(1)] FEX Interim version: 5. Mac Addr: 54:75:d0:a9:49:42.6) Switch Interim version: 5.1(1) Extender Model: N2K-C2248TP-1GE.1(1) [Switch version: 5. you must do some verification to make sure everything is configured properly. State: Active Fex Port State Fabric Port Primary Fabric Eth101/1/1 Up Po101 Po101 Eth101/1/2 Up Po101 Po101 Eth101/1/3 Down Po101 Po101 Eth101/1/4 Down Po101 Po101 Example 3-17 Display Which FEX Interfaces Are Pinned to Which Fabric Interfaces Click here to view code image N7K-Agg# show interface port-channel 101 fex-intf Fabric FEX Interface Interfaces --------------------------------------------------Po101 Eth101/1/2 Eth101/1/1 .Interface Up. State: Active Eth2/2 .

To send traffic between VDCs you must use external physical cables that lead to greater security of user data. each VDC will have its own management domain. Table 3-2 lists a reference for these key topics and the page numbers on which each is found. Cisco Nexus 2000 cannot operate in a standalone mode. allowing the management plane also to be virtualized. it needs a parent switch to connect to. . noted with the key topics icon in the outer margin of the page. This technology enables operational simplicity by having a single point of management and policy enforcement on the access parent switch Exam Preparation Tasks Review All Key Topics Review the most important topics in the chapter. The FEX technology can be extended all the way to the server hypervisor.Example 3-18 Display the Host Interfaces That Are Pinned to a Port Channel Fabric Interface Click here to view code image N7K-Agg# show interface port-channel 4 fex-intf Fabric FEX Interface Interfaces --------------------------------------------------Po4 Eth101/1/48 Eth101/1/47 Eth101/1/46 Eth101/1/44 Eth101/1/43 Eth101/1/42 Eth101/1/40 Eth101/1/39 Eth101/1/38 Eth101/1/36 Eth101/1/35 Eth101/1/34 Eth101/1/32 Eth101/1/31 Eth101/1/30 Eth101/1/28 Eth101/1/27 Eth101/1/26 Eth101/1/24 Eth101/1/23 Eth101/1/22 Eth101/1/20 Eth101/1/19 Eth101/1/18 Eth101/1/16 Eth101/1/15 Eth101/1/14 Eth101/1/12 Eth101/1/11 Eth101/1/10 Eth101/1/8 Eth101/1/7 Eth101/1/6 Eth101/1/4 Eth101/1/3 Eth101/1/2 Eth101/1/45 Eth101/1/41 Eth101/1/37 Eth101/1/33 Eth101/1/29 Eth101/1/25 Eth101/1/21 Eth101/1/17 Eth101/1/13 Eth101/1/9 Eth101/1/5 Eth101/1/1 Summary VDCs on the Nexus 7000 switches enable true device virtualization.

Table 3-2 Key Topics for Chapter 3 Define Key Terms Define the following key terms from this chapter. and check your answers in the Glossary: Virtual Device Context (VDC) supervisor engine (SUP) demilitarized zone (DMZ) erasable programmable logic device (EPLD) Fabric Extender (FEX) role-based access control (RBAC) network interface (NIF) host interface (HIF) virtual interface (VIF) logical interface (LIF) End-of-Row (EoR) Top-of-Rack (ToR) VN-Tag .

Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security. Data Center Interconnect This chapter covers the following exam topics: Identify Key Differentiators Between DCI and Network Interconnectivity Cisco Data Center Interconnect (DCI) solutions enable IT organizations to meet two main objectives: business continuity in case of disaster events and corporate and regularity compliance to improve data security.” Table 4-1 “Do I Know This Already?” Section-to-Question Mapping Caution The goal of self-assessment is to gauge your mastery of the topics in this chapter. True b. a. False 3.Chapter 4. If you do not know the answer to a question or are only partially sure of the answer. Which of the following Cisco products supports OTV? a. Nexus 7000 Series b. which enables you to have data replication. Catalyst 6500 Series 2. you learn about the latest Cisco innovations in the Data Center Extension solutions and in LAN Extension in particular. which is called Overlay Transport Virtualization (OTV). we can set up static MAC-to-IP mappings in OTV. ASR1000 Series d. “Answers to the ‘Do I Know This Already?’ Quizzes. server clustering. True or false? For silent hosts. ASR9000 Series c. “Do I Know This Already?” Quiz The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. DCI solutions in general extend LAN and SAN connectivity between geographically dispersed data centers. Which version of IGMP is used on the OTV join interface? . you should mark that question as wrong for purposes of the self-assessment. and workload mobility. In this chapter. read the entire chapter. You can find the answers in Appendix A. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics. Table 4-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. 1.

A solution that provides Layer 2 connectivity. True or False? You need to do no shutdown for the overlay interface after adding it. Dense wavelength-division multiplexing (DWDM) or coarse wavelengthdivision multiplexing (CWDM) can increase the number of Layer 2 connections that can be run through the fibers. No need to enable IGMP on the OTV join interface 4. IGMPv2 c. yet restricts the reach of the flood domain. 4 d. VPLS uses an MPLS network as the underlying transport network. Dark fiber: In some cases. False Foundation Topics Introduction to Overlay Transport Virtualization Data Center Interconnect (DCI) solutions have been available for quite some time. instead of point-to-point Ethernet pseudowires. dark fiber might be available to build private optical connections between data centers. Different types of technologies provide Layer 2 extension solutions: Ethernet over Multiprotocol Label Switching (EoMPLS): EoMPLS can be used to provision point-to-point Layer 2 Ethernet connections between two sites using MPLS as the transport. application. IGMPv3 d. Virtual Private LAN Services (VPLS): Similar to EoMPLS. VPLS delivers a virtual multiaccess Ethernet network. Transport infrastructure nature: The nature of the transport between data centers varies depending on the location of the data centers and the availability and cost of services in the . Layer 2 extensions might be required at different layers in the data center to enable the resiliency and clustering mechanisms offered by the different applications at the web. is necessary to contain failures and preserve the resiliency achieved by the use of multiple data centers. 8 5. However. and database layers. 2 c. 1 b. True b.a. IGMPv1 b. Using these technologies to extend the LAN across multiple data centers creates a series of challenges: Failure isolation: The extension of Layer 2 domains between multiple data center sites can cause the data centers to share protocols and failures that would normally have been isolated when interconnecting data centers over an IP network. it is mainly used to enable the benefit of geographically separated data centers. a. How many join interfaces can you create per logical overlay? a. These technologies increase the total bandwidth over the same number of fibers.

Bridge Protocol Data Units (BPDU) are not forwarded on the overlay. following are the minimum requirements to run OTV on each platform: Cisco Nexus 7000 Series and Cisco Nexus 7700 platform: .different areas. There is no need to extend Spanning Tree Protocol (STP) across the transport infrastructure. and it provides flexibility and enables long-reach connectivity. A simple overlay protocol with built-in capability and point-to-cloud provisioning is crucial to reducing the cost of providing this connectivity. and no extra configuration is needed to make it work. OTV doesn’t require additional configuration to offer multihoming. distributed provisioning. Bandwidth utilization with replication. this results in failure containment within a data center. load balancing. Complex operations: Layer 2 VPNs can provide extended Layer 2 connectivity across data centers. therefore. which typically rely on data plane learning and the use of flooding across the transport infrastructure to learn reachability. addition or removal of an edge device does not affect other edge devices on the overlay. it is built in natively in OTV using the same control plane protocol along with loop prevention mechanisms. Although the features. and an operationally intensive hierarchical scaling model. Each site will have its own spanning-tree domain. which is independent from the other site even though all sites have a common Layer 2 domain. Cisco has two main products that currently support OTV: Cisco Nexus 7000 Series switches and Cisco ASR 1000 Series aggregation services routers. An IP-capable transport is the most generalized offering. A solution capable of using an IP transport is expected to provide the most flexibility. Mechanisms must be provided to prevent loops that might be induced when connecting bridged networks that are multihomed. multihoming of the Layer 2 sites onto the VPN is required. and path diversity: When extending Layer 2 domains across data centers. Balancing the load across all available paths while providing resilient connectivity between the data center and the transport network requires added intelligence above and beyond that available in traditional Ethernet switching and Layer 2 VPNs. Finally. and convergence times continue to improve. the use of available bandwidth between data centers must be optimized to obtain the best connectivity at the lowest cost. OTV provides Layer 2 extension between remote data centers by using a concept called MAC address routing. scale. Therefore. and failure doesn’t propagate into other data centers and stops at the edge device. A cost-effective solution for the interconnection of data centers must be transport agnostic and give the network designer the flexibility to choose any transport between data centers based on business and operational preferences. This has a huge benefit over traditional data center interconnect solutions. The loop prevention mechanisms prevent loops on the overlay and prevent loops from the sites that are multihomed to the overlay. but will usually involve a mix of complex protocols. This is built-in functionality. and there is no need to establish virtual circuits between the data center sites. Multihoming with end-to-end loop prevention: LAN extension techniques should provide a high degree of resiliency. Each Ethernet frame is encapsulated into an IP packet and delivered across the transport infrastructure. Multicast and broadcast traffic should also be replicated optimally to reduce bandwidth consumption. 
OTV must be deployed only at specific edge devices. meaning a control plane protocol is used to exchange MAC address reachability information between network devices providing the LAN extension functionality.

These terminologies are used throughout the chapter.5 or later Advanced IP Services or Advanced Enterprise Services license Note This chapter focuses on OTV implementation using the Nexus 7000 Series. OTV configuration is done on the edge device and is completely transparent to the rest of the infrastructure.0(3) or later (Cisco Nexus 7000 Series) or Cisco NX-OS Release 6. which is shown in Figure 4-1. you must know OTV terminology. Each site can have one or more edge devices. Figure 4-1 OTV Terminology OTV edge device: The edge device performs OTV functions. it receives the Layer 2 traffic for all VLANS that need to be extended to remote sites and dynamically encapsulates the Ethernet frames into IP packets and sends them across through the transport infrastructure. OTV Terminology Before you learn how OTV control plane and data plane works.Any M-Series (Cisco Nexus 7000 Series) or F3 (Cisco Nexus 7000 Series or 7700 platform) line card for encapsulation Cisco NX-OS Release 5. Because the edge device needs to see the .2(2) or later (Cisco Nexus 7700 platform) Transport Services license Cisco ASR 1000 Series: Cisco IOS XE Software Release 3.

OTV internal interface: The Layer 2 interface on the edge device that connects to the VLANs to be extended is called the internal interface. The internal interface is usually configured as an access or trunk interface and receives Layer 2 traffic. The internal interface doesn't need any specific OTV configuration; typical Layer 2 functions, such as local switching, spanning tree, and flooding, are performed on the internal interfaces.

OTV join interface: The OTV join interface is a Layer 3 interface. At the time of writing this book, it can be a physical interface or subinterface, or it can be a logical interface, such as a Layer 3 port channel or a Layer 3 port-channel subinterface. You can have only one join interface associated with a given overlay, and you can have one join interface shared across multiple overlays. The join interface "joins" the overlay network and discovers the other remote overlay edge devices. The IP address of the join interface is used to advertise reachability of the MAC addresses present in the site.

OTV overlay interface: The OTV overlay interface is a logical multiaccess and multicast-capable interface. It must be defined by the user, and all the OTV configuration is applied to the overlay interface. When the OTV edge device receives a Layer 2 frame that needs to be delivered to a remote data center site, the frame is logically forwarded to the overlay interface, where it gets encapsulated in an IP unicast or multicast packet and sent to the remote data center.

Site: An OTV site is the same as a data center, which has VLANs that must be extended to another site, meaning another data center.

Transport network: This is the network that connects the OTV sites. It can be owned by the customer, by the service provider, or both. If the transport infrastructure is owned by the service provider, the only requirement is to allow IP reachability between the edge devices.

Overlay network: This is a logical network that interconnects the OTV edge devices.

Site VLAN: OTV edge devices send hellos on the site VLAN to detect other OTV edge devices in the same site. The site VLAN is used to elect the authoritative edge device for the VLANs. It is recommended to use a dedicated VLAN as the site VLAN and not to extend it across the overlay.

Authoritative edge device: OTV elects a designated forwarding edge device per site for each VLAN to provide loop-free multihoming. The forwarder is known as the Authoritative Edge Device (AED). The edge devices talk to each other on the internal interfaces to elect the AED. The election is part of the OTV control plane and doesn't need additional configuration.

Prior to NX-OS release 5.2(1), OTV used only the site VLAN within a site to detect and establish adjacencies with other OTV edge devices. However, a mechanism for electing AEDs that is based only on the communication established on the site VLAN can create situations (resulting from connectivity issues or misconfiguration) where OTV edge devices belonging to the same site fail to detect one another, thereby ending up in an active/active mode for the same data VLAN. This could ultimately result in the creation of a loop scenario. To address this concern, starting with the 5.2(1) NX-OS release, each OTV device maintains dual adjacencies with other OTV edge devices belonging to the same DC site. OTV edge devices continue to use the site VLAN to discover and establish adjacency with other OTV edge devices in a site. This adjacency is called the site adjacency.

In addition to the site adjacency, OTV devices maintain a second adjacency, named the overlay adjacency, established via the join interfaces across the Layer 3 network domain. To enable this functionality, it is now mandatory to configure each OTV device with a site-identifier value. All edge devices that are in the same site must be configured with the same site identifier. This site identifier is advertised in IS-IS hello packets sent both over the overlay and on the site VLAN. The combination of the site identifier and the IS-IS system ID is used to identify a neighbor edge device in the same site.

OTV Control Plane
One of the key differentiators of OTV is the use of a control plane protocol, which runs between the OTV edge devices. The control protocol function is to advertise MAC address reachability information instead of using data plane learning, which relies on flooding of Layer 2 traffic across the transport infrastructure. To be able to exchange MAC reachability information, OTV edge devices must first become adjacent to each other. This can happen in two ways, depending on the nature of the transport infrastructure:

Multicast Enabled Transport: If the transport infrastructure is enabled for multicast, a multicast group is used to exchange control protocol messages between the OTV edge devices.

Unicast Enabled Transport: If the transport infrastructure is not enabled for multicast, you can use one or more OTV edge devices as an adjacency server. All OTV edge devices belonging to a specific overlay register with the available list of adjacency servers.

Multicast Enabled Transport Infrastructure
When the transport infrastructure is enabled for multicast, all OTV edge devices should be configured to join a specific Any Source Multicast (ASM) group, which is used to carry control protocol exchanges. If the transport infrastructure is owned by the service provider, the enterprise must negotiate the use of this ASM group with the service provider. The following steps explain the sequence, which leads to the discovery of all OTV edge devices belonging to a specific overlay:

1. Each OTV edge device sends an IGMP report to join the specific ASM group. The edge devices join the group as hosts, leveraging the join interface, and each edge device is both a source and a receiver on that ASM group. This happens without enabling PIM on this interface; only IGMPv3 is enabled on the interface, using the ip igmp version 3 command. You need only to specify the ASM group to be used and associate it with a given overlay interface.

2. The OTV control protocol running on each OTV edge device generates hello packets that need to be sent to all other OTV edge devices. This is required to communicate its existence and to trigger the establishment of control plane adjacencies within the same overlay.

3. The OTV hello messages are sent across the logical overlay to reach all remote OTV edge devices. For this to happen, the original frames must be OTV encapsulated, adding an external IP header. The source IP address in the external header is set to the IP address of the join interface of the edge device, whereas the destination is the multicast address of the ASM group dedicated to carrying the control protocol.

4. The resulting multicast frame is then sent to the join interface toward the Layer 3 network domain.

5. The multicast frames are carried across the transport and optimally replicated to reach all the OTV edge devices that joined that multicast group.

6. The receiving OTV edge devices decapsulate the packets, and the hellos are passed to the control protocol process.

The same process occurs in the opposite direction, and the end result is the creation of OTV control protocol adjacencies between all edge devices. The use of the ASM group to transport the hello messages allows the edge devices to discover each other as if they were deployed on a shared LAN segment.

The routing protocol used to implement the OTV control plane is IS-IS. It was selected because it is a standards-based protocol, originally designed with the capability of carrying MAC address information in its TLVs. For security, it is possible to leverage the IS-IS HMAC-MD5 authentication feature to add an HMAC-MD5 digest to each OTV control protocol message. This prevents unauthorized routing messages from being injected into the network routing domain.

After the OTV devices discover each other, the next step is to exchange MAC address reachability information:

1. The OTV edge device learns new MAC addresses on its internal interface using traditional data plane learning.

2. An OTV update message is created containing information for the MAC address learned through the internal interface. The message is OTV encapsulated and sent into the OTV transport infrastructure. The IP destination address of the packet in the outer header is the multicast group used for OTV control protocol exchanges, and the source IP address in the external header is set to the IP address of the join interface of the OTV edge device.

3. The OTV update is replicated in the transport infrastructure and delivered to all the remote OTV edge devices on the same overlay. The OTV edge devices receiving the update decapsulate the message and deliver it to the OTV control process.

4. The MAC address reachability information is imported into the MAC address tables of the edge devices. The only difference from a traditional MAC address table entry is that for a MAC address in the remote data center, instead of pointing to a physical interface, the entry points to the join interface of the remote OTV edge device.

Note
To enable OTV functionality, you must use the feature otv command; the device doesn't display any OTV commands until you enable the feature. After you enable the OTV overlay interface, the OTV control plane protocol is enabled automatically; it doesn't require a specific configuration command.

Unicast-Enabled Transport Infrastructure
OTV can be deployed over a unicast-only transport infrastructure; the unicast deployment of OTV is used when a multicast-enabled transport infrastructure is not an option. The OTV control plane over a unicast transport infrastructure works the same way it does over a multicast transport. The only difference is that each OTV edge device must create a copy of each control plane packet and send it to each remote OTV edge device.

To be able to send control protocol messages to all the OTV edge devices, each OTV edge device needs to know the list of the other devices that are part of the same overlay. This is achieved by having one or more OTV edge devices act as an adjacency server. Each OTV edge device that wants to join a specific overlay registers with the adjacency server. When the adjacency server receives hello messages from the remote OTV edge devices, it builds the list of all the OTV edge devices that are part of the same overlay. The list is called a unicast-replication list, and it is periodically sent to all the listed OTV edge devices.

Data Plane for Unicast Traffic
After all the OTV edge devices that are part of an overlay establish adjacency and MAC address reachability information is exchanged, traffic can start flowing across the overlay. There are two types of Layer 2 communication: intrasite (traffic within the same site) and intersite (traffic between two different sites on the same overlay). The following process details the communication:

Intrasite unicast communication: Intrasite means that both servers that need to talk to each other reside in the same data center. As shown in Figure 4-2, MAC 1 needs to talk to MAC 2. They both belong to Site A, and they are in the same VLAN. When a Layer 2 frame is received at the aggregation device, which is the OTV edge device in this case, a normal Layer 2 lookup determines how to reach MAC 2. The information in the MAC table points to Eth1, and the frame is delivered to the destination.

Figure 4-2 Intrasite Layer 2 Unicast Communication

Intersite unicast communication: Intersite means that the servers that need to talk to each other reside in different data centers but belong to the same overlay.

As shown in Figure 4-3, MAC 1 needs to talk to MAC 3. MAC 1 belongs to Site A, and MAC 3 belongs to Site B. The following procedure details the process for MAC 1 to establish communication with MAC 3:

1. The Layer 2 frame is received at the aggregation device, which is the OTV edge device at Site A in this case, and a normal Layer 2 lookup happens. This time, MAC 3 in the MAC table in Site A points to the IP address of the remote OTV edge device in Site B.

2. The OTV edge device in Site A encapsulates the Layer 2 frame. The source IP of the outer header is the IP address of the join interface of the OTV edge device in Site A, and the destination IP address in the outer header is the IP address of the join interface of the remote OTV edge device in Site B. After that, a normal IP unicast packet is sent across the transport infrastructure.

3. When the remote OTV edge device receives the IP packet, it decapsulates it and exposes the original Layer 2 packet.

4. The OTV edge device at Site B performs another Layer 2 lookup on the original Ethernet frame. This time, MAC 3 points to a local physical interface, and the frame is delivered to the MAC 3 destination.

Figure 4-3 Intersite Layer 2 Unicast Communication

Figure 4-4 shows the OTV data plane encapsulation of the original Ethernet frame. The OTV edge device removes the CRC and the 802.1Q fields from the original Layer 2 frame and adds an OTV shim (containing the VLAN and the overlay information) and an external IP header. The OTV encapsulation increases the overall MTU size by 42 bytes.

Figure 4-4 OTV Data Plane Encapsulation

All OTV control and data plane packets originate from an OTV edge device with the Don't Fragment (DF) bit set. In a Layer 2 domain, the assumption is that all intermediate LAN segments support at least the configured interface MTU size of the host, so mechanisms like Path MTU Discovery (PMTUD) are not an option in this case. You therefore need to accommodate the additional 42 bytes in the MTU size along the path between the source and destination OTV edge devices, including the join interface and any links between the data centers that are in the OTV transport.

Note
The Nexus 7000 doesn't support fragmentation and reassembly capabilities.

There are two ways to overcome the issue of the increased MTU size (see the sketch that follows):

Configure a larger MTU on all interfaces where traffic will be encapsulated.

Lower the MTU on all servers so that the total packet size does not exceed the MTU of the interfaces where traffic is encapsulated.
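The first approach is usually the simpler one. The following is a minimal sketch of raising the MTU on the join interface; it assumes the join interface is Ethernet 2/2, as in the configuration examples later in this chapter, and the same change would be needed on every Layer 3 link in the OTV transport path:

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# mtu 9216

A jumbo MTU such as 9216 comfortably absorbs the 42 bytes of OTV overhead; any value at least 42 bytes above the largest server frame size would also work.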

Data Plane for Multicast Traffic
You might be faced with situations in which there is a need to establish multicast communication between remote sites. This can happen when the multicast sources are deployed in one site and the multicast receivers are in another remote site, so the multicast traffic needs to flow across the OTV overlay. In the OTV control plane section, we distinguished between two scenarios, where the transport infrastructure is multicast enabled or not.

When we have a multicast-enabled transport, a set of Source Specific Multicast (SSM) groups in the transport is used to carry the Layer 2 multicast streams. These groups are different from the ASM group used for the OTV control plane. The mapping happens between the site multicast groups and the SSM group range in the transport infrastructure. Step 1 starts when a multicast source is activated in a given data center site, which is explained in the following steps and shown in Figure 4-5.

1. A multicast source is activated on the west side and starts streaming traffic to the multicast group Gs.

2. The local OTV edge device receives the first multicast frame and creates a mapping between the group Gs and a specific SSM group Gd available in the transport infrastructure. The range of SSM groups to be used to carry Layer 2 multicast data streams is specified during the configuration of the overlay interface.

3. The OTV device sends an OTV control protocol message to all the remote edge devices to communicate this information. The mapping information specifies the VLAN (VLAN A) to which the multicast source belongs and the IP address of the OTV edge device that created the mapping.

Figure 4-5 Mapping of the Site Multicast Group to the SSM Range

Step 2 starts when a receiver in the remote site, deployed in the same VLAN as the multicast source, decides to join the multicast group Gs, which is explained in the following steps and shown in Figure 4-6.

1. The client sends an IGMP report inside the east site to join the Gs group.

2. The OTV edge device snoops the IGMP message and realizes there is an active receiver in the site interested in group Gs, belonging to VLAN A.

3. The edge device in the east side finds the mapping information previously received from the OTV edge device in the west side, identified by the IP address IP A. The east edge device, in turn, sends an IGMPv3 report to the transport to join the (IP A, Gd) SSM group. This allows building an SSM tree (group Gd) across the transport infrastructure that can be used to deliver the multicast stream Gs.

4. The OTV control protocol is used to communicate the Gs-to-Gd mapping to all remote OTV edge devices (a GM-Update).

5. The remote edge device in the west side receives the GM-Update and updates its Outgoing Interface List (OIL) with the information that group Gs needs to be delivered across the OTV overlay.

Figure 4-6 Receiver Joining Multicast Group Gs

When the OTV transport infrastructure is not multicast enabled, Layer 2 multicast traffic can be sent across the overlay using head-end replication by the OTV edge device deployed in the DC where the multicast source is enabled. IGMP snooping is used to ensure that Layer 2 multicast traffic is sent to a remote DC only when an active receiver exists there. This helps to reduce the amount of head-end replication.

Broadcast traffic is handled the same way as the OTV hello messages. When we have a multicast-enabled transport, the same ASM multicast group already used for the OTV control protocol is leveraged. For a unicast-only transport infrastructure, head-end replication performed on the OTV device in the site originating the broadcast ensures traffic delivery to all the remote OTV edge devices that are part of the unicast-only list.

Note
Cisco NX-OS 6.2(2) introduces a new optional feature called dedicated broadcast group. This feature enables the administrator to configure a dedicated broadcast group to be used by OTV in the multicast core network. By separating the two multicast addresses, you can enforce different QoS treatments for data traffic and control traffic.

Failure Isolation
Data center interconnect solutions should provide Layer 2 extension between multiple data center sites without giving up resilience and stability. OTV achieves this goal by providing four main functions: Spanning Tree Protocol (STP) isolation, unknown unicast traffic suppression, ARP suppression, and broadcast policy control. This section explains each of these functions.

STP Isolation
OTV doesn't forward STP BPDUs across the overlay. This is native in OTV; you don't need to add any configuration to achieve it. This control plane feature makes each site's STP domain independent. The STP root placement configuration and parameters are decided for each site separately.

Unknown Unicast Handling
The OTV control protocol advertises MAC address reachability information between OTV edge devices. When an OTV edge device receives a frame destined to a MAC address that is not in the MAC address table, the Layer 2 traffic is flooded out the internal interfaces. This is the normal behavior of a classical Ethernet interface; however, that flooding doesn't happen via the overlay. By default, unknown unicast frames are not forwarded across the logical overlay. It is assumed that no silent hosts are present and that eventually the OTV edge device will learn the MAC address of the host and advertise it to the other OTV edge devices on the overlay. If the MAC address is in a remote data center, the OTV control protocol maps the MAC address to the destination IP address of the join interface of the remote data center.

This behavior is important because some applications, such as Microsoft Network Load Balancing Services (NLBS), require the flooding of Layer 2 traffic to function. As a workaround, you can statically add the MAC address of the silent host to the OTV edge device MAC address table. Starting with NX-OS 6.2(2), selective unknown unicast flooding based on the MAC address is supported; the command otv flood mac mac-address vlan vlan-id must be added.

ARP Optimization
ARP optimization is one of the native features in OTV that helps to reduce the amount of traffic sent across the transport infrastructure. When a device in one of the data centers, such as Site A, sends out an ARP request, which is a broadcast message, this request is sent across the OTV overlay to Site B. The ARP request eventually reaches the destination machine, which in turn creates an ARP reply message and sends it back to the originating host. The OTV edge device in Site A can snoop the ARP reply and cache the contained mapping information. When a subsequent ARP request for the same IP address originates in Site A, it is not forwarded across the OTV overlay but is answered locally by the OTV edge device. This helps to reduce the amount of traffic sent across the transport infrastructure.

When using ARP caching, consider the interaction between the ARP and CAM table aging timers, because incorrect settings can lead to traffic black-holing. The ARP aging timer on the OTV edge devices should always be set lower than the CAM table aging timer. By default on the Nexus 7000, the OTV ARP aging timer is 480 seconds and the MAC aging timer is 1800 seconds.

Note
For ARP caching, if the host's default gateway is placed on a device other than the Nexus 7000, you should set the ARP aging timer of that device to a value lower than its MAC aging timer.
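As a quick illustration of keeping the timers in the safe relationship described above, the following sketch verifies the MAC (CAM) aging timer and sets it so that it stays above the default 480-second OTV ARP aging timer; the 1800-second value shown is simply the Nexus 7000 default:

OTV-VDC-A# show mac address-table aging-time
OTV-VDC-A(config)# mac address-table aging-time 1800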

OTV Multihoming
Multihoming is a native function in OTV in which two or more OTV edge devices provide LAN extension for a given site. Because OTV doesn't send spanning tree BPDUs across the overlay, having more than one OTV edge device at a given site could otherwise lead to an end-to-end loop. To support multihoming and avoid creating such a loop, the Authoritative Edge Device (AED) concept is introduced. The AED role is elected on a per-VLAN basis at a given site.

OTV uses a VLAN called the site VLAN within a site to detect adjacency with other OTV edge devices. In addition to this site adjacency, OTV maintains a second adjacency, named the overlay adjacency, which is established via the OTV join interface across the Layer 3 network domain. To enable this functionality, the OTV device must be configured with a site-identifier value. All OTV edge devices in the same site are configured with the same site-identifier value, and the site identifier is advertised through IS-IS hello packets. The combination of the system ID and the site identifier is used to identify a neighbor edge device belonging to the same site.

First Hop Redundancy Protocol Isolation
Another capability introduced by OTV is filtering of First Hop Redundancy Protocol (FHRP) messages, such as HSRP and VRRP, across the overlay. This is necessary to localize the gateway at a given site and optimize the outbound traffic from that site. Because the same VLANs exist in multiple sites, the same default gateway, characterized by the same virtual IP and virtual MAC addresses, is available in each data center. If we don't filter the FHRP hello messages between the sites, a single default gateway is elected for a given VLAN across all sites, which leads to a suboptimal path for the outbound traffic. It is important that you enable the filtering of FHRP messages across the overlay; it allows the use of the same FHRP configuration in different sites, so the outbound traffic can follow the optimal and shortest path, always using the local default gateway. One common way to do this is shown in the sketch that follows.
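FHRP filtering is not enabled by a single command; a commonly documented approach from Cisco's OTV deployment guidance is to drop FHRP hellos with a VLAN access map in the OTV VDC. The following is a sketch covering HSRPv1/v2 only (VRRP would need its own entries) and assuming the extended VLANs 5-10 used in the examples that follow; HSRP hellos use UDP port 1985 toward 224.0.0.2 (version 1) or 224.0.0.102 (version 2):

ip access-list ALL_IPs
  10 permit ip any any
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward
vlan filter HSRP_Localization vlan-list 5-10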

OTV Configuration Example with Multicast Transport
This configuration example addresses OTV deployment at the aggregation layer with a multicast transport infrastructure. Figure 4-7 shows the topology used. Examples 4-1 through 4-5 use a dedicated VDC for the OTV function. It is assumed the VDC is already created, so you will not see steps to create the Nexus 7000 VDC. VLANs 5 through 10 will be extended across the overlay, with VLAN 15 as the site VLAN.

Figure 4-7 OTV Topology for Multicast Transport

Example 4-1 Sample Configuration for How to Enable OTV VDC VLANs

OTV-VDC-A(config)# vlan 5-10, 15

Example 4-2 Sample Configuration of the Join Interface

OTV-VDC-A(config)# interface ethernet 2/2
OTV-VDC-A(config-if)# description [ OTV Join-Interface ]
OTV-VDC-A(config-if)# ip address 172.26.255.98/30
OTV-VDC-A(config-if)# ip igmp version 3
OTV-VDC-A(config-if)# no shutdown

Example 4-3 Sample Configuration of the Point-to-Point Interface on the Aggregation Layer Facing the Join Interface

AGG-VDC-A(config)# interface ethernet 2/1
AGG-VDC-A(config-if)# description [ Connected to OTV Join-Interface ]
AGG-VDC-A(config-if)# ip address 172.26.255.99/30
AGG-VDC-A(config-if)# ip pim sparse-mode
AGG-VDC-A(config-if)# ip igmp version 3
AGG-VDC-A(config-if)# no shutdown

Example 4-4 Sample PIM Configuration on the Aggregation VDC

AGG-VDC-A# show running pim
feature pim
interface ethernet 2/1
  ip pim sparse-mode
interface ethernet 1/10
  ip pim sparse-mode
interface ethernet 1/20
  ip pim sparse-mode
ip pim rp-address 172.26.255.101
ip pim ssm range 232.0.0.0/8

Example 4-5 Sample Configuration of the Internal Interfaces

interface ethernet 2/10
  description [ OTV Internal Interface ]
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 5-10, 15
  no shutdown

Before creating and configuring the overlay interface, certain configurations must be done. The following shows a sample configuration for the overlay interface for a multicast-enabled transport infrastructure:

1. Enable the OTV feature; otherwise, you will not see any OTV commands:

OTV-VDC-A(config)# feature otv

2. Configure the OTV site identifier. (It should be the same for all the OTV edge devices belonging to the same site.)

OTV-VDC-A(config)# otv site-identifier 0x1

3. Configure the OTV site VLAN:

OTV-VDC-A(config)# otv site-vlan 15

4. Create and configure the overlay interface:

OTV-VDC-A(config)# interface overlay 1
OTV-VDC-A(config-if)# otv join-interface ethernet 2/2

5. Configure the OTV control group:

OTV-VDC-A(config-if)# otv control-group 239.1.1.1

6. Configure the OTV data group:

OTV-VDC-A(config-if)# otv data-group 232.1.1.0/26

7. Configure the VLAN range to be extended across the overlay:

OTV-VDC-A(config-if)# otv extend-vlan 5-10

8. Create a static default route pointing out the join interface. This is required to allow communication from the OTV VDC to the Layer 3 network domain:

OTV-VDC-A(config)# ip route 0.0.0.0/0 172.26.255.99

9. Bring up the overlay interface; it is shut down by default:

OTV-VDC-A(config-if)# no shutdown

Example 4-6 Verify That the OTV Edge Device Joined the Overlay

OTV-VDC-A# sh otv overlay 1
OTV Overlay Information
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Control group       : 239.1.1.1
 Data group range(s) : 232.1.1.0/26
 Join interface(s)   : Eth2/2 (172.26.255.98)
 Site vlan           : 15 (up)

OTV-VDC-A# show otv adjacency
Overlay Adjacency database
Overlay-Interface Overlay1 :
Hostname    System-ID       Dest Addr       Up Time  Adj-State
OTV-VDC-C   0022.5579.36c2  172.26.255.90   1w0d     UP
OTV-VDC-D   0022.5579.7c42  172.26.255.94   1w0d     UP

Example 4-7 Check the MAC Addresses Learned Through the Overlay

OTV-VDC-A# show otv route
OTV Unicast MAC Routing Table For Overlay1
VLAN MAC-Address    Metric  Uptime    Owner    Next-hop(s)
---- -------------- ------  --------  -------  -----------
10   0000.0c07.ac64 1       00:00:55  site     ethernet 2/10
10   001b.54c2.3dc1 1       00:00:55  overlay  OTV-VDC-C
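When a site has two OTV edge devices, it is also useful to confirm which device is the AED for a given VLAN and that both the site and overlay adjacencies are up. The following commands are a sketch of that verification (output omitted; the exact fields vary by NX-OS release):

OTV-VDC-A# show otv site
OTV-VDC-A# show otv vlan

show otv site displays the site adjacency and the site identifier, and show otv vlan lists each extended VLAN along with the edge device that is currently authoritative for it.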

OTV Configuration Example with Unicast Transport
Examples 4-8 through 4-13 show sample configurations for an OTV deployment over a unicast-only transport. The first step is to define the role of the adjacency server on one of the OTV edge devices. It is always recommended to configure a secondary adjacency server in another site for redundancy. The second step is to do the required configuration on the OTV edge devices not acting as an adjacency server, meaning those acting as clients. The client devices are configured with the address of the adjacency server. When a new site is added, only the OTV edge devices for the new site need to be configured with the adjacency server addresses. Figure 4-8 shows the topology used in Examples 4-8 through 4-13.

Figure 4-8 OTV Topology for Unicast Transport

Example 4-8 Primary Adjacency Server Configuration

feature otv
otv site-identifier 0x1
otv site-vlan 15
interface Overlay1
  otv join-interface e2/2
  otv adjacency-server unicast-only
  otv extend-vlan 5-10

Example 4-9 Secondary Adjacency Server Configuration

feature otv
otv site-identifier 0x2
otv site-vlan 15
interface Overlay1
  otv join-interface e1/2
  otv adjacency-server unicast-only
  otv use-adjacency-server 10.1.1.1 unicast-only
  otv extend-vlan 5-10

Example 4-10 Client OTV Edge Device Configuration

feature otv

otv site-identifier 0x3
otv site-vlan 15
interface Overlay1
  otv join-interface e1/1
  otv use-adjacency-server 10.1.1.1 10.2.2.2 unicast-only
  otv extend-vlan 5-10

Example 4-11 Identifying the Role of the Primary Adjacency Server

Primary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0001
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E2/2 (10.1.1.1)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : [None] / [None]

Example 4-12 Identifying the Role of the Secondary Adjacency Server

Secondary_AS# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0002
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/2 (10.2.2.2)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : Yes
 Adjacency Server(s) : 10.1.1.1 / [None]

Example 4-13 Identifying the Role of the Client OTV Edge Device

Generic_OTV# show otv overlay 1
OTV Overlay Information
Site Identifier 0000.0000.0003
Overlay interface Overlay1
 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 5-10 (Total:6)
 Join interface(s)   : E1/1 (10.3.3.3)
 Site vlan           : 15 (up)
 AED-Capable         : Yes
 Capability          : Unicast-Only
 Is Adjacency Server : No
 Adjacency Server(s) : 10.1.1.1 / 10.2.2.2

Summary
In this chapter, we went through the fundamentals of OTV and how it can be used as a data center interconnect solution. When it comes to Layer 2 extension solutions, OTV is the latest innovation. OTV uses a concept called MAC address routing. You can run OTV over any transport infrastructure; the only requirement is IP connectivity between the sites. It provides native multihoming and end-to-end loop prevention, and it is simple to configure. With a few command lines, you can extend Layer 2 between multiple sites.

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 4-2 lists a reference of these key topics and the page numbers on which each is found.

Table 4-2 Key Topics for Chapter 4

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

overlay transport virtualization (OTV)
Data Center Interconnect (DCI)
Ethernet over Multiprotocol Label Switching (EoMPLS)
Multiprotocol Label Switching (MPLS)
Virtual Private LAN Services (VPLS)
coarse wavelength-division multiplexing (CWDM)
Spanning Tree Protocol (STP)
Authoritative Edge Device (AED)
virtual LAN (VLAN)
bridge protocol data unit (BPDU)

Media Access Control (MAC)
Address Resolution Protocol (ARP)
Virtual Router Redundancy Protocol (VRRP)
First Hop Redundancy Protocol (FHRP)
Hot Standby Router Protocol (HSRP)
Internet Group Management Protocol (IGMP)
Protocol Independent Multicast (PIM)

Chapter 5. Management and Monitoring of Cisco Nexus Devices

This chapter covers the following exam topics:

Control Plane, Data Plane, and Management Plane
Initial Setup of Nexus Switches
Nexus Device Configuration and Verification

Management and monitoring of network devices is an important element of data center operations. This includes initial configuration of devices and ongoing maintenance of all the components. An efficient and rich management and monitoring interface enables a network operations team to effectively operate and maintain the data center network. The operational efficiency is an important business priority because it helps in reducing the operational expenditure (OPEX) of the data center. The Cisco Nexus platform provides a rich set of management and monitoring mechanisms.

This chapter provides an overview of the operational planes of the Nexus platform. It explains the functions performed by each plane and the key features of the data plane, control plane, and management plane. It also identifies the mechanisms available in the Nexus platform to protect the control plane of the switch. The Nexus platform provides several methods for device configuration and management. These methods are discussed with some important commands for initial setup, configuration, and verification. The chapter also provides an overview of out-of-band and in-band management interfaces.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 5-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 5-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following operational planes is responsible for maintaining the routing and switching tables of a multilayer switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

2. Which of the following operational planes of a switch is responsible for configuring and monitoring the switch?
a. Control plane
b. Data plane
c. Management plane
d. User plane

3. What is the function of the data plane in a switch?
a. It forwards user traffic.
b. It maintains the routing table.
c. It stores user data on the switch.
d. It maintains the mac-address-table.

4. Select the protocols that belong to the control plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

5. Select the protocols that belong to the management plane of the switch.
a. SNMP
b. NETCONF
c. STP
d. LACP

6. The initial setup script of a Nexus switch performs the startup configuration of a system. Which of the following parameters are configurable by the initial setup script?
a. Switch name
b. mgmt0 IP address and subnet mask
c. Admin user password
d. Default route in the global routing table

7. When you execute the write erase command, it deletes the startup configuration of the switch. Which of the following parameters are not deleted by this command?
a. Switch name
b. mgmt0 IP address and subnet mask
c. Admin user password
d. IP address of loopback 0
e. Boot variable

8. To upgrade Nexus switch software, which of the following methods can be used?
a. From the CLI using the install all command
b. From a web browser using HTTP protocol
c. Using DCNM software
d. From an NMS server using SNMP

9. When using ISSU to upgrade the Nexus 7000 switch software, which of the following components can be upgraded? (Choose all that apply.)
a. Kickstart image
b. Supervisor BIOS
c. System image
d. FEX image
e. I/O module BIOS and image

10. Identify the conditions in which In Service Software Upgrade (ISSU) cannot be performed on a Nexus 5000 switch.
a. FEX is connected to the switch.
b. Link Aggregation Control Protocol (LACP) fast timers are configured.
c. vPC is configured on the switch.
d. An application on the switch has a Cisco Fabric Services (CFS) lock enabled.

11. You are configuring a Cisco Nexus switch and would like to create a VLAN interface for VLAN 10. However, when you type the interface vlan 10 command, you get an error message. What could be the possible reason for this error?
a. First, you must create VLAN 10 on the switch, and then create the SVI.
b. First, enable the feature interface-vlan, and then create the SVI.
c. The appropriate software license is missing.
d. The command syntax is incorrect.

12. Which of the following are examples of logical interfaces on a Nexus switch?
a. mgmt0
b. E1/1
c. Loopback
d. Port channel
e. SVI

13. You want to check the current configuration of a Nexus switch, with all the default configuration parameters. Which of the following commands is used to perform this task?
a. show running-config default
b. show startup-config
c. show running-config all
d. show running-config

14. You want to see the list of interfaces that are currently in the up state. Which command will show you a brief list of interfaces in the up state?
a. show interface up brief
b. show interface brief | include up
c. show interface status
d. show interface connected

15. You want to check the last 10 entries of the log file. Which command do you use to perform this task?
a. show logging
b. show last 10 logging
c. show logging last 10
d. show logging 10

Foundation Topics

Operational Planes of a Nexus Switch
The data center network is built using multiple network nodes, or devices, connected to each other and configured in different topologies. These network nodes are switches and routers. The purpose of building such network controls is to provide reliable data transport for network endpoints. These network endpoints are servers and workstations. Typically, these network devices talk to each other using different control protocols to find the best network path. When a network endpoint sends a packet (data), it is switched by the network devices based on the best path they learned using the control protocols. In a large network with hundreds of these devices, it is important to build a management framework to monitor and manage them. This simple explanation of a network node leads to its three operational components: the data plane, the control plane, and the management plane. Figure 5-1 shows the operational planes of a Nexus switch in a data center.

Figure 5-1 Operational Plane of Nexus

Data Plane
The data plane is also called the forwarding plane. This component of a network device receives endpoint traffic from a network interface and decides what to do with this traffic. It can process incoming traffic by looking at the destination address in its forwarding table and deciding on an appropriate action. This action could be to forward the traffic to an outgoing interface, drop the traffic, or send it to the control plane for further processing. In a typical Layer 2 switch, forwarding decisions are based on the destination MAC address of data packets. The switch then performs the forwarding process.

The earlier method required the packet to be stored in memory before a forwarding decision could be made; however, with ASICs and FPGAs, packets can be switched without storing them in memory. The advancements in integrated circuit technologies allowed network device vendors to move the Layer 2 forwarding decision from Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) processors to Application-Specific Integrated Circuits (ASIC) and Field-Programmable Gate Arrays (FPGA). This approach is helpful in reducing the packet-handling time within the network device, thereby improving the data plane latency of network switches to tens of microseconds.

Store-and-Forward Switching
In store-and-forward switches, data packets received from an incoming interface must be completely stored in switch memory or ASIC before being sent to an outgoing interface. Following are a few characteristics of store-and-forward switching:

Error Checking: Store-and-forward switches receive each frame completely and analyze it using the frame-check-sequence (FCS) calculation to make sure the frame is free from data-link errors.

Buffering: The process of storing and then forwarding enables the switch to handle various networking conditions. The ingress buffering process provides the flexibility to support any mix of Ethernet speeds.

Access Control List: Because a store-and-forward switch stores the entire packet, the switch can examine the necessary portions to permit or deny that frame.

Cut-Through Switching
In the cut-through switching mode, switches start forwarding a frame before the frame has been completely received. The forwarding decision is made as soon as the switch sees the portion of the frame with the destination address. This technique reduces the network latency. Following are a few characteristics of cut-through switching:

Low Latency Switching: A cut-through switch can make a forwarding decision as soon as it has looked up the DMAC address of the data packet. The switch does not have to wait for the rest of the packet to make its forwarding decision.

Invalid Packets: A cut-through switch starts forwarding a packet before receiving it completely and making sure it is valid. Therefore, packets with errors are forwarded to other segments of the network; cut-through switching only flags but does not drop invalid packets.

Egress Port Congestion: If a cut-through forwarding decision is made but the egress port is busy transmitting a frame coming in from another interface, the frame is not forwarded in a cut-through fashion. In this case, the switch needs to buffer the packet.

Nexus 5500 Data Plane Architecture
The data plane architecture of Nexus switches varies based on the hardware platform; however, the basic principles of data plane operation are the same. The Cisco Nexus 5500 switch data plane architecture is primarily based on two custom-built ASICs developed by Cisco:

Unified Port Controller (UPC): Provides data plane packet processing on ingress and egress interfaces. Each UPC manages eight ports, and each port has a dedicated data path. Each data path connects to the Unified Crossbar Fabric (UCF) via a dedicated fabric interface at 12 Gbps; for a 10 Gbps port, this provides a 20 percent over-speed rate, which helps ensure line-rate throughput regardless of the internal packet overhead imposed by the ASICs.

Unified Crossbar Fabric (UCF): Provides the switching fabric by cross-connecting the UPCs. It is a single-stage, high-performance, nonblocking crossbar with an integrated scheduler. The UCF is always used to schedule and switch packets between the ports of the UPCs, and the scheduler coordinates the use of the crossbar for a contention-free match between input and output pairs. In the Nexus 5500, an enhanced iSLIP algorithm is used for scheduling, which ensures high throughput, low latency, and weighted fairness across all inputs.

Figure 5-2 shows the Nexus 5500 data plane architecture.

Figure 5-2 Nexus 5500 Data Plane

The Nexus 5500 utilizes both cut-through and store-and-forward switching. Cut-through switching can be performed only when the ingress data rate is equivalent to or faster than the egress data rate. The switch fabric is designed to forward 10 G packets in cut-through mode, whereas 1 G to 1 G switching is performed in store-and-forward mode. Table 5-2 shows the interface types and modes of switching.

Table 5-2 Nexus 5500 Switching Modes

Control Plane

The control plane maintains the information necessary for the data plane to operate. This information is collected and computed using complex protocols and algorithms and is then processed to create the tables that help the data plane operate. The control plane of a network node can speak to the control plane of its neighbor to share routing and switching information. Routing protocols such as OSPF, EIGRP, or BGP are examples of control plane protocols. However, the control plane is not limited to routing protocols; it performs several other functions, including:

Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
FabricPath

Figure 5-3 shows the Layer 2 and Layer 3 control plane protocols of a Nexus switch.

Figure 5-3 Control Plane Protocols

Nexus 5500 Control Plane Architecture
The control plane architecture has similar components across almost all Nexus switches; however, the configuration of these components differs based on the platform and hardware version. To understand the control plane of Nexus, let's look at the Cisco Nexus 5548P switch. The control plane of this switch runs Cisco NX-OS software on a dual-core 1.7-GHz Intel Xeon Processor C5500/C3500 Series with 8 GB of DRAM. The supervisor complex is connected to the data plane in-band through two internal ports running 1-Gbps Ethernet, and the system is managed in-band or through the out-of-band 10/100/1000-Mbps management port. Table 5-3 summarizes the control plane specifications of a Nexus 5500 switch.

Table 5-3 Nexus 5500 Control Plane Specifications

Figure 5-4 shows the control plane of a Nexus 5500 switch.

Figure 5-4 Control Plane of Nexus 5500

Cisco NX-OS software provides isolation between the control and data forwarding planes within the device. This isolation means that a disruption within one plane does not disrupt the other.

Control Plane Policing

The Control Plane Policing (CoPP) feature prevents unnecessary traffic from overwhelming the control plane resources. The purpose of CoPP is to protect the control plane of the switch from anomalies within the data plane, such as a broadcast storm, or from a denial of service (DoS) attack. The supervisor module of the Nexus switch performs both the management plane and control plane functions; therefore, it is a critical element for overall network stability and availability. Any disruption of or attack on the supervisor module can bring down the control and management planes of the switch, resulting in serious network outages. For example, a high volume of traffic from the data plane to the supervisor module could overload and slow down the performance of the entire switch. Another example is a denial of service (DoS) attack on the supervisor module. The attacker can generate IP traffic streams to the control plane or management plane at a very high rate. In this case, the supervisor module spends a large amount of time handling these malicious packets, preventing the control plane from processing genuine traffic. Some examples of DoS attacks are outlined next:

Internet Control Message Protocol (ICMP) echo requests
IP fragments
TCP SYN flooding

If the control plane of the switch is impacted by excessive traffic, you will observe the following symptoms on the network:

High CPU utilization
Route flaps due to loss of the routing protocol neighbor relationship, updates, or keepalives
Unstable Layer 2 topology
Reduced service quality, such as poor voice, video, or slow response for applications
Slow or unresponsive interactive sessions with the CLI
Processor resource exhaustion, such as the memory and buffers
Indiscriminate drops of incoming packets

Different types of packets can reach the control plane:

Receive packets: Packets that have the destination address of a router. These packets include router updates and keepalive messages.

Exception packets: Packets that need special handling by the supervisor module. This includes an ICMP unreachable packet and a packet with IP options set.

Redirected packets: Packets that are redirected to the supervisor module. Features such as Dynamic Host Configuration Protocol (DHCP) snooping or dynamic Address Resolution Protocol (ARP) inspection redirect some packets to the supervisor module.

Glean packets: If a Layer 2 MAC address for a destination IP address is not present in the FIB, the supervisor module receives the packet and sends an ARP request to the host.

CoPP classifies the packets into different classes and provides a mechanism to individually control the rate at which the supervisor module receives these packets. The CoPP feature enables a policy map to be applied to the supervisor-bound traffic. This policy map is similar to a normal QoS policy and is applied to all traffic entering the switch from a nonmanagement port. Hence, CoPP ensures network stability, reachability, and packet delivery.

Packet classification can be done using the following parameters:

Source and destination IP address
Source and destination MAC address
VLAN
Source and destination port
Exception cause

After the packets are classified, the rate at which packets arrive at the supervisor module can be controlled by two mechanisms. One is called policing, and the other is called rate limiting. Policing is the monitoring of data rates and burst sizes for a particular class of traffic. Single-rate policers monitor the specified committed information rate (CIR) of traffic; dual-rate policers monitor both the CIR and the peak information rate (PIR) of traffic. A policer determines three colors or conditions for traffic: conform (green), exceed (yellow), and violate (red). You can define only one action for each condition. The actions can transmit the packet, mark down the packet, or drop the packet.

You can configure the following parameters for policing:

Committed information rate (CIR): The desired bandwidth.
Committed burst (BC): The size of a traffic burst that can exceed the CIR within a given unit of time.
Peak information rate (PIR): The rate above which data traffic is negatively affected.
Extended burst (BE): The size that a traffic burst can reach before all traffic exceeds the PIR.

The setup utility allows building an initial configuration file using the system configuration dialog. Choosing one of the following CoPP policy options from the initial setup sets the level of protection for the control plane of the switch:

Strict: This policy is 1 rate and 2 color and has a BC value of 250 ms (except for the important class, which has a value of 1000 ms).

Moderate: This policy is 1 rate and 2 color and has a BC value of 310 ms (except for the important class, which has a value of 1250 ms). These values are 25 percent greater than the strict policy.

Lenient: This policy is 1 rate and 2 color and has a BC value of 375 ms (except for the important class, which has a value of 1500 ms). These values are 50 percent greater than the strict policy.

Dense: This policy is 1 rate and 2 color. The classes critical, normal, redirect, exception, undesirable, l2-default, and default have a BC value of 250 ms. The classes important, management, normal-dhcp, normal-dhcp-relay-response, and monitoring have a BC value of 1000 ms. The class l2-unpoliced has a BC value of 5 MB.

Skip: No control plane policy is applied. (In Cisco NX-OS releases prior to 5.2, this option is named none.)

If you do not select a policy, the Cisco NX-OS software applies the strict policy. After the initial setup, the Cisco NX-OS software installs the default CoPP system policy to protect the supervisor module from DoS attacks.

CoPP does not support distributed policing. The Cisco Nexus device hardware performs CoPP on a per-forwarding-engine basis; therefore, you should choose rates so that the aggregate traffic does not overwhelm the supervisor module.
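You can verify which CoPP profile is in effect and watch the per-class counters with the following commands (a sketch; the copp profile command for switching profiles after the initial setup is available on the Nexus 7000 in recent NX-OS releases):

switch# show copp status
switch# show policy-map interface control-plane
switch(config)# copp profile moderate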

Control Plane Analyzer
Ethanalyzer is a Cisco NX-OS built-in protocol analyzer, based on the command-line version of Wireshark. You can use Ethanalyzer to troubleshoot your network by capturing and decoding the control plane traffic. The packets captured by Ethanalyzer are only the control plane traffic destined to the supervisor CPU; it does not capture hardware-switched traffic between the data ports of the switch. This tool is useful for troubleshooting problems related to the switch itself, and network administrators can use it to learn about the amount and nature of the control plane traffic within the switch. Following are the key benefits of Ethanalyzer:

It is part of the NX-OS integrated management tool set.
It improves troubleshooting and reduces time to resolution.
It preserves and improves the operational continuity of the network infrastructure.

Ethanalyzer enables you to perform the following functions:

Capture packets sent to and received by the switch supervisor CPU
Set the number of packets to be captured
Set the length of the packets to be captured
Display packets with either very detailed protocol information or a one-line summary
Open and save captured packet data
Filter packet captures based on many criteria
Filter packet displays based on many criteria
Decode the internal header of the control packet
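For example, the following sketch captures ten control plane packets arriving in-band at the supervisor, and then captures only ARP traffic with detailed decoding (the capture-filter string follows standard Wireshark/tcpdump syntax):

switch# ethanalyzer local interface inband limit-captured-frames 10
switch# ethanalyzer local interface inband capture-filter "arp" detail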

Management Plane
The management plane of the network node deals with the configuration and monitoring of the control plane, and it is used in the day-to-day administration of the network node. It provides an interface to network administrators (see Figure 5-5) and NMS tools to perform actions such as the following:

Configure the network node using the command-line interface (CLI).
Configure the network node using network management protocols such as Simple Network Management Protocol (SNMP) and Network Configuration Protocol (NETCONF).
Manage the network node using an element manager such as Data Center Network Manager (DCNM). DCNM leverages the XML APIs of the Nexus platform.
Program the network node using the One Platform Kit (onePK), Application Program Interfaces (API), Representational State Transfer (REST), JavaScript Object Notation (JSON), and a guest shell with support for Python.
Monitor network statistics by collecting health and utilization data from the network node using network management protocols such as SNMP.

Figure 5-5 Management Plane of Nexus

Nexus Management and Monitoring Features
The Nexus platform provides a wide range of management and monitoring features. Figure 5-6 shows an overview of the management and monitoring tools. Some of these tools are discussed in the following sections.

Figure 5-6 Nexus Management and Monitoring Tools

Out-of-Band Management
In a data center network, out-of-band (OOB) management means that management traffic traverses a dedicated path in the network. The purpose of creating an out-of-band network is to increase the security and availability of management capabilities within the data center.

This network is dedicated to management traffic only and is completely isolated from the user traffic. OOB management enables a network administrator to access the devices in the data center for routine changes, monitoring, and troubleshooting without any dependency on the network built for user traffic. The advantage of this approach is that during a network outage, network operators can still reach the devices using the OOB management network. In a data center, dedicated switches, routers, and firewalls are deployed to create an OOB network. This network provides connectivity to management tools, the management ports of network devices, and the ILO ports of servers. Because this network is built for management of data center devices, it is important to secure it so only administrators can access it. In Figure 5-7, network administrator 1 is using OOB network management, and network administrator 2 is using in-band network management. In-band management is discussed later in this section.

Figure 5-7 Out-of-Band Management

Cisco Nexus switches can be connected to the OOB management network using the following methods:

Console port
Connectivity Management Processor (CMP)
Management port

Console Port
The console port is an asynchronous serial port used for initial switch configuration. It provides an RS-232 connection with an RJ-45 interface. It is recommended to connect all console ports of the devices to a terminal or comm server router, such as a Cisco 2900, for remote connection to the console port of each device. The remote administrator connects to the terminal server router and performs a reverse telnet to connect to the console port of the device. Figure 5-8 shows remote access to the console port via the OOB network.

Figure 5-8 Console Port

Connectivity Management Processor
For management high availability, the Cisco Nexus 7000 Series switches have a dedicated processor known as the connectivity management processor (CMP). The CMP provides OOB management and monitoring capability independent from the primary operating system. It enables lights-out remote monitoring and management of the Cisco Nexus 7000 Series system, its supervisor module, and all other modules without the need for separate terminal servers. Key features of the CMP include the following:

It provides a dedicated operating environment independent of the switch processor.
It provides monitoring of supervisor status and initiation of resets.
It provides the capability to initiate a complete system restart.
It provides complete visibility of the system during the entire boot process.
It provides the capability to take complete console control of the supervisor.
It provides access to critical log information on the supervisor.

Note that the CMP is available only on the Nexus 7000 Supervisor Engine 1. You can access the CMP from the active supervisor module. Before you begin, ensure that you are in the default virtual device context (VDC). Figure 5-9 shows how to connect to the CMP from the active supervisor module.

Figure 5-9 Connectivity Management Processor
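The connection step illustrated in Figure 5-9 is typically performed with the attach cmp command from the default VDC (a sketch; this applies only to systems with Supervisor Engine 1, as noted above):

switch# attach cmp

From the CMP prompt you can monitor or reset the supervisor; exiting the CMP session returns you to the NX-OS CLI.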

Management Port (mgmt0)
The management port of the Nexus switch is an Ethernet interface that provides OOB management connectivity; it is also known as the mgmt0 interface. It enables you to manage the device using an IPv4 or IPv6 address. This port is part of the management VRF on the switch and supports speeds of 10/100/1000 Ethernet. The OOB management interface (mgmt0) can be used to manage all VDCs. Each VDC has its own representation of mgmt0, with a unique IP address from a common management subnet for the physical switch, which can be used to send syslog, SNMP, and other management information. OOB connectivity via the mgmt0 port is shown earlier in Figure 5-7.

In-band Management
In-band management utilizes the same network as user data to manage network devices, so there is no need to build a separate OOB management network. In this method, switches are configured with an in-band IP address on a routed interface, SVI, or loopback interface that is reachable via the same path as user data. The network administrator manages the switch using this in-band IP address. If the network is down, in-band management will also not work.

In-band management is done using the vty lines of a Cisco NX-OS device. A vty line is used for all remote network connections supported by the device, regardless of protocol (SSH, SCP, SNMP, and Telnet are examples). The administrator uses Secure Shell (SSH) or Telnet to the vty lines to create a virtual terminal session. To help ensure that a device can be accessed through a local or remote management session, proper controls must be enforced on the vty lines. Cisco NX-OS devices have a limited number of vty lines; by default, up to 16 concurrent vty sessions are allowed. When all vty lines are in use, new management sessions cannot be established. In Cisco NX-OS, the number of configured lines available can be determined by using the show run vshd command.

You can configure an inactive session timeout and a maximum sessions limit for virtual terminals. By default, vty lines automatically accept connections using any configured transport protocols. To disable a specific protocol from accessing vty sessions, you must globally disable that protocol. For example, to prevent a Telnet session to the vty line, you must disable Telnet globally using the no feature telnet command.
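For example, the following sketch limits vty sessions and disables Telnet access altogether (the timeout and session values are arbitrary illustrations):

switch(config)# line vty
switch(config-line)# exec-timeout 15
switch(config-line)# session-limit 10
switch(config-line)# exit
switch(config)# no feature telnet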

Role-Based Access Control
To manage a Nexus switch, users log in to the switch and perform tasks as per their role in the organization. For example, a network operations center (NOC) engineer might want to check the health of the network; however, he is not permitted to make changes to the switch configuration. Role-Based Access Control (RBAC) provides a mechanism to define the amount of access that each user has when the user logs in to the switch. With RBAC, you first define the required user roles and then specify the rules for each switch management operation that the user role is allowed to perform. These roles are assigned to users so they have limited access to switch operations as per their role. NX-OS enables you to create local users on the switch and assign a role to these users, which then determines what the individual user is allowed to do on the switch. When you create a user account on the switch, associate the appropriate role with the user account.

User Roles
User roles contain rules that define the operations allowed on the switch. The roles are defined to restrict user authorization to perform switch management operations. Each role has a list of rules that define the permission to perform a task on the switch. Each user can have multiple roles, and each role can contain multiple rules. If a user belongs to multiple roles, the user can execute a combination of all the commands permitted by these roles. For example, if Role-A allows access to only configuration operations, and Role-B allows access to only debug operations, users who belong to both Role-A and Role-B can access configuration and debug operations. When roles are combined, a permit rule takes priority over deny. For example, a user has Role-A, which denies access to a configuration command; however, this user also has Role-B, which allows access to the configuration command. In this case, the user is allowed access to the configuration commands. You can also define roles to limit access to specific VSANs, VLANs, and interfaces of a Nexus switch. NX-OS has some default user roles; however, you can define new roles and assign them to the users. Table 5-4 shows the default roles on a Nexus 5000 switch.

Table 5-4 Default Roles on Nexus 5000

A Nexus 7000 switch supports features such as VDC, which is a virtual switch within the Nexus chassis. Therefore, a Nexus 7000 supports additional roles to administer and operate the VDCs. Table 5-5 shows the default roles on a Nexus 7000 switch.

Table 5-5 Default Roles on Nexus 7000
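To illustrate the workflow, the following sketch creates a read-only operator role and assigns it to a new local user; the role name, username, and password are hypothetical:

N5K-A(config)# role name noc-operator
N5K-A(config-role)# rule 1 permit read
N5K-A(config-role)# rule 2 permit command show *
N5K-A(config-role)# exit
N5K-A(config)# username jsmith password Str0ngPass1 role noc-operator

You can verify the result with the show role name noc-operator and show user-account commands.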

Rules
A rule is the basic element of a role that defines the switch operations that are permitted or denied by a user role. You can apply rules for the following parameters:

Command: An NX-OS command or group of commands defined in a regular expression.
Feature: All the commands associated with a switch feature. You can check the available feature names for this command using the show role feature command.
Feature group: A default or user-defined group of features. Enter the show role feature group command to display the default feature groups available for this parameter.

The most granular control parameter of the rule is the command. The next level of control parameter is the feature, which represents all commands associated with the feature. The last level of control parameter is the feature group. The feature group combines related features for the ease of management. Rules are executed in descending order (for example, rule 2 is checked before rule 1). Up to 256 rules can be assigned to a single role. A user account can belong to 64 roles, and up to 64 user-defined roles can be configured per VDC. If a user account belongs to multiple roles, that user can execute the most privileged union of all the rules in all assigned roles.

User Role Policies
The user role policies are used to limit user access to the switch resources. They allow limiting access to the interfaces, VLANs, and VSANs. Rules defined for a user role override the user role policies. For example, you define a user role policy to access a specific interface, and the user does not have access to the interface unless you configure a command rule to permit the interface command.

RBAC Characteristics and Guidelines
Some characteristics and guidelines for RBAC are outlined next:

Roles are associated to user accounts in the local database or to the users in a remote Authentication, Authorization, and Accounting (AAA) database.
Role changes do not take effect until the user logs out and logs in again.
Users see only the CLI commands they are permitted to execute when using context-sensitive help.
Role features and feature groups should be used to simplify RBAC configuration when applicable.
Role features and feature groups can be referenced only in the local user account database. They cannot be referenced in an AAA server command authorization policy (for example, AAA/TACACS+).
RBAC roles can be distributed to multiple Nexus 7000 devices using Cisco Fabric Services (CFS).
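The following sketch shows what an interface- and VLAN-scoped role might look like, combining a broad read-write rule with restrictive user role policies; the role name and the interface and VLAN ranges are hypothetical:

N7K-A(config)# role name access-ops
N7K-A(config-role)# rule 1 permit read-write
N7K-A(config-role)# interface policy deny
N7K-A(config-role-interface)# permit interface ethernet 2/1-8
N7K-A(config-role-interface)# exit
N7K-A(config-role)# vlan policy deny
N7K-A(config-role-vlan)# permit vlan 10-20

With the deny policies in place, members of this role can operate only on the permitted interfaces and VLANs, even though the rule itself grants read-write access.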

Privilege Levels
Privilege levels are introduced in NX-OS to provide IOS-compatible authentication and authorization functionality. This feature is useful in a network environment that uses a common AAA policy for managing Cisco IOS and NX-OS devices. By default, the privilege level feature is disabled on the Nexus switch. To enable privilege level support on the Nexus switch, use the feature privilege configuration command. When this feature is enabled, the predefined privilege levels are created as roles. These predefined privilege levels cannot be deleted. Similar to the RBAC roles, the privilege levels can also be modified to meet different security requirements. On Nexus switches, the privilege level and RBAC roles can be configured simultaneously. A user account can be associated to a privilege level or to RBAC roles. Table 5-6 shows the IOS privilege levels and their corresponding NX-OS system-defined roles.

Table 5-6 NX-OS Privilege Levels

Simple Network Management Protocol
Cisco NX-OS supports Simple Network Management Protocol (SNMP) to remotely manage the Nexus devices. SNMP works at the application layer and facilitates the exchange of management information between network management tools and the network devices. Network administrators use SNMP to remotely manage network performance, troubleshoot faults, and plan for network growth. To enable the SNMP agent, you must define the relationship between the manager and the agent. There are three components of the SNMP framework:

SNMP manager: This is a network management system (NMS) that uses the SNMP protocol to manage the network devices.
SNMP agent: A software component that resides on the device that is managed, such as switches and routers.
Management Information Base (MIB): The collection of managed objects on the SNMP agent.

Figure 5-10 shows the components of SNMP.

Figure 5-10 Nexus SNMP

The Cisco Nexus devices support the agent and MIB. The network management system (NMS) uses the Cisco MIB variables to perform set operations to configure a device or a get operation to poll devices for specific information. The information collected from polling is analyzed to help troubleshoot network problems, create network performance baselines, verify configuration, and monitor traffic loads.

SNMP Notifications
In case of a network fault or other important event, the SNMP agent can generate notifications without any polling request from the SNMP manager. NX-OS can generate the following two types of SNMP notifications:

Traps: An unacknowledged message sent from the agent to the SNMP managers listed in the host receiver table. These messages are less reliable because there is no way for an agent to discover whether the SNMP manager has received the message.
Informs: A message sent from the SNMP agent to the SNMP manager, for which the manager must acknowledge the receipt of the message. These messages are more reliable because of the acknowledgement of the message from the SNMP manager. If the Cisco Nexus device never receives a response for a message, it can send the inform request again.

You can configure Cisco NX-OS to send notifications to multiple host receivers.

SNMPv3
Security was always a concern for SNMP. The community string was sent in clear text and used for authentication between the SNMP manager and agent. SNMPv3 solves this problem by providing security features, such as authentication and encryption, for communication between the SNMP manager and agent. The security features provided in SNMPv3 are as follows:

Message integrity: Ensures that a packet has not been tampered with in transit.
Authentication: Determines that the message is from a valid source.
Encryption: Scrambles the packet contents to prevent it from being seen by unauthorized sources.
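As a brief sketch, the following configures an SNMPv3 user with authentication and encryption and points traps at a receiver; the username, passphrases, and address are placeholders:

N5K-A(config)# snmp-server user monitor auth sha S3cretAuth priv aes-128 S3cretPriv
N5K-A(config)# snmp-server host 192.0.2.50 traps version 3 priv monitor
N5K-A(config)# snmp-server enable traps

Replacing the traps keyword with informs requests acknowledged delivery, matching the notification types described earlier.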

The Cisco Nexus device supports SNMPv1, SNMPv2c, and SNMPv3. Each SNMP version has different security models and levels. A security model is an authentication strategy that is set up for a user and the role in which the user resides. A security level is the permitted level of security within a security model. A combination of a security model and a security level determines which security mechanism is employed when handling an SNMP packet. Table 5-7 shows the SNMP security models and levels supported by NX-OS.

Table 5-7 NX-OS SNMP Security Models and Levels

Remote Monitoring
Remote Monitoring (RMON) is an industry standard remote network monitoring specification, developed by the Internet Engineering Task Force (IETF). RMON allows various network console systems and agents to exchange network-monitoring data. The Cisco NX-OS supports RMON alarms, events, and logs to monitor a Cisco Nexus device. In a Cisco Nexus switch, RMON is disabled by default, and no events or alarms are configured. You can enable RMON and configure your alarms and events by using the CLI or an SNMP-based network management station.

RMON Alarms
An RMON alarm monitors a specific management information base (MIB) object for a specified interval, triggers an alarm at a specified threshold value, and resets the alarm at another threshold value. You can use alarms with RMON events to generate a log entry or an SNMP notification when the RMON alarm triggers. You can set an alarm on any MIB object that resolves into an SNMP INTEGER type. The specified object must be an existing SNMP MIB object in standard dot notation (for example, 1.3.6.1.2.1.2.2.1.17 represents ifOutOctets).

When you create an alarm, you specify the following parameters:

MIB: The object to monitor.
Sampling interval: The interval to collect a sample value of the MIB object.
Sample type: Absolute sample is the value of the MIB object. Delta sample is the difference between two consecutive sample values.
Rising threshold: The device triggers a rising alarm or resets a falling alarm on this threshold.
Falling threshold: The device triggers a falling alarm or resets a rising alarm on this threshold.
Events: The action taken when an alarm (rising or falling) is triggered.
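Putting these parameters together, a sketch like the following defines an event and an alarm that references it; the thresholds, owner, and community string are illustrative, and in practice the monitored object usually needs the interface index appended to the OID:

N5K-A(config)# rmon event 1 log trap public description RISING-ALARM
N5K-A(config)# rmon alarm 20 1.3.6.1.2.1.2.2.1.17 30 delta rising-threshold 1500 1 falling-threshold 0 owner admin

The show rmon alarms and show rmon events commands verify the configuration.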

RMON Events
RMON events are generated by RMON alarms. You can associate an event to an alarm, and different events can be generated for a falling alarm and a rising alarm. RMON supports the following types of events:

SNMP notification: Sends an SNMP notification when a rising alarm or a falling alarm is triggered.
Log: Adds an entry in the RMON log table when the associated alarm is triggered.
Both: Sends an SNMP notification and adds an entry in the RMON log table when the associated alarm is triggered.

Syslog
Syslog is a standard protocol defined in IETF RFC 5424 for logging system messages to a remote server. Nexus switches support logging of system messages. This log can be sent to the terminal sessions, a log file, and syslog servers on remote systems. By default, Cisco Nexus switches only log system messages to a log file and output them to the terminal sessions. When the switch first initializes, messages are sent to syslog servers only after the network is initialized. You can configure the Nexus switch to send syslog messages to a maximum of eight syslog servers. To support the same configuration of syslog servers on all switches in a fabric, you can use Cisco Fabric Services (CFS) to distribute the syslog server configuration. When you configure the severity level, the system outputs messages at that level and lower. Table 5-8 describes the severity levels used in system messages.

Table 5-8 System Message Severity Levels
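For reference, a minimal remote-logging sketch looks like the following; the server address is hypothetical, 6 selects severity informational and lower, and the management VRF carries the traffic:

N5K-A(config)# logging server 192.0.2.99 6 use-vrf management
N5K-A(config)# logging timestamp milliseconds
N5K-A# show logging server

Up to eight such servers can be defined, in line with the limit described above.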

Embedded Event Manager
Cisco NX-OS Embedded Event Manager (EEM) provides real-time network event detection, programmability, and automation within the device. EEM monitors events that occur on the device and takes action to recover or troubleshoot these events, based on the configuration. Figure 5-11 shows a high-level overview of NX-OS Embedded Event Manager. There are three major components of EEM:

Event statement
Action statement
Policies

Event Statements
An event is any device activity for which some action, such as a workaround or a notification, should be taken. In many cases, these events are related to faults in the device, such as when an interface or a fan malfunctions. Event statements specify the event that triggers a policy to run. EEM defines event filters so only critical events or multiple occurrences of an event within a specified time period trigger an associated action. In Cisco NX-OS releases prior to 5.2, you can configure only one event statement per policy; however, beginning in Cisco NX-OS Release 5.2, you can configure multiple event triggers.

Action Statements
Action statements describe the action triggered by a policy. These actions could be sending an e-mail or disabling an interface to recover from an event. Each policy can have multiple action statements. If no action is associated with a policy, EEM still observes events but takes no actions. NX-OS EEM supports the following actions in action statements:

Execute any CLI commands.
Update a counter.
Log an exception.
Force the shutdown of any module.
Reload the device.
Shut down specified modules because the power is over budget.
Generate a syslog message.
Generate a Call Home event.
Generate an SNMP notification.
Use the default action for the system policy.

Policies
An NX-OS EEM policy consists of an event statement and one or more action statements. The event statement defines the event to look for as well as the filtering characteristics for the event. The action statement defines the action EEM takes when the event occurs.
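To make the pieces concrete, here is a small applet sketch that pairs one event statement with two action statements; the applet name and interface are hypothetical. Note that a CLI-match policy consumes the matched command unless an event-default action is included, so it is added here to let the command still execute:

N5K-A(config)# event manager applet guard-e1-1
N5K-A(config-applet)# event cli match "conf t ; interface ethernet 1/1 ; shutdown"
N5K-A(config-applet)# action 1.0 syslog priority critical msg "shutdown of e1/1 logged by EEM"
N5K-A(config-applet)# action 2.0 event-default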

Figure 5-11 Nexus Embedded Event Manager

Generic Online Diagnostics (GOLD)
Cisco Generic Online Diagnostics (GOLD) is a suite of diagnostic functions that provide hardware testing and verification. It helps in detecting hardware faults and makes sure that internal data paths are operating as designed. If a fault is detected, the switch takes corrective action to mitigate the fault and to reduce potential network outages. GOLD provides the following tests as part of the feature set:

Boot diagnostics
Continuous health monitoring
On-demand tests
Scheduled tests

Figure 5-12 shows an overview of Nexus GOLD.

Figure 5-12 Nexus GOLD
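The commands behind these tests are described in the text that follows; as a quick sketch, enabling boot diagnostics and running and checking an on-demand test might look like this (the module number is hypothetical):

N7K-A(config)# diagnostic bootup level complete
N7K-A# diagnostic start module 6 test all
N7K-A# show diagnostic result module 6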

During the boot diagnostics, GOLD tests are executed when the chassis is powered up or during an online insertion and removal (OIR) event. Boot diagnostics include disruptive tests and nondisruptive tests that run during system boot and system reset. To enable boot diagnostics, use the diagnostic bootup level complete command. To display the diagnostic level that is currently in place, use show diagnostic boot level.

Runtime diagnostics (also known as health monitoring diagnostics) include nondisruptive tests that run in the background during the normal operation of the switch. Continuous health checks occur in the background, as per schedule, and on demand from the CLI. You can also start or stop an on-demand diagnostic test. For example, to start an on-demand test on module 6, you can use the following command: diagnostic start module 6 test all. It is recommended to start a disruptive diagnostic test manually during a scheduled network maintenance window. If a failure condition is observed during the test, corrective actions are taken through Embedded Event Manager (EEM) policies.

Smart Call Home
The Call Home feature provides e-mail-based notification of critical system events. You can use this feature to notify any external entity when an important event occurs on your device. This feature can be used to page a network support engineer, e-mail a network operations center, or use Cisco Smart Call Home services to automatically generate a case with the Technical Assistance Center (TAC). Call Home includes a fixed set of predefined alerts on your switch. These alerts are grouped into alert groups, and CLI commands are assigned to execute when an alert in an alert group occurs. The switch includes the command output in the transmitted Call Home message. Following are advantages of using the Call Home feature:

Automatic execution and attachment of relevant CLI command output.
Multiple message format options, such as the following:
Short Text: Suitable for pagers or printed reports.
Full Text: Fully formatted message information suitable for human reading.
XML: Machine-readable format that uses the Extensible Markup Language (XML) and the Adaptive Messaging Language (AML) XML schema definition (XSD). The XML format enables communication with the Cisco Systems Technical Assistance Center (Cisco-TAC).
Multiple concurrent message destinations. Call Home can deliver alerts to multiple recipients that you configure in destination profiles. You can configure up to 50 e-mail destination addresses for each destination profile.
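The exact keywords vary by platform and release, but a destination-profile configuration is sketched below under those assumptions; the contact, e-mail addresses, and SMTP server are placeholders:

N5K-A(config)# snmp-server contact admin@example.com
N5K-A(config)# callhome
N5K-A(config-callhome)# email-contact admin@example.com
N5K-A(config-callhome)# destination-profile full_txt email-addr noc@example.com
N5K-A(config-callhome)# transport email smtp-server 192.0.2.25 use-vrf management
N5K-A(config-callhome)# enable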

Nexus Device Configuration and Verification
This section reviews the process of initial setup of Nexus switches, basic switch configuration, and some important CLI commands to configure and verify network connectivity.

Perform Initial Setup
When Nexus switches are shipped from the factory, there is no default configuration on these switches. Therefore, when you power on the switch the first time and there is no configuration file found in the NVRAM of the switch, it goes into an initial setup mode. This initial setup mode indicates that the switch is unable to find the configuration file at startup; therefore, this initial setup mode is also called startup configuration mode. The startup configuration mode of the switch is an interactive system configuration dialog that guides you through the basic configuration of the system. This setup script can have slightly different steps based on the Nexus platform. The startup configuration enables you to build an initial configuration file with enough information to provide basic system management connectivity. To start a new switch configuration, first you perform the startup configuration using the console port of the switch. After the initial configuration file is created, you can use the CLI to perform additional configurations of the switch. The initial setup script for the startup configuration is executed automatically the first time the switch is started. However, you can run the initial setup script anytime to perform a basic device configuration by using the setup command from the CLI. Figure 5-13 shows the initial configuration process of a Nexus switch.

Figure 5-13 Initial Configuration Procedure

During the startup configuration, the switch creates an admin user as the only initial user. The only step you cannot skip is to provide a password for the administrator. The default option is to use strong passwords. When you configure a strong password, you must use a minimum of eight characters with a combination of uppercase and lowercase letters and numerical characters. You can change the password strength check during the initial setup script by answering no to the enforce secure password standard question, or later using the CLI command no password strength-check.

To use the default answer for a question, just press Enter. If a default answer is not available and you have already configured the mgmt0 interface earlier, the setup utility uses what was previously configured on the switch and goes to the next question. For example, if you press Enter to skip the IP address configuration of the mgmt0 interface, the setup utility does not change that configuration. However, if a default value is displayed on the prompt and you press Enter for that step, the setup utility changes the configuration to use that default, and not the configured value. You can press Ctrl+C at any prompt of the script to skip the remaining configuration steps and use what you have configured up to that point.

After the admin user password is configured, the startup script asks you to proceed with basic system configuration. Answer yes if you want to enter the basic system configuration dialog. During the basic system configuration, the default answers are mentioned in square brackets [xxx]. After the initial parameters are provided to the startup script, the switch shows you the initial configuration based on your input so you can verify it. If the configuration is correct, you do not need to edit it; you can accept this configuration, and it is saved in the switch NVRAM. Figure 5-14 shows the initial configuration steps of Nexus switches.

Click here to view code image

Do you want to enforce secure password standard (yes/no): yes
Enter the password for "admin": C1sco123
Confirm the password for "admin": C1sco123

---- Basic System Configuration Dialog VDC: 2 ----

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes
Do you want to enforce secure password standard (yes/no) [y]: y
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : N7K
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 172.16.1.20
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 172.16.1.1
Configure advanced IP options? (yes/no) [n]: n
Enable the telnet service? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <1024-2048> [1024]: 1024
Configure default interface layer (L3/L2) [L3]: L3
Configure default switchport interface state (shut/noshut) [shut]: shut

The following configuration will be applied:
  password strength-check
  switchname N7K
  interface mgmt0
    ip address 172.16.1.20 255.255.255.0
    no shutdown
  vrf context management
    ip route 0.0.0.0/0 172.16.1.1
  exit
  no feature telnet
  ssh key rsa 1024 force
  feature ssh
  no system default switchport
  system default switchport shutdown

Would you like to edit the configuration? (yes/no) [n]: <enter>
Use this configuration and save it? (yes/no) [y]: y

Figure 5-14 Initial Configuration Dialogue

After the initial configuration is completed, you can disconnect from the console port and access the switch using the IP address configured on the mgmt0 port. It is important to note that the initial configuration script supports only the configuration of the IPv4 address on the mgmt0 interface; you can configure IPv6 later using the CLI.

You can erase the configuration of a Nexus switch to return to the factory default. To delete the switch startup configuration from NVRAM, use the write erase command. This command will not delete the running configuration of the switch, and the switch will continue to operate normally. The write erase command erases the entire startup configuration, except for the following:

Boot variable definitions
The IPv4 configuration on the mgmt0 interface, including the IP address and subnet mask
The route address in the management VRF

To remove the boot variable definitions and the IPv4 configuration on the mgmt0 interface, you can use the write erase boot command. When you restart the switch after erasing its configuration from NVRAM, the switch will go into the startup configuration mode.

In-Service Software Upgrade
Each Nexus switch is shipped with the Cisco NX-OS software. The Cisco NX-OS software consists of two images: the kickstart image and the system image. After installing the Nexus switch and putting it into production, a time might come when you have to upgrade the NX-OS software to address software defects or to add new features to the switch. The software upgrade of a switch in the data center should not have any impact on availability of services. To upgrade the switch to newer software, the network administrator initiates an in-service software upgrade (ISSU) manually using one of the following methods:

From the CLI using a single command
From the administrative interface of Cisco Data Center Network Manager (DCNM)

ISSUs enable a network operator to upgrade NX-OS software without disrupting the operation of the switch. ISSU upgrades the software images without disrupting data-plane traffic; there is minimal disruption to control-plane traffic only. In the case where ISSU will cause disruption to the data-plane traffic, the Cisco NX-OS software warns you before proceeding with the software upgrade. This enables you to reschedule the software upgrade to a time less critical for your services to minimize the impact of the outage. When an ISSU is initiated, it updates the following components on the system as needed:

Kickstart image
Supervisor BIOS
System image
Fabric extender (FEX) image
I/O module BIOS and image

A well-designed data center considers service continuity by implementing switch-level redundancy at each layer within the data center and by dual-homing servers to redundant switches at the access layer.

As a result, existing flows for servers connected to the Cisco Nexus device should see no traffic disruption during an ISSU. Cisco NX-OS software supports ISSU from release 4.2(1) and later. The ISSU process varies based on the Nexus platform; to explain ISSU, examples of Nexus 5000 and Nexus 7000 switches are described in the following sections.

ISSU on the Cisco Nexus 5000
The Nexus 5000 switch is a single supervisor system, and the ISSU on this switch causes the supervisor CPU to reset and load the new software image. During this reload, the control plane of the switch is inactive, but the data plane of the switch continues its operation and keeps forwarding packets. This separation in the control and data planes leads to a software upgrade with no service disruption. When a network administrator initiates ISSU, a multistage ISSU cycle is initiated by the installer service. This multistage ISSU cycle is designed to minimize overall system interruption with no impact on data traffic forwarding. During the upgrade, the data plane keeps forwarding packets. When the system recovers from the reset, it is running an updated version of NX-OS, control plane operation is resumed, and supervisor state is restored to the previously known configuration. At this time the runtime state of the control plane and data plane are synchronized. It takes fewer than 80 seconds to restore control plane operation.

In a topology where Cisco Nexus 2000 Series fabric extenders are used, the ISSU performed on the parent switch automatically downloads and upgrades each of the attached fabric extenders; therefore, each fabric extender is upgraded to the Cisco NX-OS software release of the parent switch after the parent switch ISSU completes. Figure 5-15 shows the ISSU process on the Nexus 5000 switch.

Figure 5-15 ISSU Process on Nexus 5000

To perform ISSU, download the required software image and copy it to your upgrade location. Typically, these images are copied to the switch bootflash. Make sure both the kickstart file and the system file have the same version. Use the dir command to verify that the required NX-OS files are present on the system. After verifying the image on the bootflash, use the show install all impact command to determine which components of the system will be affected by the upgrade. The syntax of this command is show install all impact kickstart n5000-uk9-kickstart.5.0.3.N2.2.bin system n5000-uk9.5.0.3.N2.2.bin.
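A hypothetical upgrade session might therefore look like the following sketch; the filenames match the release shown above, and the impact output should always be reviewed before committing to the upgrade:

N5K-A# dir bootflash: | include n5000
N5K-A# show install all impact kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2.bin system bootflash:n5000-uk9.5.0.3.N2.2.bin
N5K-A# install all kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.2.bin system bootflash:n5000-uk9.5.0.3.N2.2.bin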

Some of the important considerations for Nexus 5000 ISSU are the following:

The Nexus 5000 switch does not allow configuration changes during ISSU. All the configuration change requests from the CLI and SNMP are denied.
Network topology changes, such as STP, FC fabric changes that affect zoning, FSPF, domain manager, and module insertion are not allowed during ISSU operation.
The Nexus 5000 switch undergoing ISSU must not be a root switch in the STP topology, and no interface should be in a spanning tree designated forwarding state.
Link Aggregation Control Protocol (LACP) fast timers are not supported. The ISSU process will abort if the system is configured with LACP fast timers.
ISSU cannot be performed if any application has a Cisco Fabric Services (CFS) lock enabled.
If a zone merge or zone change request is in progress, ISSU will be aborted.
Bridge assurance must be disabled for nondisruptive ISSU; however, it is okay to have bridge assurance enabled on the vPC peer-link.
The Cisco Nexus 5500 platform switches support Layer 3 functionality; however, the nondisruptive software upgrade is not supported when the Layer 3 license is installed.
ISSU is supported on Nexus 5000 in a vPC configuration. While upgrading the switches in a vPC topology, you should start with the primary vPC switch. The secondary vPC switch should be upgraded after the ISSU process completes successfully on the primary device. When one peer switch is undergoing an ISSU, the other peer switch locks the configuration until the ISSU is completed. The vPC switches continue their control plane communication during the entire ISSU process, except when the ISSU process resets the CPU of the switch being upgraded. The peer switch is responsible for holding on to its state until the ISSU process is completed successfully. In FEX virtual port channel (vPC) configurations, the primary switch is responsible for upgrading the FEX.

Note
Always refer to the latest release notes for ISSU guidelines.

ISSU on the Cisco Nexus 7000
The Nexus 7000 switch is a modular system with supervisor and I/O modules installed in a chassis. It is a fully distributed system with nondisruptive Stateful Switch Over (SSO) and ISSU capabilities. Please note that the SSO feature is available only when dual supervisor modules are installed in the system. The Nexus 7000 ISSU leverages the SSO capabilities of the switch to perform a nondisruptive software upgrade. Therefore, on the Nexus 7000, both the data plane and the control plane are available during the entire software upgrade process. ISSU is achieved using a rolling update process. In this process, first the standby supervisor is upgraded to the new software, and a failover is done so the standby supervisor with the new image becomes active. After this step, the control plane of the switch is running on the new software. The software is then updated on the supervisor that is now in standby mode (previously active). When both supervisors are upgraded successfully, the linecards or I/O modules are upgraded. Figure 5-16 shows the ISSU process for the Nexus 7000 platform.

Figure 5-16 ISSU Process on Nexus 7000

It is important to note that the ISSU process results in a change in the state of the active and standby supervisors. Figure 5-17 shows the state of the supervisor modules before and after the software upgrade.

Figure 5-17 Nexus 7000 Supervisors Before and After ISSU

Before you start ISSU on a Nexus 7000 switch, make sure that the upgrade process will be nondisruptive. An ISSU might be disruptive if you have configured features that are not supported on the new software image. To determine ISSU compatibility, use the show incompatibility system command. It is recommended to use the show install all impact command to check whether the images are compatible for an upgrade. The software upgrade might be disruptive under the following conditions:

A single-supervisor system with a kickstart or system image change
A dual-supervisor system with incompatible system software images

You can start the ISSU using the install all command. Following are some of the important considerations for a Nexus 7000 ISSU:

When you upgrade Cisco NX-OS software on a Nexus 7000, it is upgraded for the entire system, including all VDCs on the physical device. It is not possible to upgrade the software individually for each VDC.
Configuration mode is blocked during the Cisco ISSU to prevent any changes.
vPC peers can operate dissimilar versions of the Cisco NX-OS software only during the upgrade or downgrade process. Operating vPC peers with dissimilar versions after the upgrade or downgrade process is complete is not supported.
Prior to NX-OS Release 5.2(1), ISSU performs the linecard software upgrade serially; that is, one I/O module is upgraded at a time. This method was very time consuming; therefore, support for simultaneous linecard upgrade was added in that release to reduce the ISSU time. To start a parallel upgrade of line cards, use the following ISSU command: install all kickstart <image> system <image> parallel.
Starting from NX-OS Release 6.1(1), a parallel upgrade of the FEX is supported by ISSU. To initiate parallel FEX software upgrades, use the parallel keyword in the command. For releases prior to Cisco NX-OS Release 6.1(1), only a serial upgrade of the FEX is supported; even if a user types the parallel keyword in the command, the upgrade will be a serial upgrade.

Important CLI Commands
In NX-OS, all CLI commands are organized hierarchically. The commands that perform similar functions are grouped together at the same level. For example, all the commands that display system, configuration, or hardware information are grouped under the show command, and all commands that are used to configure the switch are grouped under the configure terminal command. If you would like to use a command, you first must enter into the group hierarchy and then type the command. For example, to configure a system, you use the configure terminal command and then use the appropriate command to configure the system. This section discusses important NX-OS CLI commands. These commands will help you in configuration and verification of basic switch connectivity.

Command-Line Help
To obtain a list of available commands, type a question mark (?) at the system prompt. To list keywords or arguments of a command, enter a question mark (?) in place of a keyword or argument. This form of help is called command syntax help. In syntax help, you see full parser help and descriptions. Pressing the Tab key displays a brief list of the commands that are available at the current branch, without any detailed description. Example 5-1 shows how to enter into switch configuration mode, the use of the where command to find the current context, and command syntax help for the interface command.

Example 5-1 Command-Line Help
Click here to view code image

N5K-A# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
N5K-A(config)# interface ?
  ethernet          Ethernet IEEE 802.3z
  fc                Fiber Channel interface
  loopback          Loopback interface
  mgmt              Management interface
  port-channel      Port Channel interface
  san-port-channel  SAN Port Channel interface
  vfc               Virtual FC interface
  vlan              Vlan interface
N5K-A(config)# interface <press-tab-key>
ethernet       fc       loopback   mgmt
port-channel   san-port-channel    vfc       vlan
N5K-A(config)# interface

Automatic Command Completion
In any command mode, you can type an incomplete command and then press the Tab key to complete the rest of the command. Pressing the Tab key provides command completion to a unique but partially typed command string. In the Cisco NX-OS software, command completion may also be used to complete a system-defined name and, in some cases, user-defined names. In Example 5-2, the Tab key is used to complete the vrf command and then the name of a VRF on the command line.

Example 5-2 Automatic Command Completion
Click here to view code image

N5K-A(config)# vrf con<press-tab-key>
N5K-A(config)# vrf context
N5K-A(config)# vrf context man<press-tab-key>
N5K-A(config)# vrf context management
N5K-A(config-vrf)#

Entering and Exiting Configuration Context
To perform a configuration task, first you must switch to the appropriate configuration context in the group hierarchy. For example, to configure an interface, you enter into the interface context by typing interface ethernet 1/1. All interface-related commands for interface Ethernet 1/1 will go under the interface context. When you type a command that does not belong to the current context, NX-OS will go out of the context to find the match; for example, you can run all show commands from the configuration context. You can use the where command to display the current context and login credentials. The exit command will take you back one level in the hierarchy. The Ctrl+Z key combination exits configuration mode but retains the user input on the command line. Commands discussed in this section and their output are shown in Example 5-3.

Example 5-3 Configuration Context
Click here to view code image

N5K-A(config)# interface ethernet 1/1
N5K-A(config-if)# where
  conf; interface Ethernet1/1 admin@N5K-A
N5K-A(config-if)# exit
N5K-A(config)# show <Press CTRL-Z>
N5K-A# show

Display Current Configuration
The show running-config command displays the current running configuration of the switch. If you run this command with the command-line parameter all, it displays all the configuration, including the default configuration of the switch. If you need to view a specific feature's portion of the configuration, you can do so by running the command with the feature specified, as shown in Example 5-4. For example, show running-config interface ethernet 1/1 shows the configuration of interface Ethernet 1/1.

Example 5-4 show running-config
Click here to view code image

N5K-A# sh running-config ipqos

!Command: show running-config ipqos

!Time: Sun Sep 21 12:45:14 2014

version 5.0(3)N2(2)
class-map type qos class-fcoe
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
system qos
  service-policy type qos input fcoe-default-in-policy
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type network-qos fcoe-default-nq-policy

Command Piping and Parsing
To locate information in the output of a show command, the Cisco NX-OS software provides searching and filtering options. These searching and filtering options follow a pipe character (|) at the end of the show command. The pipe parameter can be placed at the end of all CLI commands to qualify, redirect, or filter the output of the command. The pipe option can also be used multiple times or recursively, and many advanced pipe options are available, including the following:

Get regular expression (grep) and extended get regular expression (egrep)
Less
No-more
Head and tail

The egrep parameter enables you to search for a word or expression and print a word count; it also has other options. In Example 5-5, the show spanning-tree | egrep ignore-case next 4 mstp command is executed. The show spanning-tree command's output is piped to egrep, where the qualifiers indicate to display the lines that match the regular expression mstp; ignore-case in the command indicates that this match must not be case sensitive, and next 4 in the command means that 4 lines of output following the match of mstp are also to be displayed.

Example 5-5 Command Piping and Parsing
Click here to view code image

N5K-A# sh spanning-tree | egrep ?
  WORD          Search for the expression
  count         Print a total count of matching lines only
  ignore-case   Ignore case difference when comparing strings
  invert-match  Print only lines that contain no matches for <expr>
  line-exp      Print only lines where the match is a whole line
  line-number   Print each match preceded by its line number
  next          Print <num> lines of context after every matching line
  prev          Print <num> lines of context before every matching line
  word-exp      Print only lines where the match is a complete word

N5K-A# sh spanning-tree | egrep ignore-case next 4 mstp
Spanning tree enabled protocol mstp
Root ID    Priority    32768
           Address     0005.73cb.6d01
           This bridge is the root
           Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Feature Command
Cisco NX-OS supports many features; however, not all features are enabled by default. In the default configuration, NX-OS only enables those features that are required for initial network connectivity. You can enable or disable specific features such as OSPF, PIM, and FCOE by using the feature command. This approach optimizes the operating system and increases availability of the switch by running only required processes on the switch. It is recommended that you keep the features disabled until they are required, or disable them if they are no longer required.

If a feature is not enabled on the switch, the process for that feature is not running, and the configuration and verification commands for that feature are not available from the CLI. Protocols, interfaces, and their related configuration contexts are not visible until that feature is enabled on the switch. After you enable the feature on the switch, the configuration and verification commands become available. For some features, the process will not start until you configure the features. An example of such a feature is OSPF.

In Example 5-6, the show running-config | include feature command is used to determine the features that are currently enabled on the switch; only feature fcoe, feature telnet, and feature lldp are enabled. You can also use the show feature command to verify the features that are currently enabled on the switch. The Nexus switch supports physical interfaces, such as 1 Gbps and 10 Gbps Ethernet, and logical interfaces, such as SVI, vFC, and vPC. When you use the interface command, followed by pressing the Tab key for help, you can see only the currently available interface types in the Cisco NX-OS software command output. After enabling the interface-vlan feature on the Nexus switch, the SVI interface type appears as a valid interface option. In the same manner, other features can be enabled before configuring them on the switch.

Example 5-6 feature Command
Click here to view code image

N5K-A(config)# sh running-config | include feature
feature fcoe
feature telnet
no feature http-server
feature lldp
N5K-A(config)# interface <Press-Tab-Key>
ethernet       fc       loopback   mgmt
port-channel   san-port-channel    vfc
N5K-A(config)# feature interface-vlan
N5K-A(config)# interface <Press-Tab-Key>
ethernet       fc       loopback   mgmt
port-channel   san-port-channel    vfc       vlan

Verifying Modules
The show module command provides a hardware inventory of the interfaces and modules that are installed in the chassis. In a Nexus 5000 Series switch, the main system is referred to as module 1. The number of ports associated with module 1 will vary based on the model of the switch. If expansion modules are installed, they are also visible in this command. Modules that are inserted into the expansion module slots will be numbered according to the slot into which they are inserted. The number and type of expansion modules are dependent on the model of the system. If fabric extenders (FEX) are attached, each FEX is also considered an expansion module of the main system. FEX slot numbers are assigned by the administrator during FEX configuration. Software and hardware versions of each module, their serial numbers, and the range of MAC addresses for the module are also available via this command. Example 5-7 shows the output of the show module command for the Nexus 5548 switch.

Example 5-7 Nexus 5000 Modules
Click here to view code image

N5K-A# show module
Mod Ports Module-Type                        Model               Status
--- ----- ---------------------------------- ------------------- ----------
1   32    O2 32X10GE/Modular Universal Pla   N5K-C5548UP-SUP     active *
3   0     O2 Daughter Card with L3 ASIC      N55-D160L3          ok

Mod  Sw              Hw
---  --------------  ------
1    5.0(3)N2(2)     1.0
3    5.0(3)N2(2)     1.0

Mod  MAC-Address(es)                      Serial-Num
---  -----------------------------------  ----------
1    0005.73cb.6cc8 to 0005.73cb.6ce7     JAF1505BGTG
3    0000.0000.0000 to 0000.0000.000f     JAF1504CCAR

Mod  World-Wide-Name(s) (WWN)
---  --------------------------------------------------
1    20:01:00:05:73:cb:6c:c0 to 20:08:00:05:73:cb:6c:c0
3    --
N5K-A#

The show module command on the Nexus 7000 Series switches identifies which modules are inserted and operational on the switch. Example 5-8 shows output of the show module command on the Nexus 7000 switch. This switch has a Supervisor 1 module in slot 5, a 32-port F1 module in slot 9, and a 32-port M1 module in slot 10.

Example 5-8 Nexus 7000 Modules
Click here to view code image

N7K-1# show module
Mod Ports Module-Type                        Model               Status
--- ----- ---------------------------------- ------------------- ----------
5   0     Supervisor module-1X               N7K-SUP1            active *
9   32    1/10 Gbps Ethernet Module          N7K-F132XP-15       ok
10  32    10 Gbps Ethernet Module            N7K-M132XP-12       ok

Mod  Sw              Hw
---  --------------  ------
5    5.2(1)          1.5
9    5.2(1)          1.1
10   5.2(1)          2.0
----- output removed -----

Verifying Interfaces
The Cisco Nexus switches running NX-OS software support the following types of interfaces:

Physical: Ethernet (10/100/1000/10G/40G)
Logical: Port channel, VLAN, SVI, loopback, null, tunnels, subinterfaces
Management: Management (mgmt0)

On the Nexus platform, all Ethernet interfaces are named Ethernet. Unlike other switching platforms from Cisco, where the name of the interface also indicates the speed, on Nexus there is no differentiation in the naming convention for the different speeds. The show interface brief command lists the most relevant interface parameters for all interfaces on a Cisco Nexus Series switch. Information about the switch port mode, speed, and rate mode (dedicated or shared) is also displayed. For interfaces that are down, it also lists the reason why the interface is down. Example 5-9 shows the output of the show interface brief command.

Example 5-9 show interface brief Command
Click here to view code image

N5K-A# show interface brief
--------------------------------------------------------------------------------
Port    VRF          Status  IP Address       Speed   MTU
--------------------------------------------------------------------------------
mgmt0   --           up      172.16.1.1       1000    1500
--------------------------------------------------------------------------------
Ethernet   VLAN  Type  Mode    Status  Reason             Speed    Port
Interface                                                          Ch #
--------------------------------------------------------------------------------
Eth1/1     1     eth   trunk   up      none               10G(D)   --
Eth1/2     1     eth   trunk   down    SFP not inserted   10G(D)   60
Eth1/3     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/4     --    eth   routed  down    SFP not inserted   10G(D)   --
Eth1/5     1     eth   fabric  up      none               10G(D)   105
Eth1/6     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/7     1     eth   access  down    SFP not inserted   10G(D)   --
Eth1/8     1     eth   access  down    SFP not inserted   10G(D)   --
----- output removed -----

The show interface command displays the operational state of any interface, including the reason why that interface might be down. It also shows details about the port speed, description, MAC address, MTU, and traffic statistics of the interface. Example 5-10 shows the output of the show interface ethernet 1/1 command.

Example 5-10 Displaying Interface Operational State
Click here to view code image

N5K-A# show interface ethernet 1/1
Ethernet1/1 is up
  Hardware: 1000/10000 Ethernet, address: 0005.73cb.6cc8 (bia 0005.73cb.6cc8)
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 3week(s) 6day(s)
  Last clearing of "show interface" counters never
  30 seconds input rate 1449184 bits/sec, 181148 bytes/sec, 237 packets/sec
  30 seconds output rate 1566080 bits/sec, 195760 bytes/sec, 252 packets/sec
----- output removed -----

Interfaces on the Cisco Nexus Series switches have many default settings that are not visible in the show running-config command. The default configuration parameters for any interface may be displayed using the show running-config interface all command. Example 5-11 shows the output of the show running-config interface ethernet 1/1 all command. The highlighted portions of the CLI include default parameters.

Example 5-11 Showing Default Configuration
Click here to view code image

N5K-A# show running-config interface ethernet 1/1 all

!Command: show running-config interface Ethernet1/1 all
!Time: Sun Sep 21 17:39:33 2014

version 5.0(3)N2(2)

interface Ethernet1/1
  no description
  lacp port-priority 32768
  lacp rate normal
  priority-flow-control mode auto
  lldp transmit
  lldp receive
  no switchport block unicast
  no switchport block multicast
  no hardware multicast hw-hash
  no cdp enable
----- output removed -----

Viewing Logs
By default, the device outputs messages to terminal sessions and to a log file. You can display or clear messages in the log file. Use the following commands:

The show logging last number-lines command displays the last number of lines in the logging file.
The show logging logfile [start-time yyyy mmm dd hh:mm:ss] [end-time yyyy mmm dd hh:mm:ss] command displays the messages in the log file that have a time stamp within the span entered.
The clear logging logfile command clears the contents of the log file.

Example 5-12 shows the output of the show logging logfile command.

Example 5-12 Viewing Log File
Click here to view code image

N5K-A# sh logging logfile
2014 Aug 24 22:40:45 N5K-A %CDP-6-CDPDUP: CDP Daemon Up.
2014 Aug 24 22:40:45 N5K-A ntpd[4744]: ntp:precision = 1.000 usec
2014 Aug 24 22:40:45 N5K-A %FEX-5-FEX_PORT_STATUS_NOTI: Uplink-ID 0 of Fex 105
that is connected with Ethernet1/5 changed its status from Created to Configured
2014 Aug 24 22:40:45 N5K-A %STP-6-STATE_CREATED: Internal state created stateless
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: pss init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: conditional action being done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ARP lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: ppf lib init done
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In global cfg db
2014 Aug 24 22:40:46 N5K-A %DHCP_SNOOP-6-INFO: In vlan cfg db
2014 Aug 24 22:40:46 N5K-A last message repeated 12 times
2014 Aug 24 22:40:47 N5K-A %SYSLOG-3-SYSTEM_MSG: Syslog could not be sent to
server(10.14.113.4) : No such file or directory
----- output removed -----

Viewing NVRAM Logs
To view the NVRAM logs, use the show logging nvram [last number-lines] command. The clear logging nvram command clears the logged messages in NVRAM. Example 5-13 shows the output of the show logging nvram command.

Example 5-13 Viewing NVRAM Logs
Click here to view code image

N5K-A# sh logging nvram
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 8 08:30:22 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_FEX_ONLINE: FEX-105 On-line
2012 Jul 8 08:30:26 N5K-A %$ VDC-1 %$ %PFMA-2-FEX_STATUS: Fex 105 is online
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105
Module 1: Runtime diag detected major event: Voltage failure on power supply: 1
2012 Jul 10 09:21:20 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105
System minor alarm on power supply 1: failed
2012 Jul 10 09:24:26 N5K-A %$ VDC-1 %$ %SATCTRL-FEX105-2-SOHMS_DIAG_ERROR: FEX-105
Recovered: System minor alarm on power supply 1: failed
2013 Jan 13 01:17:35 N5K-A %$ VDC-1 %$ Jan 13 01:17:35 %KERN-2-SYSTEM_MSG: IO
Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 13 12:21:59 N5K-A %$ VDC-1 %$ Mar 13 12:21:59 %KERN-2-SYSTEM_MSG: IO
Error(set): IO space is not enabled, base_addr=0x0 - kernel
2013 Mar 23 06:52:08 N5K-A %$ VDC-1 %$ %NOHMS-2-NOHMS_ENV_ERR_TEMPMINALRM: System
minor temperature alarm on module 1, Sensor Outlet crossed minor threshold 50C
----- output removed -----
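Related to these commands, the local log file itself can be tuned. The following sketch names the file, sets the severity to informational (6), and sizes it; the name and size values are arbitrary illustrations:

N5K-A(config)# logging logfile messages 6 size 4194304
N5K-A# show logging last 10
N5K-A# clear logging logfile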

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 5-9 lists a reference of these key topics and the page numbers on which each is found.

Table 5-9 Key Topics for Chapter 5

Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:

Data Center Network Manager (DCNM)
Application Specific Integrated Circuit (ASIC)
Frame Check Sequence (FCS)
Unified Port Controller (UPC)
Unified Crossbar Fabric (UCF)
Cisco Discovery Protocol (CDP)
Bidirectional Forwarding Detection (BFD)
Unidirectional Link Detection (UDLD)
Link Aggregation Control Protocol (LACP)
Address Resolution Protocol (ARP)
Spanning Tree Protocol (STP)
Dynamic Host Configuration Protocol (DHCP)
Connectivity Management Processor (CMP)
Virtual Device Context (VDC)
Role Based Access Control (RBAC)
Simple Network Management Protocol (SNMP)
Remote Network Monitoring (RMON)
Fabric Extender (FEX)
Management Information Base (MIB)
Generic Online Diagnostics (GOLD)
Extensible Markup Language (XML)
In Service Software Upgrade (ISSU)
Field-Programmable Gate Array (FPGA)
Complex Instruction Set Computing (CISC)
Reduced Instruction Set Computing (RISC)

Part I Review
Keep track of your part review progress with the checklist shown in Table PI-1. Details on each task follow the table.

Table PI-1 Part I Review Checklist

Repeat All DIKTA Questions
For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions
For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics
Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics.

Self-Assessment Questionnaire
Each chapter of this book introduces several key learning items. This exercise lists a set of key self-assessment questions, shown in the following list, which are relevant to the particular chapter and which will help you remember the concepts and technologies of these key topics as you progress with the book. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book; if you're not certain, you should go back and refresh on the key topics.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, number 1 is about Data Center Network Architecture, number 2 is about the Cisco Nexus Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter. For example, Virtual Port Channel (vPC) would apply to Chapter 1.

Part II: Data Center Storage-Area Networking

Exam topics covered in Part II:

Describe the Network Architecture for the Storage Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity
Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management
Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs
Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

Chapter 6: Data Center Storage Architecture
Chapter 7: Cisco MDS Product Family
Chapter 8: Virtualizing Storage
Chapter 9: Fibre Channel Storage-Area Networking

Chapter 6. Data Center Storage Architecture

This chapter covers the following exam topics:

Describe the Network Architecture for the Storage-Area Network
Describe the Edge/Core Layers of the SAN
Describe Fibre Channel Storage Networking
Describe Initiator Target
Describe the Different Storage Array Connectivity
Describe VSAN
Describe Zoning
Describe Basic SAN Connectivity

Every day, thousands of devices are newly connected to the Internet. Devices previously only plugged into a power outlet are connected to the Internet and are sending and receiving tons of data at scale. Powerful technology trends include the dramatic increase in processing power, storage, and bandwidth at ever-lower costs; the rapid growth of cloud, social media, and mobile computing; the ability to analyze Big Data and turn it into actionable information; and an improved capability to combine technologies (both hardware and software) in more powerful ways. Gartner predicts that worldwide consumer digital storage needs will grow from 329 exabytes in 2011 to 4.1 zettabytes in 2016. Companies are searching for more ways to efficiently manage expanding volumes of data and to make that data accessible throughout the enterprise data centers. This demand is pushing the move of storage into the network. Storage-area networks (SAN) are the leading storage infrastructure of today. SANs offer simplified storage management, scalability, flexibility, and availability, and improved data access, movement, and backup.

This chapter discusses the function and operation of the data center storage-networking technologies. It compares Small Computer System Interface (SCSI), Fibre Channel, and network-attached storage (NAS) connectivity for remote server storage. It covers Fibre Channel protocol and operations. This chapter goes directly into the edge/core layers of the SAN design and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 6-1 “Do I Know This Already?” Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following options describe advantages of storage-area network (SAN)? (Choose all the correct answers.)
a. Consolidation
b. Storage virtualization
c. Business continuity
d. Secure access to all hosts
e. None

2. Which of the following options describe advantages of block-level storage systems? (Choose all the correct answers.)
a. Block-level storage systems are very popular with storage area networks.
b. They can support external boot of the systems connected to them.
c. Block-level storage systems are generally inexpensive when compared to file-level storage systems.
d. Each block or storage volume can be treated as an independent disk drive and is controlled by an external server OS.
e. Block-level storage systems are well suited for bulk file storage.

3. Which of the following protocols are file based? (Choose all the correct answers.)
a. CIFS
b. Fibre Channel
c. SCSI
d. NFS

4. Which of the following options are correct for Fibre Channel addressing? (Choose all the correct answers.)
a. The domain ID is an 8-bit field, but only 239 domains are available to the fabric.
b. The arbitrated loop physical address (AL-PA) is a 16-bit address.
c. Every HBA, switch, gateway, array controller, and Fibre Channel disk drive has a single unique nWWN.
d. A dual-ported HBA has three WWNs: one nWWN, and one pWWN for each port.

5. Which options describe the characteristics of Tier 1 storage? (Choose all the correct answers.)
a. Integrated large-scale disk array.
b. Centralized controller and cache system.
c. Back-up storage product.
d. It is used for mission-critical applications.
e. Low latency.

6. Which of the following options are correct for VSANs? (Choose all the correct answers.)
a. On a Cisco MDS switch, one can define 4096 VSANs.
b. Membership is typically defined using the VSAN ID to Fx ports.
c. An HBA or storage device can belong to multiple VSANs.
d. An HBA or a storage device can belong to only a single VSAN—the VSAN associated with the Fx port.

7. Which process enables an N Port to exchange information about ULP support with its target N Port, to ensure that the initiator and target process can communicate?
a. FLOGI
b. PLOGI
c. PRLI
d. PRLO

8. Which type of port is used to create an ISL on a Fibre Channel SAN?
a. E
b. F
c. TN
d. NP

9. Which of the following options should be taken into consideration during SAN design?
a. Port density and topology requirements
b. Device performance and oversubscription ratios
c. Traffic management
d. Fault isolation

10. Which mapping table is inside the front-end array controller and determines which LUNs are advertised through which storage array ports?

a. LUN mapping
b. LUN masking
c. Zoning
d. LUN zoning
e. No correct answer exists

Foundation Topics

What Is a Storage Device?

Data is stored on hard disk drives that can be both read and written on. A hard disk drive (HDD) is a data storage device used for storing and retrieving digital information using rapidly rotating disks (platters) coated with magnetic material. An HDD retains its data even when powered off. Data is read in a random-access manner, meaning individual blocks of data can be stored or retrieved in any order rather than sequentially. IBM introduced the first HDD in 1956, and HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units; most current units are manufactured by Seagate, Toshiba, and Western Digital. As of 2014, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSD). HDDs are expected to remain the dominant medium for secondary storage due to predicted continuing advantages in recording capacity, price per unit of storage, and product lifetime, but SSDs are replacing HDDs where speed, power consumption, write latency, and durability are more important considerations.

The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB, where 1 gigabyte = 1 billion bytes). HDDs are accessed over one of a number of bus types, including, as of 2011, parallel ATA (PATA, also called IDE or EIDE, described before the introduction of SATA as ATA), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel.

The basic interface for all modern drives is straightforward. The drive consists of a large number of sectors (512-byte blocks), each of which can be read or written. The sectors are numbered from 0 to n – 1 on a disk with n sectors; 0 to n – 1 is thus the address space of the drive. Multisector operations are possible; indeed, many file systems will read or write 4KB at a time (or more). Thus, we can view the disk as an array of sectors. Depending on the methods that are used to run those tasks, the read and write function can be faster or slower.

A platter is a circular hard surface on which data is stored persistently by inducing magnetic changes to it. A disk can have one or more platters; each platter has two sides, each of which is called a surface. These platters are usually made of some hard material, such as aluminum, and then coated with a thin magnetic layer that enables the drive to persistently store bits even when the drive is powered off. The platters are all bound together around the spindle, which is connected to a motor that spins the platters around (while the drive is powered on) at a constant (fixed) rate.
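The "array of sectors" model lends itself to a quick worked example. The following Python sketch is illustrative only; the sector count is a hypothetical value chosen to approximate a nominal 1-TB drive, not a figure for any particular product:

# Model a drive as a flat array of 512-byte sectors, addressed 0..n-1.
# The sector count below is hypothetical, chosen for illustration.
SECTOR_SIZE = 512                      # bytes per sector
sector_count = 1_953_525_168           # roughly a nominal "1 TB" drive

capacity_bytes = sector_count * SECTOR_SIZE
print(f"Address space: sectors 0..{sector_count - 1}")
print(f"Capacity: {capacity_bytes / 1000**4:.2f} TB (decimal prefixes)")

# A 4KB file-system read spans 8 consecutive sectors:
lba = 123_456
io_size = 4096
sectors = range(lba, lba + io_size // SECTOR_SIZE)
print(f"4KB read at LBA {lba} touches sectors {sectors.start}..{sectors.stop - 1}")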

The rate of rotation is often measured in rotations per minute (RPM), and typical modern values are in the 7,200 RPM to 15,000 RPM range. Note that we will often be interested in the time of a single rotation; for example, a drive that rotates at 10,000 RPM means that a single rotation takes about 6 milliseconds (6 ms).

Data is encoded on each surface in concentric circles of sectors; we call one such concentric circle a track. There are usually 1024 tracks on a single disk, and all corresponding tracks from all disks define a cylinder (see Figure 6-1). And cache is just some small amount of memory (usually around 8 or 16 MB), which the drive can use to hold data read from or written to the disk. For example, when reading a sector from the disk, the drive might decide to read in all of the sectors on that track and cache them in its memory; doing so enables the drive to quickly respond to any subsequent requests to the same track.

Figure 6-1 Components of Hard Disk Drive

A specific sector must be referenced through a three-part address composed of cylinder/head/sector information. Although the internal system of cylinders, tracks, and sectors is interesting, it is also not used much anymore by the systems and subsystems that use disk drives. Cylinder, track, and sector addresses have been replaced by a method called logical block addressing (LBA), which makes disks much easier to work with by presenting a single flat address space. To a large degree, logical block addressing facilitates the flexibility of storage networks by allowing many types of disk drives to be integrated more easily into a large heterogeneous storage environment.

The most widespread standard for configuring multiple hard drives is Redundant Array of Inexpensive/Independent Disks (RAID), which comes in several standard configurations and nonstandard configurations. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. The different schemes or architectures are named by the word RAID followed by a number (RAID 0, RAID 1, and so on). Each scheme provides a different balance between the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. In Chapter 8, “Virtualizing Storage,” all the RAID levels are discussed in great detail.
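Two relationships from this section are easy to verify numerically: the time of a single rotation at a given spindle speed, and the classic cylinder/head/sector-to-LBA conversion. The Python sketch below uses the widely documented CHS-to-LBA formula; the drive geometry values are hypothetical:

# Rotation time: at R rotations per minute, one rotation takes 60000/R ms.
for rpm in (7200, 10000, 15000):
    print(f"{rpm:>6} RPM -> {60_000 / rpm:.1f} ms per rotation")
# 10,000 RPM -> 6.0 ms, matching the text.

# Classic CHS -> LBA mapping (sectors are 1-based within a track):
def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# Hypothetical geometry, for illustration only:
print(chs_to_lba(c=2, h=1, s=1, heads_per_cyl=4, sectors_per_track=63))  # 567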

JBOD (derived from “just a bunch of disks”) is an architecture involving multiple hard drives, while making them accessible either as independent hard drives or as a combined (spanned) single logical volume with no actual RAID functionality. Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager.

Typically, a disk array provides increased availability, resiliency, and maintainability by using existing components (controllers, power supplies, fans, cache, disk enclosures, access ports, and so on), often up to the point where all single points of failure (SPOF) are eliminated from the design. Additionally, disk array components are often hot swappable. Disk arrays are divided into categories:

Network attached storage (NAS) arrays are based on file-level storage. In this type of storage, the storage disk is configured with a particular protocol (such as NFS, CIFS, and so on), and files are stored and accessed from it as such, in bulk. It stores files and folders and is visible as such to both the systems storing the files and the systems accessing them. File-level storage systems are more popular with NAS-based storage systems—Network Attached Storage. They can be configured with common file-level protocols like NTFS (Windows) and NFS (Linux). The file-level storage device itself can generally handle operations such as access control, integration with corporate directories, and the like. Advantages of file-level storage systems are the following: A file-level storage system is simple to implement and simple to use. File-level storage systems are generally inexpensive when compared to block-level storage systems. File-level storage systems are well suited for bulk file storage.

Storage-area network (SAN) arrays are based on block-level storage. The raw blocks (storage volumes) are created, and each block can be controlled like an individual hard drive. Generally, these blocks are controlled by the server-based operating systems. Each block or storage volume can be individually formatted with the required file system, and each block or storage volume can be treated as an independent disk drive that is controlled by the external server OS. Block-level storage systems are very popular with SANs.

The following are advantages of block-level storage systems: Block-level storage systems offer better performance and speed than file-level storage systems. Each block or storage volume can be formatted with the file system required by the application (NFS/NTFS/SMB).

Block-level storage systems are more reliable, and their transport systems are very efficient. They can support external boot of the systems connected to them. Block-level storage can be used to store files and also provide the storage required for special applications like databases, Virtual Machine File Systems (VMFS), and the like.

Primary vendors of storage systems include EMC Corporation, NetApp, IBM, Hitachi Data Systems, Hewlett-Packard, Oracle Corporation, Infortrend, Dell, and other companies that often act as OEMs for the previously mentioned vendors and do not themselves market the storage components they manufacture.

Servers are connected to the connection port of the disk subsystem using standard I/O techniques such as Small Computer System Interface (SCSI), Fibre Channel, or Internet SCSI (iSCSI) and can thus use the storage capacity that the disk subsystem provides (see Figure 6-2). The internal structure of the disk subsystem is completely hidden from the server, which sees only the hard disks that the disk subsystem provides to the server. The controller of the disk subsystem must ultimately store all data on physical hard disks.

Figure 6-2 Servers Connected to a Disk Subsystem Using Different I/O Techniques

Direct-attached storage (DAS) is often implemented within a parallel SCSI implementation. DAS is commonly described as captive storage. Devices in a captive storage topology do not have direct access to the storage network and do not support efficient sharing of storage. To access data with DAS, a user must go through some sort of front-end network. DAS devices provide little or no mobility to other servers and little scalability. DAS devices limit file sharing and can be complex to implement and manage. DAS devices require resources on the host, and spare disk systems that cannot be used on other systems, to support data backups.

Standard I/O techniques such as SCSI, Fibre Channel, Serial Attached SCSI (SAS), and Serial Storage Architecture (SSA) are being used for internal I/O channels between connection ports and controllers, as well as between controllers and internal hard disks. Sometimes, proprietary—that is, manufacturer-specific—I/O techniques are used.

The I/O channels can be designed with built-in redundancy to increase the fault-tolerance of a disk subsystem. There are four main I/O channel designs:

Active: In active cabling, the individual physical hard disks are connected via only one I/O channel; if this access path fails, it is no longer possible to access the data.

Active/passive: In active/passive cabling, the individual hard disks are connected via two I/O channels. For example, in normal operation the controller communicates with the hard disks via the first I/O channel, and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk subsystem switches from the first to the second I/O channel.

Active/active (load sharing): In this approach, all hard disks are addressed via both I/O channels. In normal operation the controller divides the load dynamically between the two I/O channels so that the available hardware can be optimally utilized. If one I/O channel fails, the communication goes through the other channel only.
Active/active (no load sharing): In this cabling method, the controller uses both I/O channels in normal operation. The hard disks are divided into two groups: in normal operation the first group is addressed via the first I/O channel and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via the other I/O channel.

A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is typically used for offline, archival data storage. A tape library, sometimes called a tape silo, tape robot, or tape jukebox, is a storage device that contains one or more tape drives, a number of slots to hold tape cartridges, a bar-code reader to identify tape cartridges, and an automated method for loading tapes (a robot). These devices can store immense amounts of data, currently ranging from 20 terabytes up to 2.1 exabytes, or multiple thousand times the capacity of a typical hard drive and well in excess of capacities achievable with network attached storage. A tape drive provides sequential access storage, unlike a disk drive, which provides random access storage. A disk drive can move to any position on the disk in a few milliseconds, but a tape drive must physically wind tape between reels to read any one particular piece of data. As a result, tape drives have very slow average seek times; however, for sequential access after the tape is positioned, tape drives can stream data very quickly, comparable to hard disk drives. As of 2010, Linear Tape-Open (LTO) supported continuous data transfer rates of up to 140 MBps.

What Is a Storage-Area Network?

The Storage Networking Industry Association (SNIA) defines the meaning of the storage-area network (SAN) as a network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file-access

services.

A SAN is a specialized, high-speed network that attaches servers and storage devices. It eliminates the traditional dedicated connection between a server and storage, and the concept that the server effectively owns and manages the storage devices. It also eliminates any restriction to the amount of data that a server can access, currently limited by the number of storage devices that are attached to the individual server. Instead, a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. A network might include many storage devices, including disk, tape, and optical storage. Additionally, the storage utility might be located far from the servers that it uses. A SAN allows an any-to-any connection across the network by using interconnect elements such as switches and directors. Figure 6-3 portrays a sample SAN topology.

Figure 6-3 Storage Connectivity

The key benefits that a SAN might bring to a highly data-dependent business infrastructure can be summarized into three concepts: simplification of the infrastructure, information life-cycle management (ILM), and business continuity.

The simplification of the infrastructure consists of six main areas. Consolidation is concentrating the systems and resources into locations with fewer, but more powerful, servers and storage pools that can help increase IT efficiency and simplify the infrastructure. In addition, centralized storage management tools can help improve scalability, availability, and disaster tolerance. Storage virtualization helps in making complexity nearly transparent and can offer a composite

view of storage assets. We talk about storage virtualization in detail in Chapter 8. Automation is choosing storage components with autonomic capabilities, which can improve availability and responsiveness and help protect data as storage needs grow. As soon as day-to-day tasks are automated, storage administrators might be able to spend more time on critical, higher-level tasks that are unique to the company's business mission. Integrated storage environments simplify system management tasks and improve security by deploying a consistent and safe infrastructure. When all servers have secure access to all data, your infrastructure might be better able to respond to the information needs of your users.

Information life-cycle management (ILM) is a process for managing information through its life cycle, from conception until intentional disposal. A SAN implementation makes it easier to manage the information life cycle because it integrates applications and data into a single-view system in which the information resides.
The ILM process manages this information in a manner that optimizes storage and maintains a high level of access at the lowest cost. By deploying a consistent and safe infrastructure.

Business continuity (BC) is about building and improving resilience in your business. It is about identifying your key products and services and the most urgent activities that underpin them; then, after that analysis is complete, it is about devising plans and strategies that will enable you to continue your business operations and enable you to recover quickly and effectively from any type of disruption, whatever its size or cause. It gives you a solid framework to lean on in times of crisis and provides stability and security. In fact, embedding BC into your business is proven to bring business benefits. SANs play a key role in business continuity, and SANs make it possible to meet any availability requirements.

A storage system consists of storage elements, storage devices, computer systems, and appliances, plus all control software, which communicates over a network. Depending on the implementation, several components can be used to build a SAN, including directors, switches, hubs, routers, gateways, and bridges. It is the deployment of these different types of interconnect entities that enables Fibre Channel networks of varying scale to be built. In smaller SAN environments, you can deploy hubs for Fibre Channel arbitrated loop topologies, or switches and directors for Fibre Channel switched fabric topologies. As SANs increase in size and complexity, Fibre Channel directors can be introduced to facilitate a more flexible and fault-tolerant configuration. Each component that composes a Fibre Channel SAN provides an individual management capability and participates in an often-complex end-to-end management environment.

Fibre Channel is a serial I/O interconnect that is capable of supporting multiple protocols, including access to open system storage (FCP), access to mainframe storage (FICON), and networking (TCP/IP). Fibre Channel supports point-to-point, arbitrated loop, and switched topologies with various copper and optical links that are running at speeds from 1 Gbps to 16 Gbps. Storage subsystems, storage devices, and server systems can be attached to a Fibre Channel SAN, so any combination of devices that can interoperate is likely to be used. The committee that is standardizing Fibre Channel is the INCITS Fibre Channel (T11) Technical Committee.

How to Access a Storage Device

Each new storage technology introduced a new physical interface, electrical interface, and storage protocol. The intelligent interface model abstracts device-specific details from the storage protocol and decouples the storage protocol from the physical and electrical interfaces. This enables multiple storage technologies to use a common storage protocol.

Files are granular containers of data created by the system or an application. The structure and logic rule used to manage the groups of information and their names is called a file system. By separating the data into individual pieces and giving each piece a name, the information is easily separated and identified. In this chapter we will be categorizing them in three major groups:

Blocks, sometimes called physical records, are sequences of bytes or bits, usually containing some whole number of records having a maximum length, a block size. This type of structured data is called blocked. Most file systems are based on a block device, which is a level of abstraction for the hardware responsible for storing and retrieving specified blocks of data, although the block size in file systems may be a multiple of the physical block size.

Records are similar: there are several record formats; the formats can be fixed-length or variable length, with different physical organizations or padding mechanisms, and metadata may be associated with the file records to define the record length, or the data might be part of the record. Different methods to access records may be provided, for example, sequential, by key, or by record number.

Files, in general, vary in the details depending on the particular system. Some file systems are used on local data storage devices; others provide file access via a network protocol.

Block-Level Protocols

Block-oriented protocols (also known as block-level protocols) read and write individual fixed-length blocks of data. SCSI is a block-level I/O protocol for writing and reading data blocks to and from a storage device. SCSI is an American National Standards Institute (ANSI) standard that is one of the leading I/O buses in the computer industry. The first version of the SCSI standard was released in 1986. Since then, SCSI has been continuously developed. The SCSI bus is a parallel bus, which comes in a number of variants. Data transfer is also governed by standards, as shown in Table 6-2.

Table 6-2 SCSI Standards Comparison

The International Committee for Information Technology Standards (INCITS) is the forum of choice for information technology developers, producers, and users for the creation and maintenance of formal daily IT standards. INCITS is accredited by and operates under rules approved by the American National Standards Institute (ANSI). These rules are designed to ensure that voluntary standards are developed by the consensus of directly and materially affected interests. The Information Technology Industry Council (ITI) sponsors INCITS (see Figure 6-4). The standard process is that a technical committee (T10, T11, T13) prepares drafts. Drafts are sent to INCITS for approval. After the drafts are approved by INCITS, drafts become standards and are published by ANSI. ANSI promotes American National Standards to ISO as a joint technical committee member (JTC-1).

Figure 6-4 Standard Groups: Storage

All SCSI devices are intelligent, but SCSI operates as a master/slave model. One SCSI device (the initiator) initiates communication with another SCSI device (the target) by issuing a command, to which a response is expected. Thus, the SCSI protocol is half-duplex by design and is considered a command/response protocol. The initiating device is usually a SCSI controller, so SCSI controllers typically are called initiators, and SCSI storage devices typically are called targets (see Figure 6-5). A SCSI controller in a modern storage array acts as a target externally and acts as an initiator internally. Also note that array-based replication software requires a storage controller in the initiating storage array to act as initiator both externally and internally.

As a medium, SCSI defines a parallel bus for the transmission of data with additional lines for the control of communication. The bus can be realized in the form of printed conductors on the circuit board or as a cable. A so-called daisy chain can connect up to 16 devices together. Over time, numerous cable and plug types have been defined that are not directly compatible with one another.

Figure 6-5 SCSI Commands, Status, and Block Data Between Initiators and Targets

SCSI targets have logical units that provide the processing context for SCSI commands. Essentially, a logical unit is a virtual machine (or virtual controller) that handles SCSI communications on behalf of real or virtual storage devices in a target. Commands received by targets are directed to the appropriate logical unit by a task router in the target controller. The logical unit number (LUN) identifies a specific logical unit (virtual controller) in a target. Although we tend to use the term LUN to refer to a real or virtual storage device, a LUN is an access point for exchanging commands and status information between initiators and targets. A logical unit can be thought of as a “black box” processor, and the LUN is a way to identify SCSI black boxes. A target must have at least one LUN, LUN 0, and might optionally support additional LUNs. For instance, a disk drive might use a single LUN, whereas a subsystem might allow hundreds of LUNs to be defined. Logical units are architecturally independent of target ports and can be accessed through any of the target's ports, via a LUN. The process of provisioning storage in a SAN storage subsystem involves defining a LUN on a particular target port and then assigning that particular target/LUN pair to a specific logical unit. An individual logical unit can be represented by multiple LUNs on different ports. For instance, a logical unit could be accessed through LUN 1 on Port 0 of a target and also accessed as LUN 8 on port 1 of the same target.
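As a way to visualize the target/LUN relationship, the following Python sketch models a target as a set of logical units reachable through LUNs, each presenting a flat array of fixed-length blocks. This is a toy illustration, not the SCSI command set; the LogicalUnit class and its methods are invented for the example:

SECTOR = 512

class LogicalUnit:
    """Toy model of a logical unit: a flat array of fixed-length blocks."""
    def __init__(self, n_blocks):
        self.data = bytearray(n_blocks * SECTOR)
    def read(self, lba, count=1):
        return bytes(self.data[lba * SECTOR:(lba + count) * SECTOR])
    def write(self, lba, payload):
        self.data[lba * SECTOR:lba * SECTOR + len(payload)] = payload

# A target exposes logical units behind LUNs; LUN 0 must exist.
target = {0: LogicalUnit(1024), 1: LogicalUnit(2048)}

target[0].write(lba=10, payload=b"hello")
print(target[0].read(lba=10)[:5])      # b'hello'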

Depending on the version of the SCSI standard, a maximum of 8 or 16 IDs are permitted per SCSI bus. A server can be equipped with several SCSI controllers, each supporting a separate string of disks. Therefore, the operating system must note three things for the differentiation of devices: controller ID, SCSI ID, and LUN. The bus, target, and LUN triad is defined from parallel SCSI technology. The bus represents one of several potential SCSI interfaces that are installed in the host, and the target represents a single disk controller on the string.

Figure 6-6 SCSI Addressing

Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they can send data through it. During the arbitration of the bus, the device that has the highest priority SCSI ID always wins. The SCSI protocol permitted only eight IDs, with the ID 7 having the highest priority. More recent versions of the SCSI protocol permit 16 different IDs. For reasons of compatibility, the IDs 7 to 0 should retain the highest priority so that the IDs 15 to 8 have a lower priority (see Figure 6-6). If the bus is heavily loaded, this can lead to devices with lower priorities never being allowed to send data. The SCSI arbitration procedure is therefore “unfair.”
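The arbitration priority rule above (IDs 7 down to 0 outrank IDs 15 down to 8) can be captured in a few lines. This Python sketch models only the winner-selection rule, not the electrical arbitration phase itself:

def scsi_priority(scsi_id: int) -> int:
    """Lower return value = higher priority. IDs 7..0 outrank 15..8."""
    return (7 - scsi_id) if scsi_id <= 7 else (8 + 15 - scsi_id)

def arbitrate(contending_ids):
    """The contender with the highest-priority ID wins the bus."""
    return min(contending_ids, key=scsi_priority)

print(arbitrate([3, 6, 12]))   # 6 wins: priority order is 7, 6, ... 0, 15, ... 8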

It is often also assumed that SCSI provides in-order delivery to maintain data integrity. In-order delivery was traditionally provided by the SCSI bus and, therefore, was not needed by the SCSI protocol layer. In other words, the SCSI protocol does not provide its own reordering mechanism; the SCSI protocol assumes that proper ordering is provided by the underlying connection technology, and the network is responsible for the reordering of transmission frames that are received out of order. This is the main reason why TCP was considered essential for the iSCSI protocol that transports SCSI commands and data transfers over IP networking equipment: TCP provided ordering while other upper-layer protocols, such as UDP, did not.

It is important to understand that while SCSI-1 and SCSI-2 represent actual standards that completely define SCSI (connectors, cables, signals, and command protocol), SCSI-3 is an all-encompassing term that refers to a collection of standards that were written as a result of breaking SCSI-2 into smaller, hierarchical modules that fit within a general framework called SCSI Architecture Model (SAM). In SCSI-3, the SCSI protocol layer sits between the operating system and the peripheral resources, so it has different functional components (see Figure 6-7). The SCSI version 3 (SCSI-3) application client resides in the host and represents the upper-layer application, file system, and operating system I/O requests. The SCSI-3 device server sits in the target device, responding to requests.

Figure 6-7 SCSI I/O Channel, Fibre Channel I/O Channel, and TCP/IP I/O Networking

Applications typically access data as files or records. Although this information might be stored on disk or tape media in the form of data blocks, retrieval of the file requires a hierarchy of functions. These functions assemble raw data blocks into a coherent file that an application can manipulate. In SCSI-3, even faster bus types are introduced, along with serial SCSI buses that reduce the cabling overhead and allow a higher maximum bus length.

As always, the demands and needs of the market push for new technologies; there is always a push for faster communications without limitations on distance or on the number of connected devices. It is at this point where the Fibre Channel model is introduced. SCSI messages and data can be transported in several ways:

Parallel SCSI cable: This transport is mainly used in traditional deployments. Latency is low, but distance is limited to 25 m and the bus is half-duplex, so data can flow in only one direction at a time.

Fibre Channel cable: This transport is the basis for a traditional SAN deployment. A SAN is Fibre Channel-based (FC-based). SCSI is carried in the payload of a Fibre Channel frame between Fibre Channel ports. Fibre Channel has a lossless delivery mechanism by using buffer-to-buffer credits (BB_Credits), which are explained in detail later in this chapter. Latency is low and bandwidth is high (16 Gbps).

iSCSI is SCSI over TCP/IP: Internet Small Computer System Interface (iSCSI) is a transport protocol that carries SCSI commands from an initiator to a target. It is a data storage networking protocol that transports standard SCSI requests over the standard Transmission Control Protocol/Internet Protocol (TCP/IP) networking technology. iSCSI enables the implementation of IP-based SANs, enabling clients to use the same networking technologies for both storage and data networks. Because it uses TCP/IP, iSCSI is also suited to run over almost any physical network. By eliminating the need for a second network technology just for storage, iSCSI has the potential to lower the costs of deploying networked storage.

Fibre Channel connection (FICON) architecture: An enhancement of, rather than a replacement for, the traditional IBM Enterprise Systems Connection (ESCON) architecture. FICON is a prerequisite for IBM z/OS systems to fully participate in a heterogeneous SAN, where the SAN switch devices allow the mixture of open systems and mainframe traffic. FICON is a protocol that uses Fibre Channel as its physical medium.

In particular, the SCSI protocol is suitable for block-based, structured applications such as database applications that require many I/O operations per second (IOPS) to achieve high performance. IOPS (Input/output Operations Per Second, pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage-area networks (SAN). The specific number of IOPS possible in any system configuration will vary greatly, depending on the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and queue depth, and the data block sizes. For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle. Often, sequential IOPS are reported as a simple MB/s number as follows:

IOPS × TransferSizeInBytes = BytesPerSec (and typically this is converted to megabytes per second)
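As a worked example of the formula above, the following Python sketch converts IOPS figures at a given transfer size into megabytes per second. The sample numbers are hypothetical, chosen only to contrast an HDD-like and an SSD-like profile:

def iops_to_mbps(iops: int, transfer_size_bytes: int) -> float:
    """Sequential throughput implied by an IOPS figure at a given I/O size."""
    return iops * transfer_size_bytes / 1_000_000  # MB/s, decimal megabytes

# Hypothetical figures for illustration:
print(iops_to_mbps(200, 64 * 1024))        # HDD-like: 200 IOPS @ 64KB ~ 13.1 MB/s
print(iops_to_mbps(50_000, 4 * 1024))      # SSD-like: 50k IOPS @ 4KB ~ 204.8 MB/s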

FICON channels can achieve data rates up to 850 MBps and extend the channel distance (up to 100 km). FICON can also increase the number of control unit images per link and the number of device addresses per control unit link. The protocol can also retain the topology and switch management characteristics of ESCON.

Fibre Channel over IP (FCIP): Also known as Fibre Channel tunneling or storage tunneling, it is a method to allow the transmission of Fibre Channel information to be tunneled through the IP network. FCIP encapsulates Fibre Channel block data and then transports it over a TCP socket. TCP/IP services are used to establish connectivity between remote SANs. Any congestion control and management and any data error and data loss recovery are handled by TCP/IP services and do not affect Fibre Channel fabric services. The major consideration with FCIP is that it does not replace Fibre Channel with IP; it allows deployments of Fibre Channel fabrics by using IP tunneling. The assumption that this might lead to is that the industry decided that Fibre Channel-based SANs are more than appropriate. Another possible assumption is that the only need for the IP connection is to facilitate any distance requirement that is beyond the current scope of an FCP SAN. Because most organizations already have an existing IP infrastructure, the attraction of being able to link geographically dispersed SANs, at a relatively low cost, is enormous.

Fibre Channel over Ethernet (FCoE): This transport replaces the Fibre Channel cabling with 10 Gigabit Ethernet cables and provides lossless delivery over unified I/O. FCoE is discussed in detail in Chapters 10 and 11.

Figure 6-8 portrays the networking stack comparison for all block I/O protocols.

Figure 6-8 Networking Stack Comparison for All Block I/O Protocols

Fibre Channel provides high-speed transport for SCSI payload via Host Bus Adapter (HBA). HBAs are I/O adapters that are designed to maximize performance by performing protocol-processing functions in silicon. HBAs are roughly analogous to network interface cards (NIC), but HBAs are optimized for SANs and provide features that are specific to storage. (See Figure 6-9.) Offloading these functions is necessary to provide the performance that storage networks require.

Fibre Channel overcomes many shortcomings of parallel I/O and combines the best attributes of a channel and a network together, including addressing for up to 16 million nodes, loop (shared) and fabric (switched) transport, host speeds of 100 to 1600 MBps (1–16 Gbps), and support for multiple protocols.

Figure 6-9 Comparison Between Fibre Channel HBA and Ethernet NIC

Figure 6-9 contrasts HBAs with NICs, illustrating that HBAs offload protocol-processing functions into silicon. With NICs, software drivers perform protocol-processing functions such as flow control, sequencing, segmentation and reassembly, and error correction. The HBA offloads these protocol-processing functions onto the HBA hardware with some combination of an ASIC and firmware.

Fibre Channel Protocol (FCP) has an ANSI-based layered architecture that can be considered a general transport vehicle for upper-layer protocols (ULP) such as SCSI command sets, HIPPI data framing, Internet Protocol (IP), Fibre Channel connection (FICON), and others. Figure 6-10 shows an overview of the Fibre Channel model. The diagram shows the Fibre Channel, which is divided into four lower layers (FC-0, FC-1, FC-2, and FC-3) and one upper layer (FC-4). FC-4 is where the upper-level protocols are used.

Figure 6-10 Fibre Channel Protocol Architecture

These are the main functions of each Fibre Channel layer:

FC-4 upper-layer protocol (ULP) mapping: Provides protocol mapping to identify the ULP, such as SCSI-3, that is encapsulated into a protocol data unit (PDU) for delivery to the FC-2 layer. FCP is the FC-4 mapping of SCSI-3 onto FC. Note that FCP does not define its own header; instead, fields within the FC-2 header are used by FCP.

FC-3 generic services: Provides the Fibre Channel Generic Services (FC-GS) that are required for fabric management, alias address definition, protocols for hunt group and multicast management, and stacked connect-requests. Specifications exist here but are rarely implemented.

FC-2 framing and flow control: Provides the framing and flow control that is required to transport the ULP over Fibre Channel. FC-2 functions include several classes of service, frame format definition, sequence disassembly and reassembly, exchange management, and address assignment.

FC-1 encoding: Defines the transmission protocol that includes the serial encoding, decoding, and error control.

FC-0 physical interface: Provides physical connectivity, including cabling, connectors, and so on.

In the initial FC specification, rates of 12.5 MBps, 25 MBps, 50 MBps, and 100 MBps were introduced on several different copper and fiber media. Additional rates were subsequently introduced, including 200 MBps and 400 MBps. Fibre Channel throughput is commonly expressed in bytes per second rather than bits per second: 100 MBps FC is also known as 1-Gbps FC, 200 MBps as 2-Gbps, and 400 MBps as 4-Gbps. Table 6-3 shows the evolution of Fibre Channel speeds.

The FC-PH specification defines baud rate as the encoded bit rate per second. The FC-PI specification redefines baud rate more accurately and states explicitly that FC encodes 1 bit per baud, which means the baud rate and raw bit rate are equal. 1-Gbps FC operates at 1.0625 GBaud, provides a raw bit rate of 1.0625 Gbps, and provides a data bit rate of 850 Mbps. 2-Gbps FC operates at 2.125 GBaud, provides a raw bit rate of 2.125 Gbps, and provides a data bit rate of 1.7 Gbps. 4-Gbps FC operates at 4.25 GBaud, provides a raw bit rate of 4.25 Gbps, and provides a data bit rate of 3.4 Gbps. FC-1 variants up to and including 8 Gbps use the same encoding scheme (8B/10B) as GE fiber optic variants. To derive ULP throughput, the FC-2 header and interframe spacing overhead must be subtracted. Assuming the maximum payload (2112 bytes) and no optional FC-2 headers, the basic FC-2 header adds 36 bytes of overhead and inter-frame spacing adds another 24 bytes; the ULP throughput rate is therefore 826.519 Mbps, 1.65304 Gbps, and 3.30608 Gbps for 1 Gbps, 2 Gbps, and 4 Gbps, respectively. These ULP throughput rates are available directly to SCSI.
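The encoding and framing arithmetic in this section can be checked directly. The Python sketch below derives the data bit rate from the baud rate and the 8B/10B coding efficiency, and then the ULP throughput from the 2112-byte payload, 36-byte FC-2 header, and 24-byte inter-frame spacing given above:

# Data bit rate = line rate x coding efficiency (8B/10B carries 8 data bits
# per 10 transmitted bits); ULP rate = data rate x payload fraction per frame.
PAYLOAD, HEADER, GAP = 2112, 36, 24     # bytes, from the text above

def data_rate_gbps(gbaud):
    return gbaud * 8 / 10               # 8B/10B encoding

def ulp_rate_gbps(gbaud):
    return data_rate_gbps(gbaud) * PAYLOAD / (PAYLOAD + HEADER + GAP)

for name, gbaud in (("1GFC", 1.0625), ("2GFC", 2.125), ("4GFC", 4.25)):
    print(name, round(data_rate_gbps(gbaud), 3), round(ulp_rate_gbps(gbaud), 5))
# 1GFC 0.85 0.82652, 2GFC 1.7 1.65304, 4GFC 3.4 3.30608 -- matching the text.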

Unlike the 10-Gbps FC standards, 16-Gbps FC provides backward compatibility with 4-Gbps FC and 8-Gbps FC. The 1-Gbps FC, 2-Gbps FC, 4-Gbps FC, and 8-Gbps FC designs all use 8B/10B encoding, and the 10GFC and 16GFC standards use 64B/66B encoding.

Table 6-3 Evolution of Fibre Channel Speeds

File-Level Protocols

File-oriented protocols (also known as file-level protocols) read and write variable-length files. Files are segmented into blocks before being stored on disk or tape. Common Internet File System (CIFS) and Network File System (NFS) are file-based protocols that are used for reading and writing files across a network. CIFS is found primarily on Microsoft Windows servers (a Samba service implements CIFS on UNIX systems), and NFS is found primarily on UNIX and Linux servers.

The theory of client/server architecture is based on the concept that one computer has the resources that another computer requires. The system with the resources is called the server, and the system that requires the resources is called the client. Examples of resources are e-mail, database, and files; these resources can be made available through NFS. The client and the server communicate with each other through established protocols. A distributed (client/server) network might contain multiple servers and multiple clients, or multiple clients and one server. The configuration of the network depends on the resource requirement of the environment. The benefits of client/server architecture include cost reduction because of hardware and space requirements: the local workstations do not need as much disk space because commonly used data can be stored on the server.

Other benefits include centralized support (backups and maintenance) that is performed on the server, to allow for easy recovery in the event of server failure.

In Figure 6-11, the server in the network is a network appliance storage system, and the client can be one of many versions of the Unix or Linux operating system. NFS is a widely used protocol for sharing files across networks, and NFS is stateless. One benefit of NAS storage is that files can be shared easily between users within a workgroup. The storage system provides services to the client. These services include mount daemon (mountd), NFS daemon (nfsd), quota daemon (rquot_1_main), Status Monitor (sm_1_main), Network Lock Manager (nlm_main), and portmap (also known as rpcbind). Each service is required for successful operation of an NFS process. For example, a client cannot mount a resource if mountd is not running on the server. Similarly, if rpcbind is not running on the server, NFS communication cannot be established between the client and the server.

Figure 6-11 Block I/O Versus File I/O

Block I/O and File I/O are complementary solutions for reading and writing data to and from storage devices. Block I/O uses the SCSI protocol to transfer data in 512-byte blocks. SCSI is an efficient protocol and is the accepted standard within the industry; SCSI may be transported over many physical media. SCSI is mostly used in Fibre Channel SANs to transfer blocks of data between servers and storage arrays in a high-bandwidth, low-latency, redundant SAN. Block I/O protocols are suitable for well-structured applications, such as databases that transfer small blocks of data when updating fields, records, and tables. CIFS and NFS are verbose protocols that send thousands of messages between the servers and NAS devices when reading and writing files. These file-based protocols are suitable for file-based applications: Microsoft Office, Microsoft SharePoint, and File and Print services.

In the scenario portrayed in Figure 6-11, file-level data access applies to IP front-end networks, and block-level data access applies to Fibre Channel (FC) back-end networks. Front-end data communications are considered host-to-host or client/server; back-end data communication is host-to-storage or program-to-device. Front-end protocols include FTP, NFS, SMB (server message block—Microsoft NT), CIFS (common Internet file-system), or NCP (Netware Core protocol—Novell). Back-end protocols include SCSI, FCoE, iSCSI, and IDE, along with file-system formatting standards such as FAT and NTFS.

Putting It All Together

Fibre Channel SAN, FCoE SAN, iSCSI SAN, and NAS are four techniques with which storage networks can be realized (see Figure 6-12). In Fibre Channel, FCoE, and iSCSI, the data exchange between servers and storage devices takes place in a block-based fashion; in contrast, NAS transfers files or file fragments. Network Attached Storage (NAS) appliances reside on the front-end. NAS servers are turnkey file servers. Storage networks can be realized with NAS servers by installing an additional LAN between the NAS server and the application servers. On the other hand, NAS servers have only limited suitability as data storage for databases due to lack of performance. In contrast to NAS, SANs include the back end along with front-end components, and storage networks are more difficult to configure.

At this stage, you already figured out why Fibre Channel was introduced. SCSI is limited by distance, speed, number of storage devices per chain, lack of device sharing, and management flexibility. Fibre Channel was introduced to overcome these limitations, and it supplies optimal performance for the data exchange between server and storage device. Fibre Channel offers a greater sustained data rate (1 GFC–16 GFC at the time of writing this book), loop or

switched networks for device sharing and low latency, virtually unlimited devices in a fabric, distances of hundreds of kilometers or greater with extension devices, nondisruptive device addition, and centralized management with local or remote access.

Figure 6-12 DAS, SAN, FCoE, and NAS Comparison

Storage Architectures

Storage systems are designed in a way that customers can purchase some amount of capacity initially, with the option of adding more as the existing capacity fills up with data. Increasing storage capacity is an important task for keeping up with data growth. This is the reason IT organizations tend to spend such a high percentage of their budget on storage every year.

Scale-up and Scale-out Storage

Storage systems increase their capacity in one of two ways: by adding storage components to an existing system or by adding storage systems to an existing group of systems. Scaling architectures determine how much capacity can be added, how quickly it can be added, and how much work and planning is involved. An architecture that adds capacity by adding storage components to an existing system is called a scale-up architecture, and products designed with it are generally classified as monolithic storage. A scale-up solution is a way to increase only the capacity, but not the number of the storage controllers. The cost of the added capacity depends on the price of the devices, which can be relatively high with some storage systems. Scale-up storage systems tend to be the least flexible and involve the most planning and longest service interruptions when adding capacity. High availability with scale-up storage is achieved through component redundancy that eliminates single points of failure, and by software upgrade processes that take several steps to complete, allowing system updates to be rolled back.

An architecture that adds capacity by adding individual storage systems, or nodes, to a group of systems is called a scale-out architecture. Products designed with this architecture are often classified as distributed. A scale-out solution is like a set of multiple arrays with a single logical view. In addition, the lifespan of storage products built on these architectures is typically four years or less, considering the impact on application availability, facilities, and maintenance costs.

Tiered Storage

Many storage environments support a diversity of needs and use disparate technologies that cause storage sprawl. In a large-scale storage infrastructure, this environment yields a suboptimal storage design that can be improved only with a focus on data access characteristics analysis and management to provide optimum performance. The Storage Networking Industry Association (SNIA) standards group defines tiered storage as “storage that is physically partitioned into multiple distinct classes based on price, performance or other attributes. Data may be dynamically moved among classes in a tiered storage implementation

based on access activity or other considerations.” Tiered storage is a mix of high-performing/high-cost storage with low-performing/low-cost storage, placing data based on specific characteristics, such as the performance needs, age, and importance of data availability. In Figure 6-13, you can see the concept and a cost versus performance relationship of a tiered storage environment.

Figure 6-13 Cost Versus Performance in a Tiered Storage Environment

Typically, an optimal design keeps the active operational data in Tier 0 and Tier 1 and uses the benefits associated with a tiered storage approach mostly related to cost, because lower-tier storage is less expensive. A tiered storage approach can provide the performance you need and save significant costs associated with storage. By introducing solid-state drive (SSD) storage as tier 0, you might more efficiently address the highest performance needs while reducing the Enterprise-class storage, system footprint, and energy costs. Environmental savings, such as energy, footprint, and cooling reductions, are possible. However, the overall management effort increases when managing storage capacity and storage performance needs across multiple storage classes. Table 6-4 lists the different storage tier levels.

Table 6-4 Comparison of Storage Tier Levels .
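To make the idea of dynamic data placement concrete, here is a deliberately simplified Python sketch of a tier-assignment policy. The thresholds and the tier numbering are made up for illustration; real tiering engines weigh far more inputs (IOPS history, business value, compliance requirements):

from datetime import datetime, timedelta

# Illustrative placement policy only; the thresholds below are hypothetical.
def choose_tier(last_access: datetime, mission_critical: bool) -> int:
    age = datetime.now() - last_access
    if mission_critical and age < timedelta(days=1):
        return 0                      # SSD: hottest data, lowest latency
    if age < timedelta(days=30):
        return 1                      # enterprise disk array
    if age < timedelta(days=365):
        return 2                      # midrange/capacity storage
    return 3                          # archival storage (e.g., tape library)

print(choose_tier(datetime.now() - timedelta(days=90), mission_critical=False))  # 2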

SAN Design

SAN design doesn't have to be rocket science. Modern SAN design is about deploying ports and switches in a configuration that provides flexibility and scalability. It is also about making sure the network design and topology look as clean and functional one, two, or five years later as the day they were first deployed.

Figure 6-14 illustrates typical dual-path FC SAN designs. In the traditional FC SAN design, each host and storage device is dual-attached to the network. This is primarily motivated by a desire to achieve 99.999 percent availability. The FC network is built on two separate networks (commonly called path A and path B), and each end node (host or storage) is connected to both networks. To achieve 99.999 percent availability, the network should be built with redundant switches that are not interconnected. Some companies take the same approach with their traditional IP/Ethernet networks, but most do not for reasons of cost. Because the traditional FC SAN design doubles the cost of network implementation, many companies are actively seeking alternatives. Some companies are looking to iSCSI and FCoE as the answer, and others are considering single-path FC SANs.

Figure 6-14 SAN A and SAN B FC Networks

Infrastructures for data access necessarily include options for redundant server systems. Redundant server systems and filing systems are created through one of two approaches: clustered systems or server farms.

Although server systems might not necessarily be thought of as storage elements, they are clearly key pieces of data access infrastructures. From a storage I/O perspective, servers are the starting point for most data access. Farms are loosely coupled individual systems that have common access to shared data, and clusters are tightly coupled systems that function as a single, fault-tolerant system. Server filing systems are clearly in the domain of storage.

Multipathing establishes two or more SCSI communication connections between a host system and the storage it uses. If one of these communication connections fails, another SCSI communication connection is used in its place. Multipathing software depends on having redundant I/O paths between systems and storage. This includes switches and network cables that are being used to transfer I/O data between a computer and its storage. Figure 6-15 portrays two types of multipathing: Active/Active, with balanced I/O over both paths (implementation specific), and Active/Passive, with I/O over the primary path that switches to the standby path upon failure.

Figure 6-15 Storage Multipathing Failover

A single multiport HBA can have two or more ports connecting to the SAN that can be used by multipathing software. However, changing an I/O path involves changing the initiator used for I/O transmissions and, by extension, all downstream connections. In general, although multiport HBAs provide path redundancy, most current multipathing implementations use dual HBAs to provide redundancy for the HBA adapter.
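The two multipathing modes in Figure 6-15 can be sketched behaviorally. This Python fragment is illustrative only: the path names are invented, and production multipathing is implemented by OS-level or driver-level software operating on SCSI paths, not strings:

import itertools

# Behavioral sketch of the two modes; assumes at least one healthy path.
paths = ["hba0->fabricA", "hba1->fabricB"]
healthy = {p: True for p in paths}

def active_passive():
    """All I/O on the primary path; fail over only when it is down."""
    return paths[0] if healthy[paths[0]] else paths[1]

rr = itertools.cycle(paths)
def active_active():
    """Balance I/O across every healthy path (simple round-robin)."""
    for p in rr:
        if healthy[p]:
            return p

print(active_passive())                # hba0->fabricA
healthy["hba0->fabricA"] = False
print(active_passive())                # hba1->fabricB after failover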

Multipathing is not the only automated way to recover from a network problem in a SAN. SAN switches use the Fabric Shortest Path First (FSPF) routing protocol to converge new routes through the network following a change to the network configuration, including link or switch failures. FSPF is the standard routing protocol used in Fibre Channel fabrics. FSPF automatically calculates the best path between any two devices in a fabric by dynamically computing routes, establishing the shortest and quickest path between the two devices. It also selects an alternative path in the event of failure of the primary path. Multipathing software in a host system can use new network routes that have been created in the SAN by switch routing algorithms. As long as the new network route allows the storage path's initiator and LUN to communicate, the storage process uses the route. This depends on switches in the network recognizing a change to the network and completing their route convergence process prior to the I/O operation timing out in the multipathing software. Considering this, it could be advantageous to implement multipathing so that the timeout values for storage paths exceed the times needed to converge new network routes.

SAN Design Considerations
The underlying principles of SAN design are relatively straightforward: plan a network topology that can handle the number of ports necessary now and into the future; design a network topology with a given end-to-end performance and throughput level in mind, taking into account any physical requirements of a design; and provide the necessary connectivity with remote data centers to handle the business requirements of business continuity and disaster recovery. These underlying principles fall into five general categories:

Port density and topology requirements: Number of ports required now and in the future
Device performance and oversubscription ratios: Determination of what is acceptable and what is unavoidable
Traffic management: Preferential routing or resource allocation
Fault isolation: Consolidation while maintaining isolation
Control plane scalability: Reduced routing complexity

1. Port Density and Topology Requirements
The single most important factor in determining the most suitable SAN design is the number of end ports required now and over the anticipated lifespan of the design. As an example, the design for a SAN that will handle a network with 100 end ports will be very different from the design for a SAN that has to handle a network with 1500 end ports. From a design standpoint, it is typically better to overestimate the port count requirements than to underestimate them. Designing for a 1500-port SAN does not necessarily imply that 1500 ports need to be purchased initially, or even at all. It is about helping ensure that a design remains functional if that number of ports is attained, rather than later finding the design unworkable, because redesigning and reengineering a network topology become both more time-consuming and more difficult as the number of devices on a SAN expands.

Figure 6-16 portrays the SAN major design factors.

Figure 6-16 SAN Major Design Factors

As a minimum, the lifespan for any design should encompass the depreciation schedule for equipment, typically three years or more. Preferably, a design should last longer than this, although it is difficult to predict future requirements and capabilities. For new environments, determining the approximate port count requirements is not difficult: you can plan based on an estimate of the immediate server connectivity requirements, coupled with an estimated growth rate of 30 percent per year. Where existing SAN infrastructure is present, it is more difficult to determine future port-count growth requirements, but once again, you can use the current number of end-port devices and the increase in the number of devices during the previous 6, 12, and 18 months as rough guidelines for the projected growth in the number of end-port devices in the future.

A design should also consider physical space requirements. For example, is the data center all on one floor? Is it all in one building? Is there a desire to use lower-cost connectivity options such as iSCSI for servers with minimal I/O requirements? Do you want to use IP SAN extension for disaster recovery connectivity? Any design should also consider increases in future port speeds, protocols, and densities; unused module slots in switches that have a proven investment protection record open the possibility of future expansion.
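The 30 percent annual growth guideline lends itself to a quick sizing check. The following Python sketch compounds that assumed growth rate over a depreciation-style lifespan; the base port count and planning window are hypothetical illustration values, not figures from the text:

# Minimal sketch: project end-port requirements from an assumed base
# count using the 30 percent annual growth guideline cited above.
def project_ports(current_ports: int, years: int, growth_rate: float = 0.30) -> int:
    """Return the projected end-port count after compounding growth."""
    ports = float(current_ports)
    for _ in range(years):
        ports *= 1 + growth_rate
    return round(ports)

# Hypothetical fabric with 400 end ports today, sized over three years.
for year in (1, 2, 3):
    print(f"Year {year}: ~{project_ports(400, year)} end ports")
# Year 1: ~520, Year 2: ~676, Year 3: ~879 -> design toward roughly 900 ports

The point is not the exact numbers but the compounding: a design sized only for today's port count is overtaken quickly.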

2. Device Performance and Oversubscription Ratios
Oversubscription, in a SAN switching environment, is the practice of connecting multiple devices to the same switch port to optimize switch use. Oversubscription is a necessity of any networked infrastructure and directly relates to the major benefit of a network: to share common resources among numerous clients. The higher the rate of oversubscription, the lower the cost of the underlying network infrastructure and shared resources. When considering all the performance characteristics of the SAN infrastructure and the servers and storage devices, two oversubscription metrics must be managed: the IOPS and the network bandwidth capacity of the SAN. The two metrics are closely related, although they pertain to different elements of the SAN. IOPS performance relates only to the servers and storage devices and their ability to handle high numbers of I/O operations, whereas bandwidth capacity relates to all devices in the SAN, including the SAN infrastructure itself. Figure 6-17 portrays the major SAN oversubscription factors.

On the server side, the required bandwidth is strictly derived from the I/O load, which in turn is derived from factors including I/O size, the percentage of reads versus writes, CPU capacity, application I/O requests, and the I/O service time from the target device. On the storage side, the supported bandwidth is again strictly derived from the IOPS capacity of the disk subsystem itself, including the system architecture, cache, disk controllers, and actual disks. Because storage subsystem I/O resources are not commonly consumed at 100 percent all the time by a single client, a fan-out ratio of storage subsystem ports can be achieved based on the I/O demands of various applications and server platforms. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections; these recommendations are often in the range of 7:1 to 15:1. Key factors in deciding the optimum fan-out ratio are server HBA queue depth, storage device input/output operations per second (IOPS), and port throughput. It is important to know the fan-out ratio in a SAN design so that each server gets optimal access to storage resources. When the fan-out ratio is too high and the storage array becomes overloaded, application performance is affected negatively. Too low a fan-out ratio, however, results in an uneconomic use of storage. SAN switch ports are rarely run at their maximum speed for a prolonged period; multiple slower devices may fan in to a single port to take advantage of unused capacity.

Note
A fan-out ratio is the relationship in quantity between a single port on a storage device and the number of servers that are attached to it. Fan-in is how many storage ports can be served from a single host channel.
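As a rough illustration of the guideline in the Note, the following Python sketch (all counts are hypothetical) checks a proposed configuration against the 7:1 to 15:1 range quoted above:

# Minimal sketch: compare a storage-port fan-out ratio against the
# 7:1 to 15:1 vendor guideline range mentioned in the text.
def fan_out_ratio(server_connections: int, storage_ports: int) -> float:
    """Servers attached per storage subsystem client-side port."""
    return server_connections / storage_ports

ratio = fan_out_ratio(server_connections=96, storage_ports=8)  # 12.0
low, high = 7, 15
if ratio < low:
    print(f"{ratio:.0f}:1 is below {low}:1 - storage ports may sit underused")
elif ratio > high:
    print(f"{ratio:.0f}:1 exceeds {high}:1 - risk of overloading the array")
else:
    print(f"{ratio:.0f}:1 is within the {low}:1 to {high}:1 guideline")

In practice the right ratio also depends on the queue-depth, IOPS, and throughput factors listed above; the arithmetic is only the starting point.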

Figure 6-17 SAN Oversubscription Design Considerations

In most cases, neither application server HBAs nor disk subsystem client-side controllers can handle full wire-rate sustained bandwidth. Although ideal-scenario tests can be contrived using large I/Os, large CPUs, and sequential I/O operations to show wire-rate performance, this is far from a practical real-world implementation. In more common scenarios, server-side resources, I/O composition, and application I/O patterns do not result in sustained full-bandwidth utilization. Because of this fact, oversubscription can be safely factored into SAN design. However, you must account for burst I/O traffic, which might temporarily require high-rate I/O service.

The general principle in optimizing design oversubscription is to group applications or servers that burst high I/O rates at different time slots within the daily production cycle. This grouping can examine either complementary application I/O profiles or careful scheduling of I/O-intensive activities such as backups and batch jobs. In this case, peak-time I/O traffic contention is minimized, and the SAN design oversubscription has little effect on I/O contention.

Best practice would be to build a SAN design using a topology that derives a relatively conservative oversubscription ratio (for example, 8:1), coupled with monitoring of the traffic on the switch ports connected to storage arrays and Inter-Switch Links (ISL) to see whether bandwidth is a limiting factor. If bandwidth is not the limiting factor, application server performance is acceptable, and application performance can be monitored closely, the oversubscription ratio can be increased gradually to a level that both maximizes performance and minimizes cost.
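The same arithmetic applies to the 8:1 starting point for design oversubscription. This Python sketch (hypothetical port counts and speeds) derives the host-to-ISL ratio for a single edge switch:

# Minimal sketch: host-to-ISL oversubscription for one edge switch,
# compared with the conservative 8:1 starting point suggested above.
def oversubscription(host_ports: int, host_gbps: float,
                     isl_ports: int, isl_gbps: float) -> float:
    """Aggregate host bandwidth divided by aggregate ISL bandwidth."""
    return (host_ports * host_gbps) / (isl_ports * isl_gbps)

# 48 host ports at 4 Gbps uplinked over 2 x 16 Gbps ISLs -> 6:1
ratio = oversubscription(host_ports=48, host_gbps=4, isl_ports=2, isl_gbps=16)
verdict = "within" if ratio <= 8 else "beyond"
print(f"Design oversubscription: {ratio:.0f}:1 ({verdict} the 8:1 starting point)")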

3. Traffic Management
Are there any differing performance requirements for different application servers? Should bandwidth be reserved for, or preference given to, certain traffic in the case of congestion? Given two alternate traffic paths between data centers with differing distances, should traffic use one path in preference to the other? For some SAN designs it makes sense to implement traffic management policies that influence traffic flow and relative traffic priorities.

4. Fault Isolation
Consolidating multiple areas of storage into a single physical fabric both increases storage utilization and reduces the administrative overhead associated with centralized storage management. The major drawback is that faults are no longer isolated within individual storage areas. Many organizations would like to consolidate their storage infrastructure into a single physical fabric, but both technical and business challenges make this difficult. Technology such as virtual SANs (VSANs; see Figure 6-18) enables this consolidation while increasing the security and stability of Fibre Channel fabrics by logically isolating devices that are physically connected to the same set of switches. Faults within one fabric (VSAN) are contained within that fabric and are not propagated to other fabrics.

Figure 6-18 Fault Isolation with VSANs

5. Control Plane Scalability
A SAN switch can be logically divided into two parts: a data plane, which handles the forwarding of data frames within the SAN, and a control plane, which handles switch management functions, routing protocols such as Fabric Shortest Path First (FSPF), Fibre Channel frames destined for the switch itself, name server and domain-controller queries, routing updates and keepalives, and other Fibre Channel fabric services. Because the control plane is critical to network operations, any service disruption to the control plane can result in business-impacting network outages. Control plane service disruptions (perpetrated either inadvertently or maliciously) are possible, typically through a high rate of traffic destined to the switch itself. These disruptions result in excessive CPU utilization and/or deprive the switch of the CPU resources needed for normal processing. Control plane scalability is the primary reason storage vendors set limits on the number of switches and devices they have certified and qualified for operation in a single fabric.

Control plane CPU deprivation can also occur when there is insufficient control plane CPU relative to the size of the network topology and a networkwide event (for example, the loss of a major switch or a significant change in topology) occurs. Although FSPF itself provides for optimal routing between nodes, the Dijkstra algorithm on which it is commonly based has a worst-case running time that is the square of the number of nodes in the fabric. That is, doubling the number of devices in a SAN can result in a quadrupling of the CPU processing required to maintain that routing. A goal of SAN design should be to try to minimize the processing required with a given SAN topology. Attention should be paid to the CPU and memory resources available for control plane functionality and to port aggregation features such as Cisco port channels, which provide all the benefits of multiple parallel ISLs between switches (higher throughput and resiliency) but appear in the topology as only a single logical link rather than multiple parallel links. A sketch of the underlying shortest-path computation follows.
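To make the Dijkstra cost concrete, here is a minimal Python sketch of the shortest-path computation that FSPF rests on, run over a toy four-switch fabric. The link costs are invented; real FSPF derives costs from link bandwidth and recomputes per fabric topology change, so this illustrates only the algorithmic core:

# Minimal Dijkstra sketch over a toy fabric: two core switches (C1, C2)
# and two edge switches (E1, E2), with hypothetical link costs.
import heapq

def dijkstra(fabric: dict, source: str) -> dict:
    """Return the lowest total path cost from source to every switch."""
    dist = {switch: float("inf") for switch in fabric}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        cost, here = heapq.heappop(heap)
        if cost > dist[here]:
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, link_cost in fabric[here].items():
            candidate = cost + link_cost
            if candidate < dist[neighbor]:
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist

fabric = {
    "E1": {"C1": 10, "C2": 10},
    "E2": {"C1": 10, "C2": 10},
    "C1": {"E1": 10, "E2": 10, "C2": 5},
    "C2": {"E1": 10, "E2": 10, "C1": 5},
}
print(dijkstra(fabric, "E1"))
# {'E1': 0, 'E2': 20, 'C1': 10, 'C2': 10}

Because every switch runs this computation over the whole topology database, growth in node count inflates control plane work roughly quadratically, which is exactly why port channels (one logical link instead of many parallel links) help.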

SAN Topologies
The Fibre Channel standard defines three different topologies: fabric, arbitrated loop, and point-to-point (see Figure 6-19). Fabric defines a network in which several devices can exchange data simultaneously at full bandwidth; a fabric basically requires one or more Fibre Channel switches connected together to form a control center between the end devices. Arbitrated loop defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time. Point-to-point defines a bidirectional connection between two devices. Finally, the standard permits the connection of one or more arbitrated loops to a fabric. The fabric topology is the most frequently used of all topologies, and this is why more emphasis is placed upon the fabric topology than on the two other topologies in this chapter.

Figure 6-19 Fibre Channel Topologies

Common to all topologies is that devices (servers, storage devices, and switches) must be equipped with one or more Fibre Channel ports. In servers, the port is generally realized by means of so-called HBAs. A port always consists of two channels, one input and one output channel. The connection between two ports is called a link. In the point-to-point topology and in the fabric topology, the links are always bidirectional: the input channel and the output channel of the two ports involved in the link are connected by a cross, so that every output channel is connected to an input channel. On the other hand, the links of the arbitrated loop topology are unidirectional: each output channel is connected to the input channel of the next port until the circle is closed. The cabling of an arbitrated loop can be simplified with the aid of a hub. In this configuration the end devices are bidirectionally connected to the hub, and the wiring within the hub ensures that the unidirectional data flow within the arbitrated loop is maintained. In an arbitrated loop, bandwidth is shared equally among all connected devices: the more active devices are connected, the less bandwidth is available to each.

A switched fabric topology allows dynamic interconnections between nodes through ports connected to a fabric. It is possible for any port in a node to communicate with any port in another node connected to the same fabric, using either cut-through or store-and-forward switching. Adding a new device increases the aggregate bandwidth. Switched fabrics have theoretical address support for over 16 million connections (compared to 126 for an arbitrated loop), but that exceeds the practical and physical limitations of the switches that make up a fabric. Switched FC fabrics are also aware of arbitrated loop devices when they are attached.

Like Ethernet, FC switches can be interconnected in any manner. Unlike Ethernet, there is a limit to the number of FC switches that can be interconnected: address space constraints limit FC SANs to a maximum of 239 switches. Cisco's virtual SAN (VSAN) technology increases the number of switches that can be physically interconnected by reusing the entire FC address space within each VSAN. FC switches employ a routing protocol called Fabric Shortest Path First (FSPF), based on a link-state algorithm; FSPF reduces all physical topologies to a logical tree topology. The complexity and the size of FC SANs increase as the number of connections increases.

Most FC SANs are deployed in one of two designs, commonly known as the two-tier (core-edge) and three-tier (edge-core-edge) topologies; the core-only design is a star topology. In both the core-only and core-edge designs, the redundant paths are usually not interconnected. The edge switches in the core-edge design may be connected to both core switches, but this creates one physical network and compromises resilience against networkwide disruptions (for example, FSPF convergence). Host-to-storage FC connections are generally redundant; however, single host-to-storage FC connections are common in cluster and grid environments because host-based failover mechanisms are inherent to such environments. Figure 6-20 illustrates the typical FC SAN topologies.

Figure 6-20 Sample FC SAN Topologies

The Top-of-Rack (ToR) and End-of-Row (EoR) designs represent how access switches and servers are connected to each other. Both have a direct impact on a major part of the entire data center cabling system. ToR designs are based on intra-rack cabling between servers and smaller switches, which can be installed in the same racks as the servers. Although these designs reduce the amount of cabling and optimize the space used by network equipment, they offer the storage team the challenge of managing a higher number of devices. EoR designs are based on inter-rack cabling between servers and high-density switches installed in the same row as the server racks. EoR designs reduce the number of network devices and optimize port utilization on the network devices, but EoR flexibility taxes data centers with a great quantity of horizontal cabling running under the raised floor. The best design choice leans on the number of servers per rack, the data rate for the connections, the budget, and the operational complexity.

The increase in blade server deployments and the consolidation of servers through the use of server virtualization technologies also affect the design of the network. Typically, in a SAN design, the number of end devices determines the number of fabric logins required. With the use of features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins needed has increased even more; the total number of hosts and NPV switches determines the number of fabric logins required on the core switch. The proliferation of NPIV-capable end devices, such as host bus adapters (HBAs), and of Cisco NPV-mode switches makes the number of fabric logins needed on a per-port, per-line-card, per-switch, and per-physical-fabric basis a critical consideration. The fabric login limits determine the design of the current SAN as well as its potential for future growth.

SAN designs that can be built using a single physical switch are commonly referred to as collapsed core designs. (See Figure 6-21.) This terminology refers to the fact that the design is conceptually a core/edge design, making use of core ports (non-oversubscribed) and edge ports (oversubscribed), but that it has been collapsed into a single physical switch.

In practice, a collapsed core design on Cisco MDS 9000 family switches would utilize both non-oversubscribed (storage) and oversubscribed (host-optimized) line cards.

Figure 6-21 Sample Collapsed Core FC SAN Design

SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between the host and storage, so that I/O uses both paths and there is nondisruptive failover between fabrics in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide increased physical protection (for example, protection against a sprinkler head failing above a switch) and protection against equipment failure.

Fibre Channel
Fibre Channel (FC) is the predominant architecture upon which SAN implementations are built. Fibre Channel is a technology standard that allows data to be transferred at extremely high speeds; current implementations support data transfers at up to 16 Gbps or even more. Fibre Channel was developed through industry cooperation, unlike Small Computer System Interface (SCSI), which was developed by a vendor and submitted for standardization afterward. Many standards bodies, technical associations, vendors, and industry-wide consortiums accredit the Fibre Channel standard, and there are many products on the market that take advantage of the high-speed and high-availability characteristics of the Fibre Channel architecture.

Note
Is it Fibre or Fiber? Fibre Channel was originally designed to support fiber optic cabling only. When copper support was added, the committee decided to keep the name in principle, but to use the UK English spelling (Fibre) when referring to the standard. The U.S. English spelling (Fiber) is traditionally retained when referring generically to fiber optics and cabling.

Fibre Channel is an open technical standard for networking that incorporates the channel transport characteristics of an I/O bus with the flexible connectivity and distance characteristics of a traditional network. Because of its channel-like qualities, hosts and applications see storage devices that are attached to the SAN as though they are locally attached storage. Because of its network characteristics, it can support multiple protocols and a broad range of devices, and it can be managed as a network. Fibre Channel is structured by a lengthy list of standards; Figure 6-22 portrays just a few of them. Detailed information on all FC standards can be downloaded from T11 at www.t11.org.

Figure 6-22 Fibre Channel Standards

The Fibre Channel protocol stack is subdivided into five layers (see Figure 6-10 in the "Block Level Protocols" section). The lower four layers, FC-0 to FC-3, define the fundamental communication techniques: the physical levels, the transmission, and the addressing. The upper layer, FC-4, defines how application protocols (upper layer protocols, or ULPs) are mapped onto the underlying Fibre Channel network. The use of the various ULPs decides, for example, whether a real Fibre Channel network is used as an IP network, as a Fibre Channel SAN (that is, a storage network), or as both at the same time. The link services and fabric services are located quasi-adjacent to the Fibre Channel protocol stack; these services are required to administer and operate a Fibre Channel network.

FC-0: Physical Interface
FC-0 defines the physical transmission medium (cable, plug) and specifies which physical signals are used to transmit the bits 0 and 1. In contrast to the SCSI bus, in which each bit has its own data line plus additional control lines, Fibre Channel transmits the bits sequentially via a single line.

Fibre Channel can use either optical fiber (for distance) or copper cable links (for short distances at low cost). Copper cables tend to be used for short distances, up to 30 meters (98 feet), and can be identified by their DB-9, 9-pin connector. Mixing fiber-optic and copper components in the same environment is supported, although not all products provide that flexibility; product flexibility needs to be considered when you plan a SAN. Overall, optical fibers provide a high-performance transmission medium, and fiber-optic cables have a major advantage in noise immunity. Where costly electronics are heavily concentrated, the primary cost of the system does not lie with the cable.

Normally, fiber-optic cabling is referred to by mode, or the frequencies of light waves that are carried by a particular cable type. Multimode fiber (MMF) allows more than one mode, or pathway, of light to travel within the fiber; multimode fiber is for shorter distances. Common multimode core sizes are 50 micron and 62.5 micron. Multimode cabling is used with shortwave laser light and has either a 50-micron or a 62.5-micron core with a cladding of 125 micron. The 50-micron or 62.5-micron diameter is sufficiently large for injected light waves to be reflected off the core interior. MMF is more economical because it can be used with inexpensive connectors and laser devices, which reduces the total system cost, so MMF is better suited for shorter-distance applications. Single-mode fiber (SMF) allows only one pathway, or mode, of light and is typically associated with more expensive components; the core size is typically 8.3 micron. Single-mode fiber is for longer distances. SMFs are used in applications where low signal loss and high data rates are required. An example of this type of application is on long spans between two system or network devices, where repeater and amplifier spacing needs to be maximized.

Fibre Channel architecture supports both shortwave and longwave optical transmitter technologies in the following ways:

Short wave laser: This technology uses a wavelength of 780 nanometers and is compatible only with MMF.
Long wave laser: This technology uses a wavelength of 1300 nanometers. It is compatible with both SMF and MMF.

To connect one optical device to another, some form of fiber-optic link is required, but it is not as simple as connecting two switches together in a single rack. If the distance is short, a standard fiber cable suffices. Over a slightly longer distance (for example, from one building to the next), a fiber link might need to be laid; this fiber might need to be laid underground or through a conduit. If the two units that need to be connected are in different cities, the problem is much larger. Because most businesses are not in the business of laying cable, they lease fiber-optic cables to meet their needs; the fiber-optic cable that a company leases is known as dark fiber. Dark fiber generically refers to a long, dedicated fiber-optic link that can be used without the need for any additional equipment.

FC-1: Encode/Decode
FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable (the 8b/10b data transmission code scheme patented by IBM, which was refined and proven over many years). FC-1 also describes certain transmission words (ordered sets) that are required for the administration of a Fibre Channel connection (the link control protocol).

Encoding and Decoding
To transfer data over a high-speed serial interface, the data is encoded before transmission and decoded upon reception. The encoding process ensures that sufficient clock information is present in the serial data stream; this information allows the receiver to synchronize to the embedded clock and successfully recover the data at the required error rate. The 8b/10b encoding process converts each 8-bit byte into two possible 10-bit characters. This scheme is called 8b/10b encoding because it refers to the number of data bits input to the encoder and the number of bits output from the encoder. The format of the 8b/10b character is Dxx.y or Kxx.y, where:

D = data character; K = special character
"xx" = decimal value of the 5 least significant bits
"y" = decimal value of the 3 most significant bits

The 8b/10b encoding logic finds almost all errors, including errors that a parity check cannot find: a parity check misses the even-numbered bit errors and catches only the odd ones.

Communications at 10 and 16 Gbps use 64b/66b encoding, whose overhead is considerably less than that of the more common 8b/10b encoding scheme. Sixty-four bits of data are transmitted as a 66-bit entity; the 66-bit entity is made by prefixing one of two possible 2-bit preambles to the 64 bits to be transmitted. If the preamble is 01, the 64 bits are entirely data. If the preamble is 10, an 8-bit type field follows, plus 56 bits of control information and data. The preambles 00 and 11 are not used and generate an error if seen. The use of the 01 and 10 preambles guarantees a bit transition every 66 bits, which means that a continuous stream of zeros or ones cannot be valid data; it also allows easier clock and timer synchronization, because a transition must be seen every 66 bits. A sketch of the 8b/10b naming convention follows.
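The Dxx.y/Kxx.y naming can be reproduced with a few lines of bit arithmetic. The following Python sketch derives a character name from a raw byte and flags the 64b/66b preambles described above (the sample bytes are illustration values):

# Minimal sketch: derive the Dxx.y / Kxx.y name of a transmission
# character ("xx" = low 5 bits, "y" = high 3 bits of the byte).
def char_name(byte: int, special: bool = False) -> str:
    xx = byte & 0b00011111         # 5 least significant bits
    y = (byte >> 5) & 0b00000111   # 3 most significant bits
    return f"{'K' if special else 'D'}{xx}.{y}"

print(char_name(0xBC, special=True))  # K28.5, the ordered-set lead character
print(char_name(0x4A))                # D10.2

def preamble_valid(two_bits: str) -> bool:
    """64b/66b: only the 01 and 10 preambles are legal."""
    return two_bits in ("01", "10")   # 00 and 11 generate an error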

Ordered Sets
Fibre Channel uses a command syntax, known as an ordered set, to move the data across the network. The information sent on a link consists of transmission words containing four transmission characters each; they are 40 bits long. A transmission word consists of four contiguous transmission characters treated as a unit. There are two categories of transmission words: data words and ordered sets. Data words occur between the start-of-frame (SOF) and end-of-frame (EOF) delimiters. Ordered sets delimit frame boundaries and occur outside of frames. The ordered sets are 4-byte transmission words that contain data and special characters that have a special meaning. An ordered set always begins with the special transmission character K28.5, which is outside the normal data space and is aligned on a word boundary, which also establishes word boundary alignment. Ordered sets provide the ability to obtain bit and word synchronization.

Three major types of ordered sets are defined by the signaling protocol. The frame delimiters, the start-of-frame (SOF) and end-of-frame (EOF) ordered sets, establish the boundaries of a frame and immediately precede or follow the contents of a frame. There are 11 types of SOF and eight types of EOF delimiters defined for the fabric and N_Port sequence control. The two primitive signals, idle and receiver ready (R_RDY), are ordered sets that are designated by the standard to have a special meaning. An idle is a primitive signal that is transmitted on the link to indicate that an operational port facility is ready for frame transmission and reception. The R_RDY primitive signal indicates that the interface buffer is available for receiving further frames. A primitive sequence is an ordered set that is transmitted and repeated continuously to indicate specific conditions within a port, or the set might indicate conditions that are encountered by the receiver logic of a port. Recognition of a primitive sequence requires consecutive detection of three instances of the same ordered set. When a primitive sequence is received and recognized, a corresponding primitive sequence or idle is transmitted in response. The primitive sequences that are supported by the standard are illustrated in Table 6-5.

Table 6-5 Fibre Channel Primitive Sequences

FC-2: Framing Protocol
FC-2 is the most comprehensive layer in the Fibre Channel protocol stack. It determines how larger data units (for example, a file) are transmitted via the Fibre Channel network. It regulates the flow control that ensures that the transmitter sends data only at a speed that the receiver can process. And it defines various service classes that are tailored to the requirements of various applications. Multiple exchanges are initiated between initiators (hosts) and targets (disks); each exchange consists of one or more bidirectional sequences (see Figure 6-23).

Figure 6-23 Fibre Channel FC-2 Hierarchy

Each sequence consists of one or more frames. A sequence is a larger data unit that is transferred from a transmitter to a receiver; a sequence could represent, for example, an individual database transaction. Larger sequences therefore have to be broken down into several frames, hence the name "sequence." FC-2 guarantees that sequences are delivered to the receiver in the same order they were sent from the transmitter, and sequences are only delivered to the next protocol layer up when all frames of the sequence have arrived at the receiver. Only one sequence can be transferred after another within an exchange. For the SCSI-3 ULP, each exchange maps to a SCSI command.

A Fibre Channel frame consists of a header, useful data (payload), and a Cyclic Redundancy Checksum (CRC) (see Figure 6-24). The frame is bracketed by a start-of-frame (SOF) delimiter and an end-of-frame (EOF) delimiter. Furthermore, six filling words must be transmitted on a link between two frames. A Fibre Channel network transmits control frames and data frames. Control frames contain no useful data; they signal events such as the successful delivery of a data frame. Data frames transmit up to 2,112 bytes of useful data. The CRC checking procedure is designed to recognize all transmission errors if the underlying medium does not exceed the specified error rate of 10^-12. In contrast to Ethernet and TCP/IP, Fibre Channel is an integrated whole: the layers of the Fibre Channel protocol stack are so well harmonized with one another that the ratio of payload to protocol overhead is very efficient, at up to 98 percent.
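The quoted efficiency is easy to verify from the frame layout: a maximum-size frame carries 2,112 payload bytes inside 2,148 bytes on the wire (24-byte header, 4-byte CRC, and 4-byte SOF and EOF delimiters). A quick check in Python:

# Minimal sketch: payload efficiency of a maximum-size FC frame.
sof, header, payload, crc, eof = 4, 24, 2112, 4, 4
frame = sof + header + payload + crc + eof          # 2,148 bytes total
print(f"{payload / frame:.1%} payload efficiency")  # ~98.3%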

Figure 6-24 Fibre Channel Frame Format

Fibre Channel Service Classes
The Fibre Channel standard defines six service classes for the exchange of data between end devices. Three of these defined classes (Class 1, Class 2, and Class 3) are realized in products available on the market. Class 1 defines a connection-oriented communication connection between two node ports: a Class 1 connection is opened before the transmission of frames. Class 2 and Class 3, on the other hand, are packet-oriented services (datagram services): no dedicated connection is built up. Instead, the frames are individually routed through the Fibre Channel network. A port can thus maintain several connections at the same time, and several Class 2 and Class 3 connections can share the bandwidth.

In Class 2 the receiver acknowledges each received frame; Class 2 uses end-to-end flow control and link flow control. Class 3 achieves less than Class 2: frames are not acknowledged, which means that only link flow control takes place, not end-to-end flow control, and the higher protocol layers must notice for themselves whether a frame has been lost. In addition, Class F serves for the data exchange between the switches within a fabric.

Almost all new Fibre Channel products (HBAs, switches, storage devices) support the service classes Class 2 and Class 3, with hardly any products providing the connection-oriented Class 1. In Fibre Channel SAN implementations, the end devices themselves negotiate whether they communicate by Class 2 or Class 3. Table 6-6 lists the different Fibre Channel classes of service.

Table 6-6 Fibre Channel Classes of Service

Fibre Channel Flow Control
Flow control ensures that the transmitter sends data only at a speed at which the receiver can receive it. Fibre Channel uses a credit model. Each credit represents the capacity of the receiver to receive a Fibre Channel frame. If the receiver awards the transmitter a credit of 3, the transmitter may send the receiver only three frames; the transmitter may not send further frames until the receiver has acknowledged the receipt of at least some of the transmitted frames. FC-2 defines two mechanisms for flow control: end-to-end flow control and link flow control (see Figure 6-25).

In end-to-end flow control, two end devices negotiate the end-to-end credit before the exchange of data. The end-to-end flow control is realized on the HBA cards of the end devices. The link flow control, by contrast, takes place at each physical connection; this means that link flow control also takes place at the Fibre Channel switches.

Buffer-to-buffer credits (BB_Credits) are used in Class 2 and Class 3 services between a node and a switch; the two communicating ports negotiate the buffer-to-buffer credits. Initial credit levels are established with the fabric at login and are then replenished upon receipt of a receiver-ready (R_RDY) from the target. The sender decrements one BB_Credit for each frame sent and increments one for each R_RDY received. BB_Credits are the more practical application and are very important for switches that are communicating across an extension (E_Port), because of the inherent delays present with transport over LAN/WAN/MAN links. The sketch that follows replays this credit accounting.
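This Python sketch (the credit value and event sequence are hypothetical) shows how a sender stalls once its credits are exhausted:

# Minimal sketch of BB_Credit accounting: one credit is spent per frame
# and regained per R_RDY, so outstanding frames never exceed the
# negotiated credit.
def replay_credits(negotiated_credit: int, events: list) -> list:
    credit, history = negotiated_credit, []
    for event in events:
        if event == "frame":
            if credit == 0:
                history.append("stalled")  # must wait for an R_RDY
                continue
            credit -= 1
        elif event == "r_rdy":
            credit += 1
        history.append(credit)
    return history

print(replay_credits(3, ["frame", "frame", "frame", "frame", "r_rdy", "frame"]))
# [2, 1, 0, 'stalled', 1, 0] -> the fourth frame waits until a credit returns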

Figure 6-25 Fibre Channel End-to-End and Buffer-to-Buffer Flow Control Mechanism

End-to-end credits (EE_Credits) are used in Class 1 and Class 2 services between two end nodes. After the initial credit level is set, the sender decrements one EE_Credit for each frame sent and increments one when an acknowledgement (ACK) is received.

Flow control management occurs between two connected Fibre Channel ports to prevent a target device from being overwhelmed with frames from the initiator device. Buffer-to-buffer credits (BB_Credits) are negotiated between each pair of connected devices in an FC fabric; there is no concept of end-to-end buffering. One buffer is used per FC frame, regardless of its frame size: a small FC frame uses the same buffer as a large FC frame. FC frames are buffered and queued in intermediate switches. Hop-by-hop traffic flow is paced by the return of Receiver Ready (R_RDY) frames, and a port can transmit only up to the number of BB_Credits before traffic is throttled. FC flow control is critical to the efficiency of the SAN fabric: insufficient BB_Credits will throttle performance, because no data will be transmitted until R_RDY is returned. As distance increases or frame size decreases, the number of BB_Credits required increases as well. Figure 6-26 portrays Fibre Channel frame buffering in an FC switched fabric.

Figure 6-26 Fibre Channel Frame Buffering

The optimal buffer credit value can be calculated by determining the transmit time per FC frame and dividing that into the total round-trip time per frame. A Fibre Channel frame size is ~2 KB (2,148 bytes maximum). The transmit time for 2,000 characters (a 2 KB frame) is approximately 20 μsec at 1 Gbps (2,000 bytes / 100 MBps FC data rate). The speed of light over a fiber optic link is ~5 μsec per km, so for a 10 km link, the time it takes for a frame to travel from the sender to the receiver and for the response to return to the sender is 50 μsec × 2 = 100 μsec. If the sender waits for a response before sending the next frame, the maximum number of frames in a second is 10,000 (1 sec / 0.0001 sec = 10,000 frames per second). This implies that the effective data rate is only ~20 MBps (2,000 bytes × 10,000). If the round-trip time is 100 μsec and the transmit time per frame is 20 μsec, then BB_Credit >= 5 (100 μsec / 20 μsec = 5). This helps keep the fiber optic link full without running out of BB_Credits. Table 6-7 illustrates the required BB_Credits for a specific FC frame size with specific link speeds.

Table 6-7 Fibre Channel BB_Credits
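The rule of thumb worked through above generalizes to any distance. A minimal Python sketch, using the same assumptions as the text (5 μsec/km propagation, 20 μsec to transmit a 2 KB frame at 1 Gbps):

# Minimal sketch: BB_Credits needed to keep a link full, computed as
# round-trip time divided by per-frame transmit time.
import math

def min_bb_credits(distance_km: float, frame_transmit_us: float = 20.0,
                   light_us_per_km: float = 5.0) -> int:
    round_trip_us = 2 * distance_km * light_us_per_km
    return math.ceil(round_trip_us / frame_transmit_us)

print(min_bb_credits(10))   # 5, matching the worked example above
print(min_bb_credits(100))  # 50: ten times the distance, ten times the credits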

FC-3: Common Services
FC-3 has been in its conceptual phase since 1988; in currently available products, FC-3 is empty. The following functions are being discussed for FC-3:

Striping manages several paths between multiport end devices. Striping could distribute the frames of an exchange over several ports and thus increase the throughput between the two devices.
Multipathing combines several paths between two multiport end devices to form a logical path group. Failure or overloading of a path can be hidden from the higher protocol layers.
Compression and encryption of the data to be transmitted, preferably realized in the hardware on the HBA.

Fibre Channel Addressing
Fibre Channel uses World Wide Names (WWNs) and Fibre Channel IDs (FCIDs). WWNs are unique identifiers that are hardcoded into Fibre Channel devices; vendors buy blocks of WWNs from the IEEE and allocate them to devices in the factory. The two types of WWNs are node WWNs (nWWN) and port WWNs (pWWN): the nWWNs uniquely identify devices, and the pWWNs uniquely identify each port in a device. The nWWNs and pWWNs are both needed because devices can have multiple ports. The nWWNs are required because the node itself must sometimes be uniquely identified: for example, path failover and multiplexing software can detect redundant paths to a device by observing that the same nWWN is associated with multiple pWWNs. Ports must be uniquely identifiable because each port participates in a unique data path. Every HBA, array or tape controller, gateway, switch, and Fibre Channel disk drive has a single unique nWWN, and each Fibre Channel port has at least one WWN. On a single-port device, the nWWN and pWWN may be the same; on multiport devices, the pWWN is used to uniquely identify each port. A dual-ported HBA therefore has three WWNs: one nWWN and one pWWN for each port.

WWNs are important for enabling Cisco Fabric Services because of these characteristics: they are guaranteed to be globally unique, and they are permanently associated with devices. These characteristics ensure that the fabric can reliably identify and locate devices. FCIDs, in contrast, are dynamically acquired addresses that are routable in a switch fabric; this capability is an important consideration for Cisco Fabric Services. When a management service or application needs to quickly locate a specific device, the following occurs (a minimal sketch of this lookup appears after the list):
1. The service or application queries the switch name server service with the WWN of the target device.
2. The name server looks up and returns the current port address that is associated with the target WWN.
3. The service or application communicates with the target device, using the port address.
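A minimal Python sketch of this lookup; the WWNs and FCIDs are invented illustration values:

# Minimal sketch of the three-step lookup above: a name-server table
# mapping a pWWN to the FCID currently bound to that port.
name_server = {
    "50:06:01:60:3b:e0:11:af": 0x010200,  # storage array port
    "21:00:00:e0:8b:05:05:04": 0x010300,  # host HBA port
}

def locate(pwwn: str) -> str:
    fcid = name_server.get(pwwn)           # steps 1-2: query and look up
    if fcid is None:
        return f"{pwwn}: not registered"
    return f"{pwwn} -> FCID 0x{fcid:06X}"  # step 3: address the device

print(locate("50:06:01:60:3b:e0:11:af"))   # ... -> FCID 0x010200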

Figure 6-27 portrays Fibre Channel addressing on an FC fabric.

Figure 6-27 Fibre Channel Naming on FC Fabric

The Fibre Channel point-to-point topology uses a 1-bit addressing scheme: one port assigns itself an address of 0x000000 and then assigns the other port an address of 0x000001. Figure 6-28 portrays FCID addressing.

Figure 6-28 FC ID Addressing

The FC-AL topology uses an 8-bit addressing scheme. The arbitrated loop physical address (AL-PA) is an 8-bit address, which provides 256 potential addresses; however, only a subset of 127 addresses is available because of the 8b/10b encoding requirements. Addresses are cooperatively chosen during loop initialization. One address is reserved for an FL port, so 126 addresses are available for nodes.

Switched Fabric Address Space
The 24-bit Fibre Channel address consists of three 8-bit elements: the domain ID, which is used to define a switch (each switch receives a unique domain ID); the area ID, which is used to identify groups of ports within a domain; and the port ID, which identifies an individual device port within the area. A small sketch of unpacking these fields follows.
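# Minimal sketch: unpack a 24-bit FCID into its domain, area, and
# port fields (8 bits each). The sample FCID is an invented value.
def parse_fcid(fcid: int) -> dict:
    return {
        "domain": (fcid >> 16) & 0xFF,  # identifies the switch
        "area":   (fcid >> 8) & 0xFF,   # group of ports within the switch
        "port":   fcid & 0xFF,          # individual device port
    }

print(parse_fcid(0x010203))  # {'domain': 1, 'area': 2, 'port': 3}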

Areas can be used to group ports within a switch. Areas are also used to uniquely identify fabric-attached arbitrated loops: each fabric-attached loop receives a unique area ID. Each switch must have a unique domain ID. Although the domain ID is an 8-bit field, only 239 domains are available to the fabric: domains 01 through EF are available, and domains 00 and F0 through FF are reserved for use by switch services. There can therefore be no more than 239 switches in a fabric. In practice, the number of switches in a fabric is far smaller because of storage-vendor qualifications.

Fibre Channel Link Services
Link services provide a number of architected functions that are available to the users of the Fibre Channel port. There are three types of link services defined, depending upon the type of function provided and whether the frame contains a payload: basic link services, extended link services, and FC-4 link services (generic services). Basic link service commands support low-level functions such as aborting a sequence (ABTS) and passing control-bit information.

Extended link services (ELS) are performed in a single exchange. Most of the ELSs are performed as a two-sequence exchange: a request from the originator and a response from the responder. ELS service requests are not permitted prior to a port login, with the exception of the fabric login. The following are ELS command examples: N_Port Login (PLOGI), F_Port Login (FLOGI), Logout (LOGO), Process Login (PRLI), Process Logout (PRLO), State Change Notification (SCN), State Change Registration (SCR), Registered State Change Notification (RSCN), and Loop Initialize (LINIT).

Fabric Login (FLOGI) is used by the N_Port to discover whether a fabric is present. N_Ports perform fabric login by transmitting the FLOGI extended link service command to the well-known address 0xFFFFFE of the fabric F_Port and exchange service parameters. A new N_Port fabric login uses source address 0x000000 and destination address 0xFFFFFE. If a fabric is present, it provides the operating characteristics associated with the fabric (service parameters such as BB_Credit, maximum payload size, and the classes of service supported). The fabric also assigns, or confirms (if the port is trying to reuse an old ID), the N_Port identifier of the port initiating the login.

Generic services include the following: Directory Service, Management Services, Alias Services, Time Services, and Key Distribution Services. FC-GS services share a Common Transport (CT) at the FC-4 level, and the CT provides access to the services; the FC-CT (Fibre Channel Common Transport) protocol is used as the transport medium for these services. Port login is required for generic services. Figure 6-29 portrays the FC fabric login, port login/logout, and query-to-switch process.

Figure 6-29 Fibre Channel Fabric, Port Login, and State Change Registration

1. FLOGI extended link service (ELS): Initiator/target to login server (0xFFFFFE) for switch fabric login.
2. ACCEPT (ELS reply): Login server to initiator/target for fabric login acceptance. A Fibre Channel ID (Port_Identifier) is assigned by the login server to the initiator/target.
3. PLOGI (ELS): Initiator/target to name server (0xFFFFFC) for port login, establishing a Fibre Channel session.
4. ACCEPT (ELS reply): Name server to initiator/target, using the Port_Identifier as the D_ID, for port login acceptance.
5. SCR (ELS): Initiator/target issues the State Change Registration Command (SCRC) to the fabric controller (0xFFFFFD) for notification when a change in the switch fabric occurs.
6. ACCEPT (ELS reply): Fabric controller to initiator/target, acknowledging the registration command.
7. GA_NXT (name server query): Initiator to name server (0xFFFFFC), requesting attributes about a specific port.
8. ACCEPT (query reply): Name server to initiator with a list of attributes of the requested port.
9. PLOGI (ELS): Initiator to target to establish an end-to-end Fibre Channel session (see Figure 6-30).

Figure 6-30 Fibre Channel Port/Process Login to Target

10. ACCEPT (ELS reply): Target to initiator, acknowledging the initiator's port login and Fibre Channel session.
11. PRLI (ELS): Initiator to target process login, requesting an FC-4 session.
12. ACCEPT (ELS reply): Target to initiator, acknowledging session establishment. The initiator may now issue SCSI commands to the target.

Fibre Channel Fabric Services
In a Fibre Channel topology, the switches manage a range of information that operates the fabric. This information is managed by fabric services. All FC services have in common that they are addressed via FC-2 frames and that they can be reached by well-defined addresses. The fabric login server processes incoming fabric login requests under the address 0xFFFFFE. The name server administers a database on N_Ports under the address 0xFFFFFC. It stores information such as port WWN, node WWN, port address, supported service classes, supported FC-4 protocols, and so on. N_Ports can register their own properties with the name server and request information on other N_Ports. Like all services, the name server appears as an N_Port to the other ports, and N_Ports must log on with the name server by means of port login before they can use its services. The fabric controller manages changes to the fabric under the address 0xFFFFFD. N_Ports can register for state changes with the fabric controller (State Change Registration, or SCR); the fabric controller then informs registered N_Ports of changes to the fabric (Registered State Change Notification, or RSCN). Servers can use this service to monitor their storage devices. Table 6-8 lists the reserved Fibre Channel addresses; a small sketch of these well-known addresses follows.
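As a quick reference, this Python sketch collects the three well-known addresses discussed above (the sample N_Port FCID is an invented value):

# Minimal sketch: the well-known fabric service addresses named above,
# with a helper reporting which service owns a destination address.
WELL_KNOWN = {
    0xFFFFFE: "fabric login server",
    0xFFFFFD: "fabric controller (SCR/RSCN)",
    0xFFFFFC: "name server",
}

def service_for(d_id: int) -> str:
    return WELL_KNOWN.get(d_id, "regular N_Port address")

print(service_for(0xFFFFFC))  # name server
print(service_for(0x010203))  # regular N_Port address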

Table 6-8 FC-PH Has Defined a Block of Addresses for Special Functions (Reserved Addresses)

FC-4: ULPs—Application Protocols
The layers FC-0 to FC-3 serve to connect end devices together by means of a Fibre Channel network. However, the type of data that end devices exchange via Fibre Channel connections remains open. This is where the application protocols (upper layer protocols, or ULPs) come into play. A specific Fibre Channel network can serve as a medium for several application protocols, for example, SCSI and IP. The task of the FC-4 protocol mappings is to map the application protocols onto the underlying Fibre Channel network. The protocol mappings determine how the mechanisms of Fibre Channel are used in order to realize the application protocol by means of Fibre Channel. For example, they specify which service classes will be used and how the data flow in the application protocol will be projected onto the exchange-sequence-frame mechanism of Fibre Channel. This means that the FC-4 protocol mappings support the Application Programming Interface (API) of existing protocols upward, in the direction of the operating system, and realize these downward, in the direction of the medium, via the Fibre Channel network. This mapping of existing protocols aims to ease the transition to Fibre Channel networks: ideally, no further modifications are necessary to the operating system except for the installation of a new device driver.

The application protocol for SCSI is called Fibre Channel Protocol (FCP). (See Figure 6-31.) FCP maps the SCSI protocol onto the underlying Fibre Channel network. For the connection of storage devices to servers, the SCSI cable is therefore replaced by a Fibre Channel network. The idea of the FCP protocol is that the system administrator merely installs a new device driver on the server, and this driver realizes the FCP protocol. The operating system recognizes storage devices connected via Fibre Channel as SCSI devices, which it addresses like "normal" SCSI devices. This emulation of traditional SCSI devices should make it possible for Fibre Channel SANs to be simply and painlessly integrated into existing hardware and software.

Figure 6-31 FCP Protocol Stack

A further application protocol is IPFC: IPFC uses a Fibre Channel connection between two servers as a medium for IP data traffic. IPFC defines how IP packets will be transferred via a Fibre Channel network. Fibre Connection (FICON) is a further important application protocol. FICON maps the ESCON protocol (Enterprise System Connection), used in the world of mainframes, onto Fibre Channel networks. Using ESCON, it has been possible to realize storage networks in the world of mainframes since the 1990s. Fibre Channel is therefore taking the old, familiar storage networks from the world of mainframes into the Open System world (Unix, Windows, OS/400, Novell, MacOS).

Fibre Channel—Standard Port Types
Fibre Channel ports are intelligent interface points on the Fibre Channel SAN. They are found embedded in various devices, such as an I/O adapter, an array or tape controller, and the switched fabric. These ports understand Fibre Channel. When a server or storage device communicates, the interface point acts as the initiator or target for the connection. The server or storage device issues SCSI commands, which the interface point formats to be sent to the target device. The Fibre Channel switch recognizes which type of device is attaching to the SAN and configures the ports accordingly. Figure 6-32 portrays the various Fibre Channel port types.

Figure 6-32 Fibre Channel Standard Port Types

Expansion port (E Port): In E Port mode, an interface functions as a fabric expansion port. This port connects to another E Port to create an interswitch link (ISL) between two switches. E Ports carry frames between switches for configuration and fabric management. They also serve as a conduit between switches for frames that are destined to remote node ports (N Ports) and node loop ports (NL Ports). E Ports support class 2, class 3, and class F service.

Trunking expansion port (TE Port): In TE Port mode, an interface functions as a trunking expansion port. This port connects to another TE Port to create an extended ISL (EISL) between two switches. TE Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of E Ports to support virtual SAN (VSAN) trunking, transport quality of service (QoS) parameters, and the Fibre Channel Traceroute (fctrace) feature. When an interface is in TE Port mode, all frames that are transmitted are in the EISL frame format, which contains VSAN information. Interconnected switches use the VSAN ID to multiplex traffic from one or more VSANs across the same physical link.

Fabric port (F Port): In F Port mode, an interface functions as a fabric port. This port connects to a peripheral device (such as a host or disk) that operates as an N Port. An F Port can be attached to only one N Port. F Ports support class 2 and class 3 service.

Fabric loop port (FL Port): In FL Port mode, an interface functions as a fabric loop port. This port connects to one or more NL Ports (including FL Ports in other switches) to form a public FC-AL. If more than one FL Port is detected on the FC-AL during initialization, only one FL Port becomes operational; the other FL Ports enter nonparticipating mode. FL Ports support class 2 and class 3 service.

Node-proxy port (NP Port): An NP Port is a port on a device that is in N-Port Virtualization (NPV) mode and connects to the core switch via an F Port. NP Ports function like node ports (N Ports), but in addition to providing N Port operations, they also function as proxies for multiple physical N Ports.

Trunking fabric port (TF Port): In TF Port mode, an interface functions as a trunking expansion port. This interface connects to another trunking node port (TN Port) or trunking node-proxy port (TNP Port) to create a link between a core switch and an NPV switch or a host bus adapter (HBA) to carry tagged frames. TF Ports are specific to Cisco MDS 9000 Series switches and expand the functionality of F Ports to support VSAN trunking. In TF Port mode, all frames are transmitted in the EISL frame format, which contains VSAN information.

TNP Port: In TNP Port mode, an interface functions as a trunking expansion port. This interface connects to a TF Port to create a link to a core N-Port ID Virtualization (NPIV) switch from an NPV switch to carry tagged frames.

Switched Port Analyzer (SPAN) destination port (SD Port): In SD Port mode, an interface functions as a SPAN destination. An SD Port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD Port. SD Ports cannot receive frames; they transmit only a copy of the source traffic. This feature is nonintrusive and does not affect switching of network traffic for any SPAN source port. The SPAN feature is specific to Cisco MDS 9000 Series switches.

SPAN tunnel port (ST Port): In ST Port mode, an interface functions as an entry-point port in the source switch for the Remote SPAN (RSPAN) Fibre Channel tunnel. ST Port mode and the RSPAN feature are specific to Cisco MDS 9000 Series switches. When a port is configured as an ST Port, it cannot be attached to any device and therefore cannot be used for normal Fibre Channel traffic.

Fx Port: An interface that is configured as an Fx Port can operate in either F Port or FL Port mode, depending on the attached N or NL Port; the Fx Port mode is determined during interface initialization.

Bridge port (B Port): Whereas E Ports typically interconnect Fibre Channel switches, some SAN extender devices implement a B Port model to connect geographically dispersed fabrics. This model uses B Ports as described in the T11 Standard Fibre Channel Backbone 2 (FC-BB-2).

G-Port (Generic_Port): Modern Fibre Channel switches configure their ports automatically, with the port mode being determined during interface initialization; such ports are called G-Ports. If, for example, a Fibre Channel switch is connected to a further Fibre Channel switch via a G-Port, the G-Port configures itself as an E-Port.

Auto mode: An interface that is configured in auto mode can operate in F Port, FL Port, E Port, TE Port, or TF Port mode, with the port mode being determined during interface initialization.

Virtual Storage-Area Network
A VSAN is a virtual storage-area network (SAN). A SAN is a dedicated network that interconnects hosts and storage devices primarily to exchange SCSI traffic. In SANs, you use the physical links to make these interconnections. A set of protocols runs over the SAN to handle routing, naming, and zoning. You can design multiple SANs with different topologies.

A SAN island refers to a completely physically isolated switch or group of switches that are used to connect hosts to storage devices. The reasons for building SAN islands include to isolate different applications into their own fabric or to raise availability by minimizing the impact of fabric-wide disruptive events. Unfortunately, in practice this situation can become costly and wasteful in terms of fabric ports and resources.

VSANs increase the efficiency of a SAN fabric by alleviating the need to build multiple physically isolated fabrics to meet organizational or application needs. Instead, fewer less-costly redundant fabrics can be built, each housing multiple applications and still providing island-like isolation. With the introduction of VSANs, the network administrator can build a single topology containing switches, links, and one or more VSANs. Each VSAN in this topology has the same behavior and property of a SAN. Figure 6-33 portrays VSAN topology and a representation of VSANs on an MDS 9700 Director switch by per port allocation.

Figure 6-33 VSANs

Physically separate SAN islands offer a higher degree of security because each physical infrastructure contains a separate set of Cisco Fabric Services and management access. VSANs provide not only a hardware-based isolation, but also a complete replicated set of Fibre Channel services for each VSAN. Therefore, when a VSAN is created, a completely separate set of Cisco Fabric Services, configuration management capability, and policies are created within the new VSAN.

A VSAN has the following additional features:

Multiple VSANs can share the same physical topology.

The same Fibre Channel IDs (FC IDs) can be assigned to a host in another VSAN, thus increasing VSAN scalability.

Spare ports within the fabric can be quickly and nondisruptively assigned to existing VSANs, providing a clean method of virtually growing application-specific SAN islands.

Events causing traffic disruptions in one VSAN are contained within that VSAN and are not propagated to other VSANs.

Every instance of a VSAN runs all required protocols, such as FSPF, domain manager, and zoning.

Fabric-related configurations in one VSAN do not affect the associated traffic in another VSAN.

Up to 256 VSANs can be configured in a switch. Of these, one is a default VSAN (VSAN 1) and another is an isolated VSAN (VSAN 4094). User-specified VSAN IDs range from 2 to 4093.

Using a hardware-based frame tagging mechanism on VSAN member ports and EISL links isolates each separate virtual fabric. The EISL link type has been created and includes added tagging information for each frame within the fabric. The EISL link is supported between Cisco MDS and Nexus switch products. Membership to a VSAN is based on physical ports, and no physical port may belong to more than one VSAN.

VSANs offer the following advantages:

Traffic isolation: Traffic is contained within VSAN boundaries, and devices reside in only one VSAN, ensuring absolute separation between user groups, if desired.

Scalability: VSANs are overlaid on top of a single physical fabric. The ability to create several logical VSAN layers increases the scalability of the SAN.

Per VSAN fabric services: Replication of fabric services on a per VSAN basis provides increased scalability and availability.

Redundancy: Several VSANs created on the same physical SAN ensure redundancy. If one VSAN fails, redundant protection (to another VSAN in the same physical SAN) is configured using a backup path between the host and the device.

Ease of configuration: Users can be added, moved, or changed between VSANs without changing the physical structure of a SAN. Moving a device from one VSAN to another only requires configuration at the port level, not at a physical level.
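As a concrete illustration of this port-level configuration, the following is a minimal NX-OS sketch, with a hypothetical VSAN number, name, and interface, that creates a VSAN and nondisruptively assigns a port to it:

switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Engineering
! create VSAN 10 with its own replicated set of fabric services
switch(config-vsan-db)# vsan 10 interface fc1/1
! move the port from its current VSAN into VSAN 10
switch(config-vsan-db)# exit
switch# show vsan membership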

Table 6-9 lists the different characteristics of VSAN and Zone.

Table 6-9 VSAN and Zone Comparison

Dynamic Port VSAN Membership

Port VSAN membership on the switch is assigned on a port-by-port basis. By default each port belongs to the default VSAN. You can dynamically assign VSAN membership to ports by assigning VSANs based on the device WWN. This method is referred to as Dynamic Port VSAN Membership (DPVM). DPVM offers flexibility and eliminates the need to reconfigure the port VSAN membership to maintain fabric topology when a host or storage device connection is moved between two Cisco MDS switches or two ports within a switch. It retains the configured VSAN regardless of where a device is connected or moved.

DPVM configurations are based on port world wide name (pWWN) and node world wide name (nWWN) assignments. The pWWN identifies the host or device, and the nWWN identifies a node consisting of multiple devices. You can assign any one of these identifiers or any combination of these identifiers to configure DPVM mapping. If you assign a combination, preference is given to the pWWN. DPVM uses the Cisco Fabric Services (CFS) infrastructure to allow efficient database management and distribution. A DPVM database contains mapping information for each device pWWN/nWWN assignment and the corresponding VSAN. The Cisco NX-OS software checks the database during a device FLOGI and obtains the required VSAN details.
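A minimal DPVM sketch on NX-OS might look like the following; the pWWN and VSAN number are hypothetical examples:

switch# configure terminal
switch(config)# feature dpvm
switch(config)# dpvm database
switch(config-dpvm-db)# pwwn 21:00:00:e0:8b:19:ab:cd vsan 10
! map this device pWWN to VSAN 10 wherever it logs in
switch(config-dpvm-db)# exit
switch(config)# dpvm activate
switch(config)# dpvm commit
! distribute the active DPVM database through CFS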

Inter-VSAN Routing

VSANs improve SAN scalability, availability, and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between VSANs (see Figure 6-34). Fibre Channel traffic does not flow between VSANs, nor can initiators access resources across VSANs other than the designated VSAN. Data traffic isolation between the VSANs also inherently prevents sharing of resources attached to a VSAN, such as robotic tape libraries.

Using IVR, you can access resources across VSANs without compromising other VSAN benefits. IVR transports data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. IVR is not limited to VSANs present on a common switch. It establishes proper interconnected routes that traverse one or more VSANs across multiple switches. It provides efficient business continuity or disaster recovery solutions when used with Fibre Channel over IP (FCIP).

Figure 6-34 Inter-VSAN Topology

Fibre Channel Zoning and LUN Masking

VSANs are used to segment the physical fabric into multiple logical fabrics. Zoning provides security within a single fabric, whether physical or logical, to restrict access between initiators and targets (see Figure 6-35). LUN masking is used to provide additional security to LUNs after an initiator has reached the target device.

Figure 6-35 VSANs Versus Zoning

The zoning service within a Fibre Channel fabric is designed to provide security between devices that share the same fabric. The primary goal is to prevent certain devices from accessing other devices within the fabric. With many types of servers and storage devices on the network, the need for security is crucial. For example, if a host were to access a disk that another host is using, potentially with a different operating system, the data on the disk could become corrupted. To avoid any compromise of crucial data within the SAN, zoning allows the user to overlay a security map to show only an allowed subset of devices. This process dictates which devices, namely hosts, can see which targets, reducing the risk of data loss. In this way, when a server looks at the content of the fabric, it will see only the devices it is allowed to see.

There are two main methods of zoning, hard and soft. Soft zoning restricts only the fabric name service. The fabric name service allows each device to query the addresses of all other devices. Soft zoning is similar to the computing concept of security through obscurity. However, any server can still attempt to contact any device on the network by address. In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but is much more secure. More recently, the differences between the two have blurred; that stated, modern switches would employ hard zoning when you implement soft. All modern SAN switches then enforce soft zoning in hardware. Cisco MDS 9000 and Cisco Nexus family switches implement hard zoning. You can use either the existing basic zoning capabilities or the advanced (enhanced), standards-compliant zoning capabilities. Advanced zoning capabilities specified in the FC-GS-4 and FC-SW-3 standards are provided.

Zoning has the following features:

A zone consists of multiple zone members. Members in a zone can access each other; members in different zones cannot access each other. Zones can vary in size. Devices can belong to more than one zone.

A zone set consists of one or more zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. Only one zone set can be activated at any time. A zone can be a member of more than one zone set. A Cisco MDS switch can have a maximum of 500 zone sets.

Zoning can be administered from any switch in the fabric. When you activate a zone (from any switch), all switches in the fabric receive the active zone set. Additionally, full zone sets are distributed to all switches in the fabric, if this feature is enabled in the source switch. If a new switch is added to an existing fabric, zone sets are acquired by the new switch.

Zone changes can be configured nondisruptively. New zones and zone sets can be activated without interrupting traffic on unaffected ports or devices.

Zone membership criteria is based mainly on WWNs or FC IDs:

Port world wide name (pWWN): Specifies the pWWN of an N port attached to the switch as a member of the zone.

Fabric pWWN: Specifies the WWN of the fabric port (switch port's WWN). This membership is also referred to as port-based zoning.

FC ID: Specifies the FC ID of an N port attached to the switch as a member of the zone.

Interface and switch WWN (sWWN): Specifies the interface of a switch identified by the sWWN. This membership is also referred to as interface-based zoning.

Interface and domain ID: Specifies the interface of a switch identified by the domain ID.

Domain ID and port number: Specifies the domain ID of an MDS domain and additionally specifies a port belonging to a non-Cisco switch.

IPv4 address: Specifies the IPv4 address (and optionally the subnet mask) of an attached device.

IPv6 address: The IPv6 address of an attached device in 128 bits in colon-separated hexadecimal format.

Symbolic nodename: Specifies the member symbolic node name. The maximum length is 240 characters.

If zoning is activated, any device that is not in an active zone (a zone that is part of an active zone set) is a member of the default zone. If zoning is not activated, all devices are members of the default zone.

Zoning does have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. It is a distributed service that is common throughout the fabric. Any installed changes to a zoning configuration are therefore disruptive to the entire connected fabric. Zoning was also not designed to address availability or scalability of a Fibre Channel infrastructure.
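The following minimal NX-OS sketch, using hypothetical pWWNs and names, creates a single-initiator, single-target zone, places it in a zone set, and activates the zone set fabric-wide:

switch(config)# zone name host1_array1 vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:19:ab:cd
! initiator HBA
switch(config-zone)# member pwwn 50:06:01:60:3b:e0:12:34
! target storage port
switch(config-zone)# exit
switch(config)# zoneset name fabricA_zs vsan 10
switch(config-zoneset)# member host1_array1
switch(config-zoneset)# exit
switch(config)# zoneset activate name fabricA_zs vsan 10
switch# show zoneset active vsan 10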

Default zone membership includes all ports or WWNs that do not have a specific membership association. Access between default zone members is controlled by the default zone policy.

The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities and the enhanced zoning functionalities. Both features are explained in detail in Chapter 9, "Fibre Channel Storage Area Networking."

The following guidelines must be considered when creating zone members:

Configuring only one initiator and one target for a zone provides the most efficient use of the switch resources. Configuring the same initiator to multiple targets is accepted. Configuring multiple initiators to multiple targets is not recommended.

The best practice is to avoid configuring large numbers of targets or large numbers of initiators in a single zone. All members of a zone can communicate with each other, so for a zone with N members, N × (N – 1) access permissions need to be enabled. This type of configuration wastes switch resources by provisioning and managing many communicating pairs (initiator-to-initiator or target-to-target) that will never actually communicate with each other. For this reason, a single initiator with a single target is the most efficient approach to zoning.

Each SCSI host uses a number of primary SCSI commands to identify its target, discover LUNs, and obtain their size. These are the commands:

Identify: Which device are you?

Report LUNs: How many LUNs are behind this storage array port?

Report capacity: What is the capacity of each LUN?

Request sense: Is a LUN online and available?

Many LUNs can be visible at once: LUNs can be advertised on many storage ports and discovered by several hosts at the same time. It is important to ensure that only one host accesses each LUN on the storage array at a time, unless the hosts are configured in a cluster. As the host mounts each LUN volume, it writes a signature at the start of the LUN to claim exclusive access. If a second host should discover and try to mount the same LUN volume, it overwrites the previous signature. When there are many hosts, mistakes are more likely to be made, and more than one host might access the same LUN by accident.

LUN masking is the most common method of ensuring LUN security. LUN masking ensures that only one host can access a LUN; all other hosts are masked out. LUN masking is essentially a mapping table inside the front-end array controllers. It determines which LUNs are to be advertised through which storage array ports and which host is allowed to own which LUNs.

An alternative method of LUN security is LUN mapping, although both methods can be used concurrently. If LUN masking is unavailable in the storage array, LUN mapping can be used to ensure that only one host at a time can access each LUN. LUN mapping is configured in the HBA of each host, but it is the responsibility of the administrator to configure each HBA so that each host has exclusive access to its LUNs.

Device Aliases Versus Zone-Based (FC) Aliases

When the port WWN (pWWN) of a device must be specified to configure different features (zoning, QoS, port security) in a Cisco MDS 9000 family switch, you must assign the correct device name each time you configure these features. An incorrect device name may cause unexpected results. You can avoid this problem if you define a user-friendly name for a port WWN and use this name in the entire configuration commands as required. These user-friendly names are referred to as device aliases. Table 6-10 lists the different characteristics of zone aliases and device aliases.

Table 6-10 Comparison Between Zone Aliases and Device Aliases

Device alias supports two modes: basic and enhanced. When you configure the basic mode using device aliases, the application immediately expands to pWWNs. This operation continues until the mode is changed to enhanced. When device alias runs in the enhanced mode, all applications accept the device-alias configuration in the native format. The applications store the device alias name in the configuration and distribute it in the device alias format instead of expanding to pWWN. The applications track the device alias database changes and take actions to enforce it.
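A minimal NX-OS sketch, with a hypothetical alias name and pWWN, that defines and commits a device alias in enhanced mode:

switch(config)# device-alias mode enhanced
switch(config)# device-alias database
switch(config-device-alias-db)# device-alias name host1-hba1 pwwn 21:00:00:e0:8b:19:ab:cd
switch(config-device-alias-db)# exit
switch(config)# device-alias commit
! distribute the database; applications can now reference host1-hba1 natively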

Reference List

Storage Network Industry Association (SNIA): http://www.snia.org

SNIA dictionary: http://www.snia.org/education/dictionary

Internet Engineering Task Force—IP Storage: http://www.ietf.org/html.charters/ips-charter.html

ANSI T10 Technical Committee: www.T10.org

ANSI T11—Fibre Channel: http://www.t11.org/index.htm

IEEE Standards site: http://standards.ieee.org

Farley, Marc. Rethinking Enterprise Storage: A Hybrid Cloud Model. Microsoft Press, 2013.

Kembel, Robert W. Fibre Channel: A Comprehensive Introduction. Northwest Learning Associates, Inc., 2000.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 6-11 lists a reference for these key topics and the page numbers on which each is found.



Table 6-11 Key Topics for Chapter 6

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

virtual storage-area network (VSAN)
zoning
fan-out ratio
fan-in ratio
logical unit number (LUN) masking
LUN mapping
Extended Link Services (ELS)
multipathing
Big Data
IoE
N_Port Login (PLOGI)
F_Port Login (FLOGI)
logout (LOGO)
process login (PRLI)
process logout (PRLO)
state change notification (SCN)
registered state change notification (RSCN)
state change registration (SCR)
inter-VSAN routing (IVR)
Fibre Channel Generic Services (FC-GS)
hard disk drive (HDD)
solid-state drive (SSD)
Small Computer System Interface (SCSI)
initiator
target
American National Standards Institute (ANSI)
International Committee for Information Technology Standards (INCITS)
Fibre Connection (FICON)
T11
Input/Output Operations per Second (IOPS)
Common Internet File System (CIFS)
Network File System (NFS)
Cisco Fabric Services (CFS)
Top-of-Rack (ToR)
End-of-Row (EoR)
Dynamic Port VSAN Membership (DPVM)

Fabric Shortest Path First (FSPF)

Chapter 7. Cisco MDS Product Family

This chapter covers the following exam topics:

Describe the Cisco MDS Product Family
Describe the Cisco MDS Multilayer Directors
Describe the Cisco MDS Fabric Switches
Describe the Cisco MDS Software and Storage Services
Describe the Cisco Prime Data Center Network Management

Evolving business requirements highlight the need for high-density, high-speed, and low-latency networks that improve data center scalability and manageability while controlling IT costs. The Cisco MDS 9000 family provides the leading high-density, high-bandwidth storage networking solution along with integrated fabric applications to support dynamic data center requirements. A quick look at the storage-area network (SAN) marketplace and the track record of delivery reveals that Cisco is the leader among SAN providers in delivering compatible architectures and designs that can adapt to future changes in technology. One major benefit of the Cisco MDS 9000 family architecture is investment protection: the capability of first-, second-, and third-generation modules to all coexist in both existing customer chassis and new switch configurations. With the addition of the fourth-generation modules, the Cisco MDS 9000 family of storage networking products now supports 1-, 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel along with being Fibre Channel over Ethernet (FCoE) ready, providing true investment protection.

This chapter goes directly into the key concepts of the Cisco Storage Networking product family and discusses topics relevant to "Introducing Cisco Data Center Technologies" (DCICT) certification. In this chapter, the focus is on the current generation of modules and storage services solutions. This chapter discusses the Cisco Multilayer Data Center Switch (MDS) product family in four main areas: Cisco MDS Multilayer Directors, Cisco MDS Multiservice and Multilayer Fabric Switches, Cisco MDS Software and Storage Services, and Data Center Network Management.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 7-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 7-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco SAN switches are director model?
a. MDS 9148S
b. MDS 9520
c. MDS 9706
d. MDS 9704

2. Which of the following features are NOT supported by Cisco MDS switches?
a. Multiprotocol support
b. High availability for fabric modules
c. SAN extension
d. onePK API
e. 32 Gbps Fibre Channel

3. How many crossbar switching fabric modules exist on a Cisco MDS 9706 switch?
a. 8
b. 6
c. 4
d. 2

4. Which of the following MDS switches are qualified for IBM Fiber Connection (FICON)?
a. MDS 9506
b. MDS 9513
c. MDS 9709
d. MDS 9222i

5. What is head-of-line blocking?
a. It is an authentication mechanism for Fibre Channel frames.
b. It is used for a forwarding mechanism in Cisco MDS switches.
c. Congestion on one port causes blocking of traffic to other ports.
d. It is used for high availability for fabric modules.

6. Which of the following Cisco MDS fabric switches support Fibre Channel over IP (FCIP) and I/O Accelerator (IOA)?

a. MDS 9250i
b. MDS 9222i
c. MDS 9148S
d. MDS 9148

7. Which protocol or protocols does the Cisco MDS 9250i fabric switch NOT support?
a. Fibre Channel over Ethernet
b. IP over Fibre Channel
c. Fibre Channel over IP
d. Virtual Extensible LAN
e. IBM Fiber Connection

8. What is N-port identifier virtualization (NPIV) in the Fibre Channel world?
a. It is an industry standard that allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link.
b. It is a new Fibre Channel login method that is used for storage arrays.
c. It is a high-availability feature that allows managing the Fibre Channel traffic load.
d. It is an industry standard that reduces the number of Fibre Channel domain IDs in core-edge SANs.

9. What is Cisco Data Mobility Manager (DMM)?
a. It is a mainframe-based replication solution.
b. It is an intelligent fabric application that does data migration between heterogeneous disk arrays.
c. It is an intelligent fabric application that provides Fibre Channel link encryption.
d. It is an intelligent fabric application that provides SCSI acceleration.

10. Which of the following options are fundamental components of Cisco Prime DCNM SAN?
a. DCNM-SAN Server
b. DCNM-SAN Client
c. Performance Manager
d. DCNM Data Mobility Manager
e. DCNM traffic accelerator

Foundation Topics

Once Upon a Time There Was a Company Called Andiamo

In August 2002 Cisco Systems officially entered the storage networking business with the spin-in acquisition of Andiamo Systems. Andiamo means "Let's go" in Italian, and Cisco entered the storage business by saying "Andiamo." The MDS 9000 Multilayer Director switch family is composed of fabric and director-class Fibre Channel switches. These devices are capable of deploying highly available and scalable storage-area networks (SAN) for the most demanding data center

environments. Figure 7-1 portrays the available Cisco MDS 9000 switches (at the time of writing).

Figure 7-1 Cisco MDS 9000 Product Family

In December 2002, Cisco set the benchmark for high-performance Fibre Channel Director switches with the release of the Cisco MDS 9509 Multilayer Director. Offering a maximum system density of 224 1/2-Gbps Fibre Channel ports, first-generation Cisco MDS 9000 Family line cards connected to dual crossbar switch fabrics with an aggregate of 80 Gbps full-duplex bandwidth per slot, resulting in a total system switching capacity of 1.2 Tbps. In 2006, Cisco again set the benchmark for high-performance Fibre Channel Director switches. With the introduction of second-generation line cards and the flagship Cisco MDS 9513 Multilayer Director, system density has grown to a maximum of 528 Fibre Channel ports operating at 1/2/4-Gbps or 44 10-Gbps Fibre Channel ports, increasing interface bandwidth per slot to 96 Gbps full duplex, resulting in a total system switching capacity of 2.44 Tbps.

At the time of writing this chapter, the available line cards for MDS reached to fourth and fifth generation, and the earlier generation line cards already became end of sale (EOS), so in this chapter the focus will not be on first-, second-, and third-generation line cards. On the other hand, the earlier generation line cards can still work on MDS 9500 Director switches together with the latest line cards.

The Cisco approach to Fibre Channel storage-area networking reflected its strong heritage in data networking, namely the concept of building a modular chassis that could be upgraded to incorporate future services without ripping and replacing the original chassis. Following a 10-year run with the initial MDS platform, in 2012 Cisco introduced its next generation of Fibre Channel platforms, starting with the MDS 9710 Multilayer Director and MDS 9250i Multiservice Fabric Switch. These new MDS solutions provide the following:

Enhanced availability: Built on the experience from the MDS 9500 and the Nexus product line, the MDS 9710 provides enhanced reliability, including N+1 fabric modules and three fan trays with front-to-back airflow. The N+1 concept allows eight power supplies and up to six fabric modules.

High-density, high-performance storage director: To handle the rapid scale accompanied by data center consolidation, the MDS 9710 crossbar and central arbiter design enable up to 1.5 Tbps of FC throughput per slot. This is coupled with the 384 ports of line rate 16 Gbps FC.

Multiprotocol support: The MDS 9710 Multilayer Director supports a range of Fibre Channel speeds, IBM Fiber Connection (FICON for mainframe connectivity), and 10 Gbps Fibre Channel over Ethernet (FCoE). Being considered for future development are 32 Gbps Fibre Channel and 40 Gbps Fibre Channel over Ethernet (FCoE). The MDS 9250i Multiservice Fabric Switch accommodates 16 Gbps FC, 10 Gbps FCoE, and 10 Gbps Fibre Channel over IP (FCIP) or Internet Small Computer System Interface (iSCSI), and FICON. The MDS 9250i has been designed to help accelerate SAN extension, tape backup, and disk replication as well as migrate data between arrays or data centers.

Comprehensive and simplified management of the entire data center network: Organizations looking to simplify management and integrate previously disparate networking teams (server, storage, and data) should find the Cisco Prime Data Center Network Manager (DCNM) appealing. From a single management tool, they can manage Cisco MDS and Nexus platforms. A single resource should help to accelerate troubleshooting with easy-to-understand dynamic topology views and the ability to monitor the health of all the devices and filters on events that occur. Organizations should also be able to leverage and report on the data captured and plan for additional capacity more effectively. DCNM is also integrated with VMware vSphere to provide a more comprehensive VM-to-storage visibility. The latter allows for tighter integration with the server and network admins.

Future proofing: The MDS 9710 will support the Cisco Open Network Environment (ONE) Platform Kit (onePK) API to enable future services access and programmability, essentially extending the concept of Software Defined Networks (SDN) into the SAN.

The Secret Sauce of Cisco MDS: Architecture

All Cisco MDS 9500 Series Director switches are based on the same underlying crossbar architecture. Frame forwarding logic is distributed in application-specific integrated circuits (ASIC) on the line cards themselves, resulting in a distributed forwarding architecture. All frame forwarding is performed in dedicated forwarding hardware and distributed across all line cards in the system. There is never any requirement for frames to be forwarded to the supervisor for centralized forwarding, nor is there ever any requirement for forwarding to drop back into software for processing.

Forwarding of frames on the Cisco MDS always follows the same processing sequence:

1. Starting with frame error checking on ingress, the frame is tagged with its ingress port and virtual storage-area networking (VSAN).

2. Input access control lists (ACL) are used to enforce hard zoning.

3. A lookup is issued to the forwarding and frame routing tables to determine where to switch the frame to and whether to route the frame to a new VSAN and/or rewrite the source/destination of the frame if Inter VSAN Routing (IVR) is being used.

4. If there are multiple equal-cost paths for a given destination device, the switch will choose an egress port based on the load-balancing policy configured for that particular VSAN.

5. If the destination port is congested or busy, the frame will be queued on the ingress line card for delivery. QoS and CoS can be used to give priority to those traffic flows, if priority is desired for a given set of traffic flows, making use of QoS policy maps to determine the order in which frames are scheduled.

6. When a frame is scheduled for transmission, it is transferred across the crossbar switch fabric to the egress line card where there is a further output access control list (ACL), the VSAN tag used internal to the switch is stripped from the front of the frame, and the frame is transmitted out the output port.

IVR is an MDS NX-OS software feature that is similar to Network Address Translation (NAT) in the IP world. It is used for Fibre Channel NAT, enabling the switch to handle duplicate addresses within different fabrics separated by VSANs.

To ensure fairness between ports, frame switching uses a technique called virtual output queuing (VOQ). Combined with centralized arbitration, this ensures that when multiple input ports on multiple line cards are contending for a congested output port, there is complete fairness between ports regardless of port location within the switch. It also ensures that congestion on one port does not result in blocking of traffic to other ports (head-of-line blocking).

From a switch architecture perspective, crossbar capacity is provided to each line card slot as a small number of high-bandwidth channels. This provides significant flexibility for line card module design and architecture and makes it possible to build both performance-optimized (nonblocking, non-oversubscribed) line cards and host-optimized (nonblocking, oversubscribed) line cards, as well as multiprotocol (Small Computer System Interface over IP [iSCSI] and Fibre Channel over IP [FCIP]) and intelligent (storage services) line cards.

Depending on the MDS Director switch model, the number of crossbar switch fabrics that are installed might change. One active channel and one standby channel are used on each of the crossbar switch fabrics, resulting in both crossbars being active (1:1 redundancy). In the event of a crossbar failure, the standby channel on the remaining crossbar becomes active, resulting in an identical amount of active crossbar switching capacity regardless of whether the system is operating with one or two crossbar switch fabrics.
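The per-VSAN load-balancing policy mentioned in step 4 is configurable. A minimal NX-OS sketch follows, with a hypothetical VSAN number; src-dst-id hashes on the source and destination FC IDs, whereas the default src-dst-ox-id also includes the exchange ID:

switch(config)# vsan database
switch(config-vsan-db)# vsan 10 loadbalancing src-dst-id
! flow-based hashing; use src-dst-ox-id for exchange-based hashing
switch# show vsan 10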

. Figure 7-3 Cisco MDS 9000 Series Switches Packet Flow 1. The primary function of the MAC is to decipher Fibre Channel frames from the incoming bit stream by identifying start-of-frame (SoF) and end-of-frame (EoF) markers in the bit stream. the PHY converts optical signals received into electrical signals. Each front-panel port has an associated physical layer (PHY) device port and a media access controller (MAC). On ingress. as well as other Fibre Channel signaling primitives.Figure 7-2 Crossbar Switch with Central Arbitration A Day in the Life of a Fibre Channel Frame in MDS 9000 All Cisco MDS 9000 family line cards use the same process for frame forwarding. The various frame-processing stages are outlined in detail in this section (see Figure 7-3). Ingress PHY/MAC Modules: Fibre Channel frames enter the switch front-panel optical Fibre Channel ports through Small Form-Factor Pluggable (SFP) optics for Fibre Channel interfaces or X2 transceiver optical modules for 10 Gbps Fibre Channel interfaces. sending the electrical stream of bits into the MAC.

Along with frames being received, the MAC communicates with the forwarding and queuing modules and issues return buffer-credits (R_RDY primitives) to the sending device to recredit the received frames. Return buffer credits are issued if the ingress-queuing buffers have not exceeded an internal threshold. If ingress rate limiting has been enabled, return buffer credits are paced to match the configured throughput on the port. The MAC also checks that the received frame contains no errors by validating its cyclic redundancy check (CRC). Before forwarding the frame, the MAC prepends an internal switch header onto the frame, providing the forwarding module with details such as ingress port, type of port, ingress VSAN, frame QoS markings (if the ingress port is set to trust the sender, blank if not), and a timestamp of when the frame entered the switch. The concept of an internal switch header is an important architectural element of what enables the multiprotocol and multitransport capabilities of Cisco MDS 9000 family switches.

2. Ingress Forwarding Module: The ingress forwarding module determines which output port on the switch to send the ingress frame to. When a frame is received by the ingress forwarding module, three simultaneous lookups are initiated, followed by forwarding logic based on the result of those lookups:

The first of these is a per-VSAN forwarding table lookup determined by VSAN and the destination address. The result from this lookup tells the forwarding module where to switch the frame to (egress port). If the lookup fails, the frame is dropped because there is no destination to forward to. This table lookup also indicates whether there is a requirement for any Inter VSAN routing (IVR, in simple terms Fibre Channel NAT). If the frame has multiple possible egress ports (for example, if there are multiple equal-cost Fabric Shortest Path First [FSPF] routes or the destination is a Cisco port channel bundle), a load-balancing decision is made to choose a single physical egress interface from a set of interfaces. The load-balancing policy (and algorithm) can be configured on a per-VSAN basis to be either a hash of the source and destination addresses (S_ID, D_ID) or a hash also based on the exchange ID (S_ID, D_ID, OX_ID) of the frame. In this manner, all frames within the same flow (either between a single source to a single destination or within a single SCSI I/O operation) will always be forwarded on the same physical path, guaranteeing in-order delivery.

The second lookup is a statistics lookup. The switch uses this to maintain a series of statistics about what devices are communicating between one another. Statistics gathered include frame and byte counters from a given source to a given destination.

The third lookup is a per-VSAN ingress ACL lookup determined by VSAN, ingress port, source address, destination address, and a variety of other fields from the interswitch header and Fibre Channel frame header. The switch uses the result from this lookup to either permit the frame to be forwarded, drop the frame, or perform any additional inspection on the frame. This also forms the basis for the Cisco implementation of hard zoning within Fibre Channel.

If traffic from a given source address to a given destination address is marked for IVR, the final forwarding step is to rewrite the VSAN ID and optionally the source and destination addresses (S_ID, D_ID) of the frame.

3. Queuing Module: The queuing module is responsible for scheduling the flow of frames through the switch (see Figure 7-4). It uses distributed virtual output queuing (VOQ) to prevent head-of-line blocking and is the functional block responsible for implementing QoS and fairness between frames contending for a congested output port. The queuing module is also responsible for providing frame buffering for ingress queuing if an output interface is congested.

Figure 7-4 Queuing Module Logic

Note
Head-of-line blocking is the congestion of one port resulting in blocking of traffic to other ports. In the Ethernet world, head-of-line blocking is generally solved by "dropping" frames, so head-of-line blocking is less of an issue in the Ethernet world. In the Fibre Channel world, if there is congestion, the ingress port is blocked. Figure 7-5 shows a congested destination, which in this picture is point A. Point A is causing head-of-line blocking: all the packets sent from the source device to all the destination points are held up because destination point A cannot accept any packet.

Figure 7-5 Head-of-Line Blocking

Frames enter the queuing module and are queued in a VOQ based on the output port to which the frame is due to be sent (see Figure 7-6). Unicast Fibre Channel frames (frames with only a single destination) are queued in one virtual output queue. Multicast Fibre Channel frames are replicated across multiple virtual output queues for each output port to which a multicast frame is destined to be forwarded.

Figure 7-6 Virtual Output Queues

Four virtual output queues are available for each output port, corresponding to four QoS levels per port: one priority queue and three queues using deficit-weighted round robin (DWRR) scheduling. DWRR is a scheduling algorithm that is used for congestion management in the network scheduler. If a queue uses excess bandwidth, the DWRR algorithm keeps track of this excess usage and subtracts it from the queue's next transmission bandwidth allocation. Frames queued in the priority queue will be scheduled before traffic in any of the DWRR queues. Frames queued in the three DWRR queues are scheduled using whatever relative weightings are configured. The default weights are 50 for the high queue, 30 for the medium queue, and 20 for the low queue; however, these can be set to any ratio. Figure 7-7 portrays the distributed virtual output queuing.

Figure 7-7 Distributed Virtual Output Queuing

4a. Central Arbitration Module: The central arbiters are responsible for scheduling frame transmission from ingress to egress. They monitor utilization in the transmit queues, granting requests to transmit from ingress forwarding modules whenever there is an empty slot in the output port buffers. Anytime a frame is to be transferred from ingress to egress, the originating line card will issue a request to the central arbiters asking for a scheduling slot on the output port of the output line card. If there is an empty slot in the output port buffer, the active central arbiter will respond to the request with a grant. If there is no free buffer space, the arbiter will wait until there is a free buffer. In the event of congestion on an output port, the arbiter will grant transmission to the output port in a fair (round-robin) manner. Because switch-wide arbitration is operating in a centralized manner, fairness is also guaranteed. Congestion does not result in head-of-line blocking because queuing at ingress utilizes virtual output queues, and each QoS level for each output port has its own virtual output queue. In this manner, any congestion on an output port will result in distributed buffering across ingress ports and line cards. This also helps ensure that any blocking does not spread throughout the system.

Two redundant central arbiters are available on each Cisco MDS Director switch, with the redundant arbiter snooping all requests issued at and grants offered by the primary active arbiter. They are capable of scheduling more than a billion frames per second. If the primary arbiter fails, the redundant arbiter can resume operation without any loss of frames or performance or suffer any delays caused by switchover.

4b. Crossbar Switch Fabric Module: Dual redundant crossbar switch fabrics are used on all Director switches to provide low-latency, nonblocking, high-throughput, and nonoversubscribed switching capacity between line card modules. The crossbar itself operates three times over speed; that is, it operates internally at a rate three times faster than its endpoints. (Figure 7-8 portrays the crossbar switch architecture.) The over speed helps ensure that the crossbar remains nonblocking even while sustaining peak traffic levels. Another feature to maximize crossbar performance under peak frame rate is a feature called superframing. Superframing improves the crossbar efficiency in the situation where there are multiple frames queued originating from one line card with a common destination. Rather than having to arbitrate each frame independently, multiple frames can be transferred back to back across the crossbar. Crossbar switch fabric modules depend on clock modules for crossbar channel synchronization.

Figure 7-8 Centralized Crossbar Architecture

5. Egress Forwarding Module: The primary role of the egress forwarding module is to perform some additional frame checks, make sure that the Fibre Channel frame is valid, and enforce some additional ACL checks (logical unit number [LUN] zoning and read-only zoning if configured). There is also additional Inter VSAN routing (IVR) frame processing, if the frame requires IVR, and outbound QoS queuing. Figure 7-9 portrays it in a flow diagram.

Figure 7-9 Egress Frame Forwarding Logic

Before the frame arrives, the egress-forwarding module has signaled the central arbiter that there is output buffer space available for receiving frames. When a frame arrives at the egress-forwarding module from the crossbar switch fabric, the first processing step is to validate that the frame is error free and has a valid CRC. If the frame is valid, the egress-forwarding module will issue an ACL table lookup to see if any additional ACLs need to be applied in permitting or denying the frame to flow to its destination. ACLs applied on egress include LUN zoning and read-only zoning ACLs. The next processing step is to finalize any Fibre Channel frame header rewrites associated with IVR. Next, the frame time stamp is checked, and if the frame has been queued within the switch for too long, it is dropped. Dropping of expired frames is in accordance with Fibre Channel standards and sets an upper limit for how long a frame can be queued within a single switch. The default drop timer is 500 milliseconds and can be tuned using the fcdroplatency switch configuration setting. Under normal operating conditions, it is very unlikely for any frames to be expired. Note that 500 milliseconds is an extremely long time to buffer a frame and typically occurs only when long, slow WAN links subject to packet loss are combined with a number of Fibre Channel sources. Finally, the frame is queued for transmission to the destination port MAC with queuing on a per-CoS basis matching the DWRR queuing and configured QoS policy map.

6. Egress PHY/MAC Modules: Frames arriving into the egress PHY/MAC module first have their switch internal header removed. If the output port is a Trunked E_Port (TE_Port), an Enhanced ISL (EISL) header is prepended to the frame. Finally, the frames are transmitted onto the cable of the output port. The outbound MAC is responsible for formatting the frame into the correct encoding over the cable, inserting SoF and EoF markers, and inserting other Fibre Channel primitives over the cable.

Lossless Networkwide In-Order Delivery Guarantee

The Cisco MDS switch architecture guarantees that frames can never be reordered within a switch. Figure 7-10 portrays how frames are delivered in order.

Figure 7-10 In-Order Delivery

This guarantee extends across an entire multiswitch fabric, provided the fabric is stable and no topology changes are occurring. Unfortunately, when a topology change occurs (for example, a link failure occurs), frames might be delivered out of order. Reordering of frames can occur when routing changes are instantiated, shifting traffic from what might have been a congested link to one that is no longer congested. In this case, newer frames transmitted down a new non-congested path might pass older frames queued in the previous congested path.
. shifting traffic from what might have been a congested link to one that is no longer congested. provided the fabric is stable and no topology changes are occurring. inserting SoF and EoF markers. and inserting other Fibre Channel primitives over the cable. when a topology change occurs (for example. This guarantee extends across an entire multiswitch fabric. Reordering of frames can occur when routing changes are instantiated. the frames are transmitted onto the cable of the output port. The outbound MAC is responsible for formatting the frame into the correct encoding over the cable.Figure 7-10 In-Order Delivery Finally. Lossless Networkwide In-Order Delivery Guarantee The Cisco MDS switch architecture guarantees that frames can never be reordered within a switch. frames might be delivered out of order. Unfortunately. newer frames transmitted down a new non-congested path might pass older frames queued in the previous congested path. In this case. a link failure occurs).

This partitioning of fabric services greatly reduces network instability by containing fabric reconfiguration settings and error conditions within an individual VSAN. This mechanism is a little crude because it guarantees that an ISL failure or topology change will be a disruptive event to the entire fabric. Cisco MDS Software and Storage Services The foundation of the data center network is the network software that runs the switches in the network. Native iSCSI support in the Cisco MDS 9000 family helps customers consolidate storage for a wide range of servers into a common pool on the SAN. manageability. starting from release 4. Small Computer System Interface over IP (iSCSI). Multiprotocol Support In addition to supporting Fibre Channel Protocol (FCP). No new forwarding decisions are made during the time it takes for traffic to flush from existing interface queues. stable.1 it is rebranded as Cisco MDS 9000 NX-OS Software. Virtual SANs Virtual SAN (VSAN) technology partitions a single physical SAN into multiple VSANs. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. all Fibre Channel switch vendors have a fabricwide configuration setting called a networkwide in-order delivery (network-IOD) guarantee. Because of this. Common Software Across All Platforms Cisco MDS 9000 NX-OS runs on all Cisco MDS 9000 family switches and Cisco Nexus family of data center Ethernet switches. isolated environments to improve Fibre Channel SAN scalability. This stops the forwarding of new frames when there has been an ISL failure or a topology change. Flexibility and Scalability Cisco MDS 9000 NX-OS is a highly flexible and scalable platform for enterprise SANs. It is based on a secure. providing a modular and sustainable base for the long term. Cisco MDS 9000 NX-OS supports IBM Fibre Connection (FICON). all SAN switch vendors default to disable network-IOD except where its use is mandated by channel protocols such as IBM Fiber Connection (FICON). and security. VSAN capabilities enable Cisco MDS 9000 NX-OS to logically divide a large physical fabric into separate.To protect against this. Formerly known as Cisco SAN-OS. VSANs facilitate true hardware-based separation of FICON and open systems. For mainframe environments. Each VSAN is a logically and functionally separate SAN with its own set of Fibre Channel fabric services. and standard Linux core. availability. The strict traffic segregation provided by VSANs helps ensure that the control and data traffic of a given VSAN are confined within the VSAN’s own domain. Users can create SAN administrator roles that are limited in scope . FCoE. and Fibre Channel over IP (FCIP) in a single platform. The general idea here is that it is better to stop all traffic for a period of time than to allow the possibility for some traffic to arrive out of order.

The feature also extends the distance for . Inter-VSAN (IVR) routing is used to transport data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric. Trunking allows Inter-Switch Links (ISL) to carry traffic for multiple VSANs on the same physical link. IVR also can be used with FCIP to create more efficient business-continuity and disaster-recovery solutions. Advanced capabilities. Cisco DMM can be introduced without the need to rewire or reconfigure the SAN. Network-Assisted Applications Cisco MDS 9000 family network-assisted storage applications offer deployment flexibility and investment protection by allowing the deployment of appliance-based storage applications for any server or storage device in the SAN without the need to rewire SAN switches or end devices. extending VSANs to include devices at remote locations. F-port trunking allows multiple VSANs on a single uplink in N-port virtualization (NPV) mode. continuous data protection. and multipath supports. For example. VSANs are supported across FCIP links between SANs. such as data verification. no additional management software is required. Easy insertion and provisioning of appliance-based storage applications is achieved by removing the need to insert the appliance into the primary I/O path between servers and storage. This approach improves the manageability of large SANs and reduces disruptions resulting from human errors by isolating the effect of a SAN administrator’s action to a specific VSAN whose membership can be isolated based on the switch ports or World Wide Names (WWN) of attached devices. The Cisco MDS 9000 family also implements trunking for VSANs.to certain VSANs. provide flexibility and meet the high-availability requirements of enterprise data centers. and other roles can be set up to allow configuration and management within specific VSANs only. I/O Accelerator The Cisco MDS 9000 I/O Accelerator (IOA) feature is a SAN-based intelligent fabric application that provides SCSI acceleration to significantly improve the number of SCSI I/O operations per second (IOPS) over long distances in a Fibre Channel or FCIP SAN by reducing the effect of transport latency on the processing of each operation. a SAN administrator role can be set up to allow configuration of all platform-specific capabilities. Valuable resources such as tape libraries can be easily shared without compromise. Cisco DMM is transparent to host applications and storage devices. Intelligent Fabric Applications Cisco MDS 9000 NX-OS forms a solid foundation for delivery of network-based storage applications and services such as virtualization. nor can initiators access any resources aside from the ones designated with IVR. and replication on Cisco MDS 9000 family switches. intelligent fabric application offering data migration between heterogeneous disk arrays. unequal-size logical unit number (LUN) migration. data acceleration. Cisco DCNM-SAN is used to administer Cisco DMM. Cisco DMM offers rate-adjusted online migration to enable applications to continue uninterrupted operation while data migration is in progress. data migration. Fibre Channel control traffic does not flow between VSANs. snapshots. Cisco Data Mobility Manager Cisco Data Mobility Manager (DMM) is a SAN-based.

IOA includes the following features: Transport independence: IOA provides a unified solution to accelerate I/O operations over the MAN and WAN. Write acceleration: IOA provides write acceleration for Fibre Channel and FCIP networks. known as the System Data Mover (SDM). EMC MirrorView. and HDS TrueCopy to extend the distance between data centers or reduce the effects of latency. Network Security Cisco takes a comprehensive approach to network security with Cisco MDS 9000 NX-OS. Write acceleration significantly reduces latency and extends the distance for disk replication. IOA can also be used to enable remote tape backup and restore operations without significant throughput degradation. XRC Acceleration IBM Extended Remote Copy (XRC). Cisco has supported XRC over FCIP at distances of up to 124 miles (200 km). Intuitive provisioning: IOA can be easily provisioned using Cisco DCNM-SAN. Speed independence: IOA can accelerate 1/2/4/8/10/16-Gbps links and consolidate traffic over 8/10/16-Gbps ISLs.disaster-recovery and business-continuity applications over WANs and metropolitan-area networks (MAN). Integration of data compression into IOA enables implementation of more efficient Fibre Channel.and FCIP-based business-continuity and disaster-recovery solutions without the need to add or manage a separate device. High availability and resiliency: IOA combines port channels and equal-cost multipath (ECMP) routing with disk and tape acceleration for higher availability and resiliency. reducing or eliminating the latency effects that can otherwise reduce performance at distances of 124 miles (200 km) or greater. The new Cisco MDS 9000 XRC Acceleration feature supports essentially unlimited distances. Service clustering: IOA delivers redundancy and load balancing for I/O acceleration. In the past. IOA as a fabric service: IOA service units (interfaces) can be located anywhere in the fabric and can provide acceleration service to any port. This process is sometimes referred to as XRC emulation or XRC extension. which is now officially renamed IBM z/OS Global Mirror. Compression: Compression in IOA increases the effective MAN and WAN bandwidth without the need for costly infrastructure upgrades. XRC Acceleration accelerates dynamic updates from the primary to the secondary direct-access storage device (DASD) by reading ahead of the remote replication IBM System z. Tape acceleration improves the performance of tape devices and enables remote tape vaulting over extended distances for data backup for disaster-recovery purposes. is a mainframe-based software replication solution in widespread use in financial institutions worldwide. This data is buffered within the Cisco MDS 9000 family module that is local to the SDM. In . Transparent insertion: IOA requires no fabric reconfiguration or rewiring and can be transparently turned on by enabling the IOA license. IOA can be deployed with disk data-replication solutions such as EMC Symmetrix Remote Data Facility (SRDF). Tape acceleration: IOA provides tape acceleration for Fibre Channel and FCIP networks.

authorization. Fibre Channel data between E-ports of Cisco MDS 9000 8-Gbps Fibre Channel switching modules can be encrypted. Cisco MDS 9000 NX-OS offers other significant security features. Cisco TrustSec Fibre Channel Link Encryption is an extension of the FC-SP feature and uses the existing FC-SP architecture. Starting with Cisco MDS 9000 NX-OS 4. because the encryption is at full line rate with no performance penalty. The proven IETF standard IP Security (IPsec) capabilities in Cisco MDS 9000 NX-OS offer secure authentication. data encryption for privacy. coarse wavelength-division multiplexing (CWDM). Cisco TrustSec Fibre Channel Link Encryption Cisco TrustSec Fibre Channel Link Encryption addresses customer needs for data integrity and privacy. Other customers. and AES-GMAC authenticates only the frames that are being passed between the two E-ports. such as role-based access control (RBAC) and Cisco TrustSec Fibre Channel Link Encryption. such as those in defense and intelligence services. IP Security for FCIP and iSCSI Traffic flowing outside the data center must be protected.2(1). At ingress. may be even more security focused and choose to encrypt all traffic within their data center as well. Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) is used to perform authentication locally in the Cisco MDS 9000 family or remotely through RADIUS or TACACS+. a switch or host cannot join the fabric. Role-Based Access Control . Cisco MDS 9000 NX-OS uses Internet Key Exchange Version 1 (IKEv1) and IKEv2 protocols to dynamically set up security associations for IPsec using preshared keys for remote-side authentication. Encryption is performed at line rate by encapsulating frames at egress with encryption using the GCM mode of AES 128-bit encryption.addition to VSANs. and supports the industry-standard security protocol for authentication. and data integrity for both FCIP and iSCSI connections on the Cisco MDS 9000 family switches. There are two primary use cases for Cisco TrustSec Fibre Channel Link Encryption: Many customers will want to help ensure the privacy and integrity of any data that leaves the secure confines of their data center through a native Fibre Channel link. AES-GCM encrypts and authenticates frames. or dense wavelength-division multiplexing (DWDM). which provide true isolation of SAN-attached devices. Cisco MDS 9000 family management has been certified for Federal Information Processing Standards (FIPS) 140-2 Level 2 and validated for Common Criteria (CC) Evaluation Assurance Level 3 (EAL 3). frames are decrypted and authenticated with integrity check. The encryption algorithm is 128-bit Advanced Encryption Standard (AES) and enables either AES-Galois/Counter Mode (AESGCM) or AES-Galois Message Authentication Code (AES-GMAC) for an interface. and accounting (AAA). such as dark fiber. A future release of Cisco NX-OS software will enable encryption of Fibre Channel data between Eports of Cisco MDS 9000 16-Gbps Fibre Channel switching modules. Switch and Host Authentication Fibre Channel Security Protocol (FC-SP) capabilities in Cisco MDS 9000 NX-OS provide switchto-switch and host-to-switch authentication for enterprisewide fabrics. If authentication fails.

Role-Based Access Control
Cisco MDS 9000 NX-OS provides RBAC for management access to the Cisco MDS 9000 family command-line interface (CLI) and Simple Network Management Protocol (SNMP). The roles describe the access-control policies for various feature-specific commands on one or more VSANs. In addition to the two default roles on the switch, up to 64 user-defined roles can be configured. CLI and SNMP users and passwords are shared, and only a single administrative account is required for each user. Applications using SNMP Version 3 (SNMPv3), such as Cisco DCNM-SAN, offer full RBAC for switch features managed using this protocol.

Port Security and Fabric Binding
Port security locks the mapping of an entity to a switch port. The entities can be hosts, targets, or switches, identified by their WWNs. This locking mechanism helps ensure that unauthorized devices connecting to the switch port do not disrupt the SAN fabric. Fabric binding extends port security to allow ISLs between only specified switches.

Zoning
Zoning provides access control for devices within a SAN. Cisco MDS 9000 NX-OS supports the following types of zoning:

N-port zoning: Defines zone members based on the end-device (host and storage) port: WWN, or Fibre Channel identifier (FC-ID)

Fx-port zoning: Defines zone members based on the switch port: WWN, WWN plus interface index or domain ID plus interface index, domain ID plus port number (for Brocade interoperability), fabric port WWN, or interface plus domain ID or switch WWN of the interface

iSCSI zoning: Defines zone members based on the host zone: iSCSI name, or IP address

To provide strict network security, zoning is always enforced per frame using access control lists (ACLs) that are applied at the ingress switch. All zoning policies are enforced in hardware, and none of them cause performance degradation. The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic and enhanced zoning functionalities. Enhanced zoning session-management capabilities further enhance security by allowing only one user at a time to modify zones.

In addition, the software provides support for smart zoning. Instead of creating multiple two-member zones, smart zoning enables customers to intuitively create fewer multimember zones at an application level, a physical cluster level, or a virtualized cluster level without sacrificing switch resources, which dramatically reduces operational complexity while consuming fewer resources on the switch.
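To make the zoning model concrete, the following minimal sketch (the VSAN number, zone name, and pWWNs are hypothetical) creates a two-member zone, places it in a zone set, and activates the zone set:

switch# configure terminal
switch(config)# zone name ORA_DB1 vsan 10
switch(config-zone)# member pwwn 21:00:00:e0:8b:08:96:22    ! host HBA pWWN (hypothetical)
switch(config-zone)# member pwwn 50:06:04:82:bf:d0:54:52    ! storage port pWWN (hypothetical)
switch(config-zone)# exit
switch(config)# zoneset name FABRIC_A vsan 10
switch(config-zoneset)# member ORA_DB1
switch(config-zoneset)# exit
switch(config)# zoneset activate name FABRIC_A vsan 10      ! enforce the active zone set

Only the active zone set is enforced; devices not covered by it cannot communicate through the fabric.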

Availability
Cisco MDS 9000 NX-OS provides resilient software architecture for mission-critical hardware deployments.

Nondisruptive Software Upgrades
Cisco MDS 9000 NX-OS provides nondisruptive software upgrades for director-class products with redundant hardware and fabric switches. Minimally disruptive upgrades are provided for the other Cisco MDS 9000 family fabric switches that do not have redundant supervisor engine hardware.

Stateful Process Failover
Cisco MDS 9000 NX-OS automatically restarts failed software processes and provides stateful supervisor engine failover to help ensure that any hardware or software failures on the control plane do not disrupt traffic flow in the fabric.

FCIP, iSCSI, and Management-Interface High Availability
The Virtual Router Redundancy Protocol (VRRP) increases the availability of Cisco MDS 9000 family management traffic routed over both Ethernet and Fibre Channel networks. VRRP increases IP network availability for iSCSI and FCIP connections by allowing failover of connections from one port to another, either locally or on another Cisco MDS 9000 family switch. Similarly, VRRP dynamically manages redundant paths for external Cisco MDS 9000 family management applications, making control-traffic path failures transparent to applications. The autotrespass feature enables high-availability iSCSI connections to RAID subsystems, independent of host software. This feature facilitates the failover of an iSCSI volume from one IP services port to any other IP services port. Trespass commands can be sent automatically when Cisco MDS 9000 NX-OS detects failures on active paths.

Port Tracking for Resilient SAN Extension
The Cisco MDS 9000 NX-OS port-tracking feature enhances SAN extension resiliency. If a Cisco MDS 9000 family switch detects a WAN or MAN link failure, it takes down the associated disk-array link when port tracking is configured. Thus, the array can redirect a failed I/O operation to another link without waiting for an I/O timeout. Otherwise, disk arrays must wait seconds for an I/O timeout to recover from a network link failure.

ISL Resiliency Using Port Channels
Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. With this feature, up to 16 expansion ports (E-ports) or trunking E-ports (TE-ports) or F-ports connected to NP-ports can be bundled into a port channel. ISL ports can reside on any switching module; if a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration. Cisco MDS 9000 NX-OS uses a protocol to exchange port channel configuration information between adjacent switches to simplify port channel management, including misconfiguration detection and auto creation of port channels among compatible ISLs. In the autoconfigure mode, ISLs with compatible parameters automatically form channel groups; no manual intervention is required, and they do not need a designated master port.
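As a brief illustration (the interface range and channel number are hypothetical), two ISLs can be bundled into one logical link as follows:

switch# configure terminal
switch(config)# interface fc1/1-2          ! two physical ISL ports
switch(config-if)# channel-group 10        ! bundle them into port channel 10
switch(config-if)# no shutdown
switch# show port-channel database         ! verify member ports and status

If either physical link fails, traffic continues to flow over the remaining member without fabric reconfiguration.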

Manageability
Cisco MDS 9000 NX-OS incorporates many management features that facilitate effective management of growing storage environments with existing resources. Cisco fabric services simplify SAN provisioning by automatically distributing configuration information to all switches in a storage network. Distributed device alias services provide fabricwide alias names for host bus adapters (HBAs), storage devices, and switch ports, eliminating the need to reenter names when devices are moved. Fabric Device Management Interface (FDMI) capabilities provided by Cisco MDS 9000 NX-OS simplify management of devices such as Fibre Channel HBAs through in-band communications. With FDMI, management applications can gather HBA and host OS information without the need to install proprietary host agents.

Management interfaces supported by Cisco MDS 9000 NX-OS include the following:

CLI through a serial port or out-of-band (OOB) Ethernet management port, and in-band IP over Fibre Channel (IPFC)

SNMPv1, v2, and v3 over an OOB management port and in-band IPFC

Cisco FICON Control Unit Port (CUP) for in-band management from IBM S/390 and z/900 processors

IPv6 support for iSCSI, FCIP, and management traffic routed in band and out of band

Open APIs
Cisco MDS 9000 NX-OS provides a truly open API for the Cisco MDS 9000 family based on the industry-standard SNMP, Common Information Model (CIM), and SMI-S standards. Commands performed on the switches by Cisco DCNM-SAN use this open API extensively. Also, all major storage and network management software vendors use the Cisco MDS 9000 NX-OS management API. The Cisco DCNM-SAN Storage Management Initiative Specification (SMI-S) server provides an XML interface with an embedded agent that complies with the Web-Based Enterprise Management (WBEM), CIM, and SMI-S standards.

Configuration and Software-Image Management
The CiscoWorks solution is a commonly used suite of tools for a wide range of Cisco devices such as IP switches, routers, and wireless devices. The Cisco MDS 9000 NX-OS open API allows the CiscoWorks Resource Manager Essentials (RME) application to provide centralized Cisco MDS 9000 family configuration management, software-image management, intelligent system log (syslog) management, and inventory management. The open API also helps CiscoWorks Device Fault Manager (DFM) monitor Cisco MDS 9000 family device health, such as supervisor memory and processor utilization. CiscoWorks DFM can also monitor the health of important components such as fans, power supplies, and temperature.

N-Port Virtualization
Cisco MDS 9000 NX-OS supports industry-standard N-port identifier virtualization (NPIV), which allows multiple N-port fabric logins concurrently on a single physical Fibre Channel link. HBAs that support NPIV can help improve SAN security by enabling configuration of zoning and port security independently for each virtual machine (OS partition) on a host. NPIV is beneficial for connectivity between core and edge SAN switches. In addition to being useful for server connections, NPIV is used by edge switches in the NPV mode to log in to multiple end devices that share a link to the core switch. NPV is a complementary feature that reduces the number of Fibre Channel domain IDs in core-edge SANs. Cisco MDS 9000 family fabric switches operating in the NPV mode do not join a fabric; they just pass traffic between core switch links and end devices, which eliminates the domain IDs for these switches. This feature is available only for Cisco MDS 9000 family blade switches and the Cisco MDS 9100 Series Multilayer Fabric Switches.
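A minimal sketch of the NPIV/NPV relationship follows; the prompts are labeled core and edge for clarity, and note that enabling NPV is disruptive (it erases the configuration and reboots the switch), so treat this as illustrative only:

core(config)# feature npiv     ! core switch accepts multiple fabric logins per physical link
edge(config)# feature npv      ! edge switch forwards logins to the core instead of joining the fabric

After the edge switch comes up in NPV mode, it consumes no Fibre Channel domain ID of its own.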

FlexAttach
Cisco MDS 9000 NX-OS supports the FlexAttach feature. One of the main problems faced today in SAN environments is the time and effort required to install and replace servers. The process involves both SAN and server administrators, and the interaction and coordination between them can make the process time consuming. To alleviate the need for interaction between SAN and server administrators, the SAN configuration should not have to be changed when a new server is installed or an existing server is replaced. FlexAttach addresses these problems, reducing the number of configuration changes and the time and coordination required by SAN and server administrators when installing and replacing servers. In addition, Cisco MDS 9000 NX-OS 6.2(1) supports the FlexAttach feature on the Cisco MDS 9148 Multilayer Fabric Switch when the NPV mode is enabled.

Network Boot for iSCSI Hosts
Cisco MDS 9000 NX-OS simplifies iSCSI-attached host management by providing network-boot capability.

Internet Storage Name Service
The Internet Storage Name Service (iSNS) helps existing TCP/IP networks function more effectively as SANs by automating discovery, management, and configuration of iSCSI devices. iSCSI targets presented by Cisco MDS 9000 family IP storage services and Fibre Channel device-state-change notifications are registered by Cisco MDS 9000 NX-OS, either through the highly available, distributed iSNS built in to Cisco MDS 9000 NX-OS or through external iSNS servers.

Cisco Data Center Network Manager Server Federation
Cisco DCNM server federation improves management availability and scalability by load balancing fabric-discovery, performance-monitoring, and event-handling processes. Up to 10 Cisco DCNM instances can form a federation (or cluster) that can manage more than 75,000 end devices. A storage administrator can discover and move fabrics within a federation for the purposes of load balancing, high availability, and disaster recovery. Cisco DCNM provides a single management pane for viewing and managing all fabrics within a single federation; users can connect to any Cisco DCNM instance and view all reports, inventory, statistics, and logs from a single web browser.

Autolearn for Network Security Configuration
The autolearn feature allows the Cisco MDS 9000 family to automatically learn about devices and switches that connect to it. The administrator can use this feature to configure and activate network security features such as port security without having to manually configure the security for each port.
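To tie the autolearn discussion to the CLI, here is a hedged sketch (the VSAN number is hypothetical, and whether auto-learning is enabled by default on activation should be verified for your NX-OS release):

switch# configure terminal
switch(config)# feature port-security
switch(config)# port-security activate vsan 10    ! activate port security; learned device-to-port mappings are locked in

Once activated, a device that moves to a different switch port without administrative action is denied login on that port.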

Proxy iSCSI Initiator
The proxy iSCSI initiator simplifies configuration procedures when multiple iSCSI initiators (hosts) are assigned to the same iSCSI target ports. Proxy mode reduces the number of separate times that back-end tasks such as Fibre Channel zoning and storage-device configuration must be performed.

iSCSI Server Load Balancing
Cisco MDS 9000 NX-OS helps simplify large-scale deployment and management of iSCSI servers. In addition to allowing fabric-wide iSCSI configuration from a single switch, iSCSI server load balancing (iSLB) is available to automatically redirect servers to the next available Gigabit Ethernet port. iSLB greatly simplifies iSCSI configuration and provides automatic, rapid recovery from IP connectivity problems for high availability.

IPv6
Cisco MDS 9000 NX-OS provides IPv6 support for FCIP, iSCSI, and management traffic routed in-band and out-of-band. A complete dual stack has been implemented for IPv4 and IPv6 to remain compatible with the large base of IPv4-compatible hosts, transitional networks with a mixture of both versions, and pure IPv6 data networks. This dual-stack approach allows the Cisco MDS 9000 family switches to easily connect to older IP networks, routers, and Cisco MDS 9000 family switches running previous software revisions.

Traffic Management
In addition to implementing the Fabric Shortest Path First (FSPF) Protocol to calculate the best path between two switches and providing in-order delivery features, Cisco MDS 9000 NX-OS enhances the architecture of the Cisco MDS 9000 family with several advanced traffic-management features that help ensure consistent performance of the SAN under varying load conditions.

Quality of Service
Four distinct quality-of-service (QoS) priority levels are available: three for Fibre Channel data traffic and one for Fibre Channel control traffic. Control traffic is assigned the highest QoS priority automatically, to accelerate convergence of fabric-wide protocols such as FSPF, zone merges, and principal switch selection. Fibre Channel data traffic for latency-sensitive applications can be configured to receive higher priority than throughput-intensive applications using data QoS priority levels. Data traffic can be classified for QoS by the VSAN identifier, zone, N-port WWN, or FC-ID. Zone-based QoS helps simplify configuration and administration by using the familiar zoning concept.

Extended Credits
Full line-rate Fibre Channel ports provide at least 255 buffer credits as standard. Using extended credits, up to 4095 buffer credits from a pool of more than 6000 buffer credits for a module can be allocated to ports as needed to greatly extend the distance of Fibre Channel SANs. Adding credits lengthens distances for Fibre Channel SAN extension.

Virtual Output Queuing
Virtual output queuing (VOQ) buffers Fibre Channel traffic at the ingress port to eliminate head-of-line blocking. The switch is designed so that the presence of a slow N-port on the SAN does not affect the performance of any other port on the SAN.
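Reaching back to the zone-based QoS capability described above, a hypothetical snippet might look like the following; the zone name, VSAN, and pWWNs are invented, the attribute requires enhanced zoning and an appropriate license, and the exact syntax should be verified for your NX-OS release:

switch(config)# zone name BACKUP_APP vsan 20
switch(config-zone)# attribute qos priority high            ! frames matching this zone get high data priority
switch(config-zone)# member pwwn 21:00:00:e0:8b:aa:bb:cc    ! hypothetical initiator
switch(config-zone)# member pwwn 50:06:01:60:dd:ee:ff:00    ! hypothetical target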

FCIP Compression
FCIP compression in Cisco MDS 9000 NX-OS increases the effective WAN bandwidth without the need for costly infrastructure upgrades. By integrating data compression into the Cisco MDS 9000 family, more efficient FCIP-based business-continuity and disaster-recovery solutions can be implemented without the need to add and manage a separate device. Gigabit Ethernet ports for the Cisco MDS 9222i Multiservice Modular Switch (MMS), MDS 9000 18/4-Port Multiservice Module (MSM), and MDS 9000 16-Port Storage Services Node (SSN) achieve up to a 43:1 compression ratio, with typical ratios of 4:1 over a variety of data sources. A future release of Cisco NX-OS software will support similar compression ratios on the 10 Gigabit Ethernet FCIP ports on the new Cisco MDS 9250i, the next-generation intelligent services switch in the Cisco MDS 9200 Series Multiservice Switch portfolio.

FCIP Tape Acceleration
Centralization of tape backup and archive operations provides significant cost savings by allowing expensive robotic tape libraries and high-speed drives to be shared. This centralization poses a challenge for remote backup media servers that need to transfer data across a WAN. High-performance streaming tape drives require a continuous flow of data to avoid write-data underruns, which dramatically reduce write throughput. Without FCIP tape acceleration, the effective WAN throughput for remote tape operations decreases exponentially as the WAN latency increases. FCIP tape acceleration helps achieve nearly full throughput over WAN links for remote tape-backup and restore operations for both open systems and mainframe environments.

iSCSI and SAN Extension Performance Enhancements
iSCSI and FCIP enhancements address out-of-order delivery problems, optimize transfer sizes for the IP network topology, and reduce latency by eliminating TCP connection setup for most data transfers. FCIP performance is further enhanced for SAN extension by compression and write acceleration. For WAN performance optimization, Cisco MDS 9000 NX-OS includes a SAN extension tuner, which directs SCSI I/O commands to a specific virtual target and reports IOPS and I/O latency results, helping determine the number of concurrent I/O operations needed to increase FCIP throughput.

Load Balancing of Port Channel Traffic
Port channels load balance Fibre Channel traffic using a hash of the source FC-ID and destination FC-ID, and optionally the exchange ID. Load balancing using port channels is performed over both Fibre Channel and FCIP links. Cisco MDS 9000 NX-OS also can be configured to load balance across multiple same-cost FSPF routes.

Fibre Channel Port Rate Limiting
The Fibre Channel port rate-limiting feature for the Cisco MDS 9100 Series Multilayer Fabric Switches controls the amount of bandwidth available to individual Fibre Channel ports within groups of four host-optimized ports. Limiting bandwidth on one or more Fibre Channel ports allows the other ports in the group to receive a greater share of the available bandwidth under high-use conditions. Port rate limiting is also beneficial for throttling WAN traffic at the source to help eliminate excessive buffering in Fibre Channel and IP data network devices.
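Tying the FCIP capabilities above together, here is a minimal, hedged configuration sketch; the IP addresses and the profile and interface numbers are hypothetical, and the compression and write-acceleration options are license-dependent, with exact keywords varying by release:

switch(config)# feature fcip
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.0.2.1       ! local Gigabit Ethernet endpoint
switch(config)# interface fcip 10
switch(config-if)# use-profile 10
switch(config-if)# peer-info ipaddr 192.0.2.2      ! remote FCIP endpoint
switch(config-if)# write-accelerator               ! reduce the effect of WAN round-trip latency
switch(config-if)# ip-compression auto             ! enable FCIP compression (illustrative keyword)
switch(config-if)# no shutdown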

Serviceability, Troubleshooting, and Diagnostics
Cisco MDS 9000 NX-OS is among the first storage network OSs to provide a broad set of serviceability features that simplify the process of building, expanding, and maintaining SANs. These features also increase availability by decreasing SAN disruptions for maintenance and reducing recovery time from problems.

Cisco Switched Port Analyzer and Cisco Fabric Analyzer
Typically, debugging errors in a Fibre Channel SAN requires the use of a Fibre Channel analyzer, which causes significant disruption of traffic in the SAN. The Cisco Switched Port Analyzer (SPAN) feature allows an administrator to analyze all traffic between ports (called the SPAN source ports) by nonintrusively directing the SPAN-session traffic to a SPAN destination port that has an external analyzer attached to it. SPAN sources can include Fibre Channel ports and FCIP and iSCSI virtual ports for IP services; any Fibre Channel port in the fabric can be a source. The SPAN destination port does not have to be on the same switch as the SPAN source ports. The embedded Cisco Fabric Analyzer allows the Cisco MDS 9000 family to save Fibre Channel control traffic within the switch for text-based analysis, or to send IP-encapsulated Fibre Channel control traffic to a remote PC for decoding and display using the open source Ethereal network-analyzer application. Fibre Channel control traffic therefore can be captured and analyzed without an expensive Fibre Channel analyzer.

SCSI Flow Statistics
LUN-level SCSI flow statistics can be collected for any combination of initiator and target. The scope of these statistics includes read, write, and control commands and error statistics. This feature is available only on the Cisco MDS 9000 family storage service modules. In addition, Cisco MDS 9000 NX-OS can shut down malfunctioning ports if errors exceed user-defined thresholds. This capability helps isolate problems and reduce risks by preventing errors from spreading to the whole fabric.

Fibre Channel Ping and Fibre Channel Traceroute
Cisco MDS 9000 NX-OS brings to storage networks features such as Fibre Channel ping and Fibre Channel traceroute. With Fibre Channel ping, administrators can check the connectivity of an N-port and determine its round-trip latency. With Fibre Channel traceroute, administrators can check the reachability of a switch by tracing the path followed by frames and determining hop-by-hop latency.
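For example (the pWWN, FC-ID, and VSAN shown are hypothetical), connectivity and path latency can be checked directly from the CLI:

switch# fcping pwwn 21:00:00:e0:8b:08:96:22 vsan 10    ! round-trip latency to an N-port
switch# fctrace fcid 0x690000 vsan 10                  ! hop-by-hop path to a destination FC-ID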

Call Home
Cisco MDS 9000 NX-OS offers a Call Home feature for proactive fault management. Cisco Call Home provides a notification system triggered by software and hardware events. The Call Home feature forwards the alarms and events, packaged with other relevant information in a standard format, to external entities. External entities can include, but are not restricted to, an administrator's e-mail account or pager, a server in-house or at a service provider's facility, and the Cisco Technical Assistance Center (TAC). These notification messages can be used to automatically open technical-assistance tickets and resolve problems before they become critical. Alert grouping capabilities and customizable destination profiles offer the flexibility needed to notify specific individuals or support organizations only when necessary.

System Log
The Cisco MDS 9000 family syslog capabilities greatly enhance debugging and management. Syslog provides a comprehensive supplemental source of information for managing Cisco MDS 9000 family switches. Messages are logged internally, and they can be sent to external syslog servers. Messages can be selectively routed to a console and to log files; messages ranging from only high-severity events to detailed debugging messages can be logged. Syslog severity levels can be set individually for all Cisco MDS 9000 NX-OS functions, facilitating logging and display of messages ranging from brief summaries to very detailed information for debugging.

Other Serviceability Features
Additional serviceability features include the following:

Online diagnostics: Cisco MDS 9000 NX-OS provides advanced online diagnostics capabilities. Periodically, tests are run to verify that supervisor engines, switching modules, optics, and interconnections are functioning properly. These online diagnostics do not adversely affect normal Fibre Channel operations, allowing them to be run in production SAN environments. Starting with Cisco MDS 9000 NX-OS 6.2, the powerful Cisco Generic Online Diagnostics (GOLD) framework replaces the Cisco Online Health Management System (OHMS) diagnostic framework on the new Cisco MDS 9700 Series Multilayer Directors. Cisco GOLD is a suite of diagnostic facilities for verifying that hardware and internal data paths are operating as designed. Boot-time diagnostics, continuous monitoring, standby fabric loopback tests, and on-demand and scheduled tests are part of the Cisco GOLD feature set.

Loopback testing: The Cisco MDS 9000 family uses offline port loopback testing to check port capabilities. During testing, a port is isolated from the external connection, and traffic is looped internally from the transmit path back to the receive path.

Network Time Protocol (NTP) support: NTP synchronizes system clocks in the fabric, providing a precise time base for all switches. An NTP server must be accessible from the fabric through the OOB Ethernet port. Within the fabric, NTP messages are transported using IPFC.

IPFC: The Cisco MDS 9000 family provides the capability to carry IP packets over a Fibre Channel network. With this feature, an external management station attached through an OOB management port to a Cisco MDS 9000 family switch in the fabric can manage all other switches in the fabric using the in-band IPFC Protocol.

Cisco Embedded Event Manager (EEM): Cisco EEM is a powerful device and system management technology integrated into Cisco NX-OS. Cisco EEM helps customers harness the network intelligence intrinsic to the Cisco software and enables them to customize behavior based on network events as they happen.

Enhanced event logging and reporting with SNMP traps and syslog: Cisco MDS 9000 family events filtering and remote monitoring (RMON) provide complete and exceptionally flexible control over SNMP traps. Traps can be generated based on a threshold value, switch counters, or time stamps.
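A small sketch of the syslog and NTP settings described above (the server addresses and severity levels are hypothetical):

switch(config)# logging server 192.0.2.50 6    ! send messages up to severity 6 (informational) to a syslog host
switch(config)# logging level zone 4           ! per-feature severity: zone feature logs at warning and above
switch(config)# ntp server 192.0.2.10          ! NTP server reachable through the OOB management port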

Cisco MDS 9000 Family Software License Packages
Most Cisco MDS 9000 family software features are included in the standard package: the base configuration of the switch. However, some features are logically grouped into add-on packages that must be licensed separately. Cisco offers a set of advanced software features logically grouped into software license packages, such as the Cisco MDS 9000 Enterprise Package, DCNM Packages, SAN Extension over IP Package, Mainframe Package, IOA Package, DMM Package, and XRC Acceleration Package. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series Multilayer Fabric Switches. Table 7-2 lists the Cisco MDS 9700 family licenses, and Table 7-3 lists the Cisco MDS 9500, 9200, and 9100 family licenses.

Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. Licenses can also be installed on the switch in the factory, if desired. Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing.

All licensed features may be evaluated for up to 120 days before a license is required. If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled. The grace period allows plenty of time to correct the licensing issue.
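As an illustration (the license filename and server address are hypothetical), installing an electronic license key typically follows this pattern:

switch# copy scp://admin@192.0.2.20/MDS_ENT.lic bootflash:    ! copy the license file to the switch
switch# install license bootflash:MDS_ENT.lic                 ! install the key; no reboot required
switch# show license usage                                    ! confirm which packages are in use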

Table 7-2 Cisco MDS 9700 Family Licenses .


Table 7-3 Cisco MDS 9500, 9200, and 9100 Family Licenses

Cisco MDS Multilayer Directors
Cisco MDS 9500 and 9700 Series Multilayer Directors are storage-area network (SAN) switches designed for deployment in large, scalable enterprise clouds. They enable seamless deployment of unified fabrics with high-performance Fibre Channel, FICON, and FCoE connectivity to achieve low total cost of ownership (TCO). They share the same operating system and management interface with Cisco Nexus data center switches. Figure 7-11 portrays the Cisco MDS 9000 Director switches since year 2002, and Table 7-4 explains the models.

Figure 7-11 Cisco MDS 9000 Director Switches

Table 7-4 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Cisco MDS 9700
Cisco MDS 9700 is available in a 10-slot and a 6-slot configuration. It supports 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel and FCoE. MDS 9710 supports up to 384 2/4/8-Gbps or 4/8/16-Gbps autosensing line-rate Fibre Channel ports in a single chassis and up to 1152 Fibre Channel ports per rack, with up to 24 terabits per second (Tbps) of Fibre Channel system bandwidth. The Cisco MDS 9710 also supports 10 Gigabit Ethernet clocked optics carrying Fibre Channel traffic. The multilayer architecture of the Cisco MDS 9700 series enables a consistent feature set over a protocol-independent switch fabric.

Cisco MDS 9710 architecture, based on central arbitration and crossbar fabric, provides 16-Gbps line-rate, non-blocking, predictable performance across all traffic conditions for every port in the chassis. The combination of the 16-Gbps Fibre Channel switching module and the Fabric-1 crossbar switching modules enables up to 1.5 Tbps of front-panel Fibre Channel throughput between modules in each direction for each of the eight Cisco MDS 9710 payload slots. This per-slot bandwidth is twice the bandwidth needed to support a 48-port 16-Gbps Fibre Channel module at full line rate. MDS 9710 and MDS 9706 have six crossbar-switching modules located at the rear of the chassis. Users can add an additional fabric card to enable N+1 fabric redundancy. MDS 9710 and MDS 9706 switches transparently integrate Fibre Channel, FCoE, and FICON. They support multihop FCoE, extending connectivity from FCoE and Fibre Channel fabrics to FCoE and Fibre Channel storage devices. Figure 7-12 portrays the Cisco MDS 9700 Director switches and their modular components.

Figure 7-12 Cisco MDS 9710 Director Switch

The Cisco MDS 9700 series switches enable redundancy on all major components, including the fabric card. The Cisco MDS 9710 Director has a 10-slot chassis, and it supports the following: Three hot-swappable rear-panel fan trays with redundant individual fans are at the back of the chassis. Two Supervisor-1 modules reside in slots 5 and 6, providing 1+1 redundant supervisors. Eight power supply bays are located at the front of the chassis, providing grid redundancy on the power supply; the power supplies are redundant by default, and they can be configured to be combined if desired. Fabric modules are located behind the fan trays and are numbered 1-6 from left to right when facing the rear of the chassis. When the system is running, remove only one fan tray at a time to access the appropriate fabric modules: Fabric Modules 1-2 are behind fan tray 1, Fabric Modules 3-4 are behind fan tray 2, and Fabric Modules 5-6 are behind fan tray 3.

Fabric modules may be installed in any slot, but the best practice is one behind each fan tray. MDS 9710 and MDS 9706 have front-to-back airflow: air enters through perforations in line cards, supervisor modules, and power supplies, and air exits through perforations in fan trays. Blank panels must be installed in any empty line card or PSU slots to ensure proper airflow. Figure 7-13 portrays the Cisco MDS 9710 Director switch details.

Figure 7-13 Cisco MDS 9710 Director Switch

The Cisco MDS 9706 Director switch has a 6-slot chassis, and it supports the following: Three hot-swappable rear-panel fan trays with redundant individual fans are at the back of the chassis. Two Supervisor-1 modules reside in slots 3 and 4. Four power supply bays are located at the front of the chassis; the power supplies are redundant by default, and they can be configured to be combined if desired. Figure 7-14 portrays the Cisco MDS 9706 Director switch chassis details.

Figure 7-14 Cisco MDS 9706 Director Switch

Cisco MDS 9500
Cisco MDS 9500 is available in 6- and 13-slot configurations. It supports 1-, 2-, 4-, 8-, and 10-Gbps Fibre Channel, 10-Gbps FCoE, 1-Gbps Fibre Channel over IP (FCIP), and 1-Gbps Small Computer System Interface over IP (iSCSI) port speeds. MDS 9500 supports up to 528 1/2/4/8-Gbps autosensing Fibre Channel ports in a single chassis and up to 1584 Fibre Channel ports per rack. The Cisco MDS 9513 supports 1/2/4/8-Gbps and 10-Gbps FICON, including advanced FICON services such as cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N_Port ID virtualization (NPIV) for mainframe Linux partitions. Inclusion of Cisco Control Unit Port (CUP) support enables in-band management of Cisco MDS 9000 family switches from mainframe management applications. Figure 7-15 portrays the Cisco MDS 9500 Director switch details.

Figure 7-15 Cisco MDS 9500 Director Switch

Note
The Cisco MDS 9509 has reached End of Life, End of Sale status. It continues to be supported at the time of writing this chapter. Please refer to the Cisco website, www.cisco.com, for more information.

All Cisco MDS 9500 Series Director switches are fully redundant with no single point of failure. The system has dual supervisors, two clock modules, crossbar switch fabrics, redundant power supplies, and employs majority signal voting wherever 1:1 redundancy would not be appropriate.

The Cisco MDS 9513 Director is a 13-slot Fibre Channel switch. It uses a midplane; modules exist on both sides of the plane. The front panel consists of 13 horizontal slots, where slots 1 to 6 and slots 9 to 13 are reserved for switching and services modules only, and slots 7 and 8 are for Supervisor-2A modules only. The rear of the chassis supports two vertical, redundant, external crossbar modules, two vertical, redundant power supplies, two clock modules, and a variable-speed fan tray with two individual fans located above the crossbar modules. A side-mounted fan tray provides active cooling.

The Cisco MDS 9513 Director supports the following: Two Supervisor-2A modules reside in slots 7 and 8. Two crossbar modules are located at the rear of the chassis. Two power supplies are located at the rear of the chassis. Two hot-swappable clock modules are located at the rear of the chassis. One hot-swappable fan module for the crossbar modules is located at the rear of the chassis. A variable-speed fan tray with 15 individual fans is located on the front-left panel of the chassis.

Although there is a single fan tray, there are multiple redundant fans with dual fan controllers. All fans are variable speed, and failure of one or more individual fans causes all other fans to speed up to compensate. In the unlikely event of multiple simultaneous fan failures, the fan tray still provides sufficient cooling to keep the switch operational.

The Cisco MDS 9506 Director has a 6-slot chassis, and it supports the following:

Up to two Supervisor-2A modules that provide a switching fabric, with a console port, COM1 port, and a MGMT 10/100 Ethernet port on each module. The first four slots are for optional modules that can include up to four switching modules or three IPS modules; slots 5 and 6 are reserved for the supervisor modules. Two power supplies are located in the back of the chassis; the power supplies are redundant by default and can be configured to be combined if desired. Two power entry modules (PEM) are in the front of the chassis for easy access to power supply connectors and switches. It has one hot-swappable fan module with redundant fans.

Table 7-5 compares the major details of Cisco MDS 9500 and 9700 Series Director switch models.
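On any of these directors, the redundant components described above can be inspected from the CLI; a brief sketch (output omitted; these are standard NX-OS show commands, though the exact options supported vary by platform):

switch# show module                ! supervisor and switching module status
switch# show environment power     ! power supply status and redundancy mode
switch# show environment fan       ! fan tray status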



Table 7-5 Details of Cisco MDS 9500 and 9700 Series Director Switch Models

Note
The Cisco Fabric Module 2, Gen-3 1/2/4/8-Gbps modules, and 4-port 10-Gbps FC Module have reached End of Life, End of Sale status. These modules are still supported at the time of writing this chapter, so we will not be covering these modules in this book. Refer to the Cisco website, www.cisco.com, for more information.

Cisco MDS 9000 Series Multilayer Components
Several modules are available for the modular switches in the Cisco MDS 9000 Series product range. This topic describes the capacities of the Cisco MDS 9000 Series switching and supervisor modules.

Cisco MDS 9700-MDS 9500 Supervisor and Crossbar Switching Modules
The Cisco MDS 9500 Series Supervisor-2A Module incorporates an integrated crossbar switching fabric. Designed to integrate multiprotocol switching and routing, intelligent SAN services, and storage applications onto highly scalable SAN switching platforms, the Cisco MDS 9500 Series Supervisor-2A Module enables intelligent, resilient, scalable, and secure high-performance multilayer SAN switching solutions, supporting up to 528 Fibre Channel ports in a single chassis, 1584 Fibre Channel ports in a single rack, and 1.4 terabits per second (Tbps) of internal system bandwidth when combined with a Cisco MDS 9506 or MDS 9509 Multilayer Director chassis, or up to 2.2 Tbps of internal system bandwidth when installed in the Cisco MDS 9513 Multilayer Director chassis. The modules occupy two slots in the Cisco MDS 9500 Series chassis, with the remaining slots available for switching modules.

The Cisco MDS 9500 Series directors include two supervisor modules—a primary module and a redundant module. The active/standby configuration of the supervisor modules allows support of nondisruptive software upgrades. The supervisor module also supports stateful process restarts, allowing recovery from most process errors without a reset of the supervisor module and with no disruption to traffic.

The Cisco MDS 9000 arbitrated local switching feature provides line-rate switching across all ports on the same module without degrading performance or increasing latency for traffic to and from other modules in the chassis. This capability is achieved through the Cisco MDS 9500 Series Multilayer Directors crossbar architecture, with a central arbiter arbitrating fairly between local traffic and traffic to and from other modules.

Cisco MDS 9000 family supervisor modules provide two essential switch functions: They house the control-plane processors that are used to manage the switch and keep the switch operational, and they house the crossbar switch fabrics and central arbiters used to provide data-plane connectivity between all the line cards in the chassis. Besides providing crossbar switch fabric capacity, the supervisor modules do not handle any frame forwarding or frame processing; all frame forwarding and processing is handled within the distributed forwarding ASICs on the line cards themselves. Although each Supervisor-2A module contains a crossbar switch fabric, this is used only on Cisco MDS 9509 and MDS 9506 chassis. On the Cisco MDS 9513 chassis, the crossbar switch fabrics are on fabric cards (refer to Figure 7-15) inserted into the rear of the chassis, and the crossbar switch fabrics housed on the supervisors are disabled. On the Cisco MDS 9513 Director, Cisco MDS NX-OS Release 5.2 supports the Cisco MDS 9513 Fabric 3 module, DS-13SLT-FAB3, which is a generation-4 type module.

Management connectivity is provided through an out-of-band Ethernet port (interface mgmt0), an RJ-45 serial console port, and optionally also a DB9 auxiliary console port suitable for modem connectivity. Figure 7-16 portrays the Cisco MDS 9500 Series Supervisor-2A module and Cisco MDS 9700 Supervisor-1 module. Table 7-6 lists the details of switching capabilities per fabric for Cisco MDS 9710 and MDS 9706 Director switch models.

Figure 7-16 Cisco MDS 9000 Supervisor Models

Table 7-6 Details of Switching Capabilities per Fabric for Cisco MDS 9710 and MDS 9706

Each fabric module provides 220 Gbps of data bandwidth per slot, equivalent to 256 Gbps of Fibre Channel bandwidth. Three fabric modules provide 660 Gbps of data bandwidth and 768 Gbps of Fibre Channel bandwidth per slot. Line rate on the 48-port 16-Gbps Fibre Channel module therefore needs only three fabric modules, which is why three is the minimum number of fabric modules supported on MDS 9700 Series switches. Table 7-7 lists the details of the MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A. Figure 7-17 portrays the crossbar switching modules for the MDS 9700 and MDS 9513 platforms.

Figure 7-17 Cisco MDS 9700 Crossbar Switching Fabric-1 and MDS 9513 Crossbar Switching Fabric-3

Table 7-7 Details of MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A

Both MDS 9700 Supervisor-1 and MDS 9500 Supervisor-2A modules have two USB 2.0 ports provided on the front panel for simplified configuration-file uploading and downloading using common USB memory stick products.

Cisco MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 48-Port 8-Gbps Advanced Fibre Channel Switching Module provides high port density and is ideal for connection of high-performance virtualized servers. With Arbitrated Local Switching, the 48-port 8-Gbps Advanced module delivers full-duplex aggregate performance of 768 Gbps across locally switched ports on the module. For traffic switched across the backplane, the 48-port 8-Gbps Advanced Fibre Channel switching module delivers full-duplex aggregate backplane switching performance of 512 Gbps, supporting 1.5:1 oversubscription at the 8-Gbps Fibre Channel (FC) rate across all ports. Figure 7-18 portrays the Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-18 Cisco MDS 9000 48-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module
The MDS 9000 32-Port 8-Gbps Advanced Fibre Channel Switching Module delivers line-rate performance across all ports and is ideal for high-end storage subsystems and for Inter-Switch Link (ISL) connectivity. The 32-port 8-Gbps Advanced switching module delivers full-duplex aggregate performance of 512 Gbps, making this module well suited for the high-performance 8-Gbps storage subsystems. The modules can support 8-Gbps or 10-Gbps and/or a combined (8- and 10-Gbps) Inter-Switch Link (ISL), and the ports are grouped in different orders depending on the 8-Gbps and 10-Gbps modes. The 10-Gbps ports can be used for long-haul DWDM connections. Figure 7-19 portrays the Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module.

Figure 7-19 Cisco MDS 9000 32-port 8-Gbps Advanced Fibre Channel Switching Module

Cisco MDS 9000 18/4-Port Multiservice Module (MSM)
The Cisco MDS 9000 18/4-Port Multiservice Module (Figure 7-20) offers 18 1-, 2-, and 4-Gbps Fibre Channel ports and four 1-Gigabit Ethernet IP storage services ports in a single form factor. It provides multiprotocol capabilities, integrating Fibre Channel, Fibre Channel over IP (FCIP), Small Computer System Interface over IP (iSCSI), IBM Fibre Connection (FICON), FICON Control Unit Port (CUP) management, and switch cascading, along with Cisco MDS 9000 I/O Accelerator (IOA), Cisco MDS 9000 Extended Remote Copy (XRC) Acceleration, Cisco Storage Media Encryption (SME), Cisco Storage Services Enabler (SSE), and distributed intelligent fabric services. The Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is optimized for deployment of high-performance SAN extension solutions and cost-effective IP storage and mainframe connectivity in enterprise storage networks.

Figure 7-20 Cisco MDS 9000 18/4-port Multiservice Module

Cisco MDS 9000 16-Port Gigabit Ethernet Storage Services Node (SSN) Module
The Cisco MDS 9000 16-Port Storage Services Node provides a high-performance, flexible, unified platform for deploying enterprise-class disaster recovery, business continuance, and intelligent fabric applications. The 16-Port Storage Services Node offers FCIP for remote SAN extension; Cisco I/O Accelerator (IOA) to optimize the utilization of metropolitan- and wide-area network (MAN/WAN) resources for backup and replication using hardware-based compression, FC write acceleration, and FC tape read-and-write acceleration; and Cisco Storage Media Encryption (SME) to secure mission-critical data stored on heterogeneous disk, tape, and VTL drives. The Cisco MDS 9000 16-Port Storage Services Node hosts four independent service engines, which can each be individually and incrementally enabled to scale as business requirements change, or be configured to consolidate up to four fabric application instances to dramatically decrease hardware footprint and free valuable slots in the MDS 9500 Director chassis. Figure 7-21 portrays the Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node (SSN) Module. Table 7-8 lists the details of the Multiservice Module (MSM) and Storage Services Node (SSN).

Figure 7-21 Cisco MDS 9000 16-port Gigabit Ethernet Storage Services Node Module

Table 7-8 Details of Multiservice Module (MSM) and Storage Services Node (SSN)

Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module
Cisco MDS 9000 Series supports 10-Gbps Fibre Channel switching with the Cisco MDS 9000 4-Port 10-Gbps Fibre Channel switching module. This module supports hot-swappable X2 optical SC interfaces. Modules can be configured with either short-wavelength or long-wavelength X2 transceivers for connectivity up to 300 meters and 10 kilometers, respectively. Figure 7-22 portrays the MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module.

Figure 7-22 Cisco MDS 9000 4-port 10-Gbps Fibre Channel Switching Module

Cisco MDS 9000 8-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9500 Series of Multilayer Directors supports the 10-Gbps 8-port FCoE switching module with full line-rate FCoE connectivity to extend the benefits of FCoE beyond the access layer into the core of the data center network. The Cisco MDS 9000 10-Gbps FCoE module (see Figure 7-23) provides the industry's first multihop-capable FCoE module. Not only does the Cisco MDS 9000 10-Gbps 8-Port FCoE Module take advantage of migration of FCoE into the core layer, but it also extends enterprise-class Fibre Channel services to Cisco Nexus 7000 and 5000 Series Switches and FCoE initiators.

Figure 7-23 Cisco MDS 9000 8-port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic management attributes of Fibre Channel while preserving investments in Fibre Channel tools, training, and SANs. FCoE enables the preservation of Fibre Channel as the storage protocol in the data center while giving customers a viable solution for I/O consolidation. This capability:

Allows FCoE initiators to access remote Fibre Channel resources connected through Fibre Channel over IP (FCIP) for enterprise backup solutions

Supports virtual SANs (VSANs) for resource separation and consolidation

Supports inter-VSAN routing (IVR) to use resources that may be segmented

Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
The Cisco MDS 9700 Series Fibre Channel Switching Module is hot swappable, is compatible with 2/4/8-Gbps, 4/8/16-Gbps, and 10-Gbps FC interfaces, and supports hot-swappable Enhanced Small Form-Factor Pluggable (SFP+) transceivers. Individual ports can be configured with Cisco 16-Gbps, 8-Gbps, or 10-Gbps SFP+ transceivers. Each port supports 500 buffer credits without requiring additional licensing. With the Cisco Enterprise Package, up to 4095 buffer credits can be allocated to an individual port, enabling full link bandwidth over long distances with no degradation in link utilization. The 16-Gbps Fibre Channel switching module continues to provide previously available features such as predictable performance and high availability, integrated VSANs and IVR, resilient high-performance ISLs, advanced traffic management capabilities, comprehensive security frameworks, fault detection and isolation of error packets, and sophisticated diagnostics. The MDS 9700 module also supports 10 Gigabit Ethernet clocked optics carrying 10-Gbps Fibre Channel traffic. Figure 7-24 portrays the Cisco MDS 9700 48-port 16-Gbps Fibre Channel Switching Module.

Figure 7-24 Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module

Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module
The Cisco MDS 9700 48-Port 10-Gbps FCoE Module provides the scalability of 384 10-Gbps full line-rate autosensing ports in a single chassis, thus providing a way to deploy a dedicated FCoE SAN without requiring any convergence. This module also supports connectivity to FCoE initiators and targets that only send FCoE traffic. Figure 7-25 portrays the Cisco MDS 9700 48-Port 10-Gbps FCoE Module.

Figure 7-25 Cisco MDS 9700 48-Port 10-Gbps Fibre Channel over Ethernet (FCoE) Module

Cisco MDS 9000 Family Pluggable Transceivers
The Cisco Small Form-Factor Pluggable (SFP), Enhanced Small Form-Factor Pluggable (SFP+), and X2 devices are hot-swappable transceivers that plug in to Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. Cisco MDS family storage modules offer a wide range of options for various SAN architectures. Table 7-9 lists all the hardware modules available at the time of writing this chapter, along with the chassis compatibility associated with them. The most up-to-date information on pluggable transceivers can be found at the following link: www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-seriesmultilayer-switches/product_data_sheet09186a00801bc698.html.


Cisco MDS Multiservice and Multilayer Fabric Switches
Cisco MDS 9200 and MDS 9100 Series switches are fabric switches designed for deployment in small to medium-sized enterprise SANs. The Cisco MDS 9200 Series switches deliver state-of-the-art multiprotocol and distributed multiservice convergence. With a compact form factor and advanced capabilities normally available only on director-class switches, the Cisco MDS 9200 Series switches provide an ideal solution for departmental and remote branch-office SANs, offering high-performance SAN extension and disaster-recovery solutions, intelligent fabric services, and cost-effective multiprotocol connectivity for both open systems and mainframe environments. The Cisco MDS 9100 Series switches are cost-effective, easy-to-install, and highly configurable Fibre Channel switches that are ideal for small to medium-sized businesses. With high port densities, the Cisco MDS 9100 Series switches easily scale from an entry-level departmental switch to a top-of-the-rack switch, providing edge-switch connectivity to enterprise core SANs. Cisco MDS NX-OS provides services optimized for virtual machines and blade servers that let IT managers dynamically respond to changing business needs in virtual environments. The Cisco MDS 9200 and 9100 Series switches are FIPS compliant, as mandated by the U.S. federal government, and support IPv6.

Cisco MDS 9148
The MDS 9148 is a 1 RU Fibre Channel switch with 48 line-rate 8-Gbps ports, also supporting 1/2/4-Gbps connectivity. The Cisco MDS 9148 can be used as the foundation for small, standalone SANs, as a top-of-rack switch, or as an edge switch in larger, core-edge SAN infrastructures. The Cisco MDS 9148 Multilayer Fabric Switch is designed to support quick configuration with zero-touch plug-and-play features and task wizards that allow it to be deployed quickly and easily in networks of any size. Figure 7-26 portrays the Cisco MDS 9148 switch. The following additional capabilities are included:

Ports per chassis: The On-Demand Port Activation license allows the addition of eight 8-Gbps ports. Customers have the option of purchasing preconfigured models of 16, 32, or 48 enabled ports and upgrading the 16- and 32-port models onsite, all the way to 48 ports, by adding these licenses.

Each port group consisting of four ports has a pool of 128 buffer credits, with a default of 32 buffer credits per port. When extended distances are required, up to 125 buffer credits can be allocated to a single port within the port group. This extensibility is available without additional licensing.

Port channels enable users to aggregate up to 16 physical Inter-Switch Links (ISLs) into a single logical bundle, providing load-balancing bandwidth use across all links. The bundle can consist of any port from the switch, helping ensure that the bundle remains active even in the event of a port group failure.

Figure 7-26 Cisco MDS 9148

Cisco MDS 9148S
The Cisco MDS 9148S 16G Multilayer Fabric Switch (see Figure 7-27) is a compact one-rack-unit (1RU) switch that scales from 12 to 48 line-rate 16-Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports.

Figure 7-27 Cisco MDS 9148S

The Cisco MDS 9148S offers In-Service Software Upgrades (ISSU). This means that Cisco NX-OS Software can be upgraded while the Fibre Channel ports carry traffic. The Cisco MDS 9148S also supports Power On Auto Provisioning (POAP) to automate software image upgrades and configuration file installation on newly deployed switches. The Cisco MDS 9148S includes dual redundant hot-swappable power supplies and fan trays, port channels for Inter-Switch Link (ISL) resiliency, and F-port channeling for resiliency on uplinks from a Cisco MDS 9148S operating in NPV mode. Table 7-10 lists the chassis comparison between the MDS 9148 and MDS 9148S.

Table 7-10 Chassis Comparison Between MDS 9148 and MDS 9148S
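As an illustration of the ISSU capability just described (the image filenames below are placeholders; use the kickstart and system images that match your platform and target release):

switch# install all kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin

The install all command performs compatibility checks first and, on platforms that support ISSU, upgrades the software while the Fibre Channel ports continue to carry traffic.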

Cisco MDS 9222i
The Cisco MDS 9222i provides multilayer and multiprotocol functions in a compact three-rack-unit (3RU) form factor. It provides 18 fixed 4-Gbps Fibre Channel ports and 4 fixed Gigabit Ethernet ports for Fibre Channel over IP (FCIP) and Small Computer System Interface over IP (iSCSI) storage services. The Cisco MDS 9222i supports the Cisco MDS 9000 4/44-Port 8-Gbps Host-Optimized Fibre Channel Switching Module in the expansion slot, with four of the Fibre Channel ports capable of running at 8 Gbps. Thus, the Cisco MDS 9222i scales up to 66 ports for both open systems and IBM Fiber Connection (FICON) mainframe environments. The Cisco MDS 9222i also supports a Cisco MDS 9000 4-Port 10-Gbps Fibre Channel Switching Module in the expansion slot.

The Cisco MDS 9222i is designed to address the needs of medium-sized and large enterprises with a wide range of SAN capabilities. It can be used as a cost-effective SAN extension over IP switch for midrange and midsize customers in support of IT simplification, replication, and disaster recovery and business continuity solutions. It can also provide remote site device aggregation and SAN extension connectivity to large data center directors. Figure 7-28 portrays the Cisco MDS 9222i switch.

Figure 7-28 Cisco MDS 9222i

The following additional capabilities are included:

High-performance ISLs: Cisco MDS 9222i supports up to 16 Fibre Channel interswitch links (ISLs) in a single port channel. Links can span any port on any module in a chassis for added scalability and resilience. Up to 4095 buffer-to-buffer credits can be assigned to a single Fibre Channel port to extend storage networks over long distances.

Intelligent application services engines: The Cisco MDS 9222i includes as standard a single application services engine in the fixed slot that enables the included SAN Extension over IP software solution package to run on the four fixed 1 Gigabit Ethernet storage services ports. The Cisco MDS 9222i can add a second services engine when a Cisco MDS 9000 18/4-Port Multiservice Module (MSM) is installed in the expansion slot. This additional engine can run a second advanced software application solution package, such as Cisco MDS 9000 I/O Accelerator (IOA), to further improve the SAN Extension over IP performance and throughput running on the standard services engine.

If more intelligent services applications need to be deployed, the Cisco MDS 9000 16-Port Storage Services Node (SSN) can be installed in the expansion slot to provide four additional services engines for simultaneously running application software solution packages such as Cisco Storage Media Encryption (SME). Overall, the Cisco MDS 9222i switch platform scales from a single application service solution in the standard offering to a maximum of five heterogeneous or homogeneous application services, depending on the choice of optional service module that is installed in the expansion slot.

In-Service Software Upgrade (ISSU) for Fibre Channel interfaces: Cisco MDS 9222i provides high serviceability by allowing Cisco MDS 9000 NX-OS Software to be upgraded while the Fibre Channel ports are carrying traffic.

Advanced FICON services: The Cisco MDS 9222i supports System z FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, and N-port ID virtualization (NPIV) for mainframe Linux partitions. IBM Control Unit Port (CUP) support enables in-band management of Cisco MDS 9200 Series switches from the mainframe management console. FICON tape acceleration reduces latency effects for FICON channel extension over FCIP for FICON tape read and write operations to mainframe physical or virtual tape. This feature is sometimes referred to as tape pipelining. The Cisco MDS 9000 XRC Acceleration feature enables acceleration of dynamic updates for IBM z/OS Global Mirror, formerly known as XRC. IPv6 support is provided for FCIP, iSCSI, and management traffic routed in-band and out-of-band.

Cisco MDS 9250i
The Cisco MDS 9250i offers up to forty 16-Gbps Fibre Channel ports, two 1/10-Gigabit Ethernet IP storage services ports, and eight 10-Gigabit Ethernet FCoE ports in a fixed two-rack-unit (2RU) form factor. The Cisco MDS 9250i connects to existing native Fibre Channel networks. Also, using the eight 10-Gigabit Ethernet FCoE ports, the Cisco MDS 9250i platform attaches to directly connected FCoE and Fibre Channel storage devices and supports multitier unified network fabric connectivity directly over FCoE. The Cisco SAN Extension over IP application package license is enabled as standard on the two fixed 1/10 Gigabit Ethernet IP storage services ports, enabling features such as Fibre Channel over IP (FCIP) and compression on the switch without the need for additional licenses. Figure 7-29 portrays the Cisco MDS 9250i two-rack-unit switch. Table 7-11 lists the technical details of the Cisco MDS 9200 and 9100 Series models.

Figure 7-29 Cisco MDS 9250i


Table 7-11 Summary of the Cisco MDS 9200 and 9100 Series Models

The following additional capabilities are included:

SAN consolidation with integrated multiprotocol support: The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through the On-Demand Port Activation license) of 16-Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

High-performance ISLs: Cisco MDS 9250i supports up to 16 Fibre Channel ISLs in a single port channel. Links can span any port on any module in a chassis. Up to 256 buffer-to-buffer credits can be assigned to a single Fibre Channel port. Cisco MDS 9250i supports In Service Software Upgrade (ISSU) for Fibre Channel interfaces.

Intelligent application services engine: The Cisco MDS 9250i includes as standard a single application services engine that enables the included Cisco SAN Extension over IP software solution package to run on the two fixed 1/10 Gigabit Ethernet storage services ports.

Cisco Data Mobility Manager (DMM) as a distributed fabric service: Cisco DMM is a fabric-based data migration solution that transfers block data nondisruptively across heterogeneous storage volumes and across distances, whether the host is online or offline.

Advanced FICON services: The Cisco MDS 9250i supports FICON environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe and open systems environments, FICON tape acceleration, and IBM Extended Remote Copy (XRC) features like the MDS 9222i. It also supports IBM CUP.

FIPS compliance: The Cisco MDS 9250i is FIPS 140-2 compliant, and it is IPv6 capable.

Platform for intelligent fabric applications: The Cisco MDS 9250i provides an open platform that delivers the intelligence and advanced features required for multilayer intelligent SANs.

Cisco MDS Fibre Channel Blade Switches

Cisco Fibre Channel Blade switches are built using the same hardware and software platform as Cisco MDS 9000 family Fibre Channel switches and hence are consistent in features and performance. Cisco offers Fibre Channel blade switch solutions for HP c-Class BladeSystem and IBM BladeCenter, in the form factor to fit IBM BladeCenter (BladeCenter, BladeCenter T, and BladeCenter H) and HP c-Class BladeSystem (see Figure 7-30). Architecturally, blade switches share the same design as the Cisco MDS 9000 family fabric switches. The Cisco MDS Fibre Channel Blade Switch runs Cisco MDS 9000 NX-OS Software; it includes advanced storage networking features and functions and is compatible with Cisco MDS 9500 Series Multilayer Directors and Cisco MDS 9200 Series Multilayer Fabric Switches. The Cisco MDS Fibre Channel Blade Switch functions are equivalent to the Cisco MDS 9124 switch and is capable of speeds of 4, 2, and 1 Gbps. With its flexibility to expand ports in increments, the Cisco MDS Fibre Channel Blade Switch offers the densities required from entry level to advanced, providing transparent, end-to-end service delivery in core-edge deployments. Figure 7-30 shows the Cisco MDS Fibre Channel Blade Switch for HP and IBM.

Figure 7-30 Cisco MDS Fibre Channel Blade Switches

The Cisco MDS Fibre Channel Blade Switch for IBM BladeCenter has two models: the 10-port switch and the 20-port switch. In the 10-port model, 7 ports are for internal connections to the server and 3 ports are external. In the 20-port model, 14 ports are for internal connections to the server and 6 ports are external. There is a 10-port upgrade license to upgrade from 10 ports to 20 ports. Supported IBM BladeCenter chassis are IBM BladeCenter E, BladeCenter T, BladeCenter HT, BladeCenter H, and IBM BladeCenter S (10-port model only), and MSIM (20-port model only). The management of the module is integrated with the IBM Management Suite.

The Cisco MDS Fibre Channel Blade Switch for HP c-Class BladeSystem has two models: the base 12-port model and the 24-port model. In the 12-port model, 8 ports are for internal connections to the server and 4 ports are external or storage area network (SAN) facing. In the 24-port model, 16 ports are internal for server connections and 8 ports are external or SAN facing. There is a 12-port license upgrade to upgrade the 12-port model from 12 ports to 24 ports. The management of the module is integrated with the HP OpenView Management Suite. There is also the MDS 8-Gbps Fibre Channel Blade Switch for HP c-Class BladeSystem. The switch modules have the following features:

Six external autosensing Fibre Channel ports that operate at 4, 2, or 1 Gbps. External ports can be configured as F_ports (fabric ports), FL_ports (fabric loop ports), or E_ports (expansion ports).

Internal autosensing Fibre Channel ports that operate at 4 or 2 Gbps. Internal ports are configured as F_ports at 2 Gbps or 4 Gbps.

Two internal full-duplex 100 Mbps Ethernet interfaces.

Power-on diagnostics and status reporting.

Figure 7-31 shows the Cisco MDS 8-Gbps Fibre Channel Blade Switch for HP.

Figure 7-31 Cisco MDS 8-Gbps Fibre Channel Blade Switch

Cisco Prime Data Center Network Manager
Cisco Prime Data Center Network Manager (DCNM) is a management system for the Cisco Data Center Unified Fabric, which is composed of Nexus and MDS 9000 switches. It enables you to provision, monitor, and troubleshoot the data center network infrastructure. It provides visibility and control of the unified data center so that you can optimize for the quality of service (QoS) required to meet service-level agreements. Cisco Prime DCNM 7.0 (at the time of writing this chapter) includes the infrastructure necessary to operate and automate a Cisco DFA network fabric. DCNM software provides ready-to-use, standards-based, extensible management capabilities for Cisco Dynamic Fabric Automation (DFA). Cisco Prime DCNM 7.0 policy-based deployment helps reduce labor and operating expenses (OpEx). Cisco Prime DCNM 7.0 integrates with external hypervisor solutions and third-party applications using a Representational State Transfer (REST) API. Figure 7-32 illustrates Cisco Prime DCNM as central point of integration for NX-OS platforms with visibility across networking, computing, and storage resources.

Figure 7-32 Cisco Prime DCNM as Central Point of Integration

Cisco Prime DCNM provisions and optimizes the overall uptime and reliability of the data center fabric. It provides a robust framework and comprehensive feature set that meets the routing, switching, and storage administration needs of data centers. Cisco DCNM provides a high level of visibility and control through a single web-based management console for Cisco Nexus, Cisco MDS, and Cisco Unified Computing System products. This advanced management product

Provides self-service provisioning of intelligent and scalable fabrics.

Opens northbound application programming interfaces (API) for Cisco and third-party management and orchestration platforms.

Simplifies operational management of virtualized data centers.

Eases diagnosis and troubleshooting of data center outages.

Centralizes fabric management to facilitate resource moves, adds, and changes.

Proactively monitors the SAN and LAN, and detects performance degradation.

Tracks port utilization by tier, and through trending capacity manager predicts when an individual tier will be consumed.

VMpath analysis provides the capability to view performance for every switch hop all the way to the individual VMware ESX server and virtual machine. The VMware vCenter plug-in brings the Cisco Prime DCNM computing dashboard into VMware vCenter for dependency mapping, inventory, configuration, performance, and events. This management solution can be deployed in Windows and Linux, or as a virtual service blade on Nexus 1010 or 1100. More Cisco DCNM servers can be added to a cluster. Cisco DCNM authenticates administration through the following protocols: TACACS+, RADIUS, and LDAP.

Cisco Fabric Manager product for Cisco DCNM Release 5.2 is retitled with the name Cisco DCNM for SAN. Cisco Prime DCNM product for Cisco DCNM Release 5.2 is retitled with the name Cisco DCNM for LAN. Cisco DCNM also supports the installation of the Cisco DCNM for SAN and Cisco DCNM for LAN components with a single installer.

Cisco DCNM SAN Fundamentals Overview
Cisco Prime DCNM SAN has the following fundamental components: DCNM-SAN Server, DCNM-SAN Web Client, DCNM-SAN Client, Device Manager, Performance Manager, Cisco Traffic Analyzer, Authentication in DCNM-SAN Client, Network Monitoring, and Performance Monitoring. Figure 7-33 shows how Cisco Prime DCNM manages highly virtualized data centers.

Figure 7-33 Cisco Prime DCNM Managing Virtualized Topology

DCNM-SAN Server
DCNM-SAN Server is a platform for advanced MDS monitoring, troubleshooting, and configuration capabilities. DCNM-SAN Server provides centralized MDS management services and performance monitoring. SNMP operations are used to collect fabric information. The Cisco DCNM-SAN software, including the server components, requires about 100 GB of hard disk space. Cisco DCNM-SAN Server runs on VMware ESXi (Cisco Prime DCNM 7.0) or Windows 2008 R2 SP1, Windows 2008 Standard SP2, and Red Hat Enterprise Linux 5.4, 5.6, 5.7, 6.3, and 6.4 (Cisco Prime DCNM 6.3). Each computer configured as a Cisco DCNM-SAN Server can monitor multiple Fibre Channel SAN fabrics. Up to 16 clients (by default) can connect to a single Cisco DCNM-SAN Server concurrently. The Cisco DCNM-SAN Clients can also connect directly to an MDS switch in fabrics that are not monitored by a Cisco DCNM-SAN Server, which ensures that you can manage any of your MDS devices from a single console.

Cisco DCNM-SAN Server has the following features:

Multiple fabric management: DCNM-SAN Server monitors multiple physical fabrics under the same user interface. This facilitates managing redundant fabrics. A licensed DCNM-SAN Server maintains up-to-date discovery information on all configured fabrics so device status and interconnections are immediately available when you open the DCNM-SAN Client.

Continuous health monitoring: MDS health is monitored continuously, so any events that occurred since the last time you opened the DCNM-SAN Client are captured.

Roaming user profiles: The licensed DCNM-SAN Server uses the roaming user profile feature

to store your preferences and topology map layouts on the server, so that your user interface will be consistent regardless of what computer you use to manage your storage networks.

Licensing Requirements for Cisco DCNM-SAN Server
When you install DCNM-SAN, the basic unlicensed version of Cisco DCNM-SAN Server is installed with it. To get the licensed features, such as Performance Manager, remote client support, and continuously monitored fabrics, you need to buy and install the Cisco DCNM-SAN Server package. However, trial versions of these licensed features are available. To enable the trial version of a feature, you run the feature as if you had purchased the license. You see a dialog box explaining that this is a demo version of the feature and that it is enabled for a limited time.

DCNM-SAN Web Client
With DCNM-SAN Web Client you can monitor Cisco MDS switch events, performance, and inventory from a remote location using a web browser. It includes the following features:

Performance Manager summary reports: The Performance Manager summary report provides a high-level view of the network performance. These reports list the average and peak throughput and provide hot links to additional performance graphs and tables with additional statistics. These reports are available only if you create a collection using Performance Manager and start the collector.

Performance Manager drill-down reports: Performance Manager can analyze daily, weekly, monthly, and yearly trends. You can also view the results for specific time intervals using the interactive zooming functionality. Both tabular and graphical reports are available for all interconnections monitored by Performance Manager.

Zero maintenance databases for statistics storage: No maintenance is required to maintain Performance Manager's round-robin database, because its size does not increase over time. A full two days of raw samples are saved for maximum resolution. At prescribed intervals the oldest samples are averaged (rolled up) and saved. Gradually the resolution is reduced as groups of the oldest samples are rolled up together.

Device Manager
Device Manager provides a graphical representation of a Cisco MDS 9000 family switch chassis, Cisco Nexus 5000 Series switch chassis, or Cisco Nexus 7000 Series switch chassis (FCoE only) along with the installed switching modules, the supervisor modules, the power supplies, the fan assemblies, and the status of each port within each module. The tables in the DCNM-SAN information pane basically correspond to the dialog boxes that appear in Device Manager. However, although DCNM-SAN tables show values for one or more switches, a Device Manager dialog box shows values for a single switch. Device Manager also provides more detailed information for verifying or troubleshooting device-specific configuration than DCNM-SAN.

Performance Manager
The primary purpose of DCNM-SAN is to manage the network. A key management capability is network performance monitoring. Performance Manager gathers network device statistics historically

and provides this information graphically using a web browser. Performance Manager presents recent statistics in detail and older statistics in summary. Performance Manager also integrates with external tools such as Cisco Traffic Analyzer. Performance Manager has three operational stages:

Definition: The Flow Wizard sets up flows in the switches. Flows are defined based on a host-to-storage (or storage-to-host) link.

Collection: The Web Server Performance Collection screen collects information on desired fabrics. Performance Manager gathers statistics from across the fabric based on collection configuration files. These files determine which SAN elements and SAN links Performance Manager gathers statistics for. Based on this configuration, Performance Manager communicates with the appropriate devices (switches, hosts, or storage elements) and collects the appropriate information at fixed five-minute intervals. Performance Manager can collect statistics for ISLs, hosts, storage elements, and configured flows.

Presentation: Generates web pages to present the collected data through DCNM-SAN Web Server.

Cisco Traffic Analyzer
Cisco Traffic Analyzer provides real-time analysis of SPAN traffic or analysis of captured traffic through a web browser user interface. Cisco Traffic Analyzer is based on ntop, a public domain software enhanced by Cisco for Fibre Channel traffic analysis. Traffic encapsulated by one or more Port Analyzer Adapter products can be analyzed concurrently with a single workstation running Cisco Traffic Analyzer. Cisco Traffic Analyzer monitors round-trip response times, SCSI I/Os per second, SCSI read or traffic throughput and frame counts, SCSI session status, and management task information. Additional statistics are also available on Fibre Channel frame sizes and network management protocols.

Authentication in DCNM-SAN Client
Administrators launch DCNM-SAN Client and select the seed switch that is used to discover the fabric. The username and password are passed to DCNM-SAN Server and are used to authenticate to the seed switch. If this username and password are not a recognized SNMP username and password, either DCNM-SAN Client or DCNM-SAN Server opens a CLI session to the switch (SSH or Telnet) and retries the username and password pair. If the switch recognizes the username and password in either the local switch authentication database or through a remote AAA server, the switch creates a temporary SNMP username that is used by DCNM-SAN Client and DCNM-SAN Server.

Network Monitoring
DCNM-SAN provides extensive SAN discovery, topology mapping, and information viewing capabilities. DCNM-SAN collects information on the fabric topology through SNMP queries to the switches connected to it. After DCNM-SAN is invoked, a SAN discovery process begins. DCNM-SAN re-creates a fabric topology, presents it in a customizable map, and provides inventory and configuration information in multiple viewing options such as fabric view, device view, summary view, and operation view. Using information polled from a seed Cisco MDS 9000 family switch, including Name Server registrations, Fibre Channel Generic

Services (FC-GS), Fabric Shortest Path First (FSPF), and SCSI-3, DCNM-SAN automatically discovers all devices and interconnects on one or more fabrics. All available switches, host bus adapters (HBAs), and storage devices are discovered. The Cisco MDS 9000 family switches use Fabric-Device Management Interface (FDMI) to retrieve the HBA model, vendor, serial number and firmware version, and host operating-system type and version discovery without host agents. DCNM-SAN gathers this information through SNMP queries to each switch. The device information discovered includes device names, software revision levels, ISLs, port channels, and VSANs.

Performance Monitoring
DCNM-SAN and Device Manager provide multiple tools for monitoring the performance of the overall fabric, SAN elements, and SAN links. These tools provide real-time statistics as well as historical performance monitoring. Real-time performance statistics are a useful tool in dynamic troubleshooting and fault isolation within the fabric. Real-time statistics gather data on parts of the fabric in user-configurable intervals and display these results in DCNM-SAN and Device Manager. You can set the polling interval from ten seconds to one hour, and display the results based on a number of selectable options including absolute value, value per second, and minimum or maximum value per second. Device Manager provides an easy tool for monitoring ports on the Cisco MDS 9000 family switches. For a selected port, you can monitor any of a number of statistics, including traffic in and out, errors, class 2 traffic, and FICON data. This tool gathers statistics at a configurable interval and displays the results in tables or charts. These statistics show the performance of the selected port in real time and can be used for performance monitoring and troubleshooting.

Reference List
The landing page for the Cisco SAN portfolio is available at http://www.cisco.com/c/en/us/products/storage-networking/index.html
Cisco MDS 9700 Data Sheets: http://www.cisco.com/c/en/us/products/storage-networking/mds-9700-series-multilayer-directors/datasheet-listing.html
Cisco MDS 9500 Data Sheets: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9500-series-multilayer-directors/data_sheet_c78_605104.html
Cisco MDS 9000 Family Pluggable Transceivers Data Sheet: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayer-switches/product_data_sheet09186a00801bc698.html
More information on the Cisco MDS 9000 Family intelligent fabric applications: http://www.cisco.com/en/US/products/ps6028/index.html
More information on the Cisco Prime DCNM: http://www.cisco.com/go/dcnm
More information on the Cisco I/O Acceleration: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10531/data_sheet_c78538860.pdf
More information on the Data Mobility Manager (DMM): http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/product_data_sheet0900ae
More information on the XRC: http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps6028/ps10526/data_sheet_c78538834.html

Exam Preparation Tasks

Review All Key Topics
Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 7-12 lists a reference of these key topics and the page numbers on which each is found.

Table 7-12 Key Topics for Chapter 7

Complete Tables and Lists from Memory
Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms
Define the following key terms from this chapter, and check your answers in the glossary:
storage-area network (SAN)
Fibre Channel over Ethernet (FCoE)
Data Center Network Manager (DCNM)
Internet Small Computer System Interface (iSCSI)
Fibre Connection (FICON)
Fibre Channel over IP (FCIP)
buffer credits
crossbar arbiter
virtual output queuing (VOQ)
virtual SAN (VSAN)
zoning
N-port virtualization (NPV)
N-port identifier virtualization (NPIV)
Data Mobility Manager (DMM)
I/O Accelerator (IOA)
Dynamic Fabric Automation (DFA)
multiprotocol support
XRC Acceleration
inter-VSAN routing (IVR)
head-of-line blocking

Chapter 8. Virtualizing Storage

This chapter covers the following exam topics:
Describe Storage Virtualization
Describe LUN Storage Virtualization
Describe Redundant Array of Inexpensive Disk (RAID)
Describe Virtualizing SANs (VSAN)
Describe Where the Storage Virtualization Occurs

In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, or network, where the framework divides the resource into one or more execution environments. Devices, applications, and human users are able to interact with the virtual resource as if it were a real single logical resource. A number of computing technologies benefit from virtualization, including the following:

Storage virtualization: Combining multiple network storage devices into what appears to be a single storage unit.

Server virtualization: Partitioning a physical server into smaller virtual servers.

Network virtualization: Using network resources through a logical segmentation of a single physical network.

Application virtualization: Decoupling the application and its data from the operating system.

This chapter guides you through storage virtualization concepts, answering basic what, why, and how questions. It goes directly into the key concepts of storage virtualization and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification.

"Do I Know This Already?" Quiz
The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes." Table 8-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions.

Table 8-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. If you

do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following organizations defines the storage virtualization standards?
a. INCITS (International Committee for Information Technology Standards)
b. IETF (Internet Engineering Task Force)
c. SNIA (Storage Networking Industry Association)
d. DNS
e. Storage virtualization has no standard measure defined by a reputable organization.

2. Which of the following options are reasons to use storage virtualization? (Choose three.)
a. Power and cooling consumption in the data center
b. Exponential data growth and disruptive storage upgrades
c. Virtualization trends
d. Data protection regulations
e. Achieving cost-effective business continuity and data protection

3. Which of the following options can be virtualized according to SNIA storage virtualization taxonomy? (Choose all that apply.)
a. Disks
b. Blocks
c. Tape systems
d. File systems
e. File/record

4. Where does the storage virtualization occur? (Choose three.)
a. The host
b. The storage array
c. In the SAN fabric
d. The control plane
e. Switches

5. Which of the following options explains RAID 6?
a. It is an exact copy (or mirror) of a set of data on two disks.
b. It comprises block-level striping with distributed parity.
c. It comprises block-level striping with double distributed parity.
d. It creates a striped set from a series of mirrored drives.

6. Which of the following options explains RAID 1+0?
a. It is an exact copy (or mirror) of a set of data on two disks.
b. It comprises block-level striping with distributed parity.
c. It comprises block-level striping with double distributed parity.
d. It creates a striped set from a series of mirrored drives.

7. Which of the following options explains the Logical Volume Manager (LVM)? (Choose two.)
a. The LVM is a collection of ports from a set of connected Fibre Channel switches.
b. LVM combines multiple hard disk drive components into a logical unit to provide data redundancy.
c. LVMs can be used to divide large physical disk arrays into more manageable virtual disks.
d. The LVM manipulates LUN representation to create virtualized storage to the file system.

8. Select the correct order of operations for completing asynchronous array-based replication.
I. Write operation is received by primary storage array from the host.
II. Write operation to the primary array is acknowledged locally; primary storage array does not require a confirmation from the secondary storage array to acknowledge the write operation to the server.
III. Primary storage array maintains new data queued to be copied to the secondary storage array at a later time. It initiates a write to the secondary storage array.
IV. An acknowledgement is sent to the primary storage array by a secondary storage array after the data is stored on the secondary storage array.
a. I, II, III, IV
b. II, I, III, IV
c. I, III, II, IV
d. IV, III, II, I
e. Asynchronous mirroring operation was not explained correctly.

9. Which of the following options are the advantages of host-based storage virtualization? (Choose three.)
a. It is close to the file system.
b. It is licensed and managed per host.
c. It uses the operating system's built-in tools.
d. It uses the array controller's CPU cycles.
e. It is independent of SAN transport.

10. Which of the following options explains the LUN masking? (Choose two.)
a. It is a feature of storage arrays.
b. It is a proprietary technique that Cisco MDS switches offer.
c. It is a feature of Fibre Channel HBA.
d. It provides basic LUN-level security by allowing LUNs to be seen by selected servers.
e. It should be configured on HBAs.

Foundation Topics

What Is Storage Virtualization?
Storage virtualization is a way to logically combine storage capacity and resources from various heterogeneous, external storage systems into one virtual pool of storage. This virtual pool can then be more easily managed and provisioned as needed. According to the SNIA dictionary, storage virtualization is the act of abstracting, hiding, or isolating the internal function of a storage (sub)system or service from applications, computer servers, or general network resources for the purpose of enabling application and network independent management of storage or data. Unlike previous new protocols or architectures, storage virtualization has no standard measure defined by a reputable organization such as INCITS (International Committee for Information Technology Standards) or the IETF (Internet Engineering Task Force). The closest vendor-neutral attempt to make storage virtualization concepts comprehensible has been the work of the Storage Networking Industry Association (SNIA), which has produced useful tutorial content on the various flavors of virtualization technology.

The beginning of virtualization goes a long way back. Much of the pioneering work in virtualization began on mainframes and has since moved into the non-mainframe, open-system world. Virtual memory operating systems evolved during the 1960s. The original mass storage device used a tape cartridge, which was a helical-scan cartridge that looked like a disk. The late 1970s witnessed the introduction of the first solid-state disk, which was a box full of DRAM chips that appeared to be rotating magnetic disks. Virtualization achieved a new level when the first Redundant Array of Inexpensive Disk (RAID) virtual disk array was announced in 1992 for mainframe systems. The first virtual tape systems appeared in 1997 for mainframe systems. Virtualization gained momentum in the late 1990s as a result of virtualizing SANs and storage networks in an effort to tie together all the computing platforms from the 1980s. In 2003, virtualization began to move into storage and some disk subsystems. In 2005, virtual tape was the most popular storage initiative in many storage management pools. One decade later, virtual tape architectures moved away from the mainframe and entered new markets, increasing the demand for virtual tape.

Software Defined Storage (SDS) was proposed in 2013 as a new category of storage software products. The term Software Defined Storage is a follow-up of the technology trend Software Defined Networking, which was first used to describe an approach in network technology that abstracts various elements of networking and creates an abstraction or virtualized layer in software. In networking, the control plane and the data plane have been intertwined within the traditional switches that are deployed today, making abstraction and virtualization more difficult to manage in complex virtual environments. SDS refers to the abstraction of the physical elements, similar to server virtualization. SDS delivers automated, policy-driven, application-aware storage services through orchestration of the underlining storage infrastructure in support of an overall software defined environment. SDS represents a new evolution for the storage industry for how storage will be managed and deployed in the future.

How Storage Virtualization Works
Storage virtualization works through mapping. The storage virtualization layer creates a mapping from the logical storage address space (used by the hosts) to the physical address of the storage device. Such mapping information, also referred to as metadata, is stored in huge mapping tables. When a host requests I/O, the virtualization layer looks at the logical address in that I/O request and, using the mapping table, translates the logical address into the address of a physical storage device. The virtualization layer then performs I/O with the underlying storage device using the physical address, and when it receives the data back from the physical device, it returns that data to the application as if it had come from the logical address. The application is unaware of the mapping that happens beneath the covers. Figure 8-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices.

Figure 8-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage
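The mapping idea can be made concrete with a few lines of code. The following Python fragment is only an illustrative sketch of the lookup-and-translate step described above; the class and device names are invented for the example and do not come from any product:

```python
# Sketch of the logical-to-physical mapping ("metadata") described above.
class VirtualizationLayer:
    def __init__(self):
        self.mapping = {}  # metadata table: logical block -> (device, physical block)

    def map_block(self, logical, device, physical):
        self.mapping[logical] = (device, physical)

    def read(self, logical):
        device, physical = self.mapping[logical]  # metadata lookup
        return device.read(physical)              # I/O using the physical address

class Disk:
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks
    def read(self, physical):
        return self.blocks[physical]

# Two heterogeneous "arrays" pooled behind one logical address space.
a = Disk("array-A", ["A-blk0", "A-blk1"])
b = Disk("array-B", ["B-blk0", "B-blk1"])
v = VirtualizationLayer()
v.map_block(0, a, 1)
v.map_block(1, b, 0)
print(v.read(0))  # A-blk1: the application never sees which array served it
```

The host addresses only logical block 0; the layer silently redirects the request, which is exactly why physical storage can be moved or expanded without the application noticing.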
The first virtual tape systems appeared in 1997 for mainframe systems. According to the SNIA dictionary. or general network resources for the purpose of enabling application and network independent management of storage or data. . virtual tape architectures moved away from the mainframe and entered new markets. policy-driven. hiding. which has produced useful tutorial content on the various flavors of virtualization technology.

using the mapping table. CIOs and IT managers must cope with shrinking IT budgets and growing client demands. use IT resources more efficiently. The virtualization layer then performs I/O with the underlying storage device using the physical address. and when it receives the data back from the physical device. ensure business continuity. The application is unaware of the mapping that happens beneath the covers. Organizations can use the technology to resolve the issue and create a more adaptive. they are faced with ever-increasing constraints on power. and space. The top seven reasons to use storage virtualizations are Exponential data growth and disruptive storage upgrades Low utilization of existing assets . In addition. flexible service-based infrastructure to reflect ongoing changes in business requirements. cooling. it returns that data to the application as if it had come from the logical address. Figure 8-1 illustrates this virtual storage layer that sits between the applications and the physical storage devices. They must simultaneously improve asset utilization. and become more agile. Figure 8-1 Storage Virtualization Mapping Heterogeneous Physical Storage to Virtual Storage Why Storage Virtualization? The business drivers for storage virtualization are much the same as those for server virtualization. translates the logical address into the address of a physical storage device. The major economic driver is to reduce costs without sacrificing data integrity or performance.

and how it is implemented. Figure 8-2 SNIA Storage Virtualization Taxonomy Separates Objects of Virtualization from Location and Means of Execution What is being virtualized may include blocks. in storage arrays. file systems. and file or record virtualization. Where virtualization occurs may be on the host. high-quality storage service delivery to application and business owners Achieving cost-effective business continuity and data protection Power and cooling consumption in the data center What Is Being Virtualized? The Storage Networking Industry Association (SNIA) taxonomy for storage virtualization is divided into three basic categories: what is being virtualized. tape systems. or in the network via intelligent fabric switches or SAN-attached appliances. . This is illustrated in Figure 8-2. and manage storage Ensuring rapid. where the virtualization occurs. run. How the virtualization occurs may be via inband or out-of-band separation of control and data paths.Growing management complexity with flat or decreasing budgets for IT staff head count Increasing hard costs and environmental costs to acquire. Storage virtualization provides the means to build higher-level storage services that mask the complexity of all underlying components and enable automation of data storage operations. disks.

Figure 8-3 illustrates all components of an intelligent SAN. presenting an image of virtual storage to the host by one link and allowing the host to directly retrieve data blocks from physical storage on another. network-based. It has the opportunity to work as a bridge. it was defined as Redundant Arrays of Inexpensive Disks (RAID) on a paper . Block Virtualization—RAID RAID is another data storage virtualization technology that combines multiple hard disk drive components into a logical unit to provide data redundancy and improve performance or capacity. Originally. so that both block data and the control information that govern its virtual appearance transit the same link. respectively. within SAN switches. within the storage network in the form of a virtualization appliance. The out-of-band method provides separate paths for data and control. vendors have different methods for implementing virtualized storage transport. The in-band method places the virtualization engine directly in the data path.The most important reasons for storage virtualization were covered in the “Why Storage Virtualization?” section. The abstraction layer that masks physical from logical storage may reside on host systems such as servers. or array-based virtualization. it can present an iSCSI target to the host while using standard FC-attached arrays for the storage. these alternatives are referred to as host-based. In common usage. binding multiple levels of technologies on a foundation of logical abstraction. The ultimate goal of storage virtualization should be to simplify storage administration. This capability enables end users to preserve investment in storage resources while benefiting from the latest host connection mechanisms such as Fibre Channel over Ethernet (FCoE). in 1988. This can be achieved by a layered approach. or on a storage array or tape subsystem targets. In the in-band method the appliance acts as an I/O target to the hosts and acts as an I/O initiator to the storage. In-band and out-of-band virtualization techniques are sometimes referred to as symmetrical and asymmetrical. For example. Figure 8-3 Possible Alternatives Where the Storage Virtualization May Occur In addition to differences between where storage virtualization is located.

subsequent reads can be calculated from the distributed parity so that no data is lost. but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois Field or Reed–Solomon error correction that we will not be covering in this chapter. system motherboards. but many variations have evolved—notably. Each scheme provides a different balance between the key goals: reliability and availability. host bus adapters. RAID has been implemented in adapters/controllers for many years. and Randy Katz. For example. For example. RAID 5 requires a minimum of three disks. Many RAID levels employ an error protection scheme called “parity. Host volume management software typically has several RAID variations and. there were five RAID levels.written by David A. Patterson. Double parity provides fault tolerance up to two failed drives. Because RAID works with storage address spaces and not necessarily storage devices. The different schemes or architectures are named by the word RAID followed by a number (for example. and RAID 6 (dual parity). Today. if an 800 GB disk is striped together with a 500 GB disk. referred to as RAID levels. device driver software. This makes larger RAID groups more practical. The most common RAID levels that are used today are RAID 0 (striping). several nested levels and many nonstandard levels (mostly proprietary). Now it is commonly used as Redundant Array of Independent disks (RAID). A RAID 0 can be created with disks of different sizes. volume management software. It can be found in disk subsystems. Such an array can only be as big as the smallest member disk. The Storage Networking Industry Association (SNIA) standardizes RAID levels and their associated data formats in the common RAID Disk Drive Format (DDF) standard. RAID 1 and variants (mirroring). but the storage space added to the array by each disk is limited to the size of the smallest disk. depending on the specific level of redundancy and performance required. Originally. RAID 0 is normally used to increase performance. especially for high- . A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped). and virtually any processor along the I/O path. RAID 0. Most of the RAID levels use simple XOR. Gibson. There is an enormous range of capabilities offered in disk subsystems across a wide range of prices. It requires that all drives but one be present to operate. Upon failure of a single drive. This is useful when read performance or reliability is more important than data storage capacity. RAID 1). from the University of California. Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors. A classic RAID 1 mirrored pair contains two disks. it can be used on multiple levels recursively. Gary A. starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. It provides no data redundancy. in some cases. RAID 5 (distributed parity). performance and capacity. A RAID 6 comprises block-level striping with double distributed parity. Data is distributed across the hard disk drives in one of several ways. RAID is a staple in most storage products. offers many sophisticated configuration options. 
For example, if an 800 GB disk is striped together with a 500 GB disk, the size of the array will be 1 TB (500 GB × 2).

A RAID 1 is an exact copy (or mirror) of a set of data on two disks. A classic RAID 1 mirrored pair contains two disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. For example, if a 500 GB disk is mirrored together with a 400 GB disk, the size of the array will be 400 GB.

A RAID 5 comprises block-level striping with distributed parity. RAID 5 requires a minimum of three disks, and it requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity so that no data is lost.

A RAID 6 comprises block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, because large-capacity drives take longer to restore. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5.
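Because most RAID levels build their parity with a simple XOR, the recovery step described above can be demonstrated in a few lines. The following Python sketch is a toy example with three data "drives" and one parity block, not a real RAID implementation; it rebuilds a lost block by XORing the survivors with the parity:

```python
from functools import reduce

# Toy RAID-style parity: parity byte = d0 XOR d1 XOR d2, computed per byte.
stripes = [b"\x10\x20", b"\x0a\x0b", b"\x55\xaa"]  # one block per "drive"
parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*stripes))

# Simulate losing drive 1, then reconstruct its data from the survivors.
survivors = [stripes[0], stripes[2], parity]
rebuilt = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*survivors))
assert rebuilt == stripes[1]  # the failed drive's contents are fully recovered
print(rebuilt.hex())          # 0a0b
```

XORing any value with itself cancels it out, so combining the parity with the remaining drives leaves exactly the missing drive's bytes; this is the arithmetic behind a RAID 5 rebuild.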

RAID is a staple in most storage products. It can be found in disk subsystems, host bus adapters, system motherboards, device driver software, volume management software, and virtually any processor along the I/O path. RAID has been implemented in adapters/controllers for many years, starting with SCSI and including Advanced Technology Attachment (ATA) and serial ATA. Disk subsystems have been the most popular product for RAID implementations and probably will continue to be for many years to come. There is an enormous range of capabilities offered in disk subsystems across a wide range of prices. Host volume management software typically has several RAID variations and, in some cases, offers many sophisticated configuration options. Running RAID in a volume manager opens the potential for integrating file system and volume management products for management and performance-tuning purposes.

Because RAID works with storage address spaces and not necessarily storage devices, it can be used on multiple levels recursively. Many storage controllers allow RAID levels to be nested (hybrid), and storage arrays are rarely nested more than one level deep. The final array of the RAID level is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the "+", which yields RAID 10 and RAID 50, respectively. The most common examples of nested RAIDs are the following:

RAID 0+1 creates a second striped set to mirror a primary striped set. The array continues to operate with one or more drives failed in the same mirror set, but if drives fail on both sides of the mirror, the data on the RAID system is lost.

RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses as long as no mirror loses all its drives.
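The difference between the two nested layouts can be made concrete with a small check. This Python sketch (hypothetical drive labels, not a vendor tool) tests whether a RAID 1+0 set survives a given combination of failed drives; the array lives as long as no mirrored pair loses both members:

```python
# RAID 1+0: stripe across mirrored pairs; the set survives any failures
# as long as every mirror still has at least one working drive.
mirrors = [("d0", "d1"), ("d2", "d3"), ("d4", "d5")]

def raid10_survives(failed):
    return all(any(d not in failed for d in pair) for pair in mirrors)

print(raid10_survives({"d0", "d2", "d4"}))  # True: one loss in each mirror
print(raid10_survives({"d0", "d1"}))        # False: mirror 0 lost both drives
```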

Table 8-2 lists the different types of RAID levels.

Table 8-2 Overview of Standard RAID Levels

Virtualizing Logical Unit Numbers
Virtualization of storage helps to provide location independence by abstracting the physical location of the data. The user is presented with a logical space for data storage by the virtualization system, which manages the process of mapping that logical space to the physical location. A Logical Unit Number (LUN) is a unique identifier that designates individual hard disk devices or grouped devices for address by a protocol associated with a SCSI, iSCSI, Fibre Channel (FC), or similar interface. LUNs are central to the management of block storage arrays shared over a storage-area network (SAN). Virtualization maps the relationships between the back-end resources and front-end resources. The back-end resource refers to a LUN that is not presented to the computer or host system for direct use. The front-end LUN or volume is presented to the computer or host system for use. You can have multiple layers of virtualization or mapping by using the output of one layer as the input for a higher layer of virtualization. How the mapping is performed depends on the implementation.

In a block-based storage environment, one block of information is addressed by using a LUN and an offset within that LUN, known as a Logical Block Address (LBA). Typically, one physical disk is broken down into smaller subsets in multiple megabytes or gigabytes of disk space. In most SAN environments, each individual LUN must be discovered by only one server host bus adapter (HBA). Otherwise, the same volume will be accessed by more than one file system, leading to potential loss of data or security. There are three ways to prevent this multiple access, as shown in Figure 8-13:

LUN masking: LUN masking, a feature of enterprise storage arrays, provides basic LUN-level security by allowing LUNs to be seen only by selected servers that are identified by their port worldwide name (pWWN). Each storage array vendor has its own management and proprietary techniques for LUN masking in the array. In a heterogeneous environment with arrays from different vendors, LUN management becomes more difficult.

LUN mapping: LUN mapping, a feature of Fibre Channel HBAs, allows the administrator to selectively map some LUNs that have been discovered by the HBA. Most administrators configure the HBA to automatically map all LUNs that the HBA discovers. They then perform LUN management in the array (LUN masking) or in the network (LUN zoning). LUN mapping must be configured on every HBA, so in a large SAN, this mapping is a large management task.

LUN zoning: LUN zoning, a proprietary technique that Cisco MDS switches offer, allows LUNs to be selectively zoned to their appropriate host port. LUN zoning can be used instead of, or in combination with, LUN masking in heterogeneous environments or where Just a Bunch of Disks (JBOD) are installed.
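The masking decision itself is simple to express. The following Python sketch uses made-up pWWNs and LUN numbers purely for illustration; real arrays implement this logic inside their own management software. It models an array that answers LUN discovery only for initiator ports listed in each LUN's mask:

```python
# LUN masking sketch: the array exposes a LUN only to initiator pWWNs
# that appear in the mask configured for that LUN.
lun_masks = {
    0: {"21:00:00:e0:8b:05:05:04"},                              # LUN 0 -> server A
    1: {"21:00:00:e0:8b:05:05:04", "21:00:00:e0:8b:01:02:03"},   # LUN 1 -> A and B
}

def visible_luns(initiator_pwwn):
    return sorted(lun for lun, mask in lun_masks.items() if initiator_pwwn in mask)

print(visible_luns("21:00:00:e0:8b:01:02:03"))  # [1] -- server B never sees LUN 0
```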

Figure 8-13 LUN Masking, LUN Mapping, and LUN Zoning on a Storage Area Network (SAN)

Note
JBODs do not have a management function or controller and so do not support LUN masking.

In most LUN virtualization deployments, a virtualizer element is positioned between a host and its associated target disk array, which may be heterogeneous in nature. This virtualizer generates a virtual LUN (vLUN) that proxies server I/O operations while hiding specialized data block processes that are occurring in the pool of disk arrays. This process is fully transparent to the applications using the storage infrastructure. The physical storage resources are aggregated into storage pools from which the logical storage is created. More storage systems can be added as and when needed, and the virtual storage space will scale up by the same amount.

Utilizing vLUNs, disk expansion and shrinking can be managed easily. More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion). Similarly, disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited because there is no guarantee of what resides on the areas removed).

Thin provisioning is a LUN virtualization feature that helps reduce the waste of block storage resources. Using this technique, although a server can detect a vLUN with a "full" size, only blocks that are present in the vLUN are saved on the storage pool. This feature brings the concept of oversubscription to data storage, consequently demanding special attention to the ratio between "declared" and actually used resources. Each storage vendor has different LUN virtualization solutions and techniques. In this chapter we have briefly covered a subset of these features.
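The oversubscription effect is easy to see in a sketch. This Python fragment is illustrative only (the class name and sizes are invented); it models a thin-provisioned vLUN that advertises its full "declared" size but allocates blocks from the pool only on first write:

```python
# Thin-provisioned vLUN: the host sees the declared size, but the pool
# only stores blocks that have actually been written.
class ThinLUN:
    def __init__(self, declared_blocks):
        self.declared_blocks = declared_blocks
        self.allocated = {}              # block number -> data

    def write(self, block, data):
        self.allocated[block] = data     # allocate on first write

    def used(self):
        return len(self.allocated)

vlun = ThinLUN(declared_blocks=1_000_000)  # host detects a "full" sized LUN
vlun.write(0, b"boot")
vlun.write(42, b"data")
print(vlun.declared_blocks, vlun.used())   # 1000000 declared, 2 actually consumed
```

The gap between the two printed numbers is exactly the "declared versus actually used" ratio the text warns administrators to watch.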

Virtualizing Storage-Area Networks
A virtual storage-area network (VSAN) is a collection of ports from a set of connected Fibre Channel switches, which form a virtual fabric. VSANs were designed by Cisco, modeled after the virtual local area network (VLAN) concept in Ethernet networking, and applied to a SAN. In October 2004, the Technical Committee T11 of the International Committee for Information Technology Standards approved VSAN technology to become a standard of the American National Standards Institute (ANSI). Ports within a single switch can be partitioned into multiple VSANs, despite sharing hardware resources. Conversely, multiple switches can join a number of ports to form a single VSAN. Each VSAN is a separate self-contained fabric using distinctive security policies, zones, events, memberships, and name services. A VSAN, like each FC fabric, can offer different high-level protocols such as FCP, FCIP, FICON, and iSCSI. The use of VSANs allows traffic to be isolated within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently. The geographic location of the switches and the attached devices are independent of their segmentation into logical VSANs. Figure 8-14 portrays three different SAN islands that are being virtualized onto a common SAN infrastructure on Cisco MDS switches.

Figure 8-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure

Tape Storage Virtualization
Tape storage virtualization can be divided into two categories: tape media virtualization and tape drive and library virtualization (VTL). Tape media virtualization resolves the problem of underutilized tape media. With tape media virtualization the number of mounts is reduced, which saves tapes, tape libraries, and floor space. A virtual tape library (VTL) is used typically for backup and recovery purposes. A VTL presents a storage component (usually hard disk storage) as tape libraries or tape drives for use with existing backup software. Virtualizing the disk storage as tape allows integration of VTLs with existing backup software and existing backup and recovery processes and policies. The benefits of such virtualization include storage consolidation and faster data restore processes. Most current VTL solutions use SAS or SATA disk arrays as the primary storage component because of their relatively low cost. The use of array enclosures increases the scalability of the solution by allowing the addition of more disk drives and enclosures to increase the storage capacity. By backing up data to disks instead of tapes, VTL often increases performance of both backup and recovery operations. Restore processes are found to be faster than backup regardless of implementation. The shift to VTL also eliminates streaming problems that often impair efficiency in tape drives, because disk technology does not rely on streaming and can write effectively regardless of data transfer speeds. In some cases, the data stored on the VTL's disk array is exported to other media,
such as physical tapes. and name services. for disaster recovery purposes.

Figure 8-14 Independent Physical SAN Islands Virtualized onto a Common SAN Infrastructure Since December 2002 Cisco enabled multiple virtualization features within intelligent SANs. Whenever a file system client needs to access a file. The problem. aliasing. it must retrieve information about the file from the metadata servers first (see Figure 8-15). Consider that every Fibre Channel switch in a fabric needs a different domain ID. and security perspectives. and that the total number of domain IDs in a fabric is limited. Whereas NPIV is primarily a host-based solution. In some cases. The N-Port Virtualizer feature allows the blade switch or top-of-rack fabric device to behave as an NPIV-based host bus adapter (HBA) to the core Fibre Channel director. N-Port ID Virtualization (NPIV) and N-Port Virtualizer. an inherent conflict between trying to reduce the overall number of switches to keep the domain ID count low while also needing to add switches to have a sufficiently high port count. NPV is intended to address this problem. NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. therefore. In a virtual machine environment where many host operating systems or applications are running on a physical host. . this limit can be fairly low depending upon the devices attached to the fabric. N-Port Virtualizer (NPV) An extension to NPIV is the N-Port Virtualizer feature. These systems are based on a global file directory structure that is contained on dedicated file metadata servers. each virtual machine can now be managed independently from zoning. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. N-Port ID Virtualization (NPIV) NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCID) over a single link. though. Different applications can be used in conjunction with NPIV. Virtualizing File Systems A virtual file system is an abstraction layer that allows client applications using heterogeneous filesharing network protocols to access a unified pool of storage resources. is that you often need to add Fibre Channel switches to scale the size of your fabric. All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same physical host. Several examples are Inter VSAN routing (IVR). There is.

The terms of the contract might bring incompatibility from release to release.Figure 8-15 File System Virtualization The purpose of a virtual file system (VFS) is to allow client applications to access different types of concrete file systems in a uniform way. so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. be used to access local and network storage devices transparently without the client application noticing the difference. The VMware Virtual Machine File System (VMFS). it is easy to add support for new file system types to the kernel simply by fulfilling the contract. A VFS can. so that concrete file system support built for a given release of the operating system would work with future versions of the operating system. which would require that the concrete file system support be recompiled. NTFS. and Unix file systems. Sun Microsystems introduced one of the first VFSes on Unix-like systems. and the Oracle Clustered File System (OCFS) are all examples of virtual file systems. Linux’s Global File System (GFS). It can be used to bridge the differences in Windows. and possibly modified before recompilation. Therefore. . Mac OS. for example. to allow it to work with a new release of the operating system. A VFS specifies an interface (or a “contract”) between the kernel and a concrete file system. Or the supplier of the operating system might make only backward-compatible changes to the contract.

File/Record Virtualization

File virtualization unites multiple storage devices into a single logical pool of files. It is a vital part of both file area network (FAN) and network file management (NFM) concepts. File virtualization is similar to Dynamic Name Service (DNS), which removes the requirement to know the IP address of a website; you simply type in the domain name. In a similar fashion, file virtualization eliminates the need to know exactly where a file is physically located. Users look to a single mount point that shows all their files.

There are two types of file virtualization methods available today. The first is one that's built in to the storage system or the NAS itself, often called a "global" file system, meaning the same hardware must be used to expand these systems and is often available from only one manufacturer, a situation equivalent to not being able to mix brands of servers in a server virtualization project. This eliminates one of the key capabilities of a file virtualization system, the flexibility to mix hardware in a system; but most important, you could lose the ability to store older data on a less expensive storage system. There may be some advantage here because the file system itself is managing the metadata that contains the file locations, which could solve the file stub and manual look-up problems. The challenge, however, is that this metadata is unique to the users of that specific file system, and often global file systems are not granular to the file level.

The second file virtualization method uses a standalone virtualization engine, typically an appliance. These systems can either sit in the data path (in-band) or outside the data path (out-of-band) and can move files to alternate locations based on a variety of attribute-related policies. Both offer file-level granularity. In-band solutions typically offer better performance, whereas out-of-band solutions are simpler to implement. Unlike global file systems, standalone file virtualization systems do not have to use all the same hardware. In Chapter 6, as you will remember, we discussed different types of tiered storage. In a common scenario, the primary NAS could be a high-speed, high-performance system storing production data. Data can be moved from a tier-one NAS to a tier-two or tier-three NAS or network file server automatically. Because these files are less frequently accessed, they can be moved to lower-cost, more power-efficient systems that bring down capital and operational costs.

Where Does the Storage Virtualization Occur?

Storage virtualization functionality, including aggregation (pooling), heterogeneous replication, transparent data migration, and device emulation, can be implemented at the host server, in the data path on a network switch blade or on an appliance, as well as in a storage system. The best location to implement storage virtualization functionality depends in part on the preferences, existing technologies, and objectives for deploying storage virtualization.

Host-Based Virtualization

A host-based virtualization requires additional software running on the host as a privileged task or process. A physical device driver handles the volumes (LUN) presented to the host system. However, a software layer, the logical volume manager, residing above the disk device driver intercepts the I/O requests (see Figure 8-16) and provides the metadata lookup and I/O mapping. In some cases, volume management is built in to the operating system, and in other instances it is offered as a separate product. Most modern operating systems have some form of logical volume management built in (in Linux, it's called Logical Volume Manager or LVM; in Windows, it's Logical Disk Manager, or LDM; in Solaris and FreeBSD, it's ZFS's zpool layer) that performs virtualization tasks.

Figure 8-16 Logical Volume Manager on a Server System

The LVM manipulates LUN representation to create virtualized storage to the file system or database manager. LVMs can be used to divide large physical disk arrays into more manageable virtual disks or to create large virtual disks from multiple physical disks. RAID can be implemented without a special controller, but the software that performs the striping calculations often places a noticeable burden on the host CPU. For higher performance requirements, logical volume management may be complemented by hardware-based virtualization; RAID controllers for internal DAS and JBOD external chassis are good examples of hardware-based virtualization.

The logical volume manager treats any LUN as a vanilla resource for storage pooling; it will not differentiate the RAID levels. So the storage administrator must ensure that the appropriate classes of storage under LVM control are paired with individual application requirements. Host-based storage virtualization may require more CPU cycles, and use of host-resident virtualization must therefore be balanced against the processing requirements of the upper-layer applications so that overall performance expectations can be met. With host-based virtualization, heterogeneous storage systems can be used on a per-server basis. For smaller data centers, this may not be a significant issue, but manual administration of hundreds or thousands of servers in a large data center can be very costly.

Network-Based (Fabric-Based) Virtualization

A network-based virtualization is a fabric-based switch infrastructure. The SAN fabric provides connectivity to heterogeneous server platforms and heterogeneous storage arrays and tape subsystems. Fabric may be composed of multiple switches from different vendors. Fabric-based storage virtualization dynamically interacts with virtualizers on arrays, servers, or appliances, and the virtualization capabilities require standardized protocols to ensure interoperability between fabric-hosted virtualization engines.

A host-based (server-based) virtualization provides independence from vendor-specific storage, and storage-based virtualization provides independence from vendor-specific servers and operating systems. Network-based virtualization provides independence from both. Multivendor fabric-based virtualization provides high availability, but it requires processing power and memory on top of a hardware architecture that is providing processing power for fabric services. The fabric-based virtualization engine can be hosted on an appliance, an application-specific integrated circuit (ASIC) blade, or an auxiliary module, centralized within the switch architecture.

To ensure interoperability between fabric-hosted virtualization engines, the ANSI/INCITS T11.5 task group initiated the fabric application interface standard (FAIS). FAIS is an open systems project that defines a set of common application programming interfaces (APIs) to be implemented within fabrics. The FAIS initiative separates control information from the data path, while maintaining high-performance switching of storage data, and it aims to provide fabric-based services such as snapshots, mirroring, and data replication. The APIs are a means to more easily integrate storage applications that were originally developed as host-, array-, or appliance-based utilities to now be supported within fabric switches and directors. FAIS development is thus being driven by the switch manufacturers and by companies who have developed storage virtualization software and virtualization hardware-assist components.

There are two types of processors, the CPP and the DPC. The control path processor (CPP) includes some form of operating system, the FAIS application interface, and the storage virtualization application. The CPP is therefore a high-performance CPU with auxiliary memory. Allocation of virtualized storage to individual servers and management of the storage metadata is the responsibility of the storage application running on the CPP. The data path controller (DPC) may be implemented at the port level in the form of an application-specific integrated circuit (ASIC) or dedicated CPU. The DPC is optimized for low latency and high bandwidth to execute basic SCSI read/write transactions under the management of one or more CPPs, routing I/O requests to the proper physical locations. The FAIS specifications do not define a particular hardware design for CPP and DPC entities, so vendor implementation may be different.

Network-based virtualization can be done with one of the following approaches, referring to whether the virtualization engine is in the data path:

Symmetric or in-band: With an in-band approach, the virtualization device is sitting in the path of the I/O data flow. Hosts send I/O requests to the virtualization device, which performs I/O with the actual storage device on behalf of the host. In-band solutions intercept I/O requests from hosts, map them to the physical storage locations, and regenerate those I/O requests to the storage systems on the back end. Caching for improving performance and other storage management features, such as replication and migration, can be supported. In-band solutions do require that the virtualization engine handle both control traffic and data, which requires the processing power and internal bandwidth to ensure that they don't add too much latency to the I/O process for host servers.

Asymmetric or out-of-band: The virtualization device in this approach sits outside the data path between the host and storage device. Out-of-band solutions typically run on the server or on an appliance in the network and essentially process the control traffic, but don't handle data traffic. This can result in less latency than in-band storage virtualization because the virtualization engine doesn't handle the data; it's also less disruptive if the virtualization engine goes down. What this means is that special software is needed on the host, which knows to first request the location of data from the virtualization device and then use that mapped physical address to perform the I/O.

Hybrid split-path: This method uses a combination of in-band and out-of-band approaches, taking advantage of intelligent SAN switches to perform I/O redirection and other virtualization tasks at wire speed. Whereas in typical in-band solutions the CPU is susceptible to being overwhelmed by I/O traffic, in the split-path approach the I/O-intensive work is offloaded to dedicated port-level ASICs on the SAN switch. These ASICs can look inside Fibre Channel frames and perform I/O mapping and reroute the frames without introducing any significant amount of latency. Specialized software running on a dedicated, highly available appliance interacts with the intelligent switch ports to manage I/O traffic and map logical-to-physical storage resources at wire speed.

Figure 8-17 portrays the three architecture options for implementing network-based storage virtualization.

Figure 8-17 Possible Architecture Options for How the Storage Virtualization Is Implemented

There are many debates regarding the architecture advantages of in-band (in the data path) vs. out-of-band (out of the data path, with agent and metadata controller) or split path (hybrid of in-band and out-of-band). Some applications and their storage requirements are best suited for a combination of technologies to address specific needs and requirements.

Cisco SANTAP is one of the Intelligent Storage Services features supported on the Storage Services Module (SSM), the MDS 9222i Multiservice Modular Switch, and the MDS 9000 18/4-Port Multiservice Module (MSM-18/4).

Cisco SANTAP is a good example of network-based virtualization. The Cisco MDS 9000 SANTAP service enables customers to deploy third-party appliance-based storage applications without compromising the integrity, availability, or performance of a data path between the server and disk. The protocol-based interface that is offered by SANTAP allows easy and rapid integration of the data storage service application because it delivers a loose connection between the application and an SSM, which reduces the effort needed to integrate applications with the core services being offered by the SSM. SANTAP has a control path and a data path. The control path handles requests that create and manipulate replication sessions sent by an appliance; it is implemented using a SCSI-based protocol. An appliance sends requests to a Control Virtual Target (CVT), which the SANTAP process creates and monitors. Responses are sent to the control logical unit number (LUN) on the appliance. SANTAP also allows LUN mapping to Appliance Virtual Targets (AVT).

Array-Based Virtualization

In the array-based approach, the virtualization layer resides inside a storage array controller, and multiple other storage devices from the same vendor or from a different vendor can be added behind it. That controller, called the primary storage array controller, is responsible for providing the mapping functionality. The primary storage array controller also provides replication, migration, and other storage management services across the storage devices that it is virtualizing. The communication between separate storage array controllers enables the disk resources of each system to be managed collectively, either through a distributed management scheme or through a hierarchal master/slave relationship. The communication protocol between storage subsystems may be proprietary or based on the SNIA SMI-S standard.

As one of the first forms of array-to-array virtualization, data replication requires that a storage system function as a target to its attached servers and as an initiator to a secondary array. This is commonly referred to as disk-to-disk (D2D) data replication. Array-based data replication may be synchronous or asynchronous. In a synchronous replication operation (see Figure 8-18), each write to a primary array must be completed on the secondary array before the SCSI transaction acknowledges completion. The SCSI I/O is therefore dependent on the speed at which both writes can be performed, and any latency between the two systems affects overall performance. For this reason, synchronous data replication is generally limited to metropolitan distances (~150 kilometers) to avoid performance degradation due to speed of light propagation delay.
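To put that distance limit in perspective, here is a rough worked example; the propagation speed is an assumed typical value, not a figure from this chapter. Light travels through optical fiber at roughly 200,000 km/s, about two-thirds of its speed in a vacuum. A 150-kilometer link therefore adds approximately 150 / 200,000 = 0.75 ms of one-way delay, or about 1.5 ms of round-trip propagation time that every synchronous write must absorb before the SCSI transaction can be acknowledged, on top of any switch, controller, and disk latency.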

Figure 8-18 Synchronous Mirroring

In asynchronous array-based replication (see Figure 8-19), individual write operations to the primary array are acknowledged locally, while one or more write transactions may be queued to the secondary array. This solves the problem of local performance, but it may result in loss of the most current write to the secondary array if the link between the two systems is lost. To minimize this risk, some implementations require the primary array to temporarily buffer each pending transaction to the secondary so that transient disruptions are recoverable.

Figure 8-19 Asynchronous Mirroring

Array-based virtualization services often include automated point-in-time copies, or snapshots, that may be implemented as a complement to data replication. Point-in-time copy is a technology that makes copying large sets of data a common activity. Like photographic snapshots capture images of physical action, point-in-time copies, or snapshots, are virtual or physical copies of data that capture the state of data set contents at a single instant. These copies can be used for a backup, a checkpoint to restore the state of an application, test data, data mining, and a kind of off-host processing. Both virtual (copy-on-write) and physical (full-copy) snapshots protect against corruption of data content. Additionally, full-copy snapshots can protect against physical destruction.
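The copy-on-write mechanic can be illustrated at host level with LVM snapshots. This is a generic Linux sketch of the same concept, not the array feature itself; the volume names are illustrative and assume the pool created earlier exists:

# Create a 1 GB copy-on-write snapshot of an existing logical volume.
lvcreate --size 1G --snapshot --name lv_prod_snap /dev/vg_data/lv_prod

# Mount the snapshot read-only for backup or test use; only blocks that
# change after this instant consume space in the snapshot.
mount -o ro /dev/vg_data/lv_prod_snap /mnt/snap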

Solutions need to be able to scale in terms of performance, functionality, connectivity, resiliency, and ease of management without introducing instability. There are many approaches to implement and deploy storage virtualization functionality to meet various requirements. The best solution is going to be the one that meets customers' specific requirements and may vary by customers' different tiers of storage and applications. Table 8-3 provides a basic comparison between various approaches based on SNIA's virtualization tutorial.

Table 8-3 Comparison of Storage Virtualization Levels

Reference List

"Common RAID Disk Drive Format (DDF) standard," SNIA, snia.org: http://www.snia.org/tech_activities/standards/curr_standards/ddf/

SNIA Dictionary: http://www.snia.org/education/dictionary

SNIA Storage Virtualization tutorial I and II: http://www.snia.org/education/storage_networking_primer/stor_virt

N-Port Virtualization in the Data Center: http://www.cisco.com/c/en/us/products/collateral/storagenetworking/mds-9000-nx-os-software-release-4-1/white_paper_c11-459263.html

Moore, Fred G. Storage Virtualization for IT Flexibility. Sun Microsystems, TechTarget, 2006.

Santana, Gustavo A. A. Data Center Virtualization Fundamentals. Indianapolis: Cisco Press, 2013.

Clark, Tom. Storage Virtualization—Technologies for Simplifying Data Storage and Management. Indianapolis: Addison-Wesley Professional, 2005.

Farley, Marc. Storage Networking Fundamentals. Indianapolis: Cisco Press, 2004.

Long, James. Storage Networking Protocol Fundamentals, Volume 1 and 2. Indianapolis: Cisco Press, 2006.

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 8-4 lists a reference of these key topics and the page numbers on which each is found.

Table 8-4 Key Topics

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

storage-area network (SAN)
Redundant Array of Inexpensive Disks (RAID)
virtual storage-area network (VSAN)
operating system (OS)
Logical Unit Number (LUN)
virtual LUN (vLUN)
application-specific integrated circuit (ASIC)
host bus adapter (HBA)
thin provisioning
Storage Networking Industry Association (SNIA)
software-defined storage (SDS)
metadata
in-band
out-of-band
virtual tape library (VTL)
disk-to-disk-to-tape (D2D2T)
N-Port virtualizer (NPV)
N-Port ID virtualization (NPIV)
virtual file system (VFS)
asynchronous and synchronous mirroring
host-based virtualization
network-based virtualization
array-based virtualization

Chapter 9. Fibre Channel Storage-Area Networking

This chapter covers the following exam topics:

Perform Initial Setup on Cisco MDS Switch
Verify SAN Switch Operations
Verify Name Server Login
Configure and Verify VSAN
Configure and Verify Zoning

The foundation of the data center network is the network software that runs the switches in the network. Cisco NX-OS Software is the network software for the Cisco MDS 9000 family and Cisco Nexus family data center switching products. Formerly known as Cisco SAN-OS, starting from release 4.1 it is rebranded as Cisco MDS 9000 NX-OS Software. It is based on a secure, stable, and standard Linux core, providing a modular and sustainable base for the long term.

Chapter 7, "Cisco MDS Product Family," discussed Cisco MDS 9000 Software features. This chapter goes directly into the practical configuration steps of the Cisco MDS product family and discusses topics relevant to the Introducing Cisco Data Center Technologies (DCICT) certification. It explains how to configure Cisco MDS 9000 Series multilayer switches and how to update software licenses. In this chapter we build our storage-area network starting from the initial setup configuration on Cisco MDS 9000 switches. It also describes how to verify virtual storage-area networks (VSAN), the fabric login, zoning, and the fabric domain using the command-line interface.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 9-1 "Do I Know This Already?" Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment; you may want to quickly review the chapter afterward. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. How does the On-Demand Port Activation license work on the Cisco MDS 9148S 16G FC switch?
a. The base configuration is 20 ports of 16 Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.
b. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port activation license to support 24, 36, or 48 ports.
c. The base switch model comes with 24 ports enabled and can be upgraded as needed with a 12-port activation license to support 36 or 48 ports.
d. The base switch model comes with 8 ports and can be upgraded to models of 16, 32, or 48 ports.

2. Which MDS switch models support the PowerOn Auto Provisioning with NX-OS 6.2(9)?
a. MDS 9710
b. MDS 9148
c. MDS 9148S
d. MDS 9706

3. During the initial setup script of MDS 9500, which interfaces can be configured with the IPv6 address?
a. MGMT 0 interface
b. All FC interfaces
c. VSAN interface
d. Out-of-band management interface
e. None

4. Which of the following switches support the Console port, COM1 port, and MGMT interface? (Choose all the correct answers.)
a. MDS 9250i
b. MDS 9508
c. MDS 9513
d. MDS 9148S
e. MDS 9706

5. During the initial setup script of MDS 9706, how can you configure in-band management?
a. Via answering yes to Configuring Advanced IP Options.
b. Via enabling SSH service.
c. There is no configuration option for in-band management on MDS family switches.
d. MDS 9706 family director switch does not support in-band management.

6. Which is the correct option for the boot sequence?
a. System—BIOS—Loader—Kickstart
b. System—Kickstart—BIOS—Loader
c. BIOS—Loader—Kickstart—System
d. BIOS—Loader—System—Kickstart

7. Which of the following options is the correct configuration for in-band management of MDS 9250i?
a.
Click here to view code image
switch(config)# interface vsan 1
switch(config-if)# no shutdown
b.
Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ip address 10.22.2.2 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.22.2.1
c.
Click here to view code image
switch(config)# interface mgmt0
switch(config-if)# ipv6 enable
switch(config-if)# ipv6 address 2001:0db8:800:200c::417a/64
switch(config-if)# no shutdown
d.
Click here to view code image
switch(config-if)# (no) switchport mode F
switch(config-if)# (no) switchport mode auto

8. Which NX-OS CLI command suspends a specific VSAN's traffic?
a. vsan 4 name SUSPEND
b. no suspend vsan 4
c. vsan 4 suspend
d. vsan no suspend vsan 4

9. In which of the following conditions for two MDS switches is the trunking state (ISL) and port mode E?
a. Switch 1: Switchport mode E, trunk mode off. Switch 2: Switchport mode E, trunk mode off.
b. Switch 1: Switchport mode E, trunk mode on. Switch 2: Switchport mode E, trunk mode auto.
c. Switch 1: Switchport mode E, trunk mode auto. Switch 2: Switchport mode E, trunk mode auto.
d. Switch 1: Switchport mode E, trunk mode off. Switch 2: Switchport mode F, trunk mode auto.
e. No correct answer exists.

10. Which of the following options are valid member types for a zone configuration on the MDS 9222i switch?
a. pWWN
b. IPv6 address
c. MAC address of the MGMT 0 interface
d. FC ID

Foundation Topics

Where Do I Start?

The Cisco MDS 9000 family of storage networking products supports 1-, 2-, 4-, 8-, 10-, and 16-Gbps Fibre Channel, 10-Gbps Fibre Channel over Ethernet (FCoE), and 1- and 10-Gbps Fibre Channel over IP (FCIP) on Gigabit Ethernet ports. Before starting any configuration, the equipment should be installed and mounted onto the racks. For each MDS product family switch there is a specific hardware installation guide, which explains in detail how and where to install specific components of the MDS switches. This chapter starts with explaining what to do after the switch is installed and powered on following the hardware installation guide. In Chapter 7 we described all the details about the MDS 9000 product family. The MDS multilayer directors have dual supervisor modules, and each of the supervisor modules has one management and one console port. The MDS multiservice and multilayer switches have one management and one console port as well.

The Cisco MDS family switches provide the following types of ports:

Console port (supervisor modules): The console port is available for all the MDS product family. The console port, labeled "Console," is an RS-232 port with an RJ-45 interface. It is an asynchronous (async) serial port; any device connected to this port must be capable of asynchronous transmission. This port is used to create a local management connection to set the IP address and other initial configuration settings before connecting the switch to the network for the first time. To connect the console port to a computer terminal, follow these steps:

1. Connect the console cable (a rollover RJ-45 to RJ-45 cable) to the console port and to the RJ-45 to DB-9 adapter or the RJ-45 to DB-25 adapter (depending on your computer) at the computer serial port.

2. Configure the terminal emulator program to match the following default port characteristics: 9600 baud, 8 data bits, 1 stop bit, no parity.

COM1 port: This is an RS-232 port that you can use to connect to an external serial communication device such as a modem. The COM1 port (labeled "COM1") has a DB-9 interface. It is available on each supervisor module of MDS 9500 Series switches; the COM1 port is available for all the MDS product family except MDS 9700 director switches. To connect the COM1 port to a modem, follow these steps:

1. Connect the supplied RJ-45 to DB-9 female adapter or RJ-45 to DB-25 female adapter (depending on your computer) to the computer serial port.

2. Connect the modem to the COM1 port using the adapters and cables:
a. Connect the DB-9 serial adapter to the COM1 port.
b. Connect the RJ-45 to DB-25 modem adapter to the modem.
c. Connect the adapters using the RJ-45-to-RJ-45 rollover cable (or equivalent crossover cable).

The default COM1 settings are as follows:

line Aux:
  Speed: 9600 bauds
  Databits: 8 bits per byte
  Stopbits: 1 bit(s)
  Parity: none
  Modem In: Enable
  Modem Init-String - default: ATE0Q1&D2&C1S0=1\015
  Statistics: tx:17 rx:0
  Register Bits: RTS|DTR

MGMT 10/100/1000 Ethernet port (supervisor module): This is an Ethernet port that you can use to access and manage the switch by IP address, such as through Cisco Data Center Network Manager (DCNM). The MGMT Ethernet port is available for all the MDS product family. The supervisor modules support an autosensing MGMT 10/100/1000 Ethernet port (labeled "MGMT 10/100/1000") with an RJ-45 interface. You can connect the MGMT 10/100/1000 Ethernet port to an external hub, switch, or router. You need to use a modular, RJ-45, straight-through UTP cable to connect the MGMT 10/100/1000 Ethernet port to an Ethernet switch port or hub, or use a crossover cable to connect to a router interface. The active supervisor module owns the IP address used by both of these Ethernet connections. On a switchover, the newly activated supervisor module takes over this IP address. This process requires an Ethernet connection to the newly activated supervisor module.

Note
For high availability, connect the MGMT 10/100/1000 Ethernet port on the active supervisor module and on the standby supervisor module to the same network or VLAN.
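For the console connection described above, any terminal emulator that can speak 9600 baud, 8N1 will do. As a minimal sketch, with the common screen utility on a Linux or macOS workstation (the device path is an assumption and varies by USB-to-serial adapter):

# Open a serial session at the MDS default settings (9600 baud, 8N1).
# /dev/ttyUSB0 is an assumed device path; check your adapter's actual name.
screen /dev/ttyUSB0 9600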

Fibre Channel ports (switching modules): These are Fibre Channel (FC) ports that you can use to connect to the SAN or for in-band management.

Fibre Channel over Ethernet ports (switching modules): These are Fibre Channel over Ethernet (FCoE) ports that you can use to connect to the SAN or for in-band management.

Gigabit Ethernet ports (IP storage ports): These are IP Storage Services ports that are used for FCIP or iSCSI connectivity.

USB drive/port: This is a simple interface that allows you to connect to different devices supported by NX-OS. The Cisco MDS 9700 Series switch has two USB drives (in each Supervisor-1 module). In the supervisor module, there are two USB drives, Slot 0 and LOG FLASH; the LOG FLASH and Slot 0 USB ports use different formats for their data. The USB drive is not available for MDS 9100 Series switches.

Let's also review the transceivers before we continue with the configuration. The Cisco SFP, SFP+, and X2 devices are hot-swappable transceivers that plug into Cisco MDS 9000 family director switching modules and fabric switch ports. They allow you to choose different cabling types and distances on a port-by-port basis. The Cisco MDS 9000 family supports both Fibre Channel and FCoE protocols for SFP+ transceivers. You can use any combination of SFP and SFP+ transceivers that are supported by the switch. The only restrictions are that short wavelength (SWL) transceivers must be paired with SWL transceivers, and long wavelength (LWL) transceivers with LWL transceivers, and the cable must not exceed the stipulated cable length for reliable communications. Each transceiver must match the transceiver on the other end of the cable.

The SFP hardware transmitters are identified by their acronyms when displayed in the show interface brief command. If the related SFP has a Cisco-assigned extended ID, the show interface and show interface brief commands display the ID instead of the transmitter type. The show interface transceiver command and the show interface fc slot/port transceiver command display both values for Cisco-supported SFPs. The most up-to-date information on pluggable transceivers can be found here: http://www.cisco.com/c/en/us/products/collateral/storage-networking/mds-9000-series-multilayerswitches/product_data_sheet09186a00801bc698.html.

What are the differences between SFP+, X2, and XFP? SFP, SFP+, X2, and XFP are all terms for a type of transceiver that plugs into a special port on a switch or other network device to convert the port to a copper or fiber interface. SFP and SFP+ specifications are developed by the Multiple Source Agreement (MSA) group. They both have the same size and appearance but are based on different standards: SFP is based on the INF-8074i specification, and SFP+ is based on the SFF-8431 specification. SFP+ is in compliance with the IEEE 802.3ae, SFF-8431, and SFF-8432 specifications. XFP is based on the standard of the XFP MSA. XFP and SFP+ are both 10G fiber optical modules and can connect with other types of 10G modules; SFP+ can connect with the same type of XFP, X2, and XENPAK directly. XFP was in time displaced by the SFP+ 10 G modules, and SFP+ is now the mainstream design. An advantage of the SFP+ modules is that SFP+ has a more compact form factor package than X2 and XFP. The size of SFP+ is smaller than XFP; thus it moves some functions to the motherboard, including the signal modulation function, MAC, CDR, and EDC. The cost of SFP+ is lower than XFP, X2, and XENPAK, and SFP has the lowest cost among all three.
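In practice, the transceiver checks described above come down to two commands on the switch. This is a minimal sketch; fc1/1 is an illustrative port number, not one from this chapter:

switch# show interface brief
switch# show interface fc1/1 transceiver

The first command shows the transmitter acronym (or the Cisco-assigned extended ID, if present) per port; the second shows both values for Cisco-supported SFPs on the given interface.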

To get the most up-to-date technical specifications for fiber optics per the current standards and specs, such as maximum supportable distances and attenuation for optical fiber applications by fiber type, check the Fiber Optic Association (FOA) page at http://www.thefoa.org/tech/Linkspec.htm.

The Cisco MDS 9000 family switches can be accessed and configured in many ways, and they support standard management protocols. In Chapter 5, "Management and Monitoring of Cisco Nexus Devices," we discussed Nexus management and monitoring features. The Cisco MDS product family uses the same operating system, NX-OS, and the Nexus product line and MDS family switches use mostly the same management features. Figure 9-1 portrays the tools for configuring the Cisco NX-OS software.

Figure 9-1 Tools for Configuring Cisco NX-OS Software

The different protocols that are supported in order to access, monitor, and configure the Cisco MDS 9000 family of switches are described in Table 9-2.
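Most of the CLI rows in Table 9-2 boil down to a remote session against the management address. As a minimal sketch, assuming SSH was enabled during setup and using the same placeholder style as the setup script below:

workstation$ ssh admin@ip_address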

Table 9-2 Protocols to Access, Monitor, and Configure the Cisco MDS Family

The Cisco MDS NX-OS Setup Utility—Back to Basics

The Cisco MDS NX-OS setup utility is an interactive command-line interface (CLI) mode that guides you through a basic (also called a startup) configuration of the system. The setup utility allows you to build an initial configuration file using the System Configuration dialog. The setup starts automatically when a device has no configuration file in NVRAM, and the dialog guides you through initial configuration. After the file is created, you can use the CLI to perform additional configuration. The setup utility allows you to configure only enough connectivity for system management.

You can press Ctrl+C at any prompt to skip the remaining configuration options and proceed with what you have configured up to that point. If you want to skip answers to any questions, press Enter; the only answer you cannot skip is the administrator password. If a default answer is not available (for example, the device hostname), the device uses what was previously configured and skips to the next question. Figure 9-2 portrays the setup script flow.

Figure 9-2 Setup Script Flow

You use the setup utility mainly for configuring the system initially, when no configuration is present. However, you can use the setup utility at any time for basic device configuration. The setup utility keeps the configured values when you skip steps in the script. For example, if you have already configured the mgmt0 interface, the setup utility does not change that configuration if you skip that step. However, if there is a default value for the step, the setup utility changes the configuration using that default, not the configured value. Be sure to carefully check the configuration changes before you save the configuration.

Note
Be sure to configure the IPv4 route, the default network IPv4 address, and the default gateway IPv4 address to enable SNMP access. If you enable IPv4 routing, the device uses the IPv4 route and the default network IPv4 address. If IPv4 routing is disabled, the device uses the default gateway IPv4 address. The setup script only supports IPv4.

Before starting the setup utility, you must execute the following steps:

Step 1. Have a password strategy for your network environment.

Step 2. Connect the console port on the supervisor module to the network. If you have dual supervisor modules, connect the console ports on both supervisor modules to the network.

Step 3. Connect the Ethernet management port on the supervisor module to the network. If you have dual supervisor modules, connect the Ethernet management ports on both supervisor modules to the network.

The first time you access a switch in the Cisco MDS 9000 family, it runs a setup program that

prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. This information is required to configure and manage the switch. The IP address can only be configured from the CLI. After you perform this step, the Cisco MDS 9000 Family Cisco Prime DCNM can reach the switch through the console port.

There are two types of management through the MDS NX-OS CLI. You can configure out-of-band management on the mgmt0 interface; the in-band management logical interface is VSAN 1. The in-band management interface uses the Fibre Channel infrastructure to transport IP traffic. An interface for VSAN 1 is created on every switch in the fabric, and each switch should have its VSAN 1 interface configured with either an IPv4 address or an IPv6 address in the same subnetwork. A default route that points to the switch providing access to the IP network should be configured on every switch in the Fibre Channel fabric.

The following are the initial setup procedure steps for configuring out-of-band management on the mgmt0 interface:

Step 1. Power on the switch. Switches in the Cisco MDS 9000 family boot automatically.

Step 2. Enter yes (yes is the default) to enforce a secure password standard.

Click here to view code image
Do you want to enforce secure password standard (yes/no): yes

A secure password should contain characters from at least three of the classes: lowercase letters, uppercase letters, digits, and special characters. You can also enable a secure password standard using the password strength-check command.

Step 3. Enter the password for the administrator.

Click here to view code image
Enter the password for admin: admin-password
Confirm the password for admin: admin-password

Step 4. Enter yes to enter the setup mode.

Click here to view code image
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.

Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

The setup utility guides you through the basic configuration process. Press Ctrl+C at any prompt to end the configuration process.

Step 5. Enter yes (no is the default) to create additional accounts. While configuring your initial setup, you can create an additional user account (in the network-admin role) besides the administrator's account.

Click here to view code image
Create another login account (yes/no) [no]: yes

Click here to view code image
Enter the user login ID: user_name
Enter the password for user_name: user-password
Confirm the password for user_name: user-password
Enter the user role [network-operator]: network-admin

By default, two roles exist in all switches:

a. Network operator (network-operator): Has permission to view the configuration only. The operator cannot make any configuration changes.

b. Network administrator (network-admin): Has permission to execute all commands and make configuration changes. The administrator can also create and customize up to 64 additional roles. One of these 64 additional roles can be configured during the initial setup process.

Step 6. Configure the read-only or read-write SNMP community string.

a. Enter yes (no is the default) to configure the read-only SNMP community string.

Click here to view code image
Configure read-only SNMP community string (yes/no) [n]: yes

b. Enter the SNMP community string.

Click here to view code image
SNMP community string: snmp_community

Step 7. Enter a name for the switch. The switch name is limited to 32 alphanumeric characters. The default switch name is "switch".

Click here to view code image
Enter the switch name: switch_name

Step 8. Enter yes (yes is the default) at the configuration prompt to configure out-of-band management. By default, the setup script supports only IP version 4 (IPv4) for the management interface; IP version 6 (IPv6) is supported in Cisco MDS NX-OS Release 4.1(x) and later.

Click here to view code image
Continue with out-of-band (mgmt0) management configuration? [yes/no]: yes

Enter the mgmt0 IPv4 address.

Click here to view code image
Mgmt0 IPv4 address: ip_address

Enter the mgmt0 IPv4 subnet mask.

Click here to view code image
Mgmt0 IPv4 netmask: subnet_mask

Step 9.

Enter yes (yes is the default) to configure the default gateway.

Click here to view code image
Configure the default-gateway: (yes/no) [y]: yes

Enter the default gateway IP address.

Click here to view code image
IP address of the default gateway: default_gateway

Step 10. Enter yes (no is the default) to configure advanced IP options, such as in-band management, static routes, default network, DNS, and domain name.

Click here to view code image
Configure Advanced IP options (yes/no)? [n]: yes

a. Enter no (no is the default) at the in-band management configuration prompt.

Click here to view code image
Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no

b. Enter yes (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image
Enable ip routing capabilities? (yes/no) [y]: yes

c. Enter yes (yes is the default) to configure a static route.

Click here to view code image
Configure static route: (yes/no) [y]: yes

Enter the destination prefix.

Click here to view code image
Destination prefix: dest_prefix

Enter the destination prefix mask.

Click here to view code image
Destination prefix mask: dest_mask

Enter the next hop IP address.

Click here to view code image
Next hop ip address: next_hop_address

d. Enter yes (yes is the default) to configure the default network.

Click here to view code image
Configure the default-network: (yes/no) [y]: yes

Enter the default network IPv4 address.

Click here to view code image
Default network IP address [dest_prefix]: dest_prefix

e. Enter yes (yes is the default) to configure the DNS IPv4 address.

Click here to view code image
Configure the DNS IP address? (yes/no) [y]: yes

Enter the DNS IP address.

Click here to view code image
DNS IP address: name_server
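Before continuing with the remaining prompts, note that the answers given so far map to ordinary NX-OS configuration commands, the same lines the script writes into the final configuration shown later in this section. A sketch using the script's own placeholder names:

switch# configure terminal
switch(config)# ip default-gateway default_gateway
switch(config)# ip routing
switch(config)# ip route dest_prefix dest_mask next_hop_address
switch(config)# ip default-network dest_prefix
switch(config)# ip name-server name_server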

f. Enter yes (no is the default) to configure the default domain name.

Click here to view code image
Configure the default domain name? (yes/no) [n]: yes

Enter the default domain name.

Click here to view code image
Default domain name: domain_name

Step 11. Enter yes (no is the default) to enable the Telnet service.

Click here to view code image
Enable the telnet service? (yes/no) [n]: yes

Step 12. Enter yes (yes is the default) to enable the SSH service.

Click here to view code image
Enabled SSH service? (yes/no) [n]: yes

Enter the SSH key type.

Click here to view code image
Type the SSH key you would like to generate (dsa/rsa)? rsa

Enter the number of key bits within the specified range.

Click here to view code image
Enter the number of key bits? (768-2048) [1024]: 2048

Step 13. Enter yes (yes is the default) to configure congestion or no_credit drop for FC interfaces.

Click here to view code image
Configure congestion or no_credit drop for fc interfaces? (yes/no) [q/quit] to quit [y]: yes

Step 14. Enter con (con is the default) to configure congestion or no_credit drop.

Click here to view code image
Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]: con

Step 15. Enter a value from 100 to 1000 (d is the default) to calculate the number of milliseconds for congestion or no_credit drop.

Click here to view code image
Enter number of milliseconds for congestion/no_credit drop[100 - 1000]
or [d/default] for default: 100

Step 16. Enter a mode for congestion or no_credit drop.

Click here to view code image
Enter mode for congestion/no_credit drop[E/F]:

Step 17. Enter yes (no is the default) to configure the NTP server.

Click here to view code image
Configure NTP server? (yes/no) [n]: yes

Enter the NTP server IPv4 address.

Click here to view code image
NTP server IP address: ntp_server_IP_address

Step 18. Enter shut (shut is the default) to configure the default switch port interface to the shut (disabled) state.

Click here to view code image
Configure default switchport interface state (shut/noshut) [shut]: shut

The management Ethernet interface is not shut down at this point. Only the Fibre Channel, FCIP, iSCSI, and Gigabit Ethernet interfaces are shut down.

Step 19. Enter on (off is the default) to configure the switch port trunk mode.

Click here to view code image
Configure default switchport trunk mode (on/off/auto) [off]: on

Step 20. Enter yes (yes is the default) to configure the switchport mode F.

Click here to view code image
Configure default switchport mode F (yes/no) [n]: y

Step 21. Enter on (off is the default) to configure the port-channel auto-create state.

Click here to view code image
Configure default port-channel auto-create state (on/off) [off]: on

Step 22. Enter permit (deny is the default) to configure the default zone policy. This permits traffic flow to all members of the default zone.

Click here to view code image
Configure default zone policy (permit/deny) [deny]: permit

Step 23. Enter yes (no is the default) to enable a full zone set distribution. This overrides the switch-wide default for the full zone set distribution feature.

Click here to view code image
Enable full zoneset distribution (yes/no) [n]: yes

Step 24. Enter enhanced (basic is the default) to configure the default-zone mode as enhanced. You see the new configuration; review and edit the configuration that you have just entered.

Click here to view code image

Configure default zone mode (basic/enhanced) [basic]: enhanced

This overrides the switch-wide default zone mode as enhanced.

Step 25. Enter no (no is the default) if you are satisfied with the configuration. The following configuration will be applied:

Click here to view code image
username admin password admin_pass role network-admin
username user_name password user_pass role network-admin
snmp-server community snmp_community ro
switchname switch
interface mgmt0
ip address ip_address subnet_mask
no shutdown
ip routing
ip route dest_prefix dest_mask dest_address
ip default-network dest_prefix
ip default-gateway default_gateway
ip name-server name_server
ip domain-name domain_name
telnet server disable
ssh key rsa 2048 force
ssh server enable
ntp server ipaddr ntp_server
system default switchport shutdown
system default switchport trunk mode on
system default switchport mode F
system default port-channel auto-create
zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
system default zone mode enhanced

Would you like to edit the configuration? (yes/no) [n]: n

Step 26. Enter yes (yes is the default) to use and save this configuration.

Click here to view code image
Use this configuration and save it? (yes/no) [y]: yes

If you do not save the configuration at this point, none of your changes are updated the next time the switch is rebooted. Type yes to save the new configuration. This ensures that the kickstart and system images are also automatically configured.

Step 27. Log in to the switch using the new username and password.

Step 28. Verify that the switch is running the latest Cisco NX-OS 6.2(x) software, depending on which you installed, by issuing the show version command. Example 9-1 portrays the show version display output.

Example 9-1 Verifying NX-OS Version on an MDS 9710 Switch

Click here to view code image
MDS-9710-1# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac

Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.php
Copyright (c) 2002-2013, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php

Software
  BIOS: version 3.1.0
  kickstart: version 6.2(3)
  system: version 6.2(3)
  BIOS compile time: 02/27/2013
  kickstart image file is: bootflash:///kickstart
  kickstart compile time: 7/10/2013 2:00:00 [07/31/2013 10:10:16]
  system image file is: bootflash:///system
  system compile time: 7/10/2013 2:00:00 [07/31/2013 11:59:38]

Hardware
  cisco MDS 9710 (10 Slot) Chassis ("Supervisor Module-3")
  Intel(R) Xeon(R) CPU with 8120784 kB of memory.
  Processor Board ID JAE171504KY

  Device name: MDS-9710-1
  bootflash: 3915776 kB
  slot0: 0 kB (expansion flash)

Kernel uptime is 0 day(s), 0 hour(s), 39 minute(s), 20 second(s)

Last reset
  Reason: Unknown
  System version: 6.2(3)
  Service:

plugin
  Core Plugin, Ethernet Plugin

Step 29. Verify that the required licenses are installed in the switch using the show license command.

Step 30. Verify the status of the modules on the switch using the show module command (see Example 9-2).

Example 9-2 Verifying Modules on an MDS 9710 Switch

Click here to view code image
MDS-9710-1# show module
Mod  Ports  Module-Type                          Model            Status
---  -----  -----------------------------------  ---------------  ----------
1    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
2    48     2/4/8/10/16 Gbps Advanced FC Module  DS-X9448-768K9   ok
5    0      Supervisor Module-3                  DS-X97-SF1-K9    active *
6    0      Supervisor Module-3                  DS-X97-SF1-K9    ha-standby

Mod  Sw      Hw
---  ------  ---
1    6.2(3)  1.0
2    6.2(3)  1.0
5    6.2(3)  1.0
6    6.2(3)  1.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    1c-df-0f-78-cf-d8 to 1c-df-0f-78-cf-db  JAE171308VQ
2    1c-df-0f-78-d8-50 to 1c-df-0f-78-d8-53  JAE1714019Y
5    1c-df-0f-78-df-05 to 1c-df-0f-78-df-17  JAE171504KY

6    1c-df-0f-78-df-2b to 1c-df-0f-78-df-3d  JAE171504E6

Mod  Online Diag Status
---  ------------------
1    Pass
2    Pass
5    Pass
6    Pass

Xbar  Ports  Module-Type      Model          Status
----  -----  ---------------  -------------  ------
1     0      Fabric Module 1  DS-X9710-FAB1  ok
2     0      Fabric Module 1  DS-X9710-FAB1  ok
3     0      Fabric Module 1  DS-X9710-FAB1  ok

Xbar  Sw  Hw
----  --  ---
1     NA  1.0
2     NA  1.0
3     NA  1.0

Xbar  MAC-Address(es)  Serial-Num
----  ---------------  -----------
1     NA               JAE1710085T
2     NA               JAE1710087T
3     NA               JAE1709042X

The following are the initial setup procedure steps for configuring the in-band management logical interface on VSAN 1. You can configure both in-band and out-of-band management together by entering yes in both step 10a here and the out-of-band prompts earlier in this procedure:

Step 10. Enter yes (no is the default) to configure advanced IP options.

Click here to view code image
Configure Advanced IP options (yes/no)? [n]: yes

a. Enter yes (no is the default) at the in-band management configuration prompt.

Click here to view code image
Continue with in-band (VSAN1) management configuration? (yes/no) [no]: yes

Enter the VSAN 1 IPv4 address.

Click here to view code image
VSAN1 IPv4 address: ip_address

Enter the IPv4 subnet mask.

Click here to view code image
VSAN1 IPv4 net mask: subnet_mask

b. Enter no (yes is the default) to enable IPv4 routing capabilities.

Click here to view code image
Enable ip routing capabilities? (yes/no) [y]: no

c. Enter no (yes is the default) to configure a static route.

Click here to view code image
Configure static route: (yes/no) [y]: no

d. Enter no (yes is the default) to configure the default network.

Click here to view code image
Configure the default-network: (yes/no) [y]: no

e. Enter no (yes is the default) to configure the DNS IPv4 address.

Click here to view code image
Configure the DNS IP address? (yes/no) [y]: no

f. Enter no (no is the default) to skip the default domain name configuration.

Click here to view code image
Configure the default domain name? (yes/no) [n]: no

In the final configuration, the following CLI commands will be added to the configuration:

Click here to view code image
interface vsan1
ip address ip_address subnet_mask
no shutdown

The Power On Auto Provisioning

POAP (Power On Auto Provisioning) automates the process of upgrading software images and installing configuration files on Cisco MDS and Nexus switches that are being deployed in the network. Starting with NX-OS 6.2(9), the POAP capability is available on Cisco MDS 9148 and MDS 9148S multilayer fabric switches (at the time of writing this book).

When you power up a switch for the first time, it loads the software image that is installed at manufacturing and tries to find a configuration file from which to boot. When a configuration file is not found, POAP mode starts. During startup, a prompt appears, asking if you want to abort POAP and continue with a normal setup. You can choose to exit or continue with POAP. If you exit POAP mode, you enter the normal interactive setup script. If you continue in POAP mode, all the front-panel interfaces are set up in the default configuration.

When a Cisco MDS switch with the POAP feature boots and does not find the startup configuration, and the USB device on the MDS 9148 or MDS 9148S does not contain the required installation files, the switch enters POAP mode, locates the DCNM DHCP server, and bootstraps itself with its interface IP address, gateway, and DCNM DNS server IP addresses. It also obtains the IP address of the DCNM server to download the configuration script that is run on the switch to download and install the appropriate software image and device configuration file.

The POAP setup requires the following network infrastructure: a DHCP server to bootstrap the interface IP address, gateway, and DNS (Domain Name System) server; a TFTP server that contains the configuration script used to automate the software image installation and configuration process; and one or more servers that contain the desired software images and configuration files. Figure 9-3 portrays the network infrastructure for POAP.

Figure 9-3 POAP Network Infrastructure

Here are the steps to set up the network environment using POAP:

Step 1. (Optional) Put the POAP configuration script and any other desired software image and switch configuration files on a USB device that is accessible to the switch.

Step 2. Deploy a DHCP server and configure it with the interface, gateway, and TFTP server IP addresses and a boot file with the path and name of the configuration script file (see the sketch at the end of this section). This information is provided to the switch when it first boots.

Step 3. Deploy a TFTP server to host the configuration script.

Step 4. Deploy one or more servers to host the software images and configuration files.

After you set up the network environment for POAP, follow these steps to configure the switch using POAP:

Step 1. Install the switch in the network.

Step 2. Power on the switch.

Step 3. (Optional) If you want to exit POAP mode and enter the normal interactive setup script, enter y (yes).

To verify the configuration after bootstrapping the device using POAP, use one of the following commands:

show running-config: Displays the running configuration
show startup-config: Displays the startup configuration
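For the DHCP server deployed in Step 2, the configuration amounts to handing out an address, a gateway, a TFTP server, and the script filename. The following is a minimal sketch assuming an ISC DHCP server (dhcpd); all addresses and the script name are illustrative placeholders, not values from this chapter:

# /etc/dhcp/dhcpd.conf (fragment)
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.150;            # pool for switches booting in POAP mode
  option routers 10.0.0.1;                # gateway handed to the switch
  option domain-name-servers 10.0.0.2;    # DNS server
  next-server 10.0.0.3;                   # TFTP server IP address
  filename "poap_script.py";              # path/name of the configuration script
}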

Licensing Cisco MDS 9000 Family NX-OS Software Features

Licenses are available for all switches in the Cisco MDS 9000 family. Licensing allows you to access specified premium features on the switch after you install the appropriate license for that feature. The licensing model defined for the Cisco MDS product line has two options:

Feature-based licenses allow features that are applicable to the entire switch, such as the Cisco MDS 9000 Enterprise Package. The cost varies based on a per-switch usage.

Module-based licenses allow features that require additional hardware modules. An example is the IPS-8 or IPS-4 module using the FCIP feature. The cost varies based on a per-module usage.

Some features are logically grouped into add-on packages that must be licensed separately, such as the Cisco MDS 9000 Enterprise Package, SAN Extension over IP Package, Mainframe Package, DCNM Packages, DMM Package, IOA Package, and XRC Acceleration Package. Any feature not included in a license package is bundled with the Cisco MDS 9000 family switches. On-demand port activation licenses are also available for the Cisco MDS 9000 family blade switches and 4-Gbps Cisco MDS 9100 Series multilayer fabric switches. You can also obtain licenses to activate ports on the Cisco MDS 9124 Fabric Switch, the Cisco MDS 9134 Fabric Switch, the Cisco Fabric Switch for HP c-Class BladeSystem, and the Cisco Fabric Switch for IBM BladeCenter. All licensed features may be evaluated for up to 120 days before the license expires.

Cisco MDS stores license keys on the chassis serial PROM (SPROM), so license keys are never lost even during a software reinstallation. Devices with dual supervisors have the following additional high-availability features: the license software runs on both supervisor modules and provides failover protection, and the license key file is mirrored on both supervisor modules. Even if both supervisor modules fail, the license file continues to function from the version that is available on the chassis.

If an administrative error does occur with licensing, the switch provides a grace period before the unlicensed features are disabled. The grace period allows plenty of time to correct the licensing issue. Cisco Data Center Network Manager for SAN includes a centralized license management console that provides a single interface for managing licenses across all Cisco MDS switches in the fabric. This single console reduces management overhead and prevents problems because of improperly maintained licensing.

License Installation

Cisco license packages require a simple installation of an electronic license: no software installation or upgrade is required. You can either obtain a factory-installed license (only applies to new device orders) or perform a manual license installation (applies to existing devices in your network).

Obtaining a Factory-Installed License: You can obtain factory-installed licenses for a new Cisco NX-OS device. The first step is to contact the reseller or Cisco representative and request this service. The second step is using the device and the licensed features.

Performing a Manual Installation: If you have existing devices or if you want to install the licenses on your own, you must first obtain the license key file and then install that file in the device.

Obtaining the License Key File consists of the following steps:

Step 1. Obtain the serial number for your device by entering the show license host-id command. The host ID is also referred to as the device serial number.

Click here to view code image
mds9710-1# show license host-id
License hostid: VDH=JAF1710BBQH

Step 2. Obtain your claim software license certificate document. If you cannot locate your software license claim certificate, contact Cisco Technical Support at this URL: http://www.cisco.com/en/US/support/tsd_cisco_worldwide_contacts.html.

Step 3. Locate the product authorization key (PAK) from the software license claim certificate document.

Step 4. Locate the website URL from the software license claim certificate document. You can access the Product License Registration website from the Software Download website at this URL: http://www.cisco.com/cisco/web/download/index.html.

Step 5. Follow the instructions on the Product License Registration website to register the license for your device.

Installing the License Key File consists of the following steps:

Step 1. Log in to the device through the console port of the active supervisor.

Step 2. Copy the license to the switch. You can use the file transfer service (tftp, ftp, sftp, or scp) or use the Cisco Fabric Manager License Wizard under Fabric Manager Tools. Use the copy licenses command to save your license file to either the bootflash: directory or a slot0: device.

Step 3. Perform the installation by using the install license command on the active supervisor module from the device console.

Click here to view code image
switch# install license bootflash:license_file.lic
Installing license ..done

Step 4. (Optional) Back up the license key file.

Step 5. Exit the device console and open a new terminal session to view all license files installed on the device using the show license command, as shown in Example 9-3.

Example 9-3 show license Output on MDS 9222i Switch

Click here to view code image
MDS-9222i# show license
MDS20090713084124110.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT ENTERPRISE_PKG cisco 1.0 permanent uncounted \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSES-INTRL</SKU> \
HOSTID=VDH=FOX1244H0U4 \

.0 permanent 1 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALLLICENSES-INTRL</SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>2</LicLineID> \ <PAK></PAK>" SIGN=9287E36C708C INCREMENT SAN_EXTN_OVER_IP cisco 1. To uninstall the installed license file.0 permanent 2 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-M92EXT1AK9=</SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20130812093101868</LicFileID><LicLineID>1</LicLineID> \ <PAK></PAK>" SIGN=DD075A320298 When you enable a Cisco NX-OS software feature. you can use clear license filename command. Example 9-4 show license brief Output on MDS 9222i Switch Click here to view code image MDS-9222i# show license brief MDS20090713084124110.lic Clearing license Enterprise.lic MDS201308120931018680. where filename is the name of the installed license key file.lic MDS-9222i# show license file MDS201308120931018680.lic MDS201308120931018680.. You can use the show license brief command to display a list of license files installed on the device (see Example 9-4).NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>1</LicLineID> \ <PAK></PAK>" SIGN=D2ABC826DB70 INCREMENT STORAGE_SERVICES_ENABLER_PKG cisco 1.lic: SERVER this_host ANY VENDOR cisco . Click here to view code image switch# show license usage ENTERPRISE_PKG Application ----------ivr qos_manager ----------- You can use the show license usage command to identify all the active features. it can activate a license grace period.0 permanent 1 \ VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9200-ALL-LICENSESINTRL</ SKU> \ HOSTID=VDH=FOX1244H0U4 \ NOTICE="<LicFileID>20090713084124110</LicFileID><LicLineID>3</LicLineID> \ <PAK></PAK>" SIGN=C03E97505672 <snip> .lic: SERVER this_host ANY VENDOR cisco INCREMENT SAN_EXTN_OVER_IP_18_4 cisco 1. Click here to view code image switch# clear license Enterprise.

If the license is time bound, you must obtain and install an updated license. You need to contact technical support to request an updated license. After obtaining the license file, you can update the license file by using the update license url command, where url specifies the bootflash:, slot0:, usb1:, or usb2: location of the updated license file.

When you enable a Cisco NX-OS software feature, it can activate a license grace period. You can enable the grace period feature by using the license grace-period command:

Click here to view code image
switch# configure terminal
switch(config)# license grace-period

Verifying the License Configuration
To display the license configuration, use one of the following commands:

show license: Displays information for all installed license files.

show license brief: Displays summary information for all installed license files.

show license file: Displays information for a specific license file.

show license host-id: Displays the host ID for the physical device.

show license usage: Displays the usage information for installed licenses.

On-Demand Port Activation Licensing
The Cisco MDS 9250i is available in a base configuration of 20 ports (upgradeable to 40 through the On-Demand Port Activation license) of 16-Gbps Fibre Channel, 2 ports of 10 Gigabit Ethernet for FCIP and iSCSI storage services, and 8 ports of 10 Gigabit Ethernet for FCoE connectivity.

The Cisco MDS 9148S 16G Multilayer Fabric Switch, a compact one-rack-unit (1RU) switch, scales from 12 to 48 line-rate 16-Gbps Fibre Channel ports. The base switch model comes with 12 ports enabled and can be upgraded as needed with the 12-port Cisco MDS 9148S On-Demand Port Activation license to support configurations of 24, 36, or 48 enabled ports. Customers have the option of purchasing preconfigured models of 16, 32, or 48 enabled ports and upgrading the 16- and 32-port models onsite all the way to 48 ports by adding these licenses. On the Cisco MDS 9148, the On-Demand Port Activation license allows the addition of eight 8-Gbps ports.

You can use the show license usage command (see Example 9-5) to view any licenses assigned to a switch. If a license is in use, the status displayed is In use. If a license is installed but no ports have acquired a license, the status displayed is Unused. The PORT_ACTIVATION_PKG does not appear as installed if you have only the default license installed.

Example 9-5 Verifying License Usage on an MDS 9148 Switch
Click here to view code image
MDS-9148# show license usage
Feature              Ins  Lic    Status  Expiry Date  Comments
                          Count
----------------------------------------------------------------------------
FM_SERVER_PKG        No   -      Unused       -       -
ENTERPRISE_PKG       No   -      Unused       -       Grace expired

PORT_ACTIVATION_PKG  No   16     In use       never   -

Example 9-6 shows the default port license configuration for the Cisco MDS 9148 switch.

Example 9-6 Verifying Port-License Usage on an MDS 9148 Switch
Click here to view code image
bdc-mds9148-2# show port-license
Available port activation licenses are 0
-----------------------------------------------
Interface  Cookie     Port Activation License
-----------------------------------------------
fc1/1      16777216   acquired
fc1/2      16781312   acquired
fc1/3      16785408   acquired
fc1/4      16789504   acquired
fc1/5      16793600   acquired
fc1/6      16797696   acquired
fc1/7      16801792   acquired
fc1/8      16805888   acquired
fc1/9      16809984   acquired
fc1/10     16814080   acquired
fc1/11     16818176   acquired
fc1/12     16822272   acquired
fc1/13     16826368   acquired
fc1/14     16830464   acquired
fc1/15     16834560   acquired
fc1/16     16838656   acquired
fc1/17     16842752   eligible
<snip>
fc1/46     16961536   eligible
fc1/47     16965632   eligible
fc1/48     16969728   eligible

There are three different statuses for port licenses, which are described in Table 9-3.

Table 9-3 Port Activation License Status Definitions

Making a Port Eligible for a License
By default, all ports are eligible to receive a license. However, if a port has already been made ineligible and you prefer to activate it, you must make that port eligible by using the port-license command.

1. configure terminal
2. interface fc slot/port
3. [no] port-license
4. exit
5. (Optional) show port-license
6. (Optional) copy running-config startup-config

Acquiring a License for a Port
If you prefer not to accept the default on-demand port license assignments, you will need to first acquire licenses for ports to which you want to move the license.

1. configure terminal
2. interface fc slot/port
3. [no] port-license acquire
4. exit
5. (Optional) show port-license
6. (Optional) copy running-config startup-config

Moving Licenses Among Ports
You can move a license from a port (or range of ports) at any time. If you attempt to move a license to a port and no license is available, the switch returns the message "port activation license not available."

1. configure terminal
2. interface fc slot/port
3. shutdown
4. no port-license
5. exit
6. interface fc slot/port
7. shutdown
8. port-license acquire
9. no shutdown
10. exit
11. (Optional) show port-license
12. (Optional) copy running-config startup-config

Cisco MDS 9000 NX-OS Software Upgrade and Downgrade
Each switch is shipped with the Cisco MDS NX-OS operating system for Cisco MDS 9000 family switches. The Cisco MDS NX-OS software consists of two images: the kickstart image and the system image. The images and variables are important factors in any install procedure. You must specify the variable and the respective image to upgrade or downgrade your switch. To select the kickstart image, use the KICKSTART variable. To select the system image, use the SYSTEM variable. Both images are not always required for each install.
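To see which image files these variables currently point to, you can display the boot variables. The following output is a hypothetical sketch (the filenames are placeholders, not from the upgrade example later in this section):

switch# show boot
kickstart variable = bootflash:/m9500-sf2ek9-kickstart-mz.6.2.5.bin
system variable = bootflash:/m9500-sf2ek9-mz.6.2.5.bin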

The software image install procedure is dependent on the following factors:

Software images: The kickstart and system image files reside in directories or folders that can be accessed from the Cisco MDS 9000 family switch prompt.

Image version: Each image file has a version.

Flash disks on the switch: The bootflash: resides on the supervisor module, and the CompactFlash disk is inserted into the slot0: device.

Supervisor modules: There are single or dual supervisor modules.

Before starting any software upgrade or downgrade, the user should review the NX-OS Release Notes. In the Release Notes, there are specific sections that explain compatibility between different software versions and hardware modules. In some cases, the user may need to take a specific path to perform nondisruptive software upgrades or downgrades.

Compatibility is established based on the image and configuration:

Image incompatibility: The running image and the image to be installed are not compatible.

Configuration incompatibility: There is a possible incompatibility if certain features in the running image are turned off because they are not supported in the image to be installed. The image to be installed is considered incompatible with the running image if one of the following statements is true:

An incompatible feature is enabled in the image to be installed and it is not available in the running image and may cause the switch to move into an inconsistent state. In this case, the incompatibility is strict.

An incompatible feature is enabled in the image to be installed and it is not available in the running image and does not cause the switch to move into an inconsistent state. In this case, the incompatibility is loose.

If the active and the standby supervisor modules run different versions of the image, both images may be High Availability (HA) compatible in some cases and incompatible in others. On switches with dual supervisor modules, both supervisor modules must have Ethernet connections on the management interfaces (mgmt 0) to maintain connectivity when switchovers occur during upgrades and downgrades.

Note
What is a nondisruptive NX-OS upgrade or downgrade? Nondisruptive upgrades on the Cisco MDS fabric switches take down the control plane for not more than 80 seconds, but the data plane remains up. In some cases, the control plane is down, so new devices will be unable to log in to the fabric via the control plane, but existing devices will not experience any disruption of traffic via the data plane.

If the running image and the image you want to install are incompatible, the software reports the incompatibility. In some cases, you may decide to proceed with this installation. In some cases, when the upgrade has progressed past the point at which it cannot be stopped gracefully, or if a failure occurs, the software upgrade may be disruptive. To view the results of a dynamic compatibility check, issue the show incompatibility system bootflash:filename command. Use this command to obtain further information when the install all

command returns the following message:

Click here to view code image
Warning: The startup config contains commands not supported by the standby supervisor; as a result, some resources might become unavailable after a switchover.
Do you wish to continue? (y/n) [y]: n

You can upgrade any switch in the Cisco MDS 9000 Family using one of the following methods:

Automated, one-step upgrades using the install all command. This upgrade is nondisruptive for director switches. The install all command launches a script that greatly simplifies the boot procedure and checks for errors and the upgrade impact before proceeding. The install all command compares and presents the results of the compatibility before proceeding with the installation. You can exit if you do not want to proceed with these changes.

Quick, one-step upgrade using the reload command. This upgrade is disruptive. Before running the reload command, copy the correct kickstart and system images to the correct location and change the boot commands in your configuration.
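For the reload method, setting the boot commands might look like the following minimal sketch; the image filenames are placeholders and are assumed to already be on bootflash:

switch# configure terminal
! Point the boot variables at the new kickstart and system images
switch(config)# boot kickstart bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch(config)# boot system bootflash:m9500-sf2ek9-mz.6.2.5.bin
switch(config)# exit
! Save the configuration so the new boot variables survive the reboot
switch# copy running-config startup-config
switch# reload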

When the Cisco MDS 9000 Series multilayer switch is first switched on or during reboot, the system BIOS on the supervisor module first runs power-on self-test (POST) diagnostics. The BIOS then runs the loader bootstrap function. The boot parameters are held in NVRAM and point to the location and name of both the kickstart and system images. The loader obtains the location of the kickstart file, usually on bootflash, and verifies the kickstart image before loading it. The kickstart loads the Linux kernel and device drivers and then needs to load the system image; the boot parameters in NVRAM should point to the location and name of the system image, usually on bootflash. The kickstart then verifies and loads the system image. Finally, the system image loads the Cisco NX-OS software, checks the file systems, and proceeds to load the startup configuration that contains the switch configuration from NVRAM. If the boot parameters are missing or have an incorrect name or location, the boot process fails at the last stage. If this error happens, the administrator must recover from the error and reload the switch. Figure 9-4 portrays the boot sequence.

Figure 9-4 Boot Sequence

For the MDS director switches with dual supervisor modules, the upgrade sequence steps are as follows:

1. Upgrade the BIOS on the active and standby supervisor modules and the data modules.
2. Bring up the standby supervisor module with the new kickstart and system images.
3. Switch over from the active supervisor module to the upgraded supervisor module.
4. Bring up the old active supervisor module with the new kickstart and system images.
5. Perform a nondisruptive image upgrade for each data module (one at a time).
6. Upgrade complete.

Upgrading to Cisco NX-OS on an MDS 9000 Series Switch
During any firmware upgrade, use the console connection. Be aware that if you are upgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the upgrade. In this section we discuss firmware upgrade steps for a director switch with dual supervisor modules. For MDS switches with only one supervisor module, Steps 7 and 8 will not be executed.

To upgrade the switch, use the latest Cisco MDS NX-OS software on the Cisco MDS 9000 director series switch, and follow these steps:

Step 1. Verify the following physical connections for the new Cisco MDS 9500 family switch: the console port is physically connected to a computer terminal (or terminal server), and the management 10/100 Ethernet port (mgmt0) is connected to an external hub, switch, or router.

Step 2. Issue the copy running-config startup-config command to store your current running configuration. You can also create a backup of your existing configuration to a file by issuing the copy running-config bootflash:backup_config.txt command.

Step 3. Install licenses (if necessary) to ensure that the required features are available on the

switch.

Step 4. Ensure that the required space is available in the bootflash: directory for the image file(s) to be copied using the dir bootflash: command.

Step 5. If you need more space on the active supervisor module bootflash, delete unnecessary files to make space available. Use the delete bootflash: filename command to remove unnecessary files.

Click here to view code image
switch# del m9500-sf2ek9-kickstart-mz.5.2.2.bin
switch# del m9500-sf2ek9-mz.5.2.2.bin

Step 6. Verify that space is available on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch.

Click here to view code image
switch# attach mod x (where x is the module number of the standby supervisor)
switch(standby)# dir bootflash:
12288      Aug 26 19:06:14 2011  lost+found/
16206848   Jul 01 10:54:49 2011  m9500-sf2ek9-kickstart-mz.5.2.2.bin
16604160   Jul 01 10:20:07 2011  m9500-sf2ek9-kickstart-mz.6.2.5.bin
Usage for bootflash://sup-local
122811392 bytes used
61748224 bytes free
184559616 bytes total
switch(standby)# exit (to return to the active supervisor)

Step 7. If you need more space on the standby supervisor module bootflash on a Cisco MDS 9500 Series switch, delete unnecessary files to make space available.

Click here to view code image
switch(standby)# del bootflash: m9500-sf2ek9-kickstart-mz.5.2.2.bin
switch(standby)# del bootflash: m9500-sf2ek9-mz.5.2.2.bin

Step 8. Access the Software Download Center, and select the required Cisco MDS NX-OS Release 6.2(x) image file, depending on which you are installing.

Step 9. Download the files to an FTP or TFTP server.

Step 10. Copy the Cisco MDS NX-OS kickstart and system images to the active supervisor module bootflash using FTP or TFTP.

Click here to view code image
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-kickstart-mz.6.2.5.bin bootflash:m9500-sf2ek9-kickstart-mz.6.2.5.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9500-sf2ek9-mz.6.2.5.bin bootflash:m9500-sf2ek9-mz.6.2.5.bin

Step 11. Verify that your switch is running compatible hardware by checking the Release Notes.

Step 12. Verify that the switch is running the required software version by issuing the show version command.

Step 13. Perform the upgrade by issuing the install all command.

Click here to view code image
switch# install all kickstart m9500-sf2ek9-kickstart-mz.6.2.5.bin system m9500-sf2ek9-mz.6.2.5.bin ssi m9000-ek9-ssi-mz.6.2.5.bin

The install all process verifies all the images before installation and detects incompatibilities. The process checks configuration compatibility. After information is provided about the impact of the upgrade before it takes place, the script will check whether you want to continue with the upgrade:

Do you want to continue with the installation (y/n)? [n] y

If the input is entered as y, the install will continue. You can display the status of a nondisruptive upgrade by using the show install all status command. The output displays the status only after the switch has rebooted with the new image. All actions preceding the reboot are not captured in this output, because when you enter the install all command using a Telnet session, the session is disconnected when the switch reboots. When you can reconnect to the switch through a Telnet session, the upgrade might already be complete, in which case the output will display the status of the upgrade.

Downgrading Cisco NX-OS Release on an MDS 9500 Series Switch
During any firmware downgrade, use the console connection. Be aware that if you are downgrading through the management interface, you must have a working connection to both supervisors, because this process causes a switchover and the current standby supervisor will be active after the downgrade. In this section we discuss firmware downgrade steps for an MDS director switch with dual supervisor modules. For MDS switches with only one supervisor module, Steps 7 and 8 will not be executed.

When downgrading any switch in the Cisco MDS 9000 family, avoid using the reload command. You should first read the Release Notes and follow the steps for specific NX-OS downgrade instructions. Use the install all command to downgrade the switch and handle configuration conversions. Here we outline the major steps for the downgrade:

Step 1. Verify that the system image files for the downgrade are present on the active supervisor module bootflash with the dir bootflash: command.

Step 2. If the software image file is not present, download it from an FTP or TFTP server to the active supervisor module bootflash. If you need more space on the active supervisor module bootflash, use the delete command to remove unnecessary files and ensure that the required space is available on the active supervisor. (Refer to Step 5 in the upgrade section.) If you need more space on the standby supervisor module bootflash, delete unnecessary files to make space available. (Refer to Steps 6 and 7 in the upgrade section.)

Click here to view code image
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-kickstart-mz.6.2.9.bin bootflash:m9700-sf3ek9-kickstart-mz.6.2.9.bin
switch# copy tftp://tftpserver.cisco.com/MDS/m9700-sf3ek9-mz.6.2.9.bin bootflash:m9700-sf3ek9-mz.6.2.9.bin

Step 3. Ensure that the required space is available on the active supervisor.

Click here to view code image
switch# dir bootflash:

Step 4. Issue the show incompatibility system command (see Example 9-7) to determine whether you need to disable any features not supported by the earlier release.

Example 9-7 show incompatibility system Output on MDS 9500 Switch
Click here to view code image
switch# show incompatibility system bootflash:m9500-sf2ek9-mz.2.1.1a.bin
The following configurations on active are incompatible with the system image
1) Service : port-channel
Capability : CAP_FEATURE_AUTO_CREATED_41_PORT_CHANNEL
Description : auto create enabled ports or auto created port-channels are present
Capability requirement : STRICT
Disable command :
1. Disable autocreate on interfaces (no channel-group auto)
2. Convert autocreated port channels to be persistent (port-channel 1 persistent)

Step 5. Disable any features that are incompatible with the downgrade system image.

Click here to view code image
switch# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# interface fcip 31
switch(config-if)# no channel-group auto
switch(config-if)# end
switch# port-channel 127 persistent
switch#

Step 6. Save the configuration using the copy running-config startup-config command.

Click here to view code image
switch# copy running-config startup-config

Step 7. Issue the install all command to downgrade the software.

Click here to view code image
switch# install all kickstart bootflash:m9500-sf2ek9-kickstart-mz.2.1.1a.bin.S74 system bootflash:m9500-sf2ek9-mz.2.1.1a.bin.S74

Step 8. Verify the status of the modules on the switch using the show module command.

Configuring Interfaces
The main function of a switch is to relay frames from one data link to another. To relay the frames, the characteristics of the interfaces through which the frames are received and sent must be defined. The configured interfaces can be Fibre Channel interfaces, Gigabit Ethernet interfaces, the management interface (mgmt0), or VSAN interfaces. Each physical Fibre Channel interface in a switch may operate in one of several port modes: E port, F port, FL port, TL port, TE port, SD port, ST port, and B port (see Figure 9-5). Besides these modes, each interface may be configured in auto or Fx port modes. These two modes determine the port type during interface initialization. In Chapter 6, "Data Center Storage Architecture," we discussed the different Fibre Channel port types.

Interfaces are created in VSAN 1 by default. When a module is removed and replaced with the same type of module, the configuration is retained. If a different type of module is inserted, the original configuration is no longer retained.

Figure 9-5 Cisco MDS 9000 Family Switch Port Modes

The interface state depends on the administrative configuration of the interface and the dynamic state of the physical link. In Table 9-4 the interface states are summarized.

Table 9-4 Interface States

Graceful Shutdown
Interfaces on a port are shut down by default (unless you modified the initial configuration). The Cisco NX-OS software implicitly performs a graceful shutdown in response to either of the following actions for interfaces operating in the E port mode:

If you shut down an interface.

If a Cisco NX-OS software application executes a port shutdown as part of its function.

When a shutdown is triggered either by you or the Cisco NX-OS software, the switches connected to the shutdown link coordinate with each other to ensure that all frames in the ports are safely sent through the link before shutting down. This enhancement reduces the chance of frame loss. A graceful shutdown ensures that no frames are lost when the interface is shutting down.

Port Administrative Speeds
By default, the port administrative speed for an interface is automatically calculated by the switch. Autosensing speed is enabled on all 4-Gbps and 8-Gbps switching module interfaces by default. This configuration enables the interfaces to operate at speeds of 1 Gbps, 2 Gbps, or 4 Gbps on the 4-Gbps switching modules, and 8 Gbps on the 8-Gbps switching modules. When autosensing is enabled for an interface operating in dedicated rate mode, 4 Gbps of bandwidth is reserved, even if the port negotiates at an operating speed of 1 Gbps or 2 Gbps.

Frame Encapsulation
The switchport encap eisl command applies only to SD (SPAN destination) port interfaces. This command determines the frame format for all frames transmitted by the interface in SD port mode. If the encapsulation is set to EISL, all outgoing frames are transmitted in the EISL frame format, regardless of the SPAN sources. An SD port monitors network traffic that passes through a Fibre Channel interface. Monitoring is performed using a standard Fibre Channel analyzer (or a similar switch probe) that is attached to the SD port. In SD port mode, the interface functions as a SPAN.

Bit Error Thresholds
The bit error rate threshold is used by the switch to detect an increased error rate before performance degradation seriously affects traffic. The bit errors can occur for the following reasons:

Faulty or bad cable
Faulty or bad GBIC or SFP
GBIC or SFP is specified to operate at 1 Gbps but is used at 2 Gbps
GBIC or SFP is specified to operate at 2 Gbps but is used at 4 Gbps
Short haul cable is used for long haul or long haul cable is used for short haul
Momentary sync loss
Loose cable connection at one or both ends
Improper GBIC or SFP connection at one or both ends

A bit error rate threshold is detected when 15 error bursts occur in a 5-minute period. By default, the switch disables the interface when the threshold is reached. You can enter a shutdown and no shutdown command sequence to re-enable the interface.

Local Switching
Local switching can be enabled in Generation 4 modules, which allows traffic to be switched directly with a local crossbar when the traffic is directed from one port to another on the same line card. By using local switching, an extra switching step is avoided, which decreases the latency. When using local switching, note the following guidelines:

All ports need to be in shared mode, which usually is the default state. To place a port in shared mode, enter the switchport rate-mode shared command.

E ports are not allowed in the module because they must be in dedicated mode.

Note
Local switching is not supported on the Cisco MDS 9710 switch.

Dedicated and Shared Rate Modes
Ports on Cisco MDS 9000 Family line cards are grouped into port groups that have a fixed amount of bandwidth per port group. The Cisco MDS 9000 Family allows for the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. Ports in the port group can have bandwidth dedicated to them, or ports can share a pool of bandwidth.

When planning port bandwidth requirements, allocation of the bandwidth within the port group is important. For ports that require high sustained bandwidth, such as ISL ports, storage and tape array ports, and ports on high-bandwidth servers, you can have bandwidth dedicated to them in a port group by using the switchport rate-mode dedicated command. For other ports, typically servers that access shared storage-array ports (that is, storage ports that have higher fan-out ratios), you can share the bandwidth in a port group by using the switchport rate-mode shared command. You can also mix dedicated and shared rate ports in a port group. When configuring the ports, be sure not to exceed the available bandwidth in a port group. Table 9-5 lists the bandwidth and port group configuration for Fibre Channel modules.

Table 9-5 Bandwidth and Port Group Configurations for Fibre Channel Modules
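As a short sketch of these two commands (the interface numbers here are placeholders, not from the sample topology):

switch# configure terminal
! Dedicate bandwidth to a port that needs high sustained throughput, such as an ISL
switch(config)# interface fc2/1
switch(config-if)# switchport rate-mode dedicated
switch(config-if)# exit
! Let a typical server-facing port draw from the port group's shared pool
switch(config)# interface fc2/2
switch(config-if)# switchport rate-mode shared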

For example, a Cisco MDS 9513 Multilayer Director with a Fabric 3 module installed, and using a 48-port 8-Gbps Advanced Fibre Channel module, has eight port groups of 6 ports each. Each port group has 32.4 Gbps of bandwidth. You cannot configure all 6 ports of a port group at the 8-Gbps dedicated rates because that would require 48 Gbps of bandwidth, and the port group has only 32.4 Gbps of bandwidth available. You can, however, configure all 6 ports in shared rate mode, so that the ports run at 8 Gbps and are oversubscribed at a rate of 1.48:1 (6 ports × 8 Gbps = 48 Gbps / 32.4 Gbps). This oversubscription rate is well below the oversubscription rate of the typical storage array port (fan-out ratio) and does not affect performance. In Chapter 6, we discussed the fan-out ratio. Most major disk subsystem vendors provide guidelines as to the recommended fan-out ratio of subsystem client-side ports to server connections. These recommendations are often in the range of 7:1 to 15:1.

Slow Drain Device Detection and Congestion Avoidance
All data traffic between end devices in a SAN fabric is carried by Fibre Channel Class 3. In some cases, the traffic is carried by Class 2 services that use link-level, per-hop-based, buffer-to-buffer flow control. These classes of service do not support end-to-end flow control. When there are slow devices attached to the fabric, the end devices do not accept the frames at the configured or negotiated rate. The slow devices lead to ISL credit shortage in the traffic destined for these devices, and they congest the links. The credit shortage affects the unrelated flows in the fabric that use the same ISL link, even though destination devices do not experience slow drain.

This feature provides various enhancements to detect slow drain devices that are causing congestion in the network and also provides a congestion avoidance function. This feature is focused mainly on the edge ports that are connected to slow drain devices. The goal is to avoid or minimize the frames being stuck in the edge ports due to slow drain devices that are causing ISL blockage. To avoid or minimize the stuck condition, configure a lesser frame timeout for the ports. The lesser frame timeout value helps to alleviate the slow drain condition that affects the fabric by dropping the packets on the edge ports sooner than the time they actually get timed out (500 ms). This function frees the buffer space in ISL, which can be used by other unrelated flows that do not experience the slow drain condition. No-credit timeout drops all packets after the slow drain is detected using the configured thresholds.

Note
This feature is used mainly for edge ports that are connected to slow edge devices. Even though this feature can be applied to ISLs as well, we recommend that you apply this feature only for edge F ports and retain the default configuration for ISLs as E and TE ports.

Management Interfaces
You can remotely configure the switch through the management interface (mgmt0). To configure a connection on the mgmt0 interface, you must configure either the IP version 4 (IPv4) parameters (IP address, subnet mask, and default gateway) or the IP version 6 (IPv6) parameters so that the switch is reachable. The management port (mgmt0) is autosensing and operates in full-duplex mode at a speed of

10/100/1000 Mbps. Autosensing supports both the speed and the duplex mode. On a Supervisor-1 module, the default speed is 100 Mbps and the default duplex mode is auto. On a Supervisor-2 module, the default speed is auto and the default duplex mode is auto.

VSAN Interfaces
VSANs apply to Fibre Channel fabrics and enable you to configure multiple isolated SAN topologies within the same physical infrastructure. You can create an IP interface on top of a VSAN and then use this interface to send frames to this VSAN. To use this feature, you must configure the IP address for this VSAN. VSAN interfaces cannot be created for nonexisting VSANs.

Here are the guidelines when creating or deleting a VSAN interface:

Create a VSAN before creating the interface for that VSAN. If a VSAN does not exist, the interface cannot be created.

Create the interface VSAN; it is not created automatically.

If you delete the VSAN, the attached interface is automatically deleted.

Configure each interface only in one VSAN.
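A minimal configuration sketch for the mgmt0 interface and a VSAN interface follows; the addresses are placeholders, and VSAN 47 is assumed to already exist:

switch# configure terminal
! Management interface: IPv4 parameters make the switch reachable remotely
switch(config)# interface mgmt 0
switch(config-if)# ip address 10.2.8.44 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.2.8.1
! VSAN interface: the VSAN itself must be created before this interface
switch(config)# interface vsan 47
switch(config-if)# ip address 192.168.47.1 255.255.255.0
switch(config-if)# no shutdown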

All the NX-OS CLI commands for interface configuration and verification are summarized in Table 9-6.

Table 9-6 Summary of NX-OS CLI Commands for Interface Configuration and Verification

Verify Initiator and Target Fabric Login
In Chapters 6 and 7 we discussed a theoretical explanation of Virtual SAN (VSAN), Fibre Channel concepts, Fibre Channel Services, zoning, and SAN topologies. In this section we discuss the configuration and verification steps to build a Fibre Channel SAN topology. In this chapter we build up the following sample SAN topology together to practice what we have learned so far (see Figure 9-6).

Figure 9-6 Our Sample SAN Topology Diagram

VSAN Configuration
You can achieve higher security and greater stability in Fibre Channel fabrics by using virtual SANs (VSANs) on Cisco MDS 9000 family switches and Cisco Nexus 5000, Nexus 6000, and Nexus 7000 Series switches, and UCS Fabric Interconnect. With VSANs you can create multiple logical SANs over a common physical infrastructure. VSANs provide isolation among devices that are physically connected to the same fabric. Each VSAN can contain up to 239 switches and has an independent address space that allows identical Fibre Channel IDs (FC IDs) to be used simultaneously in different VSANs.

VSANs have the following attributes:

VSAN ID: The VSAN ID identifies the VSAN as the default VSAN (VSAN 1), user-defined VSANs (VSAN 2 to 4093), and the isolated VSAN (VSAN 4094).

State: A VSAN is in the operational state if the VSAN is active and at least one port is up. This state indicates that traffic can pass through this VSAN. This state cannot be configured. The administrative state of a VSAN can be configured to an active (default) or suspended state. The active state of a VSAN indicates that the VSAN is configured and enabled. By enabling a VSAN, you activate the services for that VSAN. The suspended state of a VSAN indicates that the VSAN is configured but not enabled. If a port is configured in this VSAN, it is disabled. All ports in a suspended VSAN are disabled. Use this state to deactivate a VSAN without losing the VSAN's configuration. By suspending a VSAN, you can preconfigure all the VSAN parameters for the whole fabric and activate the VSAN immediately.

VSAN name: This text string identifies the VSAN for management purposes. The name can be from 1 to 32 characters long, and it must be unique across all VSANs. By default, the VSAN name is a concatenation of VSAN and a four-digit string representing the VSAN ID. For example, the default name for VSAN 3 is VSAN0003.

Load balancing attributes: These attributes indicate the use of the source-destination ID (src-dst-id) or the originator exchange OX ID (src-dst-ox-id, the default) for load-balancing path selection.

Interop mode: Interoperability enables the products of multiple vendors to interact with each other. Fibre Channel standards guide vendors toward common external Fibre Channel interfaces. Each vendor has a regular mode and an equivalent interoperability mode, which specifically turns off advanced or proprietary features and provides the product with a more amiable standards-compliant implementation. Cisco NX-OS software supports the following four interop modes:

Mode 1: Standards-based interop mode that requires all other vendors in the fabric to be in interop mode
Mode 2: Brocade native mode (Core PID 0)
Mode 3: Brocade native mode (Core PID 1)
Mode 4: McData native mode

Port VSAN Membership
Port VSAN membership on the switch is assigned on a port-by-port basis. By default, each port belongs to the default VSAN. You can assign VSAN membership to ports using one of two methods:

Statically: By assigning VSANs to ports.

Dynamically: By assigning VSANs based on the device WWN. This method is referred to as dynamic port VSAN membership (DPVM). (DPVM is explained in Chapter 6.) A brief configuration sketch follows this list.
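As an illustration of the dynamic method, a hypothetical DPVM configuration might look like the following sketch; the pWWN and VSAN values are placeholders, and the DPVM feature is assumed to be licensed and available on the switch:

switch# configure terminal
! Enable the DPVM feature before building the database
switch(config)# feature dpvm
switch(config)# dpvm database
! Map a device pWWN to a VSAN; whichever port the device logs in to inherits this VSAN
switch(config-dpvm-db)# pwwn 21:00:00:e0:8b:0a:f0:54 vsan 47
switch(config-dpvm-db)# exit
! Activate the DPVM database so the mappings take effect
switch(config)# dpvm activate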

In Example 9-8, we will be configuring and verifying VSANs on our sample topology that was shown in Figure 9-6.

Example 9-8 VSAN Creation and Verification on MDS-9222i Switch
Click here to view code image
! Initial configuration and entering the VSAN configuration database
MDS-9222i# conf t
Enter configuration commands, one per line. End with CNTL/Z.
MDS-9222i(config)# vsan database
! Creating VSAN 47
MDS-9222i(config-vsan-db)# vsan 47
! Configuring VSAN name for VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A
! Other attributes of a VSAN configuration can be displayed with a question mark
! after the VSAN name
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A ?
<CR>
interop        Interoperability mode value
loadbalancing  Configure loadbalancing scheme
suspend        Suspend vsan
! Load balancing attributes of a VSAN
MDS-9222i(config-vsan-db)# vsan 47 name VSAN-A loadbalancing ?
src-dst-id     Src-id/dst-id for loadbalancing
src-dst-ox-id  Ox-id/src-id/dst-id for loadbalancing(Default)
! To exit configuration mode, issue the command end
MDS-9222i(config-vsan-db)# end
! Verifying the VSAN state; the operational state is down since no FC port is up
MDS-9222i# show vsan 47
vsan 47 information
name:VSAN-A state:active
interoperability mode:default
loadbalancing:src-id/dst-id/oxid
operational state:down
MDS-9222i(config)# vsan database
! Interfaces fc1/1 and fc1/3 are included in VSAN 47
MDS-9222i(config-vsan-db)# vsan 47 interface fc1/1, fc1/3
MDS-9222i(config-vsan-db)# interface fc1/1, fc1/3
! Change the state of the fc interfaces
MDS-9222i(config-if)# no shut
! Verifying status of the interfaces
MDS-9222i(config-if)# show int fc1/1, fc1/3 brief
-------------------------------------------------------------------------------
Interface Vsan Admin  Admin  Status SFP  Oper  Oper   Port
               Mode   Trunk              Mode  Speed  Channel
                      Mode                     (Gbps)
-------------------------------------------------------------------------------
fc1/1     47   FX     on     up     swl  F     2      --
fc1/3     47   FX     on     up     swl  FL    4      --

Note that the Admin Mode is FX (the default), which allows the port to come up in either F mode or FL mode:

F mode: Point-to-point mode for server connections and most storage connections (storage ports in F mode have only one target, associated with the storage array port, logging in to that port).

FL mode: Arbitrated loop mode—storage connections that allow multiple storage targets to log in to a single switch port. We have done this just so that we can see multiple storage targets for zoning.

The last steps for the VSAN configuration will be verifying the VSAN membership and usage (see Example 9-9).

Example 9-9 Verification Steps for VSAN Membership and Usage
Click here to view code image
! Verifying the VSAN membership
MDS-9222i# show vsan membership
vsan 1 interfaces:
fc1/1   fc1/2   fc1/3   fc1/4
fc1/5   fc1/6   fc1/7   fc1/8
fc1/9   fc1/10  fc1/11  fc1/12
fc1/13  fc1/14  fc1/15  fc1/16
fc1/17  fc1/18
vsan 47 interfaces:
fc1/1   fc1/3
vsan 4079(evfp_isolated_vsan) interfaces:
vsan 4094(isolated_vsan) interfaces:
! Verifying VSAN usage
MDS-9222i# show vsan usage
2 vsan configured
configured vsans: 1, 47
vsans available for configuration: 2-46, 48-4093

In a Fibre Channel fabric, each host or disk requires a Fibre Channel ID. The name server functionality maintains a database containing the attributes for all hosts and storage devices in each VSAN. Name servers allow a database entry to be modified by a device that originally registered the information. The name server permits an Nx port to register attributes during a PLOGI (to the name server) to obtain attributes of other hosts. These attributes are deregistered when the Nx port logs out either explicitly or implicitly. One instance of the name server process runs on each switch. In a multiswitch fabric configuration, the name server instances running on each switch share information in a distributed database. The name server stores name entries for all hosts in the FCNS database.

Examine the FLOGI database on a switch that is directly connected to the host HBA and connected ports. Use the show flogi command to verify if a storage device is displayed in the FLOGI table as in the next section. If the required device is displayed in the FLOGI table, the fabric login is successful. In Example 9-10, we continue verifying FCNS and FLOGI databases on our sample SAN topology.

Example 9-10 Verifying FCNS and FLOGI Databases on MDS-9222i Switch
Click here to view code image
! Verifying fc interface status
MDS-9222i(config-if)# show interface fc 1/1
fc1/1 is up
Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
Port WWN is 20:01:00:05:9b:77:0d:c0
Admin port mode is FX, trunk mode is on
snmp link state traps are enabled
Port mode is F, FCID is 0x410100
Port vsan is 47
Speed is 2 Gbps
Rate mode is shared
Transmit B2B Credit is 3
Receive B2B Credit is 16
Receive data field Size is 2112
Beacon is turned off
5 minutes input rate 88 bits/sec, 11 bytes/sec, 0 frames/sec
5 minutes output rate 32 bits/sec, 4 bytes/sec, 0 frames/sec
18 frames input, 1396 bytes
0 discards, 0 errors
0 CRC, 0 unknown class
0 too long, 0 too short
18 frames output, 3632 bytes
0 discards, 0 errors
0 input OLS, 0 LRR, 2 loop inits
1 output OLS, 1 LRR, 1 loop inits
16 receive B2B credit remaining
3 transmit B2B credit remaining
3 low priority transmit B2B credit remaining
Interface last changed at Mon Sep 22 09:55:15 2014

! The port mode is F, and an FCID is automatically assigned to the FC initiator;
! the VSAN 47 domain manager has assigned FCID 0x410100 to the device during FLOGI.
! The buffer-to-buffer credits are negotiated during the port login.
! The initiator is connected to fc1/1; the domain ID is 0x41.
! Verifying flogi database
MDS-9222i(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID     PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/1     47   0x410100 21:00:00:e0:8b:0a:f0:54 20:00:00:e0:8b:0a:f0:54
fc1/3     47   0x4100e8 c5:9a:00:06:2b:04:fa:ce c5:9a:00:06:2b:01:00:04
fc1/3     47   0x4100ef c5:9a:00:06:2b:01:01:04 c5:9a:00:06:2b:01:00:04
Total number of flogi = 3.
! The JBOD FC target contains 2 NL_Ports (disks) and is connected to domain 0x41.
! The FC initiator with the pWWN 21:00:00:e0:8b:0a:f0:54 is attached to fc interface 1/1.
! Verifying name server database
! The assigned FCID 0x410100 and device WWN information are registered in the FCNS
! database and attain fabric-wide visibility.
MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9a:00:06:2b:04:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9a:00:06:2b:01:01:04           scsi-fcp:target
0x410100 N    21:00:00:e0:8b:0a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 3
MDS-9222i(config-if)# show fcns database detail
------------------------
VSAN:47     FCID:0x4100e8
------------------------
port-wwn (vendor) :c5:9a:00:06:2b:04:fa:ce
node-wwn :c5:9a:00:06:2b:01:00:04
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:target
symbolic-port-name :SANBlaze V4.0 Port

symbolic-node-name :
port-type :NL
port-ip-addr :0.0.0.0
fabric-port-wwn :20:03:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface :fc1/3
switch name (IP address) :MDS-9222i (10.2.8.44)
------------------------
VSAN:47     FCID:0x4100ef
------------------------
port-wwn (vendor) :c5:9a:00:06:2b:01:01:04
node-wwn :c5:9a:00:06:2b:01:00:04
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:target
symbolic-port-name :SANBlaze V4.0 Port
symbolic-node-name :
port-type :NL
port-ip-addr :0.0.0.0
fabric-port-wwn :20:03:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :00:00:00:00:00:00:00:00
connected interface :fc1/3
switch name (IP address) :MDS-9222i (10.2.8.44)
------------------------
VSAN:47     FCID:0x410100
! Initiator connected to fc1/1.
! The vendor of the host bus adapter (HBA) is Qlogic, as seen in the vendor pWWN.
------------------------
port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic)
node-wwn :20:00:00:e0:8b:0a:f0:54
class :3
node-ip-addr :0.0.0.0
ipa :ff ff ff ff ff ff ff ff
fc4-types:fc4_features :scsi-fcp:init
symbolic-port-name :
symbolic-node-name :QLA2342 FW:v3.03.25 DVR:v9.1.8.2.6
port-type :N
port-ip-addr :0.0.0.0
fabric-port-wwn :20:01:00:05:9b:77:0d:c0
hard-addr :0x000000
permanent-port-wwn (vendor) :21:00:00:e0:8b:0a:f0:54 (Qlogic)
connected interface :fc1/1
switch name (IP address) :MDS-9222i (10.2.8.44)
Total number of entries = 3

The DCNM Device Manager for MDS 9222i displays the ports in two tabs: one is Summary, and the other is a GUI display of the physical ports in the Device tab (see Figure 9-7).

Figure 9-7 Cisco DCNM Device Manager for MDS 9222i Switch

In Table 9-7 we summarize NX-OS CLI commands that are related to VSAN configuration and verification.

Table 9-7 Summary of NX-OS CLI Commands for VSAN Configuration and Verification

Verify Fibre Channel Zoning on a Cisco MDS 9000 Series Switch
Zoning is used to provide security and to restrict access between initiators and targets on the SAN fabric. With many types of servers and storage devices on the network, security is crucial. For example, if a host were to gain access to a disk that was being used by another host, potentially with a different operating system, the data on this disk could become corrupted. To avoid any compromise of

crucial data within the SAN, zoning allows the user to overlay a security map. This map dictates which devices, namely hosts, can see which targets, thus reducing the risk of data loss.

A zone consists of multiple zone members. Members in a zone can access one another. Members in different zones cannot access one another.

A zone set consists of one or more zones:

A zone set can be activated or deactivated as a single entity across all switches in the fabric.

Only one zone set can be activated at any time.

A zone can be a member of more than one zone set.

Zone Set Distribution: You can distribute full zone sets using one of two methods: one-time distribution or full zone set distribution.

Zone Set Duplication: You can make a copy of a zone set and then edit it without altering the original zone set. You can copy an active zone set from the bootflash: directory, volatile: directory, or slot0: to one of the following areas:

To the full zone set

To a remote location (using FTP, SCP, SFTP, or TFTP)

The active zone set is not part of the full zone set. You cannot make changes to an existing zone set and activate it if the full zone set is lost or is not propagated.

In Example 9-11, we will continue building our sample topology with zoning configuration.

Example 9-11 Configuring Zoning on MDS-9222i Switch
Click here to view code image
! Creating zone in VSAN 47
MDS-9222i(config-if)# zone name zoneA vsan 47
! A zone member can be configured with one of the below options
MDS-9222i(config-zone)# member ?
device-alias        Add device-alias member to zone
domain-id           Add member based on domain-id, port-number
fcalias             Add fcalias to zone
fcid                Add FCID member to zone
fwwn                Add Fabric Port WWN member to zone
interface           Add member based on interface
ip-address          Add IP address member to zone
pwwn                Add Port WWN member to zone
symbolic-nodename   Add Symbolic Node Name member to zone
! Inserting a disk (FC target) into zoneA
MDS-9222i(config-zone)# member pwwn c5:9a:00:06:2b:01:01:04
! Inserting pWWN of an initiator which is connected to fc 1/1 into zoneA
MDS-9222i(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:54
! Verifying whether zone zoneA has the correct members configured
MDS-9222i(config-zone)# show zone name zoneA
zone name zoneA vsan 47
  pwwn c5:9a:00:06:2b:01:01:04
  pwwn 21:00:00:e0:8b:0a:f0:54
! Creating zone set in VSAN 47
MDS-9222i(config-zone)# zoneset name zonesetA vsan 47

MDS-9222i(config-zoneset)# member zoneA
MDS-9222i(config-zoneset)# exit
! Activating the zone set in VSAN 47
MDS-9222i(config)# zoneset activate name zonesetA vsan 47
Zoneset activation initiated. check zone status
! Verifying the active zone set status in VSAN 47
MDS-9222i(config)# show zoneset active
zoneset name zonesetA vsan 47
  zone name zoneA vsan 47
  * fcid 0x4100ef [pwwn c5:9a:00:06:2b:01:01:04]
  * fcid 0x410100 [pwwn 21:00:00:e0:8b:0a:f0:54]
MDS-9222i(config)# show zoneset brief
zoneset name zonesetA vsan 47
  zone zoneA

The asterisks (*) indicate that a device is currently logged in to the fabric (online). A missing asterisk may indicate an offline device or an incorrectly configured zone, possibly a mistyped pWWN.

On the DCNM SAN active zone set, members can be displayed via Logical Domains (on the left pane). Select VSAN 47, and then select ZonesetA (see Figure 9-8). Setting up multiple switches independently and then connecting them into one fabric can be done in DCNM. However, DCNM is really best at viewing and operating on existing fabrics (that is, just about everything except making initial connections of multiple switches into one fabric). In this chapter we are just doing the CLI versions of these commands.

Figure 9-8 Cisco DCNM Zone Members for VSAN 47

About Smart Zoning
Smart zoning implements hard zoning of large zones with fewer hardware resources than was previously required. The traditional zoning method allows each device in a zone to communicate with every other device in the zone. The administrator is required to manage the individual zones according to the zone configuration guidelines. Smart zoning eliminates the need to create single initiator to single target zones. By analyzing device-type information in the FCNS, useful combinations can be implemented at the hardware level by the Cisco MDS NX-OS software, and the combinations that are not used are ignored. For example, initiator-target pairs are configured, but not initiator-initiator. The device type information of each device in a smart zone is automatically populated from the Fibre Channel Name Server (FCNS) database as host, target, or both. This information allows more

efficient utilization of switch hardware by identifying initiator-target pairs and configuring those only in hardware. In the event of a special situation, such as a disk controller that needs to communicate with another disk controller, smart zoning defaults can be overridden by the administrator to allow complete control. Smart zoning can be enabled at the VSAN level but can also be disabled at the zone level.

About LUN Zoning and Assigning LUNs to Storage Subsystems
Logical unit number (LUN) zoning is a feature specific to switches in the Cisco MDS 9000 family. A storage device can have multiple LUNs behind it. If the device port is part of a zone, a member of the zone can access any LUN in the device. With LUN zoning, you can restrict access to specific LUNs associated with a device. LUN masking and mapping restricts server access to specific LUNs. If LUN masking is enabled on a storage subsystem, and if you want to perform additional LUN zoning in a Cisco MDS 9000 family switch, the LUN number for each host bus adapter (HBA) from the storage subsystem should be configured with the LUN-based zone procedure. When LUN 0 is not included within a zone, control traffic to LUN 0 (for example, REPORT_LUNS, INQUIRY) is supported, but data traffic to LUN 0 (for example, READ, WRITE) is denied.

About Enhanced Zoning
The zoning feature complies with the FC-GS-4 and FC-SW-3 standards. Both standards support the basic zoning functionalities explained in Chapter 6 and the enhanced zoning functionalities described in this section. Enhanced zoning uses the same techniques and tools as basic zoning, with a few added commands. For one thing, at the time enhanced zoning is enabled, a VSAN-wide lock, if available, is implicitly obtained for enhanced zoning. Second, changes are either committed or aborted with enhanced zoning, as per standards requirements. Last, all zone and zone set modifications for enhanced zoning include activation. The flow of enhanced zoning, then, differs from that of basic zoning.

By default, the enhanced zoning feature is disabled in all switches in the Cisco MDS 9000 family. The default zoning is basic mode; the default zoning mode can be selected during the initial setup. Enhanced zoning can be turned on per VSAN as long as each switch within that VSAN is enhanced zoning capable. Enhanced zoning needs to be enabled on only one switch within the VSAN in the existing SAN topology; the command will be propagated to the other switches within the VSAN automatically.

To enable enhanced zoning in a VSAN, follow these steps:

Step 1. switch# config t (enter the configuration mode)

Step 2. switch(config)# zone mode enhanced vsan 222
Zoning mode is changed to enhanced for VSAN 222.

Step 3. Check the zone status to verify the change. The zone status can be verified with the show zone status command.

To disable enhanced zoning for a specific VSAN, use the following command:

Click here to view code image
switch(config)# no zone mode enhanced vsan 222

The zone mode will display enhanced for enhanced mode, and the default zone distribution policy is full for

enhanced mode. In basic zone mode, the default zone distribution policy is active. Enabling enhanced zoning does not perform a zone set activation.

Click here to view code image
switch# show zone status vsan 222
VSAN: 222 default-zone: deny distribute: full Interop: default
mode: enhanced merge-control: allow
session: none
hard-zoning: enabled broadcast: enabled
smart-zoning: disabled
rscn-format: fabric-address
Default zone:
qos: none broadcast: disabled ronly: disabled
Full Zoning Database :
DB size: 44 bytes
Zonesets:0 Zones:0 Aliases: 0 Attribute-groups: 1
Active Zoning Database :
Database Not Available

The rules for enabling enhanced zoning are as follows:

Enhanced zoning needs to be enabled on only one switch in the VSAN of an existing converged SAN fabric. Enabling it on multiple switches within the same VSAN can result in failure to activate properly.

The switch that is chosen to initiate the migration to enhanced zoning will distribute its full zone database to the other switches in the VSAN, thereby overwriting the destination switches' full zone set database.
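To illustrate the session-based flow of enhanced zoning, a hypothetical change-and-commit sequence might look like the following sketch; the zone name and member pWWN are placeholders:

switch# configure terminal
! In enhanced mode, zoning changes are staged in a session under a VSAN-wide lock
switch(config)# zone name zoneB vsan 222
switch(config-zone)# member pwwn 21:00:00:e0:8b:0a:f0:55
switch(config-zone)# exit
! Commit distributes the pending changes to the fabric and releases the lock
switch(config)# zone commit vsan 222
! Alternatively, discard the pending changes and release the lock:
! switch(config)# no zone commit vsan 222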

In Table 9-8, we discuss the advantages of using enhanced zoning.

Table 9-8 Advantages of Enhanced Zoning

Let's return to our sample SAN topology; so far we have managed to build only separate SAN islands (see Figure 9-9).

Figure 9-9 SAN Topology with CLI Configuration

On MDS 9222i we configured two VSANs: VSAN 47 and VSAN 48. Both VSANs are active, and their operational states are up. (See Example 9-12.)

Example 9-12 Verifying Initiator and Target Logins
Click here to view code image
MDS-9222i(config-if)# show interface fc1/1, fc1/3, fc1/4 brief
-------------------------------------------------------------------------------
Interface Vsan Admin  Admin  Status SFP  Oper  Oper   Port
               Mode   Trunk              Mode  Speed  Channel
                      Mode                     (Gbps)
-------------------------------------------------------------------------------
fc1/1     47   FX     on     up     swl  F     2      --
fc1/3     47   FX     on     up     swl  FL    4      --
fc1/4     48   FX     on     up     swl  FL    4      --
MDS-9222i(config-if)# show fcns database
VSAN 47:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9a:00:06:2b:04:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9a:00:06:2b:01:01:04           scsi-fcp:target
0x410100 N    21:00:00:e0:8b:0a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 3
VSAN 48:
--------------------------------------------------------------------------

FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x4100e8 NL   c5:9b:00:06:2b:14:fa:ce           scsi-fcp:target
0x4100ef NL   c5:9b:00:06:2b:02:01:04           scsi-fcp:target
Total number of entries = 2
MDS-9148(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID     PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/1     48   0x420000 21:01:00:e0:8b:2a:f0:54 20:01:00:e0:8b:2a:f0:54
Total number of flogi = 1.
MDS-9148(config-if)# show fcns database
VSAN 48:
--------------------------------------------------------------------------
FCID     TYPE PWWN                    (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x420000 N    21:01:00:e0:8b:2a:f0:54 (Qlogic)  scsi-fcp:init
Total number of entries = 1

The zoning features that require the Enterprise package license are LUN zoning, read-only zones, zone-based traffic prioritizing, and zone-based FC QoS.

Table 9-9 summarizes NX-OS CLI commands for zone configuration and verification.

.

.

.

.

.

and HBAs. Trunking enables interconnect ports to transmit and receive frames in more than one VSAN.Table 9-9 Summary of NX-OS CLI Commands for Zone Configuration and Verification VSAN Trunking and Setting Up ISL Port VSAN trunking is a feature specific to switches in the Cisco MDS 9000 Family. Trunking is supported on E ports and F ports. over the same physical link. . Figure 9-10 represents the possible trunking scenarios in a SAN with MDS core switches. third-party core switches. NPV switches.

Figure 9-10 Trunking F Ports

The trunking feature includes the following key concepts:

TE port: If trunk mode is enabled in an E port and that port becomes operational as a trunking E port, it is referred to as a TE port.

TF port: If trunk mode is enabled in an F port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking F port, it is referred to as a TF port.

TN port: If trunk mode is enabled (not currently supported) in an N port (see the link 1b in Figure 9-10) and that port becomes operational as a trunking N port, it is referred to as a TN port.

TNP port: If trunk mode is enabled in an NP port (see the link 2 in Figure 9-10) and that port becomes operational as a trunking NP port, it is referred to as a TNP port.

TF PortChannel: If trunk mode is enabled in an F port channel (see the link 4 in Figure 9-10) and that port channel becomes operational as a trunking F port channel, it is referred to as a TF port channel. Cisco Port Trunking Protocol (PTP) is used to carry tagged frames.

TF-TN port link: A single link can be established to connect an F port to an HBA to carry tagged frames (see the links 1a and 1b in Figure 9-10) using Exchange Virtual Fabrics Protocol (EVFP). A server can reach multiple VSANs through a TF port without inter-VSAN routing (IVR).

TF-TNP port link: A single link can be established to connect a TF port to a TNP port using the PTP protocol to carry tagged frames (see the link 2 in Figure 9-10). PTP is used because PTP also supports trunking port channels.

A Fibre Channel VSAN is called Virtual Fabric and uses a VF_ID in place of the VSAN ID. By default, the VF_ID is 1 for all ports. When an N port supports trunking, a pWWN is defined for each VSAN and called a logical pWWN. In the case of MDS core switches, the pWWNs for which the N port requests additional FC_IDs are called virtual pWWNs.

In Table 9-10, we summarize supported trunking protocols.

Table 9-10 Supported Trunking Protocols

By default, the trunking protocol is enabled on E ports and disabled on F ports. If the trunking protocol is disabled on a switch, no port on that switch can apply new trunk configurations. Existing trunk configurations are not affected. The TE port continues to function in trunk mode, but only supports traffic in VSANs that it negotiated with previously (when the trunking protocol was enabled). Also, other switches that are directly connected to this switch are similarly affected on the connected interfaces. In some cases, you may need to merge traffic from different port VSANs across a nontrunking ISL. If so, disable the trunking protocol.

Trunk Modes

By default, trunk mode is enabled on all Fibre Channel interfaces (Mode: E, F, FL, Fx, ST, and SD) on non-NPV switches. On NPV switches, trunk mode is disabled by default. You can configure trunk mode as on (enabled), off (disabled), or auto (automatic). The trunk mode configuration at the two ends of an ISL, between two switches, determines the trunking state of the link and the port modes at both ends. The preferred configuration on the Cisco MDS 9000 family switches is one side of the trunk set to auto and the other side set to on. When connected to a third-party switch, the trunk mode configuration on E ports has no effect; the ISL is always in a trunking disabled state. In the case of F ports, if the third-party core switch ACCs the physical FLOGI with the EVFP bit configured, then the EVFP protocol enables trunking on the link. In Table 9-11, we summarize the trunk mode status between switches.
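As a small, hedged illustration of the preferred configuration just described (the interface number is arbitrary), one end of the ISL is pinned to trunk mode on while the other negotiates in auto:

! Switch A: trunk mode on
MDS-A(config)# interface fc1/5
MDS-A(config-if)# switchport trunk mode on

! Switch B: trunk mode auto; the link then comes up in the trunking (TE) state
MDS-B(config)# interface fc1/5
MDS-B(config-if)# switchport trunk mode auto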

Table 9-11 Trunk Mode Status Between Switches

Each Fibre Channel interface has an associated trunk-allowed VSAN list. In TE-port mode, frames are transmitted and received in one or more VSANs specified in this list. By default, the VSAN range (1 through 4093) is included in the trunk-allowed list. The trunk-allowed VSANs configured for TE, TF, and TNP links are used by the trunking protocol to determine the allowed active VSANs in which frames can be received or transmitted. If a trunking enabled E port is connected to a third-party switch, the trunking protocol ensures seamless operation as an E port.

There are a couple of important guidelines and limitations for the trunking feature: F ports support trunking in Fx mode, and MDS does not enforce the uniqueness of logical pWWNs across VSANs.

Port Channel Modes

Port channels aggregate multiple physical ISLs into one logical link with higher bandwidth and port resiliency for both Fibre Channel and FICON traffic. Up to 16 expansion ports (E-ports) or trunking E-ports (TE-ports) can be bundled into a PortChannel. ISL ports can reside on any switching module, and they do not need a designated master port. With this feature, if a port or a switching module fails, the port channel continues to function properly without requiring fabric reconfiguration.

You can configure each port channel with a channel group mode parameter to determine the port channel protocol behavior for all member ports in this channel group. The possible values for a channel group mode are as follows:

ON (default): The member ports only operate as part of a port channel or remain inactive. In this mode, the port channel protocol is not initiated. However, if a port channel protocol frame is received from a peer port, the software indicates its nonnegotiable status.

ACTIVE: The member ports initiate port channel protocol negotiation with the peer port(s) regardless of the channel group mode of the peer port. If the peer port, while configured in a channel group, does not support the PortChannel protocol, or responds with a nonnegotiable status, it will default to the ON mode behavior. The ACTIVE port channel mode allows automatic recovery without explicitly enabling and disabling the port channel member ports at either end.
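To make the trunk-allowed VSAN list concrete, the following minimal sketch prunes the default range on a trunking interface; the interface and VSAN numbers reuse our sample topology and are otherwise arbitrary.

MDS-9222i(config)# interface fc1/5
! Replace the default allowed range (1-4093) with VSAN 1 only
MDS-9222i(config-if)# switchport trunk allowed vsan 1
! Then add VSANs 47-48 to the allowed list
MDS-9222i(config-if)# switchport trunk allowed vsan add 47-48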

On our sample SAN topology, we will connect the two SAN islands by creating a port channel using ports 5 and 6 on both MDS switches. Figure 9-11 portrays the minimum required configuration steps to set up the ISL port channel with mode active between two MDS switches.

Figure 9-11 Setting Up ISL Port on Sample SAN Topology

On the 9222i, ports need to be put into dedicated rate mode to be ISLs (or be set to mode auto). Ports in auto mode will automatically come up as ISLs (E-ports) after they are connected to another switch. Ports are also in trunk on mode, so when the ISL comes up it will automatically carry all VSANs in common between the switches. On MDS 9222i there are three port-groups available, and each port-group consists of six Fibre Channel ports. Groups of six ports have a total of 12.8 Gbps of bandwidth. When we set these two ports to dedicated, they will each reserve a dedicated 4 Gbps of bandwidth (leaving 4.8 Gbps shared between the other group members). The show port-resources module command shows the default and configured interface rate mode, buffer-to-buffer credits, and bandwidth. In Example 9-13, we show the verification and configuration steps on NX-OS CLI for port channel configuration on our sample SAN topology.

Example 9-13 Verifying Port Channel on Our Sample SAN Topology

! Before changing the ports to dedicated mode, the interface Admin Mode is FX
MDS-9222i# show interface fc1/5,fc1/6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin   Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     FX     on      down    swl  --    --
fc1/6      1     FX     on      down    swl  --    --

! On 9222i ports need to be put into dedicated rate mode in order to be
! configured as ISL (or be set to mode auto).
MDS-9222i(config)# show port-resources module 1
Module 1
  Available dedicated buffers are 3987

 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 12.8 Gbps
  Allocated dedicated bandwidth is 0.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              16            4.0          shared
  fc1/6                              16            4.0          shared
<snip>

! Changing the rate-mode to dedicated
MDS-9222i(config)# inte fc1/5-6
MDS-9222i(config-if)# switchport rate-mode dedicated
MDS-9222i(config-if)# switchport mode auto
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin   Status  SFP  Oper  Oper    Port
                 Mode   Trunk                Mode  Speed   Channel
                        Mode                       (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on      down    swl  --    --
fc1/6      1     auto   on      down    swl  --    --

! Interfaces fc1/5-6 are configured for dedicated rate mode
MDS-9222i(config-if)# show port-resources module 1
Module 1
  Available dedicated buffers are 3543

 Port-Group 1
  Total bandwidth is 12.8 Gbps
  Total shared bandwidth is 4.8 Gbps
  Allocated dedicated bandwidth is 8.0 Gbps
  --------------------------------------------------------------------
  Interfaces in the Port-Group       B2B Credit    Bandwidth    Rate Mode
                                     Buffers       (Gbps)
  --------------------------------------------------------------------
  fc1/1                              16            4.0          shared
  fc1/2                              16            4.0          shared
  fc1/3                              16            4.0          shared
  fc1/4                              16            4.0          shared
  fc1/5                              250           4.0          dedicated
  fc1/6                              250           4.0          dedicated
<snip>

! Configuring the port-channel interface
MDS-9222i(config-if)# channel-group 56
fc1/5 fc1/6 added to port-channel 56 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
MDS-9222i(config-if)# no shut
MDS-9222i(config-if)# int port-channel 56
MDS-9222i(config-if)# channel mode active
MDS-9222i(config-if)# sh interface fc1/5-6 brief
-------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin     Status    SFP  Oper  Oper    Port
                 Mode   Trunk                    Mode  Speed   Channel
                        Mode                           (Gbps)
-------------------------------------------------------------------------------
fc1/5      1     auto   on        trunking  swl  TE    4       56
fc1/6      1     auto   on        trunking  swl  TE    4       56

! Verifying the port-channel interface. VSAN 1 and VSAN 48 are up, but VSAN 47
! is in isolated status because VSAN 47 is not configured on the MDS 9148
! switch; MDS 9148 is only carrying SAN-B traffic.
MDS-9222i(config-if)# show int port-channel 56
port-channel 56 is trunking
    Hardware is Fibre Channel
    Port WWN is 24:38:00:05:9b:77:0d:c0
    Admin port mode is auto, trunk mode is on
    snmp link state traps are enabled
    Port mode is TE
    Port vsan is 1
    Speed is 8 Gbps
    Trunk vsans (admin allowed and active) (1,47-48)
    Trunk vsans (up)                       (1,48)
    Trunk vsans (isolated)                 (47)
    Trunk vsans (initializing)             ()
    5 minutes input rate 1024 bits/sec, 128 bytes/sec, 1 frames/sec
    5 minutes output rate 856 bits/sec, 107 bytes/sec, 1 frames/sec
      363 frames input, 36000 bytes
        0 discards, 0 errors
        0 CRC, 0 unknown class
        0 too long, 0 too short
      361 frames output, 29776 bytes
        0 discards, 0 errors
      2 input OLS, 0 LRR, 2 NOS, 22 loop inits
      2 output OLS, 4 LRR, 0 NOS, 2 loop inits
    Member[1] : fc1/5
    Member[2] : fc1/6

! During ISL bring-up there will be a principal switch election; for VSAN 1 the
! switch with the lowest priority won the election, which is MDS 9222i in our
! sample SAN topology.
MDS-9222i# show fcdomain vsan 1
The local switch is the Principal Switch.

Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:00:05:9b:77:0d:c1
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 2
        Current domain ID: 0x51(81)

Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)

Principal switch run time information:
        Running priority: 2

 Interface             Role           RCF-reject
 ---------------       ------------   ------------
 port-channel 56       Downstream     Disabled
 ---------------       ------------   ------------

MDS-9148(config-if)# show fcdomain vsan 1
The local switch is a Subordinated Switch.

Local switch run time information:
        State: Stable
        Local switch WWN:    20:01:54:7f:ee:05:07:89
        Running fabric name: 20:01:00:05:9b:77:0d:c1
        Running priority: 128
        Current domain ID: 0x52(82)

Local switch configuration information:
        State: Enabled
        FCID persistence: Enabled
        Auto-reconfiguration: Disabled
        Contiguous-allocation: Disabled
        Configured fabric name: 20:01:00:05:30:00:28:df
        Optimize Mode: Disabled
        Configured priority: 128
        Configured domain ID: 0x00(0) (preferred)

Principal switch run time information:
        Running priority: 2

 Interface             Role           RCF-reject
 ---------------       ------------   ------------
 port-channel 56       Upstream       Disabled
 ---------------       ------------   ------------

! The asterisk (*) indicates which interface became online first.
MDS-9222i# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

MDS-9148(config-if)# show port-channel database
port-channel 56
    Administrative channel mode is active
    Operational channel mode is active
    Last membership update succeeded
    First operational port is fc1/6
    2 ports in total, 2 ports up
    Ports:   fc1/5    [up]
             fc1/6    [up] *

To bring up the rest of our sample SAN topology, we need to provide access for our initiator, which is connected to fc1/1 of MDS 9148 via VSAN 48, to a LUN that resides on a storage array connected to fc1/14 of MDS 9222i (VSAN 48). Zoning through the switches allows traffic on the switches to connect only chosen sets of servers and storage end devices on a particular VSAN. (Zoning on separate VSANs is completely independent.) Usually members of a zone are identified using the end device's World Wide Port Name (WWPN), although they could be identified using switch physical port numbers. Figure 9-12 portrays the necessary zoning configuration.
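In CLI terms, the zoning Figure 9-12 portrays would look roughly like the following sketch for SAN B; the pWWNs reuse values seen in Example 9-12, while the zone and zone set names are hypothetical.

! Zone the VSAN 48 initiator with its target (names are illustrative)
MDS-9148(config)# zone name Z_HOST_ARRAY vsan 48
MDS-9148(config-zone)# member pwwn 21:01:00:e0:8b:2a:f0:54
MDS-9148(config-zone)# member pwwn c5:9b:00:06:2b:14:fa:ce
MDS-9148(config-zone)# exit
MDS-9148(config)# zoneset name ZS_SAN_B vsan 48
MDS-9148(config-zoneset)# member Z_HOST_ARRAY
MDS-9148(config-zoneset)# exit
! Activate the zone set and verify
MDS-9148(config)# zoneset activate name ZS_SAN_B vsan 48
MDS-9148(config)# show zoneset active vsan 48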

Figure 9-12 Zoning Configuration on Our Sample SAN Topology

Our sample SAN topology is up with the preceding zone configuration. The FC initiator, via dual host bus adapters (HBA), can access the same LUNs from SAN A (VSAN 47) and SAN B (VSAN 48). In Table 9-12, we summarize NX-OS CLI commands for trunking and port channels.


Table 9-12 Summary of NX-OS CLI Commands for Trunking and Port Channels

Reference List

Cisco MDS 9700 Series Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9700/mds_9700_hig/overview

Cisco MDS Install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-series-multilayer-switches/products-installation-guides-list.html

Cisco MDS 9148s Multilayer Switch Hardware installation guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148S/install/guide/9148S_h

Cisco MDS 9250i Multiservice Fabric Switch Hardware installation guide is available at http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/hw/9250i/installation/guide/connect.html#pgfId-419386

Cisco MDS 9000 NX-OS Software upgrade and downgrade guide is available at http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/6_2/upgrade/guides/nx-os/upgrade.html

Cisco MDS NX-OS install and upgrade guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-guides-list.html

Cisco MDS 9000 NX-OS and SAN OS software configuration guides are available at http://www.cisco.com/c/en/us/support/storage-networking/mds-9000-nx-os-san-os-software/products-installation-and-configuration-guides-list.html

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 9-13 lists a reference for these key topics and the page numbers on which each is found.

Table 9-13 Key Topics for Chapter 9

Complete Tables and Lists from Memory

Print a copy of Appendix B, "Memory Tables" (found on the disc), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, "Memory Tables Answer Key," also on the disc, includes completed tables and lists to check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

virtual storage-area networks (VSAN)
zoning
fport-channel-trunk
enhanced small form-factor pluggable (SFP+)
small form-factor pluggable (SFP)
10 gigabit small form factor pluggable (XFP)
host bus adapter (HBA)
Exchange Peer Parameters (EPP)
Exchange Virtual Fabrics Protocol (EVFP)
Extensible Markup Language (XML)
common information model (CIM)
Power On Auto Provisioning (POAP)
switch worldwide name (sWWN)
World Wide Port Name (pWWN)
slow drain
basic input/output system (BIOS)

random-access memory (RAM)
Port Trunking Protocol (PTP)
fan-out ratio
fan-in ratio

Part II Review

Keep track of your part review progress with the checklist shown in Table PII-1. Details on each task follow the table.

Table PII-1 Part II Review Checklist

Repeat All DIKTA Questions

For this task, answer the "Do I Know This Already?" questions again for the chapters in this part of the book, using the PCPT software. Refer to "How to View Only DIKTA Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you DIKTA questions for this part only.

Answer Part Review Questions

For this task, answer the Part Review questions for this part of the book, using the PCPT software. Refer to "How to View Only Part Review Questions by Part" in the Introduction to this book for help with how to make the PCPT software show you Part Review questions for this part only.

Review Key Topics

Browse back through the chapters and look for the Key Topic icons. If you do not remember some details, take the time to reread those topics.

Self-Assessment Questionnaire

Each chapter of this book introduces several key learning items, which are relevant to the particular chapter. This exercise lists a set of key self-assessment questions, shown in the following list, which will help you remember the concepts and technologies of these key topics as you progress with the book. This might seem overwhelming to you initially, but as you read through each new chapter you will become more familiar and comfortable with them. For this self-assessment exercise in this book, you should try to answer these self-assessment questionnaires to the best of your ability, without looking back at the chapters or your notes. There will be no written answers to these questions in this book. If you're not certain, you should go back and refresh on the key topics.

Think of each of these open self-assessment questions as being about the chapters you've read. For example, number 6 is about Data Center Storage Architecture, number 7 is about Cisco MDS Product Family, and so on. You might or might not find the answer explicitly, but the chapter is structured to give you the knowledge and understanding to be able to discuss and explain these self-assessment questions. Note your answer and try to reference key words in the answer from the relevant chapter. For example, Block Virtualization would apply to Chapter 8.

Part III: Data Center Unified Fabric

Exam topics covered in Part III:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric
Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization
Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath

Chapter 10: Unified Fabric Overview
Chapter 11: Data Center Bridging and FCoE
Chapter 12: Multihop Unified Fabric

Chapter 10. Unified Fabric Overview

This chapter covers the following exam topics:

Define Challenges in Today's Data Center Networks
Unified Fabric Principles
Inter-Data Center Unified Fabric

Unified Fabric is a holistic network architecture comprising switching, storage, security, and services infrastructure components, which are designed to operate in physical, virtual, and cloud environments. By unifying and consolidating infrastructure components, Cisco Unified Fabric delivers flexibility and consistent architecture across a diverse set of environments. It uniquely integrates with servers, storage, and orchestration platforms for more efficient operations and greater scalability. Unified Fabric supports new concepts, such as IEEE Data Center Bridging enhancements, which improve the robustness of Ethernet and its responsiveness. In this chapter we discuss how Unified Fabric promises to be the architecture that fulfills the requirements for the next generation of Ethernet networks in the data center.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 10-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 10-1 "Do I Know This Already?" Section-to-Question Mapping

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. What is Unified Fabric?

a. It leverages the Ethernet network.
b. It leverages IEEE data center bridging.
c. It consolidates multiple types of traffic.
d. It supports virtual, physical, and cloud environments.

2. Which of the following is one challenge of today's data center networks?

a. Today's networks as they are can accommodate any application demand for network and storage.
b. Administrators have hard time doing unified cabling.
c. Multilayer design is not optimal for east-west network traffic flows.
d. Too low power consumption is not good for the electric companies.

3. What is one possible way to solve the problem of different application demands in today's data centers?

a. Rewrite applications to conform to the network on which they are running.
b. Deploy separate application-specific dedicated environments.
c. Allow the network to discover application requirements.

4. What are the two types of traffic that are most commonly consolidated with Unified Fabric?

a. Voice traffic
b. SAN storage traffic
c. Infiniband traffic
d. Network traffic

5. How does vPC differ from traditional spanning tree topologies?

a. vPC blocks redundant paths toward the STP root bridge.
b. vPC topology causes loss of 50% of bandwidth because of blocked ports.
c. vPC leverages port-channels and has no blocked ports.
d. vPC leverages IS-IS protocol to construct network topology.
e. Spanning tree is automatically disabled when you enable vPC.

6. What Cisco virtual product is used to secure VM-to-VM communication within the same tenant space?

a. Virtual Supervisor Module
b. Virtual Adaptive Security Appliance
c. Virtual Security Gateway
d. Virtual Application Security

7. What is vPath?

a. It is a virtual path selected between two virtual machines in the data center.
b. It is a feature of Cisco Nexus 7000 series switches to steer the traffic toward physical service nodes.
c. It is a feature that allows virtual switches to get their basic configuration during boot.
d. It is a feature of Cisco Nexus 1000v virtual switches to steer the traffic toward virtual service nodes.

8. Which parts of the infrastructure can DCNM products manage?

a. LAN
b. SAN
c. Wireless
d. vPC

9. What happens when OTV receives a unicast frame and the destination MAC address is not in the MAC address table?

a. The frame is dropped.
b. The frame is flooded through the overlay to all remote sites.
c. The frame is dropped and the ICMP message is sent back to the source.
d. The frame is turned into multicast and sent only to some of the remote sites.

10. True or false? In LISP, ITR stands for Ingress Transformation Router.

a. True
b. False

Foundation Topics

Challenges of Today's Data Center Networks

IT departments of today's organizations are under increasing pressure to deliver technological solutions, which enable strategic business growth and provide a foundation for the continuing trends of the private/public cloud deployments. At the same time, IT departments are also under ever-increasing pressure of delivering more for less, where decreasing budgets have to accommodate for innovative technology and process adoption. Operational silos and isolated infrastructure solutions are not optimized for virtualized and cloud environments. This prevents the data centers of today from becoming the engine of enablement for the business growth.

Ethernet is by far the predominant choice for a single, converged fabric, which can simultaneously support multiple traffic types. It is well understood by network engineers and developers worldwide. Ethernet enjoys a broad base of engineering and operational expertise resulting from its ubiquitous presence in the data center environments for many years. Over the years it has withstood the test of time against other technologies trying to displace it as the popular option for data center network environments.

Modern application environments no longer rely on simple client/server transactions. Traffic patterns have shifted from being predominantly north-south to being predominantly east-west, where a single client request causes several network transactions between the servers in the data center. Such traffic patterns also facilitated the transition from traditional multilayer "stove pipe" designs of access, aggregation, and core layers into a flatter data center switching fabric architecture leveraging Spine-Leaf Clos topology. Figure 10-1 depicts traditional north-south and the new east-west data center switching architectures.

Figure 10-1 Multilayer and Spine-Leaf Clos Architecture

Increased volumes of data led to significant network and storage traffic growth across the infrastructure. This fundamental shift in traffic volumes and traffic patterns has resulted in greater reliance on the network. Now application performance is measured along with performance of the network, which means that network available bandwidth and latency numbers are both important.

Different types of traffic carry different characteristics. Client-to-server and server-to-server transactions usually involve short and bursty in nature transmissions, whereas most of the server-to-storage traffic consists of long-lasting and steady flows. Although bandwidth and latency matter, differences also exist in the capability of applications to handle packet drops. Different protocols respond differently to packet drops; some accept packet drops as part of a normal network behavior, whereas others absolutely must have guaranteed no-drop packet delivery. Packet drops can have severely adverse effects on overall application performance. The best example is the FCoE protocol, which is a predominant use case behind the Unified Fabric deployment. FCoE protocol requires a lossless underlying fabric where packet loss is not tolerated. In the converged infrastructure of the Unified Fabric, both of the behaviors must be accommodated.

Ultimately, growing application demands require additional capabilities from the underlying networking infrastructure. Networks deployed in high-frequency trading environments have to accommodate ultra-low latency packet forwarding where traditional rules of typical Enterprise application deployments do not quite suffice. The demands for high performance and cluster computing, as well as services such as Remote Direct Memory Access (RDMA), are at times addressed by leveraging the low latency InfiniBand network, even though this is not a popular technology in the majority of the modern data centers. All these require the network infrastructure to be flexible and intelligent to discover and accommodate the changes in network traffic dynamics.

One method to solve this is to deploy multiple, separate, application-specific dedicated environments. It is not uncommon for the data centers of today to deploy an Ethernet network for IP traffic and a Fibre Channel storage-area network (SAN) for block-based storage connectivity. Some organizations even build separate Ethernet networks for IP-based storage. This highly segmented deployment results in high capital expenditures (CAPEX), high operational expenses (OPEX), and high demands for management. Figure 10-2 depicts a traditional concept of distinct networks purposely built for specific requirements.

Figure 10-2 Typical Distinct Data Center Networks

When these types of separate networks are evaluated against the capabilities of the Ethernet technology, we can see that Ethernet has a promise of meeting the majority of the requirements for all the network types. All these combined create an opportunity for consolidation, which breaks organizational silos while reducing technological complexity. Such Ethernet fabric would be operationally simpler to provision and operate, resulting in fewer management points with shorter bring up time. This is where Cisco Unified Fabric serves as a key building block for ubiquitous infrastructure to deploy applications on top.

Cisco Unified Fabric Principles

Cisco Unified Fabric is a multifaceted approach where architectures, features, and capabilities are combined with concepts of convergence, scalability, intelligence, security, high availability, and so on. It provides foundational connectivity service, unifying storage, data networking, and network services onto consistent networking infrastructure across physical, virtual, and cloud environments. It enables optimized resource utilization, greater application performance, faster application rollout, and lower operating costs, while greatly increasing network business value. The following sections examine in more detail the key properties of the Unified Fabric network.

Convergence of Network and Storage

Convergence properties of the Cisco Unified Fabric architecture are the melding of the storage-area network (SAN) with a local-area network (LAN) delivered on top of enhanced Ethernet fabric. Both the Cisco MDS 9000 family of storage switches and the Cisco Nexus 5000/7000 family of Unified Fabric network switches have features that facilitate network convergence.

Cisco Nexus 5000 and Cisco MDS 9000 family switches can provide full, bidirectional bridging between FCoE and traditional Fibre Channel fabric. Figure 10-3 depicts bridging between FCoE and traditional Fibre Channel fabric, where FCoE servers connect to Fibre Channel storage arrays leveraging the principle behind bidirectional bridging on Cisco Nexus 5000 series switches.

Figure 10-3 FCoE and Fibre Channel Bidirectional Bridging

The Cisco Nexus 5000 series switches can leverage either native Fibre Channel ports or the unified ports to bridge between FCoE and the traditional Fibre Channel environment. For instance, servers attached through traditional Fibre Channel HBAs can access newer storage arrays connected through FCoE ports. Similarly, FCoE servers can connect to older Fibre Channel-only storage arrays. At the time of writing this book, unified ports can be software configured to operate in 1G/10G Ethernet, 10G FCoE, or 1/2/4/8G Fibre Channel mode. Figure 10-4 depicts unified port operation.

Figure 10-4 Unified Port Operation

Note

Unified port functionality is not supported on Cisco Nexus 5010 and 5020 switches. At the time of writing this book, Cisco Nexus 7000 series switches do not support unified port functionality.

With such great flexibility, you can deploy Cisco Nexus Unified Fabric switches initially connected to traditional systems with HBAs and convert to FCoE or other Ethernet- and IP-based storage protocols as the time progresses. Example 10-1 shows a configuration example for defining unified ports as port type Fibre Channel on Cisco Nexus 5500 switches.

Example 10-1 Configuring Unified Port on the Cisco Nexus 5000 Series Switch

nexus5500# configure terminal
nexus5500(config)# slot 1
nexus5500(config-slot)# port 32 type fc
nexus5500(config-slot)# copy running-config startup-config
nexus5500(config-slot)# reload

Consolidation can also translate to a very significant savings. For example, a typical server requires at least five connections: two redundant LAN connections, two redundant SAN connections, and an out-of-band management connection. If you add on top separate connections for backup, clustering, and so on, for each server in the data center, you can quickly see how it translates into a significant investment.

With FCoE, all these connections can be replaced by the converged network adapters, proving at least 2:1 saving in cabling. From the perspective of a large size data center, this cabling reduction translates to fewer ports, adapters, and switches, and fewer network layers. Fewer cables have implications on cooling by improving the airflow in the data center, and fewer switches and adapters also mean less power consumption. Figure 10-5 depicts an example of how network adapter consolidation can greatly reduce the number of server adapters used in the Unified Fabric topology.

Figure 10-5 Server Adapter Consolidation Using Converged Network Adapters

If network adapter virtualization in the form of Cisco Adapter-FEX or Cisco VM-FEX technologies is leveraged, it can yield even further significant savings, which contribute to lower over-subscription and eventually high application performance.

Along with reduction in cabling, consolidation also carries operational improvements. With consolidation in place, a converged data center network provides all the functionality of existing disparate network and storage environments. For Fibre Channel, this includes provisioning of all the Fibre Channel services, such as zoning and name servers, inter-VSAN routing (IVR), and support for virtual SANs (VSAN). For Ethernet, this includes support for multicast and broadcast traffic, VLANs, link aggregation, equal cost multipathing, and the like. All switches composing the Unified Fabric environment run the same Cisco NX-OS switch operating system. Such consistency allows IT staff to leverage operational practices of the isolated environments and apply them to the converged environment of the Unified Fabric.

One possible concern for delivering converged infrastructure for network and storage traffic is that Ethernet protocol does not provide guaranteed packet delivery, which is essential for healthy Fibre Channel protocol operation. With FCoE, Fibre Channel frames are encapsulated in outer Ethernet headers for hop-by-hop delivery between initiators and targets where lossless behavior is expected. To enforce Fibre Channel characteristics at each hop along the path between initiators and targets, intermediate Unified Fabric Ethernet switches operate in either FCoE-NPV or FCoE switching mode. This sometimes is referred to as FCoE Dense Mode. Figure 10-6 depicts the logical concept behind having all FCoE intermediate switches operating in either FCoE-NPV or FCoE switching mode.

Figure 10-6 FCoE Dense Mode

FCoE switching mode implies running the FCoE Fibre Channel Forwarder, the FCF. At the time of writing this book, Cisco does not support FCoE topologies where intermediate Ethernet switches do not have FCoE awareness, sometimes referred to as FCoE Sparse Mode. The only partial exception is Dynamic FCoE, where FCoE traffic is forwarded over Cisco FabricPath fabric core links. With the Dynamic FCoE feature, Cisco FabricPath Spine Nodes do not act as FCoE Fibre Channel Forwarders; however, they still perform certain FCoE functions, for example, making sure frame hashing does not result in out-of-order packets.

Running FCoE requires additional capabilities added to the Ethernet Unified Fabric network. IEEE defines a set of standards augmenting standard Ethernet protocol behavior. This set of standards is collectively called Data Center Bridging (DCB). These additional standards make sure that the Unified Ethernet fabric provides lossless, no-drop, in-order delivery of packets end-to-end, and they are required for successful FCoE operation. DCB increases overall reliability of the data center networks, which is increasingly necessary not only for the storage traffic, but also for other types of multiprotocol communication across the Unified Fabric. Cisco Nexus 5000/7000 series Unified Fabric switches, as well as the MDS 9500/9700 series storage switches with FCoE line cards, support these standards for enhanced Ethernet. For smaller size deployments, the Cisco MDS 9250i Multiservice Fabric Switch offers support for the Unified Fabric needs of running Fibre Channel, FCoE, iSCSI, FICON, and Fibre Channel over IP.

Convergence with Cisco Unified Fabric also extends to management. Cisco Data Center Network Manager (DCNM), Cisco's primary management tool, which can manage a fully converged network, is optimized to work with both the Cisco MDS 9000 series storage switches and the Cisco Nexus 5000/7000 series Unified Fabric network switches.

Ultimately, SAN and LAN convergence does not have to happen over night, and the Cisco Unified Fabric architecture is flexible to allow gradual, and many times non-disruptive, migration from disparate isolated environments to unified network infrastructure.
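As a hedged sketch of what this convergence looks like in configuration terms, the following enables FCoE on a Nexus 5500, maps an FCoE VLAN to a VSAN, and binds a virtual Fibre Channel interface to a server-facing Ethernet port; all numbers are illustrative.

nexus5500(config)# feature fcoe
nexus5500(config)# vsan database
nexus5500(config-vsan-db)# vsan 47
nexus5500(config-vsan-db)# exit
! Dedicate VLAN 1047 to carry FCoE traffic for VSAN 47
nexus5500(config)# vlan 1047
nexus5500(config-vlan)# fcoe vsan 47
nexus5500(config-vlan)# exit
! Bind a virtual Fibre Channel interface to the CNA-facing port
nexus5500(config)# interface vfc 11
nexus5500(config-if)# bind interface ethernet 1/1
nexus5500(config-if)# no shutdown
nexus5500(config-if)# exit
nexus5500(config)# vsan database
nexus5500(config-vsan-db)# vsan 47 interface vfc 11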

Traditionally, server network interface cards (NIC) and host bus adapters (HBA) are used to provide connectivity into isolated LAN and SAN environments. With convergence of the two, NICs and HBAs make way for a new type of adapter called the converged network adapter (CNA). Migration from NICs and HBAs to the CNA can occur either during the regular server hardware refresh cycles, or it can be carried out by proactively swapping them during planned maintenance windows.

In environments where storage connectivity has less stringent requirements, Small Computer System Interface over IP (iSCSI) can be considered as an alternative to FCoE, which relies on IEEE DCB standards for efficient operation. iSCSI allows encapsulating original Fibre Channel payload in the TCP/IP packets. iSCSI leverages TCP protocol to deliver lossless, no-drop, in-order delivery. Contrary to FCoE, the underlying network infrastructure cannot differentiate between iSCSI and any other TCP/IP application, and it simply delivers IP packets between source and destination. iSCSI does not support native Fibre Channel services and tools, which are built around traditional Fibre Channel fabrics. iSCSI can also introduce increased CPU cycles resulting from TCP/IP overhead on the server. TCP/IP offload can be leveraged on the network adapter to help with higher CPU utilization, but that increases the cost of network adapters.

For environments where storage arrays lack iSCSI functionality, organizations can leverage an iSCSI gateway device, such as a Cisco MDS 9250i storage switch, to terminate the iSCSI TCP/IP session, extract the payload, and reencapsulate it in the FCoE protocol for transmission over the Unified Fabric. Figure 10-7 depicts iSCSI deployment options with and without the gateway service.

Figure 10-7 iSCSI Deployment Options

Even though DCB is not mandatory for iSCSI operation, some NICs and CNAs connected to Cisco Nexus 5000 series switches can be configured to accept configuration values sent by the switches leveraging Data Center Bridging Exchange (DCBX) protocol. In that case, DCBX would negotiate configuration and settings between the switch and the network adapter by leveraging type-length-value (TLV) and sub-TLV fields. This allows distributing configuration values, for example, Class of Service (COS) markings, to the connected network adapters in a scalable manner. Potential use of other DCB standards coded in TLV format adds even further flexibility to the iSCSI deployment by allowing the underlying Unified Fabric infrastructure to separate iSCSI storage traffic from the rest of IP traffic. Example 10-2 shows the relevant command-line interface commands to enable a no-drop class for iSCSI traffic on Nexus 5000 series switches for guaranteed traffic delivery.

Example 10-2 Configuring No-Drop Behavior for iSCSI Traffic

nexus5000# configure terminal
nexus5000(config)# class-map type qos match-all c1
nexus5000(config-cmap-qos)# match protocol iscsi
nexus5000(config-cmap-qos)# match cos 6
nexus5000(config-cmap-qos)# exit
nexus5000(config)# policy-map type qos c1
nexus5000(config-pmap-qos)# class c1
nexus5000(config-pmap-c-qos)# set qos-group 2
nexus5000(config-pmap-c-qos)# exit
nexus5000(config)# class-map type network-qos c1
nexus5000(config-cmap-nq)# match qos-group 2
nexus5000(config-cmap-nq)# exit
nexus5000(config)# policy-map type network-qos p1
nexus5000(config-pmap-nq)# class type network-qos c1
nexus5000(config-pmap-c-nq)# pause no-drop

Although iSCSI continues to be a popular option in many storage applications, particularly with small and medium-sized businesses (SMB), it is not quite replacing the wide adoption of the Fibre Channel protocol for critical storage environments.

Scalability and Growth

Scalability is defined as the capability to grow the environment based on changing needs. Unified Fabric is often characterized as having a multidimensional scalability, where individual device performance, as well as the overall fabric and system scalability, is achieved without compromising manageability and cost. The capability to grow the network in a manner that causes the least amount of disruption to the ongoing operation is a crucial aspect of scalability. Unified Fabric leveraging connectivity speeds of 10G, 40G, and 100G offers substantially higher useable bandwidth for the server and storage connectivity, either traditional or FCoE, which contributes to fewer ports, fewer switches, and subsequently fewer network tiers and fewer management points, allowing overall larger scale. Scalability parameters can also stretch beyond the boundary of a single data center, allowing a true geographic span.

Each network deployment has unique characteristics, which cannot be addressed by a rigid architecture. Unified Fabric caters to the needs of ever expanding scalability by providing flexible architecture, while also following the investment protection philosophy. Cisco's entire data center switching portfolio adheres to the principles of supporting incremental growth and demands for scalability.

For example, Cisco Nexus 7000 series switches offer expandable pay-as-you-grow scalable switch fabric architecture and non-oversubscribed behavior for high-speed Ethernet fabrics. Through an evolution of the Fabric Modules and the I/O Modules, Cisco Nexus 7000 series switches offer highly scalable M-series I/O modules for large capacities and feature richness, and F-series I/O modules for lower latency, higher port capacity, and several differentiating features. Figure 10-8 depicts the Cisco Nexus 7000 series switch fabric architecture.

Figure 10-8 Cisco Nexus 7000 Series Switch Fabric Architecture

Note

Cisco Nexus 7004 switch has no fabric modules.

Cisco Nexus 5000 series switches offer a highly scalable access layer platform with 10G and 40G Ethernet capabilities and FCoE support for Unified Fabric convergence. Its architecture does not include fabric modules, but instead uses a unified port controller and Unified Fabric crossbar. Figure 10-9 depicts Cisco Nexus 5000 series switch fabric architecture.

Figure 10-9 Cisco Nexus 5000 Series Switch Fabric Architecture

The most recent addition to the Cisco Nexus platforms is the Cisco Nexus 9000 family switches. These switches provide fixed and modular switch architecture with high port density and the capability to run Application Centric Infrastructure. Cisco Nexus 9000 series switches are based on a combination of Cisco's own ASICs and Broadcom ASICs. Cisco Nexus 9500 series modular switches have no backplane, and the line cards are directly interconnected through the fabric modules. Figure 10-10 depicts Cisco Nexus 9500 series modular switch architecture.

Figure 10-10 Cisco Nexus 9500 Series Modular Switch Architecture

On the storage switch side, Cisco MDS switches offer a highly available platform for storage area networks with expandable and resilient architecture supporting increased Fibre Channel speeds and services.

Cisco NX-OS operating system, which powers data center network and storage switching platforms, runs modular software architecture based on an underlying Linux kernel. Figure 10-11 depicts NX-OS software architecture components.

Figure 10-11 NX-OS Software Architecture Components

Cisco NX-OS leverages an innovative approach in software development by offering stateful process restart, persistent storage space (PSS) for process runtime information, component patching, in-service software upgrade (ISSU), and so on, coupled with Role-Based Access Control (RBAC) and a rich set of remote APIs. Cisco NX-OS software enjoys frequent updates to keep up with the latest feature demands.

Modern data centers observe different trends in packet forwarding, where traditional client/server applications driving primarily north-south traffic patterns are replaced by a new breed of applications with predominantly east-west traffic patterns. This shift has significant implications on the data center switching fabric designs. In the past, traditional growth implied adding capacity in the form of switch ports or uplink bandwidth; this is no longer the case. In addition to the changing network traffic patterns, many applications reside on virtual machines with inherent characteristics of workload mobility and any workload anywhere. This generates additional requirements on the data center switching fabric side around extensibility of Layer 2 connectivity, fabric scale, fabric capacities, and service insertion methodologies.

Legacy spanning tree–driven topologies lack fast convergence required for the stringent demands of modern applications. Spanning tree Layer 2 loop-free topologies do not allow backup links and, as such, cause "waste" of 50% of provisioned uplink bandwidth on access switches by blocking redundant uplinks. Figure 10-12 depicts spanning tree Layer 2 operational logic.

Figure 10-12 Spanning Tree Layer 2 Operational Logic

The evolution of spanning tree topologies led to the use of multichassis link-aggregation solutions, where Etherchannels or port channels enable all-forwarding uplink topologies. Cisco implements virtual port channel (vPC) technology to help alleviate deficiencies of traditional spanning tree–driven topologies, allowing faster convergence and full utilization of the Access Switch provisioned uplink bandwidth. vPC still runs Spanning Tree Protocol underneath as a safeguard mechanism to break Layer 2 bridging loops should they occur. Figure 10-13 depicts vPC operational logic.
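A minimal vPC sketch on one of the two peers follows, with illustrative domain, addresses, and interface numbers; the essential point is that both uplinks forward, with no STP-blocked port.

nexus(config)# feature vpc
nexus(config)# vpc domain 10
! Keepalive runs between the two peers (addresses are illustrative)
nexus(config-vpc-domain)# peer-keepalive destination 10.0.0.2 source 10.0.0.1
nexus(config-vpc-domain)# exit
! Peer link between the two vPC peers
nexus(config)# interface port-channel 1
nexus(config-if)# switchport mode trunk
nexus(config-if)# vpc peer-link
nexus(config-if)# exit
! Member port channel toward the downstream access switch or host
nexus(config)# interface port-channel 20
nexus(config-if)# switchport
nexus(config-if)# vpc 20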

Figure 10-13 vPC Operational Logic

Even with vPC topologies, the scalability of solutions requiring large Layer 2 domains in support of virtual machine mobility can be limited. Can we do better? Yes, by using IS-IS routing protocol instead of Spanning Tree Protocol. Cisco FabricPath takes a step further by completely eliminating the dependency on the Spanning Tree Protocol in constructing loop-free Layer 2 topology. Cisco FabricPath combines the simplicity of Layer 2 with proven scalability of Layer 3. It leverages Spine-Leaf Clos architecture to construct data center switching fabric with predictable end-to-end latency and vast east-west bisectional bandwidth. It also allows equal cost multipathing for both unicast and multicast traffic to maximize network resource utilization. It employs the principle of conversational learning to further scale switch hardware forwarding resources. Figure 10-14 depicts Cisco FabricPath topology and forwarding.
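A hedged, minimal FabricPath sketch follows; VLAN and interface numbers are illustrative, and on some platforms the feature set must first be installed.

! Install and enable the FabricPath feature set (platform dependent)
nexus(config)# install feature-set fabricpath
nexus(config)# feature-set fabricpath
! Carry VLAN 100 across the FabricPath fabric
nexus(config)# vlan 100
nexus(config-vlan)# mode fabricpath
nexus(config-vlan)# exit
! Core-facing ports forward FabricPath-encapsulated frames
nexus(config)# interface ethernet 1/1-2
nexus(config-if-range)# switchport mode fabricpath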

Figure 10-14 Cisco FabricPath Topology and Forwarding

By taking data center switching from PODs to fabric, Cisco FabricPath breaks the boundaries and enables large-scale virtual machine mobility domains. Use of Cisco Nexus 2000 series Fabric Extenders in conjunction with Cisco FabricPath architecture enables us to build even larger-scale switching fabrics without increasing the administrative domain.

Principles of Spine-Leaf topologies can also be applied with other encapsulation technologies, such as virtual extensible LANs, or VXLAN. Similar to Cisco FabricPath, VXLAN leverages underlying Layer 3 topology to extend Layer 2 services on top (MAC-in-IP), providing similar advantages to Cisco FabricPath while enjoying broader switching platform support. Figure 10-15 depicts VXLAN topology and forwarding.

Figure 10-15 VXLAN Topology and Forwarding

VXLAN also assists in breaking through the boundary of ~4,000 VLANs supported by the IEEE 802.1Q standard widely deployed between switches. This is an important consideration with multitenant Unified Fabric, which requires a large number of virtual segments. Figure 10-16 depicts the VXLAN packet format, where you can see the 24-bit VXLAN ID field, sometimes also known as the Virtual Network Identifier (VNID) field. The 24-bit field potentially enables you to create more than 16 million virtual segments.

Figure 10-16 VXLAN Packet Fields
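A hedged sketch of mapping a VLAN to a 24-bit VXLAN segment on NX-OS, using multicast-based flood-and-learn, follows; VNI, group, and interface values are illustrative.

nexus(config)# feature nv overlay
nexus(config)# feature vn-segment-vlan-based
! Map VLAN 10 to VXLAN segment (VNI) 10010
nexus(config)# vlan 10
nexus(config-vlan)# vn-segment 10010
nexus(config-vlan)# exit
! The network virtualization edge (NVE) interface carries VNI 10010 over the Layer 3 underlay
nexus(config)# interface nve 1
nexus(config-if-nve)# source-interface loopback 0
nexus(config-if-nve)# member vni 10010 mcast-group 239.1.1.1
nexus(config-if-nve)# no shutdown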

VXLAN fabric allows building large-span virtual machine mobility domains with significant scale and bandwidth.

Security and Intelligence

Network intelligence coupled with security is an essential component of Cisco Unified Fabric. For the Cisco Nexus and Cisco MDS family of switches, the intelligence comes from a common switch operating system, Cisco NX-OS. It provides consistency, a common feature set, and intelligent services delivered to applications residing on physical or virtual network infrastructure alike. Cisco Unified Fabric policies around security, differentiated service, high availability, performance, and the like can be consistently set across the environment, allowing to tune those for special application needs if necessary.

The use of consistent network policies is essential for successful Unified Fabric operation. In the past, deployment of applications required a considerable effort of sequentially configuring various physical infrastructure components. Policies can be templatized and applied in a consistent repeatable manner. Policies are applied across the infrastructure and most importantly in the virtualized environment where workloads tend to be rapidly created, removed, and moved around, so that creation, removal, or migration of individual workloads has no impact on the overall environment operation. Policies also greatly reduce the risk of human error. Consistent use of policies also enables faster application deployment cycles.

As multiple types of traffic are consolidated on top of the Unified Fabric infrastructure, it raises legitimate security concerns. Cisco Unified Fabric contains a complete portfolio of security products that are virtualization aware. These products run in the hypervisor layer to secure virtual workloads inside a single tenant or across tenants.

Cisco Unified Fabric also contains a foundation for traffic switching in the hypervisor layer in the form of the Cisco Nexus 1000v product. Cisco Nexus 1000v is a cross-hypervisor virtual software switch that allows delivering consistent network, security, and Layer 4 through Layer 7 policies for the virtual workloads. It operates in a manner similar to a modular chassis, where switch supervisor modules, called Virtual Supervisor Modules (VSM), control the switch line cards, called Virtual Ethernet Modules (VEM). VSMs should be deployed in pairs to provide the needed redundancy. Figure 10-17 depicts Cisco Nexus 1000v components.

Figure 10-17 Cisco Nexus 1000v Components

Cisco Nexus 1000v runs NX-OS software on the VSMs and communicates with Virtual Machine Managers, such as VMware vCenter, for easy single pane of glass operation.

Virtual services deployed as part of the Cisco Unified Fabric solution can secure intra-tenant communication between the virtual machines belonging to the same tenant, or they can secure inter-tenant communication between the virtual machines belonging to different tenants. Cisco Virtual Security Gateway (VSG) is deployed on top of the Cisco Nexus 1000v virtual switch, catering to the case of intra-tenant security. It provides logical isolation and policy enforcement based on both simple networking constructs and more elaborate virtual machine attributes. Figure 10-18 depicts an example of Cisco VSG deployment to secure a three-tier application within a single tenant space.

Figure 10-18 Three-Tier Application Secured by VSG

The VSG product offers a very scalable mode of operation leveraging Cisco vPath technology for traffic steering. With Cisco vPath, the first packet of each flow is redirected to VSG for inspection and policy enforcement. Following inspection, a policy decision is made and programmed into the Cisco Nexus 1000v switch residing in hypervisor kernel space. All the subsequent packets of the flow are switched in a fast-path in the hypervisor kernel, which results in high traffic throughput. Figure 10-19 depicts vPath operation with VSG.

Figure 10-19 vPath Operation with VSG

VSG security policies can be deployed in tandem with switching policies enforced by the Cisco Nexus 1000v. This model contributes to consistent policies across the environment. Example 10-3 shows how VSG is inserted into the Cisco Nexus 1000v port-profile for specific tenant use.

Example 10-3 VSG Port-Profile Configuration

Cisco WAAS offers the following main advantages: Transparent and standards-based secure application delivery LAN-like application performance with application-specific protocol acceleration . physical ASAs are connected to the services leaf nodes. Another component of intelligent service delivery comes in the form of application acceleration. In the Spine-Leaf Clos architecture. Physical ASA appliances are many times positioned at the Unified Fabric aggregation layer. Figure 10-20 Typical Cisco ASA Deployment Topologies Both physical and virtual ASA together provide a comprehensive set of security services for Cisco Unified Fabric.10. and they can also provide security services for the physical non-virtualized workloads to create consistent policy across virtual and physical infrastructures.Click here to view code image nexus1000v# configure terminal nexus1000v(config)# port-profile host-profile nexus1000v(config-port-prof)# org root/Tenant-A nexus1000v(config-port-prof)# vn-service ip 10. It is also common to use two layers of security with virtual firewalls securing east-west traffic between the virtual machines and physical firewalls securing the north-south traffic in and out of the data center.10. This is where Cisco Wide Area Application Services (WAAS) offer a solution for application acceleration across WAN.1 vlan 10 profile vnsp-1 Inter-tenant space can be secured by either virtual firewall in the form of virtual Cisco ASA product deployed in the hypervisor layer or by the physical appliances deployed in the upstream physical network. Figure 10-20 depicts typical Cisco ASA topology in a traditional multilayer and Spine-Leaf architectures.

IT agility through reduction of branch and data center footprint
Enterprisewide deployment scalability and design flexibility
Enhanced end-user experience

Geographically dispersed organizations leverage significant WAN capacities and incur significant operational expenses for maintaining circuits and service levels. Delivering applications over WAN is often accompanied by jitter, packet loss, and bandwidth starvation. If left unaddressed, these factors can adversely impact application performance, influence user productivity, and grow to be business inhibitors. Cisco WAAS reduces consumption of WAN bandwidth, subsequently optimizing application performance across long-haul circuits. Cisco WAAS leverages generic network traffic optimization, as well as application-specific acceleration techniques, to counter the adverse impact of delivering applications across WAN.

Cisco WAAS employs TCP flow optimization (TFO) to better manage TCP window size and maximize network bandwidth use in a way superior to common TCP/IP stack behavior on the host. Some of the traffic transmitted across the WAN is repetitive and can be locally cached. Cisco WAAS leverages context-aware data redundancy elimination (DRE) to reduce traffic load on WAN circuits. With DRE, rather than transmitting the entire data, both ends of the WAN optimized connection can signal each other with very small signatures, which cause the content to be served out of the locally maintained cache. Finally, Cisco WAAS leverages adaptive persistent session-based Lempel-Ziv (LZ) compression to shrink the size of network data traveling across.

Cisco WAAS technology has two main deployment scenarios, leveraging deployments of WAAS components in various parts of the topology. First, it can be deployed either at the edge of the environment in the case of a branch office, or at the aggregation layer in case of a data center. In both cases, Cisco WAAS commonly leverages WCCP protocol traffic redirection to steer the traffic off path toward the WAAS appliance cluster, which can be delivered in the form of a physical appliance, a virtual appliance, a router module, or a Cisco IOS software feature. Second, it can be deployed in the hypervisor leveraging Cisco vPath redirection mechanisms. The hypervisor deployment model brings the advantages of application acceleration close to the virtual workloads. Figure 10-21 depicts use of WAN acceleration between the branch office and the data center.
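On the adjacent routers, the WCCP redirection just mentioned is configured along these lines; this is an IOS-style sketch, with the conventional TCP promiscuous service groups 61 and 62 and illustrative interface names.

! Enable the TCP promiscuous WCCP service groups
router(config)# ip wccp 61
router(config)# ip wccp 62
! Redirect LAN-to-WAN traffic entering the inside interface
router(config)# interface GigabitEthernet0/0
router(config-if)# ip wccp 61 redirect in
router(config-if)# exit
! Redirect WAN-to-LAN traffic entering the WAN-facing interface
router(config)# interface Serial0/0/0
router(config-if)# ip wccp 62 redirect in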

Figure 10-21 WAN Acceleration Between Data Center and Branch Office

Cisco WAAS technology deployment requires positioning of relevant application acceleration engine on either side of the WAN connection. The selection of an engine is based on performance characteristics, types of workloads, and deployment model. Refer to Chapter 20, "Cisco WAAS and Application Acceleration," for a more detailed discussion about Cisco WAAS technology.

Management of the network is at the core of its intelligence. Cisco Prime Data Center Network Manager (DCNM) manages Cisco Nexus and Cisco MDS families of switches through a single pane of glass. It provides visibility and control of the unified data center to uphold stringent service-level agreements. DCNM provides a robust set of features by streamlining the provisioning aspects of the Unified Fabric components. DCNM can set and automatically provision a policy on converged LAN and SAN environments. It dynamically monitors and takes actions when needed on both physical and virtual infrastructures. These factors contribute to the overall data center infrastructure reliability and uptime, thereby improving business continuity. Figure 10-22 depicts Cisco Prime DCNM dashboard.

Figure 10-22 Cisco Prime DCNM Dashboard

Software features provided by Cisco NX-OS can be deployed with Cisco DCNM, which leverages multiple dashboards for the ease of use and representation of the overall network topology. DCNM can also handle provisioning and monitoring aspects of FCoE deployment, including an end-to-end path containing a mix of traditional Fibre Channel and FCoE. Cisco DCNM provides automated discovery of the network components, keeping track of all physical and logical network device information. This discovery data can be imported to the change management systems for easier operations. Cisco DCNM has extensive reporting features that allow building custom reports specific to the operating environment. Organizations can also leverage reports available through the preconfigured templates.

Inter-Data Center Unified Fabric

Geographically dispersed data centers provide added application resiliency and enhanced application reliability. Network foundation to enable multi-data-center deployment must extend the network properties across the data centers participating in the distributed setup. Such properties are the extension of Layer 2, Layer 3, and storage connectivity between the data centers powered by the end-to-end Unified Fabric approach. At the same time, each data center must remain an autonomous unit of operation. Figure 10-23 depicts data center interconnect technology solutions.

Figure 10-23 Data Center Interconnect Methods

Multiple technologies, which lay the foundation for inter-data center Unified Fabric extension, are in use today. These could be long haul point-to-point links, as well as private or public MPLS networks. Layer 1 technologies in the form of DWDM or CWDM allow running any data, storage, or converged protocol on top. Layer 3 networks allow extending storage connectivity between disparate data center environments by leveraging Fibre Channel over IP protocol (FCIP), while Layer 2 bridged connectivity is achieved by leveraging Overlay Transport Virtualization (OTV).

Fibre Channel over IP

Disparate storage area networks are not uncommon. These could be artifacts of building isolated environments to address local needs, deploying storage environments with different administrative boundaries, or outcomes of acquisitions. As organizations leverage ubiquitous storage services, a need for interconnecting isolated storage networks while maintaining characteristics of performance, availability, and fault-tolerance becomes increasingly important. Disparate storage environments are most often located in topologies segregated from each other by Layer 3 routed networks. Fibre Channel over IP extends the reach of the Fibre Channel protocol across geographic distances by encapsulating Fibre Channel frames in outer IP headers for transport across interconnecting Layer 3 routed networks. Figure 10-24 depicts the principle of transporting Fibre Channel frames across IP transport.

Figure 10-24 FCIP Operation Principle

Leveraging TCP/IP protocol provides the FCIP guaranteed traffic delivery required for healthy storage network operation. Should packet drop occur, any packets carrying Fibre Channel payload are retransmitted following standard TCP/IP protocol operation. This lossless storage traffic behavior allows FCIP to extend native Fibre Channel connectivity across greater distances. This, however, cannot come at the expense of storage network performance. Intelligent control over TCP window size is needed to make sure long haul links are fully utilized while preventing excessive traffic loss and subsequent retransmission. As TCP window size is scaled up, the sustained network throughput increases, but so do the chances of packet drops, which in turn have a detrimental effect on FCIP storage traffic performance. Cisco FCIP capable platforms allow several features to maximize utilization of the intermediate Layer 3 links, such as leveraging TCP extensions for high performance (IETF RFC 1323) and scaling TCP window size up to 1 GB.

TCP/IP protocol traditionally achieves its reliability for traffic delivery by leveraging the acknowledgement mechanism for the received data segments. Acknowledgments are used to signal from the receiver back to the sender the receipts of contiguous in-sequence TCP segments. In case of packet loss, segments received after the packet loss occurs are not acknowledged and must be retransmitted. This causes unnecessary traffic load, network capacity "waste," and increased delays. Cisco implements a selective acknowledgement (SACK) mechanism where the receiver can selectively acknowledge TCP segments received after packet loss occurs. In this case the sender only needs to retransmit the TCP segment lost in transit, rather than all packets beyond the acknowledgement. The SACK mechanism is most effective in environments with long and large transmissions. SACK is not enabled by default on Cisco devices; it must be manually enabled if required by issuing the sack-enable CLI command under the FCIP profile configuration.

Large Maximum Transmission Unit size, or MTU, in the transit IP network can also greatly influence the performance characteristics of the storage traffic carried by FCIP. Support for jumbo frames can allow transmission of full-size 2148 bytes Fibre Channel frames without requiring fragmentation and reassembly at the IP level into numerous TCP/IP segments carrying a single Fibre Channel frame. Path MTU Discovery (PMTUD) mechanism can be invoked on both sides of the FCIP connection to make sure the effective maximum supported path MTU is automatically discovered. FCIP PMTUD is not automatically enabled on Cisco devices; it must be manually enabled if required by issuing the pmtu-enable CLI command under the FCIP profile configuration.
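Tying the two knobs above together, here is a minimal MDS-style configuration sketch. The IP addresses and profile/interface numbers are hypothetical, and the exact command syntax (for example, whether the tcp keyword prefixes the sack-enable and pmtu-enable commands) can vary by software release:

mds9500# configure terminal
mds9500(config)# fcip profile 10
mds9500(config-profile)# ip address 192.0.2.1
mds9500(config-profile)# tcp sack-enable
mds9500(config-profile)# tcp pmtu-enable
mds9500(config-profile)# exit
mds9500(config)# interface fcip 1
mds9500(config-if)# use-profile 10
mds9500(config-if)# peer-info ipaddr 198.51.100.1
mds9500(config-if)# no shutdown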

Deployment of Quality of Service features in the transit Layer 3 routed network is not a requirement for FCIP operation; however, overall storage performance will gain benefits from QoS policies operating at the network level. Some of the storage traffic types encapsulated inside the FCIP packets can be very sensitive to degraded network conditions, and even though FCIP can handle such conditions, such features are highly advised to make sure that FCIP traffic is subjected to the absolute minimum influence from transient network delays and congestion.

Along with performance characteristics, FCIP must also be designed following high availability and redundancy principles. Utmost attention should be paid to make sure the transit Layer 3 routed network between the sites interconnected by FCIP circuits conforms to the high degree of availability, reliability, and efficiency. Leveraging device, circuit, and path redundancy eliminates the single point of failure, while proper capacity management helps alleviate performance bottlenecks. FCIP allows provisioning numerous tunnels interconnecting disparate storage networks. These tunnels contribute to connectivity resiliency and allow failover. These redundant tunnels are represented as virtual E_port or virtual TE_port types, which are considered by the Fabric Shortest Path First (FSPF) protocol for equal-cost multipathing purposes. Just like regular inter-switch links, FCIP tunnels can be bound into port channels for simpler FSPF topology and efficient traffic loadbalancing operation. Figure 10-25 depicts the principle behind provisioning multiple redundant FCIP tunnels between disparate storage network environments leveraging port channeled connections.

Figure 10-25 Redundant FCIP Tunnels Leveraging Port Channeled Connections

FCIP protocol allows maintaining VSAN traffic segregation by building logically isolated storage environments on top of shared physical storage infrastructure. VSAN segregation is traditionally achieved through data-plane frame tagging. This frame tagging is transparently carried over FCIP tunnels to extend the isolation. Each VSAN, either local or extended over the FCIP tunnel, exhibits the same characteristics in regard to routing, zoning, fabric services, and fabric management for ubiquitous deployment methodology.

At the time of writing this book, FCIP protocol is available on Cisco MDS 9500 modular switches by leveraging IP Storage Services Module (IPS) and on the Cisco MDS 9250i Multiservice Fabric Switch.

Overlay Transport Virtualization

Certain mechanisms for Layer 2 extension across data centers, or in some cases across PODs of the same data center, introduce challenges that can inhibit their practical use. Spanning Tree Protocol–based extension topologies do not scale well and can cause cascading failures that can propagate beyond the boundaries of a single availability domain. They fall short of making use of multipathing topologies by logically blocking redundant links.

A more efficient protocol is needed to leverage all available paths for traffic forwarding, as well as provide intelligent traffic loadbalancing across the data center interconnect links. Other Layer 2 data center interconnect technologies offer a more efficient model than Spanning Tree Protocol–based topologies; however, they depend on specific transport network characteristics. The best example is MPLS-based technologies that require label switching between the data center networks for Layer 2 extension. Such a solution can be challenging for organizations not willing to develop operational knowledge around those technologies. Let's now look at a possible alternative.

Cisco Overlay Transport Virtualization (OTV) technology provides nondisruptive, transport agnostic, multihomed, and multipathed Layer 2 data center interconnect technology that preserves isolation and fault domain properties between the disparate data centers, which greatly reduces the risk often associated with large-scale Layer 2 extensions. OTV leverages a MAC-in-IP forwarding paradigm to extend Layer 2 connectivity over any IP transport. Figure 10-26 depicts basic OTV traffic forwarding principle where Layer 2 frames are forwarded between the sites attached to OTV overlay.

Figure 10-26 OTV Packet Forwarding Principle

OTV simulates the behavior of traditional bridged environments while, at the same time, introducing improvements. OTV leverages the control plane protocol in the form of an IS-IS routing protocol to disseminate MAC address reachability across the OTV edge devices participating in the overlay. Example 10-4 shows IS-IS neighbor relations established between the OTV edge devices through the overlay.

Example 10-4 IS-IS Neighbor Relationships Between OTV Edge Devices

Click here to view code image

OTV_EDGE1_SITEA# show otv isis adjacency
OTV-IS-IS process: default VPN: Overlay1
OTV-IS-IS adjacency database:
System ID       SNPA            Level  State  Hold Time  Interface
NEXUS7K-EDGE2   001b.54c2.43c1  1      UP     00:00:25   Overlay1
OTV_EDGE2_SITE  001b.54c2.43c3  1      UP     00:00:27   Overlay1
OTV_EDGE_SITE3  001b.54c2.43c4  1      UP     00:00:07   Overlay1

This approach is different from the traditional bridged environments, which utilize data-plane learning or flood-and-learn behavior. After MAC addresses are learned, local OTV edge devices hold the mapping between remote host MAC addresses and the IP address of the remote OTV edge device. OTV edge devices leverage the underlying IP transport to carry original Ethernet frames encapsulated in IP protocol between the data centers. The actual hosts on both sides of the connection are unaware of OTV, because the transport network is transparently forwarding their traffic. Example 10-5 shows a MAC address table on an OTV edge device, where you can see locally learned MAC addresses as well as MAC addresses available through the overlay.

Example 10-5 MAC Address Table Showing Local and OTV Learned Entries

Click here to view code image

OTV_EDGE1_SITEA# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen,+ - primary entry using vPC Peer-Link
   VLAN     MAC Address      Type      age     Secure NTFY    Ports
---------+-----------------+--------+---------+------+----+------------------
G    -      001b.54c2.43c2   static       -      F     F    sup-eth1(R)
*  110      0000.6e01.010a   dynamic      0      F     F    Po1
*  110      0000.6e01.016c   dynamic      0      F     F    Po1
*  110      0000.6e01.016d   dynamic      0      F     F    Po1
*  110      0000.0c07.ac6e   dynamic      0      F     F    Po1
O  110      0000.6e02.020a   dynamic      0      F     F    Overlay1
O  110      0000.6e02.020b   dynamic      0      F     F    Overlay1
O  110      0000.6e02.026c   dynamic      0      F     F    Overlay1
O  110      0000.6e02.026d   dynamic      0      F     F    Overlay1

Control plane protocol leveraged by OTV includes other Layer 2 semantics, such as VLAN ID, which allow maintaining traffic segregation across the data center boundaries. Control plane protocol offers intelligence above and beyond the rudimentary behavior of flood-and-learn techniques and thus is instrumental in implementing OTV value-added features to enhance and safeguard the inter-data-center Layer 2 environment. Such features stretch across loop-prevention, multipathing, and multihoming mechanisms. The use of overlay in OTV creates point-to-cloud type connectivity rather than a collection of fully meshed point-to-point links, which are difficult to set up, manage, and scale. An efficient network core is essential to deliver optimal application performance across the OTV overlay. As mentioned earlier, OTV can safeguard the Layer 2 environment across the data centers or across IP-separated PODs of the same data center by employing proven Layer 3 techniques, and enable First Hop Routing Protocol localization and Address Resolution Protocol (ARP) proxying without creating an additional operational overhead. Let's now look a little bit deeper into the main benefits achieved with OTV technology.

Fault isolation and site independence: Even though local spanning tree topologies are unaware of OTV, OTV does introduce a boundary for the Spanning Tree Protocol, so spanning tree domains in different data centers keep operating independently of each other. Spanning Tree Bridged Protocol Data Units (BPDU) are not forwarded over the OTV overlay. OTV does not allow unknown unicast traffic across the overlay, unless specially allowed by the administrator, and ARP traffic is forwarded only in a controlled manner by offering local ARP response behavior. These behaviors create site containment similar to one available with Layer 3 routing. OTV's built-in loop prevention mechanisms avoid loops in the overlay and limit the scope of the loop, should one be formed in the locally attached Layer 2 environments.

Little to no effect on existing network design: OTV leverages the IP network to transport original MAC encapsulated frames. Because OTV is transport agnostic, it can be seamlessly deployed on top of any existing IP infrastructure. OTV can operate over unicast-only or multicast-enabled core. Any network capable of carrying IP packets can be used for OTV core, with the exception being multicast-based deployment, where multicast-enabled core is required. In case of OTV operating over multicast-enabled core, there is a requirement to run multicast routing in the underlying transit network. OTV can even be deployed over existing Layer 2 data center interconnect solutions, such as vPC. If deployed over vPC, OTV can, in time, be migrated from vPC to routed links for more streamlined infrastructure, where it can bring the advantages of fault containment and improved scale. OTV is also fully transparent to the local Layer 2 domains, where hosts and switches not participating in OTV do not have any knowledge of OTV. Local spanning tree topologies are unaffected too.

Optimal bandwidth utilization and scalability: OTV can make use of multiple edge devices better utilizing all available ingress and egress points in the data center through the built-in multipathing behavior. OTV employs the active-active mode of operation at the edge of the overlay where all ingress and egress points forward traffic simultaneously. Use of multiple ingress and egress points does not require a complex deployment and operational model, as with some of the other Layer 2 data center interconnect technologies. Multipathing is an important characteristic of OTV to maximize the utilization of available bandwidth. Multipathing also applies to the traffic as it crosses the IP transit network, where ECMP behavior can utilize multiple paths and links between the data centers.

Fast failover and flexibility: OTV control plane protocol, IS-IS, dictates remote MAC address reachability. Link failure detection in the core is based on routing protocols, which can rapidly reroute around failed sections of the network. IS-IS can leverage its own hello/timeout timers, or it can leverage BFD for very rapid failure detection. One of the flexibility attributes of OTV comes from its deployment model at the data center aggregation layer, which provides a very convenient topological placement to address the needs of the Layer 2 extension across the data centers. Another flexibility attribute of OTV comes from its incremental deployment characteristics.

Optimized operations: OTV control plane protocol allows the simple addition and removal of OTV sites from the overlay. OTV edge devices express their desire to join the overlay by sending IS-IS hello messages to the multicast control group, which are replicated by the transit network and delivered to all other OTV edge devices that are part of the same overlay (listening to the same multicast group). This behavior automatically establishes IS-IS neighbor relations across the shared medium without the need to define all remote OTV edge devices. Because control plane protocol is used to disseminate MAC address reachability information, OTV also leverages dynamic encapsulation, rather than tunneling, so OTV edge devices do not need to maintain a state of tunnels interconnecting data centers. Finally, the convenience of operations comes from the greatly simplified command-line interface needed to enable and configure OTV service on the device. This great simplicity does not come on the account of deep level troubleshooting that is still available on the platforms supporting OTV functionality.

At the time of writing this book, OTV technology is available on Cisco Nexus 7000 switches, Cisco ASR 1000 routers, and Cisco CSR 1000v software router platforms.
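As an illustration of the CLI simplicity mentioned above, a minimal multicast-mode OTV edge configuration on NX-OS could look as follows. The VLAN numbers, site identifier, join interface, and multicast group addresses are hypothetical placeholders:

nexus7000(config)# feature otv
nexus7000(config)# otv site-vlan 99
nexus7000(config)# otv site-identifier 0x1
nexus7000(config)# interface Overlay1
nexus7000(config-if-overlay)# otv join-interface ethernet 1/1
nexus7000(config-if-overlay)# otv control-group 239.1.1.1
nexus7000(config-if-overlay)# otv data-group 232.1.1.0/28
nexus7000(config-if-overlay)# otv extend-vlan 100-110
nexus7000(config-if-overlay)# no shutdown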

Locator ID Separation Protocol

Connectivity requirements between data center locations depend on the nature of the applications running in these data centers. Most commonly deployed applications fall into one of the following categories: routable applications, nonroutable applications, clustered applications, distributed applications, and virtualized applications.

For routable applications, nothing more than an IP network is required to make sure application components can communicate with each other. Nonroutable applications typically expect to be placed on a shared transmission medium, such as Ethernet, and as such require data center interconnect to exhibit Layer 2 connectivity properties. Clustered applications are a "mixed bag," with some being able to operate over IP-routed network and some requiring Layer 2 adjacency, so data center interconnect requirements can be different based on specific clustered application needs. Distributed applications can typically make use of the IP network, with one example being Hadoop for Big Data distributed computation tasks. Finally, virtualized applications reside on the virtual hypervisors, such as Xen, KVM, Hyper-V, or ESXi. These applications make use of server virtualization technologies and leverage server virtualization properties, such as virtual machine mobility. As modern data centers become increasingly virtualized, the need to cater to virtualized application needs also increases, even though at the time of writing the book only about 25% of total servers are estimated to be running a hypervisor virtualization layer.

Cross-data center workload mobility, distributed application and server clustering, dynamic resourcing, fault tolerance, and simplified disaster recovery are some of the main use cases behind Layer 2 extension. Ultimately, Layer 2 data center interconnect is not a requirement; however, it can be very beneficial in numerous cases.

The traditional network forwarding paradigm does not distinguish between the identity of an endpoint (virtual machine) and its location in the network topology. Endpoint mobility can cause inefficient traffic-forwarding patterns as a traditional network struggles to deliver packets across the shortest path toward an endpoint, which changes its location. As virtual machines move around the network due to administrative actions, various failure conditions, or resource constraints, the network must provide ubiquitous and optimized connectivity services across the entire mobility domain. Figure 10-27 depicts traffic tromboning cases where a virtual machine migrates from Site A to Site B; however, ingress traffic is still delivered to Site A because this is where the virtual machine's IP subnet is still advertised out of.

Figure 10-27 Traffic Tromboning Case

A better mechanism is needed to leverage network intelligence, which is based on a true endpoint location in the topology. LISP, part of Cisco Unified Fabric holistic architecture, provides a scalable method for network traffic routing. By decoupling endpoint identifiers (EID) from the Routing Locators (RLOC), LISP accommodates virtual machine mobility while maintaining shortest path forwarding through the infrastructure. EIDs, as the name suggests, are the identifiers for the endpoints in the form of either a MAC or IP address, while RLOCs are the identifiers of the edge devices on both sides of a LISP-optimized environment (most likely a loopback interface IP address on the edge router or data center aggregation switch). These edge devices are called tunnel routers. They perform LISP encapsulation sending the traffic from Ingress Tunnel Router (ITR) to Egress Tunnel Router (ETR) on its way toward the destination endpoint. LISP routers providing both ITR and ETR functionality are called xTRs. To integrate a non-LISP environment into the LISP-enabled network, proxy tunnel routers are used (PITR and PETR). LISP routers providing both PITR and PETR functionality are called PxTRs. Figure 10-28 shows ITR, ETR, and the proxy tunnel router topology.

Figure 10-28 LISP Tunnel Routers

To derive the true location of an endpoint, a mapping is needed between the EID that represents the endpoint and the RLOC that represents a tunnel router that this endpoint is "behind." This mapping is called an EID-to-RLOC mapping database. Unlike the approach in conventional databases, LISP mapping entries are distributed rather than centralized, with each tunnel router only maintaining entries needed in a particular given time for communication. These entries are derived from the LISP query process and are cached for a certain period of time before being flushed, unless refreshed. Full mapping database is distributed among all tunnel routers in the LISP environment.

Catering to the virtual machine mobility case, LISP enables a virtual machine to change its network location, while keeping its assigned IP addresses. The endpoint that moves to another subnet or across the same subnet extended between the data centers leveraging technology, such as OTV, is called a roaming device. Endpoint mobility events can occur within the same subnet, known as LISP Extended Subnet Mode (ESM), or across subnets, known as LISP Across Subnet Mode (ASM). ESM leverages Layer 2 data center interconnect to extend VLANs and subsequently subnets across multiple data centers. This is where OTV can be leveraged. ASM assumes that there is no VLAN and subnet extension between the data centers. This mode is best suited for cold migration and disaster recovery scenarios where there is no Layer 2 data center interconnect between the sites.

When endpoints roam, the LISP first-hop router must detect such an event. It compares the source IP address of the received packet to the range of EIDs defined for the interface on which this data packet was received. If matched, first-hop router updates the LISP map server with a new RLOC for the EID. It also updates the tunnel and proxy tunnel routers that have cached this information.

LISP provides an exciting new mechanism for organizations to improve routing system scalability and introduce new capabilities to their networks. Versatility of LISP allows it to be leveraged in a variety of solutions:

Simplified and cost-effective multihoming, including ingress traffic engineering
IP address portability, including no renumbering when changing providers or adding multihoming
IP address (endhost) mobility, including session persistence across mobility events
IPv6 transition simplification, including incremental deployment of IPv6 using existing IPv4 infrastructure (or IPv4 over IPv6)
Simplified multitenancy and large-scale VPNs
Operation and network simplification
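To give a rough sense of what an xTR configuration involves, here is a minimal NX-OS-style sketch. The EID prefix, RLOC address, map-server/map-resolver addresses, and key name are all hypothetical, and syntax differs across platforms and releases:

nexus7000(config)# feature lisp
nexus7000(config)# ip lisp itr-etr
! Register the local EID prefix with its RLOC (priority/weight steer ingress traffic)
nexus7000(config)# ip lisp database-mapping 10.1.0.0/16 192.0.2.1 priority 1 weight 100
nexus7000(config)# ip lisp itr map-resolver 203.0.113.10
nexus7000(config)# ip lisp etr map-server 203.0.113.10 key lisp-shared-key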

At the time of writing this book, LISP is supported on Nexus 7000 series switches with F3 and N7K-M132XP-12 line cards, ISR routers, ASR 1000 routers, and CSR 1000v software routers.

Reference List

Unified Fabric: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/unified-fabric/unified_fabric.html
Fibre Channel over IP: http://www.cisco.com/c/en/us/tech/storage-networking/fiber-channel-over-ip-fcip/index.html
Overlay Transport Virtualization: http://www.cisco.com/go/otv
Locator ID Separation Protocol: http://www.cisco.com/go/lisp

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topics icon in the outer margin of the page. Table 10-2 lists a reference of these key topics and the page numbers on which each is found.

Table 10-2 Key Topics for Chapter 10

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

Nexus
Multilayer Director Switch (MDS)
Unified Fabric
Clos
Spine-Leaf
Fibre Channel over Ethernet (FCoE)
local-area network (LAN)
storage-area network (SAN)
Fibre Channel
Unified Port
converged network adapter (CNA)
Internet Small Computer Interface (iSCSI)
NX-OS
Spanning Tree

virtual port channel (vPC)
FabricPath
Virtual Extensible LAN (VXLAN)
Virtual Network Identifier (VNID)
Virtual Supervisor Module (VSM)
Virtual Ethernet Module (VEM)
Virtual Security Gateway (VSG)
virtual ASA
vPath
Wide Area Application Services (WAAS)
TCP flow optimization (TFO)
data redundancy elimination (DRE)
Web Cache Communication Protocol (WCCP)
Data Center Network Manager (DCNM)
Overlay Transport Virtualization (OTV)
Locator ID Separation Protocol (LISP)

Chapter 11. Data Center Bridging and FCoE

This chapter covers the following exam topics:

Components and Operation of the IEEE Data Center Bridging Standards
FCoE Principles
FCoE Connectivity Options
FCoE and Fabric Extenders
FCoE and Server Adapter Virtualization

Running Fibre Channel protocol on top of Data Center Bridging enhanced Ethernet is one of the key principles of data center Unified Fabric architecture. It has a profound impact on the data center economics in a form of lower capital expenditure (CAPEX) and operational expense (OPEX) by consolidating traditionally disparate physical environments of network and storage into a single, highly performing and robust fabric. FCoE preserves current operational models where particular technology domain expertise is leveraged to achieve deployments conforming to best practices for both network and storage environments. In this chapter we also observe how coupling of FCoE with Cisco Fabric Extender technologies allows building even larger fabrics for ever-increasing scale demands.

"Do I Know This Already?" Quiz

The "Do I Know This Already?" quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the "Exam Preparation Tasks" section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 11-1 lists the major headings in this chapter and their corresponding "Do I Know This Already?" quiz questions. You can find the answers in Appendix A, "Answers to the 'Do I Know This Already?' Quizzes."

Table 11-1 "Do I Know This Already?" Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which two of the following are part of the Data Center Bridging standards? (Choose two.)
a. ETS
b. LDP
c. FCF
d. PFC
e. SCSI

2. What function does Priority Flow Control provide?
a. It provides lossless Ethernet service by pausing traffic based on DSCP value.
b. It provides lossless Ethernet service by pausing traffic based on Class of Service value.
c. It provides lossless Ethernet service by pausing traffic based on MTU value.
d. It is a configuration exchange protocol to negotiate Class of Service value for the FCoE traffic.

3. What is a VN_Port?
a. It is a port on the FCoE Fabric Extenders toward the FCoE server.
b. It is a virtual port used to internally attach the FCoE Fibre Channel Forwarder to the Ethernet switch.
c. It is a virtual port on the FCoE endpoint connected to the VF_Port on the switch.
d. It is an invalid port type.
e. None of the above.

4. Which of the following allows 40 Gb Ethernet connectivity by leveraging a single pair of multimode fibers?
a. QSFP-40G-SR4
b. QSFP-40G-SR-BD
c. FET-40G
d. QSFP-H40G-CU

5. What is FPMA?
a. It is First Priority Multicast Algorithm used by the FCoE endpoints to select the FCoE Fibre Channel Forwarder for a fabric login.
b. It is a MAC address dynamically assigned to the Converged Network Adapter by the FCoE Fibre Channel Forwarder during the fabric login.
c. It is a MAC address assigned to the Converged Network Adapter by the manufacturer.
d. It is a MAC address of the FCoE Fibre Channel Forwarder.

6. What is an FET?
a. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to either the upstream parent switches or the servers.
b. It is a Fabric Extender Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.
c. It is a Fault-Tolerant Extended Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches in the most reliable manner.
d. It is a Fully Enabled Transceiver that can be used to connect Cisco Nexus 2000 Series Fabric Extenders to the upstream parent switches.

7. Is FCoE supported in Enhanced vPC topology?
a. Yes
b. No
c. Yes, but only when FCoE traffic on Cisco Nexus 2000 Series FCoE Fabric Extender is pinned to a single Cisco Nexus 5000 Series parent switch
d. Yes, but only when FIP is not used

8. Which two of the following Cisco Nexus 2000 Series Fabric Extenders support FCoE? (Choose two.)
a. 2248TP
b. 2224TP
c. 2232PP
d. 2232TM-E

9. What is Adapter-FEX?
a. It is required to run FCoE on the Fabric Extender.
b. It is the name of the Cisco next-generation Fabric Extender.
c. It is the name of the transceiver used to connect Fabric Extender to the parent switch.
d. It is a feature for virtualizing the network adapter on the server.

10. What type of port profile is used to dynamically instantiate vEth interface on the Cisco Nexus 5000 Series switch?
a. dynamic-interface
b. vethinterface
c. vethernet
d. profile-veth

Foundation Topics

Data Center Bridging

Due to the lack of Ethernet protocol support for transport characteristics required for successful storage-area network implementation, consolidation of network and storage traffic over the same Ethernet network requires special considerations. Specifically, it requires the capability to provide lossless traffic delivery, as well as necessary bandwidth and traffic priority services. The lack of those characteristics poses no concerns for the network traffic; however, to make Ethernet a viable transport for storage traffic, Ethernet protocol must be enhanced. Enhancements required for a successful Unified Fabric implementation, where network and storage traffic are carried over the same Ethernet links, are defined under the Data Center Bridging standards defined by the IEEE. These are Priority Flow Control (PFC) defined by IEEE 802.1Qbb, Enhanced Transmission Selection (ETS) defined by IEEE 802.1Qaz, Quantized Congestion Notification (QCN) defined by IEEE 802.1Qau, and Data Center Bridging Exchange (DCBX) defined by IEEE 802.1AB (LLDP) with TLV extensions.

Priority Flow Control

Ethernet protocol provides no guarantees for traffic delivery. If any Ethernet frame carrying data is dropped, it is up to the higher-level protocols, such as TCP, to retransmit the lost portions. Ethernet defines the flow control mechanism under the IEEE 802.3X standard, in which flow control operates on the entire physical link level; that is, it cannot differentiate between different types of traffic forwarded through the link, pausing either all or none of them. When such per-physical link PAUSE frames are triggered, they indiscriminately cause buffering for types of traffic that otherwise would rely on TCP protocol for congestion management and might be better off dropped.

PFC enables the Ethernet flow control mechanism to operate on the per-802.1p Class of Service level, thus selectively pausing at Layer 2 only traffic requiring guaranteed delivery, such as FCoE. All other traffic is subject to regular forwarding and queuing rules. PFC provides lossless Ethernet transport services for FCoE traffic, which are essential because Fibre Channel frames encapsulated within FCoE packets expect lossless fabric behavior. With PFC, PAUSE frames are signaled back to the previous hop DCB-capable Ethernet switch for the negotiated FCoE Class of Service (CoS) value (Class of Service value of 3 by default). PAUSE frames are transmitted when the receive buffer allocated to a no-drop queue on the current DCB-capable switch is exhausted; without them, the queue becomes full, and hence lossless behavior can no longer be guaranteed. Upon receipt of a PAUSE frame, the previous hop DCB-capable switch pauses its transmission of the FCoE frames, enabling the current DCB-capable switch to "drain" its receive buffer. This process is known as a back pressure, and it can be triggered hop-by-hop from the point of congestion toward the source of the transmission (FCoE initiator or target). Figure 11-1 demonstrates the principle of how PFC leverages PAUSE frames to pause transmission of FCoE traffic over a DCB-enabled Ethernet link.

Figure 11-1 Priority Flow Control Operation

Enhanced Transmission Selection

Enhanced Transmission Selection (ETS) enables optimal QoS strategy for links between DCB-capable switches and a bandwidth management of virtual links. ETS creates priority groups by allowing differentiated treatment for traffic belonging to the same Priority Flow Control class of service, based on bandwidth allocation, low latency, or best effort. For example, such bandwidth allocation allows assigning 8 Gb to the storage traffic and 2 Gb to the network traffic on the 10 Gb shared Ethernet link.
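A minimal NX-OS queuing sketch of such an allocation follows. The class names class-fcoe and class-default are the standard system classes, while the policy name and the 80/20 split (mirroring the 8 Gb/2 Gb example) are arbitrary choices for illustration:

nexus5000(config)# policy-map type queuing custom-ets
nexus5000(config-pmap-que)# class type queuing class-fcoe
nexus5000(config-pmap-c-que)# bandwidth percent 80
nexus5000(config-pmap-que)# class type queuing class-default
nexus5000(config-pmap-c-que)# bandwidth percent 20
nexus5000(config)# system qos
nexus5000(config-sys-qos)# service-policy type queuing output custom-ets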

Bandwidth allocations made by ETS are not hard allocations, meaning that traffic classes are allowed to use bandwidth allocated to a different traffic class, until that traffic class demands the bandwidth. This approach allows supporting bursty traffic patterns while still maintaining bandwidth guarantees. ETS leverages advanced traffic scheduling features by providing guaranteed minimum bandwidth to the FCoE and high-performance computing traffic. At times, traffic conditions called incasts can be encountered. With incast, traffic arriving on multiple ingress interfaces on a given switch is attempted to be sent out through a single switch egress interface. If the amount of ingress traffic exceeds the capacity of the switch egress interface, special care is needed to maintain lossless link characteristics. Figure 11-2 demonstrates traffic bandwidth allocation carried out by the ETS feature.

Figure 11-2 Enhanced Transmission Selection Operation

The default behavior of Cisco switches supporting DCB standards, such as Cisco Nexus 5000 Series switches, is to assign 50% of the interface bandwidth to FCoE no-drop class.

Data Center Bridging Exchange

Data Center Bridging Exchange (DCBX) is a negotiation and exchange protocol between FCoE initiator/target and a DCB-capable switch or between two DCB-capable switches. It is an extension of the Link Layer Discovery Protocol (LLDP) defined under the IEEE 802.1AB standard. Parameters exchanged using DCBX protocol are encoded in Type-Length-Value (TLV) format. Figure 11-3 demonstrates how DCBX messages are exchanged hop-by-hop across the DCB-enabled links between two adjacent DCB-capable switches.

Figure 11-3 Data Center Bridging Exchange Operation

DCBX performs the discovery of adjacent DCB-capable switches, detects configuration mismatches, and negotiates DCB link parameters. DCBX plays a pivotal role in making sure behaviors such as Priority Flow Control, Enhanced Transmission Selection, and logical link up/down are successfully negotiated.

Quantized Congestion Notification

As discussed earlier, PFC enables you to throttle down transmission for no-drop traffic classes, such as FCoE. As PFC is invoked, a back-pressure is applied from the point of congestion toward the source of transmission (FCoE initiator or target) in a hop-by-hop fashion. This approach maintains feature isolation and affects only the parts of the network causing the congestion. Quantized Congestion Notification mechanism, defined in the IEEE 802.1Qau standard, takes a different approach of applying back-pressure from the point of congestion directly toward the source of transmission without involving intermediate switches, where notification messages are sent from the point of congestion in the FCoE fabric directly to the source of transmission (multihop FCoE behavior, including Network Interface Virtualization, is discussed in Chapter 12, "Multihop Unified Fabric"). Figure 11-4 demonstrates the principle behind the QCN feature operation.

Figure 11-4 Quantized Congestion Notification Operation

FCoE protocol operates at Layer 2 as Ethernet encapsulated Fibre Channel frames are forwarded between initiators and targets across the FCoE Fibre Channel Forwarders along the path. Even though Layer 2 operation implies bridging, the MAC address swapping behavior occurring at FCoE Fibre Channel Forwarders makes its operation resemble routing more than bridging.

When the FCoE endpoint sends FCoE frames to the network, these frames include the source MAC address of the endpoint and the destination MAC address of the FCoE Fibre Channel Forwarder. When these frames reach the FCoE FCF, frames are deencapsulated: Ethernet headers are removed, and a Fibre Channel forwarding decision is taken on the exposed Fibre Channel frame. Subsequently, the Fibre Channel frames are reencapsulated with Ethernet headers, while source and destination MAC addresses assigned are the MAC addresses of the current and next-hop FCoE FCF or the frame target, respectively. Figure 11-5 demonstrates the swap of outer Ethernet encapsulation MAC addresses as the FCoE frame is forwarded between initiator and target.

Figure 11-5 FCoE Forwarding MAC Address Swap

This MAC address swap occurs for all FCoE frames passing through the FCoE Fibre Channel Forwarders along the path. Because those FCoE frames no longer contain the original MAC address of the initiator, it is impossible for the intermediate FCoE Fibre Channel Forwarders to generate QCN messages directly to the source of transmission to throttle down its FCoE transmission into the network, unless all FCoE Fibre Channel Forwarders along the path from the point of congestion toward the source of transmission participate in the QCN process. Participation of all FCoE Fibre Channel Forwarders in the QCN process brings little value beyond Priority Flow Control, and as such, Cisco Nexus 5000 Series DCB-capable switches do not implement the QCN feature.

Fibre Channel Over Ethernet

Over the years, Fibre Channel protocol has established itself as the most reliable and best performing protocol for storage traffic transport. The idea behind FCoE is simple: run Fibre Channel protocol frames encapsulated in Ethernet headers. It leverages the same fundamental characteristics of the traditional Fibre Channel protocol, while allowing the flexibility of running it on top of the ever-increasing high-speed Ethernet fabric. Figure 11-6 demonstrates the principle of encapsulating the original Fibre Channel frame with Ethernet headers for transmission across the Ethernet network.

Figure 11-6 Fibre Channel Encapsulated in Ethernet

The FC-BB-5 working group of INCITS Technical Committee T11 developed the FCoE standard. FCoE is one of the options available for storage traffic transport.

Figure 11-7 depicts the breadth of storage transport technologies.

Figure 11-7 Storage Transport Technologies Landscape

FCoE protocol is the essential building block of the Cisco Unified Fabric architecture for converging network and storage traffic over the same Ethernet network. The capability to converge LAN and SAN traffic over a single Ethernet network is not trivial, because network and storage environments are governed by different principles and have different characteristics, which must be preserved.

FCoE Logical Endpoint

The FCoE Logical Endpoint (FCoE_LEP) is responsible for the encapsulation and deencapsulation functions of the FCoE traffic. FCoE_LEP has the standard Fibre Channel layers, starting with FC-2 and continuing up the Fibre Channel Protocol stack. Figure 11-8 depicts the mapping between the layers of a traditional Fibre Channel protocol and the FCoE.

Figure 11-8 Fibre Channel and FCoE Layers Mapping

As you can see in Figure 11-8, FCoE does not alter higher levels and sublevels of the Fibre Channel protocol stack, which enables it to be ubiquitously deployed while maintaining operational, administrative, and management models of the traditional Fibre Channel protocol. To higher-level system functions, FCoE network appears as a standard Fibre Channel network. This compatibility enables all the tools used in the native Fibre Channel environment to be used in the FCoE environment.

FCoE Port Types

Similar to the traditional Fibre Channel protocol, FCoE operation defines several types of ports. Each port type plays a specific role in the overall FCoE solution. Figure 11-9 depicts FCoE and traditional Fibre Channel port designation for the single hop storage topology.

Figure 11-9 FCoE and Fibre Channel Port Types in Single Hop Storage Topology

FCoE ENodes could be either initiators in the form of servers or targets in the form of storage arrays. Each one presents a VN_Port port type connected to the VF_Port port type on the first hop upstream FCoE switch, such as the Cisco Nexus 5000 Series switch. These first hop switches can operate in either full Fibre Channel switching mode or the FCoE-NPV mode (more on that in Chapter 12). This compares closely with the traditional Fibre Channel environment where initiators and targets present the N_Port port type connected to the F_Port port type on the first-hop upstream Fibre Channel switch, such as the Cisco MDS 9000 Series switch.

FCoE ENodes

FCoE ENodes are the combination of FCoE Logical Endpoints (FCoE_LEP) and Fibre Channel protocol stack on the Converged Network Adapter. You can think of them as equivalent to Host Bus Adapters (HBA) in the traditional Fibre Channel world. FCoE ENode creates at least one VN_Port toward the first-hop FCoE access switch. Each VN_Port has a unique ENode MAC address, which is dynamically assigned to it by the FCoE Fibre Channel Forwarder on the upstream FCoE switch during the fabric login process. The dynamically assigned MAC address is also known as the fabric-provided MAC address (FPMA). An ENode also has a globally unique burnt-in MAC address, which is assigned to it by the CNA vendor. Remember, although Ethernet is used as a transport for the Fibre Channel traffic, it does not change the Fibre Channel characteristics, including Fibre Channel addressing.

FPMA is 48 bits in length and consists of two concatenated 24-bit fields. The first field is called FCoE MAC address prefix, or FC-MAP, and it is defined by the FC-BB-5 standard as a range of 256 prefixes. The second field is called FC_ID, and it is basically a Fibre Channel ID assigned to the CNA during the fabric login process. Figure 11-10 depicts the structure of the FPMA MAC address.
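To make the concatenation concrete, here is a small worked example. It uses the default FC-MAP prefix discussed in the next section, while the FC_ID value is a hypothetical login result:

FC-MAP (upper 24 bits):  0E:FC:00           (default prefix)
FC_ID  (lower 24 bits):  01:0A:00           (Fibre Channel ID 0x010A00 assigned at fabric login)
FPMA   (48-bit MAC):     0E:FC:00:01:0A:00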

Figure 11-10 Fabric-Provided MAC Address

Cisco has established simple best practices that make manipulation of FC-MAP values beyond the default value of 0E-FC-00 unnecessary. Cisco strongly recommends not mapping multiple Fibre Channel fabrics into the same Ethernet FCoE VLAN. This will ensure the uniqueness of an FPMA address without the need for multiple FC-MAP values. With only default FC-MAP, a challenging situation can occur if each fabric allocates the same FCID to its respective FCoE ENodes. For example, if two Fibre Channel fabrics are mapped to the same Ethernet FCoE VLAN, FCoE ENodes across the two fabrics will end up with identical FPMA values; in essence, it's like having two servers with the same MAC address in the same VLAN. To resolve this situation, you can leverage two different FC-MAP values to maintain FPMA uniqueness. The 256 FC-MAP values are still available in case of a nonunique FC_ID address, to make sure that the overall FPMA address remains unique across the FCoE fabric.

Fibre Channel Forwarder

Simply put, FCoE Fibre Channel Forwarder (FCF) is a logical Fibre Channel switch entity residing in the FCoE-capable Ethernet switch. When Fibre Channel frames are encapsulated and transported over the Ethernet network in a converged fashion, FCoE-capable Ethernet switches must have the proper intelligence to enforce Fibre Channel protocol characteristics along the forwarding path. FCoE Fibre Channel Forwarder is responsible for the host fabric login, as well as the encapsulation and deencapsulation of the FCoE traffic. One or more FCoE_LEPs are used to attach the FCoE Fibre Channel Forwarder internally to the Ethernet switch. Figure 11-11 depicts a relationship between FCoE Fibre Channel Forwarder, the FCoE_LEP, and the Ethernet switch.

Figure 11-11 Fibre Channel Forwarder, FCoE_LEP, and Ethernet Switch

FCoE Encapsulation

As fundamentally Ethernet traffic, FCoE packets must be exchanged in a context of a VLAN. Just as virtual LANs create logical separation for the network traffic, so do virtual SANs by creating logical separation for the storage traffic.

With FCoE, IEEE 802.1Q tags are used to segregate between network and FCoE VLANs on the wire. Cisco Nexus 5000 Series switches enable mapping between FCoE VLANs and Fibre Channel VSANs, where all FCoE packets tagged with specific VLAN ID are assumed to belong to the corresponding Fibre Channel VSAN. The Cisco Nexus 5000 Series switches implementation also expects all VLANs carrying FCoE traffic to be exclusively used for this purpose; that is, no other types of traffic, such as IP network traffic, should be carried over these VLANs. Both ends of the FCoE link must agree on the same FCoE VLAN ID, and the incorrectly tagged frames are simply discarded. Figure 11-12 depicts FCoE frame format.

Figure 11-12 FCoE Frame Format

Let's take a closer look at some of the frame header fields. Source and Destination MAC Address defines Layer 2 connectivity addressing for FCoE frames. Each field is 48 bits in length. The IEEE 802.1Q tag is 4 bytes in length, and it serves the same purpose as in traditional Ethernet network; that is, it enables creation of multiple virtual network topologies across a single physical network infrastructure. Inside the IEEE 802.1Q tag, the IEEE 802.1Q priority field is used to indicate the Class of Service marking to provide an adequate Quality of Service as FCoE frames traverse the Ethernet links. FCoE leverages EtherType value of 0x8906, specially assigned by IEEE for the protocol to indicate Fibre Channel payload. Start of Frame (SOF) and End of Frame (EOF) fields indicate the start and end of the encapsulated Fibre Channel frame, respectively. Each field is 8 bits in length. Reserved fields have been inserted to allow future protocol extensibility. The IEEE 802.3 standard defines the minimum-length Ethernet frame as 64 bytes (including Ethernet headers). If the Fibre Channel payload and all the encapsulating headers result in an Ethernet frame size smaller than 64 bytes, reserved fields can also be leveraged for padding.
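Returning to the FCoE VLAN-to-VSAN mapping described above, a minimal Cisco Nexus 5000 Series sketch follows. The VLAN/VSAN numbers and the Ethernet interface bound to the virtual Fibre Channel interface are hypothetical:

nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 100
nexus5000(config-vsan-db)# exit
nexus5000(config)# vlan 100
nexus5000(config-vlan)# fcoe vsan 100
nexus5000(config-vlan)# exit
nexus5000(config)# interface vfc 10
nexus5000(config-if)# bind interface ethernet 1/10
nexus5000(config-if)# no shutdown
nexus5000(config-if)# exit
nexus5000(config)# vsan database
nexus5000(config-vsan-db)# vsan 100 interface vfc 10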

Encapsulated Fibre Channel frame in its entirety, including Fibre Channel cyclic redundancy check (CRC), is included in the payload. Ethernet Frame Check Sequence (FCS) represents the last 4 bytes of the Ethernet frame. This is a standard FCS field in any Ethernet frame. Figure 11-13 depicts the overall length of the FCoE frame.

Figure 11-13 FCoE Frame Length

As you can see, the total size of an Ethernet frame carrying a Fibre Channel payload can be up to 2180 bytes. Because of this, Jumbo Frame support must be enabled to allow successful transmission of all FCoE frame sizes. Cisco Nexus 5000 Series switches define by default the Maximum Transmission Unit (MTU) of 2240 bytes for FCoE traffic in class-fcoe. Non-FCoE traffic is set up by default for the Ethernet standard 1500 byte MTU (excluding Ethernet headers). These MTU settings are observed for FCoE traffic arriving directly on the physical switch interfaces or through the host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches.

On the Cisco Nexus 5010 and 5020 switches, class-fcoe default system class is automatically created at the system startup and cannot be deleted. Before enabling FCoE on the Cisco Nexus 5500 Series switches, you must manually create class-fcoe default system class. After FCoE features are enabled, FCoE traffic is assigned to the class based on the FCoE EtherType value (0x8906). This class provides no-drop service to facilitate guaranteed delivery of FCoE frames end-to-end. Example 11-1 depicts the MTU assigned to FCoE traffic under class class-fcoe.

Example 11-1 Displaying FCoE MTU

Click here to view code image

Nexus5000# show policy-map

Type qos policy-maps
====================
policy-map type qos default-in-policy
  class type qos class-fcoe
    set qos-group 1

Type network-qos policy-maps
===============================
policy-map type network-qos default-nq-policy
  class type network-qos class-fcoe
    pause no-drop
    mtu 2240
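On the Cisco Nexus 5500 Series, a common way to satisfy the class-fcoe requirement is to apply the predefined fcoe-default policies under system qos after enabling the FCoE feature. A sketch follows; the policy names shown are the standard predefined ones, but the required set of service-policy lines can vary by release:

nexus5500(config)# feature fcoe
nexus5500(config)# system qos
nexus5500(config-sys-qos)# service-policy type qos input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type queuing input fcoe-default-in-policy
nexus5500(config-sys-qos)# service-policy type queuing output fcoe-default-out-policy
nexus5500(config-sys-qos)# service-policy type network-qos fcoe-default-nq-policy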

FCoE Initialization Protocol

FCoE Initialization Protocol (FIP) is the FCoE control plane protocol responsible for establishing and maintaining Fibre Channel virtual links between pairs of adjacent FCoE devices (FCoE ENodes or FCoE Fibre Channel Forwarders), potentially across multiple Ethernet segments. Establishment of the virtual link carries three main steps: discovering the FCoE VLAN, discovering the MAC address of an upstream FCoE Fibre Channel Forwarder, and ultimately performing fabric login/discovery operation (FLOGI or FDISC). After the virtual link has been established, an exchange of Fibre Channel payload encapsulated with the Ethernet headers can begin. FIP continues operating in the background performing virtual link maintenance functions—specifically, continuously verifying reachability between the two virtual Fibre Channel (vFC) interfaces on the Ethernet network and offering primitives to delete the virtual link in response to administrative actions to that effect. FIP also differentiates from the FCoE protocol messages by using EtherType value 0x8914 in the encapsulating Ethernet header fields. Figure 11-14 depicts the phases of FIP operation.

Figure 11-14 FCoE Initialization Protocol Operation

FIP VLAN discovery relies on the use of native VLAN leveraged by the initiator (for example, server) or the target (for example, storage array). The FCoE ENode discovers the VLAN to which it needs to send the FCoE traffic by sending an FIP VLAN discovery request to the All-FCF-MAC multicast MAC address. All FCoE Fibre Channel Forwarders listen to the All-FCF-MAC MAC address on the native VLAN.

After FCoE VLAN has been discovered, FIP performs the function of discovering the upstream FCoE Fibre Channel Forwarder, which can accept and process fabric login requests. FCoE Fibre Channel Forwarder discovery messages are sent to the All-ENode-MAC multicast MAC address over the FCoE VLAN discovered earlier by the FIP protocol. Discovery messages are also used by the FCoE Fibre Channel Forwarder to inform any potential FCoE ENode attached through the FCoE VLAN that its VF_Ports are available for virtual link establishment with the ENode's VN_Ports. These messages are sent out periodically, which could delay FCoE Fibre Channel Forwarder discovery for the new FCoE ENodes joining the fabric during the send interval. The FC-BB-5 standard enables FCoE ENodes to send out advertisements to the All-FCF-MAC multicast MAC address to solicit unicast discovery advertisement back from FCoE Fibre Channel Forwarder to the requesting ENode.

Note
FIP VLAN discovery is not a mandatory component of the FC-BB-5 standard. The FCoE ENode can also be manually configured for the FCoE VLAN it should use, in which case FIP VLAN discovery functionality is not invoked. VLAN ID 1002 is commonly assumed in such case. Cisco Nexus 5000 Series switches support FIP VLAN Discovery functionality and will respond to the querying FCoE ENode.

Following the discovery of FCoE VLAN and the FCoE Fibre Channel Forwarder, FIP performs fabric login or discovery by sending FLOGI or FDISC unicast frames from VN_Port to VF_Port. This is the phase when the FCoE ENode MAC address used for FCoE traffic is assigned and virtual link establishment is complete.

Converged Network Adapter

Converged Network Adapter (CNA) is in essence an FCoE ENode, which contains the functions of a traditional Host Bus Adapter (HBA) and a network interface card (NIC) all in one. CNA is usually a PCI express (PCIe) type adapter. The converged nature of CNAs requires them to operate at 10 Gb speeds or higher. Converged Network Adapters are also available in the mezzanine form factor for the use in server blade systems. From the operating system perspective, CNAs are represented as two separate devices; they still require two sets of drivers, one for the Ethernet and the other for Fibre Channel, just like in the case of physically segregated adapters. Figure 11-15 depicts the fundamental principle behind leveraging a separate set of operating system drivers when converging NIC and HBA with Converged Network Adapter.
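Circling back to the FIP login sequence described earlier, the resulting sessions can be inspected from the switch. The commands below are standard NX-OS show commands (outputs omitted here), with the vfc interface number being hypothetical:

nexus5000# show flogi database
nexus5000# show fcoe database
nexus5000# show interface vfc 10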

Figure 11-15 Separate Ethernet and Fibre Channel Drivers for NICs, HBAs, and CNAs

With CNAs, both network and storage traffic is sent over a single unified wire connection from the server to the upstream first hop FCoE Access Layer switch, such as Cisco Nexus 5000 Series switch.

FCoE Speed

Rapid progress in the speed increase of Ethernet networks versus a slower pace of speed increase in the traditional Fibre Channel networks makes FCoE a compelling technology for high throughput storage-area networks. During the time of writing this book, server CNAs and FCoE storage arrays offer at least 10 Gb FCoE connectivity speeds. At the same time, rarely do Fibre Channel connections from the endpoints to the first hop FCoE switches exceed the speed of 8 Gb. The difference might not seem significant; however, due to different encoding schemes, 10 Gb FCoE achieves 50% more throughput than 8 Gb Fibre Channel, despite appearing to be only 2 Gb faster. Table 11-2 depicts the throughputs and encoding for the 8 Gb Fibre Channel and 10 Gb FCoE traffic.

Table 11-2 8 Gb Fibre Channel and 10 Gb FCoE

When comparing with higher speeds, we discover even greater differences. Although 16 Gb Fibre Channel is only 33% faster than 10 Gb FCoE, 40 Gb FCoE is four times faster than 10 Gb FCoE! Table 11-3 depicts the throughputs and encoding for the 16 Gb Fibre Channel and 40 Gb FCoE traffic.

Table 11-3 16 Gb Fibre Channel and 40 Gb FCoE
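The encoding arithmetic behind the comparisons in Tables 11-2 and 11-3 can be sketched as follows (standard signaling rates for each technology; figures rounded):

8 Gb FC:     8.5 Gbaud x 8b/10b  = 8.5 x 0.80   ≈ 6.8 Gbps usable
10 Gb FCoE:  10.3125 Gbaud x 64b/66b = 10.3125 x 0.97 ≈ 10 Gbps usable  (~50% more than 6.8)
16 Gb FC:    14.025 Gbaud x 64b/66b  ≈ 13.6 Gbps usable  (~33% more than 10 Gb FCoE)
40 Gb FCoE:  4 x 10.3125 Gbaud x 64b/66b ≈ 40 Gbps usable  (4x the 10 Gb FCoE)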

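As a back-of-the-envelope check of the 50% claim, using nominal line rates and encoding overheads (Table 11-2 carries the exact figures):

8 Gb Fibre Channel: 8.5 GBd line rate × 8/10 (8b/10b encoding) = 6.8 Gbps, or roughly 850 MB/s of payload
10 Gb FCoE: 10.3125 GBd line rate × 64/66 (64b/66b encoding) = 10.0 Gbps, or roughly 1250 MB/s of payload
1250 / 850 ≈ 1.5, or approximately 50% more usable throughput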
Speed is definitely not the only factor in storage-area networks; however, given the similar characteristics between Fibre Channel and FCoE, you can clearly see the very distinct throughput advantages in favor of the FCoE.

FCoE Cabling Options for the Cisco Nexus 5000 Series Switches
This section reviews the options for physically connecting Converged Network Adapters (CNA) to Cisco Nexus 5000 Series switches, and connecting the switches to an upstream LAN and Fibre Channel SAN environments.

Optical Cabling
An optical fiber is a flexible filament of very clear glass capable of carrying information in the form of light. Optical fibers are hair-thin structures created by forming pre-forms, which are glass rods drawn into fine threads of glass protected by a plastic coating. Fiber manufacturers use various vapor deposition processes to make the pre-forms. The fibers drawn from these pre-forms are then typically packaged into cable configurations, which are then placed into an operating environment for decades of reliable performance. Figure 11-16 depicts the fiber optic core and coating.

Figure 11-16 Fiber Optic Core and Coating

Multimode fiber type has a much larger core diameter compared to single-mode fiber, allowing for a larger number of modes. Multimode fiber refers to the fact that numerous modes or light rays are simultaneously carried through. Such modes result from the fact that light will propagate in the fiber core only at discrete angles within the cone of acceptance. Current multimode cabling standards include 50 microns and 62.5 microns core diameter. Step-Index and Graded Index offer two different fiber designs, with Graded Index being more popular because it offers hundreds of times more bandwidth than Step-Index fiber does. Figure 11-17 depicts Step-Index multimode fiber design.

Figure 11-17 Step-Index Multimode Fiber Design

Figure 11-18 depicts Graded Index multimode fiber design.

Figure 11-18 Graded Index Multimode Fiber Design

Single-mode (or monomode) fiber enjoys lower fiber attenuation than multimode fiber and retains better fidelity of each light pulse because it exhibits no dispersion caused by multiple modes. Consequently, information can be transmitted over longer distances. Current single-mode cabling standards include 9 microns core diameter. Figure 11-19 depicts a single-mode fiber design.

Figure 11-19 Single-Mode Fiber

There are many fiber optic connector types. LC is the most popular one for both multimode and single-mode fiber optic connections.

Transceiver Modules
Cisco Transceiver Modules support Ethernet, Sonet/SDH, and Fibre Channel applications across all Cisco switching and routing platforms.

SFP Transceiver Modules
A Small Form-Factor Pluggable (SFP) is a compact, hot swappable, input/output device that is used to link the ports on different network devices together. SFP is responsible for converting between the electrical and optical signals, while allowing both receive and transmit operations to take place. SFP is also sometimes referred to as mini Gigabit Interface Converter (GBIC) because both provide similar functionality. Due to its small form factor, SFP type transceivers can be leveraged in a variety of high-speed Ethernet and Fibre Channel deployments. Figure 11-20 depicts SFP module LC connector.

Figure 11-20 Fiber Optic SFP Module LC Connector

Let's now review various transceiver module options available at the time of writing this book for deploying 1 Gigabit Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet connectivity. Following are the different types of SFP transceivers existing at the time of writing this book:

1000BASE-T SFP: 1 Gigabit Ethernet copper transceiver for copper networks operates on standard Category 5 unshielded twisted-pair copper cabling for links of up to 100 meters (~330 feet) in length. These modules support 10 Megabit, 100 Megabit, and 1000 Megabit (1 Gigabit) auto-negotiation and auto-MDI/MDIX. MDI is Medium Dependent Interface; MDI-X is Medium Dependent Interface Crossover.

1000BASE-SX SFP: 1 Gigabit Ethernet fiber transceiver is used only for multimode fiber networks. It is compatible with the IEEE 802.3z 1000BASE-SX standard, and it operates over either 50 microns multimode fiber links for a distance of up to 550 meters (~1800 feet) or over 62.5 microns Fiber Distributed Data Interface (FDDI)-grade multimode fiber for a distance of up to 220 meters (~720 feet). It can also support distances of up to 1 kilometer (~0.62 miles) over laser-optimized 50 microns multimode fiber cable.

1000BASE-LX/LH SFP: 1 Gigabit Ethernet long-range fiber transceiver operates over standard single-mode fiber-optic link spanning distances of up to 10 kilometers (~6.2 miles) and up to 550 meters (~1800 feet) over any multimode fiber. It is compatible with the IEEE 802.3z 1000BASE-LX standard. 1000BASE-LX/LH transceivers transmitting in the 1300 nanometers wavelength over multimode FDDI-grade OM1/OM2 fiber require mode conditioning patch cables.

1000BASE-EX SFP: 1 Gigabit Ethernet extended-range fiber transceiver operates over standard single-mode fiber-optic link spanning long-reaching distances of up to 40 kilometers (~25 miles). A 5dB inline optical attenuator should be inserted between the fiber-optic cable and the receiving port on the SFP at each end of the link for back-to-back connectivity.

1000BASE-ZX SFP: 1 Gigabit Ethernet extra long-range fiber transceiver operates over standard single-mode fiber-optic link spanning extra long-reaching distances of up to approximately 70 kilometers (~43.5 miles), but the precise link span length depends on multiple factors, such as fiber quality, number of splices, and connectors. This SFP provides an optical link budget of 21 dB.

1000BASE-BX-D and 1000BASE-BX-U: 1 Gigabit Ethernet transceivers, compatible with the IEEE 802.3ah 1000BASE-BX10-D and 1000BASE-BX10-U standards, operate over a single strand of standard single-mode fiber for a transmission range of up to 10 kilometers (~6.2 miles). 1000BASE-BX10-D and 1000BASE-BX10-U transceivers represent two sides of the same fiber link. The communication over a single strand of fiber is achieved by separating the transmission wavelength of the two devices: 1000BASE-BX10-D transmits a 1490 nanometer wavelength and receives a 1310 nanometer signal, whereas 1000BASE-BX10-U transmits at a 1310 nanometer wavelength and receives a 1490 nanometer signal. Note the presence of a wavelength-division multiplexing (WDM) splitter integrated into the SFP to split the 1310 nanometer and 1490 nanometer light paths. Figure 11-21 depicts the principle of wavelength multiplexing to achieve transmission over a single strand fiber.

Figure 11-21 Bidirectional Transmission over a Single Strand, Single-Mode Fiber

SFP+ Transceiver Modules
The SFP+ 10 Gigabit Ethernet transceiver module is a bidirectional device with a transmitter and receiver in the same physical package. The following options exist for 10 Gigabit Ethernet SFP+ transceivers during the time of writing this book:

SFP-10G-SR: 10 Gigabit Ethernet short-range transceiver supports a link length of 26 meters (~85 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. Connectivity up to 300 meters (~1000 feet) is possible when leveraging 2,000 MHz/km OM3 multimode fiber with a core size of 50 microns. Connectivity up to 400 meters (~0.25 miles) is possible when leveraging 4,700 MHz/km OM4 multimode fiber with a core size of 50 microns. It operates on 850 nanometer wavelength. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-SR SFP, leveraging an LC type connector.

SFP-10G-LRM: 10 Gigabit Ethernet short-range transceiver supports link lengths of 220 meters (~720 feet) over standard FDDI-grade multimode fiber with a core size of 62.5 microns. To ensure that specifications are met over FDDI-grade OM1 and OM2 fibers, the transmitter should be coupled through a mode conditioning patch cord. No mode conditioning patch cord is required for applications over OM3 or OM4. The SFP-10G-LRM module also supports link lengths of 300 meters (~1000 feet) over standard single-mode fiber (G.652). It operates on 1310 nanometer wavelength. At the time of writing this book, neither Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LRM optics.

SFP-10G-LR: 10 Gigabit Ethernet long-range transceiver supports a link length of 10 kilometers (~6.2 miles) over standard G.652 single-mode fiber. It operates on 1310 nanometer wavelength. Both Cisco Nexus 5000 Series switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-LR SFP, leveraging an LC type connector.

SFP-10G-ER: 10 Gigabit Ethernet extended range transceiver supports a link length of up to 40 kilometers (~25 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength. A 5dB 1550 nanometer fixed loss attenuator is required for distances greater than 20 kilometers (~12.5 miles). At the time of writing this book, Cisco Nexus 5500 switches and Cisco Nexus 2000 Series FCoE Fabric Extenders support SFP-10G-ER SFP, leveraging an LC type connector; Cisco Nexus 5000 switches do not support SFP-10G-ER SFP.

SFP-10G-ZR: 10 Gigabit Ethernet very long-range transceiver supports link lengths of up to about 80 kilometers (~50 miles) over standard G.652 single-mode fiber. It operates on 1550 nanometer wavelength. At the time of writing this book, the SFP-10G-ZR SFP transceiver is neither supported on Cisco Nexus 5000 Series switches nor Cisco Nexus 2000 Series FCoE Fabric Extenders.

FET-10G: Fabric Extender Transceiver (FET) 10 Gigabit Ethernet cost-effective short-range transceiver supports link lengths of 25 meters (~80 feet) over OM2 multimode fiber and link lengths of up to 100 meters (~330 feet) over laser-optimized OM3 or OM4 multimode fiber. It operates over a core size of 50 microns and at 850 nanometers wavelength. This interface is not specified as part of the 10 Gigabit Ethernet standard and is instead built according to Cisco specifications. FET-10G is supported only on fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.

SFP+ Copper: Twinax direct-attach 10 Gigabit Ethernet pre-terminated cables are suitable for very short distances and offer a cost-effective way to connect within racks and across adjacent racks. Connectivity is typically between the server and a Top-of-Rack switch or a server and a Cisco Nexus 2000 Series FCoE Fabric Extender. These cables can also be used for fabric uplinks between the Cisco Nexus 2000 Series FCoE Fabric Extender and a Cisco parent switch, such as Cisco Nexus 5000 Series switches.

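With this many similar-looking modules, it is often useful to confirm which optics a given port actually carries. On NX-OS platforms this can typically be done with the show interface transceiver command; the output below is abbreviated and purely illustrative:

nexus5000# show interface ethernet 1/1 transceiver
Ethernet1/1
    transceiver is present
    type is 10Gbase-SR
    part number is <vendor part number>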
QSFP Transceiver Modules
The Cisco 40GBASE QSFP (Quad Small Form-Factor Pluggable) hot-swappable input/output device portfolio (see Figure 11-22) offers a wide variety of high-density and low-power 40 Gigabit Ethernet connectivity options for the data center.

Figure 11-22 Cisco 40GBASE QSFP Transceiver

It offers flexibility of interface choice for different distance requirements and fiber types. The following options exist for 40 Gigabit Ethernet QSFP transceivers during the time of writing this book:

QSFP 40Gbps Bidirectional (BiDi): 40 Gigabit Ethernet short-range cost-effective transceiver is a pluggable optical transceiver with a duplex LC connector interface over 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. The BiDi transceiver complies with the QSFP MSA specification, enabling its use on all QSFP 40 Gigabit Ethernet platforms, and its high-speed electrical interface is compliant with the IEEE 802.3ba standard. Each QSFP 40 Gigabit Ethernet BiDi transceiver consists of two 20 Gigabit Ethernet transmit and receive channels, enabling an aggregated 40 Gigabit Ethernet link over a two-strand multimode fiber connection. This enables reuse of existing 10 Gigabit Ethernet duplex multimode cabling infrastructure for migration to 40 Gigabit Ethernet connectivity, making BiDi transceivers a compelling solution. Figure 11-23 depicts the use of wavelength multiplexing technology allowing 40 Gigabit Ethernet connectivity across just a pair of multimode fiber strands.

Figure 11-23 Cisco QSFP BiDi Wavelength Multiplexing

QSFP-40G-SR4: QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It supports link lengths of 100 meters (~330 feet) and 150 meters (~500 feet) over laser-optimized OM3 and OM4 multimode fibers, respectively. It primarily enables high-bandwidth 40 Gigabit Ethernet optical links over 12-fiber parallel fiber terminated with MPO/MTP multifiber connectors, while being interoperable with other IEEE-compliant 40GBASE interfaces. It can also be used in a 4×10 Gigabit Ethernet mode for interoperability with 10GBASE-SR interfaces up to 100 meters (~330 feet) and 150 meters (~500 feet) over OM3 and OM4 fibers, respectively. The 4×10 Gigabit Ethernet connectivity is achieved using an external 12-fiber parallel to 2-fiber duplex breakout cable, which connects the 40GBASE-SR4 module to four 10GBASE-SR optical interfaces.

QSFP-40G-CSR4: QSFP 40 Gigabit Ethernet short-range transceiver operates over 850 nanometer and 50 micron core size multimode fiber. It extends the reach of the IEEE 40GBASE-SR4 interface to 300 meters (~1000 feet) and 400 meters (~0.25 miles) over laser-optimized OM3 and OM4 multimode parallel fiber, respectively. Each 10 Gigabit lane of this module is compliant with IEEE 10GBASE-SR specifications. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber parallel cables with MPO/MTP connectors or in a 4×10 Gigabit Ethernet mode with parallel to duplex fiber breakout cables for connectivity to four 10GBASE-SR interfaces.

40GBASE-CR4: QSFP to QSFP direct-attach 40 Gigabit Ethernet short-range pre-terminated cables offer a highly cost-effective way to establish a 40 Gigabit Ethernet link between QSFP ports of Cisco switches within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. At the time of writing this book, Cisco offers both active and passive copper cables. Figure 11-24 depicts short range pre-terminated 40 Gigabit Ethernet QSFP to QSFP copper cable.

Figure 11-24 Cisco 40GBASE-CR4 QSFP Direct-Attach Copper Cables

Active optical cables (AOC) are much thinner and lighter than copper cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. Figure 11-25 depicts short range pre-terminated 40 Gigabit Ethernet QSFP to QSFP active optical cable.

Figure 11-25 Cisco 40GBASE-CR4 QSFP Active Optical Cables

QSFP-40G-LR4: QSFP 40 Gigabit Ethernet long-range transceiver module supports link lengths of up to 10 kilometers (~6.2 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths; multiplexing and de-multiplexing of the four wavelengths are managed within the device.
QSFP-40GE-LR4: QSFP 40 Gigabit Ethernet long-range transceiver supports the 40 Gigabit Ethernet rate only, whereas the QSFP-40G-LR4 supports the OTU3 data rate in addition to the 40 Gigabit Ethernet rate. Both QSFP-40G-LR4 and QSFP-40GE-LR4 operate over 1310 nanometer single-mode fiber.

WSP-Q40GLR4L: QSFP 40 Gigabit Ethernet long-range transceiver supports link lengths of up to 2 kilometers (~1.25 miles) over a standard pair of G.652 single-mode fiber with duplex LC connectors. The 40 Gigabit Ethernet signal is carried over four wavelengths; multiplexing and de-multiplexing of the four wavelengths are managed within the device. It is interoperable with QSFP-40G-LR4 for distances up to 2 kilometers (~1.25 miles).

FET-40G: QSFP 40 Gigabit Ethernet cost-effective short-range transceiver connects links between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco parent switch, such as Cisco Nexus 5000 Series switches. The interconnect will work over parallel 850 nanometer, 50 micron core size laser-optimized OM3 and OM4 multimode fiber across distances of up to 100 meters (~330 feet) and 150 meters (~500 feet), respectively. This module can be used for native 40 Gigabit Ethernet optical links over 12-fiber ribbon cables with MPO/MTP connectors or in 4×10Gb mode with parallel to duplex fiber breakout cables for connectivity to four FET-10G interfaces.

QSFP to four SFP+ direct-attach breakout cables: These 40 Gigabit Ethernet pre-terminated breakout cables are suitable for very short distances and offer a highly cost-effective way to connect within the rack and across adjacent racks. Such cables can either be active copper, passive copper, or active optical cables. These breakout cables connect to a 40 Gigabit Ethernet QSFP port of a Cisco switch on one end and to four 10 Gigabit Ethernet SFP+ ports of a Cisco switch on the other end. Figure 11-26 depicts the 40 Gigabit Ethernet copper breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-26 Cisco 40Gb QSFP to Four SFP+ Copper Breakout Cable

Active optical breakout cables (AOC) are much thinner and lighter than copper breakout cables, which makes cabling easier. AOCs enable efficient system airflow and have no EMI issues, which is critical in high-density racks. Figure 11-27 depicts the 40 Gigabit Ethernet active optical breakout cable for connecting four 10 Gigabit Ethernet capable nodes (servers) into a single 40 Gigabit Ethernet QSFP switch interface.

Figure 11-27 Cisco 40Gb QSFP to Four SFP+ Active Optical Breakout Cable

Unsupported Transceiver Modules
Cisco highly discourages the use of non-officially certified and approved transceivers inside Cisco network devices. When the switch detects a non-Cisco certified transceiver, a warning message will be displayed on the terminal.

Caution: When Cisco determines that a fault or defect can be traced to the use of third-party transceivers installed by a customer or reseller, then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program, such as SMARTnet service. In the course of providing support for a Cisco networking product, Cisco may require that the end user install Cisco transceivers if Cisco determines that removing third-party parts will assist Cisco in diagnosing the cause of a support issue. When a product fault or defect occurs in the network, and Cisco concludes that the fault or defect is not attributable to the use of third-party transceivers (or other non-Cisco components) installed inside Cisco network devices, Cisco will continue to provide support for the affected product under warranty or covered by a Cisco support program. If a product fault or defect is detected and Cisco believes the fault or defect can be traced to the use of third-party transceivers (or other non-Cisco components), then, at Cisco's discretion, Cisco may withhold support under warranty or a Cisco support program.

Delivering FCoE Using Cisco Fabric Extender Architecture
Cisco Fabric Extender (FEX) technology delivers an extensible and scalable fabric solution, which complements Cisco Unified Fabric strategy. It is based on the IEEE 802.1BR (Bridge Port Extension) standard and enables the following main benefits:

Common, scalable, and adaptive server and server-adapter agnostic architecture across data center racks and points of delivery (PoD)
Simplified operations with investment protection by delivering new features and functionality on the parent switches with automatic inheritance for physical and virtual server-attached infrastructure
Single point of management and policy enforcement across thousands of network access ports
Scalable 1 Gigabit Ethernet and 10 Gigabit Ethernet server access, with no reliance on Spanning Tree Protocol for logical loop-free topology
Low and predictable latency at scale for any-to-any traffic patterns
Extended and expanded switching fabric access layer all the way to the server hypervisor for virtualized servers, or to the server converged network interface card in case of nonvirtualized servers
Reduced power and cooling needs
Overall reduced operating expenses and capital expenditures

Cisco Fabric Extender technology is delivered in the four following architectural choices depicted in Figure 11-28:

1. Cisco Nexus 2000 Series Fabric Extenders (FEX): Extends the fabric to the top of rack
2. Cisco Adapter Fabric Extender (Adapter-FEX): Extends the fabric into the server
3. Cisco Virtual Machine Fabric Extender (VM-FEX): Extends the fabric into the hypervisor for virtual machines
4. Cisco UCS 2000 Series Fabric Extender (I/O module): Extends the fabric into the Cisco UCS Blade Chassis

Figure 11-28 Cisco Fabric Extender Architectural Choices

Cisco Fabric Extender architectural choices are not mutually exclusive, and it is possible to have cascading Fabric Extender topologies deployed. For example, you can leverage Adapter-FEX on the server to connect to Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch. More on that later in this chapter.
FCoE and Cisco Nexus 2000 Series Fabric Extenders
Extending FCoE to the top of rack by leveraging Cisco Nexus 2000 Series Fabric Extenders creates a scale-out Unified Fabric solution. At the time of writing this book, the following Cisco Nexus 2000 Series Fabric Extender models support FCoE: 2232PP, 2232TM, 2232TM-E, 2248PQ, 2348UPQ, and 2348TQ. Collectively, they are referred to as Cisco Nexus 2000 Series FCoE Fabric Extenders, or FCoE Fabric Extenders for simplicity.

Cisco Nexus 2000 Series FCoE Fabric Extenders operate as remote line cards to the Cisco Nexus 5000 Series parent switch. Physical host interfaces on a Cisco Nexus 2000 Series FCoE Fabric Extender are represented as logical interfaces on the Cisco Nexus 5000 Series parent switch. All operation, administration, and management functions are performed on the parent switch. FCoE Fabric Extenders have no local switching capabilities, so any traffic, even between two ports on the same FCoE Fabric Extender, must be forwarded to the parent switch for forwarding.

Note: To reduce the load on the control plane of the parent device, Cisco NX-OS provides the capability to offload link-level protocol processing to the FCoE Fabric Extender CPU. The following protocols are supported: Link Layer Discovery Protocol (LLDP), Cisco Discovery Protocol (CDP), Data Center Bridging Exchange (DCBX), and Link Aggregation Control Protocol (LACP).

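As an illustration of the remote line card model, the following minimal NX-OS sketch shows how a Fabric Extender is typically defined and associated with a parent switch fabric interface; the FEX number and interface are illustrative, and Example 11-4 later in this chapter shows the port channel and vPC variant of the same association:

nexus5000(config)# feature fex
nexus5000(config)# fex 100
nexus5000(config-fex)# description FEX0100
nexus5000(config)# interface ethernet 1/1
nexus5000(config-if)# switchport mode fex-fabric
nexus5000(config-if)# fex associate 100

Once the association completes, the FEX host interfaces appear on the parent switch as logical interfaces, such as Ethernet 100/1/1.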
Cisco Nexus 2000 Series FCoE Fabric Extenders support an optimal cabling strategy that simplifies network operations:

Short intra-rack server cable runs: Twinax or fiber cables connect servers to top-of-rack Cisco Nexus 2000 Series FCoE Fabric Extenders.
Longer inter-rack horizontal runs of fiber: Cisco Nexus 2000 Series FCoE Fabric Extenders in each rack are connected to parent switches that are placed at the end or middle of the row.

This model allows simple cabling and easier rollout of racks and PODs. Figure 11-29 depicts the optimal cabling strategy made available by leveraging Cisco Fabric Extender architecture for Top-of-Rack deployment.

Figure 11-29 FCoE Fabric Extenders Cabling Strategy

Each rack could have two FCoE Fabric Extenders to provide in-rack server connectivity redundancy, which can influence power consumption and cooling, and as such should be carefully considered. If the rack server density is low enough, FCoE Fabric Extenders can also be deployed every few racks to accommodate the required switch interface capacity. In that case, server cabling will be extended beyond a single rack (for the racks where FCoE Fabric Extenders are absent).

Two attachment topologies exist when connecting Cisco Nexus 2000 Series FCoE Fabric Extenders to Cisco Nexus 5000 Series parent switch: straight-through and vPC. The following sections examine each one in the context of FCoE.

FCoE and Straight-Through Fabric Extender Attachment Topology
In the straight-through Fabric Extender attachment topology, each FCoE Fabric Extender is connected to a single Cisco Nexus parent switch. This parent switch exclusively manages the ports on the FCoE Fabric Extender and provides its features. Connection could be in a form of a single link or a port channel, up to a maximum number of uplinks available on the FCoE Fabric Extender. For example, Cisco Nexus 2232 FCoE Fabric Extender has eight 10 Gigabit Ethernet fabric uplinks, also known as network interfaces (NIF). Figure 11-30 depicts straight-through FCoE Fabric Extender attachment topology.
Figure 11-30 Straight-Through FCoE Fabric Extender Attachment Topology

As long as a single uplink between the FCoE Fabric Extender and the parent switch exists, FCoE Fabric Extender will keep being registered, and its front panel interfaces, also known as host interfaces (HIF), will keep being up on the parent switch. Example 11-2 shows FCoE Fabric Extender registration information as displayed on the Cisco Nexus 5000 Series parent switch.

Example 11-2 Displaying FCoE Fabric Extender Registration Info on the Parent Switch
Click here to view code image

nexus5000# show fex 100
FEX: 100 Description: FEX0100 state: Online
FEX version: 6.0(2)N1(1) [Switch version: 6.0(2)N1(1)]
Extender Serial: FOC1616R00Q
Extender Model: N2K-C2248PQ-10GE, Part No: 73-14775-02
Pinning-mode: static Max-links: 1
Fabric port for control traffic: Eth2/3
FCoE Admin: false
FCoE Oper: true
FCoE FEX AA Configured: false
Fabric interface state:
Po100 - Interface Up. State: Active
Eth2/1 - Interface Up. State: Active
Eth2/2 - Interface Up. State: Active
Eth2/3 - Interface Up. State: Active
Eth2/4 - Interface Up. State: Active

The number of uplinks also determines the oversubscription levels introduced by the FCoE Fabric Extender into server traffic. For example, if all host interfaces on the Cisco Nexus 2232 FCoE Fabric Extender are connected to 1 Gb servers, Table 11-4 shows the oversubscription levels that each server will experience as its traffic passes through the Fabric Extender.
Table 11-4 Number of Fabric Extender Uplinks and Oversubscription Levels

It is recommended that you provision as many fabric uplinks as possible to limit the potential adverse effects of the oversubscription. For FCoE traffic, higher oversubscription levels will translate into more PAUSE frames and overall slower storage connectivity.

Note: It is recommended to use a power of 2 to determine the amount of FCoE Fabric Extender fabric uplinks. This facilitates better port channel hashing and more even traffic distribution across the FCoE Fabric Extender fabric uplinks.

With straight-through FCoE Fabric Extender attachment topology, you can choose to leverage either static or dynamic pinning. With static pinning mode, a range of host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender toward the servers is statically pinned to a given fabric uplink port. In the case of Cisco Nexus 2232 FCoE Fabric Extenders, static-pinning mapping looks like what's shown in Table 11-5.

Table 11-5 Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender

It is also possible to display the static pinning mapping by issuing the show interface ethernet X/Y fex-intf command on the Cisco Nexus 5000 Series parent switch command-line interface, where X/Y indicates the interface toward the Cisco Nexus 2000 Series FCoE Fabric Extender, as shown in Example 11-3.

Example 11-3 Display Static Pinning Mapping on the Cisco Nexus 2232 FCoE Fabric Extender
Click here to view code image
nexus5000# show interface ethernet 1/1 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/1          Eth100/1/4    Eth100/1/3    Eth100/1/2    Eth100/1/1
...
nexus5000# show interface ethernet 1/8 fex-intf
Fabric          FEX
Interface       Interfaces
---------------------------------------------------
Eth1/8          Eth100/1/32   Eth100/1/31   Eth100/1/30   Eth100/1/29

The essence of the static pinning operation is that if the Cisco Nexus 2000 Series FCoE Fabric Extender's fabric uplink interface toward the Cisco Nexus 5000 Series parent switch fails, FCoE Fabric Extender will disable the server facing host interfaces that are pinned to that specific fabric uplink interface, and the servers connected to these host interfaces will see their Ethernet link go down. For example, if the second fabric uplink interface on the Cisco Nexus 2232 FCoE Fabric Extender fails, host interfaces 5 to 8 pinned to that fabric uplink interface will be brought down. If a server is singly connected, it will lose its network connectivity until FCoE Fabric Extender uplink is restored or until the server is physically replugged into an operational Ethernet port. If the servers are dual-homed to two different Cisco Nexus 2000 Series FCoE Fabric Extenders, or if the servers are dual-homed to different pinning groups on the same Cisco Nexus 2000 Series FCoE Fabric Extender, a failover event will be triggered on the server, in which case the server will failover to the redundant network connection. Figure 11-31 depicts server connectivity failover in Cisco Nexus 2232 FCoE Fabric Extender static pinning deployment.

Figure 11-31 FCoE Fabric Extender Static Pinning and Server NIC Failover

Static pinning configuration on the Cisco Nexus 2232 FCoE Fabric Extender results in a 4:1 oversubscription ratio, where each group of four 1 Gb FCoE Fabric Extender host interfaces shares the bandwidth of one 10 Gb FCoE Fabric Extender uplink. The possible downside of static pinning is that it relies on healthy server network connectivity failover operation, which should be regularly verified.

With dynamic pinning mode, a port channeled fabric uplink interface is leveraged to pass the traffic between the Cisco Nexus 2000 Series FCoE Fabric Extender and the Cisco Nexus 5000 Series parent switch. If an individual port channel fabric uplink member interface failure occurs, server traffic crossing the FCoE Fabric Extender is hashed onto the remaining member interfaces of the uplink port channel. This results in decreased uplink bandwidth, which translates to higher oversubscription levels for the server traffic. Figure 11-32 depicts dynamic pinning behavior on the Cisco Nexus 2232 FCoE Fabric Extender.

Figure 11-32 FCoE Fabric Extender Dynamic Pinning and Server NIC Failover

The most significant difference between static and dynamic pinning modes is that static pinning allows maintaining the same oversubscription levels for the traversing server traffic even in the case of FCoE Fabric Extender uplink interface failure, while in the case of dynamic pinning, the oversubscription levels will rise with any failed individual FCoE Fabric Extender uplink interface. With dynamic pinning, however, a failure of a single port channel member interface does not bring down any host facing interfaces on the FCoE Fabric Extender.

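For reference, the pinning mode follows the configured number of pinned uplinks. A minimal sketch of placing a Cisco Nexus 2232 into four static pinning groups might look as follows (the FEX number is illustrative; note that changing max-links repins host interfaces and is disruptive, so it should be planned for a maintenance window):

nexus5000(config)# fex 100
nexus5000(config-fex)# pinning max-links 4

The resulting Pinning-mode and Max-links values can be read back with the show fex command, as seen in Example 11-2.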
FCoE and vPC Fabric Extender Attachment Topology
In the vPC FCoE Fabric Extender attachment topology, each FCoE Fabric Extender is connected to both Cisco Nexus parent switches. This connection could be in a form of a single link or a port channel, up to a maximum number of uplinks available on the FCoE Fabric Extender. As long as a single uplink exists between the FCoE Fabric Extender and any one of its two parent switches, FCoE Fabric Extender will keep being registered, and its front panel interfaces will keep being up on the parent switches. Both parent switches manage the ports on that FCoE Fabric Extender and provide its features. As in the case of straight-through FCoE Fabric Extender attachment, the number of uplinks determines the oversubscription levels introduced by the FCoE Fabric Extender for server traffic. Figure 11-33 depicts FCoE Fabric Extender attachment topology leveraging vPC topology with either single or port-channeled fabric uplink interfaces toward each of the Cisco Nexus 5000 Series parent switches.

Figure 11-33 vPC FCoE Fabric Extender Attachment Topology

Because in the case of vPC FCoE Fabric Extender attachment topology both Cisco Nexus 5000 Series parent switches control the FCoE Fabric Extender, they must be placed in a vPC domain. Example 11-4 shows an example of Cisco Nexus 5000 Series parent switch configuration of the FCoE Fabric Extender uplink port channel for vPC attached FCoE Fabric Extender.

Example 11-4 Fabric Extender Uplink Port Channel for vPC Attached FCoE Fabric Extender
Click here to view code image

nexus5000-1(config)# interface e1/1-4
nexus5000-1(config-if-range)# channel-group 100
nexus5000-1(config)# interface po100
nexus5000-1(config-if)# vpc 100
nexus5000-1(config-if)# switchport mode fex-fabric
nexus5000-1(config-if)# fex associate 100

Relevant configuration parameters must be identical on both parent switches. Cisco Nexus 5000 Series parent switches automatically check for compatibility of some of these parameters on the vPC interfaces.
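One way to review what the peers have verified is the vPC consistency parameter output; a hypothetical check against the port channel from Example 11-4 could look like this:

nexus5000-1# show vpc consistency-parameters global
nexus5000-1# show vpc consistency-parameters interface port-channel 100

Any parameter flagged as inconsistent between the two parent switches should be reconciled before the vPC attached FCoE Fabric Extender is put into production.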
The per-interface parameters must be consistent per interface, and the global parameters must be consistent globally:

Port channel mode (on, off, or active)
Link speed per port channel
Duplex mode per port channel
Trunk mode per port channel (native VLAN)
Spanning Tree Protocol mode
Spanning Tree Protocol region configuration for Multiple Spanning Tree (MST) Protocol
Enable or disable state per VLAN
Spanning Tree Protocol global settings
Spanning Tree Protocol interface settings
Quality of service (QoS) configuration and parameters (PFC, DWRR, MTU)

Note: The two parent switches are configured independently; therefore, configuration changes that are made to host interfaces on a vPC attached FCoE Fabric Extender must be manually synchronized between the two parent switches. To simplify the configuration tasks, Cisco Nexus 5000 Series parent switches support a configuration-synchronization feature, which allows the configuration for host interfaces on the Cisco Nexus 2000 Series FCoE Fabric Extender to be synchronized between the two parent switches.

vPC FCoE Fabric Extender attachment topology does not support static pinning, so dynamic pinning is automatically leveraged instead.

High availability is an essential requirement in most data center designs, whether it is accomplished at the port level, the supervisor level, or even at the physical network level. To achieve connectivity redundancy, FCoE servers can use a port channeled connection toward a pair of upstream Cisco Nexus 2000 Series FCoE Fabric Extenders attached to two different Cisco Nexus parent switches, such as Cisco Nexus 5500 switches, put in a vPC domain. This setup creates a dual-layer vPC between servers, FCoE Fabric Extenders, and Cisco Nexus parent switches. It is referred to as Enhanced vPC. In this case, even though network traffic is dual homed between the FCoE Fabric Extender and a parent switch pair, the FCoE traffic must be single homed to maintain SAN Side A / Side B isolation, while network traffic is allowed to use both parent switches. Figure 11-34 depicts Enhanced vPC topology where each FCoE Fabric Extender is pinned to only a single Cisco Nexus 5500 parent switch for the FCoE traffic.

Note: Cisco Nexus 5000 (5010 and 5020) switches do not support Enhanced vPC.

Figure 11-34 FCoE Traffic Pinning in Enhanced vPC Topology

In the Enhanced vPC topology, restricting FCoE traffic to a single Cisco Nexus parent switch is called Fabric Extender pinning. Example 11-5 shows the command-line interface configuration required to pin the FCoE Fabric Extender to only a single Cisco Nexus 5500 parent switch for the FCoE traffic in Enhanced vPC topology.

Example 11-5 FCoE Traffic Pinning in Enhanced vPC Topology Configuration
Click here to view code image

nexus5500-1(config)# fex 101
nexus5500-1(config-fex)# fcoe
nexus5500-2(config)# fex 102
nexus5500-2(config-fex)# fcoe

In this configuration, although FEX 101 and FEX 102 are connected and registered to both N5500-1 and N5500-2, the FCoE traffic from FEX 101 is sent only to N5500-1, and the FCoE traffic from FEX 102 is sent only to N5500-2. As a result, SAN A and SAN B separation is achieved. This is also true for the return FCoE traffic from a Cisco Nexus 5500 parent switch toward Cisco Nexus 2000 Series FCoE Fabric Extenders where FCoE servers are connected.

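The effect of the pinning can be verified from the parent switch; the FCoE fields of the show fex output (seen earlier in Example 11-2) report the administrative and operational FCoE state per Fabric Extender, for example:

nexus5500-1# show fex 101 detail

In general, on the switch that owns the FCoE traffic for a given Fabric Extender, the FCoE Admin field should report true once the fcoe command is applied, whereas the peer switch carries only network traffic for that Fabric Extender.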
. Adapter-FEX Cisco Adapter-FEX technology virtualizes PCIe bus. This results in a reduced number of network and storage switch ports. so network and FCoE traffic each get half of the guaranteed bandwidth in case of congestion. require servers to be equipped with a large number of network connections. Enough FCoE Fabric Extender uplink bandwidth should be provisioned to prevent the undesirable oversubscription of links that carry both FCoE and network traffic. To prevent such an imbalance of traffic distribution across FCoE Fabric Extender uplinks. leveraging Adapter-FEX technology. however. each type of traffic can take all available bandwidth. FCoE and Server Fabric Extenders Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. When there is no congestion. Figure 11-35 Adapter-FEX Server Topologies The Cisco Adapter-FEX technology addresses these main challenges: Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. as well as the server physical adapters. We are now going to further extend those advantages into the servers. while the network traffic is hashed across all FCoE Fabric Extender uplinks. Straight-through FCoE Fabric Extender topology also allows easier control of how the bandwidth of FCoE Fabric Extender uplink is shared between the network and FCoE traffic by configuring an ingress and egress queuing policy. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment. as discussed earlier in this chapter. presenting a server operating system with a set of virtualized adapters for network and storage connectivity. The default QoS template allocates half of the bandwidth to each one. including the ones going to the second Cisco Nexus parent switch.parent switch for SAN side separation. you can leverage straight-through topology for FCoE deployments. Virtualization technologies and the increased number of CPU cores.

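A minimal sketch of such a queuing policy on a Cisco Nexus 5500 switch is shown below; it simply makes the default 50/50 split explicit, and the policy name is illustrative. Adjusting the percentages shifts the guaranteed bandwidth between FCoE and regular network traffic under congestion:

nexus5500(config)# policy-map type queuing unified-uplink
nexus5500(config-pmap-que)# class type queuing class-fcoe
nexus5500(config-pmap-c-que)# bandwidth percent 50
nexus5500(config-pmap-c-que)# class type queuing class-default
nexus5500(config-pmap-c-que)# bandwidth percent 50
nexus5500(config)# system qos
nexus5500(config-sys-qos)# service-policy type queuing input unified-uplink
nexus5500(config-sys-qos)# service-policy type queuing output unified-uplink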
FCoE and Server Fabric Extenders
Previous sections describe how Cisco Fabric Extender architecture leveraging Cisco Nexus 2000 Series FCoE Fabric Extenders provides substantial advantages for FCoE deployment. We are now going to further extend those advantages into the servers, leveraging Adapter-FEX technology.

Adapter-FEX
Cisco Adapter-FEX technology virtualizes the PCIe bus, presenting a server operating system with a set of virtualized adapters for network and storage connectivity. This results in a reduced number of network and storage switch ports, as well as the server physical adapters. Figure 11-35 depicts server connectivity topologies with and without Cisco Nexus 2000 Series FCoE Fabric Extenders supporting Adapter-FEX solution deployment.

Figure 11-35 Adapter-FEX Server Topologies

The Cisco Adapter-FEX technology addresses these main challenges:

Organizations are deploying virtualized workloads to meet the strong need for saving costs and reducing physical equipment. Virtualization technologies and the increased number of CPU cores, however, require servers to be equipped with a large number of network connections. This increase has a tremendous impact on CapEx and OpEx because of the large number of adapters, cables, and switch ports, which directly affects power, cooling, and cable management costs. Organizations also often encounter a dramatic increase in the number of management points, disparate provisioning and operational models, and inconsistency between the physical and virtual data center access layers.

With a consolidated infrastructure, it is challenging to provide guaranteed bandwidth, latency, and isolation of traffic across multiple cores and virtual machines.

In virtualized environments, network administrators experience lack of visibility into the traffic that is exchanged among virtual machines residing on the same host. Network administrators face challenges in establishing policies, enforcing policies, and maintaining policy consistently across mobility events.

Network administrators struggle to link the virtualized network infrastructure and virtualized server platforms to the physical network infrastructure without degrading performance and with a consistent feature set and operational model.

No licensing requirement exists on the Cisco Nexus 5500 switch to run Adapter-FEX; however, it does require feature installation and activation, as shown in Example 11-6.

Example 11-6 Installing and Activating the Cisco Adapter-FEX Feature on Cisco Nexus 5500 Switches
Click here to view code image

nexus5500(config)# install feature-set virtualization
nexus5500(config)# feature-set virtualization

Note: Cisco Nexus 5000 (5010 and 5020) switches do not support Adapter-FEX.

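After activation, a quick way to confirm that the feature set took effect is to list the installed feature sets; the output shape below is abbreviated and illustrative:

nexus5500# show feature-set
Feature Set Name      ID      State
--------------------  ------  --------
virtualization        2       enabled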
Virtual Ethernet Interfaces
By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual network interface card (vNIC) server interface mapped to the virtual Ethernet (vEth) interface on the upstream Cisco Nexus 5500 switch. Features available on the Cisco Nexus 5500 switches, such as ACLs, QoS, private VLANs, and so on, can be equally applied on the vEth interfaces representing vNICs. The server can either be directly connected to the Cisco Nexus 5500 switch, or it can be connected through the Cisco Nexus 2000 Series FCoE Fabric Extender, which in turn is connected and registered with the Cisco Nexus 5500 parent switch.

Before vEth interfaces can be created, the physical Cisco Nexus 5500 switch interface the server is connected to must be configured with VN-Tag mode. Data Center Bridging Exchange (DCBX) protocol must also run between the Cisco Nexus 5500 switch interface and the connected server. Before VN-Tag mode is activated, the network adapter on the server operates in a Classical Ethernet mode, in which case it has full network connectivity via its physical (nonvirtualized) interface to the upstream Cisco Nexus 5500 switch or Cisco Nexus 2000 Series FCoE Fabric Extender. Example 11-7 depicts the command-line interface configuration required to enable VN-Tag mode on the switch interface or the FCoE Fabric Extender host interface.

Example 11-7 Switch Interface VN-Tag Mode Configuration
Click here to view code image

nexus5500-1(config)# int eth101/1/1
nexus5500-1(config-if)# switchport mode vntag
nexus5500-1(config)# int eth1/1
nexus5500-1(config-if)# switchport mode vntag

After VN-Tag mode has been activated, the server starts to include Network Interface Virtualization (NIV) capability type-length-value (TLV) fields in its Data Center Bridging Exchange (DCBX) advertisements. Be sure to reboot the server to switch Converged Network Adapter operation from Classical Ethernet to Network Interface Virtualization. For the Cisco UCS C-Series servers, to validate that the Converged Network Adapter (that is, the UCS Virtual Interface Card) is operating in NIV mode, supporting Converged Network Adapter virtualization, you can inspect the Cisco UCS Integrated Management Controller (CIMC) tool and observe the Encap type of NIV listed for the physical adapter, as depicted in Figure 11-36.

Figure 11-36 Displaying Network Adapter Mode of Operation on Cisco UCS C-Series VIC

It is important to distinguish between static and dynamic vEth interface provisioning, as well as between fixed and floating vEth interfaces. With a static vEth switch interface, the network administrator manually configures interface provisioning model parameters; that is, the network administrator specifies which channel a given vEth interface uses. It is possible to associate configuration to the interface before it is even brought up, that is, before the server administrator completes all the necessary configuration tasks on the server side to bring up the vEth-to-vNIC relationship. Example 11-8 shows vEth configuration and the channel binding. In this configuration, the server is connected to a Cisco Nexus 2000 Series FCoE Fabric Extender host interface Ethernet 101/1/1, which is bound to vEth interface 10 on the upstream Cisco Nexus 5500 switch, leveraging channel 50.

Example 11-8 Static vEth Interface Provisioning
Click here to view code image

nexus5500(config)# interface veth10
nexus5500(config-if)# bind interface ethernet 101/1/1 channel 50

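Once the server side is configured with a matching channel, the resulting binding can be inspected on the switch, for example (interface numbers follow Example 11-8):

nexus5500# show interface vethernet 10
nexus5500# show running-config interface vethernet 10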
The vNIC on the Converged Network Adapter also must be configured to use channel 50 for successful binding; the server administrator must define a vNIC on the Converged Network Adapter with the same channel number. It is required to have a unique channel number given to each network interface. Such manual provisioning process could be prone to human error, and as such, the dynamic provisioning model is preferred.

The dynamic provisioning model of Adapter-FEX is based on the concept of port profiles of type vethernet. Port profiles provide a template configuration, which can be applied identically to multiple switch interfaces. Adapter-FEX uses a vethernet type port profile to interface with the server administrative provisioning tool. Example 11-9 depicts an example of a vethernet port profile setting port mode to access and assigning it to VLAN 10.

Example 11-9 vethernet Port Profile for Adapter-FEX
Click here to view code image

port-profile type vethernet NIC-VLAN10
  switchport access vlan 10
  switchport mode access
  state enabled

Port profiles are configured on the Cisco Nexus 5500 switch and are communicated to the Converged Network Adapter on the server through the VIC protocol. The server administrator can also leverage Converged Network Adapter management tools, such as the Cisco UCS Integrated Management Controller, to associate port profiles to vNICs, which in turn triggers the dynamic creation of the vEth interface on the upstream Cisco Nexus 5500 switch. Example 11-10 shows a dynamic provisioning model to instantiate two vEth interfaces on the Cisco Nexus 5500 switch leveraging port profiles.

Example 11-10 Dynamic vEth Interface Provisioning
Click here to view code image

nexus5500(config)# interface vethernet32769
nexus5500(config-if)# inherit port-profile NIC-VLAN50
nexus5500(config-if)# bind interface Ethernet105/1/2 channel 1
nexus5500(config)# interface vethernet32770
nexus5500(config-if)# inherit port-profile NIC-VLAN60
nexus5500(config-if)# bind interface Ethernet105/1/2 channel 2

The interface numbering for dynamically instantiated vEth interfaces is always above 32768. This enables the network administrator to use the lower range for statically provisioned vEth interfaces without being concerned about interface number overlap. To preserve a dynamically instantiated vEth interface across reboots, the switch running configuration must be saved by issuing the copy running-config startup-config CLI command. This way, upon reboot of the Cisco Nexus 5500 switch, the association of the vEth interface to the physical switch interface and the channel will be already present.

A fixed vEth interface is a virtual interface that does not support migration across physical switch interfaces. This could present an issue in server virtualization environments, where each vNIC in the Converged Network Adapter might be associated with the vNIC of the virtual machine. Virtual machines often migrate from one physical server to another, causing them to be connected through different physical switch interfaces.

A virtual vEth interface that "migrates" across the different physical switch interfaces along with a virtual machine is called a floating vEth interface.

FCoE and Virtual Fibre Channel Interfaces
By virtualizing the server Converged Network Adapter, Adapter-FEX allows instantiating a virtual Host Bus Adapter (vHBA) server interface mapped to the virtual Fibre Channel (vFC) interface on the upstream Cisco Nexus 5500 switch. The vFC interfaces on the Cisco Nexus 5500 switches are equivalent to the regular Fibre Channel interfaces, so they can be shut down, resulting in shutting down of the corresponding server vHBA interface. Shutting down the server vHBA interface has no effect on the server vNIC interfaces; remember, vNICs and vHBAs are separate logical entities, even when instantiated on the same Converged Network Adapter on the server side. All Fibre Channel properties, such as addressing, zoning, and so on, apply to virtual Fibre Channel interfaces just as they apply to physical Fibre Channel interfaces.

A virtual Fibre Channel interface must be bound before it can be used. The binding is to a physical switch interface when the server is directly connected to the Cisco Nexus 5500 switch or the Cisco Nexus 2000 Series FCoE Fabric Extender, to a MAC address when the server is remotely connected over a Layer 2 bridge, or to a port channel interface when the server is connected to the Cisco Nexus 5500 switches over a vPC. Example 11-11 shows configuration of binding virtual Fibre Channel interface for Side A fabric.

Example 11-11 virtual Fibre Channel Interface Binding
Click here to view code image

nexus5500-SideA# configure terminal
nexus5500-SideA(config)# interface ethernet 1/1
nexus5500-SideA(config-if)# switchport mode vntag

nexus5500-SideA(config)# interface veth 10
nexus5500-SideA(config-if)# bind interface eth 1/1 channel 10
nexus5500-SideA(config-if)# switchport mode trunk
nexus5500-SideA(config-if)# switchport trunk allowed vlan 50

nexus5500-SideA(config)# interface vfc 10
nexus5500-SideA(config-if)# bind interface veth 10
nexus5500-SideA(config-if)# no shutdown

In the previous example, interface Ethernet1/1 is connected to a server with a virtualized Converged Network Adapter. VN-Tag channel 10 is leveraged on both sides, mapping the vHBA defined on the server to the vEth interface defined on the Cisco Nexus 5500 switch. The vEth interface is in turn bound to the vFC interface, so any FCoE traffic generated by the server is sent from vHBA to the vFC. Side B fabric configuration applied on the other Cisco Nexus 5500 switch is identical.

Virtual Interface Redundancy
Given that the server can have two physical interfaces, interface 0 and interface 1, on its Converged Network Adapter, a single instantiated vNIC might be able to use both for the failover.

Traditionally, to achieve server network connectivity redundancy, server administrators had a choice between the Active/Standby and Active/Active models. With the Active/Standby model, one of the server NICs was chosen to actively pass network traffic to and from the network. Loss of network connectivity on the active server NIC forced a failover event, in which case the second server NIC became active, passing network traffic to and from the network. With the Active/Active model, all server NICs were typically bundled into a port channel, which was terminated on either one or two upstream switches. In the latter case, upstream switches had to support multilink aggregation features, such as virtual PortChannel. Figure 11-37 contrasts the two traditional server NIC connectivity redundancy models, one leveraging NIC failover and the other making use of vPC.

Figure 11-37 Server NIC Connectivity Redundancy Models

The use of the Adapter-FEX feature allows a different approach to network connectivity failover, one that does not involve the server operating system. This approach is known as fabric failover, and it is available on Cisco UCS blade and rack servers. With fabric failover, each vNIC can be associated with both physical interface 0 and physical interface 1 on the virtualized Converged Network Adapter. The VN-Tag/channel provides the virtual connection between the vNIC in the Converged Network Adapter and the vEth interface on the upstream Cisco Nexus 5500 switch. There is also no need to run vPC between the upstream Cisco Nexus 5500 switches, which simplifies the setup even further. Figure 11-38 depicts fabric failover operation.

Figure 11-38 Fabric Failover Operation

For FCoE connectivity, instantiated vHBAs do not leverage fabric failover functionality; failover is handled in a more traditional way between the vHBA adapters. As mentioned earlier, SAN best practices dictate physical separation of Side A/Side B fabrics for proper redundancy. With Adapter-FEX, two vHBA interfaces are instantiated, each mapped to one of the two server physical interfaces in the Converged Network Adapter. In case of Side A failure, storage traffic fails over to Side B, and vice versa. This also implies that at least two vHBAs should be instantiated in the Converged Network Adapter to achieve FCoE connectivity redundancy.

FCoE and Cascading Fabric Extender Topologies
So far, we discussed solutions for delivering scale-out FCoE deployment leveraging either Cisco Nexus 2000 Series FCoE Fabric Extenders or the Server Fabric Extenders in the form of Adapter-FEX. As mentioned earlier, these two solutions are not mutually exclusive, and they can be leveraged at the same time in a cascading fashion. Cisco Nexus 5500 switches enable the choice of topology for connecting FCoE servers with the Adapter-FEX feature through the Cisco Nexus 2000 Series FCoE Fabric Extenders. The following sections describe three of these choices:

Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in vPC topology
Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in straight-through topology with vPC domain
Adapter-FEX with Cisco Nexus 2000 Series FCoE Fabric Extenders in straight-through topology (without vPC domain)

Adapter-FEX and FCoE Fabric Extenders in vPC Topology
In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a vPC topology to the upstream Cisco Nexus 5500 parent switches. At the same time, because FCoE Fabric Extenders are attached to both Cisco Nexus 5500 parent switches in vPC topology, they are in fact connected to both Side A and Side B SAN fabrics.
To follow SAN best practices with Side A/Side B separation, each Cisco Nexus 2000 Series FCoE Fabric Extender must belong to just one of the two fabrics, even though it is physically connected to both. This is achieved by pinning the FCoE Fabric Extender (for FCoE and FIP traffic) to only a single Cisco Nexus 5500 parent switch. FCoE VLANs should also not be trunked over a vPC peer-link between the two Cisco Nexus 5500 parent switches to maintain Side A/Side B separation. Figure 11-39 depicts the use of Adapter-FEX and FCoE in a vPC Fabric Extender attachment topology.

Figure 11-39 Adapter-FEX and FCoE in vPC Fabric Extender Topology

The configuration is carried out by defining the Cisco Nexus 2000 Series FCoE Fabric Extender host interface as mode VN-Tag. The vEth interface on the Cisco Nexus 5500 switch is then associated with the vHBA channel defined in the server Converged Network Adapter settings. The vEth interface used for sending Ethernet encapsulated Fibre Channel traffic (FCoE) must be defined as trunk. The use of IEEE 802.1Q tags enables you to segregate network and storage traffic over the same link, and using FCoE Initialization Protocol (FIP) facilitates negotiation of the VLAN used for FCoE traffic. The vEth interface is then bound to the vFC interface. Example 11-12 shows the Virtual Fibre Channel interface configuration and binding for servers connected to Cisco Nexus 2000 Series FCoE Fabric Extender host interfaces in Side A fabric. Configuration of Side B fabric is practically identical, with the exception of using a different Cisco Nexus 2000 Series FCoE Fabric Extender and Cisco Nexus 5500 switch.

Example 11-12 Virtual Fibre Channel Interface Configuration and Binding
Click here to view code image
nexus5500# configure terminal
nexus5500(config)# fex 101
nexus5500(config-fex)# fcoe

nexus5500(config)# interface ethernet 101/1/1
nexus5500(config-if)# switchport mode vntag

nexus5500(config)# interface veth 10
nexus5500(config-if)# bind interface eth 101/1/1 channel 10
nexus5500(config-if)# switchport mode trunk
nexus5500(config-if)# switchport trunk allowed vlan 50

nexus5500(config)# interface vfc 10
nexus5500(config-if)# bind interface veth 10
nexus5500(config-if)# no shutdown

Server vNICs can leverage fabric failover for network traffic. For traffic load distribution, server Converged Network Adapter physical interface 0 and physical interface 1 can be utilized by vNICs in a round-robin fashion, meaning half of the vNICs can be bound to physical interface 0 as the primary path toward the network, and the other half of the vNICs can be bound to physical interface 1 as their primary path toward the network. Adapter-FEX instantiates two vHBA interfaces, each mapped to one of the two physical interfaces on the server Converged Network Adapter.

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC
In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while the switches participate in a vPC domain. Because each FCoE Fabric Extender is connected to a single parent switch, there is no need to use Fabric Extender pinning for Side A/Side B separation. Figure 11-40 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology with vPC on the upstream Cisco Nexus 5500 switches.

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology with vPC

In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while the switches participate in a vPC domain. Adapter-FEX instantiates two vHBA interfaces, each mapped to one of the two physical interfaces on the server Converged Network Adapter. There is no need to use Fabric Extender pinning for Side A/Side B separation, because each straight-through Fabric Extender is attached to a single parent switch and therefore belongs to either the Side A or the Side B fabric, but not both. Figure 11-40 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology with vPC on the upstream Cisco Nexus 5500 switches.

Figure 11-40 Adapter-FEX and FCoE in Straight-Through Fabric Extender Topology with vPC

FCoE VLANs should also not be trunked over a vPC peer-link between the two Cisco Nexus 5500 parent switches to maintain Side A/Side B separation (a configuration sketch follows at the end of this section). Server vNICs can leverage fabric failover for network traffic. For traffic load distribution, server Converged Network Adapter physical interface 0 and physical interface 1 can be utilized by vNICs in a round-robin fashion; that is, half of the vNICs can be bound to physical interface 0 as the primary path toward the network, and the other half of the vNICs can be bound to physical interface 1 as their primary path toward the network.
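Keeping the FCoE VLANs off the vPC peer-link can be enforced by pruning them from the peer-link trunk. A minimal, illustrative sketch follows; the port channel number and VLAN numbers are assumptions (data VLANs 10 and 20 allowed, FCoE VLANs 50 and 60 deliberately excluded):

nexus5500-A(config)# vpc domain 10
nexus5500-A(config-vpc-domain)# exit
nexus5500-A(config)# interface port-channel 1
nexus5500-A(config-if)# switchport mode trunk
nexus5500-A(config-if)# switchport trunk allowed vlan 10,20
nexus5500-A(config-if)# vpc peer-link

The same pruning would be applied on the peer switch, so neither Side A nor Side B FCoE VLANs can ever cross the peer-link.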

Adapter-FEX and FCoE Fabric Extenders in Straight-Through Topology

In this case, Cisco Nexus 2000 Series FCoE Fabric Extenders are attached in a straight-through topology to the upstream Cisco Nexus 5500 parent switches, while those switches do not participate in a vPC domain. Figure 11-41 depicts the use of Adapter-FEX and FCoE in a Fabric Extender straight-through attachment topology without vPC on the upstream Cisco Nexus 5500 switches.

Figure 11-41 Adapter-FEX and FCoE in Straight-Through Fabric Extender Topology Without vPC

This case is similar to the previous one with two distinct differences. First, the lack of a vPC domain between the upstream Cisco Nexus 5500 switches disallows the use of a port-channeled vPC connection from server vNICs. The server administrator should be aware of this condition, which is not a major concern because fabric failover provides superior redundancy and high availability to using port channels, and fabric failover does not rely on the upstream vPC domain. Second, the lack of a vPC peer link between Cisco Nexus 5500 switches simplifies configuration tasks and minimizes the risk of human error in the sense that Side A and Side B are isolated by design.

Reference List

Data Center Bridging: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/ieee-802-1-data-center-bridging/index.html
Fibre Channel over Ethernet: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/fibre-channel-over-ethernet-fcoe/index.html
Adapter-FEX feature: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/adapter-fabric-extender-adapter-fex/index.html
Cisco Fabric Extender Technology: http://www.cisco.com/c/en/us/products/switches/nexus-2000-series-fabric-extenders/index.html

Exam Preparation Tasks

Review All Key Topics

Review the most important topics in the chapter, noted with the key topic icon in the outer margin of the page. Table 11-6 lists a reference for these key topics and the page numbers on which each is found.

Table 11-6 Key Topics for Chapter 11

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

Data Center Bridging
Adapter-FEX
Virtual Machine Fabric Extender (VM-FEX)
Priority Flow Control (PFC)
IEEE 802.1Qbb
Enhanced Transmission Selection (ETS)
IEEE 802.1Qaz
Quantized Congestion Notification (QCN)
IEEE 802.1Qau
Data Center Bridging Exchange (DCBX)
IEEE 802.1AB
Link Layer Discovery Protocol (LLDP)

Type-Length-Value (TLV)
Fibre Channel over Ethernet (FCoE)
virtual network interface card (vNIC)
virtual Host Bus Adapter (vHBA)
virtual Ethernet (vEth)
port profile
vethernet
network interface card (NIC)
Converged Network Adapter (CNA)
VN-Tag

Chapter 12. Multihop Unified Fabric

This chapter covers the following exam topics:

Network Topologies and Deployment Scenarios for Multihop Unified Fabric
FCoE and Cisco FabricPath

Stretching Unified Fabric beyond the boundary of a single hop allows building a scalable, robust, and flexible converged infrastructure. Multihop deployment carries the advantages introduced by the Unified Fabric across the entire environment. In this chapter we look closer into the characteristics of the multihop Unified Fabric and examine various multihop topologies you can leverage for such deployment.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 12-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 12-1 “Do I Know This Already?” Section-to-Question Mapping

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following Cisco Nexus 7000 Series switches I/O modules support FCoE multihop?
a. F2
b. F2e

c. F2a
d. F1
e. M2

2. What are the main drivers for access layer consolidation in the Unified Fabric environment?
a. Number of server adapters
b. Number of total IP phones
c. Number of discrete switching systems
d. Number of storage I/O operations

3. What do FIP snooping bridges do?
a. They snoop FIP packets during the fabric login process to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
b. They snoop FCoE data packets to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
c. They snoop CDP and LLDP packets to make sure there is no man-in-the-middle attack from a node masquerading as an FCoE Fibre Channel Forwarder.
d. They snoop FIP packets to provide lossless transport over multiple Ethernet segments.

4. True or false? A switch running in FCoE-NPV mode proxies fabric logins toward the upstream FCoE Fibre Channel Forwarder.
a. True
b. False

5. Can Cisco Nexus 7000 Series switches be put into FCoE-NPV mode?
a. Yes
b. No
c. Yes, but only if they have 16 Gb Fibre Channel I/O modules
d. No, unless you issue the system fcoe-npv force command-line interface command

6. What do Cisco Nexus 7000 Series switches running FCoE-NPIV mode do?
a. Accept multiple fabric logins on the same physical interface
b. Proxy fabric logins toward the upstream Fibre Channel Forwarder switch
c. Have interfaces of a type VF_Port
d. Do not run Fibre Channel services

7. What is an FCoE port type on the aggregation layer switch running in full Fibre Channel switching mode toward the access layer switch also running in full Fibre Channel switching mode?
a. VF_Port
b. VE_Port
c. VNP_Port
d. VN_Port

8. What is the reason to leverage the dedicated wire approach in multihop Unified Fabric deployment between access and aggregation layer switches?
a. Better traffic oversubscription management for the storage traffic with lesser dependency on features such as Priority Flow Control
b. Decoupling of topological dependencies between network and storage traffic; that is, the data network can employ any technology or protocol of choice without influencing the storage network
c. Gradual introduction of FCoE into the larger environment while providing investment protection
d. All of the above

9. With Dynamic FCoE, the FCoE traffic is forwarded over which of the following?
a. FabricPath core port
b. vPC Peer-Link
c. vPC orphan port
d. FabricPath FCoE port

10. True or false? In Dynamic FCoE topology, FabricPath spine nodes run in full Fibre Channel switching mode.
a. True
b. False

Foundation Topics

First Hop Access Layer Consolidation

Multihop Unified Fabric carries network and storage traffic in a converged manner across an Ethernet medium beyond a single-hop topology. Various topologies exist for deploying multihop Unified Fabric. These topologies offer consolidation in the different tiers of the data center switching design. The most common case occurs with consolidation at the access layer, where switching elements can be deployed in the Top-of-Rack (TOR) or in the Mid/End-of-Row (MOR/EOR) fashion, with or without Cisco Nexus 2000 Series Fabric Extenders. In this case, server traffic leverages a unified wire approach to consolidate network and storage traffic over the same physical cable. Access layer consolidation offers advantages in the form of reducing the number of adapters and the amount of discrete switching platforms needed to accommodate network and storage connectivity.

The reduction in server adapters results from the use of FCoE and Converged Network Adapters, which replace network interface cards and host bus adapters by converging network and storage traffic over a single 10G (or faster) physical connection from the server to its first-hop switch. FCoE is also supported over Cisco Nexus 2000 Series Fabric Extenders attached to the parent Cisco Nexus 5000 Series switches. In larger access layer environments, Cisco Nexus 7000 Series switches can be leveraged for direct server connectivity. In this case, the interface that connects to the FCoE server is defined as a shared interface, which means the switch examines the EtherType value in the incoming Ethernet frame header to determine whether the frame carries FCoE or FIP payload.

If the EtherType value matches 0x8906 (FCoE) or 0x8914 (FIP), the frame is forwarded to the storage VDC defined ahead of time on the switch. For any other EtherType value, the frame is forwarded to one of the network VDCs. Figure 12-1 depicts the logic behind storage and network frame forwarding over the shared interfaces on the Cisco Nexus 7000 Series switches, based on the EtherType field value in the Ethernet headers of incoming frames from the directly attached FCoE servers.

Figure 12-1 Shared Interface Operation on Cisco Nexus 7000 Series Switches

At the time of writing this book, Cisco Nexus 7000 F3 line cards do not yet support FCoE functionality. F1 line cards support FCoE with Sup1, and in the case of F2 or F2e line cards, Sup2 or Sup2e supervisors are required. At this time, FCoE is also not supported over Cisco Nexus 2000 Series Fabric Extenders attached to the parent Cisco Nexus 7000 Series switches. Example 12-1 shows a configuration example where the Ethernet interface is defined as a shared interface.

Example 12-1 Shared Interface Configuration Example

Click here to view code image

N7K(config)# vdc fcoe type storage
N7K(config-vdc)# allocate shared interface e2/1
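Example 12-1 only allocates the shared interface to the storage VDC. As a hedged illustration of the surrounding steps, the following sketch shows how such a storage VDC might be brought up end to end; the interface, VDC name, and VLAN/VSAN numbers are assumptions, and exact steps vary by NX-OS release:

N7K(config)# install feature-set fcoe
N7K(config)# vdc fcoe type storage
N7K(config-vdc)# allocate shared interface ethernet 2/1
N7K(config-vdc)# exit
N7K# switchto vdc fcoe
N7K-fcoe# configure terminal
N7K-fcoe(config)# feature-set fcoe
N7K-fcoe(config)# vsan database
N7K-fcoe(config-vsan-db)# vsan 50
N7K-fcoe(config-vsan-db)# exit
N7K-fcoe(config)# vlan 50
N7K-fcoe(config-vlan)# fcoe vsan 50
N7K-fcoe(config-vlan)# exit
N7K-fcoe(config)# interface vfc 21
N7K-fcoe(config-if)# bind interface ethernet 2/1
N7K-fcoe(config-if)# no shutdown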

The reduction in discrete switching platforms results from the use of Cisco Nexus 5000/7000 Series switches participating in the multihop Unified Fabric topology while allowing connectivity to the traditional Fibre Channel SAN fabrics, which have not yet converged into the Unified Fabric. At the time of writing this book, Cisco Nexus 7000 Series switches do not support native Fibre Channel connectivity. Cisco Nexus 5000 Series switches leverage native Fibre Channel interfaces or the unified ports that are configured for the Fibre Channel mode of operation to connect to the traditional Fibre Channel SAN fabric.

Aggregation Layer FCoE Multihop

The next level of consolidation occurs at the aggregation layer, where aggregation layer switches process both network and storage traffic coming from the access layer. In this case, access layer and aggregation layer switches work in tandem to deliver a scale-out Unified Fabric solution. Aggregation layer consolidation is typically achieved by leveraging a separate set of cables between the access layer and the aggregation layer Unified Fabric switches. A separate set of cables assists in the operational transition from disparate fabrics to the Unified Fabric. To support multihop FCoE behavior, access layer switches can employ one of three modes of operation: FIP snooping, N_Port Virtualization (NPV), or full Fibre Channel switching. Each one comes with its own set of characteristics, which we review next.

FIP Snooping

In a traditional Fibre Channel Storage-Area Network (SAN), nodes (ENodes) that want to communicate through the fabric must log in to the fabric first. Because Fibre Channel links are point-to-point links, the Fibre Channel switch has complete control of the traffic received from the ENodes into the fabric. This prevents erroneous or malicious behavior resulting from spoofing Fibre Channel addressing. With FCoE, the physical point-to-point relationship between the ENodes and FCoE Fibre Channel Forwarders is replaced by the FIP protocol, which enables the ENodes and the FCoE Fibre Channel Forwarders to be separated by several DCB-enabled Ethernet segments while building logical point-to-point links between the two. It is still possible to make sure that each ENode physically connects to the FCoE Fibre Channel Forwarder; however, this option is not always optimal in a scale-out Unified Fabric deployment.

The FIP snooping feature helps secure the logical point-to-point relationships between ENodes and FCoE Fibre Channel Forwarders, which prevents man-in-the-middle attacks where FCF-MAC (the MAC address of the FCoE Fibre Channel Forwarder) spoofing can occur. FIP snooping bridges (FSB) are lossless Ethernet switches typically deployed at the access layer of the Unified Fabric environment that are capable of inspecting or “snooping” FIP messages, including Fabric Login (FLOGI) messages, between a CNA on the server and the upstream FCoE Fibre Channel Forwarder. This is a particularly important functionality in a shared medium like Ethernet to make sure that rogue devices do not insert themselves into the fabric pretending to be FCoE Fibre Channel Forwarders. After an FIP snooping bridge sees a CNA log in to the fabric through a specific FCoE Fibre Channel Forwarder, it dynamically creates an access control list to guarantee that the communication between that CNA and FCoE Fibre Channel Forwarder remains point-to-point. Figure 12-2 depicts FIP snooping behavior, preventing man-in-the